\section{Introduction}
\label{sec:Intro}
The adsorption of single polymers in dilute solution onto a substrate has been extensively studied for many years via a variety of theoretical models and techniques \cite{Eisenriegler1982,DeBell1993,Vrbova1996,Vrbova1998,Vrbova1999,Grassberger2005,Owczarek2007,Luo2008,Klushin2013,Plascak2017}. The critical phenomenon associated with this transition is a fundamental one in the landscape of statistical physics.
In dilute solutions at high temperatures the configuration of the polymer is dominated by entropic repulsion, forming an expanded phase where the polymer is desorbed from the surface.
Of particular interest is when there is also an attractive interaction between the monomers and the surface. In this situation, the configuration of the polymer is further influenced by energetic considerations and at low temperatures the polymer seeks to lower its energy by staying close to the surface and is adsorbed. The transition between these regimes occurs at the adsorption temperature $T_\text{a}$ where the polymers display critical phenomena \cite{DeBell1993}.
Many generalizations have been studied and aspects of this behavior still attract much interest \cite{Grassberger2005,Luo2008,Klushin2013,Plascak2017}. One fruitful set of models uses self-avoiding paths on a lattice to represent the polymer.
If we consider the thermodynamic limit of infinitely long polymers, the internal energy per monomer $u_\infty$ associated with contacts with a surface is expected to be zero for temperatures above $T_\text{a}$ and strictly positive below $T_\text{a}$. The singular behavior for $T \rightarrow T_\text{a}^-$ is given by the thermal exponent $\alpha$
\begin{equation}
u_\infty \sim \left(T_\text{a}-T\right)^{1-\alpha}\;,
\label{eq:alpha}
\end{equation}
while the length scaling behavior of the finite length internal energy $u_n$ per monomer
defines an exponent usually labeled $\phi$
\begin{equation}
u_n = \frac{\langle m \rangle}{n} \sim n^{\phi-1}\;,
\label{eq:phi}
\end{equation}
where $\langle m \rangle$ is the mean number of interactions (contacts with the surface).
This scaling implies that at $T_\text{a}$,
$
\langle m \rangle \sim n^\phi .
$
For high temperatures $\langle m \rangle$ is expected to be bounded, while at low temperatures $\langle m \rangle$ is asymptotically linear in the length $n$, so that a positive thermodynamic internal energy exists. This broad behavior characterizes the adsorption transition. The upper critical dimension for the adsorption transition is expected to be $d_\text{u}=4$ and the mean field value of $\phi$ is $1/2$. Interestingly, in two dimensions exact results from both directed models and the hexagonal lattice predict that $\phi=1/2$. Careful simulations in three dimensions \cite{Grassberger2005} have verified the prediction of field-theoretic expansions around $d=d_\text{u}=4$ that $\phi\neq 1/2$ in three dimensions. A value just below $1/2$ was estimated by Grassberger as $0.484(3)$ \cite{Grassberger2005}.
One can also consider the scaling around the adsorption point in temperature and length together. We denote the exponent controlling this crossover by $1/\delta$, in line with previous works. Until recently it was accepted that \smash{$1/\delta=\phi$} (we detail below one scaling argument for this correspondence). In fact, in both mean field theory and in two dimensions \smash{$1/\delta=1/2$}. Luo \cite{Luo2008} suggested that in three dimensions they may be different. Recently, it was further suggested by Plascak {\em et al.}~\cite{Plascak2017} that both exponents may not even be universal: specifically, when monomer-monomer interactions are added to the model, both exponents appear to depend continuously on the interaction strength, even well away from any critical point induced by the monomer-monomer interactions. It is well known that a collapse transition can occur when monomer-monomer attractions are sufficiently strong (at low temperatures). They suggested that even repulsive interactions can induce such non-universality.
To investigate the numerical validity of these claims we have simulated a range of models in both two and three dimensions. We consider self-avoiding walks (SAWs) on the hexagonal, square and simple cubic lattices, and self-avoiding trails (SATs) on the square and simple cubic lattices. We do not consider monomer-monomer interactions in the model, in which case SAWs and SATs are believed to be in the same universality class in all dimensions with the same finite-size scaling exponents.
Although well studied, we include the square and hexagonal lattice models as a useful benchmark for our methods since there is little dispute about the adsorption transition scaling in two dimensions. In particular, the case of self-avoiding walks on the hexagonal lattice has been solved and the critical exponents, transition temperature and connective constant are known exactly \cite{Batchelor1995}.
For all of these lattice models, we use a variety of methods of analysis designed to estimate the key critical exponents, including those used by Plascak {\em et al.}~\cite{Plascak2017}. Even in the two-dimensional lattice models it is apparent that the systematic error inherent in all these methods often swamps the statistical error. Moreover, the spread of the results shows much better agreement with the known values of the critical temperature and exponents \smash{$\phi=1/\delta=1/2$} than any individual estimate. With this in mind, we find that in the three-dimensional case the central estimates agree with Grassberger's estimate \cite{Grassberger2005} that \smash{$\phi<1/2$}. However, we find no evidence that $1/\delta$ and $\phi$ are different, as suggested by Luo \cite{Luo2008} and Plascak {\em et al.}~\cite{Plascak2017}.
Moreover, we find the values for SAWs and SATs to be numerically equivalent and so find no evidence of any non-universality as suggested by Plascak {\em et al.}~\cite{Plascak2017}. We rather suggest that the previous results were simply a case of systematic errors from higher order corrections to scaling leading to apparent differences. We finally provide our own estimate of \smash{$\phi=1/\delta=0.484(4)$} in three dimensions.
\section{The models}
\label{sec:Model}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{lattice_paths}
\caption{Self-avoiding walk on (a) the hexagonal lattice and self-avoiding trails on (b) the square lattice and (c) the simple cubic lattice, in the presence of an impermeable adsorbing surface. Sites in contact with the surface, other than the origin, are marked blue. Walks on the square and simple cubic lattices are the same with respect to the surface, but multiply-visited sites, marked red, are forbidden. }
\vspace{-0.5cm}
\label{fig:LatticePaths}
\end{figure}
A self-avoiding trail (SAT) is a lattice path with the restriction that no lattice bond may be traversed more than once. A self-avoiding walk (SAW) has the additional restriction that lattice sites cannot be occupied more than once. The set of SAWs is a subset of the set of SATs.
The impermeable adsorbing surface is represented by restricting trails/walks to \smash{$x_d\ge 0$} for a $d$-dimensional lattice with coordinate system $x_i$, for \smash{$i=1,\ldots,d$}. \fref{fig:LatticePaths} shows fragments of a SAW on (a) the hexagonal lattice and a SAT on (b) the square lattice and (c) the simple cubic lattice, near an impermeable boundary layer. In particular, note that on the hexagonal lattice only every second site along the boundary lies on the surface.
The surface-monomer interaction is modeled by assigning an energy $-\epsilon$ to any monomer on the surface \smash{$x_d = 0$}. This does not include the initial point at the origin fixing the path to the surface.
\subsection{Thermodynamic quantities}
\label{sec:Thermo}
A trail (or walk) $\psi_n$ of length $n$ with one end fixed to the surface and with $m$ contacts with that surface has total interaction energy $-m\epsilon$ and corresponding Boltzmann weight $\kappa^m$, where \smash{$\kappa = \exp(\epsilon/k_\text{B}T)$}. Thus, the partition function of the set $\mathcal{T}_n$ of walks/trails of length $n$ is
\begin{equation}
Z_n(\kappa) = \sum_{\psi_n \in \mathcal{T}_n} \kappa^m.
\label{eq:Partition}
\end{equation}
The (reduced) finite-size free energy is
\begin{equation}
f_n(\kappa) = - \frac{1}{n} \log Z_n(\kappa) ,
\end{equation}
while the thermodynamic limit is given by
\begin{equation}
f_\infty(\kappa) = \lim_{n\rightarrow\infty} f_n(\kappa).
\end{equation}
A general thermodynamic quantity is
\begin{equation}
\langle Q \rangle(\kappa) = \frac{1}{Z_n(\kappa)}\sum_{\psi_n \in \mathcal{T}_n} \kappa^m Q(\psi_n).
\label{eq:ThermoQuantity}
\end{equation}
In particular, we are interested in the internal energy
\begin{equation}
u_n (\kappa) = \frac{\langle m \rangle}{n},
\label{eq:InternalEnergy}
\end{equation}
which, considered as the fraction of the walk/trail that is adsorbed to the surface, serves as our order parameter.
The other quantity of interest is the mean-squared end-to-end radius $ R^2_n$. In the presence of an interacting surface we distinguish between the parallel and perpendicular components, with respect to the surface. For a $d$-dimensional system these components are defined as
\begin{alignat}{1}
R^2_{\parallel,n} (\kappa) &= \sum_{i=1}^{d-1}\langle {x_{i,n}}^2 \rangle, \\
R^2_{\perp, n} (\kappa) &= \langle x_{d,n}^2 \rangle,
\label{eq:EndToEndRadius}
\end{alignat}
where $x_{i,n}$ is the $i$-th coordinate of the $n$-th step of the path.
Recall that for the simple cubic lattice the adsorbing surface is the $(x_1,x_2)$-plane at \smash{$x_3=0$} and in two dimensions the surface is the \smash{$x_1$-axis} at \smash{$x_2=0$}.
\subsection{Scaling laws and critical temperatures}
\label{sec:Scaling}
The exponent $\phi$, usually expected to be universal, determines the scaling of the order parameter at the critical point for long chains: $u_n\sim n^{\phi-1}$. For the finite values of $n$ considered in numerical simulations, it is necessary to also include finite-size correction terms. From finite-size scaling theory we have
\begin{equation}
u_n \sim n^{\phi-1} f_u^\text{(0)}(x) [1 + n^{-\Delta}f_u^\text{(1)}(x) +\ldots ],
\label{eq:UnScaling}
\end{equation}
where the $f^{(i)}$ are finite-size scaling functions of the scaling variable \smash{$x=(T_\text{a}-T)\,n^{1/\delta}$} and \smash{$\Delta\lesssim 1$} is the leading correction-to-scaling exponent.
The exponent $1/\delta$ therefore describes the \emph{crossover} around the adsorption critical point. It can also be described as the shift exponent associated with the deviation of temperature from the critical point. That is, the finite-length critical temperature differs from the infinite-length critical temperature according to
\begin{equation}
T_\text{a}^{(n)} \sim T_\text{a} + n^{-1/\delta} f_T^\text{(0)}(x) [1 + n^{-\Delta} f_T^\text{(1)}(x)+\ldots].
\label{eq:TempScaling}
\end{equation}
Somewhat confusingly in the literature, the exponent $\phi$ is often referred to as the \emph{crossover} exponent since it has, until recently, been accepted that there is a crossover scaling variable \smash{$x=(T_\text{a}-T)\,n^{\phi}$} describing the scaling around the adsorption point. Below we provide a scaling argument that connects $\phi$ and $1/\delta$ \cite{Eisenriegler1982,Rensburg2004}. The argument starts with the scaling of the partition function. At any fixed temperature the partition function scales as
\begin{equation}
Z_n(\kappa) \sim A \mu^n n^{\gamma^{(1)} -1},
\end{equation}
where $\gamma^{(1)}$ is the entropic exponent, which takes one value at high temperatures and different values at the adsorption point and at low temperatures. Let us denote the value at the adsorption point as $\gamma^{(1)}_\text{a}$. The connective constant $\mu(\kappa) = e^{-f_\infty(\kappa)}$ is temperature dependent and directly related to the thermodynamic limit of the free energy. Following the same standard scaling hypothesis as above, one expects
\begin{equation}
Z_n(\kappa) \sim A \; \mu_\text{a}^n \; n^{\gamma^{(1)} _\text{a} -1}\, {\cal Z}\left( t n^{1/\delta} \right),
\end{equation}
for $\kappa$ near $\kappa_\text{a}$, where $\mu_\text{a} =\mu(\kappa_\text{a})$ and $t=T_\text{a}-T$.
This form can be deduced from a similar ansatz for the scaling of the corresponding generating function. The (reduced) finite-size free energy therefore scales as
\begin{equation}
f_n(\kappa) \sim -\frac{1}{n} \log\left( A n^{\gamma^{(1)}_\text{a} -1} \right) + f_\infty(\kappa_\text{a}) + \frac{1}{n} {\cal F}\left( t n^{1/\delta} \right),
\end{equation}
where the first two terms are temperature independent.
The key point is that the internal energy is given, up to a multiplicative constant, by the temperature derivative of the free energy, so this form immediately implies that
\begin{equation}
u_n \sim n^{1/\delta-1} {\cal F^\prime}\left( t n^{1/\delta} \right).
\label{eq:ScalingArg}
\end{equation}
Comparing \eref{eq:ScalingArg} to \eref{eq:UnScaling} yields $\phi=1/\delta \,$.
A related argument concerns the crossover from the temperature scaling of the internal energy in \eref{eq:alpha} to the length scaling in \eref{eq:phi} via the crossover form in \eref{eq:UnScaling}. The scaling function should behave as
\begin{equation}
f_u^\text{(0)}(x) \sim x^{(1-\phi)\delta},
\end{equation}
which eliminates the length dependence and leads to
\begin{equation}
1- \alpha = (1-\phi)\delta.
\end{equation}
If we also accept the previous argument that $\phi=1/\delta$ then this implies that
\begin{equation}
\alpha = 2 -\delta = 2 - \frac{1}{\phi}\;.
\end{equation}
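As a concrete check of these relations, inserting the exactly known two-dimensional value $\phi=1/2$ gives $\delta=2$ and hence $\alpha=0$, so that \eref{eq:alpha} predicts a simple linear vanishing of the internal energy,
\begin{equation}
u_\infty \sim T_\text{a}-T \;, \qquad T \rightarrow T_\text{a}^- .
\end{equation}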
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{squSAWs_Tcn_methods_combo}
\vspace{-1.cm}
\caption{The four methods for obtaining $T_\text{a}^{(n)}$, illustrated with data for SAWs on the square lattice. For clarity, error bars of thermodynamic quantities have been omitted and only \smash{$n=128,256,512,1024$} are shown. Black circles mark (a) $\Gamma$: positions of $\max\Gamma_n$, (b) BC: intersections of $U_4$ at various $n$ with $U_4$ at \smash{$n=128$}, (c) R2: intersections of $R^2$ exponents $\nu_\perp$ with $\nu_\parallel$, and (d) ratio: intersections of $\phi^{(n_i)}$ with $\phi^{(n_{i+1})}$. }
\label{fig:TcnMethods}%
\vspace{-0.5cm}
\end{figure*}
Despite these arguments, Luo \cite{Luo2008} conjectured that $\phi$ and $1/\delta$ may be different in three dimensions.
One way to extract $1/\delta$ separately, rather than by calculating the temperature shift directly, is to consider the log-derivative of $u_n$, which in units where $\epsilon=k_\text{B}=1$ is
\begin{equation}
\Gamma_n(\kappa) = \left|\frac{d\log u_n}{dT}\right| =
(\log\kappa)^2
\frac{\langle m^2 \rangle - \langle m \rangle^2}{\langle m \rangle}.
\label{eq:LogDerivative}
\end{equation}
Since $\Gamma_n$ involves a second derivative of the free energy, we expect the critical scaling form
\begin{equation}
\max \Gamma_n \sim n^{1/\delta} f_\Gamma^\text{(0)}(x) [1 + n^{-\Delta} f_\Gamma^\text{(1)}(x) +\ldots].
\label{eq:GammaScaling}
\end{equation}
By \eref{eq:LogDerivative}, $\Gamma_n$ is related to the specific heat. The peaks of the specific heat are often used to locate the collapse transition of trails in the bulk but this approach is inaccurate for locating the adsorption transition \cite{Rensburg2004}. Nevertheless, it is usually assumed that $x$ is small enough to use \eref{eq:GammaScaling} to determine $1/\delta$.
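To make this procedure concrete, the following is a minimal Python sketch of how $\Gamma_n$ can be evaluated from simulation output. It assumes the density of states $W_{n,m}$ (introduced in \sref{sec:Numerical} below) is available as a NumPy array over $m$ for one fixed length $n$, and works in units $\epsilon=k_\text{B}=1$; the helper \texttt{moments()} is reused in later sketches.
\begin{verbatim}
import numpy as np

def moments(W_nm, kappa, qs=(1, 2)):
    # <m^q> at Boltzmann weight kappa from the density of states
    # W_{n,m} at one fixed length n, via a reweighting sum
    m = np.arange(len(W_nm))
    logw = np.where(W_nm > 0,
                    m * np.log(kappa)
                    + np.log(np.maximum(W_nm, 1e-300)),
                    -np.inf)
    w = np.exp(logw - logw.max())   # avoids overflow at large m
    return [np.sum(m**q * w) / w.sum() for q in qs]

def gamma_n(W_nm, T):
    # the log-derivative of u_n, with epsilon = k_B = 1
    kappa = np.exp(1.0 / T)
    m1, m2 = moments(W_nm, kappa)
    return np.log(kappa)**2 * (m2 - m1**2) / m1

def t_gamma_peak(W_nm, Ts):
    # the 'Gamma' method: location of max Gamma_n on a grid
    return Ts[int(np.argmax([gamma_n(W_nm, T) for T in Ts]))]
\end{verbatim}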
\section{Methods}
\label{sec:Methods}
The key to estimating $\phi$ and $1/\delta$ is to accurately locate the finite-size critical temperatures $T_\text{a}^{(n)}$. We explore four methods of calculating $T_\text{a}^{(n)}$, illustrated in \fref{fig:TcnMethods} using data for SAWs on a square lattice as an example. First, the simplest but least accurate is to consider the locations of $\max\Gamma_n$ as estimates of $T_\text{a}^{(n)}$; this method is labeled `$\Gamma$'. Despite the issues relating to the specific heat, it is a useful comparison to the other methods.
Second, we calculate the Binder cumulant
\begin{equation}
U_4(\kappa) = 1 - \frac{1}{3}\frac{\langle m^4 \rangle}{\langle m^2 \rangle^2},
\label{eq:Binder}
\end{equation}
a quantity that, for large $n$, tends toward a universal constant value at the critical point \cite{Binder1981}. Thus, intersections of curves of $U_4$ at different $n$ with the curve at fixed \smash{$n_\text{min}=128$} are used to locate the finite-size critical temperatures. This method is labeled `BC'.
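A sketch of the corresponding computation, reusing the \texttt{moments()} helper defined earlier; bracketing the root with \texttt{brentq} assumes the two cumulant curves cross exactly once in the chosen interval, which is an assumption of this illustration rather than a guarantee.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def binder(W_nm, T):
    # U_4 = 1 - <m^4> / (3 <m^2>^2)
    kappa = np.exp(1.0 / T)
    m2, m4 = moments(W_nm, kappa, qs=(2, 4))
    return 1.0 - m4 / (3.0 * m2**2)

def t_bc(W_ref, W_n, T_lo, T_hi):
    # finite-size T_a^(n): crossing of U_4 at length n with the
    # curve at the reference length n_min (= 128 in the text)
    return brentq(lambda T: binder(W_n, T) - binder(W_ref, T),
                  T_lo, T_hi)
\end{verbatim}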
The third method, labeled `R2', looks at the scaling of each component of the mean-squared end-to-end radius. For either component $i$,
\begin{equation}
R_{i,n}^2 \sim n^{2\nu_i},
\label{eq:R2Scaling}
\end{equation}
where \smash{$i=\perp,\parallel$} and the Flory exponent $\nu_i$ depends on the phase and dimension of the system and is calculated by simply inverting \eref{eq:R2Scaling}:
\begin{equation}
\nu_i = \frac{1}{2} \log_2\frac{R_{i,n}^2}{R_{i,n/2}^2}.
\label{eq:NuRatio}
\end{equation}
At high temperatures, the polymers are desorbed and both the perpendicular and parallel components of $R^2$ scale as in the $d$-dimensional bulk. Below the adsorption temperature, the polymers' extent away from the surface vanishes and thus \smash{$R^2_\perp\to 0$} (or \smash{$\nu_\perp\to 0$}). The polymers adsorb onto the surface to become a quasi-\smash{($d-1$)}-dimensional system and \smash{$\nu_\parallel^{(d)}\to \nu_\text{bulk}^{(d-1)}$}. At some intermediate temperature the components of $\nu$ cross, and in fact the intersections locate the finite-size critical temperatures $T_\text{a}^{(n)}$.
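In practice \eref{eq:NuRatio} is a one-line transform of the measured $R^2$ data, and the crossing of the two exponent curves can be located on a temperature grid; the linear refinement of the crossing below is an illustrative choice, not necessarily the procedure used for the production fits.
\begin{verbatim}
import numpy as np

def nu_estimate(R2_n, R2_half):
    # nu = (1/2) log2( R^2_{i,n} / R^2_{i,n/2} )
    return 0.5 * np.log2(R2_n / R2_half)

def t_crossing(Ts, f, g):
    # first sign change of f - g on the grid Ts, refined linearly;
    # here f, g are nu_perp and nu_parallel versus temperature
    d = np.asarray(f) - np.asarray(g)
    i = int(np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0])
    return Ts[i] - d[i] * (Ts[i + 1] - Ts[i]) / (d[i + 1] - d[i])
\end{verbatim}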
In view of \eref{eq:NuRatio}, the fourth method, labeled `ratio', is to calculate the exponent $\phi$ directly from the leading-order scaling of the order parameter. That is,
\begin{equation}
\phi = 1 + \log_2\frac{u_{n}}{u_{n/2}}
\label{eq:PhiRatio}
\end{equation}
is calculated over a range of $n$. As a function of temperature, it is known that, in addition to the value of $1/2$ at the critical point, the scaling exponent of the internal energy vanishes at high temperatures and tends to unity at low temperatures. For SAWs on a square lattice this is borne out in \fref{fig:TcnMethods}(d). Then, as with the R2 and BC methods, we can locate the critical temperatures $T_\text{a}^{(n)}$ from the intersections of curves of \eref{eq:PhiRatio} for successive pairs $\{n_i,n_{i+1}\}$.
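The same machinery applies here: $\phi^{(n)}$ is a one-line transform of the internal energy, and the intersections of successive curves can be located with the \texttt{t\_crossing()} helper sketched above.
\begin{verbatim}
import numpy as np

def phi_ratio(u_n, u_half):
    # phi^(n) = 1 + log2( u_n / u_{n/2} )
    return 1.0 + np.log2(u_n / u_half)
\end{verbatim}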
While finite-size scaling methods are the main focus, we can also consider other ways of estimating exponents. To that end, we consider that as well as the intersections for the ratio method locating the critical temperatures, \eref{eq:PhiRatio} is a direct estimate of $\phi$. This `direct' method provides a set of finite-size estimates, $\phi^{(n)}$, which, in the limit \smash{$n\to\infty$}, extrapolate to an alternative estimate of $\phi$ without reference to the scaling form \eref{eq:UnScaling} and its dependence on locating the critical temperatures.
\subsection{Numerical simulation}
\label{sec:Numerical}
Trails and walks are simulated using the flatPERM algorithm \cite{Prellberg2004}, an extension of the pruned and enriched Rosenbluth method (PERM) \cite{Grassberger1997}. The simulation works by growing a walk/trail on a given lattice up to some maximum length $N_\text{max}$. At each step the cumulative Rosenbluth \& Rosenbluth weight \cite{Rosenbluth1955} of the walk/trail is compared with the current estimate of the density of states $W_{n,m}$. If the current state has relatively low weight (i.e.~by being trapped or reaching the maximum length) the walk/trail is `pruned' back to an earlier state. On the other hand, if the current state has relatively high weight, then microcanonical quantities are measured and $W_{n,m}$ is updated. The state is then `enriched' by branching the simulation into several possible further paths (which are explored when the current path is eventually pruned back). When all branches are pruned a new iteration is started from the origin.
FlatPERM enhances this method by altering the prune/enrich choice such that the sample histogram is flat in the microcanonical parameters $n$ and $m$. Further improvements are made to account for the correlation between branches that are grown from the same enrichment point, which provides an estimate of the number of effectively independent samples. We also run 10 completely independent simulations for each case to estimate the statistical error.
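To convey the structure of the grow/prune/enrich loop, the following is a deliberately simplified PERM-style sketch for adsorbing SAWs on the square half-lattice. It is \emph{not} the flatPERM algorithm of Ref.~\cite{Prellberg2004}: there is no histogram flattening in $(n,m)$, the enrichment and pruning thresholds are arbitrary illustrative constants, and the accumulated \texttt{dos} estimates $W_{n,m}$ only up to normalization by the number of tours.
\begin{verbatim}
import random
from collections import defaultdict

NMAX = 32                    # modest length for demonstration
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
Wsum = defaultdict(float)    # running weight sum at length n
Nvis = defaultdict(int)      # configurations seen at length n
dos = defaultdict(float)     # estimate of W_{n,m} (unnormalized)

def grow(pos, visited, m, weight, n):
    Wsum[n] += weight
    Nvis[n] += 1
    dos[(n, m)] += weight
    if n == NMAX:
        return
    moves = [(pos[0] + dx, pos[1] + dy) for dx, dy in STEPS]
    moves = [p for p in moves if p[1] >= 0 and p not in visited]
    if not moves:
        return                          # trapped: prune back
    ratio = weight * Nvis[n] / Wsum[n]  # weight vs running average
    copies, w = 1, weight
    if ratio > 2.0:                     # enrich: two half-weight copies
        copies, w = 2, weight / 2.0
    elif ratio < 0.5:                   # prune half, double survivor
        if random.random() < 0.5:
            return
        w = 2.0 * weight
    for _ in range(copies):
        nxt = random.choice(moves)
        grow(nxt, visited | {nxt},
             m + (nxt[1] == 0),         # count new surface contact
             w * len(moves),            # Rosenbluth weight update
             n + 1)

for _ in range(2000):                   # independent tours
    grow((0, 0), {(0, 0)}, 0, 1.0, 0)
\end{verbatim}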
The main output of the simulation is the density of states $W_{n,m}$ of walks/trails of length $n$ with $m$ contacts with the surface, for all \smash{$n\le N_\text{max}$}. Thermodynamic quantities are then given by the weighted sum
\begin{equation}
\langle Q \rangle(\kappa) = \frac{\sum_{m} Q_m\kappa^m W_{n,m}}{\sum_{m} \kappa^m W_{n,m}}.
\label{eq:FPQuantity}
\end{equation}
For example, the $q^\text{th}$ order moments needed for the thermodynamic quantities in \sref{sec:Thermo} are calculated directly as
\begin{equation}
\langle m^q \rangle = \frac{\sum_{m=0}^n m^q\kappa^m W_{n,m}}{\sum_{m=0}^n \kappa^m W_{n,m}}.
\label{eq:FPEnergy}
\end{equation}
Other microcanonical quantities $r_{\perp,n}^2$ and $r_{\parallel,n}^2$ are also calculated during the simulation.
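With the density of states in hand, every thermodynamic quantity of \sref{sec:Thermo} follows from a single simulation by reweighting at any temperature. A minimal sketch, again in units $\epsilon=k_\text{B}=1$ and reusing the \texttt{moments()} helper defined earlier:
\begin{verbatim}
import numpy as np

def u_n(W, n, T):
    # internal energy per monomer u_n = <m>/n at temperature T,
    # where W[n] is the row of W_{n,m} for length n
    kappa = np.exp(1.0 / T)
    (m1,) = moments(W[n], kappa, qs=(1,))
    return m1 / n
\end{verbatim}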
\setlength{\tabcolsep}{4pt}
\begin{table}[t!]
\caption{Details of flatPERM simulations. In all cases the numbers of samples and effectively independent samples are averages over 10 independent runs.
}
\begin{tabular}{llrrrr}
\hline \hline
& Walks/ & \multicolumn{1}{l}{Max} & & \multicolumn{1}{l}{Samples at} & \multicolumn{1}{l}{Ind.~samples} \\ [-1pt]
Lattice & trails & \multicolumn{1}{l}{length} & \multicolumn{1}{l}{Iterations }& \multicolumn{1}{l}{max length} & \multicolumn{1}{l}{max length} \\ \hline
hex & SAW & 4096 & $1.8\times 10^7$ & $2.3\times 10^9$ & $1.0\times 10^7$ \\
hex & SAW & 1024 & $5.5\times 10^5$ & $2.0\times 10^{10}$ & $2.6\times 10^8$ \\[1ex]
squ & SAW & 1024 & $3.7\times 10^5$ & $3.9\times 10^{10}$ & $3.2\times 10^{8}$ \\
squ & SAT & 1024 & $3.7\times 10^5$ & $3.9\times 10^{10}$ & $3.1\times 10^{8}$ \\[1ex]
sc & SAW & 1024 & $4.4\times 10^5$ & $3.5\times 10^{10}$ & $5.4\times 10^8$ \\
sc & SAT & 1024 & $4.4\times 10^5$ & $3.4 \times 10^{10}$ & $5.9\times 10^8$ \\
\hline \hline
\end{tabular}
\label{tab:SimDetails}
\end{table}
In this work we used the flatPERM algorithm to simulate walks and trails on the square and simple cubic lattices up to length $1024$, and walks on the hexagonal lattice at the exact adsorption transition, \smash{$\kappa_\text{a}=1+\sqrt{2}$}, up to length $4096$, as well as without fixed weight up to length $1024$. Details of the simulations run in this work are summarized in Table \ref{tab:SimDetails}. Note that flatPERM is generally an athermal simulation, but in the case of walks on the hexagonal lattice at the exact critical temperature a fixed weight $\kappa_\text{a}$ is applied at each step by altering the usual Rosenbluth \& Rosenbluth weight. That is, the term $\kappa^m W_{n,m}$ in \eref{eq:FPQuantity} is calculated during the simulation (at fixed \smash{$\kappa=\kappa_\text{a}$}) and the density of states is output as $W_n$; the sample histograms are {\em not} flattened with respect to $m$. This both saves memory and reduces equilibration time, so that longer lengths can be simulated.
\section{Results and Discussion}
\label{sec:Results}
To understand the analysis, we will look at the case of SAWs on a square lattice in some detail before presenting the combined results for all lattices. First, we make some general remarks that apply to all cases. In all finite-size scaling fits, we assume that the scaling variable $x$ is constant with respect to $n$, so that the $f^{(i)}(x)$ may be treated as constants. This is readily verified to be true, although $x$ is not necessarily small in all cases. We find that the correction-to-scaling term is always necessary for a good fit, and after considering the case of square SAWs we do not report the power-law only results. Finally, even with a correction-to-scaling term, we always consider \smash{$n=128,\ldots,1024$}, since \smash{$n<100$} is too far from the scaling regime.
\subsection{SAWs on square lattice}
\label{sec:ResultsSquSAWs}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{squSAWs_un_gamma_combo}
\includegraphics[width=\columnwidth]{squSAWs_phi_ratios}
\includegraphics[width=\columnwidth]{squSAWs_exponents}
\caption{For SAWs on the square lattice, (a) log-log plot of $\Gamma_n$ vs.~$n$ and $u_n$ vs.~$n$. The latter are calculated using the extrapolated values of $T_\text{a}$ from the BC, R2 and ratio methods. Solid curves are appropriate fits with correction-to-scaling term. Power-law only fits are also shown as dotted lines, where visible. (b) Plot of $\phi^{(n)}$ calculated directly from ratios of $u_n$, vs.~$1/\sqrt{n}$, and extrapolated to large $n$. (c) Estimates of the exponents using the various approaches discussed in the text. For specific values see Table \ref{tab:ExponentResultsBest}.}
\label{fig:SquSAWsAnalysis}
\vspace{-0.5cm}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth,trim={0 1cm 0 1cm},clip]{2D_temp_ALL}
\caption{Finite-size critical temperatures for two-dimensional lattice models (a) SAWs on the hexagonal lattice and (b) SAWs and (c) SATs on the square lattice. For each of the four methods the solid lines are fits with correction to scaling and dotted lines are power-law only.}
\label{fig:2DTemperatures}
\vspace{-0.5cm}
\end{figure*}
As discussed in \sref{sec:Methods} and following \cite{Plascak2017}, the canonical method to estimate the exponents is as follows. The first step is to calculate $1/\delta$ from $\Gamma_n$. A log-log plot of $\max\Gamma_n$ for \smash{$n=128,\ldots,1024$} is shown in \fref{fig:SquSAWsAnalysis}(a) (blue, left) along with fits to \eref{eq:GammaScaling}. We get \smash{$1/\delta=0.5264(12)$} for a power-law only fit and \smash{$1/\delta=0.51528(86)$} by including a correction-to-scaling term.
Although there is not a lot of difference between the fits on this scale, given the known value of \smash{$1/\delta=1/2$} in two dimensions, it is clear that the correction-to-scaling term is significant.
The next step is to consider the critical temperatures. Figure \ref{fig:2DTemperatures}(b) shows the finite-size critical temperatures, $T_\text{a}^{(n)}$, for SAWs on the square lattice, using the four methods discussed in \sref{sec:Methods}.
Using the (correction-to-scaling) value of $1/\delta$ just found, we also show fits according to \eref{eq:TempScaling} for each of these sets. Solid lines are fits with a correction-to-scaling term, and dotted lines are power-law only. Extrapolating the power-law only fits to \smash{$n\to\infty$} yields \smash{$T_\text{a}=1.74548(70)$}, $1.74001(49)$, $1.74429(76)$, $1.7517(15)$ for the $\Gamma$, R2, BC and ratio methods, respectively. Using correction-to-scaling fits instead yields \smash{$T_\text{a}=1.74292(54)$}, $1.7399(19)$, $1.74151(69)$, $1.7510(57)$ for the $\Gamma$, BC, R2 and ratio methods, respectively.
The $T_\text{a}$ from each method appear to agree well, yet a few points should be made. Firstly, although it may not be clear from just the reported values for the case of square lattice SAWs, the correction-to-scaling fits are generally better than power-law only fits. The R2 method is the best for locating the $T_\text{a}^{(n)}$, having much less variation over this range of $n$ and much smaller error bars for each $T_\text{a}^{(n)}$ than the other methods. The small error bars are due in part to the fact that the method relies on intersections of near-perpendicular curves, as opposed to the near-parallel curves of the ratio method. This more than counteracts the lack of correction-to-scaling terms in the R2 method compared to the rest of the analysis.
The BC method, via the Binder cumulant, presents the most difficulty. Notice that the $T_\text{a}^{(n)}$ deviate from the trend at large $n$. We note that the correction-to-scaling term cannot account for this kink and even if it could the extrapolation $n\to\infty$ would be significantly different from the other methods. Instead, we account for this by only using data up to $n\lesssim 600$ in the fits, where the scaling law fits well. This cutoff was determined to be the point where the error in the fitting parameters started to diverge as data for larger $n$ was added to the fit.
As to why this kink is present, we hypothesize that it is a limitation of finite simulations. While our data is equilibrated to a high degree, the fourth-order moment $\langle m^4\rangle$ that appears in \eref{eq:Binder} is more susceptible to error as $n$, and therefore maximum possible values of $m$, increase. It would take orders of magnitude more samples to ensure that fourth-order moments are equilibrated.
Of course, we cannot rule out that it is a quirk of the flatPERM algorithm and other simulation methods may not have this issue. However, we note that our simulation is up to the reasonably long length of $1024$ and the kink occurs at greater lengths than those considered in previous works that use the Binder cumulant \cite{Plascak2017}.
The ratio method of determining $T_\text{a}$ also has some flaws. The individual $T_\text{a}^{(n)}$ are closer to the R2 method than the others, yet the individual error bars are much larger, and the extrapolated value $T_\text{a}$ does not agree with the other three methods. However, the latter point is not a general observation for all lattice models.
Lastly, the $\Gamma$ method is interesting because at first glance the extrapolated value of $T_\text{a}$ appears to agree with the other methods. This is in contrast to the obvious difference between this method and the others at finite $n$, clearly visible in \fref{fig:2DTemperatures}(b). This gap reflects the fact that the locations of the peaks of $\Gamma_n$ are not claimed to properly approximate the critical temperatures. In fact, the scaling variable \smash{$x=(T_\text{a}-T)\,n^{1/\delta}$} is significantly greater than unity for the $\Gamma$ method. Attempting to use this method anyway is fraught due to the relation to the specific heat, as mentioned earlier. It is also a distinctly different approach from the other methods, which all share the common aspect that curves for different $n$ should intersect near the critical temperature, reflecting the existence of a universal value of the given thermodynamic quantity there. Furthermore, although not necessarily clear for SAWs on the square lattice, on closer inspection the values of $T_\text{a}$ from the $\Gamma$ method generally deviate from those of the other three methods. Given these concerns, we record the extrapolated value of the critical temperature for the $\Gamma$ method, but do not go on to use it to calculate $u_n$ and thus $\phi$. Even without this argument, the resulting values of $\phi$ are systematically offset from those of the three valid methods.
Turning to $\phi$, a log-log plot of $u_n(T_\text{a})$ for \smash{$n=128,\ldots,1024$} is shown in \fref{fig:SquSAWsAnalysis}(a) (right) along with fits to \eref{eq:UnScaling}. Since the $T_\text{a}$ estimates are so close together for this lattice model, the curves of $u_n$ overlap strongly on this scale. For the three valid finite-size scaling methods, BC, R2 and ratio, power-law only fits yield \smash{$\phi=0.5325(17)$}, $0.5292(23)$ and $0.5094(35)$, respectively. Including the correction-to-scaling term gives \smash{$\phi=0.52062(24)$}, $0.51493(87)$ and $0.4849(21)$, respectively. Here, the correction to scaling is a clear improvement for the R2 method, marginal for the BC method and questionable for the ratio method.
There is also an alternative approach whereby we evaluate $u_n(T_\text{a}^{(n)})$ at each different $T_\text{a}^{(n)}$, rather than at the single extrapolated $T_\text{a}$. This is similar to the calculation of $1/\delta$, where the maxima of $\Gamma_n$ occur at different temperatures for each $n$. Using the correction-to-scaling fits, this yields {$\phi=0.527(11)$}, $0.5142(60)$ and $0.481(21)$. The choice of whether to use $T_\text{a}$ or the set of $T_\text{a}^{(n)}$ is not {\em a priori} clear, but the resulting values of $\phi$ have much larger errors and spread between methods. They are shown in \fref{fig:SquSAWsAnalysis}(c) as a comparison, but it is clear that using the single $T_\text{a}$ to obtain $\phi$ is the better approach.
The last estimate of $\phi$ comes from the ratio method which, as explained in \sref{sec:Methods}, estimates $\phi$ more directly by extrapolating the $\phi^{(n)}$ at the critical points to \smash{$n\to\infty$}, as shown in \fref{fig:SquSAWsAnalysis}(b). We assume the ansatz
\begin{equation}
\phi^{(n)} = \phi + \frac{C}{\sqrt{n}} + \ldots,
\label{eq:PhiExtrap}
\end{equation}
where $C$ is a constant, obtaining \smash{$\phi=0.5097(37)$}. In total, we thus obtain four estimates of $\phi$ for SAWs on a square lattice (three from valid finite-size scaling methods and one from the direct method) and one estimate of $1/\delta$. All are listed in Table \ref{tab:ExponentResults} and we will discuss how to combine these values in the next section.
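For completeness, the extrapolation in \eref{eq:PhiExtrap} amounts to a linear least-squares fit in the variable $1/\sqrt{n}$; a minimal sketch, with purely illustrative placeholder numbers standing in for the measured $\phi^{(n)}$:
\begin{verbatim}
import numpy as np

ns = np.array([128, 256, 512, 1024])
phi_n = np.array([0.521, 0.517, 0.514, 0.512])  # hypothetical

# phi^(n) = phi + C / sqrt(n): fit a line in x = 1/sqrt(n);
# the intercept is the extrapolated n -> infinity estimate
C, phi_inf = np.polyfit(1.0 / np.sqrt(ns), phi_n, 1)
\end{verbatim}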
\subsection{Two dimensions}
\label{sec:2DResults}
\setlength{\tabcolsep}{2pt}
\begin{table}[t!]
\caption{Valid results for all lattice models and methods. All values are from fits with correction-to-scaling terms.
}
\begin{tabular}{l|l|llllr}
\hline \hline
& Method & \multicolumn{1}{c}{$1/\delta$} & \multicolumn{1}{c}{$T_\text{a}$} & \multicolumn{1}{c}{$\phi$} & \multicolumn{1}{c}{$\phi$ \,[$T_\text{a}^{(n)}$]} \\
\hline
hex & $\Gamma$ & $0.4851(11)$ & $1.14014(51)$ & $-$ & $-$ \\
SAW & BC & $-$ & $1.13465(40)$ & $0.51014(76)$ & $0.5137(55)$ \\
& R2 & $-$ & $1.13566(44)$ & $0.5058(13)$ & $0.5077(46)$ \\
& ratio & $-$ & $1.1374(19)$ & $0.4960(16)$ & $0.499(12)$ \\
& direct & $-$ & $-$ & $0.5002(20)$ & $-$ \\ [1ex]
& fixed $\kappa$ & $0.5060(12)$ & $1.13459\ldots$ & $0.496(10)$ & $-$ \\ [1ex]
squ & $\Gamma$ & $0.50525(40)$ & $1.74292(51)$ & $-$ & $-$ \\
SAW & BC & $-$ & $1.7399(19)$ & $0.52062(24)$ & $0.527(11)$ \\
& R2 & $-$ & $1.74151(69)$ & $0.51493(87)$ & $0.5142(60)$ \\
& ratio & $-$ & $1.7510(57)$ & $0.4849(21)$ & $0.481(21)$ \\
& direct & $-$ & $-$ & $0.5097(37)$ & $-$ \\[1ex]
squ & $\Gamma$ & $0.50393(39)$ & $1.6978(14)$ & $-$ & $-$ \\
SAT & BC & $-$ & $1.6887(15)$ & $0.51172(67)$ & $0.516(12)$ \\
& R2 & $-$ & $1.69201(74)$ & $0.50127(17)$ & $0.5029(56)$ \\
& ratio & $-$ & $1.6975(34)$ & $0.4839(10)$ & $0.482(10)$ \\
& direct & $-$ & $-$ & $0.4973(23)$ & $-$ \\[1ex]
sc & $\Gamma$ & $0.47911(56)$ & $3.5504(73)$ & $-$ & $-$ \\
SAW & BC & $-$ & $3.5146(83)$ & $0.4887(19)$ & $0.500(15)$ \\
& R2 & $-$ & $3.5271(36)$ & $0.4799(24)$ & $0.4691(54)$ \\
& ratio & $-$ & $3.519(20)$ & $0.4847(21)$ & $0.474(31)$ \\
& direct & $-$ & $-$ & $0.4907(19)$ & $-$ \\[1ex]
sc & $\Gamma$ & $0.48368(40)$ & $3.7557(85)$ & $-$ & $-$ \\
SAT & BC & $-$ & $3.707(12)$ & $0.4927(12)$ & $0.493(14)$ \\
& R2 & $-$ & $3.7294(53)$ & $0.4745(25)$ & $0.4717(52)$ \\
& ratio & $-$ & $3.726(11)$ & $0.4800(18)$ & $0.482(21)$ \\
& direct & $-$ & $-$ & $0.4865(16)$ & $-$ \\
\hline \hline
\end{tabular}
\label{tab:ExponentResults}
\end{table}
We now present the results for the other two-dimensional lattice models and discuss how to combine the results. For each lattice model, the intermediate quantities $\Gamma_n$, $u_n$ and $\phi^{(n)}$, and many of the details of the calculations, are qualitatively identical to those of square SAWs discussed in the preceding section. In fact, the results for square SAWs tend to have larger errors, and some of the issues are less of a problem in the other lattice models. As such we skip to presenting temperature and exponent results for the other cases. Furthermore, we saw in the last section that fitting to the scaling forms is generally improved by the addition of a correction-to-scaling term, and this is even more so for the other lattice models. Henceforth we report only the correction-to-scaling results, where applicable.
Figure \ref{fig:2DTemperatures} shows the critical temperatures for the two-dimensional lattice models. The results of extrapolating the fits to $T_\text{a}$ are reported in Table \ref{tab:ExponentResults}, along with all estimates of the exponents $1/\delta$ and $\phi$. Additionally, we visualize the exponents in \fref{fig:2DExponents}.
For these plots, the horizontal axis has no meaning except to cluster the results for each lattice model.
In addition to all the methods discussed so far, for the case of SAWs on the hexagonal lattice we have the further benefit of knowing the exact critical point \smash{$\kappa_\text{a}=1+\sqrt{2}$} \cite{Batchelor1995}. Incorporating this weight directly into the simulation greatly reduces equilibration time and allowed us to simulate SAWs on the hexagonal lattice up to length \smash{$n=4096$} in the same time as the full simulations up to length $1024$. In this case we do not need to locate the finite-size critical temperatures $T_\text{a}^{(n)}$; the exponents are determined directly from $\Gamma_n$ and $u_n$, yielding \smash{$1/\delta=0.5060(12)$} and \smash{$\phi=0.496(10)$}. Note that the former comes from the scaling of $\Gamma_n(\kappa_\text{a})$ rather than $\max\Gamma_n$; an inverse of the $\Gamma$ method for the other lattice models, potentially with similar limitations for estimating $1/\delta$. However, the value of $\phi$ is shown in \fref{fig:2DExponents} (black) for comparison to other lattice models and methods. Despite the ability to simulate much larger chains, the statistics of this simulation are not the same as the others and so these values should be considered a benchmark only. Nevertheless, it is a good test of the accuracy of the flatPERM algorithm and of the significance of corrections to scaling in our methods. It also validates using $T_\text{a}$ over the set of $T_\text{a}^{(n)}$.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{2D_exponents_BEST}
\caption{Exponents for two-dimensional simulations. Black is the special case of SAWs on hexagonal lattice simulated at fixed exact critical temperature up to \smash{$n=4096$}. The dashed gray line marks the expected value of the 2D crossover exponent $\phi=1/2$.}
\label{fig:2DExponents}
\vspace{-0.5cm}
\end{figure}
Regarding the critical temperatures, we immediately see that several features mentioned in the analysis of square SAWs are common to all lattice models. As mentioned earlier, the $\Gamma$ method is the worst at estimating the critical temperature at finite lengths, and is known to be an unreliable method. So, while the temperatures from the $\Gamma$ method have been shown, this method is not used in any further results. The R2 method appears to be the best at locating the critical temperature, given that the errors in $T_\text{a}^{(n)}$ are the smallest for this method, and the trend as \smash{$n\to\infty$} displays the smallest correction to finite-size scaling. The values of $T_\text{a}^{(n)}$ from the ratio method are very close to those of the R2 method for most lattice models, yet the errors are much larger due to the way curves of $\phi^{(n)}$ intersect. Nevertheless, the resulting $T_\text{a}$ and $\phi$ estimates from the ratio method are good.
One exception is for the BC method on the hexagonal lattice, where the deviation from the trend at larger $n$ does not occur as it does for the square lattice models. However, for consistency, we make the same restriction to \smash{$n\lesssim 600$}. A more general issue with the BC method is that it is parameter dependent, namely through the minimum value of $n$ used as the common intersection curve for $U_4$ at larger $n$. We use \smash{$n=128$} as the minimum, intending that the range of $n$ be consistent with the other methods, so that the finite-size temperatures for the BC method are comparable to those of the R2 and ratio methods. If a larger range of $n$ is considered by using a smaller value for the minimum, then the temperatures are much closer to the $\Gamma$ method, which we have already noted as unreliable. We find our range of $n$ to be a good tradeoff between minimizing the effect of corrections to scaling from smaller $n$ and having enough data to achieve a good fit to the scaling form. Note that altering the minimum value of $n$ does not alter the value of $n$ where the kink starts.
Despite these cautions, it is not plausible to conclude that, say, the R2 method is better than the others and should always be used for these kinds of calculations. Even if it appears to be the best way to determine $T_\text{a}$, it is not overwhelmingly better than the other methods. The issues with the BC and ratio methods are technical, and these methods should be retained as valid.
Thus, despite omitting some methods as invalid or too imprecise, we still have a spread in the valid estimates for the exponents, as seen in \fref{fig:2DExponents}. Rather than relying on the statistical errors reported so far, we instead view the variance in results as evidence of a larger systematic error.
Regarding the different exponents, it is further clear from \fref{fig:2DExponents} that $1/\delta$ falls within the spread of the $\phi$ estimates. One can compare $1/\delta$ to $\phi$ for a specific method to find a pattern, or omit certain values that appear to be outliers or unreliable, but no such distinction holds generally across all lattice models. We are forced to conclude that $1/\delta$ is not distinct from $\phi$.
Moreover, we could even say that the calculation of $1/\delta$ from the scaling of $\max\Gamma_n$ is yet another method for estimating the crossover exponent of the adsorption transition, equally valid as using the three finite-size scaling methods or calculating $\phi$ directly from ratios of $u_n$.
Arguably, the statistical error is so small because it derives from the very small errors in the calculated thermodynamic functions, which are in turn averaged over the ten independent simulations for each lattice.
The better statistics of the shorter-length simulations mask the fact that we are not able to find the critical temperature as accurately, due to correction-to-scaling effects and the differences in methodologies. Compare this to the simulation of \smash{$n=4096$} SAWs on the hexagonal lattice, where the error in $\phi$ is also statistical, but we have complete confidence in knowing the exact temperature. It is therefore striking that the black error bar in \fref{fig:2DExponents} is comparable in magnitude to the spread of exponent estimates for the \smash{$n=1024$} simulations. Given these issues with the reported statistical errors, when making these averages we omit statistical errors beyond the third decimal place as being too far removed from the systematic spread in values.
The final task is therefore to combine the results for all lattice models into results for two dimensions; we will apply the same procedure to three dimensions in the next section. While it is not possible to pick out one method over another, we note that they are not all equivalent. The R2, BC and ratio methods are similar in that they first estimate the critical temperature, which is then used to find $\phi$ from the finite-size scaling of $u_n$. In order to compare to $1/\delta$ and the direct estimates of $\phi$, we average the values of $\phi$ from the three finite-size scaling methods, obtaining $\phi^\text{(FSS)}$, listed in Table \ref{tab:ExponentResultsBest} for each lattice model. Also listed are the critical temperatures of each lattice model, averaged over the three finite-size scaling methods.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.75\textwidth,trim={0 1.5cm 0 1cm},clip]{3D_temp_ALL}
\caption{Finite-size critical temperatures for three-dimensional lattice models (a) SAWs and (b) SATs on the simple cubic lattice. For each of the four methods the solid lines are fits with correction to scaling and dotted lines are power-law only.}
\label{fig:3DTemperatures}
\vspace{-0.5cm}
\end{figure*}
\begin{figure}[b!]
\centering
\includegraphics[width=\columnwidth]{3D_exponents_BEST}
\caption{Exponents for three-dimensional (simple cubic) simulations. The dashed gray line marks the average estimate of the 3D crossover exponent $\phi=0.484(4)$.}
\label{fig:3DExponents}
\vspace{-0.5cm}
\end{figure}
The $\phi^\text{(FSS)}$ value is now comparable to $\phi^\text{(direct)}$, which comes from $u_n$ but without finite-size scaling, and to $1/\delta$, which comes from a different but related thermodynamic quantity. We average these three values equally to obtain the exponent for each lattice model, also listed in Table \ref{tab:ExponentResultsBest}. Recall that the values for $1/\delta$ and the direct estimate for $\phi$ are already listed in Table \ref{tab:ExponentResults}. Finally, the exponents are averaged over all lattice models in each dimension. Thus, for two dimensions we obtain \smash{$\phi=0.501(2)$}, in agreement with the known value of $1/2$. The uncertainty in this value is due to the spread from the different methods and models rather than the statistical error in those values; some spread remains in this final exponent across the two-dimensional lattice models.
As a final remark, we note that an alternative approach is to use the average $T_\text{a}$ to calculate a single $\phi^\text{(FSS)}$, but we found no meaningful difference. It is well known that the value of $\phi$ is sensitive to accurately knowing the critical temperature. This alternative would require an estimate of the error in $\phi$ by propagating the error in the average temperature, itself a product of the spread in individual values $T_\text{a}$. By \eref{eq:UnScaling}, this is not a straightforward procedure. We found that any reasonable attempt to do this produces an error in $\phi$ that is the same magnitude as the spread in $\phi$ values from individual methods as already reported. Thus the presence of a systematic error is clear either way.
\subsection{Three dimensions}
\label{sec:3DResults}
Having verified our methodology on the two-dimensional lattice models, we now turn to the three-dimensional simple cubic lattice models. The critical temperatures for SAWs and SATs on the simple cubic lattice are shown in \fref{fig:3DTemperatures} and the exponents are visualized in \fref{fig:3DExponents} and listed in Table \ref{tab:ExponentResults}.
The main point where the analysis of the simple cubic lattice models differs from the two-dimensional cases is with the kink in critical temperatures from the BC method. In \fref{fig:3DTemperatures} we see that at higher $n$ the $T_\text{a}^{(n)}$ diverge faster than for the square lattice. However, the point at which this kink begins is the same, so we have the same range of $n\lesssim 600$ for this method. All other methods proceed in the same manner as in the two-dimensional analysis.
The final results for the critical temperatures and exponents for the simple cubic lattice models are determined in the same way as the two-dimensional case and are summarized in Table \ref{tab:ExponentResultsBest}.
For SAWs on the simple cubic lattice we find, after averaging over the different finite-size scaling methods, that \smash{$\phi^\text{(FSS)}=0.484(4)$}, compared with \smash{$\phi^\text{(direct)}=0.491(2)$} and \smash{$1/\delta=0.4791(6)$}. Similarly, for SATs we find \smash{$\phi^\text{(FSS)}=0.482(9)$}, compared with \smash{$\phi^\text{(direct)}=0.487(2)$} and \smash{$1/\delta=0.4837(4)$}. As with the two-dimensional case, and knowing the source of the error bars, these values are not distinct enough to definitively separate them. Hence, assuming equality of $\phi$ and $1/\delta$, we estimate \smash{$\phi=0.485(6)$} for SAWs and \smash{$\phi=0.484(2)$} for SATs.
Averaging over the values of both three-dimensional models gives our best estimate \smash{$\phi=0.484(4)$}. Even given the magnitude of the potential systematic error, we conclude that in three dimensions $\phi$ does deviate from the mean-field value of $1/2$. However, we find no clear difference between SAWs and SATs, nor do we find evidence for $1/\delta$ being different from $\phi$.
\begin{table}[t!]
\caption{Best results for the adsorption temperature and the finite-size scaling estimates of $\phi$ for each lattice model. Bold values are the combined result for the crossover exponent for each lattice model and dimension.}
\begin{tabular}{l|ll|l}
\hline \hline
& \multicolumn{1}{c}{$T_\text{a}$}
& \multicolumn{1}{c}{FSS $\phi$} & \multicolumn{1}{c}{$\phi$ [$=1/\delta$]}\\\hline
hex~4096 & 1.13459\ldots& 0.496(10) &\\ \hline
hex~SAW & 1.136(1) & 0.504(7) & {\bf 0.496(10)} \\
squ~SAW & 1.744(6) & 0.507(19) & {\bf 0.507(2)} \\
squ~SAT & 1.693(4) & 0.499(14) & {\bf 0.500(3)} \\ \hline
2D & & & {\bf 0.501(2)} \\
\hline
sc~SAW & 3.520(6) & 0.484(4) & {\bf 0.485(6)} \\
sc~SAT & 3.720(12) & 0.482(9) & {\bf 0.484(2)} \\ \hline
3D & & & {\bf 0.484(4)} \\
\hline\hline
\end{tabular}
\label{tab:ExponentResultsBest}
\end{table}
\section{Conclusion}
\label{sec:Conc}
We have performed a comprehensive study of self-avoiding walks and trails on two- and three-dimensional lattices with an adsorbing boundary. Numerical simulations up to polymer lengths of 1024 provide a wealth of data for studying the adsorption transition. A variety of analyses were used to estimate the critical temperature and scaling exponents of this transition. Using both the square and hexagonal lattices, and in the latter case also using exact results, we confirm the mean-field value of the crossover exponent \smash{$\phi=1/2$} (also obtained from exact solution methods and conformal field theory), with our own estimate of \smash{$\phi=0.501(2)$}. What is not apparent in this final result is that applying individually valid methods to each of the lattice models produces a large spread in estimates. This suggests a \emph{significant systematic error} in any individual estimate, greater than the statistical error intrinsic to the numerical analysis.
Applying the same methodology, averaging over several estimates, to the three-dimensional lattice models of SAWs and SATs on the simple cubic lattice, we provide a final estimate \smash{$\phi=0.484(4)$}. This is in agreement with other recent works that suggest a deviation from the mean-field value in three dimensions, and thus that the crossover exponent is not super-universal. However, as with two dimensions, there is systematic error across the different methodologies which does not allow for a distinction between the crossover exponent $\phi$ and the shift exponent $1/\delta$. In fact, we suggest that direct estimates of the shift exponent are yet another way of estimating the crossover exponent, and not of estimating a distinct quantity.
As well as variety in the analysis of thermodynamic quantities, we have considered walks and trails equally for the square and simple cubic lattices. As the SAW model can be considered the strongly repulsive limit of the interacting SAT model, the agreement of our results for the two models also indicates that the universality of the critical exponent is not broken by (repulsive) monomer-monomer interactions. Of course, this constitutes only two data points on the scale of variable monomer-monomer interaction strength, but assuming universality raises the possibility of more accurate exponent estimates: considering both the interacting walk and interacting trail models, one may be able to achieve greater accuracy in locating the critical temperature by varying the monomer-monomer interactions to minimize corrections to scaling in quantities such as the end-to-end distance scaling, an approach which has had some previous success \cite{Prellberg2001}. Additionally, it raises the question of studying the general interacting SAT model with strongly attractive interactions, known to be in a different universality class from the interacting SAW model at its collapse point.
\begin{acknowledgments}
Financial support from the Australian Research
Council via its Discovery Projects scheme (DP160103562)
is gratefully acknowledged by the authors. Numerical simulations were performed using the HPC cluster at University of Melbourne (2017) Spartan HPC-Cloud Hybrid: Delivering Performance and Flexibility. https://doi.org/10.4225/49/58ead90dceaaa
\end{acknowledgments}
\section{Introduction}
Nowadays it is strongly believed that the universe is experiencing
an accelerated expansion. Recent observations of type Ia
supernovae \cite{SN}, in association with Large Scale Structure
\cite{LSS} and Cosmic Microwave Background anisotropies \cite{CMB},
have provided the main evidence for this cosmic acceleration. In order
to explain why the cosmic acceleration happens, many theories have
been proposed. Although theories that modify the Einstein
equations constitute a large part of these attempts, the mainstream
explanation for this problem is known as the theory of dark
energy. The most widely accepted idea is that a mysterious dominant
component, dark energy, with negative pressure, drives this cosmic
acceleration, though its nature and cosmological origin remain
enigmatic at present. \\
The most obvious theoretical candidate of dark energy is the
cosmological constant $\lambda$ (or vacuum energy)
\cite{Einstein:1917,cc} which has the equation of state $\omega=-1$.
An alternative proposal for dark energy is the dynamical dark energy
scenario. So far, a large class of scalar-field dark energy models
have been studied, including quintessence \cite{quintessence},
K-essence \cite{kessence}, tachyon \cite{tachyon}, phantom
\cite{phantom}, ghost condensate \cite{ghost1,ghost2} and quintom
\cite{quintom}, interacting dark energy models \cite{intde},
braneworld models \cite{brane}, and Chaplygin gas models \cite{cg1},
etc.
An interacting tachyonic dark matter model has been studied in Ref.~\cite{c10}.\\
In this paper, we consider the issue of the tachyon as a source of
the dark energy. The tachyon is an unstable field which has become
important in string theory through its role in the Dirac-Born-Infeld
(DBI) action, which is used to describe the D-brane \cite{7,8}. It
has been noticed that the cosmological model based on the effective
Lagrangian of tachyon matter
\begin{equation} \label{lag}
L=-V(T)\sqrt{1-T_{,\mu}T^{,\mu}}
\end{equation}
with the constant potential $V(T)=\sqrt{A}$ exactly coincides with the
Chaplygin gas model \cite{fro,gori}.\\
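To see the correspondence explicitly, note that for a homogeneous tachyon field the Lagrangian (\ref{lag}) leads to the energy density and pressure $\rho=V(T)/\sqrt{1-\dot{T}^2}$ and $p=-V(T)\sqrt{1-\dot{T}^2}$ (cf.~equation (\ref{E9-1}) below), so that for the constant potential $V(T)=\sqrt{A}$
\begin{equation}
p=-\frac{V^2}{\rho}=-\frac{A}{\rho},
\end{equation}
which is exactly the Chaplygin gas equation of state.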
Some experimental data have implied that our universe may not be
perfectly flat, and recent papers have favoured a universe with
spatial curvature \cite{curve}. As a matter of fact, we want to
remark that although it is believed that our universe is flat, a
contribution to the Friedmann equation from spatial curvature is
still possible if the number of e-foldings is not very large
\cite{miao2}. Cosmic Microwave Background (CMB) anisotropy data
provide the most stringent constraints on the cosmic curvature $k$.
Assuming that dark energy is a cosmological constant, the three-year
WMAP data give $\Omega_k=-0.15 \pm 0.11$, and this improves
dramatically to $\Omega_k= -0.005 \pm 0.006$ with the addition of
galaxy survey data from the SDSS \cite{47}. The effect of allowing
non-zero curvature on constraining some dark energy models has been
studied in \cite{curva}. Recently Clarkson et al.~\cite{clar} have
shown that ignoring $\Omega_k$ induces errors in the reconstructed
dark energy equation of state, $\omega(z)$, that grow very rapidly
with redshift and dominate the $\omega(z)$ error budget at redshifts
$z \gtrsim 0.9$, even if $\Omega_k$ is very small. Due to these
considerations, and motivated by the recent work of Chakraborty and
Debnath \cite{c11}, we generalize their work to the non-flat case.
\section{Tachyonic fluid model}
Now we consider the single tachyonic field model, so the action for
the homogeneous tachyon condensate of string theory in a
gravitational background is given by
\begin{equation}\label{E1}
S=\int d^{4}x \sqrt{-g}~\left[\frac{R}{16 \pi G}+\mathcal{L}\right]
\end{equation}
where $R$ and $\mathcal{L}$ are the scalar curvature and the Lagrangian
density, respectively. The Lagrangian density is given by
\begin{equation}\label{E2}
{\mathcal{L}}=-V(T)\sqrt{1+g^{\mu \nu}
\partial_{\mu}T\partial_{\nu}T},
\end{equation}
where $T$ is the tachyon field and $V(T)$ is the tachyonic potential.
The Friedmann-Robertson-Walker (FRW) metric of the universe is
\begin{equation}\label{E3}
ds^2 =dt^2-a^2(t) \left[\frac{dr^2}{1-kr^2}+r^2 d\Omega^2\right],
\end{equation}
where $k=1,0,-1$ corresponds to a closed, flat and open universe,
respectively. Using Einstein's equations we have the following
expressions
\begin{equation}\label{E4}
\rho_{tot}=\frac{3}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right),
\end{equation}
\begin{equation}\label{E5}
p_{tot}=-\frac{1}{2}\left(2\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right),
\end{equation}
where we have assumed $4\pi G=1$. On the other hand, the
energy-momentum tensor for the tachyonic field is,
\begin{equation}\label{E6}
T_{\mu\nu}= -\frac{2}{\sqrt{-g}}\frac{\delta S}{\delta{g^{\mu
\nu}}}=p_T g_{\mu\nu}+(p_T+\rho_T) u_\mu u_\nu,
\end{equation}
where the velocity $u_\mu$ is
\begin{equation}\label{E7}
u_\mu=- \frac{\partial_{\mu}T}{\sqrt{-g^{\alpha \beta} \partial_\alpha T
\partial_\beta T}},
\end{equation}
with $u^\nu u_\nu=-1$.\\
From the metric and the energy-momentum tensor we obtain the following expressions
\begin{equation}\label{E8}
R=6\left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right),
\end{equation}
\begin{equation}\label{E9-1}
\rho_T=\frac{V(T)}{\sqrt{1-\dot{T}^2}},~~~~~ p_T=-V(T)
\sqrt{1-\dot{T}^2}.
\end{equation}
Thus the equation of state of the tachyonic field becomes
\begin{equation}\label{E10}
\omega_T=\frac{p_T}{\rho_T}=\dot{T}^2-1,
\end{equation}
\begin{equation}\label{E9}
p_T \rho_T=-V^2(T).
\end{equation}
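As a consistency check, relations (\ref{E10}) and (\ref{E9}) follow directly from (\ref{E9-1}). A minimal symbolic verification, given here as an illustrative sketch only (it assumes a Python environment with the computer-algebra package \texttt{sympy} and is not part of the derivation), is:
\begin{verbatim}
# Symbolic check of (E10) and (E9) from (E9-1); assumes 0 < Tdot < 1.
import sympy as sp

V, Td = sp.symbols('V Td', positive=True)
rho_T = V / sp.sqrt(1 - Td**2)     # energy density, (E9-1)
p_T = -V * sp.sqrt(1 - Td**2)      # pressure, (E9-1)

assert sp.simplify(p_T/rho_T - (Td**2 - 1)) == 0   # EoS relation (E10)
assert sp.simplify(p_T*rho_T + V**2) == 0          # product relation (E9)
print("(E10) and (E9) verified")
\end{verbatim}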
Now we consider a two-fluid model consisting of a tachyonic field and
a barotropic fluid. The EoS of the barotropic fluid is
given by
\begin{equation}\label{E11}
p_b=\omega_b \rho_b,
\end{equation}
where $p_b$ and $\rho_b$ are the pressure and energy density of the
barotropic fluid. Thus, the total energy density and pressure are
respectively given by
\begin{equation}\label{E12}
\rho_{tot}=\rho_b+\rho_T,
\end{equation}
\begin{equation}\label{E13}
p_{tot}=p_b+p_T.
\end{equation}
In the following sections we consider two cases: first we
investigate the case where the
two fluids do not interact with each other, and second we consider
the interacting case.
\section{Non-interacting two-fluid model}
In this section we assume that the two fluids do not interact with each
other. The general form of the conservation equation is
\begin{equation}\label{E14}
\dot{\rho}_{tot}+3\frac{\dot{a}}{a}(\rho_{tot}+p_{tot})=0,
\end{equation}
This equation leads us to write the conservation equations for the
tachyonic and barotropic fluids separately,
\begin{equation}\label{E15}
\dot{\rho}_{b}+3\frac{\dot{a}}{a}(\rho_{b}+p_{b})=0,
\end{equation}
and
\begin{equation}\label{E16}
\dot{\rho}_{T}+3\frac{\dot{a}}{a}(\rho_{T}+p_{T})=0.
\end{equation}
By using equation (\ref{E15}) one can obtain the energy density
$\rho_b$ as follows:
\begin{equation}\label{E17}
\rho_b=\rho_0 a^{-3(1+\omega_b)}.
\end{equation}
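One can check directly that (\ref{E17}) solves (\ref{E15}); a short \texttt{sympy} sketch (illustrative only) is:
\begin{verbatim}
# Check that (E17) solves the conservation equation (E15) with p_b = omega_b*rho_b.
import sympy as sp

t = sp.symbols('t')
rho0, wb = sp.symbols('rho_0 omega_b', positive=True)
a = sp.Function('a', positive=True)(t)

rho_b = rho0 * a**(-3*(1 + wb))                                # ansatz (E17)
lhs = sp.diff(rho_b, t) + 3*(sp.diff(a, t)/a)*(1 + wb)*rho_b   # left side of (E15)
assert sp.simplify(lhs) == 0
print("(E17) solves (E15)")
\end{verbatim}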
In order to obtain $T$ and $V(T)$, we first express $\rho_T$ and
$p_T$ in terms of $a(t)$,
\begin{equation}\label{E18}
\rho_T=\frac{3}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0
a^{-3(1+\omega_b)},
\end{equation}
and
\begin{equation}\label{E19}
p_T=-\frac{1}{2}\left(2\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0
\omega_b a^{-3(1+\omega_b)},
\end{equation}
Using equations (\ref{E10}), (\ref{E18}) and (\ref{E19}),
the corresponding tachyon field satisfies
\begin{equation}\label{E20}
\dot{T}=\sqrt{\frac{\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}-\frac{\ddot{a}}{a}-\rho_0(1+\omega_b)a^{-3(1+\omega_b)}}
{\frac{3}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0a^{-3(1+\omega_b)}}},
\end{equation}
and from equation (\ref{E9}), $V(T)$ is given by
\begin{equation}\label{E21}
V(T)=\sqrt{\left[\frac{1}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}+2\frac{\ddot{a}}{a}\right)+\rho_0\omega_ba^{-3(1+\omega_b)}\right]
\left[\frac{3}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0a^{-3(1+\omega_b)}\right]}
\end{equation}
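By construction $V(T)^2=-p_T\,\rho_T$, cf. (\ref{E9}); this can be confirmed symbolically for a general scale factor. An illustrative \texttt{sympy} sketch:
\begin{verbatim}
# Check that the square of (E21) equals -p_T*rho_T, with rho_T, p_T from (E18), (E19).
import sympy as sp

t, k = sp.symbols('t k')
rho0, wb = sp.symbols('rho_0 omega_b')
a = sp.Function('a', positive=True)(t)
ad, add = sp.diff(a, t), sp.diff(a, t, 2)
f = rho0 * a**(-3*(1 + wb))          # barotropic energy density (E17)

rho_T = sp.Rational(3, 2)*(ad**2/a**2 + k/a**2) - f                  # (E18)
p_T = -sp.Rational(1, 2)*(2*add/a + ad**2/a**2 + k/a**2) - wb*f      # (E19)
V_sq = (sp.Rational(1, 2)*(ad**2/a**2 + k/a**2 + 2*add/a) + wb*f) \
       * (sp.Rational(3, 2)*(ad**2/a**2 + k/a**2) - f)               # (E21)^2
assert sp.simplify(sp.expand(V_sq + p_T*rho_T)) == 0
print("(E21)^2 = -p_T*rho_T verified")
\end{verbatim}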
Now we take the following ansatz for the scale factor, which
increases with time:
\begin{equation}\label{E22}
a(t)=\sqrt {b k+c \cosh^2(\beta t)},
\end{equation}
where $b$, $c$ and $\beta$ are constants. By substituting the above scale
factor into (\ref{E18}) and (\ref{E19}), $\rho_T$ and $p_T$ are
given by the following expressions,
\begin{eqnarray}\label{E24}
\rho_T=\frac{3}{2}\frac{c \cosh^2(\beta t)\left(c\beta^2 \sinh^2(\beta t)+k\right)+bk^2}{\left(bk+c \cosh^2(\beta t)\right)^2}\nonumber\\ -\rho_0 \left(bk+c \cosh^2(\beta t)\right)^{-\frac{3}{2}(1+\omega_b)}
\end{eqnarray}
and
\begin{eqnarray}\label{E25}
p_T=-\frac{1}{2}\frac{\cosh^2(\beta t)\left[ 3c^2 \beta^2\cosh^2(\beta t)-c^2 \beta^2+kc+4cbk\beta^2\right]-2cbk\beta^2+k^2 b}{
\left(bk+c \cosh^2(\beta t)\right)^2}\nonumber\\
-\rho_0 \omega_b \left(bk+c \cosh^2(\beta t)\right)^{-\frac{3}{2}(1+\omega_b)}
\end{eqnarray}
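The curvature parts of (\ref{E24}) and (\ref{E25}) can be checked against the ansatz (\ref{E22}) symbolically; the fluid terms match by inspection since $a^2=bk+c\cosh^2(\beta t)$. An illustrative \texttt{sympy} sketch:
\begin{verbatim}
# Check the curvature parts of (E24), (E25) for a(t) = sqrt(b*k + c*cosh(beta*t)^2).
import sympy as sp

t, k = sp.symbols('t k')
b, c, beta = sp.symbols('b c beta', positive=True)
ch, sh = sp.cosh(beta*t), sp.sinh(beta*t)
X = b*k + c*ch**2
a = sp.sqrt(X)
ad, add = sp.diff(a, t), sp.diff(a, t, 2)

rho_geom = sp.Rational(3, 2)*(ad**2/a**2 + k/a**2)
rho_closed = sp.Rational(3, 2)*(c*ch**2*(c*beta**2*sh**2 + k) + b*k**2)/X**2
assert sp.simplify((rho_geom - rho_closed).rewrite(sp.exp)) == 0

p_geom = -sp.Rational(1, 2)*(2*add/a + ad**2/a**2 + k/a**2)
p_closed = -sp.Rational(1, 2)*(ch**2*(3*c**2*beta**2*ch**2 - c**2*beta**2
           + k*c + 4*c*b*k*beta**2) - 2*c*b*k*beta**2 + k**2*b)/X**2
assert sp.simplify((p_geom - p_closed).rewrite(sp.exp)) == 0
print("(E24) and (E25) verified")
\end{verbatim}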
By inserting $\rho_T$ and $p_T$ into the equation
$\dot{T}=\sqrt{1+\frac{p_T}{\rho_T}}$ and plotting the corresponding
$\dot{T}$ as a function of time, we find that it is well approximated by
$\dot{T}= \delta\,\mathrm{sech}(\eta t)$,
where $\delta$ and $\eta$ are given in terms of $b$, $c$ and $\beta$.
Then we have
\begin{equation}\label{E27}
T=\frac{\delta}{\eta} \arctan(\sinh(\eta t)).
\end{equation}
Also, the corresponding potential Eq. (\ref{E21}) will be of the form
$A+B e^{-C t^2}$. Now one can use Eq. (\ref{E27}) and reconstruct the
potential $V$ in terms of the tachyon field $T$ as follows:
\begin{equation}\label{E27-1}
V(T)=A+B ~{\rm e}^{-\frac{C}{\eta^2} \left( \mathrm{arcsinh} \left( \tan
\left( {\frac {T \eta}{\delta}} \right) \right) \right) ^{2}},
\end{equation}
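The time integration in (\ref{E27}) and the inversion used in (\ref{E27-1}) can be checked quickly (an illustrative sketch):
\begin{verbatim}
# Check (E27), and numerically check the inversion t = asinh(tan(eta*T/delta))/eta.
import sympy as sp

t = sp.symbols('t')
delta, eta = sp.symbols('delta eta', positive=True)

T = (delta/eta)*sp.atan(sp.sinh(eta*t))                     # (E27)
res = sp.diff(T, t) - delta*sp.sech(eta*t)
assert sp.simplify(res.rewrite(sp.exp)) == 0

t0 = sp.asinh(sp.tan(sp.Float('0.2')))                      # inversion at delta=eta=1
assert abs(float(T.subs({t: t0, delta: 1, eta: 1})) - 0.2) < 1e-12
print("(E27) and its inversion verified")
\end{verbatim}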
By substituting Eqs. (\ref{E24}) and (\ref{E25}) in (\ref{E10}), one
can obtain the equation of state of the tachyon field as a function
of time. The
graph of this EoS is given in Fig. 1.\\
\begin{tabular*}{2cm}{cc}
\hspace{0.25cm}\includegraphics[scale=0.37]{withoutinteraction.eps}\hspace{0.25cm}\includegraphics[scale=0.355]{withoutinteraction1.eps}\\
\hspace{2.5cm}Figure 1:\,The EoS is plotted for $\rho_0=-30$, $c=20$, $b=0.5$,
$\beta=0.75$,\\\hspace{2.5cm} and $k=+1, 0, -1$ as
the solid, dotted and dashed curves,
respectively.\\
\end{tabular*}
\section{Interacting two-fluid model}
Now we are going to consider an interaction between the tachyonic
field and the barotropic fluid. Here we introduce a
phenomenological coupling function which is a product of the Hubble
parameter and the energy density of the barotropic fluid. In that
case there is an energy flow between the two fluids. The equations
of motion corresponding to the tachyonic field and the barotropic
fluid are, respectively,
\begin{equation}\label{E28}
\dot{\rho}_{T}+3\frac{\dot{a}}{a}(\rho_{T}+p_{T})=-Q,
\end{equation}
\begin{equation}\label{E29}
\dot{\rho}_{b}+3\frac{\dot{a}}{a}(\rho_{b}+p_{b})=Q,
\end{equation}
where the quantity $Q$ expresses the interaction between the dark
components. The interaction term $Q$ should be positive, i.e. $Q>0$,
which means that there is an energy transfer from the dark energy to
dark matter. The positivity of the interaction term ensures that the
second law of thermodynamics is fulfilled \cite{Pavon:2007gt}.
At this point, it should be stressed that the continuity equations
imply that the interaction term should be a function of a quantity
with units of inverse of time (a first and natural choice can be the
Hubble factor $H$) multiplied with the energy density. Therefore,
the interaction term could be in any of the following forms: (i)
$Q\propto H\rho_{T}$ \cite{Pavon:2005yx,Pavon:2007gt}, (ii)
$Q\propto H\rho_{b}$ \cite{Amendola:2006dg}, or (iii) $Q\propto
H(\rho_{T}+\rho_{b})$ \cite{Wang:2005ph}. The freedom of choosing
the specific form of the interaction term $Q$ stems from our
incognizance of the origin and nature of dark energy as well as dark
matter. Moreover, a microphysical model describing the interaction
between the dark components of the universe is not available
nowadays. Here we consider $Q=3H \sigma \rho_b$, where $\sigma$ is a
coupling constant. By using equations (\ref{E28}) and
(\ref{E29}) one can obtain $\rho_b$, $\rho_T$ and $p_T$ as
follows:
\begin{equation}\label{E30}
\rho_b=\rho_0 a^{-3 (1+\omega_b-\sigma)},
\end{equation}
\begin{equation}\label{E31}
\rho_{T}=\frac{3}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0
a^{-3(1+\omega_b-\sigma)},
\end{equation}
and
\begin{equation}\label{E32}
p_{T}=-\frac{1}{2}\left(2\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0
(\omega_b-\sigma) a^{-3(1+\omega_b-\sigma)}.
\end{equation}
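As in the non-interacting case, one can check directly that (\ref{E30}) solves (\ref{E29}); a short \texttt{sympy} sketch (illustrative only) is:
\begin{verbatim}
# Check that (E30) solves (E29) with Q = 3*H*sigma*rho_b.
import sympy as sp

t = sp.symbols('t')
rho0, wb, sig = sp.symbols('rho_0 omega_b sigma', positive=True)
a = sp.Function('a', positive=True)(t)
H = sp.diff(a, t)/a

rho_b = rho0 * a**(-3*(1 + wb - sig))          # ansatz (E30)
lhs = sp.diff(rho_b, t) + 3*H*(1 + wb)*rho_b   # left side of (E29)
assert sp.simplify(lhs - 3*H*sig*rho_b) == 0
print("(E30) solves (E29)")
\end{verbatim}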
Similarly to the previous case, $\dot{T}$ and $V(T)$ are given by
the following expressions,
\begin{equation}\label{E33}
\dot{T}=\sqrt{1-\frac{\frac{1}{2}\left(2\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)+\rho_0(\omega_b-\sigma)a^{-3(1+\omega_b-\sigma)}}
{\frac{3}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0a^{-3(1+\omega_b-\sigma)}}},
\end{equation}
and
\begin{equation}\label{E34}
V(T)=\sqrt{\left[\frac{1}{2}\left(2\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)+\rho_0(\omega_b-\sigma)a^{-3(1+\omega_b-\sigma)}\right]
\left[\frac{3}{2}\left(\frac{\dot{a}^2}{a^2}+\frac{k}{a^2}\right)-\rho_0a^{-3(1+\omega_b-\sigma)}\right]}.
\end{equation}
Substituting the value of $a(t)$ from equation (\ref{E22}) into $\rho_T$,
$p_T$, $\dot{T}$ and $V$, we have
\begin{eqnarray}\label{E35}
\rho_T=\frac{3}{2}\frac{c \cosh^2(\beta t)\left(c\beta^2 \sinh^2(\beta t)+k\right)+bk^2}{\left(bk+c \cosh^2(\beta t)\right)^2}\nonumber\\ -\rho_0 \left(bk+c \cosh^2(\beta t)\right)^{-\frac{3}{2}(1+\omega_b-\sigma)}
\end{eqnarray}
and
\begin{eqnarray}\label{E36}
p_T=-\frac{1}{2}\frac{\cosh^2(\beta t)\left[ 3c^2 \beta^2\cosh^2(\beta t)-c^2 \beta^2+kc+4cbk\beta^2\right]-2cbk\beta^2+k^2 b}{
\left(bk+c \cosh^2(\beta t)\right)^2}\nonumber\\
-\rho_0 (\omega_b-\sigma) \left(bk+c \cosh^2(\beta t)\right)^{-\frac{3}{2}(1+\omega_b-\sigma)}
\end{eqnarray}
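For orientation, the EoS $\omega_T=p_T/\rho_T$ built from (\ref{E35}) and (\ref{E36}) can be evaluated numerically. The sketch below uses the parameters of Fig.~2; the value $\omega_b=0$ (dust) is an assumption made here for illustration, as the text does not state the value of $\omega_b$ used in the figures:
\begin{verbatim}
# Numerical sketch of omega_T(t) = p_T/rho_T from (E35), (E36); parameters as in
# Fig. 2 (rho_0=-30, c=20, b=0.5, beta=0.75, sigma=0.25). omega_b=0 is assumed.
import numpy as np

rho0, c, b, beta, wb, sig = -30.0, 20.0, 0.5, 0.75, 0.0, 0.25

def omega_T(t, k):
    ch, sh = np.cosh(beta*t), np.sinh(beta*t)
    X = b*k + c*ch**2
    rho = 1.5*(c*ch**2*(c*beta**2*sh**2 + k) + b*k**2)/X**2 \
          - rho0*X**(-1.5*(1 + wb - sig))
    p = -0.5*(ch**2*(3*c**2*beta**2*ch**2 - c**2*beta**2 + k*c
              + 4*c*b*k*beta**2) - 2*c*b*k*beta**2 + k**2*b)/X**2 \
        - rho0*(wb - sig)*X**(-1.5*(1 + wb - sig))
    return p/rho

for k in (1, 0, -1):
    print("k =", k, [round(omega_T(tt, k), 3) for tt in (0.5, 2.0, 5.0)])
\end{verbatim}
In this sketch $\omega_T$ approaches $-1$ at late times for all three values of $k$.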
By inserting $\rho_T$ and $p_T$ into the equation
$\dot{T}=\sqrt{1+\frac{p_T}{\rho_T}}$ and plotting the corresponding
$\dot{T}$ as a function of time, we obtain graphs of $T$ and $V$ of
the same form as in the non-interacting case.\\ By substituting Eqs.
(\ref{E35}) and (\ref{E36}) in (\ref{E10}), graphs of the EoS as a
function of time are given
in Fig. 2.\\
\begin{tabular*}{2cm}{cc}
\hspace{0.25cm}\includegraphics[scale=0.37]{withinteraction.eps}\hspace{0.25cm}\includegraphics[scale=0.355]{withinteraction1.eps}\\
\hspace{2cm}Figure 2:\,The EoS is plotted for $\rho_0=-30$, $c=20$, $b=0.5$,
$\beta=0.75$,\\\hspace{2cm} $\sigma=0.25$ and $k=+1,
0, -1$ as the solid, dotted and dashed curves,
respectively.\\
\end{tabular*}
\section{Conclusion}
Among the different candidates to play the role of dark energy,
the tachyon has emerged as a possible source of dark energy for a
particular class of potentials \cite{tac}. In the present paper we
have studied the tachyonic field model in a non-flat universe. At
first we have considered the non-interacting case, where the tachyon
field and the barotropic fluid separately satisfy the conservation
equation. We have obtained the evolution of the scale factor, energy
density, pressure, tachyon field and tachyon potential in
terms of cosmic time. After that we have reconstructed the potential
of the tachyon in terms of the tachyon field. The evolution of the EoS
and its late-time behavior are shown in Fig. 1.\\
Studying the interaction between dark energy and ordinary matter
opens a possibility of detecting dark energy. It should be
pointed out that evidence was recently provided by the Abell Cluster
A586 in support of the interaction between dark energy and dark
matter \cite{Bertolami:2007zm}. However, despite the fact that
numerous works have been performed till now, there are no strong
observational bounds on the strength of this interaction
\cite{Feng:2007wn}. This inability to set stringent (observational or
theoretical) constraints on the strength of the coupling between
dark energy and dark matter stems from our ignorance of the nature
and origin of the dark components of the Universe. It is therefore
more than obvious that further work is needed in this direction. For
this reason we have extended our analysis to the interacting case
in a separate section of this paper.
\section{Acknowledgement} The authors would like to thank
an anonymous referee for crucial remarks and advice.
\section{Introduction}\label{sec:introduction}
We consider closed strictly convex surfaces $M_t$ in ${\mathbb{R}}^3$ that contract with normal velocity equal to a positive power of the Gauss curvature,
\begin{equation}\label{flow equation K}
\frac{d}{dt} X = -K^{\sigma}\nu.
\end{equation}
For all $\sigma > 0$, this is a parabolic flow equation.
We have a solution on a maximal time interval $[0,T)$, $0 < T < \infty$.
Chow \cite{bc:deforming} proves that the surfaces converge to a point as $t \rightarrow T$.
For $\sigma = 1$, this is the Gauss curvature flow.
It was introduced by Firey as a model for the shape of wearing stones on beaches \cite{wf:shapes}.
Firey conjectured that, after appropriate rescaling, the surfaces converge to spheres.
This is also referred to as convergence to a ``round point''.
The conjecture was confirmed by Andrews in \cite{ba:gauss}.
Andrews and Chen \cite{ac:surfaces} extended this result to all powers $\frac{1}{2} \leq \sigma \leq 1$.
The crucial step in their proof, Theorem 2.2, is to show that the quantity
\begin{equation}\label{monotone quantity K}
\max_{M_t} \frac{\left(\lambda_1-\lambda_2\right)^2}{\lambda_1^2\lambda_2^2} \cdot K^{2\sigma}
\end{equation}
is non-increasing in time. $\lambda_1$, $\lambda_2$ denote the principal curvatures of the surfaces $M_t$.
We give a more detailed introduction to the standard notation in Section \ref{sec notation}.
For many other normal velocities $F$, quantities $w$ like \eqref{monotone quantity K} are known
which are monotone during the corresponding flows and vanish precisely for spheres.
For $F=|A|^2$ Schn\"urer \cite{os:surfacesA2} obtains $$w = \frac{\left(\lambda_1-\lambda_2\right)^2}{\lambda_1\lambda_2}\cdot H$$ and
for $F=H^\sigma,\; 1 \leq \sigma \leq \sigma_{\ast},\; \sigma_{\ast} \approx 5.17$, Schulze and Schn\"urer \cite{fs:convexity} use
$$w=\frac{\left(\lambda_1-\lambda_2\right)^2}{\lambda_1^2\lambda_2^2}\cdot H^{2\sigma}.$$
For $F=\tr A^\sigma,\; 1 \leq \sigma < \infty$, Andrews and Chen \cite{ac:surfaces} get $$w=\frac{\left(\lambda_1-\lambda_2\right)^2}{\lambda_1^2\lambda_2^2}\cdot\left(\tr A^\sigma\right)^2.$$
This is the first example where flows of arbitrarily high degree of homogeneity contract surfaces to round points.
In \cite{os:surfacesA2}, Schn\"urer proposes criteria for selecting monotone quantities like \eqref{monotone quantity K}.
To the author's knowledge, to date, all known quantities which fulfill these criteria can be used to prove convergence to a round point.
This is why we work with these criteria as a definition in this paper.
Our question is whether such monotone quantities exist for equation \eqref{flow equation K} if $\sigma > 1$.
Their monotonicity is proven using the maximum principle,
so we name these quantities \textit{maximum-principle functions}.
\begin{definition}\label{def mpf}
\textbf{(Maximum-principle function)}
Let $w$ be a symmetric rational function of the principal curvatures,
\begin{align}
w = \frac{p\left(\lambda_1,\lambda_2\right)}{q\left(\lambda_1,\lambda_2\right)}.
\end{align}
Here, $p \not\equiv 0$ and $q \not \equiv 0$ are homogeneous polynomials.
$w$ is called a \textit{maximum-principle function} for a normal velocity $F$,
if
\begin{enumerate}
\item\label{I}
\begin{enumerate}
\item $p(\lambda_1,\,\lambda_2)\ge0,\,q(\lambda_1,\,\lambda_2)>0$ for all $0<\lambda_1,\,\lambda_2$,
\item $p(\lambda_1,\,\lambda_2)=0$ for $\lambda_1=\lambda_2>0$.
\end{enumerate}
\item\label{II}
$\deg p>\deg q$.
\item\label{III}
$\fracp{w(\rho,1)}{\rho}<0$ for all $0<\rho<1$ and
$\fracp{w(\rho,1)}{\rho}>0$ for all $\rho>1$.
\item\label{IV}
$L\left(w\right) := \frac{d}{dt} w-F^{ij}w_{;\,ij} \leq 0$ for all $0<\lambda_1,\,\lambda_2.$ \\
We achieve this by assuming
\begin{enumerate}
\item terms without derivatives of $\left(h_{ij}\right)$ are nonpositive, and
\item terms involving derivatives of $\left(h_{ij}\right)$ at a critical point of $w$,
\textit{i.e.} $w_{;i} =0$ for all $i=1,\,2$, are nonpositive.
\end{enumerate}
\end{enumerate}
\end{definition}
As in \cite{os:surfacesA2,os:surfaces}, we motivate conditions \eqref{I} to \eqref{IV}.
For all flow equations considered, spheres contract to round points.
So we can only find monotone quantities if $\deg p \leq \deg q$ or $p\left(\lambda,\lambda\right) = 0$ for all $\lambda>0$.
If $\deg p < \deg q$, we obtain that $w$ is non-increasing on any self-similarly contracting surface.
So this does not imply convergence to a round point.
Condition \eqref{III} ensures that the quantity decreases if the ratio of the principal curvatures $\lambda_1/\lambda_2$ approaches one.
By condition \eqref{IV} we check that we can apply the maximum principle to prove monotonicity. \\
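For orientation, conditions \eqref{I} to \eqref{III} are easily checked for the known examples above. A minimal \texttt{sympy} sketch for the Andrews--Chen quantity \eqref{monotone quantity K}, here with the sample exponent $\sigma=1$ (illustrative only):
\begin{verbatim}
# Check conditions (I)-(III) for w = (l1-l2)^2/(l1*l2)^2 * K^(2s), sample s = 1.
import sympy as sp

l1, l2, rho = sp.symbols('lambda1 lambda2 rho', positive=True)
s = 1
w = (l1 - l2)**2/(l1**2*l2**2)*(l1*l2)**(2*s)

# (I): the numerator is a square, vanishing exactly for l1 = l2.
# (II): deg p = 2 + 4s = 6 > 4 = deg q for s = 1.
# (III): sample the sign of dw(rho,1)/drho below and above rho = 1.
dw = sp.diff(w.subs({l1: rho, l2: 1}), rho)
assert all(dw.subs(rho, r) < 0 for r in (sp.Rational(1, 4), sp.Rational(1, 2)))
assert all(dw.subs(rho, r) > 0 for r in (2, 5))
print("conditions (I)-(III) hold at the sampled points")
\end{verbatim}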
The linear operator $L\left(w\right)$, which corresponds to the general flow equation $$\frac{d}{dt} X = -F \nu,$$ fulfills an identity of the form
\begin{align*}
L\left(w\right) = C_w\left(\lambda_1,\lambda_2\right) + G_w\left(\lambda_1,\lambda_2\right) h_{11;1}^2+ G_w\left(\lambda_2,\lambda_1\right)h_{22;2}^2,
\end{align*}
at a critical point of $w$. This is Lemma \ref{lem evolution equation w}.
We name the rational function $C_w\left(\lambda_1,\lambda_2\right)$ the \textit{constant terms} and
the rational function $G_w\left(\lambda_1,\lambda_2\right)$ the \textit{gradient terms} of the evolution equation.
To fulfill condition \eqref{IV} the constant terms $C_w\left(\lambda_1,\lambda_2\right)$ and the gradient terms $G_w\left(\lambda_1,\lambda_2\right)$
simultaneously have to be nonpositive for all $0 < \lambda_1,\,\lambda_2$.
Here, we obtain a contradiction for $F=K^{\sigma}$ if $\sigma>1$.
Our main theorem is
\begin{theorem}\label{main theorem}
For a family of smooth closed strictly convex surfaces $M_t$ in ${\mathbb{R}}^3$ flowing according to
\begin{equation*}
\frac{d}{dt} X = -K^{\sigma}\nu, \quad \sigma > 1,
\end{equation*}
there exist no \textit{maximum-principle functions}.
\end{theorem}
Despite this fact, it remains an open question whether for any powers $\sigma > 1$, closed strictly convex surfaces converge to round points.
Due to Andrews we already know that this does not necessarily happen for all powers $\frac{1}{4} \leq \sigma \leq \frac{1}{2}$.
For $\sigma = \frac{1}{4}$, they converge to ellipsoids \cite{ba:contraction}. For all powers $\frac{1}{4} < \sigma \leq \frac{1}{2}$,
surfaces contract homothetically in the limit \cite{ba:motion}. \\
In Section \ref{sec proof strategy} we explain the proof strategy. In Sections \ref{sec evolution equations} and \ref{sec dehomogenized polynomials} we outline the proof of our main Theorem \ref{main theorem}.
\section{Notation}\label{sec notation}
For this paper, we adopt the chapter on standard notation from \cite{os:surfacesA2}.\\
The linear operator $L$ corresponding to the general flow equation
\begin{equation}\label{eq general flow equation}
\frac{d}{dt} X = -F\nu
\end{equation}
is defined by
\begin{equation}\label{def linear operator L}
L\left(w\right) := \frac{d}{dt} w - F^{ij} w_{;ij}.
\end{equation}
We use $X=X(x,\,t)$ to denote the embedding vector of a manifold $M_t$ into ${\mathbb{R}}^3$ and
${\frac{d}{dt}} X=\dot{X}$ for its total time derivative.
It is convenient to identify $M_t$ and its embedding in ${\mathbb{R}}^3$.
The normal velocity $F$ is a homogeneous symmetric function of the principal curvatures.
We choose $\nu$ to be the outer unit normal vector to $M_t$.
The embedding induces a metric $g_{ij} := \langle X_{,i},\, X_{,j} \rangle$ and
the second fundamental form $h_{ij} := -\langle X_{,ij},\,\nu \rangle$ for all $i,\,j = 1,\,2$.
We write indices preceded by commas to indicate differentiation with respect to space components,
e.\,g.\ $X_{,k} = \frac{\partial X}{\partial x_k}$ for all $k=1,\,2$.
We use the Einstein summation notation.
When an index variable appears twice in a single term it implies summation of that term over all the values of the index.
Indices are raised and lowered with respect to the metric
or its inverse $\left(g^{ij}\right)$,
e.\,g.\ $h_{ij} h^{ij} = h_{ij} g^{ik} h_{kl} g^{lj} = h^k_j h^j_k$.
The principal curvatures $\lambda_1,\,\lambda_2$ are the eigenvalues of the second fundamental
form $\left(h_{ij}\right)$ with respect to the induced metric $\left(g_{ij}\right)$. A surface is called strictly convex,
if all principal curvatures are strictly positive.
We will assume this throughout the paper.
Therefore, we may define the inverse of the second fundamental
form denoted by $\left(\tilde h^{ij}\right)$.
Symmetric functions of the principal
curvatures are well-defined, we will use the mean curvature
$H=g^{ij} h_{ij} = \lambda_1+\lambda_2$, the square of the norm of the second fundamental form
$|A|^2= h^{ij} h_{ij} = \lambda_1^2+\lambda_2^2$, the trace of powers of the second fundamental form
$\tr A^{\sigma} = \tr \left(h^i_j\right)^{\sigma} = \lambda_1^{\sigma} +\lambda_2^{\sigma}$, and the Gauss curvature
$K= \frac{\det h_{ij}}{\det g_{ij}} = \lambda_1\lambda_2$. We write indices preceded by semi-colons
to indicate covariant differentiation with respect to the induced metric,
e.\,g.\ $h_{ij;\,k} = h_{ij,k} - \Gamma^l_{ik} h_{lj} - \Gamma^l_{jk} h_{il}$,
where $\Gamma^k_{ij} = \frac{1}{2} g^{kl} \left(g_{il,j} + g_{jl,i} - g_{ij,l}\right)$.
It is often convenient to choose normal coordinates, i.\,e.\ coordinate systems such
that at a point the metric tensor equals the Kronecker delta, $g_{ij}=\delta_{ij}$,
and $(h_{ij})$ is diagonal, $(h_{ij})=\diag(\lambda_1,\,\lambda_2)$.
Whenever we use this notation, we will also assume that we have
fixed such a coordinate system. We will only use Euclidean coordinate
systems for ${\mathbb{R}}^3$ so that the indices of $h_{ij;\,k}$ commute according to
the Codazzi-Mainardi equations.
A normal velocity $F$ can be considered as a function of $(\lambda_1,\,\lambda_2)$
or $(h_{ij},\,g_{ij})$. We set $F^{ij}=\fracp{F}{h_{ij}}$,
$F^{ij,\,kl}=\fracp{^2F}{h_{ij}\partial h_{kl}}$.
Note that in coordinate
systems with diagonal $h_{ij}$ and $g_{ij}=\delta_{ij}$ as mentioned
above, $F^{ij}$ is diagonal.
For $F=K^\sigma$, we have $F^{ij}=\sigma K^{\sigma} \tilde{h}^{ij}$.
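In these coordinates this amounts to $\fracp{F}{\lambda_i}=\sigma K^\sigma/\lambda_i$; a one-line symbolic check (illustrative only) is:
\begin{verbatim}
# For F = K^sigma = (l1*l2)^sigma, check dF/dl_i = sigma*K^sigma/l_i, i.e. the
# diagonal entries of F^{ij} = sigma*K^sigma*htilde^{ij}.
import sympy as sp

l1, l2, s = sp.symbols('lambda1 lambda2 sigma', positive=True)
F = (l1*l2)**s
assert sp.simplify(sp.diff(F, l1) - s*F/l1) == 0
assert sp.simplify(sp.diff(F, l2) - s*F/l2) == 0
print("F^{ij} for F = K^sigma verified on the diagonal")
\end{verbatim}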
\section{Proof strategy}\label{sec proof strategy}
To prove our main Theorem \ref{main theorem}, we use an elementary fact about polynomials in one variable: if a polynomial in one variable $\rho$, which is not constantly zero, is nonpositive for all $\rho > 0$, then the coefficient of its leading term has to be negative. As mentioned before, we focus on condition \eqref{IV} in Definition \ref{def mpf} of the \textit{maximum-principle functions}. We use an indirect proof and assume the existence of a \textit{maximum-principle function}.

We calculate the constant terms $C_w\left(\lambda_1,\lambda_2\right)$ and the gradient terms $G_w\left(\lambda_1,\lambda_2\right)$ for general homogeneous symmetric polynomials $p\left(\lambda_1,\lambda_2\right)$ and $q\left(\lambda_1,\lambda_2\right)$. We state them in the algebraic basis $\lbrace H,K \rbrace$, where $H$ is the mean curvature and $K$ is the Gauss curvature, i.\,e.\ $p(H,K) :=\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1}H^{g-2i}K^i$, where $g=\deg p$. Using the identity $q^2 L\left(p/q\right) = q\,L\left(p\right) - p\,L\left(q\right)$ at a critical point of $p/q$, where we also choose normal coordinates, we easily see that $C_w\left(\lambda_1,\lambda_2\right)$ and $G_w\left(\lambda_1,\lambda_2\right)$ differ from polynomials only by a nonnegative factor. Dividing by this nonnegative factor, we transform $C_w\left(\lambda_1,\lambda_2\right)$ and $G_w\left(\lambda_1,\lambda_2\right)$ into polynomial versions of the constant terms and the gradient terms.

The next step is to dehomogenize the polynomial versions of $C_w\left(\lambda_1,\lambda_2\right)$ and $G_w\left(\lambda_1,\lambda_2\right)$ by setting $\lambda_1=\rho,\,\lambda_2=1$ and vice versa. Since the polynomial version of the constant terms is symmetric and the polynomial version of the gradient terms is asymmetric, we obtain only three instead of four polynomials in one variable, $C\left(\rho\right)$, $G_1\left(\rho\right)$ and $G_2\left(\rho\right)$.
Due to the defining properties of a \textit{maximum-principle function}, all three polynomials have to be nonpositive for all $\rho \geq 0$.

Now we calculate the leading terms of all three polynomials. Here, for technical reasons, we have to distinguish nine cases. However, each case results in a contradiction: as it turns out, the coefficients of the leading terms of $C\left(\rho\right)$, $G_1\left(\rho\right)$ and $G_2\left(\rho\right)$ can never be simultaneously negative. This concludes the proof of our main Theorem \ref{main theorem}.
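The sign-at-infinity argument behind this strategy can be illustrated in a few lines (a toy example, not part of the proof):
\begin{verbatim}
# For rho beyond all real roots, the sign of a polynomial equals the sign of its
# leading coefficient; hence "C(rho) <= 0 for all rho > 0" forces a negative
# leading coefficient. Toy illustration with an arbitrary sample polynomial:
import sympy as sp

rho = sp.symbols('rho')
C = sp.Poly(-2*rho**3 + rho**2 + 5, rho)

assert (C.eval(10**6) > 0) == (C.LC() > 0)
print("sign at large rho matches leading coefficient:", C.LC())
\end{verbatim}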
\section{Evolution equations, constant terms and gradient terms}\label{sec evolution equations}
\subsection{Evolution equations}
In the first part of Section \ref{sec evolution equations}, we do some preliminary work.
We calculate the covariant derivatives of the mean curvature $H$ and the Gauss curvature $K$.
Furthermore, we present the evolution equations, corresponding to the general flow equation \eqref{eq general flow equation} of the following geometric quantities
\begin{itemize}
\item induced metric $g_{ij}$,
\item inverse of the induced metric $g^{ij}$,
\item second fundamental form $h_{ij}$,
\item mean curvature $H$,
\item Gauss curvature $K$,
\item and general function $w\left(H,K\right)$ depending on $H$ and $K$.
\end{itemize}
\begin{lemma}
The covariant derivative of the mean curvature $H$ is given by
\begin{equation}
\label{H diff}
H_{;k} = g^{ij} h_{ij;k}.
\end{equation}
\end{lemma}
\begin{proof}
Direct calculations yield
\begin{align*}
H_{;k}=&\, \left(g^{ij} h_{ij}\right)_{;k} {\displaybreak[1]}\\
=&\, \underbrace{ \left( g^{ij} \right)_{;k} }_{=\, 0} h_{ij} + g^{ij} h_{ij;k} {\displaybreak[1]}\\
=&\, g^{ij} h_{ij;k}.\qedhere
\end{align*}
\end{proof}
\begin{lemma}
The covariant derivative of the Gauss curvature $K$ is given by
\begin{equation}
\label{K diff}
K_{;k}=\,K \tilde{h}^{ij} h_{ij;k}.
\end{equation}
\end{lemma}
\begin{proof}
Direct calculations yield
\begin{align*}
K_{;k}=&\,\left(\frac{\det h_{ij} }{\det g_{ij} }\right)_{;k} {\displaybreak[1]}\\
=&\, \frac{1}{\left(\det g_{ij}\right)^2} \left( \left(\det h_{ij}\right)_{;k}\left(\det g_{ij}\right) - \left(\det h_{ij}\right) \left(\det g_{ij}\right)_{;k} \right) {\displaybreak[1]}\\
=&\, \frac{1}{\left(\det g_{ij}\right)^2} \Bigg( \left(\frac{\partial}{\partial h_{ij}} \det h_{ij}\right) \left(h_{ij;k}\right) \left(\det g_{ij} \right)
-\left(\det h_{ij}\right)\left(\frac{\partial}{\partial g_{ij}} \det g_{ij}\right) \underbrace{\left(g_{ij;k}\right)}_{=\,0} \Bigg) {\displaybreak[1]}\\
=&\, \frac{1}{\det g_{ij}} \left(\det h_{ij}\right) \tilde{h}^{ji} h_{ij;k} {\displaybreak[1]}\\
=&\, K \tilde{h}^{ij} h_{ij;k}.\qedhere
\end{align*}
\end{proof}
\begin{lemma}
The metric $g_{ij}$ evolves according to
\begin{equation}
\label{g evol}
{\frac{d}{dt}} g_{ij}=-2Fh_{ij}.
\end{equation}
\end{lemma}
\begin{proof}
We refer to \cite{os:alpbach}.
\end{proof}
\begin{corollary}
The inverse metric $g^{ij}$ evolves according to
\begin{equation}
\label{ig evol}
{\frac{d}{dt}} g^{ij}=2Fh^{ij}.
\end{equation}
\end{corollary}
\begin{proof}
Direct calculations yield
\begin{align*}
{\frac{d}{dt}} g^{ij} =&\, -g^{ik} g^{ls} \underbrace{{\frac{d}{dt}} g_{kl}}_{\text{see}\,\eqref{g evol}} {\displaybreak[1]}\\
=&\, 2 F g^{ik} h_{kl} g^{lj} {\displaybreak[1]}\\
=&\, 2 F h^{ij}.\qedhere
\end{align*}
\end{proof}
\begin{lemma}
The second fundamental form $h_{ij}$ evolves according to
\begin{equation}
\label{2ff evol}
\begin{split}
L\left(h_{ij}\right)=&\,F^{kl}h^a_kh_{al}\cdot h_{ij}-F^{kl}h_{kl}\cdot h^a_ih_{aj}\\
&\,-Fh^k_ih_{kj}+F^{kl,\,rs}h_{kl;\,i}h_{rs;\,j}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
We refer to \cite{os:alpbach}.
\end{proof}
\begin{lemma}
The mean curvature $H$ evolves according to
\begin{equation}
\label{H evol}
\begin{split}
L\left(H\right)=&\,F^{kl} h^a_k h_{al} \cdot H + \left(F - F^{kl}h_{kl}\right) |A|^2+g^{ij}F^{kl,rs}h_{kl;i}h_{rs;j}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
This is a straightforward calculation.
\begin{align*}
&\, L\left(H\right) {\displaybreak[1]}\\
=&\, {\frac{d}{dt}} H - F^{kl}H_{;kl} \qquad \textbf{(\text{see}\;\eqref{def linear operator L})} {\displaybreak[1]}\\
=&\,{\frac{d}{dt}}\left(g^{ij} h_{ij}\right) - F^{kl} \big(\underbrace{H_{;k}}_{\text{see}\,\eqref{H diff}}\big)_{;l} {\displaybreak[1]}\\
=&\,\underbrace{\left({\frac{d}{dt}} g^{ij}\right)}_{\text{see}\,\eqref{ig evol}} h_{ij} + g^{ij} \left( {\frac{d}{dt}} h_{ij} \right)
-F^{kl}\left(g^{ij} h_{ij;k}\right)_{;l} {\displaybreak[1]}\\
=&\,2 F h^{ij} h_{ij} + g^{ij} \left({\frac{d}{dt}} h_{ij} \right)
- F^{kl} \bigg(\underbrace{\left(g^{ij}\right)_{;l}}_{=\,0} h_{ij;k} + g^{ij} h_{ij;kl} \bigg) {\displaybreak[1]}\\
=&\,2 F |A|^2 + g^{ij} \underbrace{L\left(h_{ij}\right)}_{\text{see}\,\eqref{2ff evol}} {\displaybreak[1]}\\
=&\,2 F |A|^2 \\
&+g^{ij} \left( F^{kl}h^a_kh_{al}\cdot h_{ij}-F^{kl}h_{kl}\cdot h^a_ih_{aj}-Fh^k_ih_{kj}+F^{kl,\,rs}h_{kl;\,i}h_{rs;\,j}\right) {\displaybreak[1]}\\
=&\,2 F |A|^2 \\
&\,+ F^{kl} h^a_k h_{al} \cdot g^{ij} h_{ij} - F^{kl} h_{kl} \cdot h^a_i g^{ij} h_{aj} - F h^k_i g^{ij} h_{kj} + g^{ij} F^{kl,rs} h_{kl;i} h_{rs;j} {\displaybreak[1]}\\
=&\,2 F |A|^2 \\
&\,+F^{kl} h^a_k h_{al} \cdot H - F^{kl} h_{kl} \cdot |A|^2 - F |A|^2 + g^{ij} F^{kl,rs} h_{kl;i} h_{rs;j} {\displaybreak[1]}\\
=&\,F^{kl} h^a_k h_{al} \cdot H + \left(F - F^{kl}h_{kl}\right) |A|^2+g^{ij}F^{kl,rs}h_{kl;i}h_{rs;j}.\qedhere
\end{align*}
\end{proof}
\begin{lemma}
The Gauss curvature $K$ evolves according to
\begin{equation}
\label{K evol}
\begin{split}
&\,L\left(K\right) \\
=&\,K\left( F^{kl} h^a_k h_{al} \cdot \tilde{h}^{ij} h_{ij} - \left(F^{kl} h_{kl} + F\right) \tilde{h}^{ij} h^a_i h_{aj} \right. \\
&\qquad\left. +2FH + \left( \tilde{h}^{ij} F^{kl,rs} + F^{ij} \left( \tilde{h}^{kr} \tilde{h}^{ls} - \tilde{h}^{kl} \tilde{h}^{rs} \right) \right) h_{kl;i} h_{rs;j} \right).
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
This is a straightforward calculation.
\begin{align*}
&\, L\left(K\right) {\displaybreak[1]}\\
=&\, {\frac{d}{dt}} K - F^{kl}K_{;kl} \qquad \textbf{(\text{see}\;\eqref{def linear operator L})} {\displaybreak[1]}\\
=&\, {\frac{d}{dt}} \left(\frac{\det h_{ij}}{\det g_{ij}} \right) - F^{kl}\big( \underbrace{K_{;k}}_{\text{see}\,\eqref{K diff}} \big)_{;l} {\displaybreak[1]}\\
=&\, \frac{1}{\left( \det g_{ij} \right)^2} \left( \left( {\frac{d}{dt}} \det h_{ij} \right) \left( \det g_{ij} \right) - \left( \det h_{ij} \right) \left( {\frac{d}{dt}} \det g_{ij} \right) \right) \\
&\, -F^{kl} \left( K \tilde{h}^{ij} h_{ij;k} \right)_{;l} {\displaybreak[1]}\\
=&\, \frac{1}{\left( \det g_{ij} \right)^2 } \Bigg( \left( \frac{\partial}{\partial h_{ij}} \det h_{ij} \right) \left({\frac{d}{dt}} h_{ij} \right) \left(\det g_{ij} \right) \Bigg. \\
&\qquad \qquad \qquad
\Bigg. -\left( \det h_{ij} \right) \left( \frac{\partial}{\partial g_{ij}} \det g_{ij} \right) \bigg( \underbrace{{\frac{d}{dt}} g_{ij} }_{\text{see}\,\eqref{g evol}} \bigg) \Bigg) \\
&\, -F^{kl} \left( K_{;l} \tilde{h}^{ij} h_{ij;k} + K \left(\tilde{h}^{ij} \right)_{;l} h_{ij;k} + K \tilde{h}^{ij} h_{ij;kl} \right) {\displaybreak[1]}\\
=&\, K \left( \left(\tilde{h}^{ij} \right) \left({\frac{d}{dt}} h_{ij} \right) - \left( g^{ij} \right) \left( -2Fh_{ij} \right) \right) \\
&\, -F^{kl} \left( K \tilde{h}^{rs} h_{rs;l} \tilde{h}^{ij} h_{ij;k} + K \left(-\tilde{h}^{ir} \tilde{h}^{sj} h_{rs;l} \right) h_{ij;k} + K \tilde{h}^{ij} h_{ij;kl} \right) {\displaybreak[1]}\\
=&\, K \left( \tilde{h}^{ij} L\left( h_{ij} \right) + 2 F g^{ij} h_{ij} - F^{kl} \tilde{h}^{ij} \tilde{h}^{rs} h_{ij;k} h_{rs;l} + F^{kl} \tilde{h}^{ir} \tilde{h}^{sj} h_{ij;k} h_{rs;l} \right) {\displaybreak[1]}\\
=&\, K \bigg( \tilde{h}^{ij} \underbrace{L\left( h_{ij} \right)}_{\text{see}\,\eqref{2ff evol}} + 2 F H + F^{ij} \left( \tilde{h}^{kr} \tilde{h}^{ls} - \tilde{h}^{kl} \tilde{h}^{rs} \right) h_{kl;i} h_{rs;j} \bigg) {\displaybreak[1]}\\
=&\, K \bigg( \tilde{h}^{ij} \left(F^{kl} h^a_k h_{al} \cdot h_{ij} - F^{kl} h_{kl} \cdot h^a_i h_{aj} - F h^k_i h_{kj} + F^{kl,rs} h_{kl;i} h_{rs;j} \right) \bigg. \\
&\qquad \bigg. +2 F H + F^{ij} \left( \tilde{h}^{kr} \tilde{h}^{ls} - \tilde{h}^{kl} \tilde{h}^{rs} \right) h_{kl;i} h_{rs;j} \bigg) {\displaybreak[1]}\\
=&\, K \bigg( F^{kl} h^a_k h_{al} \cdot \tilde{h}^{ij} h_{ij} - F^{kl} h_{kl} \cdot \tilde{h}^{ij} h^a_i h_{aj} - F \tilde{h}^{ij} h^k_i h_{kj} + \tilde{h}^{ij} F^{kl,rs} h_{kl;i} h_{rs;j} \bigg. \\
&\qquad \bigg. +2 F H + F^{ij} \left( \tilde{h}^{kr} \tilde{h}^{ls} - \tilde{h}^{kl} \tilde{h}^{rs} \right) h_{kl;i} h_{rs;j} \bigg) {\displaybreak[1]}\\
=&\, K \bigg( F^{kl} h^a_k h_{al} \cdot \tilde{h}^{ij} h_{ij} - \left(F^{kl} h_{kl} + F\right) \tilde{h}^{ij} h^a_i h_{aj} \bigg. \\
&\qquad \bigg. +2 F H + \left(\tilde{h}^{ij} F^{kl,rs} + F^{ij} \left( \tilde{h}^{kr} \tilde{h}^{ls} - \tilde{h}^{kl} \tilde{h}^{rs} \right) \right) h_{kl;i} h_{rs;j} \bigg).\qedhere
\end{align*}
\end{proof}
\begin{lemma}
The function $w\left(H,K\right)$ evolves according to
\begin{equation}
\label{w evol}
\begin{split}
&\,L\big(w\left(H,K\right)\big) \\
=&\,L\left(H\right)\,w_H + L\left(K\right)\,w_K \\
&\,- F^{ij} \Big(H_{;i} H_{;j}\,w_{HH} + \left(H_{;i} K_{;j} + K_{;i} H_{;j}\right)\,w_{HK} + K_{;i} K_{;j}\,w_{KK} \Big),
\end{split}
\end{equation}
where $H$ is the mean curvature and $K$ is the Gauss curvature. $H$ and $K$ form an algebraic basis of the symmetric homogeneous polynomials in two variables. \\
Here, the $w$-terms are defined as
\begin{align*}
&\, w_H := \frac{\partial w}{\partial H},\;w_K := \frac{\partial w}{\partial K}, {\displaybreak[1]}\\
&\, w_{HH} := \frac{\partial^2w}{\partial H^2},\; w_{HK} := \frac{\partial^2w}{\partial H \partial K},\;w_{KK} := \frac{\partial^2w}{\partial K^2}.
\end{align*}
\end{lemma}
\begin{proof}
We use the chain rule.
\begin{align*}
&\, L\big(w\left(H,K\right)\big) {\displaybreak[1]}\\
=&\, \frac{d}{dt} w\left(H,K\right) - F^{ij} w\left(H,K\right)_{;ij} {\displaybreak[1]}\\
=&\, \frac{d}{dt} H\,w_H + \frac{d}{dt} K\,w_K - F^{ij} \left( H_{;i}\,w_H + K_{;i}\,w_K\right)_{;j} {\displaybreak[1]}\\
=&\, \frac{d}{dt} H\,w_H + \frac{d}{dt} K\,w_K - F^{ij} \left( H_{;ij}\,w_H + K_{;ij}\,w_K \right)\\
&\, - F^{ij} \Big(H_{;i} H_{;j}\,w_{HH} + H_{;i} K_{;j}\,w_{HK} + H_{;j} K_{;i}\,w_{HK} + K_{;i} K_{;j}\,w_{KK}\Big) {\displaybreak[1]}\\
=&\, L\left(H\right)\,w_H + L\left(K\right)\,w_K \\
&\, - F^{ij} \Big(H_{;i} H_{;j}\,w_{HH} + \left(H_{;i} K_{;j} + H_{;j} K_{;i}\right)\,w_{HK} + K_{;i} K_{;j}\,w_{KK}\Big). \qedhere
\end{align*}
\end{proof}
\subsection{Evolution equations at a critical point, where we also choose normal coordinates.}
In the second part of Section \ref{sec evolution equations}, we calculate the evolution equations of the following geometric quantities at a critical point of the general function $w\left(H,K\right)$,
where we also choose normal coordinates,
\begin{itemize}
\item mean curvature $H$,
\item Gauss curvature $K$,
\item and general function $w\left(H,K\right)$ depending on $H$ and $K$.
\end{itemize}
\begin{lemma}
The covariant derivatives of the second fundamental form $h_{ij}$ fulfill these identities at a critical point of $w\left(H,K\right)$ \text{\textbf{(CP)}},
i.e. $w\left(H,K\right)_{;i} = 0$ for $i=1,2$, where we also choose normal coordinates \text{\textbf{(NC)}}, i.e. the metric tensor equals the Kronecker delta, $g_{ij} = \delta_{ij}$, and $\left(h_{ij}\right)$ is diagonal, $\left(h_{ij}\right) = \diag\left(\lambda_1,\lambda_2\right)$,
\begin{equation}
\label{h111}
h_{22;1} = -\frac{w_H + \lambda_2 w_K}{w_H + \lambda_1 w_K} \cdot h_{11;1} \equiv a_1 \cdot h_{11;1},
\end{equation}
\begin{equation}
\label{h222}
h_{11;2} = -\frac{w_H + \lambda_1 w_K}{w_H + \lambda_2 w_K} \cdot h_{22;2} \equiv a_2 \cdot h_{22;2}.
\end{equation}
\end{lemma}
\begin{proof}
Let $i=1$.
\begin{align*}
&\, w\left(H,K\right)_{;1} {\displaybreak[1]}\\
=&\, \underbrace{H_{;1}}_{\text{see}\,\eqref{H diff}} w_H + \underbrace{K_{;1}}_{\text{see}\,\eqref{K diff}} w_K {\displaybreak[1]}\\
=&\, \left(g^{ij} h_{ij;1}\right) w_H + \left(K \tilde{h}^{ij} h_{ij;1} \right) w_K {\displaybreak[1]}\\
=&\, \left(h_{11;1} + h_{22;1} \right) w_H + K \left( \frac{1}{\lambda_1} h_{11;1} + \frac{1}{\lambda_2} h_{22;1} \right) w_K \qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
=&\, \left(w_H + \lambda_2 w_K \right) h_{11;1} + \left(w_H + \lambda_1 w_K\right) h_{22;1} {\displaybreak[1]}\\
=&\, 0 \qquad \text{\textbf{(CP)}}.
\end{align*}
This implies
\begin{align*}
h_{22;1} = -\frac{w_H + \lambda_2 w_K}{w_H + \lambda_1 w_K} \cdot h_{11;1} \equiv a_1 \cdot h_{11;1}.
\end{align*}
Let $i=2$.
\begin{align*}
&\, w\left(H,K\right)_{;2} \\
=&\, \underbrace{H_{;2}}_{\text{see}\,\eqref{H diff}} w_H + \underbrace{K_{;2}}_{\text{see}\,\eqref{K diff}} w_K {\displaybreak[1]}\\
=&\, \left(g^{ij} h_{ij;2}\right) w_H + \left(K \tilde{h}^{ij} h_{ij;2} \right) w_K {\displaybreak[1]}\\
=&\, \left(h_{11;2} + h_{22;2} \right) w_H + K \left( \frac{1}{\lambda_1} h_{11;2} + \frac{1}{\lambda_2} h_{22;2} \right) w_K \qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
=&\, \left(w_H + \lambda_2 w_K \right) h_{11;2} + \left(w_H + \lambda_1 w_K\right) h_{22;2} {\displaybreak[1]}\\
=&\, 0 \qquad \text{\textbf{(CP)}}.
\end{align*}
This implies
\begin{align*}
h_{11;2} = -\frac{w_H + \lambda_1 w_K}{w_H + \lambda_2 w_K} \cdot h_{22;2} \equiv a_2 \cdot h_{22;2}.
\end{align*}
\end{proof}
\begin{lemma}
The covariant derivatives of the mean curvature $H$ \eqref{H diff} fulfill these identities at a critical point of $w\left(H,K\right)$, where we also choose normal coordinates,
\begin{equation}
\label{H diff ident 1}
H_{;1} = \left(1 + a_1\right) h_{11;1},
\end{equation}
\begin{equation}
\label{H diff ident 2}
H_{;2} = \left(1 + a_2\right) h_{22;2}.
\end{equation}
\end{lemma}
\begin{proof}
\begin{align*}
H_{;1} =&\, g^{ij}h_{ij;1} \qquad \textbf{(\text{see}\;\eqref{H diff})} {\displaybreak[1]}\\
=&\, h_{11;1} + h_{22;1} \qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
=&\, \left(1 + a_1\right) h_{11;1} \qquad \text{\textbf{(CP)}},
\end{align*}
\begin{align*}
H_{;2} =&\, g^{ij}h_{ij;2} \qquad \textbf{(\text{see}\,\eqref{H diff})} {\displaybreak[1]}\\
=&\, h_{11;2} + h_{22;2} \qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
=&\, \left(1 + a_2\right) h_{22;2} \qquad \text{\textbf{(CP)}}.
\end{align*}
\end{proof}
\begin{lemma}
The covariant derivatives of the Gauss curvature $K$ \eqref{K diff} fulfill these identities at a critical point of $w\left(H,K\right)$, where we also choose normal coordinates,
\begin{equation}
\label{K diff ident 1}
K_{;1} = \left(\lambda_2 + \lambda_1 a_1\right) h_{11;1},
\end{equation}
\begin{equation}
\label{K diff ident 2}
K_{;2} = \left(\lambda_1 + \lambda_2 a_2\right) h_{22;2}.
\end{equation}
\end{lemma}
\begin{proof}
\begin{align*}
K_{;1} =&\, K\tilde{h}^{ij}h_{ij;1} \qquad \textbf{(\text{see}\,\eqref{K diff})} {\displaybreak[1]}\\
=&\, K\left(\frac{1}{\lambda_1} h_{11;1} + \frac{1}{\lambda_2} h_{22;1}\right) \qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
=&\, \lambda_2 h_{11;1} + \lambda_1 h_{22;1} {\displaybreak[1]}\\
=&\, \left(\lambda_2 + \lambda_1 a_1\right) h_{11;1} \qquad \text{\textbf{(CP)}},
\end{align*}
\begin{align*}
K_{;2} =&\, K \tilde{h}^{ij}h_{ij;2} \qquad \textbf{(\text{see}\,\eqref{K diff})} {\displaybreak[1]}\\
=&\, K\left(\frac{1}{\lambda_1} h_{11;2} + \frac{1}{\lambda_2} h_{22;2}\right) \qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
=&\, \lambda_2 h_{11;2} + \lambda_1 h_{22;2} {\displaybreak[1]}\\
=&\, \left(\lambda_1 + \lambda_2 a_2\right) h_{22;2} \qquad \text{\textbf{(CP)}}.
\end{align*}
\end{proof}
\begin{lemma}
The evolution equation of the mean curvature $H$ \eqref{H evol} fulfills this identity at a critical point of $w\left(H,K\right)$, where we also choose normal coordinates,
\begin{equation}
\label{H evol ident}
L\left(H\right) = C_H\left(\lambda_1,\lambda_2\right) + G_H\left(\lambda_1,\lambda_2\right) h_{11;1}^2 + G_H\left(\lambda_2,\lambda_1\right) h_{22;2}^2,
\end{equation}
\qquad where
\begin{equation}
\label{H constant ident}
\begin{split}
C_H\left(\lambda_1,\lambda_2\right) =&\, F \left(\lambda_1^2 + \lambda_2^2\right) + \left( \frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2} \right) \left(\lambda_1 - \lambda_2\right) \lambda_1 \lambda_2,
\text{ and}
\end{split}
\end{equation}
\begin{equation}
\label{H gradient ident}
\begin{split}
G_H\left(\lambda_1,\lambda_2\right) =\, \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1 + \frac{\partial^2F}{\partial \lambda_2^2} a_1^2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1-\lambda_2} a_1^2.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
\begin{align*}
&\, L\left(H\right) {\displaybreak[1]}\\
= &\, F^{kl} h^a_k h_{al} \cdot H + \left(F - F^{kl} h_{kl} \right) |A|^2 + g^{ij} F^{kl,rs} h_{kl;i} h_{rs;j} \qquad \textbf{(\text{see}\,\eqref{H evol})} {\displaybreak[1]}\\
= &\, \left( \frac{\partial F}{\partial \lambda_1} \lambda_1^2 + \frac{\partial F}{\partial \lambda_2} \lambda_2^2 \right) \left(\lambda_1 + \lambda_2\right) {\displaybreak[1]}\\
&\, + \left( F - \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right) \left(\lambda_1^2 + \lambda_2^2\right) \\
&\, + \left( \sum^2_{i,j=1} \frac{\partial^2F}{\partial \lambda_i \partial \lambda_j} h_{ii;1} h_{jj;1}
+ \sum^2_{\substack{i,j=1 \\ i\neq j}} \frac{\frac{\partial F}{\partial \lambda_i} - \frac{\partial F}{\partial \lambda_j}}{ \lambda_i - \lambda_j} h_{ij;1}^2 \right) \\
&\, + \left( \sum^2_{i,j=1} \frac{\partial^2F}{\partial \lambda_i \partial \lambda_j} h_{ii;2} h_{jj;2}
+ \sum^2_{\substack{i,j=1 \\ i\neq j}} \frac{\frac{\partial F}{\partial \lambda_i} - \frac{\partial F}{\partial \lambda_j}}{ \lambda_i - \lambda_j} h_{ij;2}^2 \right)
\qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
= &\, \left( \frac{\partial F}{\partial \lambda_1} \lambda_1^2 + \frac{\partial F}{\partial \lambda_2} \lambda_2^2 \right) \left(\lambda_1 + \lambda_2\right) \\
&\, + \left( F - \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right) \left(\lambda_1^2 + \lambda_2^2\right) \\
&\, + \left( \frac{\partial^2F}{\partial \lambda_1^2} h_{11;1}^2 + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} h_{11;1} h_{22;1} + \frac{\partial^2F}{\partial \lambda_2^2} h_{22;1}^2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} h_{12;1}^2 \right) \\
&\, + \left( \frac{\partial^2F}{\partial \lambda_1^2} h_{11;2}^2 + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} h_{11;2} h_{22;2} + \frac{\partial^2F}{\partial \lambda_2^2} h_{22;2}^2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} h_{12;2}^2 \right).
\end{align*}
According to \cite{cg:curvature}, the terms
\begin{align*}
F^{ij,\,kl}\eta_{ij}\eta_{kl}=\sum^2_{i,j=1}\fracp{^2F}{{\lambda_i}\partial{\lambda_j}}\eta_{ii}\eta_{jj}+\sum^2_{\substack{i,j=1 \\ i\neq j}}\frac{\fracp F{\lambda_i}-\fracp F{\lambda_j}}{{\lambda_i}-{\lambda_j}}(\eta_{ij})^2
\end{align*}
are well-defined for symmetric matrices $(\eta_{ij})$ and $\lambda_1\neq\lambda_2$ or $\lambda_1=\lambda_2$, when we interpret the last term as a limit. \\
We get the constant terms
\begin{align*}
C_H\left(\lambda_1,\lambda_2\right) =&\, F \left(\lambda_1^2 + \lambda_2^2\right) + \left( \frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2} \right) \left(\lambda_1 - \lambda_2\right) \lambda_1 \lambda_2,
\end{align*}
and at a critical point of $w\left(H,K\right)$ \textbf{(CP)}, using identities \eqref{h111} and \eqref{h222}, we get the gradient terms
\begin{align*}
\begin{split}
G_H\left(\lambda_1,\lambda_2\right) =\, \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1 + \frac{\partial^2F}{\partial \lambda_2^2} a_1^2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1-\lambda_2} a_1^2.
\end{split}
\end{align*}
\end{proof}
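The constant terms \eqref{H constant ident} can be cross-checked against the zero-order part of \eqref{H evol}; an illustrative \texttt{sympy} sketch for the sample normal velocity $F=K^\sigma$:
\begin{verbatim}
# Check C_H = F*(l1^2+l2^2) + (F_1-F_2)*(l1-l2)*l1*l2 against the zero-order part
# of L(H), i.e. (F_1*l1^2 + F_2*l2^2)*H + (F - F_1*l1 - F_2*l2)*|A|^2, for F = K^s.
import sympy as sp

l1, l2, s = sp.symbols('lambda1 lambda2 sigma', positive=True)
F = (l1*l2)**s
F1, F2 = sp.diff(F, l1), sp.diff(F, l2)

zero_order = (F1*l1**2 + F2*l2**2)*(l1 + l2) + (F - F1*l1 - F2*l2)*(l1**2 + l2**2)
C_H = F*(l1**2 + l2**2) + (F1 - F2)*(l1 - l2)*l1*l2
assert sp.simplify(zero_order - C_H) == 0
print("C_H verified for F = K^sigma")
\end{verbatim}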
\begin{lemma}
The evolution equation of the Gauss curvature $K$ \eqref{K evol} fulfills this identity at a critical point of $w\left(H,K\right)$,
where we also choose normal coordinates,
\begin{equation}
\label{K evol ident}
L\left( K \right) = C_K\left(\lambda_1,\lambda_2\right) + G_K\left(\lambda_1,\lambda_2\right) h_{11;1}^2 + G_K\left(\lambda_2,\lambda_1\right) h_{22;2}^2,
\end{equation}
\qquad where
\begin{equation}\label{K constant ident}
C_K\left(\lambda_1,\lambda_2\right) =\, \left( F\left(\lambda_1+\lambda_2\right) + \left( \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right) \left( \lambda_1 - \lambda_2 \right) \right) \lambda_1 \lambda_2,
\end{equation}
\begin{equation}\label{K gradient ident}
\begin{split}
G_K\left(\lambda_1,\lambda_2\right) =&\, 2\left(-\frac{\partial F}{\partial \lambda_1} a_1 + \frac{\partial F}{\partial \lambda_2} a_1^2 \right) \\
& + \left( \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1 + \frac{\partial^2F}{\partial \lambda_2^2} a_1^2 \right) \lambda_2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} a_1^2 \lambda_1.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
\begin{align*}
&\, L\left( K \right) {\displaybreak[1]}\\
=&\, K \bigg( F^{kl} h^a_k h_{al} \cdot \tilde{h}^{ij} h_{ij} - \left(F^{kl} h_{kl} + F\right) \tilde{h}^{ij} h^a_i h_{aj} + 2FH \bigg. \\
&\, + \bigg. \left( \tilde{h}^{ij} F^{kl,rs} + F^{ij} \left(\tilde{h}^{kr} \tilde{h}^{ls} - \tilde{h}^{kl} \tilde{h}^{rs} \right) \right) h_{kl;i} h_{rs;j} \bigg)
\qquad \text{see}\,\eqref{K evol}{\displaybreak[1]}\\
=&\, \lambda_1\lambda_2 \Bigg( 2\left( \frac{\partial F}{\partial \lambda_1} \lambda_1^2 + \frac{\partial F}{\partial \lambda_2} \lambda_2^2 \right) +
\left(F - \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2\right) \left(\lambda_1+\lambda_2\right) \Bigg. \\
&\, \qquad + \frac{1}{\lambda_1} \left(\sum^2_{i,j=1} \frac{\partial^2F}{\partial \lambda_i \partial \lambda_j} h_{ii;1} h_{jj;1}
+ \sum^2_{\substack{i,j=1 \\ i\neq j}} \frac{ \frac{\partial F}{\partial \lambda_i}
- \frac{\partial F}{\partial \lambda_j} }{\lambda_i - \lambda_j} h_{ij;1}^2 \right) \\
&\, \qquad + \frac{1}{\lambda_2} \left(\sum^2_{i,j=1} \frac{\partial^2F}{\partial \lambda_i \partial \lambda_j} h_{ii;2} h_{jj;2}
+ \sum^2_{\substack{i,j=1 \\ i\neq j}} \frac{ \frac{\partial F}{\partial \lambda_i}
- \frac{\partial F}{\partial \lambda_j} }{\lambda_i - \lambda_j} h_{ij;2}^2 \right) \\
&\, \qquad + \frac{\partial F}{\partial \lambda_1} \left( 2 \frac{1}{\lambda_1 \lambda_2} h_{11;2}^2 \right)
- \frac{\partial F}{\partial \lambda_1} \left( 2 \frac{1}{\lambda_1 \lambda_2} h_{11;1} h_{22;1} \right) \\
&\, \qquad + \Bigg. \frac{\partial F}{\partial \lambda_2} \left( 2 \frac{1}{\lambda_1 \lambda_2} h_{22;1}^2 \right)
- \frac{\partial F}{\partial \lambda_2} \left( 2 \frac{1}{\lambda_1 \lambda_2} h_{11;2} h_{22;2} \right) \Bigg) \qquad \textbf{(\text{NC})} {\displaybreak[1]}\\
=&\, \lambda_1\lambda_2 \Bigg( F \left(\lambda_1+\lambda_2\right) + \left( \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right) \left(\lambda_1 - \lambda_2\right) \Bigg. \\
&\, \qquad + \frac{1}{\lambda_1} \left( \frac{\partial^2F}{\partial \lambda_1^2} h_{11;1}^2 + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} h_{11;1} h_{22;1}
+ \frac{\partial^2F}{\partial \lambda_2^2} h_{22;1}^2 + 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} h_{12;1}^2 \right) \\
&\, \qquad + \frac{1}{\lambda_2} \left( \frac{\partial^2F}{\partial \lambda_1^2} h_{11;2}^2 + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} h_{11;2} h_{22;2}
+ \frac{\partial^2F}{\partial \lambda_2^2} h_{22;2}^2 + 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} h_{12;2}^2 \right) \\
&\, \qquad + \Bigg. 2 \frac{1}{\lambda_1 \lambda_2} \left( \frac{\partial F}{\partial \lambda_1} \left( h_{11;2}^2 - h_{11;1} h_{22;1} \right)
+ \frac{\partial F}{\partial \lambda_2} \left( h_{22;1}^2 - h_{11;2} h_{22;2} \right) \right) \Bigg).
\end{align*}
According to \cite{cg:curvature}, the terms
\begin{align*}
F^{ij,\,kl}\eta_{ij}\eta_{kl}=\sum^2_{i,j=1}\fracp{^2F}{{\lambda_i}\partial{\lambda_j}}\eta_{ii}\eta_{jj}+\sum^2_{\substack{i,j=1 \\ i\neq j}}\frac{\fracp F{\lambda_i}-\fracp F{\lambda_j}}{{\lambda_i}-{\lambda_j}}(\eta_{ij})^2
\end{align*}
are well-defined for symmetric matrices $(\eta_{ij})$ and $\lambda_1\neq\lambda_2$ or $\lambda_1=\lambda_2$, when we interpret the last term as a limit. \\
We get the constant terms
\begin{align*}
C_K\left(\lambda_1,\lambda_2\right) =\, \left( F\left(\lambda_1+\lambda_2\right) + \left( \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right) \left( \lambda_1 - \lambda_2 \right) \right) \lambda_1 \lambda_2,
\end{align*}
and at a critical point of $w\left(H,K\right)$ \textbf{(CP)}, using identities \eqref{h111} and \eqref{h222}, we get the gradient terms
\begin{align*}
G_K\left(\lambda_1,\lambda_2\right) =&\, 2\left(-\frac{\partial F}{\partial \lambda_1} a_1 + \frac{\partial F}{\partial \lambda_2} a_1^2 \right) \\
& + \left( \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1 + \frac{\partial^2F}{\partial \lambda_2^2} a_1^2 \right) \lambda_2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} a_1^2 \lambda_1.
\end{align*}
\end{proof}
\begin{lemma}\label{lem evolution equation w}
The evolution equation of the function $w\left(H,K\right)$ \eqref{w evol} fulfills this identity at a critical point of $w\left(H,K\right)$,
where we also choose normal coordinates,
\begin{equation}\label{w evol ident}
L\big(w\left(H,K\right)\big) = C_w\left(\lambda_1,\lambda_2\right) + G_w\left(\lambda_1,\lambda_2\right) h_{11;1}^2 + G_w\left(\lambda_2,\lambda_1\right) h_{22;2}^2,
\end{equation}
\qquad where
\begin{equation}\label{w constant ident}
\begin{split}
C_w\left(\lambda_1,\lambda_2\right) =&\, C_H\left(\lambda_1,\lambda_2\right) w_H + C_K\left(\lambda_1,\lambda_2\right) w_K \\
=&\, \left(F \left(\lambda_1^2 + \lambda_2^2\right) + \left( \frac{\partial F}{\partial \lambda_1}
- \frac{\partial F}{\partial \lambda_2} \right) \left(\lambda_1 - \lambda_2\right) \lambda_1 \lambda_2 \right) w_H \\
&\, + \left( F\left(\lambda_1+\lambda_2\right) + \left( \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right)
\left( \lambda_1 - \lambda_2 \right) \right) \lambda_1 \lambda_2 w_K,
\end{split}
\end{equation}
\begin{equation}\label{w gradient ident}
\begin{split}
&\,G_w\left(\lambda_1,\lambda_2\right) \\
=&\, G_H\left(\lambda_1,\lambda_2\right) w_H + G_K\left(\lambda_1,\lambda_2\right) w_K \\
&\, -\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2 w_{HH} + 2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right) w_{HK}
+ \left(\lambda_2+\lambda_1 a_1\right)^2 w_{KK} \right) \\
=&\, \Bigg( \left( \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1 +
\frac{\partial^2F}{\partial \lambda_2^2} a_1^2 \right)
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} a_1^2 \Bigg) w_H \\
&\, + \Bigg( 2\left(-\frac{\partial F}{\partial \lambda_1} a_1 + \frac{\partial F}{\partial \lambda_2} a_1^2 \right) \Bigg. \\
&\, \qquad + \Bigg. \left( \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1
+ \frac{\partial^2F}{\partial \lambda_2^2} a_1^2 \right) \lambda_2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} a_1^2 \lambda_1 \Bigg) w_K \\
&\, -\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2 w_{HH} + 2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right) w_{HK}
+ \left(\lambda_2+\lambda_1 a_1\right)^2 w_{KK} \right).
\end{split}
\end{equation}
Here, the $w$-terms are defined as
\begin{align*}
&\, w_H := \frac{\partial w}{\partial H},\;w_K := \frac{\partial w}{\partial K}, \\
&\, w_{HH} := \frac{\partial^2 w}{\partial H^2},\;w_{HK} := \frac{\partial^2 w}{\partial H \partial K},\;w_{KK} := \frac{\partial^2 w}{\partial K^2}.
\end{align*}
\end{lemma}
\begin{proof}
\begin{align*}
&\, L\left(w\left(H,K\right)\right) \\
=&\, L\left(H\right) w_H + L\left(K\right) w_K \\
&\,- F^{ij} \left(H_{;i} H_{;j} w_{HH} + \left(H_{;i} K_{;j} + K_{;i} H_{;j}\right) w_{HK} + K_{;i} K_{;j} w_{KK} \right) \qquad \textbf{(\text{see}\,\eqref{w evol})} {\displaybreak[1]}\\
=&\, L\left(H\right) w_H + L\left(K\right) w_K \\
&\, -\frac{\partial F}{\partial \lambda_1}\left(H_{;1}^2 w_{HH} + 2 H_{;1}K_{;1} w_{HK} + K_{;1}^2 w_{KK} \right) \\
&\, -\frac{\partial F}{\partial \lambda_2}\left(H_{;2}^2 w_{HH} + 2 H_{;2}K_{;2} w_{HK} + K_{;2}^2 w_{KK} \right) \qquad \text{\textbf{(NC)}} {\displaybreak[1]}\\
=&\, \left(C_H\left(\lambda_1,\lambda_2\right) + G_H\left(\lambda_1,\lambda_2\right) h_{11;1}^2 + G_H\left(\lambda_2,\lambda_1\right) h_{22;2}^2 \right) w_H \\
&\, + \left(C_K\left(\lambda_1,\lambda_2\right) + G_K\left(\lambda_1,\lambda_2\right) h_{11;1}^2 + G_K\left(\lambda_2,\lambda_1\right) h_{22;2}^2 \right) w_K \\
&\, -\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2 w_{HH} + 2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right) w_{HK}
+ \left(\lambda_2+\lambda_1 a_1\right)^2 w_{KK} \right) h_{11;1}^2 \\
&\, -\frac{\partial F}{\partial \lambda_2} \left(\left(1+a_2\right)^2 w_{HH} + 2\left(1+a_2\right)\left(\lambda_1+\lambda_2 a_2\right) w_{HK}
+ \left(\lambda_1+\lambda_2 a_2\right)^2 w_{KK} \right) h_{22;2}^2 \\
&\, \qquad \text{\textbf{(see \eqref{H diff ident 1}, \eqref{H diff ident 2}, \eqref{K diff ident 1}, \eqref{K diff ident 2}, \eqref{H evol ident}, \eqref{K evol ident})}}.
\end{align*}
We get the constant terms
\begin{align*}
C_w\left(\lambda_1,\lambda_2\right) =&\, C_H\left(\lambda_1,\lambda_2\right) w_H + C_K\left(\lambda_1,\lambda_2\right) w_K {\displaybreak[1]}\\
=&\, \left(F \left(\lambda_1^2 + \lambda_2^2\right) + \left( \frac{\partial F}{\partial \lambda_1}
- \frac{\partial F}{\partial \lambda_2} \right) \left(\lambda_1 - \lambda_2\right) \lambda_1 \lambda_2 \right) w_H {\displaybreak[1]}\\
&\, + \left( F\left(\lambda_1+\lambda_2\right) + \left( \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right)
\left( \lambda_1 - \lambda_2 \right) \right) \lambda_1 \lambda_2 w_K \\
&\, \qquad \text{\textbf{(see \eqref{H constant ident}, \eqref{K constant ident})}},
\end{align*}
and at a critical point of $w\left(H,K\right)$ \textbf{(CP)}, we get the gradient terms
\begin{align*}
\begin{split}
&\,G_w\left(\lambda_1,\lambda_2\right) \\
=&\, G_H\left(\lambda_1,\lambda_2\right) w_H + G_K\left(\lambda_1,\lambda_2\right) w_K \\
&\, -\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2 w_{HH} + 2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right) w_{HK}
+ \left(\lambda_2+\lambda_1 a_1\right)^2 w_{KK} \right) \\
=&\, \Bigg( \left( \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1 +
\frac{\partial^2F}{\partial \lambda_2^2} a_1^2 \right)
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} a_1^2 \Bigg) w_H \\
&\, + \Bigg( 2\left(-\frac{\partial F}{\partial \lambda_1} a_1 + \frac{\partial F}{\partial \lambda_2} a_1^2 \right) \Bigg. \\
&\, \qquad + \Bigg. \left( \frac{\partial^2F}{\partial \lambda_1^2} + 2 \frac{\partial^2F}{\partial \lambda_1 \partial \lambda_2} a_1
+ \frac{\partial^2F}{\partial \lambda_2^2} a_1^2 \right) \lambda_2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} a_1^2 \lambda_1 \Bigg) w_K \\
&\, -\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2 w_{HH} + 2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right) w_{HK}
+ \left(\lambda_2+\lambda_1 a_1\right)^2 w_{KK} \right) \\
&\, \qquad \text{\textbf{(see \eqref{H gradient ident}, \eqref{K gradient ident})}}.
\end{split}
\end{align*}
\end{proof}
\subsection{Constant terms and gradient terms at a critical point, where we also choose normal coordinates. The normal velocity is equal to powers of the Gauss curvature.}
In the third and last part of Section \ref{sec evolution equations}, we calculate the constant terms $C_w\left(\lambda_1,\lambda_2\right)$ and the gradient terms
$G_w\left(\lambda_1,\lambda_2\right)$ from Definition \ref{def mpf} of the \textit{maximum-principle functions}.
As before, we calculate these terms at a critical point of the general function $w\left(H,K\right)$,
where we also choose normal coordinates. Furthermore, we set the normal velocity to powers of the Gauss curvature, $F=K^{\sigma}$.
So far, the constant terms and the gradient terms are rational functions.
Now we divide each of them by some nonnegative factor in order to turn them into polynomials in two variables.
For the constant terms we get a symmetric polynomial and for the gradient terms an asymmetric polynomial.
Finally, we dehomogenize these polynomials: for the symmetric constant terms it suffices to set $\lambda_1=\rho,\,\lambda_2=1$, while for the asymmetric gradient terms we set $\lambda_1=\rho,\,\lambda_2=1$ and $\lambda_1=1,\,\lambda_2=\rho$, respectively.
We obtain for the constant terms a polynomial in one variable $C\left(\rho\right)$ and
for the gradient terms two polynomials in one variable $G_1\left(\rho\right)$ and $G_2\left(\rho\right)$.
\begin{lemma} Calculating the evolution equation of the quotient of two functions, $w = \frac{p}{q}$, we obtain the following identity
\begin{align}\label{pq evol}
q^2\;L\left(\frac{p}{q}\right)=q\;L(p)-p\;L(q)
\end{align}
at a critical point of $w$, i.e. $w_{;i} = 0$ for $i = 1,\,2$.
\end{lemma}
\begin{proof}
\begin{align*}
L\left(\frac{p}{q}\right)=&\,\frac{d}{dt}\left(\frac{p}{q}\right)-F^{ij}\left(\frac{p}{q}\right)_{;ij}\\
=&\,\frac{\dot{p}q-p\dot{q}}{q^2}-F^{ij}\frac{1}{q^4}\bigg(\left(p_i q-p q_i\right)_{;j} q^2 - \underbrace{\left(p_i q-p q_i\right)}_{=0\;\textbf{(CP)}}\left(q^2\right)_{;j}\bigg)\\
=&\,
\frac{1}{q^2}\Bigg(\dot{p} q -p \dot{q} - F^{ij}\bigg(p_{;ij}\,q + \underbrace{p_i q_j - p_j q_i}_{=0} - p\,q_{;ij}\bigg)\Bigg)\\
=&\,\frac{1}{q^2}\bigg(q\,L(p) - p\,L(q)\bigg).
\end{align*}
\end{proof}
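\begin{remark}
As a quick illustration of \eqref{pq evol}, which is not needed in the sequel: for the quotient $w = \frac{H^2}{K}$, \textit{i.e.}, $p = H^2$ and $q = K$, the identity reads
\begin{align*}
K^2\, L\!\left(\frac{H^2}{K}\right) = K\, L\!\left(H^2\right) - H^2\, L\!\left(K\right)
\end{align*}
at a critical point of $w$.
\end{remark}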
\begin{lemma}\label{lem constant terms}
We calculate the following constant terms at a critical point, where we also choose normal coordinates. The normal velocity is equal to powers of the Gauss curvature,
$F = K^\sigma$. \\
First we calculate the constant terms for the mean curvature $H$
\begin{align}
C_H\left(\lambda_1,\lambda_2\right) =&\, K^\sigma\left(\lambda_1^2 + \lambda_2^2 - \sigma\left(\lambda_1 - \lambda_2\right)^2\right),
\end{align}
and the constant terms for the Gauss curvature $K$
\begin{align}
C_K\left(\lambda_1,\lambda_2\right) =&\, K^{\sigma+1}\left(\lambda_1 + \lambda_2\right).
\end{align}
Then we calculate the constant terms for a rational function $w=\frac{p}{q}$
\begin{align}
\begin{split}
q^2 C_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right) =&\,C_H\left(\lambda_1,\lambda_2\right) r_H + C_K\left(\lambda_1,\lambda_2\right) r_K \\
=&\,K^\sigma\left(\left(\left(\lambda_1^2+\lambda_2^2\right)-\sigma\left(\lambda_1-\lambda_2\right)^2\right)r_H + K\left(\lambda_1+\lambda_2\right)r_K\right).
\end{split}
\end{align}
Now we divide the previous constant terms by a nonnegative factor and get a polynomial in two variables
\begin{align}
C_r\left(\lambda_1,\lambda_2\right) :=&\, \frac{q^2}{K^\sigma} C_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right).
\end{align}
We dehomogenize the previous polynomial setting $\lambda_1=\rho,\,\lambda_2=1$ and get a polynomial in one variable
\begin{align}
C\left(\rho\right) :=&\,C_r\left(\rho,1\right) = \left(\left(1-\sigma\right)\rho^2+2\sigma\rho+\left(1-\sigma\right)\right)r_H + \rho\left(\rho+1\right)r_K.
\end{align}
Here, the $r$-terms are defined as
\begin{align*}
&\, r_H:=q\;\frac{\partial p}{\partial H}-p\;\frac{\partial q}{\partial H}, {\displaybreak[1]}\\
&\, r_K:=q\;\frac{\partial p}{\partial K}-p\;\frac{\partial q}{\partial K}.
\end{align*}
\end{lemma}
\begin{proof}
We calculate the constant terms $C_H\left(\lambda_1,\lambda_2\right)$ for $F=K^\sigma$.
\begin{align*}
&\, C_H\left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, F\left(\lambda_1^2+\lambda_2^2\right) + \left(\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}\right) \left(\lambda_1 - \lambda_2\right) \lambda_1 \lambda_2
\qquad \textbf{(\text{see}\,\eqref{H constant ident})} {\displaybreak[1]}\\
=&\, K^\sigma \left(\lambda_1^2 + \lambda_2^2\right) + \left(\sigma K^{\sigma-1}\lambda_2-\sigma K^{\sigma-1}\lambda_1\right)\left(\lambda_1-\lambda_2\right)\lambda_1 \lambda_2 {\displaybreak[1]}\\
=&\, K^\sigma \left(\lambda_1^2 + \lambda_2^2 - \sigma \left( \lambda_1 - \lambda_2 \right)^2 \right).
\end{align*}
We calculate the constant terms $C_K\left(\lambda_1,\lambda_2\right)$ for $F=K^\sigma$.
\begin{align*}
&\, C_K\left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, \left( F\left(\lambda_1+\lambda_2\right) + \left( \frac{\partial F}{\partial \lambda_1} \lambda_1 - \frac{\partial F}{\partial \lambda_2} \lambda_2 \right) \left(\lambda_1 - \lambda_2\right)\right)\lambda_1 \lambda_2
\qquad \textbf{(\text{see}\,\eqref{K constant ident})} {\displaybreak[1]}\\
=&\, \left( K^\sigma \left(\lambda_1 + \lambda_2\right) + \left(\sigma K^{\sigma-1} \lambda_1 \lambda_2 - \sigma K^{\sigma-1} \lambda_1 \lambda_2\right)\left(\lambda_1-\lambda_2\right)\right) \lambda_1 \lambda_2 {\displaybreak[1]}\\
=&\, K^{\sigma+1} \left(\lambda_1 + \lambda_2\right).
\end{align*}
Using the identity $q^2 L\left(\frac{p}{q}\right) = q\,L\left(p\right) - p\,L \left(q\right)$ \eqref{pq evol} we calculate the constant terms $C_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right)$.
\begin{align*}
&\, q^2 C_{ \frac{p}{q} } \left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, C_H\left(\lambda_1,\lambda_2\right) r_H + C_K\left(\lambda_1,\lambda_2\right) r_K
\qquad \textbf{(\text{see}\,\eqref{pq evol},\;\eqref{w constant ident})} {\displaybreak[1]}\\
=&\, K^\sigma \left(\left(\lambda_1^2 + \lambda_2^2 - \sigma \left( \lambda_1 - \lambda_2 \right)^2 \right) r_H + K \left(\lambda_1 + \lambda_2\right) r_K \right).
\end{align*}
Dividing by the nonnegative factor $\frac{K^\sigma}{q^2}$ we get the polynomial in two variables $C_r\left(\lambda_1,\lambda_2\right)$.
\begin{align*}
C_r\left(\lambda_1,\lambda_2\right)=&\, \frac{q^2}{K^{\sigma}} C_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, \left(\lambda_1^2 + \lambda_2^2 - \sigma \left( \lambda_1 - \lambda_2 \right)^2 \right) r_H + K \left(\lambda_1 + \lambda_2\right) r_K.
\end{align*}
Now we dehomogenize the previous polynomial setting $\lambda_1=\rho,\,\lambda_2=1$. We get the polynomial in one variable $C\left(\rho\right)$.
\begin{align*}
C\left(\rho\right) :=&\, C_r\left(\rho,1\right) {\displaybreak[1]}\\
=&\, \left(\rho^2 + 1 - \sigma\left(\rho-1\right)^2\right) r_H + \rho \left(\rho + 1\right) r_K {\displaybreak[1]}\\
=&\, \left( \left( 1-\sigma \right) \rho^2 + 2\sigma \rho + \left(1-\sigma\right) \right) r_H + \rho \left(\rho + 1\right) r_K.
\end{align*}
\end{proof}
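\begin{remark}
As a purely expository example of Lemma \ref{lem constant terms}, take $p = H^2$ and $q = K$, so that $r_H = 2HK$ and $r_K = -H^2$; after dehomogenizing ($H = \rho+1$, $K = \rho$) we obtain
\begin{align*}
C\left(\rho\right) =&\, \left(\left(1-\sigma\right)\rho^2+2\sigma\rho+\left(1-\sigma\right)\right)2\rho\left(\rho+1\right) - \rho\left(\rho+1\right)\left(\rho+1\right)^2 \\
=&\, \left(1-2\sigma\right)\rho\left(\rho+1\right)\left(\rho-1\right)^2,
\end{align*}
which is nonpositive for all $\rho\geq0$ whenever $\sigma\geq\frac{1}{2}$. We stress that this only checks the formula; $w = \frac{H^2}{K}$ has $g = h = 2$ and is not one of the \textit{maximum-principle functions}, which require $g-h>0$.
\end{remark}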
\begin{lemma}\label{lem gradient terms}
We calculate the following gradient terms at a critical point, where we also choose normal coordinates. The normal velocity is equal to powers of the Gauss curvature, $F=K^\sigma$. \\
First we calculate the gradient terms for the mean curvature $H$
\begin{align}\label{H gradient Ksigma}
\begin{split}
G_H\left(\lambda_1,\lambda_2\right) =&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \bigg(\left(-\left(\lambda_1+\lambda_2\right)^2 + \sigma \left(\lambda_1 - \lambda_2\right)^2 \right) w_H^2 \bigg. \\
&\, \qquad \bigg. -2\left(\lambda_1 + 3\lambda_2\right) K w_H w_K - 2 \left(\lambda_1 + \lambda_2\right) \lambda_2 K w_K^2 \bigg),
\end{split}
\end{align}
the gradient terms for the Gauss curvature $K$
\begin{align}\label{K gradient Ksigma}
G_K\left(\lambda_1,\lambda_2\right) =&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \left(\sigma-1\right)\left(\lambda_1-\lambda_2\right)^2\lambda_2 w_H^2,
\end{align}
and the mixed terms (compare \eqref{w gradient ident})
\begin{align}\label{mixed terms Ksigma}
\begin{split}
&\, -\frac{\partial F}{\partial \lambda_1} \left( \left(1+a_1\right)^2 w_{HH} + 2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right) w_{HK} + \left(\lambda_2 + \lambda_1 a_1\right)^2 w_{KK} \right) \\
=&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \lambda_1 \left(\lambda_1-\lambda_2\right)^2\lambda_2^2 \left(-w_K^2 w_{HH} + 2 w_H w_K w_{HK} -w_H^2 w_{KK} \right).
\end{split}
\end{align}
Then we calculate the gradient terms for a rational function $w=\frac{p}{q}$
\begin{align}
\begin{split}
&\, q^2 G_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right) \\
=&\,G_H\left(\lambda_1,\lambda_2\right) r_H + G_K\left(\lambda_1,\lambda_2\right) r_K \\
&\,-\frac{\partial F}{\partial \lambda_1} \left( \left(1+a_1\right)^2 w_{HH} + 2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right) w_{HK} + \left(\lambda_2 + \lambda_1 a_1\right)^2 w_{KK} \right) \\
=&\, \frac{\sigma K^{\sigma-2}}{\left(r_H + \lambda_1 r_K\right)^2}
\bigg(\left( \left(\sigma-1\right)\lambda_1^2-2\left(\sigma+1\right)\lambda_1 \lambda_2 + \left(\sigma-1\right)\lambda_2^2 \right) r_H^3 \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. +\left( \left(\sigma-3\right) \lambda_1^2 - 2\left(\sigma+2\right) \lambda_1 \lambda_2 + \left(\sigma-1\right) \lambda_2^2 \right) \lambda_2 r_H^2 r_K \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. -2 \lambda_1 \left(\lambda_1+\lambda_2\right) \lambda_2^2 r_H r_K^2 \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. -\lambda_1 \left(\lambda_1-\lambda_2\right)^2 \lambda_2^2 r_K^2 r_{HH} \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. +2\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2 r_H r_K r_{HK} \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. -\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2r_H^2 r_{KK} \bigg).
\end{split}
\end{align}
Now we divide the previous gradient terms by a nonnegative factor and get a polynomial in two variables
\begin{align}
G_r\left(\lambda_1,\lambda_2\right) :=&\, \frac{q^2 \left(r_H + \lambda_1 r_K\right)^2}{\sigma K^{\sigma-2}} G_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right).
\end{align}
We dehomogenize the previous polynomial setting $\lambda_1=\rho,\,\lambda_2=1$ and $\lambda_1=1,\,\lambda_2=\rho$, respectively. We get two polynomials in one variable
\begin{align}
\begin{split}
G_1\left(\rho\right) :=&\,G_r\left(\rho,1\right) \\
=&\, \left( \left(\sigma-1\right)\rho^2-2\left(\sigma+1\right)\rho + \left(\sigma-1\right) \right) r_H^3 \\
&\, + \left( \left(\sigma-3\right) \rho^2 - 2\left(\sigma+2\right) \rho + \left(\sigma-1\right) \right) r_H^2 r_K \\
&\, -2 \rho \left(\rho+1\right) r_H r_K^2 \\
&\, -\rho \left(\rho-1\right)^2 r_K^2 r_{HH} \\
&\, +2\rho\left(\rho-1\right)^2 r_H r_K r_{HK} \\
&\, -\rho\left(\rho-1\right)^2 r_H^2 r_{KK},
\end{split}
\end{align}
\begin{align}
\begin{split}
G_2\left(\rho\right) :=&\,G_r\left(1,\rho\right) \\
=&\, \left( \left(\sigma-1\right)\rho^2-2\left(\sigma+1\right) \rho + \left(\sigma-1\right) \right) r_H^3 \\
&\, + \rho \left( \left(\sigma-1\right)\rho^2 - 2\left(\sigma+2\right) \rho + \left(\sigma-3\right) \right) r_H^2 r_K \\
&\, -2 \rho^2 \left(\rho+1\right) r_H r_K^2 \\
&\, -\rho^2 \left(\rho-1\right)^2 r_K^2 r_{HH} \\
&\, +2\rho^2\left(\rho-1\right)^2 r_H r_K r_{HK} \\
&\, -\rho^2\left(\rho-1\right)^2 r_H^2 r_{KK}.
\end{split}
\end{align}
Here, the $r$-terms are defined as
\begin{align*}
&\, r_H:=q\;\frac{\partial p}{\partial H}-p\;\frac{\partial q}{\partial H},\;
r_K:=q\;\frac{\partial p}{\partial K}-p\;\frac{\partial q}{\partial K}, \\
&\, r_{HH}:=q\;\frac{\partial^2p}{\partial H^2}-p\;\frac{\partial^2q}{\partial H^2},\;
r_{HK}:=q\;\frac{\partial^2p}{\partial H \partial K}-p\;\frac{\partial^2q}{\partial H \partial K},\;
r_{KK}:=q\;\frac{\partial^2p}{\partial K^2}-p\;\frac{\partial^2q}{\partial K^2}.
\end{align*}
\end{lemma}
\begin{proof}
We calculate the gradient terms $G_H\left(\lambda_1,\lambda_2\right)$ for $F=K^\sigma$.
\begin{align*}
& \, G_H\left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, \frac{\partial^2 F}{\partial \lambda_1^2} + 2 \frac{\partial^2 F}{\partial \lambda_1 \partial \lambda_2} a_1
+ \frac{\partial^2 F}{\partial \lambda_2^2} a_1^2 + 2\,\frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} a_1^2
\qquad \textbf{(\text{see}\,\eqref{H gradient ident})} {\displaybreak[1]}\\
=&\, \sigma\left(\sigma-1\right) K^{\sigma-2} \lambda_2^2 + 2\sigma^2 K^{\sigma-1} a_1 + \sigma\left(\sigma-1\right) K^{\sigma-2} \lambda_1^2 a_1^2 \\
&\, + 2 \frac{\sigma K^{\sigma-1} \lambda_2 - \sigma K^{\sigma-1} \lambda_1}{\lambda_1 - \lambda_2} a_1^2 {\displaybreak[1]}\\
=&\, \sigma K^{\sigma-2} \left( \left( \sigma-1\right) \lambda_2^2 + 2\sigma K a_1 + \left(\sigma-1\right) \lambda_1^2 a_1^2 - 2 K a_1^2 \right) {\displaybreak[1]}\\
=&\, \sigma K^{\sigma-2} \Bigg( \Bigg. \left( \sigma-1 \right) \lambda_2^2 + 2\sigma K \left(-\frac{w_H + \lambda_2 w_K}{w_H + \lambda_1 w_K} \right) \\
&\, \qquad \qquad + \left(\left(\sigma-1\right) \lambda_1^2 - 2K\right)\left(-\frac{w_H + \lambda_2 w_K}{w_H + \lambda_1 w_K}\right)^2 \Bigg. \Bigg)
\qquad \textbf{(\text{see}\;\eqref{h111})} {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left( w_H + \lambda_1 w_K\right)^2} \left( \left(\sigma-1\right) \lambda_2^2 \left(w_H + \lambda_1 w_K\right)^2 \right. \\
&\, \qquad \qquad \qquad \qquad - \left. 2\sigma K \left(w_H + \lambda_2 w_K \right) \left(w_H + \lambda_1 w_K\right) \right. \\
&\, \qquad \qquad \qquad \qquad + \left. \left( \left(\sigma - 1\right) \lambda_1^2 - 2 K \right) \left(w_H + \lambda_2 w_K\right)^2 \right) {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left( w_H + \lambda_1 w_K\right)^2} \Big( \left(\sigma-1\right) \lambda_2^2 \left(w_H^2 + 2\lambda_1 w_H w_K + \lambda_1^2 w_K^2\right) \Big. \\
&\, \qquad \qquad \qquad \qquad - \Big. 2\sigma K \left(w_H^2 + \lambda_1 w_H w_K + \lambda_2 w_H w_K + K w_K^2\right) \Big. \\
&\, \qquad \qquad \qquad \qquad + \Big. \left( \left(\sigma - 1\right) \lambda_1^2 - 2 K \right) \left(w_H^2 + 2\lambda_2 w_H w_K + \lambda_2^2 w_K^2\right) \Big) {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \Big( \left(-\lambda_2^2 - \lambda_1^2 - 2K + \sigma\left(\lambda_2^2 - 2K + \lambda_1^2 \right) \right) w_H^2 \Big. \\
&\, + \Big. \left( -2\lambda_1\lambda_2^2 - 2\lambda_1^2\lambda_2 - 4K\lambda_2 + \sigma\left(2\lambda_1\lambda_2^2 - 2K\left(\lambda_1+\lambda_2\right) + 2\lambda_1^2\lambda_2\right)\right) w_H w_K \Big. \\
&\, + \Big. \left( -K^2 - K^2 - 2K\lambda_2^2 + \sigma\left(K^2 - 2K^2 + K^2\right)\right) w_K^2 \Big) {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \Big( \left(-\left(\lambda_1+\lambda_2\right)^2 + \sigma\left(\lambda_1-\lambda_2\right)^2\right) w_H^2 \Big. \\
&\, \qquad \qquad \qquad \qquad - \Big. 2 \left(\lambda_1 + 3\lambda_2 \right) K w_H w_K - 2 \lambda_2 \left(\lambda_1 + \lambda_2\right) K w_K^2 \Big).
\end{align*}
We calculate the gradient terms $G_K\left(\lambda_1,\lambda_2\right)$ for $F=K^\sigma$.
\begin{align*}
&\, G_K\left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, 2 \left(-\frac{\partial F}{\partial \lambda_1} a_1 + \frac{\partial F}{\partial \lambda_2} a_1^2 \right)
+ \left( \frac{\partial^2 F}{\partial \lambda_1^2} + 2 \frac{\partial^2 F}{\partial \lambda_1 \partial \lambda_2} a_1 + \frac{\partial^2 F}{\partial \lambda_2^2} a_1^2 \right) \lambda_2
+ 2 \frac{\frac{\partial F}{\partial \lambda_1} - \frac{\partial F}{\partial \lambda_2}}{\lambda_1 - \lambda_2} \lambda_1 a_1^2 \\
&\, \qquad \textbf{(\text{see}\,\eqref{K gradient ident})} {\displaybreak[1]}\\
=&\, 2 \left(-\sigma K^{\sigma-1} \lambda_2 a_1 + \sigma K^{\sigma-1} \lambda_1 a_1^2 \right) \\
&\, + \left( \sigma \left(\sigma-1\right) K^{\sigma-2} \lambda_2^2 + 2\sigma^2 K^{\sigma-1} a_1 + \sigma\left(\sigma-1\right) K^{\sigma-2} \lambda_1^2 a_1^2\right) \lambda_2 \\
&\, + 2 \frac{\sigma K^{\sigma-1}\lambda_2 - \sigma K^{\sigma-1}\lambda_1}{\lambda_1-\lambda_2} \lambda_1 a_1^2 {\displaybreak[1]}\\
=&\, \sigma K^{\sigma-2} \Big( -2 K \lambda_2 a_1 + 2 K \lambda_1 a_1^2 + \left( \left(\sigma-1\right) \lambda_2^2 + 2\sigma K a_1 + \left(\sigma-1\right) \lambda_1^2 a_1^2\right) \lambda_2 \\
&\, \qquad \qquad - 2 K \lambda_1 a_1^2 \Big) {\displaybreak[1]}\\
=&\, \sigma K^{\sigma-2} \Big( \left(\sigma-1\right) \lambda_2^3 + \left(-2 K \lambda_2 + 2\sigma K \lambda_2 \right) a_1 + \left(\sigma -1\right) \lambda_1^2 \lambda_2 a_1^2 \Big) {\displaybreak[1]}\\
=&\, \sigma\left(\sigma-1\right) K^{\sigma-2} \Big(\lambda_2^3 + 2K \lambda_2 a_1 + \lambda_1^2 \lambda_2 a_1^2 \Big) {\displaybreak[1]}\\
=&\, \sigma \left(\sigma-1\right) K^{\sigma-2} \lambda_2 \Big(\lambda_2^2 + 2K a_1 + \lambda_1^2 a_1^2 \Big) {\displaybreak[1]}\\
=&\, \sigma \left(\sigma-1\right) K^{\sigma-2} \lambda_2 \left(\lambda_2 + \lambda_1 a_1 \right)^2 {\displaybreak[1]}\\
=&\, \sigma \left(\sigma-1\right) K^{\sigma-2} \lambda_2 \left(\lambda_2 - \lambda_1 \frac{w_H + \lambda_2 w_K}{w_H + \lambda_1 w_K} \right)^2
\qquad \textbf{(\text{see}\;\eqref{h111})} {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \left(\sigma-1\right) \lambda_2 \Big( \lambda_2 \left(w_H + \lambda_1 w_K \right) - \lambda_1 \left(w_H + \lambda_2 w_K\right)\Big)^2 {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \left(\sigma-1\right) \lambda_2 \Big( \lambda_2 w_H + K w_K - \lambda_1 w_H - K w_K \Big)^2 {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \left(\sigma-1\right) \left(\lambda_1-\lambda_2\right)^2 \lambda_2 w_H^2.
\end{align*}
We calculate the mixed terms \textbf{(compare \eqref{w gradient ident})} \\
$$-\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2w_{HH}+2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right)w_{HK} + \left(\lambda_2+\lambda_1 a_1\right)^2w_{KK}\right)$$
for $F=K^\sigma$.
\begin{align*}
&\, -\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2w_{HH}+2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right)w_{HK} + \left(\lambda_2+\lambda_1 a_1\right)^2w_{KK}\right) {\displaybreak[1]}\\
=&\,-\sigma K^{\sigma-1}\lambda_2 \left(\left(1-\frac{w_H+\lambda_2 w_K}{w_H +\lambda_1 w_K}\right)^2w_{HH} \right. \\
&\,+ \left. 2\left(1-\frac{w_H+\lambda_2 w_K}{w_H +\lambda_1 w_K}\right)\left(\lambda_2-\lambda_1\frac{w_H+\lambda_2 w_K}{w_H +\lambda_1 w_K}\right)w_{HK} \right. \\
&\,+ \left. \left(\lambda_2-\lambda_1 \frac{w_H+\lambda_2 w_K}{w_H +\lambda_1 w_K}\right)^2w_{KK}\right) {\displaybreak[1]}\\
=&\, -\frac{\sigma K^{\sigma-1} \lambda_2}{\left(w_H + \lambda_1 w_K\right)^2} \Big( \left(w_H + \lambda_1 w_K - w_H - \lambda_2 w_K\right)^2 w_{HH} \\
&\, \qquad + 2\left(w_H + \lambda_1 w_K - w_H - \lambda_2 w_K \right)\cdot \Big. \\
&\, \qquad \qquad \Big. \cdot \left(\lambda_2 w_H + \lambda_1 \lambda_2 w_K - \lambda_1 w_H - \lambda_1 \lambda_2 w_K \right) w_{HK} \Big. \\
&\, \qquad + \Big. \left(\lambda_2 w_H + \lambda_1 \lambda_2 w_K - \lambda_1 w_H - \lambda_1 \lambda_2 w_K \right)^2 w_{KK} \Big)
\qquad \textbf{(\text{see}\;\eqref{h111})} {\displaybreak[1]}\\
=&\, -\frac{\sigma K^{\sigma-1} \lambda_2}{\left(w_H + \lambda_1 w_K\right)^2} \Big( \left(\lambda_1-\lambda_2\right)^2 w_K^2 w_{HH} - 2\left(\lambda_1 - \lambda_2\right)^2 w_H w_K w_{HK} \Big. \\
&\, \qquad \qquad \qquad \qquad \quad \Big. + \left(\lambda_1-\lambda_2\right)^2 w_H^2 w_{KK} \Big) {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left(w_H + \lambda_1 w_K\right)^2} \lambda_1 \left(\lambda_1 - \lambda_2\right)^2 \lambda_2^2 \Big(-w_K^2 w_{HH} + 2 w_H w_K w_{HK} - w_H^2 w_{KK} \Big).
\end{align*}
Using $q^2 L\left(\frac{p}{q} \right) = q\;L\left(p\right) - p\;L\left(q\right)$ \eqref{pq evol} we calculate the gradient terms $G_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right)$.
\begin{align*}
&\, q^2\,G_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, G_H\left(\lambda_1,\lambda_2\right) r_H + G_K\left(\lambda_1,\lambda_2\right) r_K {\displaybreak[1]}\\
&\, -\frac{\partial F}{\partial \lambda_1} \left(\left(1+a_1\right)^2r_{HH}+2\left(1+a_1\right)\left(\lambda_2+\lambda_1 a_1\right)r_{HK} + \left(\lambda_2+\lambda_1 a_1\right)^2r_{KK}\right) \\
&\, \qquad \textbf{(\text{see}\;\eqref{pq evol},\;\eqref{w evol ident})} {\displaybreak[1]} \\
=&\, \frac{\sigma K^{\sigma-2}}{\left(r_H + \lambda_1 r_K\right)^2} \bigg(\left(-\left(\lambda_1+\lambda_2\right)^2 + \sigma \left(\lambda_1 - \lambda_2\right)^2 \right) r_H^3 \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. -2\left(\lambda_1 + 3\lambda_2\right) K r_H^2 r_K - 2 \lambda_1 \left(\lambda_1 + \lambda_2\right) \lambda_2^2 r_H r_K^2 \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. +\left(\sigma-1\right) \left(\lambda_1-\lambda_2\right)^2 \lambda_2 r_H^2 r_K \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. -\lambda_1 \left(\lambda_1-\lambda_2\right)^2 \lambda_2^2 r_K^2 r_{HH} \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. +2\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2 r_H r_K r_{HK} \bigg. \\
&\, \qquad \qquad \qquad \quad \bigg. -\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2r_H^2 r_{KK} \bigg)
\qquad \textbf{\text{(see \eqref{H gradient Ksigma}, \eqref{K gradient Ksigma}, \eqref{mixed terms Ksigma})}} {\displaybreak[1]}\\
=&\, \frac{\sigma K^{\sigma-2}}{\left(r_H + \lambda_1 r_K\right)^2}
\bigg(\left( \left(\sigma-1\right)\lambda_1^2-2\left(\sigma+1\right)\lambda_1 \lambda_2 + \left(\sigma-1\right)\lambda_2^2 \right) r_H^3 \bigg. {\displaybreak[1]}\\
&\, \qquad \qquad \qquad \quad \bigg. +\left( \left(\sigma-3\right) \lambda_1^2
- 2\left(\sigma+2\right) \lambda_1 \lambda_2 + \left(\sigma-1\right) \lambda_2^2 \right) \lambda_2 r_H^2 r_K \bigg. {\displaybreak[1]}\\
&\, \qquad \qquad \qquad \quad \bigg. -2 \lambda_1 \left(\lambda_1+\lambda_2\right) \lambda_2^2 r_H r_K^2 \bigg. {\displaybreak[1]}\\
&\, \qquad \qquad \qquad \quad \bigg. -\lambda_1 \left(\lambda_1-\lambda_2\right)^2 \lambda_2^2 r_K^2 r_{HH} \bigg. {\displaybreak[1]}\\
&\, \qquad \qquad \qquad \quad \bigg. +2\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2 r_H r_K r_{HK} \bigg. {\displaybreak[1]}\\
&\, \qquad \qquad \qquad \quad \bigg. -\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2r_H^2 r_{KK} \bigg).
\end{align*}
Dividing by the nonnegative factor $\frac{\sigma K^{\sigma-2}}{q^2\left(r_H + \lambda_1 r_K\right)^2}$ we get the polynomial in two variables $G_r\left(\lambda_1,\lambda_2\right)$.
\begin{align*}
G_r\left(\lambda_1,\lambda_2\right) :=&\, \frac{q^2\left(r_H + \lambda_1 r_K\right)^2}{\sigma K^{\sigma-2}} G_{\frac{p}{q}}\left(\lambda_1,\lambda_2\right) {\displaybreak[1]}\\
=&\, \left( \left(\sigma-1\right)\lambda_1^2-2\left(\sigma+1\right)\lambda_1 \lambda_2 + \left(\sigma-1\right)\lambda_2^2 \right) r_H^3 \\
&\, + \left( \left(\sigma-3\right) \lambda_1^2 - 2\left(\sigma+2\right) \lambda_1 \lambda_2 + \left(\sigma-1\right) \lambda_2^2 \right) \lambda_2 r_H^2 r_K \\
&\, -2 \lambda_1 \left(\lambda_1+\lambda_2\right) \lambda_2^2 r_H r_K^2 \\
&\, -\lambda_1 \left(\lambda_1-\lambda_2\right)^2 \lambda_2^2 r_K^2 r_{HH} \\
&\, +2\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2 r_H r_K r_{HK} \\
&\, -\lambda_1\left(\lambda_1-\lambda_2\right)^2\lambda_2^2r_H^2 r_{KK}.
\end{align*}
Now we dehomogenize the previous polynomial setting $\lambda_1=\rho,\,\lambda_2=1$ and $\lambda_1=1,\,\lambda_2=\rho$, respectively. We get the two polynomials in one variable $G_1\left(\rho\right)$ and $G_2\left(\rho\right)$.
\begin{align*}
G_1\left(\rho\right) :=&\, G_r\left(\rho,1\right) \\
=&\, \left( \left(\sigma-1\right)\rho^2-2\left(\sigma+1\right)\rho + \left(\sigma-1\right) \right) r_H^3 \\
&\, + \left( \left(\sigma-3\right) \rho^2 - 2\left(\sigma+2\right) \rho + \left(\sigma-1\right) \right) r_H^2 r_K \\
&\, -2 \rho \left(\rho+1\right) r_H r_K^2 \\
&\, -\rho \left(\rho-1\right)^2 r_K^2 r_{HH} \\
&\, +2\rho\left(\rho-1\right)^2 r_H r_K r_{HK} \\
&\, -\rho\left(\rho-1\right)^2 r_H^2 r_{KK},
\end{align*}
\begin{align*}
G_2\left(\rho\right) :=&\, G_r\left(1,\rho\right) \\
=&\, \left( \left(\sigma-1\right)\rho^2-2\left(\sigma+1\right) \rho + \left(\sigma-1\right) \right) r_H^3 \\
&\, + \rho \left( \left(\sigma-1\right)\rho^2 - 2\left(\sigma+2\right) \rho + \left(\sigma-3\right) \right) r_H^2 r_K \\
&\, -2 \rho^2 \left(\rho+1\right) r_H r_K^2 \\
&\, -\rho^2 \left(\rho-1\right)^2 r_K^2 r_{HH} \\
&\, +2\rho^2\left(\rho-1\right)^2 r_H r_K r_{HK} \\
&\, -\rho^2\left(\rho-1\right)^2 r_H^2 r_{KK}.
\end{align*}
\end{proof}
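\begin{remark}
Two quick consistency checks of Lemma \ref{lem gradient terms}: first, the factor $\sigma-1$ in \eqref{K gradient Ksigma} shows that the gradient terms $G_K\left(\lambda_1,\lambda_2\right)$ vanish identically in the case $\sigma=1$ of the Gauss curvature flow. Second, at the umbilic value $\rho=1$ all terms containing the factor $\left(\rho-1\right)^2$ drop out and both dehomogenizations agree,
\begin{align*}
G_1\left(1\right) = G_2\left(1\right) = -4\,r_H\left(r_H+r_K\right)^2,
\end{align*}
where the $r$-terms are also evaluated at $\rho=1$.
\end{remark}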
\section{Dehomogenized polynomials, leading terms and nine cases}\label{sec dehomogenized polynomials}
\subsection{Dehomogenized polynomials, leading terms}
In the first part of Section \ref{sec dehomogenized polynomials}, we define homogeneous symmetric polynomials in the algebraic basis $\lbrace H,K \rbrace$.
We also state their first and second derivatives with respect to $H$ and $K$.
Furthermore, we calculate their dehomogenized versions setting $\lambda_1=\rho,\,\lambda_2=1$. So we obtain several polynomials in one variable.
Then we define an operator ${\mathcal{L}}$ that determines the leading terms of a given polynomial in one variable.
Now we present the leading terms of the above polynomials in one variable. Here, we have to distinguish three distinct cases.
Due to the form of the $r$-terms this means that we have nine different cases to explore.
In the second part of Section \ref{sec dehomogenized polynomials} we determine the leading terms of the $r$-terms.
In each case we continue with the calculation of the leading terms of the polynomial constant terms $C\left(\rho\right)$ and
the calculation of the leading terms of the polynomial gradient terms $G_1\left(\rho\right)$ and $G_2\left(\rho\right)$.
All nine cases result in a contradiction. This concludes the proof of our main Theorem \ref{main theorem}.
\begin{lemma}\label{lem poly p}
We define two homogeneous symmetric polynomials
\begin{align}\label{poly p}
p(H,K) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1}H^{g-2i}K^i, \\
q(H,K) :=&\,\sum_{j=0}^{\lfloor h/2\rfloor} d_{j+1}H^{h-2j}K^j,
\end{align}
where $g$ is the degree of $p\left(H,K\right)$, and $h$ is the degree of $q\left(H,K\right)$, respectively.
Furthermore, we have $\#\lbrace c_i \rbrace = \lfloor g/2 + 1 \rfloor$ and
$\#\lbrace d_j \rbrace = \lfloor h/2 + 1 \rfloor$, respectively.
We calculate the derivatives of the polynomial $p(H,K)$
\begin{align*}
\frac{\partial p}{\partial H}(H,K) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1}(g-2i)H^{g-2i-1}K^i, \\
\frac{\partial p}{\partial K}(H,K) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1} i H^{g-2i}K^{i-1}, \\
\frac{\partial^2 p}{\partial H^2}(H,K) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1}(g-2i)(g-2i-1)H^{g-2i-2}K^i, \\
\frac{\partial^2 p}{\partial H \partial K}(H,K) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1} (g-2i) i H^{g-2i-1}K^{i-1}, \\
\frac{\partial^2 p}{\partial K^2}(H,K) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1} i (i-1) H^{g-2i}K^{i-2}.
\end{align*}
\end{lemma}
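\begin{remark}
For instance, for $g=4$ the polynomial from Lemma \ref{lem poly p} reads $p\left(H,K\right) = c_1H^4 + c_2H^2K + c_3K^2$, with
\begin{align*}
\frac{\partial p}{\partial K} = c_2H^2 + 2c_3K \qquad \text{and} \qquad \frac{\partial^2 p}{\partial K^2} = 2c_3.
\end{align*}
\end{remark}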
\begin{lemma}\label{lem poly}
We calculate the dehomogenized version of the polynomial $p(H,K)$ and its derivatives from Lemma \ref{lem poly p} setting $\lambda_1=\rho,\,\lambda_2=1$, \textit{i.e.}, $H=\rho+1$ and $K=\rho$
\begin{align*}
p(\rho) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1}(\rho+1)^{g-2i}\rho^i, {\displaybreak[1]}\\
p_{H}(\rho) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1}(g-2i)(\rho+1)^{g-2i-1}\rho^i, {\displaybreak[1]}\\
p_{K}(\rho) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1} i (\rho+1)^{g-2i}\rho^{i-1}, {\displaybreak[1]}\\
p_{HH}(\rho) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1}(g-2i)(g-2i-1)(\rho+1)^{g-2i-2}\rho^i, {\displaybreak[1]}\\
p_{HK}(\rho) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1} (g-2i) i (\rho+1)^{g-2i-1}\rho^{i-1}, {\displaybreak[1]}\\
p_{KK}(\rho) :=&\,\sum_{i=0}^{\lfloor g/2\rfloor} c_{i+1} i (i-1) (\rho+1)^{g-2i}\rho^{i-2}.
\end{align*}
\end{lemma}
\begin{remark}
To determine the leading term of a polynomial $p \in {\mathbb{R}} [\rho]$ we write
\begin{align}\label{leading terms}
{\mathcal{L}}\left(p\right) = c_g \rho^g,
\end{align}
if $p = c_g \rho^g + \sum_{i=0}^{g-1} c_i \rho^i$ for some $c_i \in {\mathbb{R}}$.
Note that $c_g = 0$ is possible.
Furthermore, we set
\begin{equation}
{\mathcal{P}}_{\rho}\left(g\right) := \lbrace q \textit{ polynomial in } \rho :\textit{degree of } q \leq g \rbrace.
\end{equation}
\end{remark}
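\begin{remark}
For example, ${\mathcal{L}}\left(2\rho^3+\rho\right) = 2\rho^3$, and $2\rho^3+\rho \in {\mathcal{P}}_{\rho}\left(3\right) \subset {\mathcal{P}}_{\rho}\left(4\right)$.
\end{remark}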
\begin{lemma}\label{lem leading terms}
We apply the operator ${\mathcal{L}}$ from Definition \ref{leading terms} to the polynomials in Lemma \ref{lem poly} in all three distinct cases. \\
\textbf{Case A.} $c_1> 0$
\begin{align*}
{\mathcal{L}}\left(p\right)=&\, c_1 \rho^g,\\
{\mathcal{L}}\left(p_H\right)=&\,c_1 g \rho^{g-1},\\
{\mathcal{L}}\left(p_K\right)=&\,c_2 \rho^{g-2},\\
{\mathcal{L}}\left(p_{HH}\right)=&\,c_1 g\left(g-1\right)\rho^{g-2},\\
{\mathcal{L}}\left(p_{HK}\right)=&\,c_2 \left(g-2\right)\rho^{g-3}, \\
{\mathcal{L}}\left(p_{KK}\right)=&\,2c_3\rho^{g-4}.
\end{align*}
Terms with negative powers of $\rho$ do not occur for $g \geq 4$.
If $g \leq 3$ the terms with negative powers of $\rho$ are $0$. \\
\textbf{Case B.} $c_1=0,\,c_2>0$
\begin{align*}
{\mathcal{L}}\left(p\right)=&\, c_2 \rho^{g-1},\\
{\mathcal{L}}\left(p_H\right)=&\,c_2\left(g-2\right)\rho^{g-2},\\
{\mathcal{L}}\left(p_K\right)=&\,c_2 \rho^{g-2},\\
{\mathcal{L}}\left(p_{HH}\right)=&\,c_2 \left(g-2\right)\left(g-3\right)\rho^{g-3},\\
{\mathcal{L}}\left(p_{HK}\right)=&\,c_2 \left(g-2\right)\rho^{g-3},\\
{\mathcal{L}}\left(p_{KK}\right)=&\,2c_3\rho^{g-4}.
\end{align*}
Terms with negative powers of $\rho$ do not occur for $g \geq 4$.
If $g \leq 3$ the terms with negative powers of $\rho$ are $0$. \\
\textbf{Case C.} $c_1=0,\ldots,c_{k-1}=0,\,c_k>0$ for some $k\geq 3$
\begin{align*}
{\mathcal{L}}\left(p\right)=&\, c_k \rho^{g-\left(k-1\right)},\\
{\mathcal{L}}\left(p_H\right)=&\,c_k \left(g-2\left(k-1\right)\right) \rho^{g-k},\\
{\mathcal{L}}\left(p_K\right)=&\,c_k \left(k-1\right)\rho^{g-k},\\
{\mathcal{L}}\left(p_{HH}\right)=&\,c_k\left(g-2\left(k-1\right)\right)\left(g-2k+1\right) \rho^{g-\left(k+1\right)},\\
{\mathcal{L}}\left(p_{HK}\right)=&\,c_k \left(g-2\left(k-1\right)\right)\left(k-1\right)\rho^{g-\left(k+1\right)},\\
{\mathcal{L}}\left(p_{KK}\right)=&\,c_k\left(k-2\right)\left(k-1\right)\rho^{g-\left(k+1\right)}.
\end{align*}
Terms with negative powers of $\rho$ do not occur.
Since $c_1=0,\ldots,c_{k-1}=0,\,c_k>0$ for some $k\geq 3$,
we have $3 \leq k \leq \# \lbrace c_i \rbrace = \lfloor g/2 + 1\rfloor$.
Thus, $2\left(k-1\right) \leq g$. Therefore, we get
\begin{align*}
&\, g- \left(k+1\right)
\geq 2\left(k-1\right) - \left(k+1\right)
= k-3
\geq 0.
\end{align*}
Furthermore, we have $g-k \geq 1$. We will use this implicitly in the second part of Section \ref{sec dehomogenized polynomials}.
\end{lemma}
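\begin{remark}
For instance, for $p\left(H,K\right) = c_1H^4 + c_2H^2K + c_3K^2$ with $c_1>0$ (\textbf{Case A}, $g=4$) we have $p\left(\rho\right) = c_1\left(\rho+1\right)^4 + c_2\left(\rho+1\right)^2\rho + c_3\rho^2$, and indeed ${\mathcal{L}}\left(p\right) = c_1\rho^4$ and ${\mathcal{L}}\left(p_K\right) = c_2\rho^2$, in accordance with Lemma \ref{lem leading terms}.
\end{remark}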
\subsection{Nine cases}
\begin{remark}
We recall from Lemma \ref{lem gradient terms} that the $r$-terms are defined as
\begin{align*}
&\, r_H:=q\;\frac{\partial p}{\partial H}-p\;\frac{\partial q}{\partial H},\;
r_K:=q\;\frac{\partial p}{\partial K}-p\;\frac{\partial q}{\partial K}, \\
&\, r_{HH}:=q\;\frac{\partial^2p}{\partial H^2}-p\;\frac{\partial^2q}{\partial H^2},\;
r_{HK}:=q\;\frac{\partial^2p}{\partial H \partial K}-p\;\frac{\partial^2q}{\partial H \partial K},\;
r_{KK}:=q\;\frac{\partial^2p}{\partial K^2}-p\;\frac{\partial^2q}{\partial K^2}.
\end{align*}
Therefore, we have to distinguish these nine cases in order to calculate the leading terms of the $r$-terms
\begin{itemize}
\item Case I: $c_1>0,\,d_1>0$,
\item Case II: $c_1>0,\,d_2>0$,
\item Case III: $c_1>0,\,d_l>0$,\\
\item Case IV: $c_2>0,\,d_2>0$,
\item Case V: $c_2>0,\,d_l>0$,
\item Case VI: $c_k>0,\,d_l>0$,\\
\item Case VII: $c_2>0,\,d_1>0$,
\item Case VIII: $c_k>0,\,d_1>0$,
\item Case IX: $c_k>0,\,d_2>0$,
\end{itemize}
where $k,\,l \geq 3$.
\end{remark}
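\begin{remark}
In each of the nine cases, the leading terms of the $r$-terms follow from Lemma \ref{lem leading terms} via products of leading terms, \textit{e.g.},
\begin{align*}
{\mathcal{L}}\left(r_H\right) = {\mathcal{L}}\left(q\right){\mathcal{L}}\left(p_H\right) - {\mathcal{L}}\left(p\right){\mathcal{L}}\left(q_H\right),
\end{align*}
provided the two products do not cancel; if they cancel, we only record an upper bound on the degree using ${\mathcal{P}}_{\rho}$, as in \textit{Case IV} below.
\end{remark}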
\begin{remark}
We recall the constant terms $C\left(\rho\right)$ from Lemma \ref{lem constant terms}
\begin{align}\label{C}
C\left(\rho\right)=&\, \left(\left(1-\sigma\right)\rho^2+2\sigma\rho+\left(1-\sigma\right)\right)r_H + \rho\left(\rho+1\right)r_K,
\end{align}
the gradient terms $G_1\left(\rho\right)$ from Lemma \ref{lem gradient terms}
\begin{align}\label{G1}
\begin{split}
G_1\left(\rho\right)= &\, \left( \left(\sigma-1\right)\rho^2-2\left(\sigma+1\right)\rho + \left(\sigma-1\right) \right) r_H^3 \\
&\, + \left( \left(\sigma-3\right) \rho^2 - 2\left(\sigma+2\right) \rho + \left(\sigma-1\right) \right) r_H^2 r_K \\
&\, -2 \rho \left(\rho+1\right) r_H r_K^2 \\
&\, -\rho \left(\rho-1\right)^2 r_K^2 r_{HH} \\
&\, +2\rho\left(\rho-1\right)^2 r_H r_K r_{HK} \\
&\, -\rho\left(\rho-1\right)^2 r_H^2 r_{KK},
\end{split}
\end{align}
and the gradient terms $G_2\left(\rho\right)$ from Lemma \ref{lem gradient terms}
\begin{align}\label{G2}
\begin{split}
G_2\left(\rho\right) =&\,\left( \left(\sigma-1\right)\rho^2-2\left(\sigma+1\right) \rho + \left(\sigma-1\right) \right) r_H^3 \\
&\, + \rho \left( \left(\sigma-1\right)\rho^2 - 2\left(\sigma+2\right) \rho + \left(\sigma-3\right) \right) r_H^2 r_K \\
&\, -2 \rho^2 \left(\rho+1\right) r_H r_K^2 \\
&\, -\rho^2 \left(\rho-1\right)^2 r_K^2 r_{HH} \\
&\, +2\rho^2\left(\rho-1\right)^2 r_H r_K r_{HK} \\
&\, -\rho^2\left(\rho-1\right)^2 r_H^2 r_{KK}.
\end{split}
\end{align}
\end{remark}
\subsection{Case I} $c_1>0,\,d_1>0$ \\
First we calculate the leading terms or the maximal order of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right) = c_1 d_1 \left(g-h\right) \rho^{g+h-1}, \\
&\, {\mathcal{L}}\left(r_K\right) \in {\mathcal{P}}_{\rho}\left(g+h-2\right), \\
&\, {\mathcal{L}}\left(r_{HH}\right) \in {\mathcal{P}}_{\rho}\left(g+h-2\right), \\
&\, {\mathcal{L}}\left(r_{HK}\right) \in {\mathcal{P}}_{\rho}\left(g+h-3\right), \\
&\, {\mathcal{L}}\left(r_{KK}\right) \in {\mathcal{P}}_{\rho}\left(g+h-4\right).
\end{align*}
Now we calculate the leading terms of $G_1\left(\rho\right)$ using \eqref{G1}
\begin{align*}
{\mathcal{L}}\big(G_1\left(\rho\right)\big) =&\, c_1^3 d_1^3 \left(\sigma-1\right)\left(g-h\right)^3\rho^{3\left(g+h\right)-1}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $g\geq 2$, $g-h > 0$ and ${\mathcal{L}}\big( G_1\left(\rho \right) \big) \leq 0$ for all $\rho \geq 0$.
Since $\sigma-1 > 0$, \textit{Case I} results in a contradiction.
\subsection{Case II} $c_1>0,\,d_2>0$ \\
First we calculate the leading terms or the maximal order of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right) = c_1 d_2 \left(g-h+2\right) \rho^{g+h-2}, \\
&\, {\mathcal{L}}\left(r_K\right) = -c_1 d_2 \rho^{g+h-2}, \\
&\, {\mathcal{L}}\left(r_{HH}\right) = c_1 d_2 \left(g-h+2\right)\left(g+h-3\right) \rho^{g+h-3}, \\
&\, {\mathcal{L}}\left(r_{HK}\right) = -c_1 d_2 \left(h-2\right) \rho^{g+h-3}, \\
&\, {\mathcal{L}}\left(r_{KK}\right) \in {\mathcal{P}}_{\rho}\left(g+h-4\right).
\end{align*}
Now we calculate the leading terms of $G_1\left(\rho\right)$ using \eqref{G1}
\begin{align*}
&\, {\mathcal{L}}\big( G_1\left(\rho\right) \big) = -c_1^3 d_2^3 \left(g-h+1\right)\left(g-h+2\right)
\left(\left(g-h\right)\left(1-\sigma\right)+1-2\sigma\right)\rho^{3\left(g+h\right)-4}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $g\geq 2$, $g-h > 0$ and ${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$. Since $\frac{2\sigma-1}{\sigma-1}>2$ for all $\sigma>1$, we get $g-h+\frac{2\sigma-1}{\sigma-1} > 0$ which is equivalent to $\left(g-h\right)\left(1-\sigma\right)+1-2\sigma < 0$. Therefore, \textit{Case II} results in a contradiction.
\subsection{Case III} $c_1>0,\,d_l>0$ for some $l\geq 3$ \\
First we calculate the leading terms of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right) = c_1 d_l \left(g-h+2\left(l-1\right)\right) \rho^{g+h-l}, \\
&\, {\mathcal{L}}\left(r_K\right) = -c_1 d_l \left(l-1\right) \rho^{g+h-l}, \\
&\, {\mathcal{L}}\left(r_{HH}\right) = c_1 d_l \left(g-h+2\left(l-1\right)\right)\left(g+h-2l+1\right)\rho^{g+h-\left(l+1\right)}, \\
&\, {\mathcal{L}}\left(r_{HK}\right) = -c_1 d_l \left(h-2\left(l-1\right)\right)\left(l-1\right)\rho^{g+h-\left(l+1\right)}, \\
&\, {\mathcal{L}}\left(r_{KK}\right) = -c_1 d_l \left(l-2\right)\left(l-1\right) \rho^{g+h-\left(l+1\right)}.
\end{align*}
Now we calculate the leading terms of $G_1\left(\rho\right)$ using \eqref{G1}
\begin{align*}
&\, {\mathcal{L}}\big(G_1\left(\rho\right)\big) = -c_1^3 d_l^3 \left(g-h+\left(l-1\right)\right)\left(g-h+2\left(l-1\right)\right)\cdot \\
&\, \qquad \qquad \qquad \cdot \left(\left(g-h\right)\left(1-\sigma\right)+\left(l-1\right)\left(1-2\sigma\right)\right) \rho^{3\left(g+h-l\right)+2}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $h-l\geq 1$, $g-h>0$ and ${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$.
Since $\left(l-1\right)\frac{2\sigma-1}{\sigma-1} > 4$ for all $l\geq 3,\,\sigma>1$, we get $g-h+\left(l-1\right)\frac{2\sigma-1}{\sigma-1} > 0$
which is equivalent to $\left(g-h\right)\left(1-\sigma\right)+\left(l-1\right)\left(1-2\sigma\right) < 0$. Therefore, \textit{Case III} results in a contradiction.
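\begin{remark}
Formally setting $l=2$ in the leading terms of \textit{Case III} recovers the leading terms of \textit{Case II}, which serves as a consistency check of the computations.
\end{remark}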
\subsection{Case IV} $c_2>0,\,d_2>0$ \\
First we calculate the leading terms or the maximal order of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right) = c_2 d_2 \left(g-h\right) \rho^{g+h-3}, \\
&\, {\mathcal{L}}\left(r_K\right) \in {\mathcal{P}}_{\rho}\left(g+h-4\right), \\
&\, {\mathcal{L}}\left(r_{HH}\right) \in {\mathcal{P}}_{\rho}\left(g+h-4\right), \\
&\, {\mathcal{L}}\left(r_{HK}\right) \in {\mathcal{P}}_{\rho}\left(g+h-4\right), \\
&\, {\mathcal{L}}\left(r_{KK}\right) \in {\mathcal{P}}_{\rho}\left(g+h-5\right).
\end{align*}
Now we calculate the leading terms of $G_1\left(\rho\right)$ using \eqref{G1}
\begin{align*}
{\mathcal{L}}\big(G_1\left(\rho\right)\big) =&\, c_2^3 d_2^3 \left(\sigma-1\right)\left(g-h\right)^3\rho^{3\left(g+h\right)-7}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $g\geq 2$, $g-h>0$ and ${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$.
Due to $d_1=0,\,d_2>0$ we have $h\geq 2$.
Since $\sigma-1>0$, \textit{Case IV} results in a contradiction.
\subsection{Case V} $c_2>0,\,d_l > 0$ for some $l\geq 3$ \\
First we calculate the leading terms of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right)= c_2d_l\left(g-h+2\left(l-2\right)\right)\rho^{g+h-\left(l+1\right)}, \\
&\, {\mathcal{L}}\left(r_K\right)= -c_2d_l\left(l-2\right)\rho^{g+h-\left(l+1\right)},\\
&\, {\mathcal{L}}\left(r_{HH}\right) = c_2d_l(g-h+2(l-2))(g+h-(2l+1))\rho^{g+h-\left(l+2\right)}, \\
&\, {\mathcal{L}}\left(r_{HK}\right)= c_2d_l \left( \left(g-2\right) - \left(h-2\left(l-1\right)\right)\left(l-1\right) \right)\rho^{g+h-\left(l+2\right)}, \\
&\, {\mathcal{L}}\left(r_{KK}\right)= -c_2d_l\left(l-2\right)\left(l-1\right)\rho^{g+h-\left(l+2\right)}.
\end{align*}
Now we calculate the leading terms of $G_1\left(\rho\right)$ using \eqref{G1}
\begin{align*}
&\, {\mathcal{L}}\big(G_1\left(\rho\right)\big) = -c_2^3 d_l^3 (g-h+(l-2))(g-h+2(l-2))\cdot \\
&\, \qquad \qquad \qquad \cdot((g-h)(1-\sigma)+(l-2)(1-2\sigma)) \rho^{3\left(g+h-l\right)-1}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $h-l \geq 1$, $g-h>0$ and ${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$.
Since $\left(l-2\right)\frac{2\sigma-1}{\sigma-1} > 2$ for all $l\geq 3,\,\sigma>1$, we get $g-h+\left(l-2\right)\frac{2\sigma-1}{\sigma-1} > 0$
which is equivalent to $\left(g-h\right)\left(1-\sigma\right)+\left(l-2\right)\left(1-2\sigma\right) < 0$. Therefore, \textit{Case V} results in a contradiction.
\subsection{Case VI} $c_k>0,\,d_l>0$ for some $k,\,l\geq 3$ \\
First we calculate the leading terms of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right)= c_k d_l\left(\left(g-h\right)-2\left(k-l\right)\right)\rho^{g+h-\left(k+l-1\right)}, \\
&\, {\mathcal{L}}\left(r_K\right)= c_kd_l\left(k-l\right)\rho^{g+h-\left(k+l-1\right)},\\
&\, {\mathcal{L}}\left(r_{HH}\right)= c_kd_l(g+h+3-2(k+l))(g-h-2(k-l))\rho^{g+h-\left(k+l\right)}, \\
&\, {\mathcal{L}}\left(r_{HK}\right)= c_kd_l\left(\left(g-2\left(k-1\right)\right)\left(k-1\right)-\left(h-2\left(l-1\right)\right)\left(l-1\right)\right)\rho^{g+h-\left(k+l\right)}, \\
&\, {\mathcal{L}}\left(r_{KK}\right)= c_kd_l\left(\left(k-2\right)\left(k-1\right)-\left(l-2\right)\left(l-1\right)\right)\rho^{g+h-\left(k+l\right)}.
\end{align*}
Now we calculate the leading terms of $C\left(\rho\right),\,G_1\left(\rho\right),\,G_2\left(\rho\right)$ using \eqref{C}, \eqref{G1}, \eqref{G2}
\begin{align*}
&\, {\mathcal{L}}\big(C(\rho)\big)= c_k d_l \left((g-h)(1-\sigma)+(l-k)(1-2\sigma)\right) \rho^{g+h-\left(k+l\right)+3},\\
&\, {\mathcal{L}}\big(G_1(\rho)\big)= -c_k^3 d_l^3 (g-h+(l-k))(g-h+2(l-k))\cdot \\
&\, \qquad \qquad \qquad \cdot((g-h)(1-\sigma)+(l-k)(1-2\sigma))\rho^{3\left(g+h-\left(k+l\right)\right)+5},\\
&\, {\mathcal{L}}\big(G_2(\rho)\big)= -c_k^3 d_l^3 (l-k)(g-h+2(l-k))\cdot \\
&\, \qquad \qquad \qquad \cdot((l-k)+(g-h+2(l-k))\sigma)\rho^{3\left(g+h-\left(k+l\right)\right)+6}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $g-k \geq 1$, $h-l \geq 1$, $g-h>0$ and ${\mathcal{L}}\big(C\left(\rho\right)\big) \leq 0$,
${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$, ${\mathcal{L}}\big(G_2\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$. \\
We assume $(g-h)(1-\sigma)+(l-k)(1-2\sigma) = 0$ which is equivalent to the identity $g-h = \left(k-l\right)\frac{2\sigma-1}{\sigma-1}$.
Since $\frac{2\sigma-1}{\sigma-1} > 2$, we get $k-l > 0$. Using this identity we get
\begin{align*}
{\mathcal{L}}\big(G_2(\rho)\big)=&\, c_k^3 d_l^3 (k-l)^3\left(\frac{1}{\sigma-1}\right)^2\rho^{3\left(g+h-\left(k+l\right)\right)+6}
\end{align*}
which results in a contradiction. So we have $(g-h)(1-\sigma)+(l-k)(1-2\sigma) < 0$ which is equivalent to $g-h + \left(l-k\right)\frac{2\sigma-1}{\sigma-1} > 0$.
Furthermore, we assume $k-l > 0$ which implies
\begin{align*}
&\, g-h>(k-l)\frac{2\sigma-1}{\sigma-1}>2(k-l)>k-l \text{ and }\\
&\, g-h+(l-k)>g-h+2(l-k)>0.
\end{align*}
Thus, the condition ${\mathcal{L}}\big(G_1(\rho)\big)\leq 0$ for all $\rho\geq 0$ results in a contradiction.
For $l-k\geq0$ the same condition also results in a contradiction. Therefore, \textit{Case VI} results in a contradiction.
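\begin{remark}
Formally substituting $\left(k,l\right)=\left(2,1\right)$, $\left(k,1\right)$ and $\left(k,2\right)$ into the leading terms of \textit{Case VI} yields the leading terms of \textit{Cases VII}, \textit{VIII} and \textit{IX} below, respectively; this may serve as a consistency check of the following computations.
\end{remark}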
\subsection{Case VII} $c_2>0,\, d_1>0$ \\
First we calculate the leading terms of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right) = c_2 d_1 \left(g-h -2\right) \rho^{g+h-2}, \\
&\, {\mathcal{L}}\left(r_K\right) = c_2 d_1 \rho^{g+h-2}, \\
&\, {\mathcal{L}}\left(r_{HH}\right) = c_2 d_1 \left(g-h-2\right)\left(g+h-3\right) \rho^{g+h-3}, \\
&\, {\mathcal{L}}\left(r_{HK}\right) = c_2 d_1 \left(g-2\right) \rho^{g+h-3}, \\
&\, {\mathcal{L}}\left(r_{KK}\right) \in {\mathcal{P}}_{\rho}\left(g+h-4\right).
\end{align*}
Now we calculate the leading terms of $C\left(\rho\right),\,G_1\left(\rho\right),\,G_2\left(\rho\right)$ using \eqref{C}, \eqref{G1}, \eqref{G2}
\begin{align*}
&\, {\mathcal{L}}\big(C\left(\rho\right)\big) = c_2 d_1 \left(\left(g-h\right)\left(1-\sigma\right) - 1 + 2\sigma\right)\rho^{g+h}, \\
&\, {\mathcal{L}}\big(G_1\left(\rho\right)\big) = -c_2^3 d_1^3 \left(g-h-2\right)\left(g-h-1\right)
\left(\left(g-h\right)\left(1-\sigma\right)-1+2\sigma\right)\rho^{3\left(g+h\right)-4}, \\
&\, {\mathcal{L}}\big(G_2\left(\rho\right)\big) = c_2^3 d_1^3 \left(g-h-2\right)\left(-1+\left(g-h-2\right)\sigma\right)\rho^{3\left(g+h\right)-3}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $g\geq 2$, $g-h>0$ and ${\mathcal{L}}\big(C\left(\rho\right)\big) \leq 0$,
${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$, ${\mathcal{L}}\big(G_2\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$. \\
We assume $\left(g-h\right)\left(1-\sigma\right) - 1 + 2\sigma = 0$ which is equivalent to the identity $g-h = \frac{2\sigma-1}{\sigma-1}$, so that $g-h-2 = \frac{1}{\sigma-1}$ and $-1+\left(g-h-2\right)\sigma = \frac{1}{\sigma-1}$. Using this identity we get
\begin{align*}
&\, {\mathcal{L}}\big(G_2\left(\rho\right)\big) = c_2^3 d_1^3 \left(\frac{1}{\sigma-1}\right)^2\rho^{3\left(g+h\right)-3}
\end{align*}
which results in a contradiction. So we have $\left(g-h\right)\left(1-\sigma\right) - 1 + 2\sigma < 0$ which is equivalent to $g-h > \frac{2\sigma-1}{\sigma-1}$. This implies
\begin{align*}
&\, g-h > \frac{2\sigma-1}{\sigma-1} > 2 > 1 \text{ and } \\
&\, g-h-1 > g-h-2 > 0.
\end{align*}
Thus, the condition ${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$ results in a contradiction. Therefore, \textit{Case VII} results in a contradiction.
\subsection{Case VIII} $c_k>0,\, d_1>0$ for some $k\geq 3$ \\
First we calculate the leading terms of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right) = c_k d_1 \left(g-h -2\left(k-1\right)\right) \rho^{g+h-k}, \\
&\, {\mathcal{L}}\left(r_K\right) = c_k d_1 \left(k-1\right) \rho^{g+h-k}, \\
&\, {\mathcal{L}}\left(r_{HH}\right) = c_k d_1 \left(g-h-2\left(k-1\right)\right)\left(g+h-2k+1\right) \rho^{g+h-\left(k+1\right)}, \\
&\, {\mathcal{L}}\left(r_{HK}\right) = c_k d_1 \left(g-2\left(k-1\right)\right)\left(k-1\right) \rho^{g+h-\left(k+1\right)}, \\
&\, {\mathcal{L}}\left(r_{KK}\right) = c_k d_1 \left(k-2\right)\left(k-1\right) \rho^{g+h-\left(k+1\right)}.
\end{align*}
Now we calculate the leading terms of $C\left(\rho\right),\,G_1\left(\rho\right),\,G_2\left(\rho\right)$ using \eqref{C}, \eqref{G1}, \eqref{G2}
\begin{align*}
&\, {\mathcal{L}}\big(C\left(\rho\right)\big) = c_k d_1 \left(\left(g-h\right)\left(1-\sigma\right) + \left(k-1\right)\left(-1+2\sigma\right)\right)\rho^{g+h-k+2}, \\
&\, {\mathcal{L}}\big(G_1\left(\rho\right)\big) = -c_k^3 d_1^3 \left(g-h-2\left(k-1\right)\right)\left(g-h-\left(k-1\right)\right) \cdot \\
&\, \qquad \qquad \qquad \qquad \cdot \left(\left(g-h\right)\left(1-\sigma\right)+\left(k-1\right)\left(-1+2\sigma\right)\right)\rho^{3\left(g+h-k\right)+2}, \\
&\, {\mathcal{L}}\big(G_2\left(\rho\right)\big) = c_k^3 d_1^3 \left(g-h-2\left(k-1\right)\right)\left(k-1\right)\cdot \\
&\, \qquad \qquad \qquad \qquad \cdot \left(-\left(k-1\right)+\left(g-h-2\left(k-1\right)\right)\sigma\right)\rho^{3\left(g+h-k\right)+3}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $g-k\geq 1$, $g-h>0$ and ${\mathcal{L}}\big(C\left(\rho\right)\big) \leq 0$,
${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$, ${\mathcal{L}}\big(G_2\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$. \\
We assume $\left(g-h\right)\left(1-\sigma\right) + \left(k-1\right)\left(-1+2\sigma\right) = 0$ which is equivalent to the identity $g-h = \left(k-1\right)\frac{2\sigma-1}{\sigma-1}$. Using this identity we get
\begin{align*}
&\, {\mathcal{L}}\big(G_2\left(\rho\right)\big) = c_k^3 d_1^3 \left(k-1\right)^3 \left(\frac{1}{\sigma-1}\right)^2\rho^{3\left(g+h-k\right)+3}
\end{align*}
which results in a contradiction. So we have $\left(g-h\right)\left(1-\sigma\right) + \left(k-1\right)\left(-1+2\sigma\right) < 0$ which is equivalent to $g-h > \left(k-1\right)\frac{2\sigma-1}{\sigma-1}$. This implies
\begin{align*}
&\, g-h > \left(k-1\right) \frac{2\sigma-1}{\sigma-1} > 2\left(k-1\right) > k-1 \text{ and } \\
&\, g-h-\left(k-1\right) > g-h-2\left(k-1\right) > 0.
\end{align*}
Thus, the condition ${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$ results in a contradiction. Therefore, \textit{Case VIII} results in a contradiction.
\subsection{Case IX} $c_k>0,\, d_2>0$ for some $k\geq 3$ \\
First we calculate the leading terms of the $r$-terms using Lemma \ref{lem leading terms}.
\begin{align*}
&\, {\mathcal{L}}\left(r_H\right) = c_k d_2 \left(g-h -2\left(k-2\right)\right) \rho^{g+h-\left(k+1\right)}, \\
&\, {\mathcal{L}}\left(r_K\right) = c_k d_2 \left(k-2\right) \rho^{g+h-\left(k+1\right)}, \\
&\, {\mathcal{L}}\left(r_{HH}\right) = c_k d_2 \left(g-h-2\left(k-2\right)\right)\left(g+h-\left(2k+1\right)\right) \rho^{g+h-\left(k+2\right)}, \\
&\, {\mathcal{L}}\left(r_{HK}\right) = c_k d_2 \left(\left(g-2\left(k-1\right)\right)\left(k-1\right) - \left(h-2\right)\right) \rho^{g+h-\left(k+2\right)}, \\
&\, {\mathcal{L}}\left(r_{KK}\right) = c_k d_2 \left(k-2\right)\left(k-1\right) \rho^{g+h-\left(k+2\right)}.
\end{align*}
Now we calculate the leading terms of $C\left(\rho\right),\,G_1\left(\rho\right),\,G_2\left(\rho\right)$ using \eqref{C}, \eqref{G1}, \eqref{G2}
\begin{align*}
&\, {\mathcal{L}}\big(C\left(\rho\right)\big) = c_k d_2 \left(\left(g-h\right)\left(1-\sigma\right) + \left(k-2\right)\left(-1+2\sigma\right)\right)\rho^{g+h-\left(k-1\right)}, \\
&\, {\mathcal{L}}\big(G_1\left(\rho\right)\big) = -c_k^3 d_2^3 \left(g-h-2\left(k-2\right)\right)\left(g-h-\left(k-2\right)\right) \cdot \\
&\, \qquad \qquad \qquad \qquad \cdot \left(\left(g-h\right)\left(1-\sigma\right)+\left(k-2\right)\left(-1+2\sigma\right)\right)\rho^{3\left(g+h-k\right)-1}, \\
&\, {\mathcal{L}}\big(G_2\left(\rho\right)\big) = c_k^3 d_2^3 \left(g-h-2\left(k-2\right)\right)\left(k-2\right)\cdot \\
&\, \qquad \qquad \qquad \qquad \cdot \left(-\left(k-2\right)+\left(g-h-2\left(k-2\right)\right)\sigma\right)\rho^{3\left(g+h-k\right)}.
\end{align*}
For \textit{maximum-principle functions} \eqref{def mpf} we have $g-k\geq 1$, $g-h>0$ and ${\mathcal{L}}\big(C\left(\rho\right)\big) \leq 0$,
${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$, ${\mathcal{L}}\big(G_2\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$. \\
We assume $\left(g-h\right)\left(1-\sigma\right) + \left(k-2\right)\left(-1+2\sigma\right) = 0$ which is equivalent to the identity $g-h = \left(k-2\right)\frac{2\sigma-1}{\sigma-1}$. Using this identity we get
\begin{align*}
&\, {\mathcal{L}}\big(G_2\left(\rho\right)\big) = c_k^3 d_2^3 \left(k-2\right)^3 \left(\frac{1}{\sigma-1}\right)^2\rho^{3\left(g+h-k\right)}
\end{align*}
which results in a contradiction. So we have $\left(g-h\right)\left(1-\sigma\right) + \left(k-2\right)\left(-1+2\sigma\right) < 0$ which is equivalent to $g-h > \left(k-2\right)\frac{2\sigma-1}{\sigma-1}$. This implies
\begin{align*}
&\, g-h > \left(k-2\right) \frac{2\sigma-1}{\sigma-1} > 2\left(k-2\right) > k-2 \text{ and } \\
&\, g-h-\left(k-2\right) > g-h-2\left(k-2\right) > 0.
\end{align*}
Thus, the condition ${\mathcal{L}}\big(G_1\left(\rho\right)\big) \leq 0$ for all $\rho\geq 0$ results in a contradiction. Therefore, \textit{Case IX} results in a contradiction.
\section{Introduction}
\label{sec:introduction}
In most wireless networks, medium access control (MAC) is needed to avoid excessive collisions, which occur if a station transmits to another transmitting station, which is half-duplex, or a station receives multiple simultaneous transmissions and cannot successfully decode the desired message(s).
Many MAC protocols can be viewed as requiring each station to maintain a state, which determines when the station transmits.
To provide distributed operation, this state is updated based on information available locally in space.
For example, in carrier-sense multiple access (CSMA) protocols, the state is determined by the carrier sensing operation and the random backoff mechanism.
This paper considers MAC protocols in which stations explicitly exchange limited state information.
The protocols are \textit{self-stabilizing}, \textit{i.e.}, they converge to a collision-free schedule regardless of the initial state.
The underlying network is assumed to be static or vary slowly with respect to the execution of the MAC protocol.
In the steady state, these protocols behave like time-division multiple access (TDMA), in which stations take turns to transmit without collision; while in the transient state, they behave like CSMA, such that stations contend with each other, trying to find a slot for transmission and avoid collisions.
Under the assumption of a single collision domain, \textit{i.e.}, all stations can hear each other, self-stabilizing MAC protocols have been studied in \cite{JLJW:1,JBBBCCMO:1,MFDMKDDL:1}.
By learning transmission decisions of others, stations are able to find a collision-free schedule in a decentralized manner.
As is pointed out in \cite{JLJW:1}, these protocols cannot guarantee the formation of a collision-free schedule in case of multiple collision domains and focus on schedules for unicast traffic.
This paper focuses on establishing collision-free schedules for broadcast and multicast traffic in networks with multiple collision domains.
It is well-known that multiple collision domains complicate scheduling due in part to hidden terminals and exposed terminals.
For unicast traffic, state exchange in the form of RTS/CTS signaling can help alleviate these complications.
However, this is not suitable when a station wants to broadcast a packet to all nearby stations.
To facilitate this, we consider a richer form of state exchange.
We build on work in \cite{KHDGRBMH:1} and \cite{KHDGRB:1}, which introduces self-stabilizing MAC protocols for one- and two-dimensional regular networks on lattices.
The technique is to divide time into periodic cycles, where each cycle is divided into slots.
A station maintains a single state and transmits only over the slot corresponding to its state.
Once the protocols converge, a periodic state pattern (with immediate neighbors assuming different states) is formed throughout the regular network, and the maximum broadcast throughput is achieved.
If one directly applies these ideas to networks with arbitrary topologies, sufficiently many states are needed for stations with many neighbors, but in a neighborhood with few stations, the wireless channel is underutilized because few states are occupied.
There has been significant work on MAC scheduling for networks that builds on the seminal max-weight algorithm \cite{LTAE:1} and attempts to derive distributed, low-complexity algorithms which approach the throughput-optimal performance of \cite{LTAE:1}.
Examples include \cite{PCKKXLSS:1,XLNS:1,GSNSRM:1,EMDSGZ:1,AEAOEM:1,XLSR:1}.
These approaches seek to adapt the resulting schedule to queue variations.
Here, we instead consider a model with saturated traffic and seek fixed rate-based schedules, as in \cite{YYGDSS:1}.
Such a schedule is naturally more useful for traffic that has a fixed long-term arrival rate.
More bursty traffic can be accommodated by reserving some fraction of time for contention-based access, as in \cite{IRAWMAJMMS:1}.
The main contributions of this paper are as follows:
\begin{enumerate}
\item
In Section~\ref{sec:model}, we introduce the concept of \textit{multiple resolutions}: a station with more neighbors uses a fine resolution (more states in its state space, each state corresponding to a shorter slot), whereas a station with fewer neighbors uses a coarse resolution (fewer states in its state space, each state corresponding to a longer slot).
\item
In Sections~\ref{sec:1d} and~\ref{sec:2d}, multi-resolution MAC protocols are proposed for broadcast in one- and two-dimensional networks with arbitrary topologies, respectively.
These protocols guarantee every station a chance to transmit in each cycle.
In addition, they achieve approximate proportional fairness in the sense that a station's throughput is approximately inversely proportional to the node density in its neighborhood.
We show that in one-dimensional networks, stations can determine their resolutions in a distributed manner.
The same also holds for two-dimensional networks under a mild condition.
In case the condition is not met, we propose a mechanism for stations to dynamically change their resolutions until collisions no longer occur anywhere in the network.
\item
We show that the multi-resolution protocols apply in more general settings.
In Section~\ref{sec:mc}, we consider multicast traffic.
In Section~\ref{sec:multich}, broadcast and multicast in networks with multiple orthogonal channels are considered.
\end{enumerate}
In all cases, the convergence of such protocols to a collision-free schedule is rigorously established.
Achieving the globally optimal throughput is an NP-complete problem \cite{AETT:2,RRKP:1} and is beyond the scope of this paper.
\section{System Model}
\label{sec:model}
Consider a simple model for wireless networks where two stations have a direct radio link between them if they can hear each other.
The network can be modeled by an arbitrary graph $G=(V,A)$, where $V=\lbrace\mathbf{r}_i\rbrace_{i=0}^{\lvert V\rvert-1}$ is the set of stations labeled by their coordinates, and $A=\lbrace(\mathbf{r}_i,\mathbf{r}_j)\rbrace\subset V\times V$ is the set of \textit{undirected} links.
Let $V_\mathbf{r}$ denote the set of (one-hop) peers or neighbors of station $\mathbf{r}$.
We assume the interference range of a station is the same as its transmission range, so $V_\mathbf{r}$ denotes both the set of potential receivers and the set of potential interferers for station $\mathbf{r}$.
Sections~\ref{sec:1d} and~\ref{sec:2d} study the case where every station broadcasts packets to all its one-hop peers in a single channel.
Section~\ref{sec:mc} studies the case where every station multicasts packets to a certain subset of its one-hop peers in a single channel.
In Section~\ref{sec:multich}, broadcast and multicast in networks with multiple orthogonal channels are considered.
For both broadcast and multicast, saturated traffic is assumed.
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=3.5in]{network}
\label{fig:network}
}
\subfigure[]{
\includegraphics[width=3.5in]{resolution}
\label{fig:resolution}
}
\caption{A multi-resolution MAC protocol in a one-dimensional network. In \subref{fig:network}, there are $6$ stations at positions $\mathbf{r}_0,\dots,\mathbf{r}_5$. The half-circles represent the transmission ranges of the stations. The values of $1+\lvert V_\mathbf{r}\rvert$ and $l_\mathbf{r}$ for different stations are shown on top of the corresponding circles. A possible schedule over a cycle is shown in \subref{fig:resolution}.}
\label{fig:mr}
\end{figure}
Next we formalize the concept of multiple resolutions.
Let time be divided into cycles of fixed length.
We let station $\mathbf{r}$ decide on a resolution represented by an integer $l_\mathbf{r}\ge0$.
From the viewpoint of this station, each cycle is further divided into $2^{l_\mathbf{r}}$ slots of equal length.
The state of this station in the $t$-th cycle, denoted by $X_\mathbf{r}(t)$, is a binary string of length $l_\mathbf{r}$ corresponding to the index of the slot in the $t$-th cycle over which the station transmits.
Let $\pmb{X}(t)=\lbrace X_\mathbf{r}(t)\rbrace_{\mathbf{r}\in V}$ be the configuration in the $t$-th cycle.
We assume that all stations are synchronized.
A finer resolution can therefore be obtained by `splitting' or `refining' a coarse resolution.
We assume that packets transmitted by a station fit in a slot of its own resolution.
Stations using coarse resolutions can also transmit multiple packets of smaller sizes in a slot.
A \textit{collision} occurs between two one-hop or two-hop peers if they transmit at the same time.
Mathematically, two such stations $\mathbf{r}_i$ and $\mathbf{r}_j$, with $l_{\mathbf{r}_i}\le l_{\mathbf{r}_j}$ (without loss of generality), collide in the $t$-th cycle when
\begin{equation}
\label{eqn:collision}
\text{the binary string }X_{\mathbf{r}_i}(t)\text{ is a prefix of }X_{\mathbf{r}_j}(t).
\end{equation}
In a \textit{collision-free configuration}, (\ref{eqn:collision}) must not hold for any pair of one-hop or two-hop peers.
Consider the example described in Fig.~\ref{fig:mr}.
Station $\mathbf{r}_5$ uses state $10$, \textit{i.e.}, it transmits during the third quarter of the cycle.
Station $\mathbf{r}_3$ uses state $101$ with a finer resolution consisting of eight states, so that it transmits in the sixth slot of the cycle, which is divided into $8$ slots.
Station $\mathbf{r}_3$'s resolution can be seen as a refinement of that of station $\mathbf{r}_5$.
Since $10$ is a prefix of $101$, and stations $\mathbf{r}_3$ and $\mathbf{r}_5$ are two-hop peers as shown in Fig.~\ref{fig:network}, these two stations collide (at receiver $\mathbf{r}_4$).
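For illustration, the prefix test in (\ref{eqn:collision}) is straightforward to implement; the following is a minimal Python sketch of ours (states as plain binary strings), not part of any protocol specification:
\begin{verbatim}
def collides(x_i: str, x_j: str) -> bool:
    # Two slots overlap in time exactly when one state
    # (a binary string) is a prefix of the other.
    shorter, longer = sorted((x_i, x_j), key=len)
    return longer.startswith(shorter)

assert collides("10", "101")      # r_5 and r_3 above
assert not collides("10", "110")  # disjoint slots
assert collides("01", "01")       # same slot, same resolution
\end{verbatim}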
We assume that at the end of each cycle, each station acquires the current states of its one-hop and two-hop peers, error-free.
Such message exchanges can be carried out either over a control channel or over a dedicated time period.
The careful reader may object that this itself requires a collision-free schedule.
However, since this control information is relatively low-rate, we assume that other techniques can be utilized for sending it.
For example, stations can use a random access scheme to exchange the short control messages.
Alternatively, the rapid on-off division duplex (RODD) scheme in \cite{DGLZ:1,LZDG:1} can be used here, which enables all stations to exchange their control messages simultaneously.
From now on, we will assume stations exchange state information within a control frame orthogonal to data frames in time or in frequency.\footnote{Assuming that the control frame is short, its impact on the throughput is ignored in this paper.}
Let stations choose their next states based only on the current states of their one-hop and two-hop peers and themselves.
The state process of the MAC protocol can be modeled as a \textit{Markov Chain of Markov Fields} (MCMF) \cite{XGCH:1}, \textit{i.e.}, a process for which the states $\pmb{X}=\lbrace\pmb{X}(t)\rbrace_{t\in\mathbb{N}}$ satisfy
\begin{itemize}
\item
$\pmb{X}(1),\pmb{X}(2),\dots$ is a Markov chain, and
\item
for every $t$, $\pmb{X}(t)$ is a Markov field conditioned on $\pmb{X}(t-1)$.
\end{itemize}
In fact, in our case $\pmb{X}(t)$ consists of independent random variables conditioned on $\pmb{X}(t-1)$.
Here, we only consider protocols in which stations make identically distributed decisions conditioned on the same previous states of their one-hop and two-hop peers and themselves.\footnote{This rules out location-based MAC protocols (\textit{e.g.}, in \cite{NWRB:2}).}
In Sections~\ref{sec:1d} and~\ref{sec:2d} we measure the performance by the one-hop broadcast throughput $\rho_\text{BC}$, which is the average proportion of time a station receives packets in each cycle.
A station receives a packet if and only if it does not transmit and exactly one of its peers transmits.
If there is no collision,
\begin{equation}
\label{eqn:rho}
\rho_\text{BC}=\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\sum_{\mathbf{r}^\prime\in V_\mathbf{r}}2^{-l_{\mathbf{r}^\prime}}=\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\lvert V_\mathbf{r}\rvert2^{-l_\mathbf{r}}.
\end{equation}
The two expressions are obtained by counting throughput from the receiver side and the transmitter side, respectively.
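As a quick sanity check (a toy example of ours, not one of the paper's simulations), the two counts in (\ref{eqn:rho}) can be evaluated on a small graph and agree, as the double counting requires:
\begin{verbatim}
# Three stations on a line, all using resolution l_r = 2.
peers = {0: {1}, 1: {0, 2}, 2: {1}}
l = {0: 2, 1: 2, 2: 2}

rx = sum(sum(2.0**-l[q] for q in peers[r]) for r in peers) / len(peers)
tx = sum(len(peers[r]) * 2.0**-l[r] for r in peers) / len(peers)
assert abs(rx - tx) < 1e-12
print(rx)  # (1/4 + 2/4 + 1/4) / 3 = 1/3
\end{verbatim}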
In Section~\ref{sec:mc}, we use the one-hop multicast throughput $\rho_\text{MC}$ to measure the performance.
In this case, a station receives a packet if and only if it does not transmit, exactly one of its peers transmits, and it is an intended receiver of the packet.
Let $D_\mathbf{r}\subseteq V_\mathbf{r}$ denote the set of intended receivers of the multicast by $\mathbf{r}$.
If there is no collision, the one-hop multicast throughput is
\begin{equation}
\label{eqn:rho_mc}
\rho_\text{MC}=\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\sum_{\mathbf{r}^\prime\colon\mathbf{r}\in D_{\mathbf{r}^\prime}}2^{-l_{\mathbf{r}^\prime}}=\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\lvert D_\mathbf{r}\rvert2^{-l_\mathbf{r}}.
\end{equation}
It should be noted that under the concept of multiple resolutions, the structure of the states can be more complex than that described here.
For example, the states may be represented by ternary codes, so the number of slots in a cycle need not be a power of $2$.
Also, to represent collisions using the prefix condition (\ref{eqn:collision}), it is not required that all slots in a cycle have the same length; the only requirements are that all slot boundaries of a coarse resolution are also slot boundaries of a fine resolution, and that two slots overlap in time if and only if the states representing the slots satisfy the prefix condition.
\section{Broadcast in One-Dimensional Networks}
\label{sec:1d}
\subsection{Determining the number of states}
\label{subsec:l_1d}
We first consider one-dimensional networks, \textit{i.e.}, all stations lie on a straight line.
We further assume the following: if $\mathbf{r}_i$ and $\mathbf{r}_j$ are one-hop peers, then all stations located between $\mathbf{r}_i$ and $\mathbf{r}_j$ are also one-hop peers of both $\mathbf{r}_i$ and $\mathbf{r}_j$.
To avoid collision, a station and all its one-hop peers must transmit at different times.
The following result shows that a station can determine its resolution solely based on the size of the largest one-hop neighborhood that it belongs to.
\begin{thm}
\label{thm:num_states_1d}
Suppose that in a one-dimensional network each station shares the number of stations within its one-hop neighborhood (\textit{i.e.}, $1+|V_\mathbf{r}|$ for station $\mathbf{r}$) with all its one-hop peers.
Then collision-free configurations are guaranteed to exist when stations choose their resolutions according to
\begin{equation}
\label{eqn:resolution_1d}
l_\mathbf{r}=\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}\rvert)\biggr)\biggr\rceil.
\end{equation}
The resulting one-hop broadcast throughput is given by (\ref{eqn:rho}), where $l_\mathbf{r}$ in (\ref{eqn:rho}) is specified in (\ref{eqn:resolution_1d}).
\end{thm}
Fig.~\ref{fig:network} illustrates the procedure in Theorem~\ref{thm:num_states_1d}.
Each station computes the size of its one-hop neighborhood (which is labeled on top of the half-circle representing its transmission range).
Stations then choose their resolutions following (\ref{eqn:resolution_1d}).
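The rule (\ref{eqn:resolution_1d}) is simple to compute locally; below is a minimal sketch of ours, where the hypothetical input \texttt{peers} maps each station to its set $V_\mathbf{r}$:
\begin{verbatim}
import math

def resolutions(peers):
    # w[r] = 1 + |V_r|, the size of r's one-hop neighborhood
    w = {r: 1 + len(peers[r]) for r in peers}
    return {r: math.ceil(math.log2(max(w[q] for q in peers[r] | {r})))
            for r in peers}

# Five stations on a line, each hearing only its immediate neighbors:
peers = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(resolutions(peers))  # every neighborhood has size <= 3, so l_r = 2
\end{verbatim}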
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{throughput_mr_ms_sa}
\caption{Throughput of one-dimensional networks versus node density.}
\label{fig:throughput}
\end{figure}
Consider finite segments of one-dimensional networks where stations are distributed following a Poisson point process with node density $\lambda$.
We assume that the network is a \textit{unit disk graph} with transmission range $R=1$, \textit{i.e.}, there is a link between two stations if and only if the distance between them is at most $R=1$.
We evaluate the throughput by averaging over $100$ different realizations of the networks.
How the throughput $\rho_\text{BC}$ varies with the node density $\lambda$ is shown in Fig.~\ref{fig:throughput}.
The throughput oscillation is due to the fact that the size of any state space is a power of $2$.
In a worst-case scenario, if a station determines that it needs $2^l+1$ states, it has to use a resolution of $2^{l+1}$ states, meaning that almost half of the cycle will be left idle; hence the throughput is close to $0.5$.
Thus, a small increase in the node density may cause a rather large drop in the throughput under certain circumstances.
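This rounding effect is easy to tabulate (a small check of ours):
\begin{verbatim}
import math

for l in range(1, 6):
    needed = 2**l + 1                        # states actually required
    used = 2**math.ceil(math.log2(needed))   # power-of-2 resolution
    print(needed, used, needed / used)       # ratio tends to 0.5 from above
\end{verbatim}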
After forming a collision-free configuration, there may still be many idle slots in certain neighborhoods.
To illustrate this, consider a collision-free configuration which is formed by letting stations, from left to right, pick the earliest slot to transmit such that they do not collide with any station.
We then let stations, from left to right, reclaim the idle slots to transmit, such that station $\mathbf{r}$ reclaims at most $\Bigl\lceil\frac{2^{l_\mathbf{r}}}{\lvert V_\mathbf{r}\rvert}\Bigr\rceil-1$ additional slots and ensures that it does not collide with other stations.
By doing so, station $\mathbf{r}$ transmits at a rate approximately equal to $\frac{1}{\lvert V_\mathbf{r}\rvert}$.
The top curve in Fig.~\ref{fig:throughput} shows that a significant improvement in throughput results from this reclaiming.
For comparison, we also compute the throughput for slotted ALOHA in a one-dimensional network, where stations use the same fixed transmission probability $p$ but do not exchange any state information.
Consider a segment of a one-dimensional network of length $2R$ with a station at the center.
This station has $k$ peers with probability $\exp(-\lambda2R)\frac{(\lambda2R)^k}{k!}$, and it receives a packet successfully with probability $kp(1-p)^k$.
Then,
\begin{IEEEeqnarray}{rCl}
\label{eqn:sa_p}
\rho_\text{BC}(p)&=&\sum_{k=1}^\infty\exp(-\lambda2R)\frac{(\lambda2R)^k}{k!}kp(1-p)^k\IEEEnonumber\\
&=&\lambda2Rp(1-p)\exp(-\lambda2Rp).\IEEEnonumber
\end{IEEEeqnarray}
The maximum throughput is
\begin{displaymath}
\rho_\text{BC}=\frac{\lambda2R}{2+\sqrt{4+(\lambda2R)^2}}\exp\Biggl(-\frac{2\lambda2R}{2+\lambda2R+\sqrt{4+(\lambda2R)^2}}\Biggr)
\end{displaymath}
which is achieved with transmission probability
\begin{displaymath}
\label{eqn:p_opt_sa}
p=\frac{2}{2+\lambda2R+\sqrt{4+(\lambda2R)^2}}.
\end{displaymath}
This optimized throughput, with $R=1$, is plotted in Fig.~\ref{fig:throughput}.
The multi-resolution MAC protocol provides $46.7\%$ to $112.2\%$ improvement in terms of throughput over slotted ALOHA.
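The optimization above is elementary to verify numerically; the following sketch of ours evaluates the optimal $p$ and checks the closed-form throughput for a given mean neighborhood size $\mu=\lambda2R$:
\begin{verbatim}
import math

def aloha_optimum(mu):
    p = 2.0 / (2.0 + mu + math.sqrt(4.0 + mu * mu))
    rho = mu * p * (1.0 - p) * math.exp(-mu * p)   # rho_BC at the optimal p
    closed = (mu / (2.0 + math.sqrt(4.0 + mu * mu))
              * math.exp(-2.0 * mu / (2.0 + mu + math.sqrt(4.0 + mu * mu))))
    assert abs(rho - closed) < 1e-9                # the two forms agree
    return p, rho

print(aloha_optimum(4.0))  # e.g., lambda = 2, R = 1
\end{verbatim}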
\subsection{Multi-Resolution MAC Protocol}
\label{sec:multi_resolution_1d}
In the following we propose a \textit{multi-resolution protocol} that leads to a collision-free configuration \textit{starting from an arbitrary initial configuration}.
Stations can learn two-hop state information in each cycle as follows.
In the $t$-th cycle, station $\mathbf{r}$ collects $\bigl\lbrace X_{\mathbf{r}^\prime}(t)\bigr\rbrace_{\mathbf{r}^\prime\in V_\mathbf{r}}$, and then broadcasts $\bigl\lbrace X_{\mathbf{r}^\prime}(t)\bigr\rbrace_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}$.
Hence, station $\mathbf{r}$ knows $X_{\mathbf{r}^\prime}(t)$ for all one-hop and two-hop peers $\mathbf{r}^\prime$ (this is accomplished by letting station $\mathbf{r}$ broadcast $2l_\mathbf{r}+\sum_{\mathbf{r}^\prime\in V_\mathbf{r}}l_{\mathbf{r}^\prime}$ bits), and it selects its state for the $(t+1)$-st cycle following Protocol~\ref{alg:mr_bc}, where the parameter $\epsilon$ is set to $0$ in the case of one-dimensional networks (in the case of two-dimensional networks discussed in Section~\ref{sec:2d}, we will set $\epsilon$ to a strictly positive number).
\begin{algorithm}[t]
\caption{Multi-Resolution MAC Protocol for Broadcast}
\label{alg:mr_bc}
\begin{algorithmic}[1]
\WHILE{station $\mathbf{r}$ is active}
\STATE
$\mathbf{r}$ sets the votes on all states to zero.
\FOR{$\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace$}
\IF{$\mathbf{r}$ is the only station occupying its current state in station $\mathbf{r}^\prime$'s one-hop neighborhood}
\STATE
$\mathbf{r}$'s current state is assigned a single vote of weight one.
\ELSE
\STATE
$\mathbf{r}$ determines which states (according to $\mathbf{r}$'s resolution) are idle or have collisions in $\mathbf{r}^\prime$'s one-hop neighborhood.
\STATE
A vote of weight $\frac{1}{n}$ is added to each such state, where $n$ is the number of such states.
\ENDIF
\ENDFOR
\IF{$n_s>0$ for multiple $s$'s, where $n_s$ is the total weight state $s$ receives}
\STATE
Replace $n_s$ by $n_s+\epsilon$, where $\epsilon\ge0$, for all $s$.
\ENDIF
\STATE
$\mathbf{r}$ selects state $s$ with a probability proportional to $f(n_s)$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
In Protocol~\ref{alg:mr_bc}, $f:\mathbb{R}\to\mathbb{R}$ can be any increasing function with $f(0)=0$. Empirically, a good choice is $f(n_s)=\exp(Jn_s)\mathbf{1}_{\lbrace n_s>0\rbrace}$, where $\mathbf{1}_{\lbrace\cdot\rbrace}$ is the indicator function and $J>0$ is the \textit{strength of interaction} (more on this later).
The idea of Protocol~\ref{alg:mr_bc} is that a station `reserves' a slot for a peer if it knows that this peer does not collide with other peers, and notifies any peer experiencing collisions to stay away from these `reserved' slots.
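To fix ideas, the following Python sketch compresses one voting round of Protocol~\ref{alg:mr_bc} for a single station; it is our own simplified rendering in which all stations share one resolution, so a state is just a slot index. Here \texttt{slot} is the station's current slot, \texttt{neighborhood} is its set $V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace$, and \texttt{occupancy[q]} lists the (station, slot) pairs heard in station $q$'s one-hop neighborhood, including $q$ itself:
\begin{verbatim}
import math, random
from collections import defaultdict

def next_state(slot, neighborhood, occupancy, num_slots, J=1.0, eps=0.0):
    votes = defaultdict(float)
    for q in neighborhood:                       # q ranges over V_r and r
        used = [s for (u, s) in occupancy[q]]
        if used.count(slot) == 1:                # r alone in its slot at q
            votes[slot] += 1.0                   # a single vote of weight one
        else:                                    # idle or colliding slots at q
            bad = [s for s in range(num_slots) if used.count(s) != 1]
            for s in bad:
                votes[s] += 1.0 / len(bad)       # weight 1/n on each
    if sum(1 for v in votes.values() if v > 0) > 1:
        for s in range(num_slots):               # epsilon-smoothing; a no-op
            votes[s] += eps                      # when eps = 0 (the 1-D case)
    weights = [math.exp(J * votes[s]) if votes[s] > 0 else 0.0
               for s in range(num_slots)]        # f(n_s) = e^{J n_s} 1{n_s>0}
    return random.choices(range(num_slots), weights=weights)[0]
\end{verbatim}
With $\epsilon=0$ this reduces to the one-dimensional variant; the smoothing step matters only in the two-dimensional setting of Section~\ref{sec:2d}.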
We have the following convergence result for this protocol.
\begin{thm}
\label{thm:mr_1d}
If each station in a one-dimensional network chooses its resolution following Theorem~\ref{thm:num_states_1d} and executes Protocol~\ref{alg:mr_bc}, then all stations will converge to a collision-free configuration, regardless of the initial state.
The resulting throughput is given in Theorem~\ref{thm:num_states_1d}.
\end{thm}
\begin{IEEEproof}
Using Protocol~\ref{alg:mr_bc}, if a station does not collide with any one-hop or two-hop peers, then all the votes will be given to its current state, and it will remain in its current state with probability one.
Therefore, if the current configuration is collision-free, then the same configuration will appear in every subsequent cycle, so every collision-free configuration is absorbing.
Hence we only need to consider the case when the current configuration is not collision-free and show that such a configuration is transient.
To do this we explicitly construct a collision-free configuration, to which the stations in the current configuration can transition with positive probability.
Without loss of generality, assume the stations are indexed such that $\mathbf{r}_i$ is on the left of $\mathbf{r}_j$ if and only if $i<j$.
Stations take turns to find a state that is collision-free with all stations on their left:
\begin{itemize}
\item
Station $\mathbf{r}_0$ remains in its initial state, so it is collision-free with all stations on its left (notice that following Protocol~\ref{alg:mr_bc}, \textit{every station has a nonzero probability of remaining in its current state}).
\item
Now, assume stations $\mathbf{r}_0,\dots,\mathbf{r}_{i-1}$ are collision-free with all stations on their left. For station $\mathbf{r}_i$:
\begin{enumerate}
\item
If its current state is collision-free with all stations on its left (including the special case where there is no neighboring station on its left), then it remains in its current state.
\item
Otherwise, consider the leftmost one-hop peer of station $\mathbf{r}_i$, which we denote by $\mathbf{r}_j$.
If $\mathbf{r}_i$ collides with some station $\mathbf{r}_k$ on its left, then $\mathbf{r}_j$ must be able to detect it, because $\mathbf{r}_j$ must be a one-hop peer of both $\mathbf{r}_i$ and $\mathbf{r}_k$ ($\mathbf{r}_j$ and $\mathbf{r}_k$ can be the same station).
$\mathbf{r}_j$ and all one-hop peers $\mathbf{r}_m$ of $\mathbf{r}_j$ use resolutions of at least $2^{\lceil\log_2(1+\lvert V_{\mathbf{r}_j}\rvert)\rceil}$ states.
Therefore, from station $\mathbf{r}_j$'s point of view, there are at most $\lvert V_{\mathbf{r}_j}\rvert$ distinct busy periods, each of length at most $2^{-\lceil\log_2(1+\lvert V_{\mathbf{r}_j}\rvert)\rceil}$.
This means that there is at least one idle slot according to $\mathbf{r}_j$'s resolution, \textit{i.e.}, none of the $\mathbf{r}_m$'s use that slot.
Therefore, $\mathbf{r}_j$ gives a vote of nonzero weight on this slot to $\mathbf{r}_i$; then, with nonzero probability, $\mathbf{r}_i$ chooses this slot (or a fraction of this slot if it uses a finer resolution) and becomes collision-free with all stations on its left.
\end{enumerate}
\end{itemize}
Finally, when station $\mathbf{r}_{\lvert V\rvert-1}$ finds a state that is collision-free with all stations on its left, the configuration is now collision-free.
Therefore, all configurations with collisions are transient, proving both Theorems~\ref{thm:num_states_1d} and~\ref{thm:mr_1d}.
\end{IEEEproof}
\subsection{Simulations: Convergence Speed-up by Annealing}
\label{subsec:annealing_1d}
\begin{figure}[t]
\centering
\subfigure[Convergence time]{
\includegraphics[width=3.5in]{time_1d}
\label{fig:time_1d}
}
\hspace{-0.4in}
\subfigure[Convergence percentage]{
\includegraphics[width=3.5in]{percentage_1d}
\label{fig:percentage_1d}
}
\caption[]{Simulations of the multi-resolution protocol with annealing for one-dimensional networks.}
\label{fig:annealing_1d}
\end{figure}
Simulations of the proposed protocol show that it may take a long time for a collision-free configuration to appear.
Here we propose speeding up the convergence by \textit{annealing}, \textit{i.e.}, we consider the multi-resolution protocol with $f(n_s)=\exp(J(t)n_s)\mathbf{1}_{\lbrace n_s>0\rbrace}$, where $J(t)=\gamma J(t-1)$, $\gamma>1$ controls the increase in the strength of interaction, and $J(0)=1$.
Define the convergence time to be the first time that a certain configuration is observed and remains unchanged till the end of the simulation, and the convergence percentage to be the proportion of stations that do not collide with other stations in that configuration.
\textit{This configuration may not be collision-free.}
\textit{This means that there is a nonzero probability that the network transits to another configuration, but this probability is so small (as $J(t)$ is very large, resulting in every station staying in the state with maximum vote) that this transition is practically impossible.}
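A toy calculation of ours makes the effect of the geometric schedule visible: as $J(t)$ grows, the selection distribution concentrates on the state with the maximum vote:
\begin{verbatim}
import math

def selection_probs(votes, J):
    w = [math.exp(J * n) if n > 0 else 0.0 for n in votes]
    return [x / sum(w) for x in w]

votes, J, gamma = [0.5, 1.0, 0.0], 1.0, 1.5
for t in range(6):
    print(t, [round(p, 3) for p in selection_probs(votes, J)])
    J *= gamma   # J(t) = gamma * J(t-1), J(0) = 1
\end{verbatim}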
We consider a line segment of length $50$ on which stations are distributed following a Poisson point process with node density $\lambda$.
The network is a unit disk graph with transmission range $R=1$.
Ten simulations are run for each combination of $\lambda$ and $\gamma$.
All simulations last for $2000$ iterations.
The convergence time and percentage are plotted in Figs.~\ref{fig:time_1d} and~\ref{fig:percentage_1d} respectively.
When $\gamma$ is too small, the effect of annealing is not significant, and, as shown, the algorithm may not have converged after $2000$ iterations.
When $\gamma$ is too large, the convergence time is reduced drastically, but the proportion of stations experiencing collisions is still significant.
Notice the similarity of these results to the annealing process in statistical mechanics: when the annealing is too slow, it takes a long time to reach the state of minimum energy; when the annealing is too fast, the system reaches some metastable state or becomes glassy with a noncrystalline structure.
\section{Broadcast in Two-Dimensional Networks}
\label{sec:2d}
\subsection{Determining the number of states}
\label{subsec:l_2d}
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=3.5in]{counterexamples1}
\label{fig:counterexamples1}
}
\subfigure[]{
\includegraphics[width=3.5in]{counterexamples2_o}
\label{fig:counterexamples2}
}
\caption[]{Intricacies for two-dimensional networks: \subref{fig:counterexamples1} determining the number of states (`??' labels stations that are unable to pick a state without collision), \subref{fig:counterexamples2} converging to a collision-free configuration.}
\label{fig:counterexamples}
\end{figure}
Unlike in one-dimensional networks, in two-dimensional networks the resolution $l_\mathbf{r}$ cannot be completely determined by (\ref{eqn:resolution_1d}).
An example is shown in the left part of Fig.~\ref{fig:counterexamples1}.
If (\ref{eqn:resolution_1d}) is used here, every station has two one-hop peers and therefore should use a resolution of four states.
But since every station is within two hops of every other station, at least five states are needed to resolve any collision.
For a two-dimensional network, the following theorem shows that (\ref{eqn:resolution_1d}) gives a lower bound on the needed resolution.
An upper bound on the resolution is also given.
\begin{thm}
\label{thm:num_states_2d}
A lower bound on the needed resolution for a collision-free configuration for broadcast to exist is given by
\begin{equation}
\label{eqn:resolution_lb_2d}
\underline{l}_\mathbf{r}=\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}\rvert)\biggr)\biggr\rceil.
\end{equation}
A corresponding upper bound is given by
\begin{equation}
\label{eqn:resolution_ub_2d}
\overline{l}_\mathbf{r}=\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}^2\rvert)\biggr)\biggr\rceil,
\end{equation}
where $V_\mathbf{r}^2$ is the set of one-hop or two-hop peers of $\mathbf{r}$.
The resulting one-hop broadcast throughput of a two-dimensional network is bounded as follows:
\begin{equation}
\label{eqn:throughput_2d}
\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\lvert V_\mathbf{r}\rvert2^{-\overline{l}_\mathbf{r}}\le\rho_\text{BC}\le\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\lvert V_\mathbf{r}\rvert2^{-\underline{l}_\mathbf{r}}.
\end{equation}
\end{thm}
\begin{IEEEproof}
For a station to receive a packet from each one-hop peer, the station itself and all its one-hop peers must transmit at different times.
If $\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace$, station $\mathbf{r}$ is within the one-hop neighborhood $V_{\mathbf{r}^\prime}\cup\lbrace\mathbf{r}^\prime\rbrace$, and therefore $1+\lvert V_{\mathbf{r}^\prime}\rvert$ states are required to resolve any collision in $V_{\mathbf{r}^\prime}\cup\lbrace\mathbf{r}^\prime\rbrace$.
Then, at least $\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}\rvert)$ states are required.
Finally, since $\underline{l}_\mathbf{r}$ must be an integer with $2^{\underline{l}_\mathbf{r}}\ge\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}\rvert)$, the lower bound (\ref{eqn:resolution_lb_2d}) is established.
Observe that a station cannot transmit when one of its one-hop or two-hop peers transmits in a collision-free configuration.
In the worst case, at most one station in every two-hop neighborhood transmits at any time.
Now, if $\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace$, station $\mathbf{r}$ is within the two-hop neighborhood $V_{\mathbf{r}^\prime}^2\cup\lbrace\mathbf{r}^\prime\rbrace$, and in the worst case $1+\lvert V_{\mathbf{r}^\prime}^2\rvert$ states are required to resolve any collision in $V_{\mathbf{r}^\prime}^2\cup\lbrace\mathbf{r}^\prime\rbrace$.
Therefore, at most $\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}^2\rvert)$ states are required.
Finally, since $\overline{l}_\mathbf{r}$ must be an integer with $2^{\overline{l}_\mathbf{r}}\ge\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}^2\rvert)$, the upper bound (\ref{eqn:resolution_ub_2d}) is established.
\end{IEEEproof}
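For concreteness, both bounds translate directly into code; in this sketch of ours, the hypothetical inputs \texttt{peers} and \texttt{peers2} map each station to its one-hop set $V_\mathbf{r}$ and its one- or two-hop set $V_\mathbf{r}^2$:
\begin{verbatim}
import math

def resolution_bounds(peers, peers2):
    lo = {r: math.ceil(math.log2(
             max(1 + len(peers[q]) for q in peers[r] | {r})))
          for r in peers}
    hi = {r: math.ceil(math.log2(
             max(1 + len(peers2[q]) for q in peers2[r] | {r})))
          for r in peers}
    return lo, hi
\end{verbatim}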
The upper bound in Theorem~\ref{thm:num_states_2d} can be quite loose.
When the network $G$ is `well-connected', it can be improved by using the maximum clique in $G^2$ containing $\mathbf{r}$, where $G^2=(V,A^2)$ is the square of $G$, \textit{i.e.}, $(\mathbf{r}_i,\mathbf{r}_j)\in A^2$ if $\mathbf{r}_i$ and $\mathbf{r}_j$ are one-hop or two-hop peers in $G$.
More formally, we require $G^2$ to be \textit{chordal}, meaning that in any cycle of at least four vertices there must exist an edge between some pair of nonadjacent vertices.
A key property of chordal graphs is that they have a \textit{perfect elimination ordering} of their vertices \cite{DFOG:1}, \textit{i.e.}, one can order the vertices by repeatedly finding a vertex such that all its neighbors form a clique, and then removing it along with all incident edges.
We use this property to prove the next theorem, which shows that the choice of $l_\mathbf{r}$ based on the maximum clique size in $G^2$ is adequate.
\begin{thm}
\label{thm:num_states_2d_chordal}
Suppose a two-dimensional network $G$ has a chordal square, \textit{i.e.}, $G^2$ is chordal.
If station $\mathbf{r}$ uses resolution $l_\mathbf{r}=\lceil\log_2\lvert C_\mathbf{r}\rvert\rceil$, where $C_\mathbf{r}$ is the maximum clique in $G^2$ containing $\mathbf{r}$, then it is possible for each station to choose a state such that collision-free configurations exist.
\end{thm}
\begin{IEEEproof}
By assumption, there exists a perfect elimination ordering of vertices for $G^2$.
Without loss of generality, assume the stations are indexed following the reverse of the perfect elimination ordering, \textit{i.e.}, $\mathbf{r}_j$ appears after $\mathbf{r}_i$ in the perfect elimination ordering if and only if $j<i$.
We will show by induction that station $\mathbf{r}_i$ must be able to find a state so that it is collision-free with stations $\mathbf{r}_j$ where $j<i$.
Station $\mathbf{r}_0$ can pick any state.
Now, assume stations $\mathbf{r}_0,\dots,\mathbf{r}_{i-1}$ pick their states such that they are collision-free among themselves.
Then, for station $\mathbf{r}_i$, let $C=\lbrace\mathbf{r}_j\colon j<i\text{ and }(\mathbf{r}_i,\mathbf{r}_j)\in A^2\rbrace$.
By definition of perfect elimination ordering, $C\cup\lbrace\mathbf{r}_i\rbrace$ is a clique in $G^2$.
Therefore, $\mathbf{r}_i$ and all $\mathbf{r}_j\in C$ use resolutions of at least $2^{\lceil\log_2(1+\lvert C\rvert)\rceil}$ states.
Hence, from station $\mathbf{r}_i$'s point of view, there are $\lvert C\rvert$ distinct busy periods, each of length at most $2^{-\lceil\log_2(1+\lvert C\rvert)\rceil}$.
This means that there is at least one idle slot according to $\mathbf{r}_i$'s resolution, \textit{i.e.}, none of the $\mathbf{r}_j$'s in $C$ use that slot.
Therefore, station $\mathbf{r}_i$ can pick this slot (or a fraction of this slot if it uses a finer resolution) and therefore becomes collision-free with all stations $\mathbf{r}_j$ where $j<i$.
Repeating this argument, when station $\mathbf{r}_{\lvert V\rvert-1}$ finds a state that is collision-free with all stations $\mathbf{r}_j$ where $j<\lvert V\rvert-1$, the configuration is collision-free.
\end{IEEEproof}
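The constructive argument above is itself an algorithm. The sketch below is our own rendering for networks whose square is chordal: \texttt{adj2} maps each station to its neighborhood in $G^2$ and \texttt{l} holds the resolutions; the proof guarantees the inner loop of \texttt{greedy\_slots} always finds a slot when $l_\mathbf{r}=\lceil\log_2\lvert C_\mathbf{r}\rvert\rceil$:
\begin{verbatim}
def perfect_elimination_ordering(adj2):
    adj = {v: set(ns) for v, ns in adj2.items()}
    order = []
    while adj:
        # A simplicial vertex: all its remaining neighbors form a clique.
        # next() raises StopIteration iff the graph is not chordal.
        v = next(u for u in adj
                 if all(b in adj[a]
                        for a in adj[u] for b in adj[u] if a != b))
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

def greedy_slots(adj2, l):
    slots = {}
    for v in reversed(perfect_elimination_ordering(adj2)):
        # Time intervals already taken by placed peers of v in G^2.
        taken = [(slots[u] / 2**l[u], (slots[u] + 1) / 2**l[u])
                 for u in adj2[v] if u in slots]
        for s in range(2**l[v]):
            a, b = s / 2**l[v], (s + 1) / 2**l[v]
            if all(b <= ta or a >= tb for ta, tb in taken):
                slots[v] = s     # first collision-free slot at v's resolution
                break
    return slots

# Example: a triangle in G^2 with resolution 2 everywhere.
adj2 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(greedy_slots(adj2, {0: 2, 1: 2, 2: 2}))  # three disjoint slots
\end{verbatim}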
The condition in Theorem~\ref{thm:num_states_2d_chordal} is sufficient but not necessary.
For example, the right part of Fig.~\ref{fig:counterexamples1} shows that a collision-free configuration cannot be found using the resolutions predicted in Theorem~\ref{thm:num_states_2d}.
Consider also the right part of Fig.~\ref{fig:counterexamples2}, which is a $4\times4$ square lattice where multiple stations are collocated on some lattice points.
Theorem~\ref{thm:num_states_2d} predicts that every station uses a resolution of eight states, and a collision-free configuration exists, as shown in the figure.
For illustrative purposes, we use the decimal representation of the states, \textit{e.g.}, state $101$ is denoted as $5$.
In both cases, $G^2$ is not chordal.
Since the sizes of all state spaces are powers of $2$, additional states are provisioned in many cases.
Therefore, a collision-free configuration is likely to exist under the rule in Theorem~\ref{thm:num_states_2d_chordal} even for many networks without chordal squares.
\khh{}{
For a general two-dimensional network, \KH{upper and lower bounds on the resolution}{Theorem~\ref{thm:num_states_1d} only gives a \textit{lower bound} $\underline{l}_\mathbf{r}\le l_\mathbf{r}$ on the resolution.
We can obtain a similar expression for an \textit{upper bound} $\overline{l}_\mathbf{r}\ge l_\mathbf{r}$ on the resolution.
These bounds} are characterized as follows.
\KH{
\begin{thm}
\label{thm:num_states_2d}
\kh{There exists a self-stabilizing protocol for which the resolution of each station $\mathbf{r}$ is lower bounded by}{\K{A}{The} lower bound\K{}{ $\underline{l}_\mathbf{r}$} on the resolution used by station $\mathbf{r}$ is}\K{
\begin{equation}
\label{eqn:resolution_lb_2d}
\underline{l}_\mathbf{r}=\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}\rvert)\biggr)\biggr\rceil
\end{equation}
}{ computed as follows:
\begin{enumerate}
\item
$\underline{w}_\mathbf{r}=1+\lvert V_\mathbf{r}\rvert$,
\item
$\underline{l}_\mathbf{r}=\lceil\log_2(\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}\underline{w}_{\mathbf{r}^\prime})\rceil$.
\end{enumerate}}\kh{and upper bounded by}{\K{A}{The} \KHH{corresponding}{} upper bound\K{}{ $\overline{l}_\mathbf{r}$}\KHH{}{ on the resolution used by station $\mathbf{r}$} is}\K{
\begin{equation}
\label{eqn:resolution_ub_2d}
\overline{l}_\mathbf{r}=\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}^2\rvert)\biggr)\biggr\rceil,
\end{equation}
}{ computed as follows:
\begin{enumerate}
\item
$\overline{w}_\mathbf{r}=1+\lvert V_\mathbf{r}^2\rvert$,
\item
$\overline{l}_\mathbf{r}=\lceil\log_2(\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}\overline{w}_{\mathbf{r}^\prime})\rceil$.
\end{enumerate}}where $V_\mathbf{r}^2$ is the set of one-hop or two-hop peers of $\mathbf{r}$.
The resulting one-hop broadcast throughput of a two-dimensional network is bounded as follows:
\KHH{\begin{equation}
\label{eqn:throughput_2d}
\frac{1}{\lvert V\vert}\sum_{\mathbf{r}\in V}\lvert V_\mathbf{r}\rvert2^{-\overline{l}_\mathbf{r}}\le\rho_\text{BC}\le\frac{1}{\lvert V\vert}\sum_{\mathbf{r}\in V}\lvert V_\mathbf{r}\rvert2^{-\underline{l}_\mathbf{r}}.
\end{equation}}{\begin{equation}
\label{eqn:throughput_2d}
\bigl\langle\lvert N_\mathbf{r}\rvert2^{-\overline{l}_\mathbf{r}}\bigr\rangle_\mathbf{r}\le\rho_\text{BC}\le\bigl\langle\lvert N_\mathbf{r}\rvert2^{-\underline{l}_\mathbf{r}}\bigr\rangle_\mathbf{r}.
\end{equation}}
\end{thm}
\begin{IEEEproof}
For a station to receive a packet from each one-hop peer, the station itself and all its one-hop peers must transmit at different times.
If $\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace$, station $\mathbf{r}$ is within the one-hop neighborhood $V_{\mathbf{r}^\prime}\cup\lbrace\mathbf{r}^\prime\rbrace$, and therefore $1+\lvert V_{\mathbf{r}^\prime}\rvert$ states are required to resolve any collision in $V_{\mathbf{r}^\prime}\cup\lbrace\mathbf{r}^\prime\rbrace$.
Then, at least $\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}\rvert)$ states are required.
Finally, since $\underline{l}_\mathbf{r}$ must be an integer with $2^{\underline{l}_\mathbf{r}}\ge\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}\rvert)$, the lower bound (\ref{eqn:resolution_lb_2d}) is established.
Observe that a station cannot transmit when one of its one-hop or two-hop peers transmits in a collision-free configuration.
In the worst case, at most one station in every two-hop neighborhood transmits at any time.
Now, if $\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace$, station $\mathbf{r}$ is within the two-hop neighborhood $V_{\mathbf{r}^\prime}^2\cup\lbrace\mathbf{r}^\prime\rbrace$, and in the worst case $1+\lvert V_{\mathbf{r}^\prime}^2\rvert$ states are required to resolve any collision in $V_{\mathbf{r}^\prime}^2\cup\lbrace\mathbf{r}^\prime\rbrace$.
Therefore, at most $\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}^2\rvert)$ states are required.
Finally, since $\overline{l}_\mathbf{r}$ must be an integer with $2^{\overline{l}_\mathbf{r}}\ge\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}(1+\lvert V_{\mathbf{r}^\prime}^2\rvert)$, the upper bound (\ref{eqn:resolution_ub_2d}) is established.
\end{IEEEproof}
}{}
}
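For concreteness, the following Python sketch (our own illustration, not part of the protocol specification; the adjacency-list representation and function names are ours) evaluates both bounds of Theorem~\ref{thm:num_states_2d} on a given topology, with \texttt{peers} playing the role of $\lbrace V_\mathbf{r}\rbrace$:
\begin{verbatim}
import math

def resolution_bounds(peers):
    # peers: dict mapping each station r to the set V_r of one-hop peers.
    # peers2[r]: the set V_r^2 of one-hop or two-hop peers of r.
    peers2 = {r: set().union(*(peers[p] | {p} for p in peers[r])) - {r}
              for r in peers}
    lb, ub = {}, {}
    for r in peers:
        # Lower bound: largest one-hop neighborhood among r and its peers.
        w = max(1 + len(peers[p]) for p in peers[r] | {r})
        lb[r] = math.ceil(math.log2(w))
        # Upper bound: largest two-hop neighborhood among r and its
        # one-hop or two-hop peers.
        w2 = max(1 + len(peers2[p]) for p in peers2[r] | {r})
        ub[r] = math.ceil(math.log2(w2))
    return lb, ub

# Toy line network 0 - 1 - 2 - 3.
print(resolution_bounds({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
\end{verbatim}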
\subsection{Multi-Resolution MAC Protocol for Two-Dimensional Networks}
\label{sec:multi_resolution_2d}
Protocol~\ref{alg:mr_bc} with $\epsilon=0$ does not work for all two-dimensional networks.
In particular, the resulting Markov chain can have an absorbing class with more than one configuration, none of which is collision-free.
An example is illustrated in Fig.~\ref{fig:counterexamples2}.
Every station uses a resolution of eight states.
The right part of Fig.~\ref{fig:counterexamples2} shows that a collision-free configuration exists.
But, if the initial configuration is the one shown in the left part of Fig.~\ref{fig:counterexamples2}, and the protocol in Section~\ref{sec:1d} is used, then the following occurs:
\begin{enumerate}
\item
All stations in initial states $1,2,3,4,5$ remain in their current states with probability one, since they do not cause any collision.
All stations in initial state $0$ can only choose $0,6,7$ as their next states, because all other states are not available.
This repeats for all subsequent iterations.
\item
Consider the four stations in the middle of the network, which have initial state $0$.
They are within two hops of each other, so they must use different states.
However, only states $0,6,7$ are available to them in any cycle.
Hence, collision-free configurations cannot be reached.
\end{enumerate}
Choosing $\epsilon>0$ in Protocol~\ref{alg:mr_bc} prevents the preceding deadlock.
For any station, if the votes received do not all point to a single state, then the station increases the total weight of the votes received for every state by a nonzero constant.
\textit{A station in this situation will have nonzero probability of choosing any state to be its next state}.
This randomization does not affect any absorbing configuration, and is necessary to establish the counterpart of Theorem~\ref{thm:mr_1d} for two-dimensional networks.
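For illustration, the following Python sketch gives one cycle of the resulting vote-and-select rule from the viewpoint of a single station. The data layout and helper names are ours, all stations are assumed (for simplicity) to use the same resolution, and the interaction $f$ is instantiated, purely as an example, by $f(0)=0$ and $f(n)=e^{\gamma n}$ for $n>0$:
\begin{verbatim}
import math, random

def next_state(r, peers, state, l, gamma=1.0, eps=0.01):
    # One cycle of the vote-based update for station r (a sketch).
    # peers[x]: one-hop peers of x; state[x]: slot currently used by x;
    # l: resolution in bits, so there are 2**l states.
    votes = [0.0] * (2 ** l)
    for rp in peers[r] | {r}:
        nbhd = peers[rp] | {rp}
        if all(state[x] != state[r] for x in nbhd if x != r):
            # r is collision-free in rp's one-hop neighborhood:
            # a single vote of weight one on r's current state.
            votes[state[r]] += 1.0
        else:
            # Vote 1/n on each state that is idle or collided in nbhd.
            counts = [0] * len(votes)
            for x in nbhd:
                counts[state[x]] += 1
            options = [s for s in range(len(votes)) if counts[s] != 1]
            for s in options:
                votes[s] += 1.0 / len(options)
    if sum(v > 0 for v in votes) > 1:
        votes = [v + eps for v in votes]   # the epsilon-randomization
    # Example interaction: f(0) = 0, f(n) = exp(gamma * n) for n > 0
    # (an assumption of this sketch, not mandated by the protocol).
    weights = [math.exp(gamma * v) if v > 0 else 0.0 for v in votes]
    return random.choices(range(len(votes)), weights=weights)[0]
\end{verbatim}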
\begin{thm}
\label{thm:mr_2d}
For a two-dimensional network, suppose each station uses a sufficiently fine resolution (\textit{e.g.}, the upper bound in Theorem~\ref{thm:num_states_2d}), so that the existence of collision-free configurations is guaranteed.
Then, starting from an arbitrary initial configuration, Protocol~\ref{alg:mr_bc} with $\epsilon>0$ will converge to a collision-free configuration.
\end{thm}
\begin{IEEEproof}
As in the proof of Theorem~\ref{thm:mr_1d}, every collision-free configuration is absorbing.
So we only need to consider the case when the initial configuration is not collision-free.
The remaining proof is similar to the proof of Theorem 1 in \cite{DLPC:1}.
We will show that an all-zero configuration, \textit{i.e.}, a configuration where every station's state is a binary string of all zeros, is reached with nonzero probability, and then in the next cycle there is a nonzero probability of reaching a collision-free configuration.
Without loss of generality, assume station $\mathbf{r}_0$ collides with some station.
Consider a spanning tree rooted at $\mathbf{r}_0$, and assume the stations are indexed following the breadth-first search order.
Using Protocol~\ref{alg:mr_bc} with $\epsilon>0$, the following happens with nonzero probability:
\begin{itemize}
\item
Station $\mathbf{r}_0$ chooses a state that collides with its child with the smallest index in the spanning tree, and repeats this for all children in the spanning tree following the breadth-first search ordering in subsequent cycles.
After colliding with all children, it then chooses the all-zero state and remains in that state.
\item
For station $\mathbf{r}_i$:
\begin{enumerate}
\item
If it does not collide with its parent in the spanning tree, then it remains in its current state until it collides with its parent.
\item
When it collides with its parent, it follows what station $\mathbf{r}_0$ does, \textit{i.e.}, it chooses a state that collides with its child with the smallest index in the spanning tree, and repeats this for all children in the spanning tree following the breadth-first search ordering in subsequent cycles.
After colliding with all children, it then chooses the all-zero state and remains in that state.
\end{enumerate}
\end{itemize}
Finally, all stations are in the all-zero state, \textit{i.e.}, every station collides with all one-hop and two-hop peers.
Therefore, in the next cycle, the stations choose any given configuration with nonzero probability; in particular, a collision-free configuration, whose existence is guaranteed, is reached with nonzero probability.
Hence, all configurations with collisions are transient.
\end{IEEEproof}
\subsection{Simulations: Dynamically Adjusting the Number of States}
\label{subsec:annealing_2d}
For a general two-dimensional network, it is difficult for stations to predict the resolutions they need.
It may still be difficult even for networks with chordal squares, since it is not known whether it is possible to find the maximum clique in the square of a graph efficiently.
Therefore, we propose the following dynamic algorithm.
Initially, every station sets its resolution to be the lower bound given by Theorem~\ref{thm:num_states_2d} and executes the modified multi-resolution protocol with annealing.
If a station knows that the local configuration within its two-hop neighborhood remains the same for a number of iterations ($10$ in our simulations), but it still experiences collisions, then it checks if there are any idle states within its two-hop neighborhood.
If such states exist, it selects one of these states; otherwise, it doubles the size of its state space (\textit{i.e.}, it `refines' its resolution), picks its state randomly, resets the strength of interaction it uses, and continues executing the protocol.
The refinement stops once the local configuration is collision-free, or the upper bound given by Theorem~\ref{thm:num_states_2d} is reached, whichever occurs first.
The upper bound provides a guarantee on the minimum rate a station can have.
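A sketch of this refinement rule in Python is given below; the bookkeeping record is our own, and the detection of collisions and of idle states within the two-hop neighborhood is abstracted into fields that the surrounding protocol would maintain:
\begin{verbatim}
import random
from dataclasses import dataclass, field

@dataclass
class Station:                    # our own bookkeeping record
    resolution: int               # current number of bits l_r
    ub: int                       # upper bound on l_r from the theorem
    state: int = 0
    gamma: float = 1.0            # current strength of interaction
    gamma0: float = 1.0           # initial strength of interaction
    stable_iters: int = 0         # cycles with unchanged local config.
    collided: bool = False
    idle_states: set = field(default_factory=set)  # idle in 2-hop nbhd.

PATIENCE = 10   # iterations before refining, as in our simulations

def maybe_refine(st):
    if st.stable_iters >= PATIENCE and st.collided:
        if st.idle_states:            # take an idle state if one exists
            st.state = random.choice(sorted(st.idle_states))
        elif st.resolution < st.ub:   # otherwise double the state space
            st.resolution += 1
            st.state = random.randrange(2 ** st.resolution)
            st.gamma = st.gamma0      # reset the interaction strength
        st.stable_iters = 0
\end{verbatim}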
\begin{figure}[t]
\centering
\subfigure[Convergence time]{
\includegraphics[width=3.5in]{time_2d}
\label{fig:time_2d}
}
\hspace{-0.4in}
\subfigure[Convergence percentage]{
\includegraphics[width=3.5in]{percentage_2d}
\label{fig:percentage_2d}
}
\caption[]{Simulations of the multi-resolution protocol with annealing for two-dimensional networks.}
\label{fig:annealing_2d}
\end{figure}
For simulation, we consider two-dimensional networks in a $10\times10$ square area, where stations are distributed following a Poisson point process with node density $\lambda$.
All other simulation settings are the same as those for one-dimensional networks.
The convergence time and percentage are plotted in Figs.~\ref{fig:time_2d} and~\ref{fig:percentage_2d}, respectively.
The convergence time is longer compared to one-dimensional networks, since stations may need to adjust their resolutions.
When $\gamma$ is too large, the convergence time increases drastically.
In this case, the interaction between stations is so large that the protocol behaves like \textit{majority vote} shortly after the protocol is executed.
This renders the randomization of states after each refinement ineffective in resolving collisions.
\section{Extension to Multicast}
\label{sec:mc}
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=3.3in]{multicast}
\label{fig:multicast}
}
\subfigure[]{
\includegraphics[width=2.7in]{auxiliary}
\label{fig:auxiliary}
}
\caption[]{\subref{fig:multicast} Multicast on a graph $G$ is equivalent to \subref{fig:auxiliary} broadcast on the corresponding auxiliary graph $G^L$.}
\label{fig:mc_bc}
\end{figure}
In this section we extend the results to multicast traffic.
Here, we consider every undirected link in the network $G$ to be two \textit{directed} links in opposite directions.
We define two directed links $a,a^\prime$ in $G$ to be (one-hop) \textit{link-peers} if and only if the transmitter of one link is the receiver of the other link.
The neighboring relationship can be represented by an \textit{auxiliary graph} $G^L$ constructed as follows: a directed link $a$ in $G$ is represented by a vertex $a$ in $G^L$, and two directed links $a,a^\prime$ being one-hop link-peers in $G$ is represented by an undirected edge $(a,a^\prime)$ in $G^L$.
An example is shown in Fig.~\ref{fig:mc_bc}.
Suppose station Tx transmits packets to station Rx only, \textit{i.e.}, only link $3$ is used by Tx, and suppose it is active.
Then all its one-hop link-peers (links $2,4,8$) must be silent due to the half-duplex constraint.
All two-hop link-peers must be silent also, because either they originate from \textit{hidden terminals} (links $1,5,9$), or they are links not used by Tx (link $7$).
All other links are free to transmit, because either they are links from \textit{exposed terminals} to other stations not interfered by Tx (link $6$), or they are sufficiently far away (link $10$).
Therefore, under this neighboring model, a link must choose a state different from any of its one-hop and two-hop link-peers in order to form a collision-free configuration.
Correspondingly, in the auxiliary graph $G^L$, when vertex $3$ is active, its one-hop and two-hop peers must be silent at the same time.
In general, \textit{if we associate a state variable to each link, which always takes the same value as the state variable of the transmitter of the link (meaning that all links used by the same multicast session must be in the same state), then multicast on a graph $G$ is equivalent to broadcast on the corresponding \textit{auxiliary graph} $G^L$}.
Therefore, most results obtained for broadcast in previous sections can be applied here with slight modifications.
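Constructing $G^L$ is mechanical; the following Python sketch (with directed links encoded as ordered pairs, a representation of our choosing) makes the construction explicit:
\begin{verbatim}
from itertools import permutations

def auxiliary_graph(undirected_edges):
    # Vertices of G^L: the directed links of G (ordered pairs).
    links = [d for (u, v) in undirected_edges for d in ((u, v), (v, u))]
    # Two directed links are adjacent in G^L iff the transmitter of
    # one is the receiver of the other (one-hop link-peers).
    edges = {frozenset((a, b)) for a, b in permutations(links, 2)
             if a != b and (a[0] == b[1] or a[1] == b[0])}
    return links, edges

# Toy path u - v - w: four directed links.
links, gl_edges = auxiliary_graph([("u", "v"), ("v", "w")])
\end{verbatim}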
As discussed in Section~\ref{sec:model}, we assume station $\mathbf{r}$ multicasts packets to a subset $D_\mathbf{r}$ of its one-hop peers.
We represent a multicast session by the set of links used, \textit{i.e.}, $M_\mathbf{r}=\lbrace(\mathbf{r},\mathbf{r}^\prime)\colon\mathbf{r}^\prime\in D_\mathbf{r}\rbrace$.
Following the ideas in Section~\ref{sec:2d}, we use the one-hop and two-hop \textit{link-neighborhood} sizes to estimate the lower and upper bounds on the resolution for multicast.
\begin{thm}
\label{thm:num_states_2d_mc}
A lower bound on the needed resolution for a collision-free configuration for multicast to exist is computed as follows:
\begin{IEEEeqnarray}{rCl}
\underline{I}_a&=&\lbrace\mathbf{r}^\prime\colon M_{\mathbf{r}^\prime}\cap(L_a\cup\lbrace a\rbrace)\ne\emptyset\rbrace,\\
\underline{l}_a&=&\max_{a^\prime\in L_a\cup\lbrace a\rbrace}\lvert\underline{I}_{a^\prime}\rvert,\\
\label{eqn:resolution_mc_lb}
\underline{l}_\mathbf{r}&=&\biggl\lceil\log_2\biggl(\max_{a\in M_\mathbf{r}}\underline{l}_a\biggr)\biggr\rceil,
\end{IEEEeqnarray}
where $L_a$ is the set of one-hop link-peers of link $a$, and $\underline{I}_a$ contains the multicast sessions that use any link within link $a$'s one-hop link-neighborhood. A corresponding upper bound is computed as follows:
\begin{IEEEeqnarray}{rCl}
\overline{I}_\mathbf{r}&=&\lbrace\mathbf{r}^\prime\colon M_{\mathbf{r}^\prime}\cap\cup_{a\in M_\mathbf{r}}(L_a^2\cup\lbrace a\rbrace)\ne\emptyset\rbrace,\\
\label{eqn:resolution_mc_ub}
\overline{l}_\mathbf{r}&=&\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in\overline{I}_\mathbf{r}}\lvert\overline{I}_{\mathbf{r}^\prime}\rvert\biggr)\biggr\rceil,
\end{IEEEeqnarray}
where $L_a^2$ is the set of one-hop or two-hop link-peers of link $a$, and $\overline{I}_\mathbf{r}$ contains $M_\mathbf{r}$ and all multicast sessions that cannot be active at the same time as $M_\mathbf{r}$ because they use links within the two-hop link-neighborhood of a link used by $M_\mathbf{r}$.
The resulting one-hop multicast throughput of a two-dimensional network is bounded as follows:
\begin{equation}
\label{eqn:throughput_mc_2d}
\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\lvert D_\mathbf{r}\rvert2^{-\overline{l}_\mathbf{r}}\le\rho_\text{MC}\le\frac{1}{\lvert V\rvert}\sum_{\mathbf{r}\in V}\lvert D_\mathbf{r}\rvert2^{-\underline{l}_\mathbf{r}}.
\end{equation}
\end{thm}
\begin{IEEEproof}
For any link $a$, at most one multicast session in $\underline{I}_a$ can be active at any time.
Therefore, at least $\lvert\underline{I}_a\rvert$ states are required to resolve collisions among the multicast sessions in $\underline{I}_a$.
Link $a$ belongs to the one-hop link-neighborhood of any link $a^\prime\in L_a\cup\lbrace a\rbrace$.
Therefore, if link $a$ is used by any multicast session, it needs at least $\max_{a^\prime\in L_a\cup\lbrace a\rbrace}\lvert\underline{I}_{a^\prime}\rvert=\underline{l}_a$ states to resolve collisions.
Finally, station $\mathbf{r}$ should use the finest resolution that its links use, therefore we have the lower bound (\ref{eqn:resolution_mc_lb}).
For any station $\mathbf{r}$, at most one multicast session in $\overline{I}_\mathbf{r}$ can be active at any time in the worst case.
Therefore, at most $\lvert\overline{I}_\mathbf{r}\rvert$ states are required to resolve collisions among the multicast sessions in $\overline{I}_\mathbf{r}$.
Finally, station $\mathbf{r}$ also belongs to $\overline{I}_{\mathbf{r}^\prime}$ for $\mathbf{r}^\prime\in\overline{I}_\mathbf{r}$, implying the upper bound (\ref{eqn:resolution_mc_ub}).
\end{IEEEproof}
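For concreteness, the lower-bound computation of Theorem~\ref{thm:num_states_2d_mc} can be sketched in Python as follows; the encodings of $M_\mathbf{r}$ (as sets of ordered pairs) and of $L_a$ are our own:
\begin{verbatim}
import math

def mc_lower_bound(M, link_peers):
    # M[r]: set of directed links used by station r's multicast session.
    # link_peers[a]: the set L_a of one-hop link-peers of link a.
    def sessions_near(a):   # the set I_a of the theorem
        nbhd = link_peers[a] | {a}
        return {r for r, links in M.items() if links & nbhd}
    lb = {}
    for r, links in M.items():
        if links:
            l_a = [max(len(sessions_near(ap))
                       for ap in link_peers[a] | {a}) for a in links]
            lb[r] = math.ceil(math.log2(max(l_a)))
    return lb
\end{verbatim}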
Note that Theorem~\ref{thm:num_states_2d_mc} gives the same result as Theorem~\ref{thm:num_states_2d} when there is only broadcast, since $\underline{I}_a=V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace$ when $\mathbf{r}$ is the transmitter of link $a$, and $\overline{I}_\mathbf{r}=V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace$.
The corresponding multi-resolution MAC protocol for multicast is shown as Protocol~\ref{alg:mr_mc}.
The only difference from Protocol~\ref{alg:mr_bc} is that \textit{the votes on station $\mathbf{r}$'s next state are cast by using the state information of link $a^\prime$'s one-hop link-neighborhood (lines $3$ and $6$), where $a^\prime$ belongs to any one-hop link-neighborhood of link $a$ used by station $\mathbf{r}$ (line $2$)}.
By considering the analogy between multicast on $G$ and broadcast on $G^L$, we have the following convergence result.
\begin{algorithm}[t]
\caption{Multi-Resolution MAC Protocol for Multicast}
\label{alg:mr_mc}
\begin{algorithmic}[1]
\WHILE{station $\mathbf{r}$ is active}
\STATE
$\mathbf{r}$ sets the votes on all states to zero.
\FOR{$a^\prime\in\cup_{a\in M_\mathbf{r}}(L_a\cup\lbrace a\rbrace)$}
\IF{$\mathbf{r}$ is the only station occupying its current state in link $a^\prime$'s one-hop link-neighborhood}
\STATE
$\mathbf{r}$'s current state is assigned a single vote of weight one.
\ELSE
\STATE
$\mathbf{r}$ determines which states (according to $\mathbf{r}$'s resolution) are idle or have collisions in link $a^\prime$'s one-hop link-neighborhood.
\STATE
A vote of weight $\frac{1}{n}$ is added to each such state, where $n$ is the number of such states.
\ENDIF
\ENDFOR
\IF{$n_s>0$ for multiple $s$'s, where $n_s$ is the total weight state $s$ receives}
\STATE
Replace $n_s$ by $n_s+\epsilon$, where $\epsilon>0$, for all $s$.
\ENDIF
\STATE
$\mathbf{r}$ selects state $s$ with a probability proportional to $f(n_s)$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{thm}
\label{thm:mr_mc_2d}
For multicast on a two-dimensional network, suppose each station uses a sufficiently fine resolution (\textit{e.g.}, the upper bound in Theorem~\ref{thm:num_states_2d_mc}), so that the existence of collision-free configurations is guaranteed.
Then, starting from an arbitrary initial configuration, Protocol~\ref{alg:mr_mc} will converge to a collision-free configuration.
\end{thm}
Next we discuss how station $\mathbf{r}$ exchanges the state information required in Protocol~\ref{alg:mr_mc}.
A naive scheme is to let stations code the state information of different links into separate messages.
However, links having the same transmitter share some common state information.
Therefore, we introduce the following two-step message exchange which exploits this redundancy to reduce the amount of information exchange between stations.
The first step is to compute the state information of link $a$'s one-hop link-neighborhood for all $a$ such that $\mathbf{r}$ is the transmitter of link $a$, \textit{i.e.}, $a=(\mathbf{r},\mathbf{r}^\prime)$.
In the $t$-th cycle, station $\mathbf{r}$ collects $\bigl\lbrace X_{\mathbf{r}^\prime}(t)\bigr\rbrace_{\mathbf{r}^\prime\in V_\mathbf{r}}$.
Then $\mathbf{r}$ constructs the following \textit{disjoint} sets:
\begin{enumerate}
\item
common state information: $\mathcal{C}_\mathbf{r}(t)=\lbrace X_{\mathbf{r}^{\prime\prime}}(t)\rbrace_{\mathbf{r}^{\prime\prime}\colon\mathbf{r}\in D_{\mathbf{r}^{\prime\prime}}}$ (this includes the states of one-hop peers of $\mathbf{r}$ having $\mathbf{r}$ as an intended receiver; notice that this state information is \textit{common to all links having $\mathbf{r}$ as the transmitter});
\item
self state information: $\mathcal{S}_\mathbf{r}(t)=\lbrace X_\mathbf{r}(t)\rbrace$ (this includes $\mathbf{r}$'s state; notice that this state information is \textit{common only to all links in $M_\mathbf{r}$});
\item
link-specific state information for $(\mathbf{r},\mathbf{r}^\prime)$ where $\mathbf{r}^\prime\in V_\mathbf{r}$: $\mathcal{L}_{\mathbf{r},\mathbf{r}^\prime}(t)=\lbrace X_{\mathbf{r}^\prime}(t)\rbrace$ if $\mathbf{r}\notin D_{\mathbf{r}^\prime}$, and $\mathcal{L}_{\mathbf{r},\mathbf{r}^\prime}(t)=\emptyset$ otherwise (this includes $\mathbf{r}^\prime$'s state if $\mathbf{r}^\prime$ transmits to stations other than $\mathbf{r}$).
\end{enumerate}
For $a=(\mathbf{r},\mathbf{r}^\prime)$, if $a\in M_\mathbf{r}$, the union $\mathcal{C}_\mathbf{r}(t)\cup\mathcal{S}_\mathbf{r}(t)\cup\mathcal{L}_{\mathbf{r},\mathbf{r}^\prime}(t)$ is the state information in the one-hop link-neighborhood of link $a$; otherwise if $a\notin M_\mathbf{r}$, the corresponding state information is the union $\mathcal{C}_\mathbf{r}(t)\cup\mathcal{L}_{\mathbf{r},\mathbf{r}^\prime}(t)$.
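To make the bookkeeping explicit, a Python sketch of this partition and of the subsequent reassembly (with our own dictionary-based layout) is given below:
\begin{verbatim}
def partition_state_info(r, X, V, D):
    # X[x]: state of x at cycle t; V[r]: one-hop peers of r;
    # D[r]: intended receivers of station r's multicast.
    C = {x: X[x] for x in V[r] if r in D[x]}            # common
    S = {r: X[r]}                                       # self
    L = {rp: ({rp: X[rp]} if r not in D[rp] else {})    # link-specific
         for rp in V[r]}
    return C, S, L

def one_hop_info(r, rp, C, S, L, D):
    # State information in the one-hop link-neighborhood of a = (r, rp).
    info = dict(C)
    info.update(L[rp])
    if rp in D[r]:      # a is in M_r, so r's own state is included
        info.update(S)
    return info
\end{verbatim}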
As an example, consider the network shown in Fig.~\ref{fig:onehopstateinfo_network}.
A solid arrow is a link between a transmitter and an intended receiver, while a dashed arrow is a link between a transmitter and a nonintended receiver.
Fig.~\ref{fig:onehopstateinfo_table} shows how station $\mathbf{r}_0$ computes the state information of links $a_1,a_2,a_3,a_4$ following the above procedure.
Since $\mathbf{r}_0$ is an intended receiver for both $\mathbf{r}_2$ and $\mathbf{r}_3$, the states of both $\mathbf{r}_2$ and $\mathbf{r}_3$ are common to all links $a_1,a_2,a_3,a_4$, hence $\mathcal{C}_{\mathbf{r}_0}(t)=\lbrace X_{\mathbf{r}_2},X_{\mathbf{r}_3}\rbrace$.
Because $\mathbf{r}_0$ transmits to $\mathbf{r}_1$ and $\mathbf{r}_2$ only, $\mathcal{S}_{\mathbf{r}_0}(t)=\lbrace X_{\mathbf{r}_0}\rbrace$ is the state information common only to links $a_1,a_2$.
Since $\mathbf{r}_1$ and $\mathbf{r}_4$ transmit to stations other than $\mathbf{r}_0$, $X_{\mathbf{r}_1}$ and $X_{\mathbf{r}_4}$ are included in $\mathcal{L}_{\mathbf{r}_0,\mathbf{r}_1}(t)$ and $\mathcal{L}_{\mathbf{r}_0,\mathbf{r}_4}(t)$, respectively.
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=2.7in]{onehopstateinfo}
\label{fig:onehopstateinfo_network}
}
\subfigure[]{
\includegraphics[width=3.5in]{stateinfo}
\label{fig:onehopstateinfo_table}
}
\caption[]{One-hop state information of links $a_1,a_2,a_3,a_4$ with station $\mathbf{r}_0$ being the transmitter.}
\label{fig:onehopstateinfo}
\end{figure}
The second step is to let $\mathbf{r}$ broadcast $\mathcal{C}_\mathbf{r}(t)$, $\mathcal{S}_\mathbf{r}(t)$, and $\mathcal{L}_{\mathbf{r},\mathbf{r}^\prime}(t)$ for $\mathbf{r}^\prime\in V_\mathbf{r}$.
Note that any link in $\cup_{a\in M_\mathbf{r}}(L_a\cup\lbrace a\rbrace)$, \textit{i.e.}, the set of links that cast votes on $\mathbf{r}$'s next state, must be one of the following:
\begin{enumerate}
\item
$(\mathbf{r},\mathbf{r}^\prime)$ where $\mathbf{r}^\prime\in D_\mathbf{r}$, \textit{i.e.}, a link used by station $\mathbf{r}$,
\item
$(\mathbf{r}^\prime,\mathbf{r})$ where $\mathbf{r}^\prime\in V_\mathbf{r}\setminus D_\mathbf{r}$, \textit{i.e.}, the link from a nonintended receiver of station $\mathbf{r}$ to station $\mathbf{r}$,
\item
$(\mathbf{r}^\prime,\mathbf{r}^{\prime\prime})$ where $\mathbf{r}^\prime\in D_\mathbf{r}$ and $\mathbf{r}^{\prime\prime}\in V_{\mathbf{r}^\prime}$, \textit{i.e.}, a link originated from an intended receiver of station $\mathbf{r}$.
\end{enumerate}
The transmitters of all these links are within the one-hop neighborhood of $\mathbf{r}$.
After station $\mathbf{r}$ receives the state information from its one-hop peer $\mathbf{r}^\prime$ in the second step, if $\mathbf{r}^\prime$ is an intended receiver of $\mathbf{r}$, then $\mathbf{r}$ needs to recover the state information of \textit{all} links with $\mathbf{r}^\prime$ as the transmitter; otherwise, $\mathbf{r}$ only needs to recover the state information of link $(\mathbf{r}^\prime,\mathbf{r})$.
Here, it is assumed that station $\mathbf{r}$ knows $D_{\mathbf{r}^\prime}$ for all $\mathbf{r}^\prime\in V_\mathbf{r}$ so that recovery of state information of all links is possible; this can be done by letting each station broadcast a list of its intended receivers while setting up a multicast session.
Therefore, station $\mathbf{r}$ can construct the one-hop state information of any link $a^\prime\in\cup_{a\in M_\mathbf{r}}(L_a\cup\lbrace a\rbrace)$, and then select its state at the $(t+1)$-st cycle following Protocol~\ref{alg:mr_mc}.
The amount of information exchange can be characterized as follows:
\begin{enumerate}
\item
In the first step, station $\mathbf{r}$ broadcasts its own state, which consists of $l_\mathbf{r}$ bits.
Station $\mathbf{r}$ also broadcasts its identity, which helps its one-hop peers partition the collected state information into the disjoint sets described above.
\item
In the second step, station $\mathbf{r}$ broadcasts the states of itself and all its one-hop peers (which are already partitioned as described above), which consist of $l_\mathbf{r}+\sum_{\mathbf{r}^\prime\in V_\mathbf{r}}l_{\mathbf{r}^\prime}$ bits.
Station $\mathbf{r}$ also broadcasts its identity here, to help its one-hop peers recover the state information of each link.
\end{enumerate}
\begin{figure}[t]
\centering
\subfigure[Convergence time]{
\includegraphics[width=3.5in]{time_2d_20}
\label{fig:time_mc_2d_20}
}
\hspace{-0.4in}
\subfigure[Convergence percentage]{
\includegraphics[width=3.5in]{percentage_2d_20}
\label{fig:percentage_mc_2d_20}
}
\caption[]{Simulations of the multi-resolution protocol with annealing for multicast, $q=0.2$ (the same legend applies to both figures).}
\label{fig:annealing_mc_2d_20}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Convergence time]{
\includegraphics[width=3.5in]{time_2d_80}
\label{fig:time_mc_2d_80}
}
\hspace{-0.4in}
\subfigure[Convergence percentage]{
\includegraphics[width=3.5in]{percentage_2d_80}
\label{fig:percentage_mc_2d_80}
}
\caption[]{Simulations of the multi-resolution protocol with annealing for multicast, $q=0.8$ (the same legend applies to both figures).}
\label{fig:annealing_mc_2d_80}
\end{figure}
Figs.~\ref{fig:annealing_mc_2d_20} and~\ref{fig:annealing_mc_2d_80} show a simulation of Protocol~\ref{alg:mr_mc} for a two-dimensional network.
In the simulation, any one-hop peer of station $\mathbf{r}$ is an intended receiver of the multicast by station $\mathbf{r}$ with probability $q$, independent of other one-hop peers.
All other simulation settings are the same as those for broadcast.
Each station uses the lower bound on the resolution predicted by Theorem~\ref{thm:num_states_2d_mc} and executes Protocol~\ref{alg:mr_mc}.
Each station refines its resolutions if necessary, until the local configuration is collision-free, or the upper bound given by Theorem~\ref{thm:num_states_2d_mc} is reached, whichever occurs first.
The simulation results are similar to those for broadcast in Section~\ref{sec:2d}.
It appears that when $q$ is larger, the convergence time is shorter and the convergence percentage is higher.
This suggests that as $q$ increases, the lower bound given by Theorem~\ref{thm:num_states_2d_mc} becomes accurate enough that fewer stations need to refine their resolutions, which speeds up the convergence.
\section{Multi-Channel Networks}
\label{sec:multich}
In this section we assume there are $K$ orthogonal channels in the network.
A station can either transmit on one channel only, or listen to all channels simultaneously at any time, \textit{i.e.}, stations are half-duplex.\footnote{This is just one of several possibilities. For half-duplex constraints with multiple channels, similar ideas can apply to other models, \textit{e.g.}, if stations can listen to only one channel at a time.}
In this case, the state of a station represents \textit{both the slot $s$ and the channel $\omega$} over which the station transmits, \textit{i.e.}, $X_\mathbf{r}(t)=(\omega,s)$.
To estimate the lower and upper bounds on the resolutions for broadcast with $K$ orthogonal channels, notice that the best possible scenario in station $\mathbf{r}$'s one-hop neighborhood is that $\mathbf{r}$ occupies one slot to transmit, and in the remaining slots, $\mathbf{r}$ receives one packet on each channel; while the worst situation in station $\mathbf{r}$'s two-hop neighborhood is that every station within this two-hop neighborhood must transmit at different times.
Hence we have the following results.
\begin{thm}
\label{thm:num_states_2d_multich}
A lower bound on the needed resolution for a collision-free configuration for broadcast with $K$ orthogonal channels to exist is given by
\begin{equation}
\label{eqn:resolution_multich_lb}
\underline{l}_\mathbf{r}=\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace}\underline{w}_{\mathbf{r}^\prime}\biggr)\biggr\rceil,
\end{equation}
where $\underline{w}_\mathbf{r}=1+\frac{\lvert V_\mathbf{r}\rvert}{K}$.
A corresponding upper bound is given by
\begin{equation}
\label{eqn:resolution_multich_ub}
\overline{l}_\mathbf{r}=\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in V_\mathbf{r}^2\cup\lbrace\mathbf{r}\rbrace}\overline{w}_{\mathbf{r}^\prime}\biggr)\biggr\rceil,
\end{equation}
where $\overline{w}_\mathbf{r}=1+\lvert V_\mathbf{r}^2\rvert$.
\end{thm}
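Relative to the single-channel computation, only the per-station weight changes; a minimal Python sketch of the lower bound, reusing the peer-set representation of the earlier sketches, follows:
\begin{verbatim}
import math

def multich_lower_bound(peers, K):
    # w_r = 1 + |V_r| / K: one slot to transmit, and up to K
    # concurrent receptions in each of the remaining slots.
    w = {r: 1 + len(peers[r]) / K for r in peers}
    return {r: math.ceil(math.log2(max(w[p] for p in peers[r] | {r})))
            for r in peers}
\end{verbatim}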
Similar arguments provide the corresponding lower and upper bounds on the resolution for multicast.
\begin{thm}
\label{thm:num_states_2d_mc_multich}
A lower bound on the needed resolution for a collision-free configuration for multicast with $K$ orthogonal channels to exist is computed as follows:
\begin{IEEEeqnarray}{rCl}
\underline{I}_a&=&\lbrace\mathbf{r}^\prime\colon M_{\mathbf{r}^\prime}\cap(L_a\cup\lbrace a\rbrace)\ne\emptyset\rbrace,\\
\underline{w}_a&=&1+\frac{\lvert\underline{I}_a\rvert-1}{K},\\
\underline{l}_a&=&\max_{a^\prime\in L_a\cup\lbrace a\rbrace}\underline{w}_{a^\prime},\\
\underline{l}_\mathbf{r}&=&\biggl\lceil\log_2\biggl(\max_{a\in M_\mathbf{r}}\underline{l}_a\biggr)\biggr\rceil.
\end{IEEEeqnarray}
A corresponding upper bound is computed as follows:
\begin{IEEEeqnarray}{rCl}
\overline{I}_\mathbf{r}&=&\lbrace\mathbf{r}^\prime\colon M_{\mathbf{r}^\prime}\cap\cup_{a\in M_\mathbf{r}}(L_a^2\cup\lbrace a\rbrace)\ne\emptyset\rbrace,\\
\overline{w}_\mathbf{r}&=&\lvert\overline{I}_\mathbf{r}\rvert,\\
\overline{l}_\mathbf{r}&=&\biggl\lceil\log_2\biggl(\max_{\mathbf{r}^\prime\in\overline{I}_\mathbf{r}}\overline{w}_{\mathbf{r}^\prime}\biggr)\biggr\rceil.
\end{IEEEeqnarray}
\end{thm}
When there are multiple channels available, stations need to let their peers know which slot and channel they use to transmit.
If a dedicated control channel (which is orthogonal to the $K$ channels for data transmission) is used to exchange state information, then $l_\mathbf{r}+\lceil\log_2K\rceil$ bits are required to represent station $\mathbf{r}$'s state: $l_\mathbf{r}$ bits for the slot, and $\lceil\log_2K\rceil$ bits for the channel.
Alternatively, if there are control frames preceding each cycle, and these can be used to exchange state information on the $K$ channels for data transmission, a station can save the extra $\lceil\log_2K\rceil$ bits as follows: it broadcasts the state information on the $\omega$-th channel if the state information indicates a transmission on the $\omega$-th channel.
The multi-resolution protocol for broadcast on networks with multiple channels is shown as Protocol~\ref{alg:mr_bc_multich}.
The main difference from Protocol~\ref{alg:mr_bc} is that before station $\mathbf{r}$ computes the votes using the state information from station $\mathbf{r}^\prime$'s one-hop neighborhood, station $\mathbf{r}$ assumes the states $(\omega,s)$ for all $\omega$ are occupied by station $\mathbf{r}^\prime$, where $s$ is the slot currently occupied by station $\mathbf{r}^\prime$ (line $3$ in Protocol~\ref{alg:mr_bc_multich}).
This is due to the half-duplex constraint: if station $\mathbf{r}^\prime$ transmits in slot $s$, then it cannot receive on \textit{any} channel in slot $s$, meaning that packets transmitted by any one-hop peer in this slot experience collisions.
The multi-resolution protocol for multicast on networks with multiple channels can be constructed similarly.
\begin{algorithm}[t]
\caption{Multi-Resolution MAC Protocol for Broadcast on Networks with Multiple Channels}
\label{alg:mr_bc_multich}
\begin{algorithmic}[1]
\WHILE{station $\mathbf{r}$ is active}
\STATE
$\mathbf{r}$ sets the votes on all states to zero.
\FOR{$\mathbf{r}^\prime\in V_\mathbf{r}\cup\lbrace\mathbf{r}\rbrace$}
\STATE
Assume states $(\omega,s)$ for all $\omega$ are occupied by station $\mathbf{r}^\prime$, where $s$ is the slot currently occupied by $\mathbf{r}^\prime$.
\IF{$\mathbf{r}$ is the only station occupying its current state in station $\mathbf{r}^\prime$'s one-hop neighborhood}
\STATE
$\mathbf{r}$'s current state is assigned a single vote of weight one.
\ELSE
\STATE
$\mathbf{r}$ determines which slots $s$ (according to $\mathbf{r}$'s resolution) are idle or have collisions in $\mathbf{r}^\prime$'s one-hop neighborhood.
\STATE
A vote of weight $\frac{1}{Kn}$ is added to states $(\omega,s)$ for all $\omega$, where $n$ is the number of slots $s$ determined above.
\ENDIF
\ENDFOR
\IF{$n_{(\omega,s)}>0$ for multiple $(\omega,s)$'s, where $n_{(\omega,s)}$ is the total weight state $(\omega,s)$ receives}
\STATE
Replace $n_{(\omega,s)}$ by $n_{(\omega,s)}+\epsilon$, where $\epsilon>0$, for all $(\omega,s)$.
\ENDIF
\STATE
$\mathbf{r}$ selects state $(\omega,s)$ with a probability proportional to $f(n_{(\omega,s)})$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have proposed multi-resolution MAC protocols for wireless networks with arbitrary topologies.
We have shown that collision-free schedules can be established in a distributed manner, by allowing stations to exchange limited state information.
These protocols do not require all stations to use the same resolution, \textit{i.e.}, the same number of states or the same length of each slot.
Future work should investigate the performance of the multi-resolution protocols under the signal-to-interference-and-noise ratio (SINR) model.
This model assumes a minimum SINR requirement at a receiver for successful reception and also takes into account cumulative interference from faraway transmissions, which is more realistic.
Under this model, there may be a need to reconsider what messages should be exchanged among peers in order to eliminate collisions in a distributed manner.
\section*{Acknowledgment}
The authors would like to thank Tianyi Li for his assistance in simulations.
\bibliographystyle{IEEEtran}
\IEEEPARstart{E}{nd}-to-end backpropagation (E2EBP) \cite{rumelhart1986learning} has been the de facto training standard for deep architectures almost since the inception of deep learning.
It optimizes the entire model simultaneously using backpropagated error gradients with respect to some target signal.
At each optimization step, E2EBP requires sending an input through the entire model to compute an error (an \textit{end-to-end forward pass}), and then taking gradient of the error with respect to all trainable parameters using the chain rule (an \textit{end-to-end backward pass}).
Historically, E2EBP was a major achievement that first enabled training multilayer neural networks effectively and, as a result, capitalizing on the expressiveness of universal function approximators \cite{cybenko1989approximation}.
Many years of research have yielded improvements to E2EBP that enable training deep neural networks with thousands of layers and millions of units.
These include advances in both hardware and implementation, such as large-scale distributed training with massive datasets, residual connections inside networks to facilitate end-to-end gradient flow, and so on.
And E2EBP has been intertwined with the success of deep learning \cite{krizhevsky2012imagenet,devlin2019bert,silver2016mastering}.
The versatility, practicality, and theoretical optimality (in terms of first-order gradient) of E2EBP made it the de facto standard.
However, E2EBP is not perfect and has shortcomings that create practical difficulties, especially when scaled to large architectures.
First, gradients may vanish or explode when propagated through too many layers, and much care is needed to avoid such suboptimal training dynamics \cite{he2016deep,zhang2019gradient}.
As another drawback, end-to-end training requires optimizing the model as a whole, which does not allow for modularized workflows and turns models into ``black boxes'', making it difficult for the users to debug models or extract any useful insight on the learning task from trained models \cite{duan2021modularizing}.
This inflexibility also means that training larger models with E2EBP would necessitate more computational resources, raising the bar to adopt deep learning technologies and creating concerns over their energy consumptions and environmental impacts \cite{gomez2020interlocking,wang2021revisiting}.
Furthermore, E2EBP can scale poorly to deep models due to less-than-ideal loss landscapes and spurious correlations \cite{ioffe2015batch}.
The convergence may become slow and the reached local minimum may be suboptimal.
More subtle are the issues of how effective and how much information is preserved in the backpropagation of error gradient.
From the optimization standpoint, as long as the model is differentiable, a gradient can be defined and an extremum (or saddle point) of the loss function can be reached by simply following the gradient, which is what E2EBP amounts to.
However, in many real-world scenarios, the user may need more control over the characteristics of the internal representations to improve robustness, generalization, etc.
In E2EBP, the error can only be computed after sending the internal representations through the usually many downstream layers and eventually into the output space.
As a result, the effect of the output loss function penalty on a hidden layer cannot be effectively controlled because of the nonlinear layers in between \cite{lee2015deeply}.
The above practical difficulties of E2EBP can make realizing the full potential of deep learning difficult in applications.
Therefore, it is rather important to seek alternatives to E2EBP that can preserve the good properties of E2EBP while improving its shortcomings.
Modular and weakly modular training are such alternatives.
This paper uses the term \textit{modular training} to refer to training schemes that do not require an end-to-end forward or backward pass, and \textit{weakly modular training} for training schemes that need an end-to-end forward pass but not an end-to-end backward pass.
In simple terms, the possibilities to avoid E2EBP are all related to the use of the target signal in training.
E2EBP creates the error at the output and backpropagates it to each layer in the form of gradients.
Instead, one can use the target at each layer to train it locally with a proxy objective function, in a modular way without backpropagating the error gradients end-to-end.
Alternatively, one can approximate the inverse targets or local gradients to yield weak modularity.
Besides solving the above practical issues of E2EBP, modular training offers novel insights and practical implications to other research domains such as transferability estimation \cite{duan2021modularizing} and optimization.
The main limitation of modular and weakly modular training is that not all such methods can provide similar theoretical optimality guarantees in generic settings as E2EBP does.
There is a large body of work studying modular or weakly modular training on modern deep neural networks \cite{duan2019kernel,duan2021modularizing,lee2015difference,nokland2019training,jaderberg2017decoupled,czarnecki2017understanding,lansdell2020learning,bengio2014auto,belilovsky2019greedy,pogodin2020kernelized,meulemans2020theoretical,belilovsky2020decoupled,manchev2020target,nokland2016direct,lillicrap2016random,moskovitz2018feedback,samadi2017deep,krotov2019unsupervised,qin2021contrastive,mostafa2018deep,marquez2018deep,xiao2018biologically,carreira2014distributed,balduzzi2014kickback,ororbia2020continual,guerguiev2019spike,kunin2020two,launay2019principled,taylor2016training,podlaski2020biological,ororbia2019biologically,laborieux2020scaling,scellier2017equilibrium,veness2019gated,ma2020hsic,huang2018learning,obeid2019structured,song2020can,whittington2017approximation,pehlevan2018similarity,malach2018provably,gu2020fenchel,zhang2017convergent,askari2018lifted,lau2018proximal,carreira2016parmac,li2019lifted,zhang2016efficient,zeng2019global,marra2020local,raghavan2020distributed,bartunov2018assessing,choromanska2019beyond,lukasiewicz2020can,baldi2018learning,liao2016important,bengio2020deriving,lowe2019putting,wang2021revisiting}.
However, only a few of the existing schemes have been shown to produce competitive performance on meaningful benchmark datasets.
Even fewer provided some form of optimality guarantee under reasonably general settings.
And despite the surging interest, no recent existing work provides a survey on these provably optimal modular and weakly modular training methods.
The goal of this paper is exactly to review existing work on avenues to train deep neural networks without E2EBP, so as to push the field to new highs.
\rch{This paper mainly focuses on provably optimal methods, reviewing them extensively.
Other popular families of methods are also discussed after.}
\rch{Formally, this paper defines \textit{provably optimal} training methods to be those that can be proven to yield optimal solutions to the given training objective in reasonably general settings, with some level of flexibility in how ``optimal'' is interpreted (for example, optimal with respect to first-order gradient and optimal with respect to first-order and second-order gradients are both valid interpretations because both are relevant in practice).}
Why \rch{put more emphasis on} the provably optimal approaches?
For practitioners, to the best of our knowledge, only provably optimal methods have been shown to produce performance comparable to E2EBP on meaningful benchmark datasets \cite{belilovsky2019greedy,belilovsky2020decoupled,nokland2019training,duan2021modularizing,wang2021revisiting}, whereas others have been greatly outperformed by E2EBP \cite{bartunov2018assessing}.
For theoreticians, provably optimal methods offer theoretical insights \rch{that} the others do not.
In particular, from a learning theoretic perspective, the fundamental job of a learning method is to effectively find a solution that best minimizes a given objective \cite{shalev2014understanding}.
E2EBP is obviously capable of doing this.
And if a non-E2E alternative is not guaranteed to find a minimizer to the training objective, it is not a very compelling alternative to E2EBP because it cannot get the most basic job done.
In summary, our choice of emphasizing the provably optimal methods is motivated by both empirical performance and theoretical value.
And to dive deep into the theory, we had to somewhat sacrifice the breadth of this survey and focus \rch{mainly} on these methods.
These provably optimal methods can be categorized into three distinct abstract algorithms: Proxy Objective (modular), Target Propagation (weakly modular), and Synthetic Gradients (weakly modular).
And for each abstract algorithm, its popular instantiations, optimality guarantees, advantages and limitations, and potential implications on other research areas are discussed.
Some future research directions will also be sketched out.
A summary of all methods is provided in Table~\ref{table1}, their best reported results on standard benchmark datasets are listed in Table~\ref{table2}, and illustrations are given in Fig.~\ref{fig1}.
\rch{The other families of methods this paper reviews include Feedback Alignment (weakly modular) and Auxiliary Variables (weakly modular).
Due to a lack of established optimality guarantee or other limitations, we review these methods somewhat less extensively after the provably optimal ones.
Reported results from popular instantiations of these methods on standard benchmark datasets are given in Table~\ref{table2}.
}
This tutorial paper is intended to serve as an entry point to readers that either wish to engage in research in modular or weakly modular learning or simply would like to use these training methods in their applications.
The readers should be able to gain a holistic view of the existing methods as well as a full understanding of how each of them works.
Detailed discussions and optimality proofs, however, are left to the cited original papers.
This paper assumes basic knowledge on the core concepts in deep learning such as knowing what a deep neural network is and how E2EBP works.
In Section~\ref{sec2}, \rch{the settings considered by this survey are described}.
The formal presentation \rch{on provably optimal methods} is in Section~\ref{sec3}.
Specifically, Proxy Objective methods will be discussed in Section~\ref{sec3:po}, Target Propagation in Section~\ref{sec3:tp}, and Synthetic Gradients in~\ref{sec3:sg}.
\rch{Feedback Alignment and Auxiliary Variables are reviewed in Section~\ref{other_methods}.}
\section{The Settings}
\label{sec2}
\rch{This survey} considers the task of classification using feedforward neural networks, which is a set-up in which all reviewed methods have been shown to work in the respective papers.
Some of the methods have been evaluated on other architectures such as recurrent networks and for other tasks such as regression.
Some other methods can be extended to more models and tasks, but there are no papers known to us that have provided empirical evidence on the practical validity of such extensions.
In the rest of this paper, the term \textit{module} is used to mean a composition of an arbitrary number of network layers.
Consider a two-module network \(f\left(\cdot, \theta_1, \theta_2\right) = f_2\left(f_1\left(\cdot, \theta_1\right), \theta_2\right)\) for simplicity, where \(\theta_1\) represents the trainable parameters of the input module \(f_1\) and \(\theta_2\) the output module \(f_2\).
Each method presented can be trivially extended to training with more than two modules by analyzing one pair of modules at a time.
Note that \(f_i\) is not only defined by its trainable parameters, but also the non-trainable ones.
For example, a fully-connected layer on \(\mathbb{R}^d\) with \(p\) nodes and some nonlinearity \(\sigma:\mathbb{R}^p\to\mathbb{R}^p\) can be written as \(f_i\left(\cdot, W_i\right):\mathbb{R}^d\to\mathbb{R}^p:x\mapsto \sigma\left(W_i x\right)\), where \(W_i\) is its trainable weight matrix.
In this case, \(\theta_i\) is \(W_i\), while the fully-connected structure of the layer together with the nonlinearity \(\sigma\) constitutes the non-trainable parameters.
For \(f_i\), its non-trainable parameters are denoted as \(\omega_i\).
\rch{The dependence of \(f_i\) on the non-trainable parameters \(\omega_i\) will not be explicitly written out as this survey only focuses on the training of the model.}
Let the data be \((X, Y)\)\footnote{\rch{For data, random elements are denoted using capital letters and lower-case letters are reserved for their observations (actual data collected).}}, with \(X\) being the input example --- a random element on \(\mathbb{X}\subset\mathbb{R}^d\) for some \(d\), and \(Y\) its label --- a random variable on \(\mathbb{Y}\subset\mathbb{R}\).
Suppose a loss function \(\ell:\mathbb{Y}\times\mathbb{Y}\to\mathbb{R}\) is given.
Define the risk as \(R\left(f\left(\cdot, \theta_1, \theta_2\right), X, Y\right) = E_{(X, Y)}\ell\left(f\left(X, \theta_1, \theta_2\right), Y\right)\).
The goal is to find some \(\theta_1^\prime, \theta_2^\prime\) such that \(\left(\theta_1^\prime, \theta_2^\prime\right)\in\argmin_{\left(\theta_1, \theta_2\right)} R\left(f\left(\cdot, \theta_1, \theta_2\right), X, Y\right)\).
\rch{Suppose a training set \(S = \left\{x_i, y_i\right\}_{i=1}^n\) is given.}
And in practice, the risk can be estimated by an objective function, e.g., the sample average of the loss evaluated on \(S\) or the sample average together with a regularization term.
Let an objective function be \(L\left(f, \theta_1, \theta_2, S\right)\), and the goal in practice is to minimize this objective in \(\left(\theta_1, \theta_2\right)\).
\textit{End-to-end forward pass} refers to sending input through the entire model, i.e., evaluating \(f\left(x, \theta_1, \theta_2\right)\) for some input \(x\).
And \textit{end-to-end backward pass} means taking gradient of the training objective with respect to all trainable parameters, i.e., evaluating the partial derivatives of \(L\) with respect to both \(\theta_1\) and \(\theta_2\) using the chain rule.
\rch{This survey} uses the term \textit{modular training} to refer to training schemes that do not require end-to-end forward pass or backward pass, and \textit{weakly modular training} for training schemes that need end-to-end forward pass but not end-to-end backward pass.
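To ground this notation, below is a minimal PyTorch sketch of the two-module set-up and of what the two end-to-end passes amount to. The module sizes, loss, and data are illustrative assumptions, not part of any reviewed method.
\begin{verbatim}
import torch
import torch.nn as nn

# A two-module network f(x, theta_1, theta_2)
# = f_2(f_1(x, theta_1), theta_2):
# f_1 is the input module, f_2 the output module.
f1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
f2 = nn.Linear(256, 10)
loss_fn = nn.CrossEntropyLoss()  # loss ell; its sample average gives L

x = torch.randn(32, 784)         # a batch of inputs
y = torch.randint(0, 10, (32,))  # their labels

# End-to-end forward pass: send the input through the entire model.
out = f2(f1(x))

# End-to-end backward pass: differentiate L with respect to both
# theta_1 and theta_2 via the chain rule. E2EBP performs both
# passes at every training step.
loss_fn(out, y).backward()
\end{verbatim}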
\section{Provably Optimal Learning Schemes}
\label{sec3}
In this section, the reviewed modular and weakly modular training methods are presented.
Modular methods will be discussed first in Section \ref{sec3:modular}.
Weakly modular methods will be presented in Section~\ref{sec3:ne2e}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{./figures/methods}
\caption{
E2EBP, modular, and weakly modular training schemes in the case where a model is trained as two modules.
Proxy Objective methods leverage a strictly local proxy objective \(L_1\) for training the input module.
Target Propagation approximates an ``inverse'' of the output module \(f_2\) using an auxiliary learnable inverse model \(g_2\), then backpropagates a target \(t_1\) instead of gradient (the connection from \(f_2\) to \(g_2\) is optional).
Synthetic Gradients methods approximate local gradient with an auxiliary trainable gradient model \(s_2\).
}
\label{fig1}
\end{figure*}
\begin{table*}[t]
\caption{Provably optimal modular and weakly modular training methods.}
\label{table1}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Training Scheme} & \textbf{Main Idea} & \textbf{Auxiliary Model} & \textbf{Forward Pass} & \textbf{Backward Pass}\\
\hline
Proxy Objective & Use a local proxy objective function & Not necessary & Not required & Not required \\
\hline
Target Propagation & Approximate ``inverse'' of downstream layers, backpropagate targets & Required & Required & Not required \\
\hline
Synthetic Gradients & Approximate local gradients & Required & Required & Required \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[t]
\caption{Performant instantiations of reviewed methods and best reported test accuracy on standard benchmarking datasets. Note that each result entry should only be compared against the corresponding E2EBP baseline due to the potentially different test settings across entries. VGG-11B and VGG-8B are customized VGG-11 and VGG-8, respectively. Only Proxy Objective methods can match the performance of E2EBP on competitive networks.}
\label{table2}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Training Scheme} & \textbf{Instantiation} & \textbf{Test Dataset} & \textbf{Network} & \textbf{Modularity} & \textbf{Acc. (\%)} & \textbf{E2EBP Acc. (\%)}\\
\hline
\multirow{10}{*}{Proxy Objective} & \cite{belilovsky2019greedy} & ImageNet & VGG-11 & Trained as 3-layer modules & 67.6 (Top-1)/88.0 (Top-5) & 67.9/88.0 \\
\cline{2-7}
& \multirow{3}{*}{\cite{belilovsky2020decoupled} (synchronous)} & \multirow{3}{*}{ImageNet} & VGG-13 & Trained as 4 modules & 67.8/88.0 & 66.6/87.5 \\
& & & VGG-19 & Trained as 2 modules & 70.8/90.2 & 69.7/89.7 \\
& & & ResNet-152 & Trained as 2 modules & 74.5/92.0 & 74.4/92.1 \\
\cline{2-7}
& \multirow{4}{*}{\cite{nokland2019training}} & CIFAR-10 & VGG-11B & \multirow{4}{*}{Trained as 1-layer modules} & 96.03 & 94.98 \\
& & CIFAR-100 & VGG-11B & & 79.9 & 76.3 \\
& & SVHN & VGG-8B & & 98.26 & 97.71 \\
& & STL-10 & VGG-8B & & 79.49 & 66.92 \\
\cline{2-7}
& \multirow{2}{*}{\cite{duan2021modularizing}} & \multirow{2}{*}{CIFAR-10} & ResNet-18 & \multirow{2}{*}{Trained as 2 modules} & 94.93 & 94.91 \\
& & & ResNet-152 & & 95.73 & 95.87 \\
\cline{2-7}
& \multirow{4}{*}{\cite{wang2021revisiting}} & CIFAR-10 & \tiny{DenseNet-BC-100-12} & \multirow{4}{*}{Trained as 2 modules} & 95.26 & 95.39 \\
& & SVHN & ResNet-110 & & 96.85 & 96.93 \\
& & STL-10 & ResNet-110 & & 79.01 & 77.73 \\
& & ImageNet & ResNeXt-101 & & 79.65/94.72 & 79.36/94.60 \\
\hline
\multirow{2}{*}{Target Propagation} & \multirow{2}{*}{\cite{meulemans2020theoretical}} & \multirow{2}{*}{CIFAR-10} & Small MLP & \multirow{2}{*}{Trained layerwise} & 49.64 & 54.40 \\
& & & Small CNN & & 76.01 & 75.62 \\
\hline
\multirow{2}{*}{Synthetic Gradients} & \multirow{2}{*}{\cite{lansdell2020learning}} & CIFAR-10 & \multirow{2}{*}{Small CNN} & \multirow{2}{*}{Trained layerwise} & 74.8 & 76.9 \\
& & CIFAR-100 & & & 48.1 & 51.2 \\
\hline
\multirow{1}{*}{\rch{Feedback Alignment}} & \multirow{1}{*}{\cite{xiao2018biologically}} & \rch{ImageNet} & \multirow{1}{*}{\rch{AlexNet}} & \multirow{1}{*}{\rch{Trained layerwise}} & \rch{47.57/23.68} & \rch{49.15/25.01} \\
\hline
\multirow{2}{*}{\rch{Auxiliary Variables}} & \multirow{2}{*}{\cite{zeng2018block}} & \rch{MNIST} & \multirow{2}{*}{\rch{Small MLP}} & \multirow{2}{*}{\rch{Trained layerwise}} & \rch{95.68} & \rch{95.33} \\
& & \rch{CIFAR-10} & & & \rch{44.96} & \rch{46.99} \\
\hline
\end{tabular}
\end{table*}
\subsection{Modular Learning Schemes}
\label{sec3:modular}
\subsubsection{Proxy Objective}
\label{sec3:po}
This strategy amounts to finding a \textit{proxy objective function} \(L_1\left(f_1, \theta_1, \omega_2, S\right)\) that can be used to train the input module.
The important thing to note is that this proxy objective does not involve the trainable parameters \(\theta_2\), which enables decoupling the training of \(f_1\) and \(f_2\) completely.
Methods based on a proxy objective can be abstracted into Algorithm~\ref{alg:proxy} (also see Fig.~\ref{fig1} for an illustration).
Note that these methods focus on finding \(L_1\), but not the actual optimization on \(L_1\) or \(L\) --- any off-the-shelf optimizer such as stochastic gradient descent can be used here.
\begin{algorithm}[t]
\caption{Proxy Objective}
\label{alg:proxy}
\begin{algorithmic}[1]
\Require A proxy objective function \(L_1\)
\Begin
\State Find \(\theta_1^\star\in\argmin_{\theta_1}L_1\left(f_1, \theta_1, \omega_2, S\right)\)
\State Find \(\theta_2^\star\in\argmin_{\theta_2}L\left(f, \theta_1^\star, \theta_2, S\right)\)
\State \Return \(\left(\theta_1^\star, \theta_2^\star\right)\)
\End
\end{algorithmic}
\end{algorithm}
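As a concrete rendering of Algorithm~\ref{alg:proxy}, the sketch below runs the two minimizations as two completely independent training loops, reusing the notation of the sketch in Section~\ref{sec2}; \texttt{loader} is an assumed iterable of batches and \texttt{proxy\_loss} a placeholder for \(L_1\), concrete choices of which are discussed next.
\begin{verbatim}
# Step 1 of Algorithm 1: theta_1 minimizes the proxy objective L_1.
opt1 = torch.optim.SGD(f1.parameters(), lr=0.1)
for x, y in loader:
    opt1.zero_grad()
    proxy_loss(f1(x), y).backward()  # gradients stay inside f_1
    opt1.step()

# Step 2 of Algorithm 1: theta_2 minimizes L on top of the frozen f_1.
opt2 = torch.optim.SGD(f2.parameters(), lr=0.1)
for x, y in loader:
    opt2.zero_grad()
    with torch.no_grad():
        h = f1(x)  # no gradient flows into the already-trained f_1
    loss_fn(f2(h), y).backward()
    opt2.step()
\end{verbatim}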
\paragraph{Instantiations}
Different variations of Proxy Objective methods differ mainly in how they choose the proxy objective \(L_1\).
Essentially, the existing proxy objectives can all be interpreted as characterizations of hidden representation separability.
Then training on \(L_1\) encourages \(f_1\) to improve separability of its output representations, making the classification problem simpler for \(f_2\).
There are mainly two schemes for quantifying this separability.
\begin{itemize}
\item \textbf{Separability through feature space distance/similarity}
Separability can be quantified via distance/similarity of learned hidden representation vectors.
In \cite{duan2019kernel,duan2021modularizing,wang2021revisiting}, this idea has been explored.
\cite{duan2019kernel,duan2021modularizing} proposed methods that do not require training auxiliary models, whereas in \cite{wang2021revisiting}, auxiliary networks were needed for transforming representations.
\cite{duan2021modularizing} proposed to quantify output data separability for \(f_1\) in terms of distances between its output data representation pairs in their native feature space.
Then \(L_1\) should encourage \(f_1\) to map example pairs from distinct classes further apart for better separability.
Specifically, suppose \(f_1\) ends with a nonlinearity \(\phi\) mapping into some inner product space (ReLU, for example), that is, \(f_1\left(\cdot, \theta_1\right) = \phi\circ g_1\left(\cdot, \theta_1\right)\) for some function \(g_1\). Then any loss function that can help learn \(\theta_1\) such that \(g_1\) maximizes
\begin{equation}
\label{eq1}
\left\|\phi\left(g_1\left(x_i, \theta_1\right)\right) - \phi\left(g_1\left(x_j, \theta_1\right)\right)\right\|
\end{equation}
for all pairs of \(x_i, x_j\) from \(S\) with \(y_i\neq y_j\) can be used as a proxy objective, where the norm is the canonical norm induced by the inner product.
More generally, this distance can be substituted by other distance metrics or similarity measures.
In addition to encouraging \(f_1\) to separate examples from distinct classes, one can additionally modify \(L_1\) such that \(f_1\) is encouraged to map examples from identical classes closer, making the pattern more separable for \(f_2\).
To implement this idea, \cite{duan2019kernel} suggested minimizing
\begin{equation}
\label{eq2}
\left\|\phi\left(g_1\left(x_i, \theta_1\right)\right) - \phi\left(g_1\left(x_j, \theta_1\right)\right)\right\|
\end{equation}
for all pairs of \(x_i, x_j\) from \(S\) with \(y_i = y_j\), in addition to the primary proxy of Eq.~\ref{eq1}; a sketch of such a pairwise proxy loss is given after this list.
In \cite{wang2021revisiting}, the authors proposed to attach an auxiliary representation-learning convolutional neural network (CNN) \(h_1\left(\cdot, \alpha_1\right)\) (\(\alpha_1\) denotes its trainable parameters) to \(f_1\) before computing feature distance/similarity.
This auxiliary model mapped the hidden representation to another representation space, and the proxy objective can be any objective function that drives \(h_1\left(f_1\left(x_i, \theta_1\right), \alpha_1\right)\) and \(h_1\left(f_1\left(x_j, \theta_1\right), \alpha_1\right)\) more (less) similar if \(y_i = (\neq) y_j\).
\(\alpha_1, \theta_1\) were trained jointly to optimize the proxy objective, and \(h_1\) was discarded during inference time.
Compared to \cite{duan2019kernel,duan2021modularizing}, the addition of an auxiliary model gives the user the flexibility to choose the dimensionality of the feature space in which the proxy objective is computed, regardless of the given network layer dimension.
The downside is that this auxiliary model requires extra resources to design and train.
Note that the training method in \cite{duan2019kernel} was initially established for the so-called ``kernel networks''.
But \cite{duan2021modularizing} later showed that neural networks are special cases of kernel networks, therefore making all results in \cite{duan2019kernel} applicable to neural networks as well.
\item \textbf{Separability through auxiliary classifier performance} Instead of expressing data separability via distance in feature space as above, \cite{belilovsky2019greedy,wang2021revisiting} attached an auxiliary CNN classifier to the output of \(f_1\) and used its accuracy as an indicator of data separability.
To train \(f_1\) to improve data separability, the authors proposed to use a classification loss such as cross-entropy on this auxiliary classifier and \(f_1\) as a proxy objective for training \(f_1\).
This auxiliary classifier was trained together with \(f_1\) to minimize the proxy objective and was discarded at test time.
\cite{mostafa2018deep,marquez2018deep} explored a similar idea but used different auxiliary classifiers.
Specifically, the auxiliary classifiers in \cite{mostafa2018deep} were single-layer fully-connected networks with random, fixed weights, and in \cite{marquez2018deep}, the auxiliary classifiers were two-layer, fully-connected, and trainable.
Using fixed auxiliary classifiers produces less performant main classifiers, as demonstrated in \cite{mostafa2018deep}.
In \cite{wang2021revisiting}, expressing data separability through similarity in feature space almost always outperformed doing so through performance of auxiliary classifiers in terms of final network performance.
\item As a combination of the earlier two approaches, one may quantify data separability in terms of a combination of both feature space distance and auxiliary classifier accuracy.
In \cite{nokland2019training}, two auxiliary networks were used to facilitate the training of \(f_1\).
The first network, denoted \(h_1\left(\cdot, \alpha_1\right)\), was a convolutional representation-learning module (\(\alpha_1\) denotes its trainable parameters) and the second was a linear classifier denoted \(q_1\left(\cdot, \beta_1\right)\) (\(\beta_1\) denotes its trainable parameters).
The proxy objective was a weighted combination of a ``similarity matching loss'' and a cross-entropy loss.
The similarity matching loss was the Frobenius norm between the cosine similarity matrix of \(h_1\left(f_1\left(\cdot, \theta_1\right), \alpha_1\right)\) evaluated on a batch of training examples and the cosine similarity matrix of their one-hot encoded labels.
The cross-entropy loss was on \(q_1\left(f_1\left(\cdot, \theta_1\right), \beta_1\right)\).
\(f_1, h_1, q_1\) were trained jointly to minimize the proxy objective, and \(h_1, q_1\) were discarded at test time.
Minimizing the similarity matching loss can be interpreted as maximizing distances between pairs of examples from distinct classes and minimizing those between pairs from identical classes (this loss is also sketched after this list).
Therefore, this loss term can be interpreted as quantifying data separability in terms of feature space distance.
The cross-entropy loss and the use of an auxiliary classifier can be viewed as expressing data separability using auxiliary classifier accuracy.
Thus, this method can be seen as a combination of the earlier two approaches.
\item In an effort to further reduce computational complexity and memory footprint for the greedy training method in \cite{belilovsky2019greedy}, \cite{belilovsky2020decoupled} proposed two variants.
In the synchronous variant, for each training step, an end-to-end forward pass was performed, then all modules were trained simultaneously, each minimizing its own proxy objective as proposed in \cite{belilovsky2019greedy}.
The algorithmic stability of this variant was established theoretically and verified empirically.
In the other asynchronous variant, no end-to-end forward pass was needed.
Instead, all modules were trained simultaneously in an asynchronous fashion.
This was made possible by maintaining a replay buffer for each module that contained stashed past outputs from the immediate upstream module.
Then each module received its input from this replay buffer instead of actual current output from its upstream module, eliminating the need for end-to-end forward pass.
Each module was still trained to minimize its own proxy objective in this asynchronous variant.
The synchronous variant is not fully modular, since end-to-end forward pass is needed.
The advantage of these variants is that, especially with the asynchronous variant, one can implement highly parallelized pipelines to train modules simultaneously instead of sequentially, further improving training speed and memory efficiency.
Simultaneous training of all modules cannot be done with other Proxy Objective methods since the training of \(f_2\) requires outputs from a trained \(f_1\).
\item One practical issue observed in the above modular training works is that the performance usually drops when a given network is split into too many thin modules \cite{wang2021revisiting}.
\cite{wang2021revisiting} hypothesized that this was caused by greedy training ``collapsing'' task-relevant information too aggressively in the early modules.
And the authors proposed a new (intractable \rch{due to the need for estimating a maximizer of a mutual information term}) proxy objective motivated from an information theoretic perspective.
In practice, their proposal amounted to an add-on term to the previously discussed proxy objectives.
This add-on term was a reconstruction loss of a CNN decoder head attempting to reconstruct \(x\) using \(f_1(x, \theta_1)\) as input.
The decoder and \(f_1\) were jointly trained to minimize the proxy objective and this extra reconstruction loss term.
Conceptually, it is easy to see that this extra reconstruction loss encourages \(f_1\) to retain information about the input \(x\), countering how the earlier proxy objectives drive \(f_1\) to keep information only about the label \(y\).
According to the authors, information about input may not be immediately helpful for improving separability in the hidden layers (as opposed to information about label), but it may help overall network performance.
\end{itemize}
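As referenced in the list above, the sketch below gives possible forms of two of the discussed proxy losses: a hinge-based pairwise loss in the spirit of Eqs.~\ref{eq1} and~\ref{eq2}, and a rough version of the similarity matching loss of \cite{nokland2019training}. Both are illustrative simplifications rather than the exact published objectives.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pairwise_proxy_loss(h, y, margin=1.0):
    # Push different-class pairs apart (Eq. 1) and pull same-class
    # pairs together (Eq. 2); the hinge and margin are illustrative.
    h = h.flatten(1)          # treat representations as vectors
    dist = torch.cdist(h, h)  # pairwise Euclidean distances
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    diff = 1.0 - same
    pull = (dist * same).sum() / same.sum().clamp(min=1.0)
    push = (F.relu(margin - dist) * diff).sum() / diff.sum().clamp(min=1.0)
    return pull + push

def similarity_matching_loss(h, y, num_classes=10):
    # Match the cosine similarity matrix of the representations to
    # that of the one-hot labels (the published version additionally
    # uses auxiliary networks and a cross-entropy term).
    hn = F.normalize(h.flatten(1), dim=1)
    on = F.normalize(F.one_hot(y, num_classes).float(), dim=1)
    return (hn @ hn.t() - on @ on.t()).pow(2).sum()
\end{verbatim}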
\paragraph{Optimality Guarantees}
\cite{duan2021modularizing} gave an in-depth analysis of the optimality of training using the proxy objective proposed therein (Eq.~\ref{eq1}).
In a modular training setting, the optimal set of input module parameters consists of those for which there exist output module parameters such that the two combine into a minimizer of the overall objective.
Mathematically, this optimal set is given as
\begin{equation}
\Theta_1^\star:= \left\{\theta_1\Bigg\vert \exists\theta_2\text{ s.t. }\left(\theta_1, \theta_2\right)\in\argmin_{\left(\theta_1, \theta_2\right)} L\left(f, \theta_1, \theta_2, S\right)\right\}.
\end{equation}
It was shown, under certain assumptions on \(L\), \(\phi\), and \(f_2\), that \(\argmin_{\theta_1}L_1\left(f_1, \theta_1, \omega_2, S\right)\subset\Theta_1^\star\) (Theorem 4.1 in \cite{duan2021modularizing}), justifying the optimality of training \(f_1\) to minimize the proxy objective.
The assumptions on \(L\) and \(\phi\) are mild.
In particular, the result works with popular classification objectives such as cross-entropy.
However, the assumption on \(f_2\) requires that it is a linear layer.
Overall, this optimality guarantee covers the case where a network is trained as two modules, with the output linear layer as the output module and everything else as the input module.
Optimality guarantee for more general training settings (training as more than two modules) was provided in \cite{duan2019kernel} (Theorem 2), but under stronger assumptions on \(\phi\) (see discussion immediately preceding the theorem).
One thing to note with this analysis is that the proxy objective is only picked to have its minima aligned with those of the overall objective.
For smooth loss landscapes, solutions in a small neighborhood around a minimum should still effectively make the overall objective function value small.
But in general, the behavior of the proxy objective away from its minima is not constrained to align with that of the overall objective.
This implies that the solutions learned with this method may be suboptimal when the module being trained does not have enough capacity to minimize its proxy objective to a reasonable degree.
In practice, \cite{nokland2019training,wang2021revisiting} showed that effective training with low-capacity modules can be achieved by essentially enhancing the method in \cite{duan2021modularizing} with the idea of \cite{belilovsky2019greedy} or an add-on reconstruction term.
A theoretical understanding of the instantiation proposed in \cite{nokland2019training} is still lacking.
In \cite{wang2021revisiting}, the authors motivated their proxy objectives from an information theoretic perspective.
Specifically, they showed that a proxy objective formed by combining their reconstruction loss term with either feature space similarity/distance or auxiliary classifier performance can be interpreted as simultaneously maximizing (1) mutual information between hidden representation and input and (2) mutual information between hidden representation and label.
However, the authors did not establish a connection between this mutual information maximization scheme and minimizing the overall training objective for the network.
Nevertheless, the strong empirical performance shown in the paper hints that such a connection should exist.
\cite{belilovsky2019greedy} provided a brief analysis on the optimality of their method and sketched out ideas that can potentially lead to optimality results.
But no concrete optimality guarantee was established.
\cite{malach2018provably} analyzed in detail a more complicated variant of the method in \cite{belilovsky2019greedy}.
It was shown that the training algorithm produces a correct classifier under a very strong data-generation assumption.
However, the proposed variant in \cite{malach2018provably} also imposed architectural constraints on the model and was not accompanied by empirical evidence that it would work well with competitive network architectures.
\och{Although not originally established for modular training, some results from similarity learning can be viewed as optimality guarantees for Proxy Objective methods.
\((\epsilon, \gamma, \tau)\)-good similarity characterizes ideal hidden representations for which a downstream linear separator achieving small classification risk exists \cite{balcan2008improved,bellet2012similarity}.
Proxy objectives can be so designed that they drive the hidden layers towards \((\epsilon, \gamma, \tau)\)-good representations.}
\paragraph{Advantages}
\begin{itemize}
\item Proxy Objective methods have by far the best empirical performance among alternatives to E2EBP (see Table~\ref{table2} and \cite{bartunov2018assessing}).
To the best of our knowledge, they are the only family of modular or weakly modular training methods that have been shown to produce similar or even better performance compared to E2EBP on challenging benchmark datasets such as CIFAR-10 \cite{krizhevsky2009learning,duan2021modularizing,nokland2019training,belilovsky2020decoupled,wang2021revisiting}, CIFAR-100, SVHN \cite{netzer2011reading,wang2021revisiting}, STL-10 \cite{coates2011analysis,nokland2019training,wang2021revisiting}, and ImageNet \cite{russakovsky2015imagenet,deng2009imagenet,belilovsky2019greedy,belilovsky2020decoupled,wang2021revisiting}.
\item Proxy Objective methods are arguably the simplest to use in practice compared to the other methods reviewed.
In particular, unlike the other methods, Proxy Objective methods do not always require learning auxiliary models \cite{duan2019kernel,duan2021modularizing}.
\item Proxy Objective methods give the user full and direct control over the hidden representations.
When training a hidden module with a proxy objective, the supervision and potentially any side information that the user wishes to inject into the module can be directly passed to it without being propagated through the downstream module first.
\end{itemize}
\paragraph{Current Limitations and Future Work}
\begin{itemize}
\item Despite the strong empirical performance of Proxy Objective methods and the existence of some optimality proofs, optimality analysis for the various instantiations is far from complete.
It is unclear how the optimality analysis performed for feedforward models in classification \cite{duan2019kernel,duan2021modularizing} can be extended to non-feedforward architectures, other objective functions in classification, or regression tasks.
Optimality guarantees for training settings beyond the simple two-module one in, e.g., \cite{duan2021modularizing}, are also lacking, although \cite{nokland2019training,wang2021revisiting} provided strong empirical evidence that modular training with few architectural assumptions and arbitrarily fine network partitions can still produce E2EBP-matching performance.
Therefore, a theoretical analysis on the method in \cite{nokland2019training} (essentially combining \cite{duan2021modularizing} and \cite{belilovsky2019greedy}) and \cite{wang2021revisiting} (adding an extra reconstruction term) may be of interest.
\item While much work has been done on studying the hidden representations learned by E2EBP, little is known about those learned by modular training.
There are at least two directions in which this topic can be pursued.
First, deep learning models have been known to create ``hierarchical'' internal representations under E2EBP --- a feature widely considered to be one of the factors contributing to their success \cite{bengio2013representation}.
As deep architectures trained with Proxy Objective methods have been shown to provide state-of-the-art performance on challenging datasets such as ImageNet \cite{belilovsky2020decoupled,wang2021revisiting}, it would therefore be interesting to dissect these models and study the characteristics of the internal representations learned.
Notably, \cite{wang2021revisiting} already pointed out that some existing Proxy Objective methods do not learn representations that are ``hierarchical'' enough, leading to poor performance when a given model is trained as too many thin modules.
As another direction, while there has been work studying using proxy objectives as side objectives along with E2EBP to enhance generalization of the model \cite{lee2015deeply}, the effects of using proxy objectives alone to inject prior knowledge into the model are understudied in the context of modular training.
It is possible that such practice can yield stronger effects due to its more direct nature.
And future work may study its impact on model generalization, adversarial robustness \cite{goodfellow2014explaining}, etc.
\end{itemize}
\paragraph{Further Implications on Other Research Domains}
\label{others}
Proxy Objective methods have profound implications on deep learning.
\begin{itemize}
\item \textbf{Modularized workflows:}
Fully modular training unlocks modularized workflows for deep learning.
As argued in \cite{duan2021modularizing,gomez2020interlocking,jaderberg2017decoupled}, such workflows can significantly simplify the usually arduous procedure of implementing performant deep learning pipelines by allowing effective divide-and-conquer.
Indeed, most engineering disciplines such as software engineering consider modularization an integral part of any workflow for enhanced scalability.
And modular training brings the freedom to embrace full modularization to deep learning engineering.
\item \textbf{Transfer learning:}
As an example on how modularized workflows can be advantageous in practice, \cite{duan2021modularizing} showed that a proxy objective can be naturally used as an estimator for module reusability estimation in transfer learning.
Since a module that better minimizes a proxy objective can be used to build a network that better minimizes the overall objective, evaluating a proxy objective value on a batch of target task data can be used as an indicator on the transfer performance of a trained network body.
This idea can be further extended to provide a model-based solution to the task transferability estimation problem --- the theoretical problem behind reusability estimation.
\cite{duan2021modularizing} provided some empirical evidence on the validity of this simple approach, but rigorous comparisons against other existing (and often more complicated) methods from the transfer learning literature remain future work; a minimal sketch of this estimator is given after this list.
\item \textbf{Label requirements:}
The Proxy Objective instantiations proposed in \cite{duan2021modularizing,wang2021revisiting} use only labeled data of the form \(\left(x_i, x_j, \mathbbm{1}_{\{y_i = y_j\}}\right)\) for training the hidden layers, where \(\mathbbm{1}\) denotes the indicator function.
And according to the optimality result in \cite{duan2021modularizing}, labeled data of this form is sufficient for training the hidden layers --- the specific class of \(x_i\) or \(x_j\) is not needed.
This observation reveals a novel insight about learning: Pairwise summary on data is sufficient for learning the optimal hidden representations.
Further, \cite{duan2021modularizing} demonstrated that the output module can be trained well with as few as a single randomly chosen fully-labeled example per class.
In practice, this indicates that it suffices to have annotators mostly provide labels of the form \(\mathbbm{1}_{\{y_i = y_j\}}\), i.e., identify if each given pair is from the same class, which may be easier to annotate compared to full labels \(y_i, y_j\).
Further, being able to learn effectively without knowing the specific values of \(y_i, y_j\) may be useful for user privacy protection when the label contains sensitive information.
The theory of learning from such pairwise summaries of data and its applications have recently been developed in \cite{duan2021labels,shimada2021classification}.
\item \textbf{Optimization:}
Since Proxy Objective methods completely decouple the training of the modules, each training session becomes an optimization problem with a smaller set of parameters and potentially more desirable properties.
For example, if the output layer is trained by itself, the optimization problem can be convex, depending on the choice of the loss function.
Studying modular learning through the lens of optimization can be a worthwhile future direction.
\item \textbf{Connections with contrastive learning:}
The Proxy Objective instantiation proposed in \cite{duan2021modularizing} (two-module version) can be viewed as a supervised analog of contrastive learning \cite{saunshi2019theoretical}.
Specifically, contrastive learning also trains the network as two modules and uses a contrastive loss to train the hidden layers.
A typical contrastive loss encourages the hidden layers to map similar example pairs closer and dissimilar pairs farther.
The key difference arises from the fact that the two methods work in different learning settings, one (the standard contrastive learning) in unsupervised learning and the other in supervised learning.
In unsupervised contrastive learning, a pair is considered similar or dissimilar based on some prior knowledge such as the proximity of the individual examples.
On the other hand, in \cite{duan2021modularizing}, a pair is considered similar or dissimilar based on if the examples are from the same class.
Due to their close relationship, we expect results and observations from contrastive learning to benefit the research in Proxy Objective methods, and vice versa.
\cite{lowe2019putting} directly applied self-supervised contrastive learning as a proxy objective for training the hidden layers.
However, the optimality of such practice was not justified in the paper.
\end{itemize}
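As referenced in the list above, a minimal sketch of the reusability estimator (the function and its arguments are illustrative, not the exact published procedure) could look as follows.
\begin{verbatim}
def reusability_score(f1, proxy_loss, target_loader):
    # Score a pretrained body f1 for a new task by evaluating a
    # proxy objective on target-task batches (lower is better).
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in target_loader:
            total += proxy_loss(f1(x), y).item()
            n += 1
    return total / max(n, 1)
\end{verbatim}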
\subsection{Weakly Modular Learning Schemes}
\label{sec3:ne2e}
\subsubsection{Target Propagation}
\label{sec3:tp}
On a high level, Target Propagation methods train each module by having it regress to an assigned target.
This target is chosen such that perfect regression to it results in a decrease in the overall loss function value or at least a decrease in the loss function value of the immediate downstream module.
At each training step, the targets for the network modules are found sequentially (from the output module to the input module).
The output module uses the true labels as its target.
Specifically, Target Propagation assumes the usage of an iterative optimization algorithm, and, at each optimization step, generates a target \(t_1\)\footnote{This target depends on the label and network parameters \(\theta_1, \theta_2\), but is considered fixed once computed during the actual optimization. Therefore, this dependence is not explicitly written out.} for \(f_1(\cdot, \theta_1)\) to regress to, thereby removing the need for end-to-end backward pass (of gradients).
This target is essentially chosen as the inverse image of the label under \(f_2(\cdot, \theta_2)\).
And to approximate this inverse, an auxiliary model \(g_2(\cdot, \gamma_2)\), typically chosen to be a fully-connected neural network layer, needs to be learned alongside the main model \(f\), where \(\gamma_2\) represents the trainable parameters of \(g_2\).
Target Propagation methods can be understood as backpropagating the target through approximated inverses of layers.
Re-writing \(L\left(f, \theta_1, \theta_2, \left(x, y\right)\right)\) as \(H\left(f\left(x, \theta_1, \theta_2\right), y, \theta_1, \theta_2\right)\) for some \(H\), these methods can be abstracted as Algorithm~\ref{alg:tp}.
Note that this presentation assumes a training batch size of \(1\); the extension to mini-batch training with a larger batch size is trivial.
\begin{algorithm}[t]
\caption{An Optimization Step in Target Propagation}
\label{alg:tp}
\begin{algorithmic}[1]
\Require An auxiliary inverse model \(g_2(\cdot, \gamma_2)\), a target generating function \(T\left(g_2\left(\cdot, \gamma_2\right), u, v, w\right)\) (\(u, v, w\) are placeholder variables), training data \(\left(x_{i}, y_{i}\right)\in S\), a reconstruction loss \(\ell^\text{rec}\left(g_2\left(\cdot, \gamma_2\right), f\left(\cdot, \theta_1, \theta_2\right), x_{i}\right)\), step size \(\eta > 0\)
\Begin
\State End-to-end forward pass
\begin{equation}
a_1\left(\theta_1\right) \leftarrow f_1\left(x_{i}, \theta_1\right), a_2\left(\theta_1, \theta_2\right) \leftarrow f_2\left(a_1\left(\theta_1\right), \theta_2\right)
\end{equation}
\State Obtain target for output module
\begin{equation}
t_2 \leftarrow a_2\left(\theta_1, \theta_2\right) - \eta\frac{\partial H(u, y_{i}, \theta_1, \theta_2)}{\partial u}\bigg\vert_{u = a_2\left(\theta_1, \theta_2\right)}
\end{equation}
\State Obtain target for input module
\begin{equation}
t_1 \leftarrow T\left(g_2\left(\cdot, \gamma_2\right), a_1\left(\theta_1\right), a_2\left(\theta_1, \theta_2\right), t_2\right)
\end{equation}
\State Update \(\gamma_2\) to minimize
\begin{equation}
\ell^\text{rec}\left(g_2\left(\cdot, \gamma_2\right), f\left(\cdot, \theta_1, \theta_2\right), x_{i}\right)
\end{equation}
\State Update \(\theta_1\) to minimize
\begin{equation}
\left\|a_1\left(\theta_1\right) - t_1\right\|_2^2
\end{equation}
\State Fix \(\theta_1\), update \(\theta_2\) to minimize
\begin{equation}
\left\|a_2\left(\theta_1, \theta_2\right) - t_2\right\|_2^2
\end{equation}
\End
\end{algorithmic}
\end{algorithm}
Target Propagation does not allow fully modular training since an end-to-end forward pass is still needed in each training step.
\paragraph{Instantiations}
Different instantiations of Target Propagation differ in their choices of the target generation function \(T\) and the reconstruction loss \(\ell^\text{rec}\).
\begin{itemize}
\item Vanilla Target Propagation \cite{bengio2014auto,lee2015difference}:
\begin{align}
&T\left(g_2\left(\cdot, \gamma_2\right), a_1\left(\theta_1\right), a_2\left(\theta_1, \theta_2\right), t_2\right) := g_2\left(t_2, \gamma_2\right);\\
&\ell^\text{rec}\left(g_2\left(\cdot, \gamma_2\right), f\left(\cdot, \theta_1, \theta_2\right), x_{i}\right) \\\nonumber
& \quad := \left\|g_2\left(f_2\left(a_1\left(\theta_1\right) + \epsilon, \theta_2\right), \gamma_2\right) - \left(a_1\left(\theta_1\right) + \epsilon\right)\right\|_2^2,
\end{align}
where \(\epsilon\) is some added Gaussian noise and \(a_1, a_2\) are stashed values from forward pass (see Algorithm \ref{alg:tp}).
It is easy to see how this reconstruction loss encourages \(g_2\) to approximate an ``inverse'' of \(f_2\), and the added noise \(\epsilon\) enhances generalization.
In the case where the network is trained as \(Q\) \((Q \geq 2)\) modules, the reconstruction loss for \(g_q\), \(q=2, \ldots, Q\) becomes
\begin{align}
&\ell^\text{rec}_q\left(g_q\left(\cdot\right), f\left(\cdot\right), x_{i}\right) \\\nonumber
& \quad := \left\|g_q\left(f_q\left(a_{q-1} + \epsilon\right)\right) - \left(a_{q-1} + \epsilon\right)\right\|_2^2,
\end{align}
where the trainable parameters are omitted for simplicity.
\item Difference Target Propagation \cite{lee2015difference}:
\begin{align}
&T\left(g_2\left(\cdot, \gamma_2\right), a_1\left(\theta_1\right), a_2\left(\theta_1, \theta_2\right), t_2\right) \\\nonumber
& \quad := g_2\left(t_2, \gamma_2\right) + \left[a_1\left(\theta_1\right) - g_2\left(a_2\left(\theta_1, \theta_2\right), \gamma_2\right)\right].
\end{align}
Difference Target Propagation uses the same reconstruction loss as Vanilla Target Propagation.
The extra term \(\left[a_1\left(\theta_1\right) - g_2\left(a_2\left(\theta_1, \theta_2\right), \gamma_2\right)\right]\) corrects any error \(g_2\) makes in estimating an ``inverse'' of \(f_2\).
It can be shown that this correction term enables a more robust optimality guarantee; a sketch of this target computation is given after this list.
\item Difference Target Propagation with Difference Reconstruction Loss \cite{meulemans2020theoretical}:
This instantiation uses the same target generating function \(T\) as Difference Target Propagation but a different reconstruction loss dubbed the Difference Reconstruction Loss (DRL).
Defining the difference-corrected \(g_2\) as \(g'_2(\cdot, \gamma_2) = g_2\left(\cdot, \gamma_2\right) + \left[a_1\left(\theta_1\right) - g_2\left(a_2\left(\theta_1, \theta_2\right), \gamma_2\right)\right]\), the DRL is given as
\begin{align}
&\ell^\text{rec}\left(g_2\left(\cdot, \gamma_2\right), f\left(\cdot, \theta_1, \theta_2\right), x_{i}\right) \\\nonumber
& \quad := \left\|g'_2\left(f_2\left(a_1\left(\theta_1\right) + \epsilon_1, \theta_2\right), \gamma_2\right) - \left(a_1\left(\theta_1\right) + \epsilon_1\right)\right\|_2^2\\\nonumber
& \quad\quad + \lambda\left\|g'_2\left(a_2\left(\theta_1, \theta_2\right) + \epsilon_2, \gamma_2\right) - a_1\left(\theta_1\right)\right\|_2^2
\end{align}
where \(\epsilon_1, \epsilon_2\) are some added Gaussian noise and \(\lambda\) a regularization parameter.
In the case where the network is trained as \(Q\) modules, the reconstruction loss for \(g_q\) becomes
\begin{align}
&\ell^\text{rec}_q\left(g_q, \ldots, g_Q, f_{q\rightarrow Q}, x_{i}\right) \\\nonumber
& \quad := \left\|g'_q\circ\cdots\circ g'_Q\left(f_{q\rightarrow Q}\left(a_{q-1} + \epsilon_1\right)\right) - \left(a_{q-1} + \epsilon_1\right)\right\|_2^2\\\nonumber
& \quad\quad + \lambda\left\|g'_q\circ\cdots\circ g'_Q\left(a_Q + \epsilon_2\right) - a_{q-1}\right\|_2^2,
\end{align}
where the trainable parameters are omitted for simplicity, \(g'_q\circ\cdots\circ g'_Q\) (by abusing notation) denotes the operation of recursively computing inverse until \(g'_q\), and \(f_{q\rightarrow Q}\) denotes the composition of \(f_q, \ldots, f_Q\).
\item Direct Difference Target Propagation \cite{meulemans2020theoretical}:
This instantiation modifies the Difference Reconstruction Loss by modeling \(g'_q\circ\cdots\circ g'_Q\) with a direct learnable connection from the output module into the activation space of module \(q-1\).
\end{itemize}
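As referenced in the list above, the following is a minimal PyTorch sketch of one Difference Target Propagation step, instantiating Algorithm~\ref{alg:tp} with the difference-corrected target; the optimizers, the noise scale, and the squared-error reductions are illustrative assumptions.
\begin{verbatim}
def dtp_step(f1, f2, g2, opt1, opt2, optg, x, y, loss_fn,
             eta=0.1, sigma=0.1):
    a1 = f1(x)                 # end-to-end forward pass
    a2 = f2(a1.detach())

    # Target for the output module: a gradient step in activation space.
    a2_leaf = a2.detach().requires_grad_(True)
    loss_fn(a2_leaf, y).backward()
    t2 = (a2_leaf - eta * a2_leaf.grad).detach()

    # DTP target for the input module, with the correction term.
    with torch.no_grad():
        t1 = g2(t2) + (a1 - g2(a2))

    # Train g2 to invert f2 around noisy activations.
    optg.zero_grad()
    noisy = (a1 + sigma * torch.randn_like(a1)).detach()
    ((g2(f2(noisy)) - noisy) ** 2).sum().backward()
    optg.step()

    # Each module regresses to its target; no end-to-end backward pass.
    opt1.zero_grad()
    ((f1(x) - t1) ** 2).sum().backward()
    opt1.step()
    opt2.zero_grad()  # also clears the gradients f2 received above
    ((f2(a1.detach()) - t2) ** 2).sum().backward()
    opt2.step()
\end{verbatim}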
\paragraph{Optimality Guarantees}
If \(g_2=f_2^{-1}\) and \(f_2\) is a linear layer on top of some elementwise nonlinearity, then the local gradient produced by Vanilla Target Propagation can be shown to align well with (within a \(90^\circ\) proximity of) the true gradient of the overall loss function in the input module under some additional mild assumptions (Theorem 1 in \cite{lee2015difference}).
A similar result can be found in \cite{meulemans2020theoretical} (Theorem 6).
Under some mild assumptions, the target produced by Difference Target Propagation can be shown to cause a decrease in the loss of the immediate downstream module if this downstream module is already close to its own target (Theorem 2 in \cite{lee2015difference}).
In \cite{meulemans2020theoretical}, it was shown that, because \(f_2\) is typically not invertible, Difference Target Propagation does not propagate the so-called ``Gauss-Newton target'' as \(t_1\), i.e., target that represents an update from an approximate Gauss-Newton optimization step.
And minimizing the proposed DRL encourages the propagation of such a target.
The benefit of training the input module to regress to a Gauss-Newton target is that, at least in certain settings, the resulting gradients in the input module can be shown to align well with gradients computed from the overall loss (Theorem 6 in \cite{meulemans2020theoretical}), thus leading to effective training.
Almost all existing theoretical optimality results require that the network consists of purely linear layers (possibly linked by elementwise nonlinearities).
\paragraph{Advantages}
As a weakly modular method, Target Propagation does not require end-to-end backward pass and updates each module individually after each forward pass is done.
When computing gradients for the entire model using the chain rule becomes expensive, Target Propagation may therefore help save computation since it only needs module-wise gradients.
This also enables training models that have non-differentiable operations, as demonstrated in \cite{lee2015difference}.
These advantages are of course shared by the other two families of methods.
Note that whether and how much computation can be saved depends highly on the actual use case.
To be specific, while Target Propagation does not need full backward pass, it does require nontrivial computations for calculating the hidden targets that involve training and evaluating the auxiliary models.
Depending on how complex these models are, this extra workload may outweigh the savings from eliminating the full backward pass.
Further, some Target Propagation variants require stashing forward pass results for evaluating hidden targets, meaning that they may not have any advantage over E2EBP in terms of memory footprint either.
\paragraph{Current Limitations and Future Work}
\begin{itemize}
\item The auxiliary models require extra human (architecture selection, hyperparameter tuning, etc.) and machine resources.
\item Target Propagation methods have not been shown to yield strong performance on more challenging datasets such as CIFAR-10 and ImageNet or on more competitive networks \cite{meulemans2020theoretical,bartunov2018assessing}.
\item Similar to Proxy Objective methods, optimality results for more general settings, in particular, broader network architecture families, are lacking.
\end{itemize}
\subsubsection{Synthetic Gradients}
\label{sec3:sg}
Synthetic Gradients methods approximate local gradients and use those in place of real gradients produced by end-to-end backward pass for training.
Specifically, Synthetic Gradients methods assume that the network weights are updated using a gradient-based optimization algorithm such as stochastic gradient descent.
Then these methods approximate local gradients with auxiliary models.
These auxiliary models are typically implemented with fully-connected networks, and are trained to regress to a module's gradients (gradients of the overall objective function with respect to the module's activations) when given its activations.
By leveraging these local gradient models, Synthetic Gradients methods reduce the frequency with which end-to-end backward passes are needed, using the synthesized gradients in place of real gradients.
End-to-end backward passes are only performed occasionally to collect real gradients for training the local gradient models.
Re-writing \(L\left(f, \theta_1, \theta_2, \left(x, y\right)\right)\) as \(H\left(f\left(x, \theta_1, \theta_2\right), y, \theta_1, \theta_2\right)\) for some \(H\), these methods can be abstracted as Algorithm~\ref{alg:synthetic}.
It is possible to reduce the frequency with which the forward pass is needed as well, by approximating the forward-pass signals with auxiliary synthetic input models that predict inputs to modules given data.
Synthetic Gradients methods do not allow truly weakly modular training since occasional end-to-end backward (or forward) passes are needed to learn the synthetic gradient (or input) models.
They can only be used to accelerate end-to-end training and enable parallelized optimizations.
\begin{algorithm}[t]
\caption{An Optimization Step in Synthetic Gradients}
\label{alg:synthetic}
\begin{algorithmic}[1]
\Require An auxiliary synthetic gradient model \(s_2\left(\cdot, \psi_2\right)\), training data \(\left(x_{i}, y_{i}\right)\), step size \(\eta > 0\)
\Begin
\State End-to-end forward pass
\begin{equation}
a_1\left(\theta_1\right) \leftarrow f_1\left(x_{i}, \theta_1\right), a_2\left(\theta_1, \theta_2\right) \leftarrow f_2\left(a_1\left(\theta_1\right), \theta_2\right)
\end{equation}
\State Update output module
\begin{equation}
\theta_2 \leftarrow \theta_2 - \eta\frac{\partial H\left(f\left(x_i, \theta_1, u\right), y_i, \theta_1, u\right)}{\partial u}\bigg\vert_{u = \theta_2}
\end{equation}
\State Obtain synthetic gradients for input module
\begin{equation}
\hat{\delta}_1\left(\theta_1, \psi_2\right) \leftarrow s_2\left(a_1\left(\theta_1\right), \psi_2\right)
\end{equation}
\State Update input module
\begin{equation}
\theta_1 \leftarrow \theta_1 - \eta \hat{\delta}_1\left(\theta_1, \psi_2\right)\frac{\partial a_1\left(u\right)}{\partial u}\bigg\vert_{u=\theta_1}
\end{equation}
\If{update synthetic gradient model}
\State End-to-end backward pass, obtain true gradients
\begin{equation}
\delta_1 \leftarrow \frac{\partial H\left(f_2\left(u, \theta_2\right), y_{i}, \theta_1, \theta_2\right)}{\partial u}\bigg\vert_{u=a_1\left(\theta_1\right)}
\end{equation}
\State Update \(\psi_2\) to minimize
\begin{equation}
\left\|\delta_1 - \hat{\delta}_1\left(\theta_1, \psi_2\right)\right\|_2^2
\end{equation}
\EndIf
\End
\end{algorithmic}
\end{algorithm}
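As a concrete rendering of Algorithm~\ref{alg:synthetic}, the sketch below performs one optimization step with a synthetic gradient model \texttt{s2}; all names, the optimizers, and the update schedule for \texttt{s2} are illustrative assumptions.
\begin{verbatim}
def sg_step(f1, f2, s2, opt1, opt2, opts, x, y, loss_fn,
            fit_s2=False):
    a1 = f1(x)  # end-to-end forward pass

    # Update the output module with its true local gradient; this
    # backward pass stops at a1 and records dL/da1 as a byproduct.
    opt2.zero_grad()
    a1_leaf = a1.detach().requires_grad_(True)
    loss_fn(f2(a1_leaf), y).backward()
    opt2.step()

    # Update the input module with the SYNTHETIC gradient s2(a1).
    opt1.zero_grad()
    with torch.no_grad():
        delta_hat = s2(a1.detach())
    a1.backward(delta_hat)  # chain the synthetic gradient through f_1
    opt1.step()

    # Occasionally regress s2 onto the true gradient.
    if fit_s2:
        opts.zero_grad()
        ((s2(a1.detach()) - a1_leaf.grad.detach()) ** 2).sum().backward()
        opts.step()
\end{verbatim}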
\paragraph{Instantiations}
\begin{itemize}
\item \cite{jaderberg2017decoupled,czarnecki2017understanding}: Proposed the original instantiation, which is fully described above.
\item To remove the need for occasional end-to-end backward pass, \cite{lansdell2020learning} proposed a method to obtain target signals for training the synthetic gradient models using only local information.
However, this method necessitates the use of stochastic networks.
And the performance reported in the paper is underwhelming compared to either \cite{jaderberg2017decoupled} or E2EBP.
Therefore, we do not consider it as a training method that works in the general set-up as the rest of the methods in this paper do, and it is included here only for completeness.
\end{itemize}
\paragraph{Optimality Guarantees}
Optimality can be trivially guaranteed assuming that the synthetic gradient models perfectly perform their regression task, i.e., that they perfectly approximate local gradients using module activations.
However, this assumption is almost never satisfied in practice.
\paragraph{Advantages}
\begin{itemize}
\item While Proxy Objective and Target Propagation methods both help reduce computational load and memory usage, Synthetic Gradients methods can enable parallelized training of network modules.
This can further reduce training cost since the update of one module does not need to wait for those of the other modules, except when the synthetic input or synthetic gradient models are being updated.
This advantage is shared by a variant of Proxy Objective \cite{belilovsky2020decoupled}.
And it was shown in \cite{belilovsky2020decoupled} that this Proxy Objective variant is much more performant than Synthetic Gradients.
\item Synthetic Gradients can be used to approximate true backpropagation through time (unrolled for an unlimited number of steps) for learning recurrent networks.
It was shown in \cite{jaderberg2017decoupled} that this allows for much more effective training for learning long-term dependency compared to the usual truncated backpropagation through time.
\end{itemize}
\paragraph{Current Limitations and Future Work}
\begin{itemize}
\item Similar to Target Propagation, the auxiliary models require extra human and machine resources.
And like Target Propagation, there is no empirical evidence that Synthetic Gradients can scale to challenging benchmark datasets or competitive models.
\item Synthetic Gradients methods do not enable truly weakly modular training in general.
\end{itemize}
\section{Other Non-E2E Training Approaches}
\label{other_methods}
\subsection{\rch{Methods Motivated Purely by Biological Plausibility}}
Arguably the most notable set of works left out of this survey are those studying alternatives to E2EBP purely from the perspective of biological plausibility (for a few examples, see \cite{balduzzi2014kickback,xiao2018biologically,nokland2016direct,lillicrap2016random,liao2016important}).
These training methods were generally motivated purely by our understanding of how the human brain works, and were therefore claimed to be more ``biologically plausible'' than E2EBP.
However, biological plausibility in itself does not lead to provable optimality.
While these methods are of great value from a biology standpoint, they have been significantly outperformed by E2EBP on meaningful benchmark datasets \cite{bartunov2018assessing}.
\rch{Below, arguably the most popular family of biologically plausible alternatives to E2EBP -- Feedback Alignment -- is discussed.}
\rch{One major argument criticizing E2EBP's lack of biological plausibility states that E2EBP requires each neuron to have precise knowledge of all of its downstream neurons (end-to-end backward pass), whereas the human brain is not believed to exhibit such a precise pattern of reciprocal connectivity \cite{lillicrap2016random}.
This issue is known as the ``weight transport'' problem \cite{lillicrap2016random}.
}
\rch{To solve the weight transport problem, \cite{lillicrap2016random} proposed to use fixed, random weights in place of the actual network weights during the backward pass, breaking the symmetry between the weights used in the forward and backward passes and thus removing the need for the backward pass to access the forward weights.
This family of methods is called Feedback Alignment.
\cite{liao2016important} proposed to use fixed, random weights that share signs with the actual network weights.
\cite{nokland2016direct} proposed two more revisions of the original Feedback Alignment instantiation.
During the backward pass, instead of using the backpropagated supervision (with random weights in place of the real ones) to provide gradients as the original instantiation does, these alternative versions directly use the error at the output (with potential modulation by random matrices to make the dimensionality match for each layer).
}
\rch{Suppose the model is written as \(f(x, W_1, W_2, W_3) = \sigma_3\left(W_3\sigma_2\left(W_2\sigma_1\left(W_1 x\right)\right)\right) = f_3(f_2(f_1(x, W_1), W_2), W_3)\), where \(W_1, W_2, W_3\) are trainable weight matrices and \(\sigma_1, \sigma_2, \sigma_3\) are activation functions.
During any step of gradient descent, suppose the forward pass has been done, and let \(a_3 = f(x, W_1, W_2, W_3), a_2 = f_2(f_1(x, W_1), W_2), a_1 = f_1(x, W_1), b_2 = W_2 a_1\), where \(W_1, W_2, W_3\) are the current network weights. E2EBP computes the gradient at the output of \(f_1\) as}
\begin{equation}
\rch{\frac{\partial{L}}{\partial{a_1}} = \frac{\partial{L}}{\partial{a_3}}\frac{\partial{a_3}}{\partial{a_2}}\frac{\partial\sigma_2}{\partial b_2}W_2.}
\end{equation}
\rch{The original instantiation of Feedback Alignment in \cite{lillicrap2016random} computes this gradient with \(W_2\) substituted by some fixed, random matrix \(B_2\).
The variant proposed in \cite{liao2016important} substitutes \(W_2\) with a fixed, random matrix \(B_2\), with the only constraint being that each element of \(B_2\) shares the sign of the corresponding element of \(W_2\).
The direct variants proposed in \cite{nokland2016direct} instead compute this gradient as
}
\begin{equation}
\rch{\frac{\partial{L}}{\partial{a_1}} = \frac{\partial{L}}{\partial{a_3}}\frac{\partial\sigma_2}{\partial b_2}C_2,}
\end{equation}
\rch{where \(C_2\) is a fixed, random matrix with appropriate dimensionality.
The error at the output (\(\partial L / \partial a_3\)), after modulation by the random matrix \(C_2\), is used for training in place of the backpropagated supervision.}
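The following NumPy sketch (our own illustration; the network shapes, the ReLU/linear architecture, and the squared-error loss are arbitrary placeholder choices) contrasts the Feedback Alignment update with exact backpropagation on a toy two-layer network.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Toy two-layer net a2 = W2 relu(W1 x); shapes are illustrative only.
W1 = 0.1 * rng.normal(size=(32, 10))
W2 = 0.1 * rng.normal(size=(1, 32))
B2 = 0.1 * rng.normal(size=(1, 32))  # fixed random feedback matrix

def fa_step(x, y, lr=1e-2):
    global W1, W2
    b1 = W1 @ x
    a1 = np.maximum(b1, 0.0)       # relu hidden activation
    e = W2 @ a1 - y                # output error, dL/da2 for squared loss
    # Feedback Alignment: B2 replaces W2 in the backward pass;
    # delta1 = (W2.T @ e) * (b1 > 0) would be exact backpropagation.
    delta1 = (B2.T @ e) * (b1 > 0)
    W2 -= lr * np.outer(e, a1)
    W1 -= lr * np.outer(delta1, x)

fa_step(rng.normal(size=10), np.array([1.0]))
\end{verbatim}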
\subsection{\rch{Auxiliary Variables}}
Another important family of E2EBP-free training methods is the Auxiliary Variables methods \cite{carreira2014distributed,taylor2016training,marra2020local,raghavan2020distributed,gu2020fenchel,zhang2017convergent,askari2018lifted,lau2018proximal,li2019lifted,zhang2016efficient,zeng2019global}.
These methods introduce auxiliary trainable variables that approximate the hidden activations in order to achieve parallelized training.
Despite the strong theoretical guarantees, the introduced auxiliary variables may pose scalability issues and, more importantly, these methods require special, often tailor-made alternating solvers.
Moreover, none of the existing work in this area has scaled these methods beyond toy models and toy data.
\rch{The basic idea and connections with the reviewed methods will be discussed below.}
\rch{On a high level, Auxiliary Variables methods are built upon the idea of variable splitting, that is, transforming a complicated problem in which variables are coupled highly nonlinearly into a simpler one where the coupling between variables becomes more tractable by introducing additional variables \cite{zeng2019global}.
Auxiliary Variables methods assume a representation of the model as follows: \(f(x, W_1, W_2) = \sigma_2\left(W_2\sigma_1\left(W_1 x\right)\right)\), where \(W_1, W_2\) are trainable weight matrices and \(\sigma_1, \sigma_2\) are activation functions.
Suppose the objective function \(L\left(f, W_1, W_2, S\right)\) can be written as \(\sum_{i=1}^nL\left(f, W_1, W_2, \left(x_i, y_i\right)\right)\).
Re-writing \(L\left(f, W_1, W_2, \left(x, y\right)\right)\) as \(H\left(f\left(x, W_1, W_2\right), y, W_1, W_2\right)\) for some \(H\), one popular family of Auxiliary Variables methods amounts to introducing new variables \(V_1, V_2\), and reformulating the training objective as follows:}
\begin{align}
&\rch{\min_{W_1, W_2}\sum_{i=1}^n H\left(f\left(x_i, W_1, W_2\right), y_i, W_1, W_2\right)} \\
&\rch{\rightarrow \min_{W_1, W_2, V_1, V_2}\sum_{i=1}^n H'\left(V_2\left(V_1\left(x_i\right)\right), y_i, W_1, W_2, V_1, V_2\right)} \nonumber\\
& \rch{\text{subject to }} \nonumber\\
&\,\,\rch{V_1\left(x_i\right) = \sigma_1\left(W_1 x_i\right), V_2\left(V_1\left(x_i\right)\right) = \sigma_2\left(W_2 V_1\left(x_i\right)\right), \forall i.}
\end{align}
\rch{One example of \(H\left(f\left(x_i, W_1, W_2\right), y_i, W_1, W_2\right)\) could be the hinge loss on \(\left(f\left(x_i, W_1, W_2\right), y_i\right)\) plus some regularization terms on the weights \(W_1, W_2\), in which case \(H'\left(V_2\left(V_1\left(x_i\right)\right), y_i, W_1, W_2, V_1, V_2\right)\) is the hinge loss on \(\left(V_2\left(V_1\left(x_i\right)\right), y_i\right)\) plus regularization terms on \(W_1, W_2, V_1, V_2\).
This constrained optimization problem is typically turned into an unconstrained one by adding a regularization term forcing \(V_2\left(\cdot\right)\) to regress to \(\sigma_2\left(W_2\, \cdot\right)\) and \(V_1\left(\cdot\right)\) to \(\sigma_1\left(W_1\, \cdot\right)\).
The variables \(\left(W_1, W_2\right)\) and \(\left(V_1, V_2\right)\) are then solved for alternately.
Popular solvers include block coordinate descent \cite{carreira2014distributed,zeng2019global,zhang2017convergent,lau2018proximal,askari2018lifted} and alternating direction method of multipliers \cite{taylor2016training,zhang2016efficient}.
Strong convergence proofs on these solvers are provided in, e.g., \cite{zeng2019global}.}
\rch{Another popular family of Auxiliary Variables methods introduces another set of auxiliary variables: \(V_i\left(\cdot\right) = \sigma_i\left(U_i(\cdot)\right), U_i(\cdot) = W_i\,\cdot, i = 1, 2\).
This instantiation is even more tractable than the previous one, often enjoying closed-form solutions.
The drawback is the additional computation and memory introduced by the third set of auxiliary variables \(U_i, i = 1, 2\).
}
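As a minimal illustration of the first instantiation above (our own sketch: the quadratic penalty, the gradient steps standing in for the exact block solvers of the literature, and all sizes and hyperparameters are placeholder choices), consider a two-layer network with \(\tanh\) activations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 1))
W1 = 0.1 * rng.normal(size=(5, 16))
W2 = 0.1 * rng.normal(size=(16, 1))
V1 = np.tanh(X @ W1)     # auxiliary variables approximating layer-1 output
rho, lr = 1.0, 1e-2      # penalty weight and step size (placeholders)

for it in range(200):
    # (1) Weight updates are local given the auxiliary variables V1.
    #     Layer 2: least-squares fit of Y on V1 (closed form).
    W2 = np.linalg.lstsq(V1, Y, rcond=None)[0]
    #     Layer 1: gradient step on the penalty ||V1 - tanh(X W1)||^2.
    H = np.tanh(X @ W1)
    W1 += lr * X.T @ ((V1 - H) * (1.0 - H ** 2))
    # (2) Auxiliary-variable update: trade off fitting Y against
    #     staying close to the layer-1 forward image.
    grad_V1 = (V1 @ W2 - Y) @ W2.T + rho * (V1 - np.tanh(X @ W1))
    V1 -= lr * grad_V1
\end{verbatim}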
\rch{Auxiliary Variables methods are conceptually similar to Target Propagation methods in the sense that they eliminate end-to-end backward pass by introducing new, local-in-each-layer targets for the hidden layers to regress to.
Training then alternates between solving these targets and optimizing the main network weights using local supervision.
Target Propagation leverages (approximated) inverse images of the label through the layers as the hidden targets.
In comparison, the hidden targets (auxiliary variables) used in Auxiliary Variables methods can be interpreted as approximations of the forward images of the input through the layers.}
\section{Conclusion}
\rch{This survey} reviewed modular and weakly modular training methods for deep architectures as alternatives to the traditional end-to-end backpropagation.
These alternatives can match or even surpass the performance of end-to-end backpropagation on challenging datasets like ImageNet.
In addition, they are natural solutions to some of the practical limitations of end-to-end backpropagation and they reveal novel insights about learning.
As the interest in modular and weakly modular training schemes continues to grow, we hope that this short survey can serve as a summary of the progress made in this field and inspire future research.
\section*{Acknowledgment}
This work was supported by the Defense Advanced Research Projects Agency (FA9453-18-1-0039) and the Office of Naval Research (N00014-18-1-2306).
\bibliographystyle{IEEEtran}
\section{Introduction}
The recent observations of high redshift supernovae \cite{hubble},
Cosmic Microwave Background (CMB) temperature anisotropies
\cite{wmap} and the shape of the matter power spectrum \cite{sdss}
consistently support the idea that our Universe is currently
undergoing an epoch of accelerated expansion \cite{reviews}.
Currently, the debate is centered on when did the acceleration
actually start and what agent is driving it. A variety of models
based on at least two matter components (baryonic and dark) and
one dark energy component (with negative pressure) have been
suggested -see \cite{peebles_rmph}. The $\Lambda$CDM model, where
a vacuum energy density or cosmological constant provides the
negative pressure, was the earliest and simplest to be analyzed.
While this model is consistent with the observational data (high
redshift supernova \cite{hubble}, CMB anisotropies
\cite{wmap,boomerang}, galaxy cluster evolution \cite{sdss}), at
the fundamental level it fails to be convincing: the vacuum energy
density falls below the value predicted by any sensible quantum
field theory by many orders of magnitude \cite{weinberg}, and it
unavoidably leads to the {\em coincidence problem}, i.e., ``Why
are the vacuum and matter energy densities of precisely the same
order today?" \cite{coincidence}. More sophisticated models
replace $\Lambda$ by a dynamical dark energy either in the form of
a scalar field (quintessence), tachyon field, phantom field or
Chaplygin gas. These models fit the observational data but it is
doubtful that they solve the coincidence problem \cite{lad}.
Recently, it has been proposed that dark matter and dark energy
are coupled and do not evolve separately
\cite{amendola,amendola0,iqm0,iqm,peebles,hoffman,mangano,rong}.
In particular the Interacting Quintessence (IQ) models of
references \cite{amendola,amendola0,iqm0,iqm}, aside from fitting
rather well the high redshift supernovae data, quite naturally
solve the coincidence problem by requiring the ratio of matter and
dark energy densities to be constant at late times. The coupling
between matter and quintessence is either motivated by high energy
particle physics considerations \cite{amendola} or is constructed
by requiring the final matter to dark energy ratio to be stable
against perturbations \cite{iqm, iqm0}. Since the nature of dark
matter and dark energy are unknown there are no physical arguments
to exclude their interaction. On the contrary, arguments in favor
of such interaction have been suggested \cite{peebles}. As a
result of the interaction, the matter density drops with the scale
factor $a(t)$ more slowly than $a^{-3}$.
A slower matter density evolution fits the supernovae data as well
as the $\Lambda$CDM concordance model does \cite{iqm}. The
interaction also alters the age of the Universe, the evolution of
matter and radiation perturbations and gives rise to a different
matter and radiation power spectra. All these effects will be used
to set constraints on the decay rate of the scalar field using
cosmological observations. In this paper, we shall further
constrain the Chimento et al. \cite{iqm} model by using the
recently WMAP measurements of the cosmic microwave background
temperature anisotropies. As it turns out, a small but
non-vanishing interaction between dark matter and dark energy is
compatible with the WMAP data with the advantage of solving the
coincidence problem. To some extent, this was already suggested in
a recent analysis that uses the position of the peaks and troughs
of the CMB \cite{peaks} to constrain a general class of
interacting models designed not to strictly solve the coincidence
problem, but to alleviate it \cite{scaling}. Briefly, the outline
of the paper is: in Section II we summarize the cosmological
model, in Section III we derive the equations of dark matter and
dark energy density perturbations and find the range of parameter
space that best fits the observations; finally, in Section IV we
discuss our main results and present our conclusions.
\section{The interacting quintessence model}
The IQ model considered here has been constructed to solve the coincidence
problem by
introducing a coupling between matter and dark energy; their respective
energy densities
do not evolve independently. In this paper, we shall
simplify the Chimento {\em et al.} model \cite{iqm} in the sense
that the IQ will be assumed to decay into cold dark matter (CDM) and not into
baryons, as required by the constraints imposed by local gravity measurements
\cite{gravity, peebles_rmph}.
The baryon--photon fluid evolves independently of the CDM
and quintessence components. Unlike \cite{iqm}, we do not include
dissipative effects.
In \cite{iqm} the interaction was considered only during matter domination,
so the
scalar field would evolve independently of the CDM component until it started
to decay at some early time. These assumptions facilitate the numerical work
while they preserve its essential features. Specifically,
the quintessence field (denoted
by a subscript $x$) decays into pressureless CDM (subscript $c$) according to
\cite{iqm}
\\
\begin{equation}\label{cont}
\begin{array}{rcl}
\displaystyle \frac{d\rho_c}{dt}+ 3H \rho_{c} = 3H c^{2}
\left(\rho_{c} + \rho_{x}\right),\\
\displaystyle \frac{d\rho_x}{dt} + 3(1+w_{x})H\rho_{x} = -3H c^{2}
\left(\rho_{c} + \rho_{x}\right),
\end{array}
\end{equation}
\\
where $w_{x} <0 $ is the equation of state parameter of the dark
energy and $c^{2}$ is a small dimensionless constant parameter
that measures the intensity of the interaction. Approaches similar
(but not identical) to ours have been discussed in
\cite{amendola,amendola0,hoffman,mangano,rong}. Eqs. (\ref{cont})
were not derived assuming some particle physics model for the
interaction, where quintessence is described as a scalar field
with a given potential. We followed a phenomenological approach
and instead we have required the Interacting Quintessence Model to
solve the coincidence problem. We have required the dark matter--dark energy interaction to yield a ratio of dark matter to dark energy that is constant at late times and stable against
perturbations. As a result, the shape of the scalar field
potential is also fixed.
Eqs. (\ref{cont}) can be solved by Laplace transforming the system.
The result is
\\
\begin{equation}\label{density}
\begin{array}{lll}
\rho_{x}(a)&=&\displaystyle\frac{H_0^2}{8\pi G w_{eff}}\displaystyle
[3(c^2\Omega_{c,0}-(1-c^2)\Omega_{x,0})(a^{S_{+}}-a^{S_{-}})+
\Omega_{x,0}(S_{-}a^{S_{-}}
-S_{+}a^{S_{+}})],\\
\rho_{c}(a)&=&\displaystyle\frac{H_0^2}{8\pi G w_{eff}}\displaystyle
[3((1+w_x+c^2)\Omega_{c,0}+c^2\Omega_{x,0})(a^{S_{-}}-a^{S_{+}})+
\Omega_{c,0}(S_{-}a^{S_{-}}-S_{+}a^{S_{+}})],
\end{array}
\end{equation}
\\
where $w_{eff}=(w_{x}^{2}+4c^{2}w_{x})^{1/2}$, and
$S_{\pm}=-3(1+w_x/2) \mp (3/2) w_{eff}$. The density parameters
$\Omega_{c,0}$ and $\Omega_{x,0}$ denote the current values of
matter and dark energy, respectively. Solutions of Eqs.
(\ref{cont}) are plotted in Fig. \ref{fig:density}. Solid, dashed,
dotted and dot-dashed lines correspond to $\Omega_{c}$,
$\Omega_{x}$, $\Omega_{r}$ and $\Omega_{b}$, respectively. In
panel (a) $c^{2} = 0.1$ and there is a short period of baryon
dominance; this does not happen in panel (b) where $c^{2} = 5
\times 10^{-3}$.
\\
\begin{figure}
\centering
\includegraphics[scale=.8]{fig1.eps}
\caption{Redshift evolution of different energy densities. Solid,
dashed, dotted and dot-dashed lines correspond to $\Omega_{c}$,
$\Omega_x$, $\Omega_{r}$ and $\Omega_{b}$, respectively. In panel
(a) $c^2=0.1$, and in panel (b) $c^2=5\times 10^{-3}$.
The following parameters were assumed: $\Omega_{c,0}= 0.25$,
$\Omega_{x,0}= 0.7$,
$\Omega_{b,0}= 0.05$, $\Omega_{r,0}= 10^{-5}$, and $w_{x} = -0.99$.}
\label{fig:density}
\end{figure}
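As a numerical cross-check of Eq. (\ref{density}) (our own illustration, not part of the original analysis), the following Python sketch evaluates the dark matter to dark energy ratio for the parameters of Fig. \ref{fig:density}b; the common prefactor $H_0^2/(8\pi G w_{eff})$ is dropped, since it cancels in the ratio.
\begin{verbatim}
import numpy as np

w_x, c2 = -0.99, 5.0e-3
Om_c0, Om_x0 = 0.25, 0.7
w_eff = np.sqrt(w_x**2 + 4.0 * c2 * w_x)
S_p = -3.0 * (1.0 + 0.5 * w_x) - 1.5 * w_eff   # S_plus
S_m = -3.0 * (1.0 + 0.5 * w_x) + 1.5 * w_eff   # S_minus

def rho_x(a):   # Eq. (2), up to the common prefactor
    return (3.0 * (c2 * Om_c0 - (1.0 - c2) * Om_x0) * (a**S_p - a**S_m)
            + Om_x0 * (S_m * a**S_m - S_p * a**S_p))

def rho_c(a):
    return (3.0 * ((1.0 + w_x + c2) * Om_c0 + c2 * Om_x0) * (a**S_m - a**S_p)
            + Om_c0 * (S_m * a**S_m - S_p * a**S_p))

for a in (1e-4, 1e-2, 1.0):
    # ratio runs from ~196 at early times down to ~0.36 today
    print(a, rho_c(a) / rho_x(a))
\end{verbatim}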
\\
As detailed in Ref. \cite{iqm}, the
dark matter--dark energy interaction brings the
ratio $r \equiv \rho_{c}/\rho_{x}$ to a constant,
stable value at late times. From Eqs.(\ref{cont}),
it is seen that the evolution of the aforesaid ratio is
\\
\begin{equation}\label{ratio}
\frac{dr}{dt}=3H c^2\left[ r^{2}+\left(\frac{w_{x}}{c^{2}}+2\right)r+1\right].
\end{equation}
\\
The equation $dr/dt = 0$ has two stationary solutions, namely,
\[
r_{\pm}= -w_{x}/(2c^{2})-1\pm [w_x^{2}/(4c^{4})+w_x/c^{2}]^{1/2}\, ,
\]
\\
which verify $r_{+}\, r_{-}=1$ (with $r_{+} > r_{-}$). As shown in
Fig. \ref{fig:ratio}, the ratio evolves from an unstable maximum
$r_{+}$ at early times -with dark matter and quintessence energy
densities scaling as $a^{S_{+}}$- to a stable minimum
$r_{-}$ at late times, where both energy densities scale
as $a^{S_-}$. As Fig. \ref{fig:ratio} shows, the smaller the coupling constant, the larger the ratio of cold dark matter to dark energy in the past and the smaller in the future, without the length of the transition period being significantly affected.
We are not suggesting that the Universe
is already in the late time epoch of constant, stable ratio
$r_{-}$. The value of the asymptotic ratio is
determined by the strength of the interaction and at present
this ratio could be still slowly evolving in time.
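As an illustrative evaluation (not quoted in \cite{iqm}), for $w_{x} = -0.99$ and $c^{2} = 5\times 10^{-3}$, the values favored by the fits below, the stationary points are
\[
r_{\pm}= 98 \pm \sqrt{9603}\, , \qquad r_{+}\simeq 196\, , \quad
r_{-}\simeq 5.1\times 10^{-3}\, ,
\]
which satisfy $r_{+}\, r_{-}=1$ up to rounding.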
\begin{figure}\centering
\includegraphics[scale=.7]{fig2.eps}
\caption{Evolution of the ratio $r = \rho_{c}/\rho_{x}$ from
an unstable maximum
toward a stable minimum (at late times) for different values of $c^{2}$.
We took $r_{0}= 0.42$ as the current value.}
\label{fig:ratio}
\end{figure}
In terms of a scalar field description, the second equation of (\ref{cont})
is equivalent to \\
\begin{equation}\label{KG}
\frac{d^2\phi}{dt^2}
+ 3H\frac{d\phi}{dt} + V^{\prime}_{eff}=0 \, ,
\end{equation}
where $\phi$ denotes the dark energy field and $V_{eff}(\phi)$ is the
effective potential. The latter is given by
\\
\begin{equation}\label{potential}
V^{\prime}_{eff}=\frac{dV(\phi)}{d\phi}+3H c^{2}
\left(\rho_{c} + \rho_{x}\right)/(d\phi/dt).\\
\end{equation}
If $r=$ constant, the potential has two asymptotic limits:
$V\propto e^{-\phi}$ during both matter domination and the
period of accelerated expansion, and $V\propto \phi^{-\alpha}$
with $\alpha > 0$ well within the radiation dominated period. A
detailed study has shown that only potentials that are themselves
power laws -with positive or negative powers- or exponentials of
the scalar field yield energy densities evolving as power laws of
the scale factor \cite{liddle_scherrer}. Potentials with
exponential and power-law behavior have been considered
extensively in the high energy physics literature. Exponential
potentials arise as a consequence of Kaluza--Klein type
compactifications of string theory and in $N=2$ supergravity while
inverse power law models arise in SUSY QCD (see
\cite{iqm0,copeland}). Potentials showing both asymptotic
behaviors have also been studied \cite{sahni_wang}, but at present
there is no satisfactory particle physics model to justify the
shapes of potentials of this type \cite{riazuelo}.
From the evolution of the background energy densities it is possible
to constrain the amplitude of the IQ and CDM coupling. Since
$w_{eff}> 0$, $c^2$ is confined to the interval $0\leq c^2<
|w_x|/4$. Negative values of $c^2$ would correspond to a transfer
of energy from the matter to the quintessence field and might
violate the second law of thermodynamics. Further constraints can
be derived by imposing stability of the interaction to first order
loop corrections \cite{one-loop}. In Fig. \ref{fig:likeSNIQM} we
used the supernova data of Riess {\it et al.} \cite{hubble} to
constrain model parameters. In the figure we plot the 68\%, 95\%
and 99.9\% confidence levels of a cosmological model after
marginalizing over $w_x$ and the absolute magnitude of SNIa. We
set a prior: $-1.0\le w_x\le -0.6$. Variations of the baryon
density produce no significant differences and the Hubble constant
is unconstrained by this Hubble test since the absolute luminosity
of Type Ia supernovae is not accurately measured. The contours are
rather parallel to the $c^2$ axis, i.e., the low redshift
evolution of interacting models is not very different from the
non-interacting ones.
\begin{figure}\centering
\includegraphics[scale=1]{fig3.eps}
\caption{Joint confidence intervals at 68\%, 95\% and 99.9\%
confidence level of IQM fitted to the ``gold'' sample of SNIa data
of Riess {\it et al.} \cite{hubble}} \label{fig:likeSNIQM}
\end{figure}
\section{Observational constraints on the matter-quintessence coupling}
Primordial nucleosynthesis and Cosmic Microwave Background temperature
anisotropies provide the best available tools
to constrain the physics of the early Universe. By
assumption, the scalar field decays into dark matter and not
into baryons. Since dark matter and quintessence density perturbations
are coupled to baryons and photons
only through gravity, there is no transfer of energy or momentum
from the scalar field to baryons or radiation. The evolution of density
perturbations of dark matter and dark energy can be simply derived from the
energy conservation equation. In the equations below we shall
use the conventions of \cite{bertschinger}. In the synchronous gauge,
\\
\begin{equation}\label{pert_eq_m}
\begin{array}{lll}
\dot\delta_{c}&=&-\displaystyle\frac{\dot h}{2}-3\displaystyle\frac{\dot a}{a}c^2\displaystyle
\left(\displaystyle\frac{\delta_x}{r}+\delta_{c}\right),\\
\dot\theta_{c}&=&0,
\end{array}
\end{equation}
\\
while the evolution of dark energy density perturbations is given by
\begin{equation}\label{pert_eq_de}
\begin{array}{lll}
\dot\delta_{x}&=&-\displaystyle(1+w_x)(\theta_x+\displaystyle\frac{\dot h}{2})
-3\displaystyle\frac{\dot a}{a}(c^2_{s,x}-w_x)\delta_x\\
&-&9\left(\displaystyle\frac{\dot a}{a}\right)^2 \displaystyle(c^2_{s,x}-w_x)(1+w_x)\theta_xk^{-2}
+3\displaystyle\frac{\dot a}{a}c^2\displaystyle(\delta_x+r\delta_{c})\\
\dot\theta_x &=&-\displaystyle(1-3c^2_{s,x})\displaystyle\frac{\dot a}{a}\theta_x
+\displaystyle\frac{k^2c^2_{s,x}}{1+w_x}\delta_x -3\displaystyle\frac{\dot a}{a}\displaystyle\frac{c^2}{1+w_x}(1+r)\theta_x,
\end{array}
\end{equation}
\\
where $\delta$ and $\theta$ denote the density contrast and the
divergence of the peculiar velocity field of each component,
respectively; derivatives are with respect to conformal time and
$c^2_{s,x}$ is the quintessence sound speed, taken to be unity as
for a scalar field with a canonical Lagrangian. The interaction
introduces the terms with a $c^2$ factor on the right hand side of
Eqs. (\ref{pert_eq_m}) and (\ref{pert_eq_de}). Combining these
equations, the evolution of density perturbations in the IQ field
are described by a driven damped harmonic oscillator, where the
driving term is the gravitational field \cite{caldwell}. After a
brief transient period, the evolution is dominated by the
inhomogeneous solution and is insensitive to the initial
amplitude. To find the model that best fits the WMAP data, we have
implemented equations (\ref{density}), (\ref{pert_eq_m}) and
(\ref{pert_eq_de}) into the CMBFAST code \cite{cmbfast}. We used
the likelihood code provided by the WMAP team \cite{likeli} to
determine the quality of the fit of every model to the data. Since
we are introducing a new parameter, the coupling between dark
matter and dark energy, the parameter space could become
degenerate with different local maxima representing models that
fit the data equally well. For this reason, we did not use a Markov Chain Monte Carlo approach \cite{likeli} but ran through a
grid of models on a six-dimensional parameter space. Grids of
models are computationally very expensive. To make the
computations feasible we reduced the parameter space by
introducing prior information. We imposed two constraints: (1) all
models were within the 90\% confidence level of the constraint
imposed by Big Bang Nucleosynthesis: $0.017\le\Omega_bh^2\le
0.027$ \cite{olive} and (2) in all cosmologies the age of the
Universe was chosen to be $t_{0}>12$ Gyr. With these requirements,
we explore the region of parameter space close to the concordance
model. We have considered only flat models with no reionization,
no gravitational waves and no running of the spectral index. We
considered a 6-dimensional parameter space and assumed our
parameters to be uniformly distributed in the following intervals:
Hubble constant $H_0=[46,90]\,km/s/Mpc$, baryon fraction $\Omega_b
=[0.01,0.12]$, dark energy $\Omega_x=[0.1,0.9]$, slope of the
matter power spectrum on large scales $n_s=[0.95,1.04]$, dark
energy equation of state $w_x=[-1.0,-0.65]$ and $c^2=[0,0.05]$. We
took 23, 15, 33, 10, 9 linear subdivisions and 22 logarithmic
subdivisions of the above intervals, respectively. The likelihood
was computed using the routines made publicly available by the
WMAP team.
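Schematically, the construction of these confidence regions from the $\chi^2$ grid can be sketched in a few lines of Python (our own illustration: the $\chi^2$ values below are random placeholders, the grid is shrunk for brevity, and summing over grid nodes implements the uniform priors, uniform in $\log c$ for the coupling).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for the chi^2 grid described above; the real grid has
# 23 x 15 x 33 x 10 x 9 x 22 nodes in (H0, Ob, Ox, ns, wx, c2).
chi2 = rng.uniform(970.0, 1100.0, size=(5, 4, 7, 3, 3, 6))
like = np.exp(-0.5 * (chi2 - chi2.min()))   # relative likelihood

# Marginalize onto the (Omega_x, c^2) plane: sum over the other axes.
post = like.sum(axis=(0, 1, 3, 4))
post /= post.sum()

# Likelihood levels enclosing 68%, 95% and 99.9% of the posterior mass.
flat = np.sort(post.ravel())[::-1]
cum = np.cumsum(flat)
levels = [flat[np.searchsorted(cum, f)] for f in (0.68, 0.95, 0.999)]
print(levels)
\end{verbatim}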
\begin{figure}\centering
\includegraphics[scale=1]{fig4.eps}
\caption{ Joint confidence intervals at the 68\%, 95\% and 99.9\%
level for pairs of parameters after marginalizing over the rest.
For convenience the $c^2$ axis is represented using a logarithmic
scale and it has been cut to $c^2\le 10^{-4}$, though models with
$c^2 = 0$ have been included in the analysis. In panels (a), (b)
and (c) models were fit to CMB data alone. In panel (d) we
included supernovae data of Riess {\it et al.} \cite{hubble}.}
\label{fig:contour}
\end{figure}
In Fig.\ref{fig:contour} we give confidence intervals for
pairs of parameters after marginalizing over the rest.
Contours represent the 68\%, 95\%
and 99.9\% confidence levels.
Models with $c^2=0$ have been computed and were included in the
analysis. The results were indistinguishable from those with $c^2 = 10^{-4}$, the last point included in the graphs. The WMAP data place
strong upper limits on the quintessence decay rate. A non-zero
decay rate is clearly favored by the data. At $c^2\sim 10^{-2}$
the contours indicate a steep gradient in the direction of growing
$c^2$. This behavior is associated with the decreasing fraction of
CDM at recombination with increasing $c^2$. When the interaction
rate is large, the Universe goes through a period dynamically
dominated by baryons (Fig.\ref{fig:density}a). The oscillations on
the baryon-photon plasma induce large anisotropies in the
radiation and those models are strongly disfavored by the data. In
Fig.\ref{fig:contour}, the models fit the data more comfortably
with lower values of $\Omega_x$ and $H_{0}$ than in the
$\Lambda$CDM concordance model. Interacting models have larger
dark energy density in the past than non-interacting models,
achieving the same rate of accelerated expansion today with a
smaller $\Omega_{x,0}$. Our best model also requires a larger baryon
fraction since the matter density is smaller prior to
recombination than in the concordance model, therefore dark matter
potential wells are shallower and a higher baryon fraction is
required to reproduce the amplitude of the first acoustic peak
\cite{cmb}. The mean value of the cosmological parameters and
their corresponding $1\sigma$ confidence intervals are:
$\Omega_{x} = 0.43\pm 0.12$, $\Omega_{b}=0.08\pm 0.01$,
$n_s=0.98\pm0.02$ and $H_{0}= 56\pm 4 \,km/s/Mpc$. The latter
number is not very meaningful since the probability distribution
of $H_0$ is rather skewed. As it can be seen in
Fig.\ref{fig:contour}a, low values of $H_0$ are suppressed very
fast. As the height of the first acoustic peak scales with
$\Omega_bh^2$, high values of baryon fraction are speedily
suppressed by the WMAP data, which translates into an even faster
suppression of low values of $H_0$. With respect to the quintessence
equation of state, as we did not explore models with $w_{x}<-1$,
we can only set an upper limit $w_{x}\le -0.86$ at the 1$\sigma$
confidence level. Finally, as we chose a uniform prior on $\log
c$, the confidence interval is not symmetric, the result being $c^{2}= 0.005^{+0.007}_{-0.003}$.
\begin{figure}\centering
\includegraphics[scale=.9]{fig5.eps}
\caption{Radiation Power Spectrum. The solid line is our best fit
model ($c^2=5\times 10^{-3}$, $\Omega_{x}=0.43$,
$\Omega_{b}=0.08$, $H_0=54\,km/s/Mpc$, $n_s=0.98$, $w=-0.99$).
Dashed line corresponds to the $\Lambda CDM$ concordance model and
dot-dashed line is $QCDM$ with parameters $\Omega_x=0.5$,
$\Omega_b = 0.07$, $H_{0}=60\,km/s/Mpc$, $w=-0.75$ and
$n_s=1.02$.} \label{fig:radpower}
\end{figure}
Our main result is that models with interaction are preferred over
non-interacting models, with the remarkable feature that they
require cosmological parameters very different from those of the concordance model.
The ranges for $\Omega_{x}, H_{0}, \Omega_{b}$ and $w_x$ are not
directly comparable to those found in \cite{amendola} since the
interaction is different and we used different priors. Their
coupled model requires higher values of the Hubble constant when
the strength of the interaction increases, opposite to the
behavior found in Fig. \ref{fig:contour}.
Non-interacting models are compatible with
the data only at the 99.9\% confidence level. Our best fit model
($c^2=5\times 10^{-3}$, $\Omega_x=0.43$,
$\Omega_b = 0.08$, $H_{0}=54\,km/s/Mpc$, $n_s=0.98$, $w=-0.99$)
has a $\chi^2=-2\log\mathcal{L} = 974$ while the best fit for a
non-interacting model occurs at $\Omega_x=0.5$, $\Omega_b = 0.07$,
$H_{0}=60\,km/s/Mpc$, $w=-0.75$, $n_s=1.02$ and has $\chi^2 =
983$. The Bayesian Information Criteria defined as
$BIC=\chi^2+k\log N$ \cite{liddle}, that penalizes the inclusion
of additional parameters to describe data of small size (in this
case the number of independent data points is $N = 899$ and $k$, the
number of model parameters, is 5 in the non-IQ model, and 6 in the
model with interaction), gives $\Delta BIC = -2$, which can be
considered as positive evidence in favor of including this
additional parameter to describe the data. The $\Lambda$CDM
concordance model was deduced by fitting a different set of
parameters to WMAP data \cite{likeli} and the results are not
directly comparable to ours. For completeness, let us mention that
the concordance model with $\Omega_\Lambda=0.72$, $\Omega_b =
0.049$, $H_{0}= 68\,km/s/Mpc$ and $n_s=0.97$ has a fit of $\chi^2 = 972$, slightly better than ours, if the amplitude of the
matter power spectrum and the redshift of reionization are
included as parameters. If only the overall normalization of the
power spectrum is included, the fit is $\chi^2 = 990$. In this
case, the BIC would give $\Delta BIC = -9$, which must be taken as a strong indication that the interaction improves the fit.
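These numbers can be verified directly (our own arithmetic check, assuming the natural logarithm in the definition of the BIC, with $\ln 899 \simeq 6.8$):
\[
\Delta BIC = (974-983)+(6-5)\ln 899 \simeq -2\, , \qquad
\Delta BIC = (974-990)+(6-5)\ln 899 \simeq -9\, ,
\]
for the two comparisons discussed above.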
The likelihood curves of Fig. \ref{fig:contour} seem to suggest
that our model is ruled out by observations since, for example,
luminosity distance estimates from high redshift supernovae
indicate that $\Omega_x\ge 0.6$ at the 95\% confidence level
\cite{hubble}. However, analyses of the temperature-luminosity
relation of distant clusters observed by XMM-Newton and Chandra
satellites appear to be consistent with $\Omega_{c}$ up to 0.8
\cite{cdm_high}. Moreover, with no priors on $\Omega_c$ and $w_x$ (so that phantom models are included in the analysis), values of the CDM
fraction as high as ours are consistent
with SNIa data \cite{sn_review}.
Contours in Fig. \ref{fig:contour}c are rather parallel to the
$\Omega_x$-axis, while at the redshifts probed by supernovae
interacting models behave as non-interacting, and in the same
plane contours are parallel to the $c^2$-axis (see Fig.
\ref{fig:likeSNIQM}). In Fig. \ref{fig:contour}d we plot the
confidence intervals combining both WMAP and high redshift
supernovae data. Contours are shifted
to large values of dark energy, the reason being that WMAP data
constrain the coupling $c^2$ while the supernovae data are
insensitive to it. For illustrative purposes, in Fig.
\ref{fig:radpower} we compare our best fit model (solid line) with
the concordance model (dashed line) and the best quintessence
model with $c^2=0$ (dot-dashed). All models are rather smooth
compared with the rigging present in the data, with the excess
$\chi^2$ coming from similar regions in $l$-space: $l\sim 120,
200, 350$. The discrepancy among the models is clearer at the
first acoustic peak. This rigging introduces a high degeneracy
among cosmological parameters since very different models fit the
data with similar $\chi^2$ per degree of freedom. When adding
other data sets, like the supernovae in Fig. \ref{fig:contour}d,
the fraction of dark energy increases dramatically. With respect
to the Hubble constant, the value is less certain since several
groups advocate values close to $60\,km/s/Mpc$ \cite{hubble_low},
and even as low as $50\,km/s/Mpc$ \cite{battistelli}.
There is a significant difference between the concordance
model and our best fit model. Since the latter requires lower
Hubble constant and dark energy density, it generates a smaller
Integrated Sachs-Wolfe effect, responsible for the rise of the
radiation power spectrum at $l\le 10$. This is a generic
feature of this type of IQ models and, therefore, the low
amplitude of the measured quadrupole and octupole is less of a problem
in models with decaying dark energy than in the concordance model.
\section{Discussion}
We have shown that a model where the coincidence problem is solved
by the decay of the quintessence scalar field into cold dark
matter is fully compatible with the WMAP data. The best model,
$c^2\simeq 5 \times 10^{-3}$, fits the data significantly better
than models with no interaction. Our best fit model requires
cosmological parameters, in particular $\Omega_{x}$ and $H_{0}$,
that are very different from the concordance model. Our models are
highly degenerate in the $\Omega_{x}-c^{2}$ plane with contours
almost parallel to the $\Omega_{x}$ axis (see Fig.
\ref{fig:contour}) but including prior information from supernovae
data shifts this parameter close to the concordance model
$\Omega_{x} \simeq 0.68$. We wish to emphasize that the
non-interacting model ($c^{2} = 0$) is only compatible with WMAP data
at the $3\sigma$ confidence level, but when the supernovae data are included this shifts to the $2\sigma$ confidence level.
We have shown that the Bayesian Information Criteria, that
strongly disfavors increasing the parameter space to describe data
sets of $N\simeq 10^3$ points, provides positive evidence in favor
of the existence of interaction. Other IQ models have been
proposed \cite{amendola} and even if their results are not directly comparable to ours, the fact that two interacting models, constructed with different motivations, both suggest a value of the dark energy density smaller than in the concordance model signals a need to investigate this type of models further. As we have discussed, the
quality of the fit and the cosmological parameters that can be
derived by fitting cosmological models to observations depend on
the parameter space explored.
The fact that IQ models appear to be favored by current
observations suggests that dark energy and dark matter might not
be so different entities after all. This is in line with recent
ideas involving the Chaplygin gas. There, a single component plays
the dual role of cold dark matter (at early times) and vacuum
energy (at late times) and it interpolates between the two as
expansion proceeds \cite{chgas}. The matter power spectrum could
also be an important test of IQ models. A preliminary study shows
that for the range of parameter space compatible with WMAP, the
decay of the scalar field into CDM has little effect
\cite{progress}.
To summarize, the interacting cosmology model fits the WMAP data
significantly better than the $\Lambda$CDM model does, and in fact
alleviates the ISW effect at large angular scales, has no
coincidence problem and provides a unified picture of dark matter
and dark energy. It predicts lower values of Hubble constant, dark
energy density and higher baryon fraction. It is to be expected
that the next generation of CMB experiments \cite{planck} and
large scale surveys will enable us to constrain $c^{2}$ even
further and discriminate between the different variants.
\acknowledgments The authors wish to thank Alejandro Jakubi for
discussions and comments. This research was partially supported by
the Spanish Ministry of Science and Technology under Grants
BFM2003-06033, BFM2000-1322 and AYA2000-2465-E and the Junta de
Castilla y Le\'{o}n (project SA002/03).
\section{Introduction}
The existence and detection of both stellar mass black holes and supermassive black holes are commonly accepted and recognized in the astronomical community \citep{ho99,kor01}. Intermediate mass black
holes (IMBH), i.e. black holes with mass in the $10^3-10^4 M_{\sun}$
range, possibly formed by runaway mergers of massive stars \citep{pz1}, are expected to fill in the mass range from stellar to supermassive BH. Indeed, evidence for the presence of IMBH in star clusters is
accumulating, but so far there are only two globular clusters -
\emph{M15} and \emph{G1} - that present observational evidence for the
existence of IMBH \citep{geb02,geb05,vdm02,ger03}. The cases for the
presence of IMBH are based on the challenging determination of the
precise kinematics of the systems, which has to be compared with
detailed dynamical models in order to infer the presence of the
central BH. At the level of velocity dispersion and full line of sight
velocity dispersion profiles, the \emph{direct} dynamical influence of
an IMBH is limited within the sphere of influence of the BH, hard to
resolve observationally (but see the recent high-resolution HST survey
carried out by \citealt{noy06}). In addition, the intrinsic three
dimensional density (and velocity dispersion) cusp around the central
BH is smeared out when the data are projected along the line of sight.
However, even if the direct dynamical signature of a central BH is
hard to detect observationally, the presence of an IMBH at the center
of a collisional stellar systems has profound consequences on the
global dynamics. In particular the visual appearance of a globular
clusters hosting an IMBH is that of a system with a sizeable core
\citep{bau05,tre06b} and not that of very concentrated cluster, as
could be naively inferred from the existence of the Bahcall-Wolf
$1/r^{1.7}$ density cusp \citep{bah76} within the sphere of influence
of the BH. This is because the presence of a central BH and of its
compact density cusps generates, through stellar encounters, a
sufficient amount of energy to halt the core collapse, if the initial
conditions start with a wide core, or to fuel a core expansion if the
initial configuration is centrally concentrated. In \citet{tre06b} we
have shown that star clusters with a central IMBH starting from a
variety of initial stellar density profiles (i.e. King models with
concentration parameter $W_0=3,5,7,11$ as well as Plummer models) all
evolve toward a common value of the core to half mass radius ratio
($r_c/r_h \approx 0.3$) on a timescale of a few initial half-mass
relaxation times. This is indeed not surprising: a similar behavior is observed when primordial binaries are the driver responsible for
energy production in the center of the cluster \citep{tre06a}, in
agreement with the theoretical expectation formulated by
\citet{ves94}. As binaries alone are less efficient at producing
energy than an IMBH, in this case we have $r_c/r_h \lesssim 0.06$
(where the upper limit is reached in presence of a number density of
at least $50\%$ binaries in the core) for a typical globular
cluster. Only when no or little binaries are present there can be a
deep core collapse, ending when $r_c/r_h \lesssim 0.02$ (see,
e.g. \citealt{heg06}).
Given this theoretical framework, the observed core to half mass
radius ratio appears indeed a powerful indicator of the dynamical
configuration of the core of an ``old'' globular cluster, i.e. a
globular cluster that has evolved for about $5$ to $10$ half-mass
relaxation times. $r_c/r_h$ is in addition a relatively easy
photometric measurement, that can be applied even to extra-galactic
globular clusters, without worrying about the need of a very accurate
measure. In fact, the combination of numerical simulations and
theoretical modeling predicts that this ratio increases by about a
factor $3$ moving from clusters with single stars only to
clusters with a sizeable population of binaries; when an IMBH is
present $r_c/r_h$ is $4$ to $5$ times larger than in the case when
only binaries are present and $12$ to $15$ times larger than when the
cluster contains single stars only.
In this letter we present a preliminary application of the use of
$r_c/r_h$ as an indirect indicator for the presence of an IMBH in the
cores of 57 galactic globular clusters that satisfy a conservative
dynamical age criterion, i.e. their half-mass relaxation time is
shorter than $10^9 yr$. In particular, we show that the observed core
to half mass radius ratio in more than half of the clusters in the
sample is so large that the only general dynamical explanation can be
given in terms of the presence of a central BH. The paper is organized
as follows: in Sec.~\ref{sec:picture} we summarize the current
numerical and theoretical understanding of the evolution of $r_c/r_h$,
in Sec.~\ref{sec:data} we present the sample of galactic globular
clusters that we use in this analysis and we discuss their properties
in terms of our theoretical expectations. Finally we conclude in
Sec.~\ref{sec:conc}.
\section{Global evolution of a cluster with binaries and IMBH}\label{sec:picture}
As discussed in quantitative detail in a series of papers devoted to
the dynamical evolution of star clusters with primordial binaries and
IMBHs \citep{heg06,tre06a,tre06b}, the long term evolution of a
globular cluster is driven by the heat flow generated in the core of
the system toward the halo (see also \citealt{ves94}). Three main
processes can be identified at the base of this heat flow, that
regulates the evolution of $r_c/r_h$ (shown in Fig.~\ref{fig:rcrh_th}
for representative simulations of our numerical simulations program).
(1) When only single stars are present the core behaves as a
self-gravitating systems with negative specific heat (e.g. see
\citealt{heg03}) and a thermal collapse happens leading to a core
contraction. The core contraction proceeds for about $10-15$ half mass
relaxation times for equal mass particles, while it is significantly
faster when a mass spectrum is considered \citep{che90}. The collapse
lasts until the central density is so high that a few binaries are
dynamically formed by three body encounters. Given the high central
density these binaries interact efficiently with single stars and
become progressively tighter and tighter providing enough energy to
halt the collapse and create a core bounce. When the number of stars
in the system is large enough ($N \gtrsim 10^4$) binary activity may
cause a temporary temperature inversion in the core and subsequent
core expansion. The process is eventually repeated in a series of
``gravothermal oscillations'' \citep{sug83} with the core radius
oscillating around a value roughly $2\%$ of the half mass radius. (2)
When a sufficient fraction (i.e. greater than a few percent) of stars
in the cluster has initially a companion with typical separations
below $10 AU$ (that is ``hard binaries'', with a binding energy of at least a few times the mean kinetic energy of a cluster star), energy
generation due to existing binaries is much more efficient than that
due to dynamically formed pairs. Here the equilibrium size of the core
is larger than when only single stars are present. The system may even
show a core radius expansion if the initial conditions are too
concentrated. The precise value of the equilibrium core radius has a
moderate dependence on the number of stars in the systems and on the
number density of binaries in the core: $r_c \approx 5\%~ r_h$ for a
typical globular cluster ($N=3 \cdot 10^5$) with about $40\%$ of
binaries in the core \citep{heg06,tre06a,fre06,ves94}. This
quasi-equilibrium evolutionary phase lasts as long as there is a sufficient number of binaries in the core, that is, until the binary fraction in the cluster is reduced to the level of a few percent. In fact, binaries in
the outer part of the cluster tend to sink toward the core for mass
segregation, so the core binary fraction is replenished until the
total binary fraction is almost exhausted. (3) When an IMBH with mass
of the order of $1\%$ of the total mass of the cluster is present, a
yet more efficient energy production channel is available in the
center of the system via dynamical interactions within the influence
sphere of the BH \citep[][see also
\citealt{bau04a,bau04b}]{tre06b}. The equilibrium value of the core
radius in this case is significantly larger: $r_c/r_h \approx 0.3$
independently of the initial binary fraction. Even in this case
there is an expansion of the core radius on a relaxation timescale if
the initial conditions are too concentrated.
This general picture has been obtained with somewhat idealized N-body
simulations that employed equal-mass particles only and did not take into
account stellar evolution. Therefore it is important to assess if the
results that we obtain can be biased by our assumptions. Modest
deviations from our numerical expectations will not impair the
usefulness of $r_c/r_h$ as diagnostic tool to infer the presence of
IMBH, as $r_c/r_h$ is expected in this case to be larger by a factor
$4$ to $5$ with respect to the case where only binaries are present.
To ensure that no major bias is present we examine critically the
idealizations in the numerical runs:
\begin{enumerate}
\item {\emph{Mass spectrum.}} The choice to use equal mass stars is
probably the most significant simplification introduced in our
simulations. When a realistic mass spectrum is present, massive stars
segregate on a relaxation timescale toward the core of the system and
speed up the core collapse by about one order of magnitude, i.e. the
collapse takes about $1-2~t_{rh}$ \citep{che90}. In principle, the
equilibrium value of the core radius could be changed
significantly. However simulations of the dynamics of star clusters
with primordial binaries that include a realistic mass spectrum have
been recently performed with a Monte Carlo code by \citet{fre06} and
their results on $r_c/r_h$ are fully consistent with our more
idealized runs. Similarly, realistic simulations of globular clusters
with an IMBH that include a mass spectrum and stellar evolution (but
without binaries) have been carried out by \citet{bau04b} and again
the presence of a large core radius is confirmed. The analysis of the
final configuration at $t=12 Gyr$ of two largest runs by
\citet{bau04b} - with $N=131072$ - leads to $r_c/r_h \approx 0.27$ for
the run with a \citet{kro01} IMF truncated at $100~M_{\sun}$ and
$r_c/r_h \approx 0.35$ when the IMF is truncated at $30~M_{\sun}$.
\item{\emph{Stellar evolution.}} Our runs do not take into account the
effects of stellar evolution, that limits the lifetime of massive
stars and may lead to the coalescence of tight binaries. These effects
may possibly lead to a more rapid depletion of the binary population
(see \citealt{iva05}), so that a star cluster would burn out its
binary reservoir earlier than expected from our simulations. If this
is the case, then the core undergoes a deep core contraction, like in
the case where only single stars are present. The direction of the
evolution is fortunately in the right direction to avoid an
observational bias when we are interested, as in this paper, to focus
on \emph{large} values of the core to half mass radius. For
simulations with an IMBH, the study of \citet{bau04b} (see point
above), that included stellar evolution, guarantees instead that the
bias due to stellar evolution is not important (see also Fig.1 in
\citealt{bau04b}, where the Lagrangian radii, that is the radii
enclosing a fixed fraction of the total mass of the system, expand
steadily starting from a $W_0=7$ model).
\item{\emph{Number of particles.}} Our direct simulations have been
limited by CPU speed to employ only up to $\approx 20000$
particles. The typical number of stars in globular clusters is larger
by at least one order of magnitude. Given our choice to use equal mass
particles and no stellar evolution, our results are scale-free and can
be adapted to any physical value for the mass and radius scales. From
theoretical considerations by \citet{ves94} when primordial binaries
are present $r_c/r_h$ is expected to have a modest dependence on the
number of particles $N$, that is $r_c/r_h \propto
1/log(0.11N)$. Simulations with $N$ from $512$ to $16384$ indeed
verify that the scaling is approximately consistent with the
\citet{ves94} model \citep{heg06}. If anything, $r_c/r_h (N)$ seems
to decrease slightly \emph{faster} than expected. In addition the
simulations by \citet{fre06} employ a realistic number of particles
and their core radii are consistent with our results extrapolated
using the \citet{ves94} model. When a IMBH is present we do not expect
a significant scaling of $r_c/r_h$ with $N$ as confirmed by our
analysis of the two $N=131072$ runs of \citet{bau04b}. Even if a
$\log{N}$ scaling were to be assumed despite the indication from the
simulations by \citet{bau04b}, the core radius value extrapolated from
our runs with an IMBH would still be such that $r_c/r_h \gtrsim 0.15$
for a realistic number of particles.
\item {\emph{Tidal field.}} The runs with an IMBH in \citet{tre06b} do
not take into account the presence of a tidal field. This is however
not expected to introduce a significant bias: $r_c/r_h$ in runs with
primordial binaries varies at the $10\%$ level with or without
inclusion of the galactic tidal field \citep{tre06a}.
\item{\emph{Spherical symmetry.}} Our initial conditions are all
spherically symmetric. We do not expect this to be a problem as
globular clusters are typically very close to spherical symmetry. In
addition, the precise details of the starting configuration are not
important on a collisional timescale. To be sure, in the analysis of
the next section we analyze a subsample of the clusters selected by
excluding objects that depart from spherical symmetry, obtaining the
same $r_c/r_h$ distribution as in the parent sample.
\item {\emph{Rotation.}} The simulations that we consider have zero
net angular momentum, i.e. no rotating model is studied. Observational
evidence of rotation in globulars is weak and the velocity dispersion
tensor displays a mild anisotropy at most \citep{mey97}. Therefore it
would be very surprising if the comparison with observations turned
out to be strongly biased by not including rotation in simulations.
\end{enumerate}
\subsection{Core radius definition}
Finally, before moving on to compare our theoretical expectations for
$r_c/r_h$ with the data, we have to consider a yet different source of
uncertainty. Numerical simulations give us mass-defined core and
half-mass radii, while luminosity-based quantities are derived from
observations. A discrepancy can therefore arise (1) if the luminosity
profile does not track accurately the mass profile and/or (2) if the
core radius is differently defined in the two cases.
For the first issue we have to consider that the luminosity of a
globular cluster is dominated by stars in the giant branch, that are
more massive than the average star of an old globular cluster. The
most luminous stars are more centrally concentrated due to mass
segregation than the average, so if a bias is introduced, it goes in
the direction of reducing the observed core radius with respect to a
mass weighted definition. However \citet{che90} derive only modest
color gradients due to this effect. In any case it does not affect the
interpretation of large observed cores.
The second issue, that is a difference between the observational and
theoretical core radius definitions, is also not expected to be a major
problem. In our simulations we adopted the density weighted definition
of the core radius from \citet{cas85}, Eq. IV.3:
$$
r_c \equiv \frac{\langle |\vec{x}| \rho \rangle_M}{\langle \rho \rangle_M} = \frac{\sum_i r_i \rho_i m_i}{\sum_i \rho_i m_i},
$$
where the sum is carried over all the stars in the simulation, $r_i$
is the distance of the i-th star from the center of the system, $m_i$
its mass and $\rho_i$ the stellar density computed at the star
position. This definition for $r_c$ is closely aligned to the standard
observational practice of defining the core radius as the radius
$r_{\mu}$ where the luminosity surface density has dropped to half its
central value \citep{cas85}.
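As an illustration, a minimal Python implementation of this density-weighted estimator could read as follows (our own sketch: the sixth-nearest-neighbour density estimate and the density-weighted center are illustrative choices, not necessarily those of the original analysis pipeline).
\begin{verbatim}
import numpy as np

def core_radius(pos, mass):
    """Density-weighted core radius, Eq. IV.3 of Casertano & Hut (1985).

    pos: (N, 3) star positions; mass: (N,) star masses. The local
    density rho_i is estimated from the distance to the 6th nearest
    neighbour; a global normalization constant cancels in the ratio.
    """
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    r6 = np.sqrt(np.sort(d2, axis=1)[:, 6])   # 6th neighbour distance
    rho = mass / r6**3
    # density-weighted center of the system
    center = (pos * (rho * mass)[:, None]).sum(0) / (rho * mass).sum()
    r = np.linalg.norm(pos - center, axis=1)
    return (r * rho * mass).sum() / (rho * mass).sum()

rng = np.random.default_rng(2)
stars = rng.normal(scale=1.0, size=(500, 3))   # toy Gaussian cluster
print(core_radius(stars, np.ones(500)))
\end{verbatim}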
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig1.eps}}
\caption{Evolution of $r_c/r_h$ in a series of runs with equal mass
stars from \citet{heg06,tre06a,tre06b} starting from a variety of
initial conditions that include single stars only (red line: Plummer
model, N=8192), primordial binaries with galactic tidal field (green
line: King model $W_0=7$ \& $10\%$ binaries, $W_0=11$ \& $20\%$
binaries, $N=16384$) and with IMBH (black line: $m_{bh} = 1.4\%
M_{tot}$ \& $10\%$ binaries, Plummer model, N=8192; $m_{bh} = 3\%
M_{tot}$, no binaries, King model $W_0=11$, N=4096). When a realistic
mass spectrum is considered for single-star runs, the core collapse is
much faster ($\approx 2~t_{rh}$, see
\citealt{che90}).}\label{fig:rcrh_th}
\end{figure}
\section{Data sample}\label{sec:data}
As a preliminary application of the $r_c/r_h$ diagnostic to infer the
presence of an IMBH, we consider the compilation of galactic globular
cluster properties by \citet{har96}, revised in February 2003. For
each globular cluster in the tables of \citet{har96} we extract the
following quantities: (1) core radius ($r_c$); (2) half-mass radius
($r_h$); (3) tidal radius ($r_t$); (4) half-mass relaxation time
($t_{rh}$); (5) ellipticity ($e$); (6) cluster luminosity - i.e. the
absolute visual magnitude - ($M_{Vt}$).
We then proceed to build a uniform and homogeneous data set that
includes only \emph{old} globular clusters to ensure that these
systems are well relaxed by two body encounters. As a first step we
therefore exclude from the analysis:
\begin{enumerate}
\item All the globular clusters for which one of these values is not
quoted (except for $e$);
\item All the globular clusters with $t_{rh}>10^9 yr$. Assuming a
typical globular cluster age of $10^{10}yr$, this ensures that all the
objects in the sample are at least 10 half mass relaxation times
old. Most of the clusters excluded from the sample fail this test;
\item All the clusters that are under the influence of strong tidal
field effects, i.e. those for which $r_t/r_h<4$. This last selection
criterion is mainly a precaution to exclude systems where $r_c/r_h $
may start fluctuating as $r_t$ approaches $r_h$ (see
\citealt{tre06a}). Only four clusters that pass the previous
selections are excluded due to a small tidal radius.
\end{enumerate}
After applying our selection criteria we are left with a sample of
$57$ old globular clusters, whose $r_c/r_h$ distribution is plotted in
Fig.~\ref{fig:rcrh_obs}. The plot is extremely interesting: it is
immediately apparent that only a modest fraction of the globular
clusters is characterized by $r_c/r_h<0.1$, that would be consistent
with the core size expected when the system is populated only by
single and binary stars. The majority of the objects in the sample has
a large core ($r_c/r_h > 0.2$) and $r_c/r_h \gtrsim 0.5$ is not
infrequent. The average value of the ratio is $\langle r_c/r_h \rangle
= 0.31$. The median of the distribution is at $r_c/r_h=0.28$. From the
dynamical considerations of Sec.~\ref{sec:picture}, only the presence
of an IMBH appears to be consistent with such large core radius
values.
In Sec.~\ref{sec:picture} we have seen that, when the core to
half-mass radius ratio is set by the burning of binaries, there is a
modest, logarithmic $N$-dependence of this ratio, which is reasonably
well captured by the \citet{ves94} model, especially in the presence
of a galactic tidal field (see \citealt{tre06a}). To take this factor
into account in the analysis we assume (1) that the total luminosity is a
proxy for the total mass of the system and (2) that the average
stellar mass in each cluster is constant and equal to
$0.7M_{\sun}$. The mass-luminosity relation has been checked and
calibrated using the data from \citet{gne97}. We used
$\log_{10}(M_{tot}/M_{\sun}) = A + b \cdot M_{Vt} $ with $A=2.5$ and
$b=-0.38236$. We have then corrected the ratio $r_c/r_h$ by applying a
correction factor $\xi$
$$
\xi = \frac{\log{\left[0.11\, M_{tot}/(0.7\, M_{\sun})\right]}}{\log{(0.11\, N_{ref})}}
$$
so as to obtain a ratio $\tilde{r_c}/\tilde{r_h} = \xi r_c/r_h$
equivalent to that of a cluster with $N_{ref}=3 \cdot 10^5$, a
standard value in numerical simulations of globular clusters. The
results shown in Fig.~\ref{fig:rcrh_obs} do not change significantly
when this correction is applied: the average value is $\langle
\tilde{r_c}/\tilde{r_h} \rangle = 0.27$ and the median value of the
distribution is $0.22$, only marginally smaller than without the
correction. The presence of an IMBH is therefore still needed to
explain the large value of the core radius for at least half the
objects in the sample.
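A short sketch of this correction, assuming $N = M_{tot}/(0.7\,M_{\sun})$ as described above (the function name is a placeholder), is:
\begin{verbatim}
import numpy as np

def xi_correction(M_Vt, A=2.5, b=-0.38236, m_star=0.7, N_ref=3e5):
    """Rescaling factor xi for r_c/r_h to a reference N_ref."""
    M_tot = 10.0 ** (A + b * M_Vt)   # total mass in solar masses
    N = M_tot / m_star               # number of stars
    # the base of the logarithm cancels in the ratio
    return np.log(0.11 * N) / np.log(0.11 * N_ref)

# corrected ratio: xi_correction(M_Vt) * (r_c / r_h)
\end{verbatim}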
Could this conclusion be biased by the fact that the observed globular
clusters are for some (improbable) reasons not yet well relaxed, so
that the equilibrium value for $r_c/r_h$ has not yet been reached
after $\approx 10~t_{rh}$? To address this point we tighten our
selection criteria to retain only objects that are at least
$20~t_{rh}$ old, i.e. with $t_{rh}<5 \cdot 10^8$~yr. This reduces our
sample to 25 clusters whose average core radius is still $\langle
r_c/r_h \rangle = 0.36$ (marginally larger than the whole sample
value), while the median is at $r_c/r_h = 0.21$ (marginally smaller
than the whole sample analysis). If the \citet{ves94} $\log(0.11N)$
correction is applied we have $\langle \tilde{r_c}/\tilde{r_h} \rangle
= 0.28$ and a median value of $\tilde{r_c}/ \tilde{r_h} = 0.17$. The net
effect of the correction is to reduce the mean and the median values,
as by requiring $t_{rh} \leq 5 \cdot 10^8$~yr we are biased toward
selecting smaller clusters. Even for this reduced sample more than
half of the objects have $\tilde{r_c}/\tilde{r_h} > 0.15$, so the need
for a central IMBH remains strong.
As an additional test to ensure the absence of observational biases, we
have further refined the reduced sample with $t_{rh}<5 \cdot 10^8$~yr
to exclude globular clusters that depart from spherical symmetry,
i.e. those clusters with a quoted $e > 0.1$, where the ellipticity
$e$ is defined in terms of the axis ratio ($e=1-b/a$). This
leaves 20 objects in the sample and does not change the results:
$\langle r_c/r_h \rangle = 0.38$ (median $0.23$), while $\langle
\tilde{r_c}/\tilde{r_h} \rangle = 0.29$ (median $0.20$).
All the progressively selective cuts that we applied to the galactic
globular cluster system provide a consistent picture where about half
of the objects have a significantly large core radius, i.e. $r_c/r_h
\gtrsim 0.2$. If an IMBH turns out to be absent in these systems, then
a different dynamical explanation for their large core radii must be
found. If this is indeed the case for the whole sample, then a major
rethinking of our knowledge of the dynamics of globular clusters is
needed. Nevertheless, a few individual objects in the sample may turn
out to have a large core radius for peculiar reasons, especially if
they have been strongly perturbed and are therefore not as dynamically
old as their relaxation time would imply. One example of a peculiar
object that passes all our selection criteria is the cluster with the
largest $r_c/r_h$ ratio, Pal 13, which is considered a very unusual
object, either due to strong tidal heating during the last
perigalacticon passage or due to the presence of a sizeable dark
matter halo \citep{cot02}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig2.eps}}
\caption{Main panel: distribution of the $r_c/r_h$ ratio in the main
sample of 57 \emph{old} galactic globular clusters (selected to have
$t_{rh}<10^9$~yr). Small panel: $\xi \cdot r_c/r_h$ (corrected to
rescale it to a reference number of particles $N_{ref}=3 \cdot 10^5$)
for a sub-sample (20 objects) of the main sample, selected by imposing
$t_{rh}<5 \cdot 10^8$~yr and $e<0.1$. Both panels show a
significant number of clusters with $r_c/r_h \gtrsim 0.2$.
}\label{fig:rcrh_obs}
\end{figure}
\section{Conclusions}\label{sec:conc}
In this paper we propose the use of the observed ratio of the core to
half mass radius as a powerful indirect dynamical indicator for the
presence of an IMBH at the center of old stellar clusters. A number of
theoretical and numerical investigations combined together
\citep{ves94,bau04a,bau04b,fre06,heg06,tre06a,tre06b} strongly support
the idea that after about $5$ to $10$ half-mass relaxation times the
value of $r_c/r_h$ in a globular cluster assumes significantly
different values depending only on whether the core contains single
stars ($r_c/r_h \approx 0.02$), a large fraction of hard binaries
($r_c/r_h \approx 0.05$) or an IMBH ($r_c/r_h \approx 0.3$). The
details of the initial conditions, such as the initial density
profile, do not affect this quantity. By analyzing the distribution of
$r_c/r_h$ in a sample of $57$ galactic globular clusters, selected to
ensure that they are well collisionally relaxed, we conclude that
there is a strong indirect evidence for the presence of an IMBH in at
least half of the objects in the sample.
A globular cluster that hosts an IMBH represents a laboratory where
dynamical interactions between hierarchical systems take place
frequently, leading in particular to the formation of triple (and
even quadruple) systems and to the ejection of a number of
high-velocity stars, with some able to reach ejection velocities of up
to several hundred km/s (see \citealt{tre06b}).
Intriguingly, some objects in the sample have a core radius that may
be too large ($r_c/r_h \gtrsim 0.5$) to be explained by the
presence of a \emph{single} IMBH. It is therefore possible that
some of these globular clusters host a binary IMBH. In this case,
however, the heating may be so efficient (e.g. see \citealt{yu03} for
an estimate of the interaction rate of a binary BH with single stars)
that it is not clear whether the system is able to survive for even a
few relaxation times. Therefore an accurate numerical modeling is
critically required before any conclusion about this speculation can
be drawn.
Our conclusions on the presence of single IMBHs appear instead to be
robust. We have discussed a number of possible biases in the
comparison of numerical simulations with the observations, but while
some of the idealizations introduced in the modeling may induce
limited changes in $r_c/r_h$, we could not identify a possible major
bias. An error of at least $300\%$ would be required to be
able to explain the observed distribution of $r_c/r_h$ without
invoking the systematic presence of an IMBH in half of the objects in
the sample.
Clearly more detailed numerical simulations will be required to
evaluate and confirm the presence of an IMBH in specific clusters of
the sample. In particular, it would be extremely interesting to
convert N-body snapshots into synthetic observations that could then
be directly analyzed like images acquired by a telescope, avoiding an
indirect comparison between theoretical and observed quantities.
\section{Acknowledgments}
I am very grateful to Holger Baumgardt for providing the snapshots of
his numerical simulations whose analysis is used in
Sec.~\ref{sec:picture}. It is a pleasure to thank Douglas Heggie, Piet
Hut and Massimo Stiavelli for their helpful comments and suggestions.
This work was partially supported by NASA through grant HST-AR-10982.
\section{Parameterization of Energy Correlators}
\label{sec:parameterization}
At hadron colliders, an $N$-point energy correlator is specified by the rapidity and azimuthal angles of $N$ points on an idealized cylindrical calorimeter at infinity.
In the jet substructure (collinear) limit we are considering, they can be well approximated by the configurations of $N$-sided polygons (not necessarily convex), with the side lengths specified by the mutual angular distances of the points, $\Delta R = \sqrt{\Delta y^2 + \Delta \phi^2}$.
Two-point, three-point, and four-point correlators are shown schematically in \Fig{fig:example}.
A three-point projected energy correlator (E3C) is a three-point energy correlator with $R_S$ and $R_M$ integrated over, while maintaining the hierarchy $R_S \leq R_M \leq R_L$.
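As a minimal illustration of how such a projected correlator might be accumulated (the contact terms with coinciding indices, the binning, and the exact weighting conventions are omitted; \texttt{pt}, \texttt{y}, \texttt{phi} are placeholder arrays for the hadrons in a jet):
\begin{verbatim}
import itertools
import numpy as np

def delta_R(y1, phi1, y2, phi2):
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi  # wrap azimuth
    return np.hypot(y1 - y2, dphi)

def e3c_entries(pt, y, phi, pt_jet):
    """Yield (R_L, weight) pairs for the projected E3C of one jet."""
    for i, j, k in itertools.combinations(range(len(pt)), 3):
        sides = [delta_R(y[i], phi[i], y[j], phi[j]),
                 delta_R(y[j], phi[j], y[k], phi[k]),
                 delta_R(y[i], phi[i], y[k], phi[k])]
        weight = pt[i] * pt[j] * pt[k] / pt_jet**3  # energy weighting
        yield max(sides), weight                    # keep only R_L
\end{verbatim}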
\begin{figure}[h!]
\includegraphics[width=0.5\linewidth]{figs/correlators}
\caption{Example configurations of two-point, three-point, and four-point energy correlators, labeled by the longest side $R_L$, medium side $R_M$ (in the case of the three-point correlator), and shortest side $R_S$.}
\label{fig:example}
\end{figure}
In our analysis of the three-point energy correlator, we only distinguish inequivalent configurations up to translation, rotation, and reflection.
We use the configuration space of a triangle to label inequivalent configurations.
This is illustrated by the green region in \Fig{fig:conf_zzbar}.
The squeezed (OPE) limit is located at the bottom left corner.
We also label the three triangles plotted in \Fig{fig:EEC_shape_scale} by A, B, and C in \Fig{fig:conf_zzbar}.
To simplify data binning, we make a coordinate transformation of the configuration space to a square, as in \Eq{eq:transf}.
A schematic illustration of the mapping is shown in \Fig{fig:conf_xiphi}, where the squeezed limit has been blown up into a line at $\xi = 0$.
\begin{figure}[h!]
\subfloat[]{%
\includegraphics[width=0.255\linewidth,valign=t]{figs/zzbar_crd.png}
\vphantom{\includegraphics[width=0.3\textwidth,valign=t]{figs/zzbar_crd.png}}%
\label{fig:conf_zzbar}
}
\qquad\qquad
\subfloat[]{%
\includegraphics[width=0.3\linewidth,valign=t]{figs/xiphi_crd.png}
\label{fig:conf_xiphi}
}
\caption{
(a) Configuration space of triangles with fixed longest side $R_L$.
(b) Mapping the configuration space to a square using the coordinate transformation in \Eq{eq:transf}.
The OPE singularity is blown up into a line at $\xi = 0$.
The three labeled shapes have $\xi \in [0.975, 1]$, with $\phi$ values of (A) [1.532, 1.571], (B) [0.746, 0.785], and (C) [0, 0.039].
}
\label{fig:configuration}
\end{figure}
\section{Comparison with Leading-Logarithmic Predictions}
\label{sec:LL}
It is instructive to compare theory predictions for the $N$-point projected correlator against results obtained from the CMS Open Data.
For simplicity we restrict our theory prediction to leading-logarithmic (LL) accuracy.
In principle, a next-to-leading logarithmic analysis could be carried out using the formalism in \Refs{Chen:2020vvp,Dixon:2019uzg}, combined with the use of fragmenting jet functions to incorporate the jet algorithm dependence~\cite{Procura:2009vm,Kang:2016ehg,Kang:2016mcy}.
Next-to-next-to-leading logarithmic predictions are also available for $e^+e^-$ collisions~\cite{Dixon:2019uzg}, while for hadronic collisions, an infrared subtraction algorithm for collinear-unsafe final-state observables is needed, which is currently not available.
In the LL approximation, the $N$-point projected energy correlator is given by the following factorization formula at the factorization scale $\mu$~\cite{Chen:2020vvp}:
\begin{equation}
\text{ENC}(R_L) = \frac{d}{dR_L} \left[
(1,1) \exp\left( - \frac{\gamma^{(0)}(N+1)}{\beta_0} \ln \frac{\alpha_s(R_L \mu)}{\alpha_s(\mu)} \right)
\begin{pmatrix}
x_q
\\
x_g
\end{pmatrix}
\right] H_J(\mu) \,,
\label{eq:LL}
\end{equation}
where $\beta_0 = 11 C_A/3 - 2N_f/3$ is the one-loop coefficient of the QCD beta function, $x_q$ ($x_g$) is the fraction of quark (gluon) jets in the sample, and $H_J$ is the production cross section for a jet under the $p_T$ and rapidity selection cuts.
Note that at LL, the $N$ dependence only enters through $\gamma^{(0)}(N+1)$.
At leading order, $\gamma^{(0)}(j)$ is the anomalous dimension matrix of twist-$2$ local Wilson operators for quarks and gluons:
\begin{equation}
\gamma^{(0)}(j) =
\begin{pmatrix}
\gamma_{qq}^{(0)}(j) & 2N_f \gamma_{qg}^{(0)}(j)
\\
\gamma_{gq}^{(0)}(j) & \gamma_{gg}^{(0)}(j)
\end{pmatrix} \,,
\end{equation}
with matrix entries given by
\begin{align}
\label{eq:QCDAD}
\gamma_{qq}^{(0)}(j)&\ = -2 C_F \left[ \frac{3}{2} + \frac{1}{j (j+1)} - 2 (\Psi(j+1) + \gamma_E ) \right] \,,
\nonumber\\
\gamma_{gq}^{(0)}(j)&\ = -2 C_F \frac{ (2 + j + j^2)}{j (j^2 - 1)} \,,
\nonumber\\
\gamma_{gg}^{(0)}(j)&\ = -4 C_A \bigg[ \frac{1}{j (j-1)} + \frac{1}{(j+1) (j+2)}
- (\Psi(j+1) + \gamma_E) \bigg]
- \beta_0 \,,
\nonumber\\
\gamma_{qg}^{(0)}(j) &\ = - \frac{(2 + j + j^2)}{j (j+1) (j+2)} \,,
\end{align}
where $\Psi(z) = \Gamma'(z)/\Gamma(z)$ is the logarithmic derivative of the gamma function.
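For concreteness, a minimal numerical sketch of this matrix (a direct translation of the expressions above, assuming $N_f=5$ and using SciPy's digamma for $\Psi$; not part of the original analysis) is:
\begin{verbatim}
import numpy as np
from scipy.special import digamma

CF, CA, NF = 4.0 / 3.0, 3.0, 5
BETA0 = 11.0 * CA / 3.0 - 2.0 * NF / 3.0
GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant

def gamma0(j):
    """LO twist-2 anomalous dimension matrix in the (quark, gluon) basis."""
    S = digamma(j + 1) + GAMMA_E  # Psi(j+1) + gamma_E
    gqq = -2 * CF * (1.5 + 1.0 / (j * (j + 1)) - 2 * S)
    ggq = -2 * CF * (2 + j + j * j) / (j * (j * j - 1))
    ggg = (-4 * CA * (1.0 / (j * (j - 1))
                      + 1.0 / ((j + 1) * (j + 2)) - S) - BETA0)
    gqg = -(2 + j + j * j) / (j * (j + 1) * (j + 2))
    return np.array([[gqq, 2 * NF * gqg],
                     [ggq, ggg]])
\end{verbatim}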
The expression in \Eq{eq:LL} is a LL prediction at parton level.
At small $R_L$, it scales as $1/R_L$ and is the dominant perturbative contribution.
It is known, however, that EEC-type observables suffer from large hadronization corrections, which scale as $1/R_L^2$~\cite{Basham:1978zq,Korchemsky:1994is,Korchemsky:1997sy,Korchemsky:1999kt}.
When taking the ratio of projected energy correlators, though, a large part of the hadronization corrections is cancelled.
In addition, taking the ratio also largely cancels the hard function $H_J$.
Thus, up to the overall quark/gluon composition, the LL prediction is independent of the parton distribution functions and underlying hard scattering processes that produce the jet ensemble.
This makes the ratio of projected energy correlators an ideal candidate for precision QCD measurements.
\begin{figure}[t]
\subfloat[]{%
\includegraphics[width=0.47\linewidth]{figs/LL_E3C_ratio.pdf}
\label{fig:E3C_ratio}
}
\subfloat[]{%
\includegraphics[width=0.47\linewidth]{figs/LL_E4C_ratio.pdf}
\label{fig:E4C_ratio}
}
\\
\subfloat[]{%
\includegraphics[width=0.47\linewidth]{figs/LL_E5C_ratio.pdf}
\label{fig:E5C_ratio}
}
\subfloat[]{%
\includegraphics[width=0.47\linewidth]{figs/LL_E6C_ratio.pdf}
\label{fig:E6C_ratio}
}
\caption{Ratios of $N$-point projected energy correlators for $N$ ranging from $2$ to $6$.
We show results from the CMS Open Data for both the all-hadron case~(black) and the charged-hadron case~(red).
We stress that these results do not involve detector unfolding and that the error bars represent only statistical uncertainties.
We also show LL predictions for quark and for gluon jets, with the corresponding scale uncertainty band.}
\label{fig:LL_vs_CMS}
\end{figure}
In \Fig{fig:LL_vs_CMS}, we compare the partonic predictions with CMS Open Data for the ratios of projected energy correlators.
The experimental results are shown for all hadrons~(black) and charged hadrons only~(red), and their relative agreement is one piece of evidence for the non-perturbative robustness of these ratios.
The close agreement between the scaling for the ratios of projected correlators as measured on all hadrons and on tracks arises from a combination of three non-trivial features of these observables. First, due to the renormalization group consistency of the hard-collinear factorization formula in \Eq{eq:LL}~\cite{Chen:2020vvp}, the use of tracks does not modify the anomalous dimension of the jet or hard functions. Second, as shown in \Ref{Chen:2020vvp}, in a pure gluon theory, the track functions are governed by the same anomalous dimensions as the jet function but with a non-trivial mixing structure, leading to an interesting cancellation and resulting in the same LL scaling behavior whether measured on all hadrons or tracks. And finally, corrections to this picture in QCD are suppressed by the difference of the first moments of the track functions for quarks and gluons. Since high energy jets in QCD are dominated by pions, the first moments satisfy the approximate relation $T_g(1)\simeq T_q(1) \simeq 2/3$ and hence $\Delta=T_q(1)-T_g(1)\ll 1$ is highly suppressed \cite{Li:2021zcf,Jaarsma:2022kdd}.
For our LL calculation, we choose $\alpha_s (M_Z) = 0.118$ and use two-loop running of the strong coupling.
We set $\mu = p_T^{\rm jet}/5$ as the nominal scale, as motivated by the fragmenting jet formalism, and vary around the nominal scale by a factor of $2$ to estimate the theory uncertainty.
The partonic predictions are shown for a pure-quark sample ($x_q = 1$, $x_g=0$) and a pure gluon sample ($x_q = 0$, $x_g= 1$).
We see that a reasonably good agreement can be achieved if a large gluon jet fraction is chosen.
The fact that good agreement persists out to $N = 6$ is evidence that hadronization corrections are indeed largely cancelled in the ratios.
In future work, it would be interesting to fit the quark/gluon composition to the data using the technique of \Ref{Komiske:2018vkc}.
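To make this concrete, a rough numerical sketch of the LL formula above is given below; it reuses the constants and \texttt{gamma0} from the previous sketch, and the Euler step size, the finite-difference derivative, and the function names are illustrative choices rather than the procedure actually used for the predictions shown here.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

MZ = 91.1876  # GeV
beta1 = 34 * CA**2 / 3 - (20 * CA / 3 + 4 * CF) * (NF / 2.0)
b0, b1 = BETA0 / (4 * np.pi), beta1 / (16 * np.pi**2)

def alpha_s(mu, alpha_mz=0.118, steps=4000):
    """Two-loop running coupling, Euler-integrated from M_Z in ln(mu^2)."""
    a = alpha_mz
    dt = (np.log(mu**2) - np.log(MZ**2)) / steps
    for _ in range(steps):
        a -= a * a * (b0 + b1 * a) * dt  # d(alpha)/d(ln mu^2)
    return a

def enc_cumulant(RL, N, xq, xg, mu):
    """Bracketed quantity in the LL formula, before d/dR_L and H_J."""
    M = expm(-gamma0(N + 1) / BETA0
             * np.log(alpha_s(RL * mu) / alpha_s(mu)))
    return np.ones(2) @ M @ np.array([xq, xg])

def ll_enc_over_eec(RL, N, xq, xg, mu, eps=1e-4):
    """LL ratio ENC/EEC; H_J cancels between numerator and denominator."""
    d = lambda n: (enc_cumulant(RL + eps, n, xq, xg, mu)
                   - enc_cumulant(RL - eps, n, xq, xg, mu))
    return d(N) / d(2)
\end{verbatim}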
\begin{figure}[t]
\includegraphics[width=0.5\linewidth]{figs/EEC_all}
\caption{EEC from all hadrons in a jet, to be compared to \Fig{fig:EEC}. No unfolding to particle level is performed, and the uncertainty is statistical only.}
\label{fig:EEC_all}
\end{figure}
For completeness, in \Fig{fig:EEC_all} we show the two-point correlator for the all-hadron case.
Like for the charged-hadron version in \Fig{fig:EEC}, the different phases of QCD are still visible.
That said, in the quark/gluon phase, the scaling law seems to be weakly violated.
We suspect this is due to detector effects, so it will be interesting to see if these features are absent once unfolding is performed.
Perhaps counterintuitively, the $R_L d\sigma/d R_L \propto R_L^2$ scaling is robust in the all hadron case, despite the worse angular resolution for neutral hadrons.
One has to remember, though, that detector smearing effects also induce decorrelation, so detailed studies are needed to disentangle detector effects from a genuine QCD phase transition.
\section{Comparison with Pythia Parton Shower}
\label{sec:pythia}
\begin{figure}[]
\subfloat[]{%
\includegraphics[width=0.47\linewidth]{figs/EEEC_1D_xL_quark.pdf}
\label{fig:pythia_quark}
}
\subfloat[]{%
\includegraphics[width=0.47\linewidth]{figs/EEEC_1D_xL_gluon.pdf}
\label{fig:pythia_gluon}
}
\caption{Three-point scaling behavior in \textsc{Pythia} for fixed shapes for (a) quark jets and (b) gluon jets, to compare to \Fig{fig:EEC_shape_scale}.}
\label{fig:pythia_shape_scale}
\end{figure}
In \Fig{fig:EEC_shape_scale}, we saw that the CMS Open Data results were consistent with theoretical expectations for the $R_L$ scaling of the three-point correlator for different shapes.
For reference, we show results from the default parton shower in \textsc{Pythia 8.226}~\cite{Sjostrand:2014zea}, using quark (uds) and gluon jet datasets generated for the study in \Ref{Komiske:2018cqr}.
We plot the $R_L$ scaling for three different triangles A~(blue), B~(green), and C~(red), and the normalized projected three-point energy correlator for reference~(purple).
While \textsc{Pythia} does show a scaling behavior for a fixed-shape triangle, the scaling exponent is nevertheless different from that of the three-point projected energy correlator. This is clearly seen in the bottom panels, where we show the ratio of the scaling for the fixed shapes to the projected correlators.
This seems to contradict the CMS Open Data result in \Fig{fig:EEC_shape_scale}.
In future work, it will be interesting to understand the mismatch between the parton shower and LO predictions.
While \textsc{Pythia} includes LL and partial NLL resummation of logarithms of $R_S$ and the fixed-order prediction does not, the configurations of A, B, and C are chosen such that large logarithms of $R_S$ are less important.
Furthermore, the fixed-order prediction includes matrix-element corrections for $1\to 3$ splittings, whereas the default parton shower in \textsc{Pythia} does not.
Understanding the origin of the difference between \textsc{Pythia} and fixed-order theory, and between \textsc{Pythia} and the CMS Open Data, might shed light on the resummation of the $R_L$ scaling and on the matrix-element corrections for $1 \to 3$ splittings.
\end{document}
\section{Introduction}
The implementation of non-classical light sources in spectroscopic and sensing methods has been a long-standing goal for advancing many practical applications of quantum science. One particularly intriguing possibility is to use a time-energy entangled photon pair source for two-photon absorption (2PA) excitation instead of a coherent, laser-based (classical) source. Here we refer to the latter regime as classical two-photon absorption (C2PA). It has been predicted that if entangled photons generated via spontaneous parametric down conversion (SPDC) are used for excitation then the resulting entangled two-photon absorption (E2PA) rate should scale linearly with the excitation flux and the process efficiency can be boosted relative to C2PA at low photon flux\cite{1989Banacloche,1990Javanainen}. Entangled photon excitation might therefore enable ultra-low-power two-photon excited fluorescence imaging, which would be particularly advantageous for limiting perturbation and damage of fragile biological samples.
The favorable scaling behavior stems from the linear dependence of the 2PA rate on the second order correlation function, $g^{(2)}$ \cite{1968Mollow,2021Parzuchowski}. In addition to this absorption efficiency enhancement from the photon statistics, further enhancement is possible from the spectral shape and bandwidth of the frequency anticorrelated photon pairs \cite{2020Raymer,2021Carnio}. Whether these mechanisms can provide a practical advantage for E2PA in molecules is still unclear.
Since 2004, numerous publications have reported E2PA and entangled two-photon excited fluorescence (E2PEF) for many different chromophores, reporting large excitation efficiencies \cite{2004French,2006Lee,2009Harpham,2010Guzman,2013Upton,2017Varnavski,2018Monsalve,2020Schatz}. The resulting E2PA cross sections, $\sigma_{\text{E2PA}}$, may be as large as $10^{-17}$~$\text{cm}^2$, which is on the same order of magnitude as a moderately-strong one-photon absorption (1PA) transition. More recently, however, a number of studies have reported conflicting results which cast doubt on the large enhancements claimed in those reports \cite{2019Ashkenazy,2020Mikhaylov,2021Corona,2021Raymer,2021Landes}. For example, three different groups employed E2PEF measurements to determine the $\sigma_{\text{E2PA}}$ of Rhodamine 6G (Rh6G) \cite{2021Tabakaev, 2021Parzuchowski, 2020landes}. Tabakaev \textit{et al.}~\cite{2021Tabakaev} measured E2PEF using CW SPDC excitation at 1064~nm with up to $5\times10^{8}$~pairs/sec. Although several tests were performed to rule out one-photon mechanisms as the origin of the measured signal, the observed dependence on time delay was inconsistent with expectations \cite{2020Lavoie}. The authors concluded that $\sigma_{\text{E2PA}}$ was $(0.99-1.9)\times 10^{-21}$~$\text{cm}^2$ for a range of fluorophore concentrations. In a separate study, Parzuchowski \textit{et al.}~\cite{2021Parzuchowski} observed no E2PEF using a pulsed SPDC excitation source at 810~nm with approximately $9\times10^{9}$~photons/sec. This result was used to determine an upper bound on the cross section for Rh6G of $\sigma_{\text{E2PA}} = 1.2\times 10^{-25}$~$\text{cm}^2$, which is nearly four orders of magnitude smaller than the value reported by Tabakaev \textit{et al.}~\cite{2021Tabakaev}. The null result of Parzuchowski \textit{et al.} was supported by results from a study by Landes \textit{et al.}~\cite{2020landes}. In this case, a CW SPDC excitation source at 1064~nm with $2\times10^{9}$~pairs/s was used, along with dispersion control and sum frequency generation measurements to optimize the excitation radiation parameters and the signal collection efficiency. However no measurable E2PEF was observed~\cite{2020landes}.
The origin of the $\approx$10,000-fold variation in reported $\sigma_{\text{E2PA}}$ values is unclear.
Here we focus on hot-band absorption (HBA) which can contribute to signals measured with SPDC and mimic certain characteristics of E2PA. HBA is a classical 1PA process from the thermally-populated vibronic levels of the ground electronic state.
Figure~\ref{fig:HBA} shows a schematic of a two-electronic-level system (solid grey lines) with vibronic levels of the ground state indicated by dashed grey lines. A 2PA transition is allowed between these two electronic states (blue vertical arrows, Fig.~\ref{fig:HBA}~\textit{a}). If the excitation source has a broad spectrum or is tuned far away from the peak of the ``0-0'' transition, its radiation can stimulate transitions involving the vibronic manifold of the ground electronic state. If 1PA transitions between these levels and the upper electronic state are allowed, HBA may take place (red vertical arrows, Fig.~\ref{fig:HBA}~\textit{b}). Although the probability of HBA transitions is very low, C2PA is also inefficient; thus the magnitude of the signals from the two processes can be comparable under certain conditions. The system relaxes back to the ground electronic state emitting fluorescence photons (``anti-Stokes'' emission; green vertical arrows), which are indistinguishable for the two mechanisms.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{HBA_diagram.png}
\caption{
A schematic of 2PA (a) and HBA (b) with electronic and vibronic levels indicated by solid and dashed lines respectively. The 2PA excitation source (blue) may have high energy components (red) that resonate with 1PA (hot-band) transitions.
Note that the fluorescence emitted by the two mechanisms (green arrows) is indistinguishable.}
\label{fig:HBA}
\end{figure}
HBA has been shown to play a crucial role in C2PA measurements~\cite{1983Apatin, 1995Okamura,2003Drobizhev,2007Makarov,2007Rebane,2008Starkey}, but has not been discussed in the E2PA literature. The importance of including HBA in the analysis of C2PA data has been detailed in a study by Drobizhev \textit{et~al.}~\cite{2003Drobizhev}, where C2PA and HBA were simultaneously observed in a series of meso-tetra-alkynyl-porphyrins. In this study, the excitation frequency $\nu$ was detuned far to the red of the chromophore's ``0-0'' transition frequency ($\nu_{\text{max}}$) to avoid direct one-photon excitation of the lowest energy transition. However, the detuning ($\nu_{\text{max}}-\nu$) was insufficient to avoid excitation from the vibronic manifold of the ground electronic state. Temperature-dependent measurements were conducted to decouple the roles of the two excitation pathways.
An additional complication arises in distinguishing HBA from E2PA using power dependence. Since HBA is a 1PA process, it scales linearly with excitation power.
When the SPDC photon flux is sufficiently low that pairs are separated in time, the E2PA rate is also predicted to scale linearly with excitation power. However, when linear losses act on the produced pairs, E2PA exhibits the unique signature of scaling quadratically with attenuation of the SPDC beam.
This behavior is also expected for other two-photon processes, as clearly demonstrated for sum frequency generation~\cite{2005Dayan}. Thus, to confirm the origin of a potential E2PA signal, both power dependencies should be measured. In earlier reports the linear dependence on the pump alone was taken as proof that the signals originated from E2PA. However this signature is consistent with many one-photon mechanisms~\cite{2021Parzuchowski}, including HBA. This amalgamation of signals, corrupting the purely quantum-enhanced 2PA signal, would lead to misleading conclusions regarding the efficiency of E2PA and its dependence on molecular properties and on the quantum state of the light.
Here we report 2PA measurements on Rh6G and LDS798 (CAS No 92479-59-9) dissolved in methanol and deuterated chloroform ($\text{CDCl}_{3}$), respectively. Rh6G is particularly interesting because it was studied in the prior E2PA reports mentioned above and has well known C2PA properties~\cite{2016deReguardatti}. LDS798 is another commercially-available fluorophore with a large C2PA cross section at 1064~nm~\cite{2011Makarov,2020Drobizhev}. According to a simple probabilistic model of the E2PA process proposed in Fei \textit{et al.}~\cite{1997Fei}, a large C2PA cross section implies a large E2PA cross section as well.
We use two CW sources operated near 1060~nm, a laser and time-energy entangled photon pairs generated via SPDC, to independently excite the samples under identical conditions. We observe no measurable E2PEF signal from Rh6G with the maximum available SPDC power. In contrast, a signal is observed from the LDS798 sample. Upon further investigation, we find this signal does not show the excitation power scaling characteristics of E2PA. We attribute this fluorescence signal to HBA. Temperature-dependent measurements, excitation wavelength-dependent measurements, and modelling of the signals support our conclusions. We propose that HBA may be responsible for absorption signals observed with SPDC excitation. We emphasize the importance of including additional verification tests to elucidate the origin of signals measured with SPDC excitation.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Fig2.png}
\caption{Results of classical (coherent) two-photon excited fluorescence measurements on a log-log scale. Fluorescence signals (vertical axis, in counts per second, cnt/s) versus the laser excitation power (lower horizontal axis, in mW) or versus the excitation photon flux (upper horizontal axis, in photons per second per $\text{cm}^2$) measured for Rh6G and LDS798 are shown in panels \textit{a} and \textit{b}, respectively (black squares). The slope values obtained from the fit (red lines) and the derived cross section values in GM units are indicated in the insets. In the case of LDS798 the slope value changes from quadratic (1.91) to nearly linear (1.10) with decreasing power.}
\label{fig:C2PEF}
\end{figure}
A detailed description of the experimental setup and the measurement procedures is provided in the Supporting Information (SI) section.
Classical two-photon excited fluorescence (C2PEF) measurements on Rh6G were used to ensure the proper alignment of the optical system and characterize its sensitivity. In Fig.~\ref{fig:C2PEF} the detected fluorescence signal (in counts per second, cnt/s) is plotted versus the excitation photon flux (upper horizontal axis, in photons per second per $\text{cm}^2$) or the excitation power (lower horizontal axis, in mW) on a log-log scale. The minimum count rate that we could assign to fluorescence photons measured above the background level ($3-5$~cnt/s) is determined to be approximately 0.5~cnt/s. The power dependence of C2PEF for Rh6G (Fig.~\ref{fig:C2PEF}~a) in the range of 0.1-2~mW is found to be near-quadratic with a slope (power exponent) of 1.96$\pm$0.01. The sample concentration (1.1~mM), fluorescence quantum yield (0.9~\cite{1987Penzkofer}) and the measured and calculated excitation condition parameters are used to derive the value of the C2PA cross section, $\sigma_{\text{C2PA}} = 9.9$~GM (see details in the SI) which agrees with the literature value of 9.8~GM~\cite{2016deReguardatti}.
We repeat the C2PEF measurement with LDS798, which has a large $\sigma_{\text{C2PA}}$, but is less advantageous for fluorescence detection because its quantum yield is only 0.054 (see SI) and its emission spectrum is red-shifted from the peak of the detector sensitivity (Fig.~S3). Makarov \textit{et~al.}~\cite{2011Makarov} reported a $\sigma_{\text{C2PA}}$ of 515~GM for LDS798 excited at 1060~nm. This value was probably overestimated by a factor of two due to an issue with a Rhodamine B reference standard used in that work, as discussed in de Reguardati \textit{et al.}~\cite{2016deReguardatti}
In our experiment the C2PEF power dependence in the range 50-500~mW is found to have a slope value of 1.91$\pm$0.01 (Fig.~\ref{fig:C2PEF}~\textit{b}). Using a sample of $0.1$~mM LDS798 we derive $\sigma_{\text{C2PA}}=220$~GM.
The methods and apparatus used here are very similar to ones employed in our earlier study~\cite{2021Parzuchowski}, where the uncertainty for determining $\sigma_{\text{C2PA}}$ was estimated to be approximately 28\%. We therefore assume it is similar in the present experiment.
With decreasing excitation power on LDS798, we observe that the slope of the power dependence decreases and reaches a value of 1.10$\pm$0.09 in the 0.05-1~mW range. Overall, the data show a transition from a quadratic (i.e. C2PA) to a linear (i.e. 1PA) excitation regime. Although a transition of this type is rather uncommon in C2PEF experiments, there are several reports of similar behavior indicating the presence of the HBA process~\cite{1983Apatin, 1995Okamura,2003Drobizhev,2007Makarov,2007Rebane,2008Starkey}. As suggested by Drobizhev \textit{et~al.}~\cite{2003Drobizhev}, the collected fluorescence signal, $F$ (in cnt/s), can be written as a sum of two terms, one describing the excitation via HBA and the other via C2PA,
\begin{equation}
F = N K \sigma_{\text{HBA}} \phi + \frac{1}{2} N K \sigma_{\text{C2PA}} \phi^2
\label{EqHBAplus2PA}
\end{equation}
where $N$ is the number of molecules in the excitation volume, $K$ is the overall fluorescence collection efficiency, $\phi$ is the excitation photon flux and $\sigma_{\text{HBA}}$ is the HBA cross section. $\sigma_{\text{HBA}}$ is a function of the excitation frequency $\nu$ and the sample temperature $T$ (see the SI for details). Lowering the temperature is expected to decrease the rate of HBA but not affect the rate of C2PA. In our temperature-dependent experiments (see SI) we observe a maximum 12~nm red shift in the steady-state emission spectrum of the fluorophore and a 23\% decrease in quantum yield with increasing temperature, both of which are accounted for in the analysis.
Eq.~\ref{EqHBAplus2PA} indicates that the relative contributions of C2PA and HBA vary with excitation flux. The HBA term depends linearly on excitation power while the C2PA term depends quadratically, thus at higher powers the latter should be dominant. This is consistent with what we observe in our experiment with LDS798 (see more on this below and in SI).
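A minimal sketch of this two-term model, showing how the local power-law exponent interpolates between 1 and 2 as the flux crosses $\phi_\times = 2\sigma_{\text{HBA}}/\sigma_{\text{C2PA}}$ (all prefactors lumped into two fit parameters), is:
\begin{verbatim}
import numpy as np

def fluorescence(phi, a_hba, a_2pa):
    """F = a_hba*phi + 0.5*a_2pa*phi**2, with a_hba = N*K*sigma_HBA
    and a_2pa = N*K*sigma_C2PA lumped into single coefficients."""
    return a_hba * phi + 0.5 * a_2pa * phi**2

def local_slope(phi, a_hba, a_2pa):
    """Local exponent d(log F)/d(log phi): -> 1 at low flux, 2 at high."""
    logF = np.log(fluorescence(phi, a_hba, a_2pa))
    return np.gradient(logF, np.log(phi))
\end{verbatim}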
Next, we block the 1060~nm laser and switch to SPDC excitation. For Rh6G we are unable to detect any fluorescence signal with the maximum available SPDC power of approximately 1.3~$\mu$W.
However, for LDS798 we measure a strong fluorescence signal (up to 40~cnt/s) under SPDC excitation (Fig.~\ref{fig:HBAwithSPDC}).
First, we carefully verify that this signal is neither a scattered portion of the SPDC light nor related to the solvent itself. Replacing the LDS798 sample with pure $\text{CDCl}_{3}$ results in no signal observed above the background level.
To assess whether the signal from the LDS798 sample is E2PEF, we test for a unique signature of the process as discussed above. In two separate measurements, we vary the SPDC pump power and attenuate the SPDC beam. Figure~\ref{fig:HBAwithSPDC} is a plot of the measured fluorescence signals (vertical axis, in cnt/s) versus the excitation power (lower horizontal axis, in nW) or, equivalently, versus the flux (upper horizontal axis, in photons per sec per $\text{cm}^2$) on a log-log scale. Varying the SPDC pump power (green squares), we observe that the fluorescence follows a linear dependence (slope value of 1.02$\pm$0.02), which is consistent with E2PEF. However, attenuation of the SPDC beam power (black symbols) also results in a linear dependence (slope value of 1.01$\pm$0.03). The latter result clearly indicates that the fluorescence signal is not related to E2PA because attenuating the SPDC beam with a neutral density filter randomly removes individual photons rather than photon pairs, thus making the excitation more classical, and classical 2PA scales quadratically with power.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{HBAmodel.png}
\caption{Results of measurements with SPDC excitation of LDS798 on a log-log scale. Fluorescence count rate (in counts per second, cnt/s) versus the SPDC power (lower horizontal axis, in nW) or the excitation photon flux (upper horizontal axis, in photons per second per $\text{cm}^2$). The SPDC excitation flux is calculated assuming the effective wavelength of 1064~nm. The excitation power was controlled by attenuating the SPDC (black squares) or pump laser (green squares) beams. In both cases the signal dependence was linear as determined by the slope values (shown in the inset) calculated from the fits (dashed black and green line, respectively). }
\label{fig:HBAwithSPDC}
\end{figure}
We performed two additional experiments that confirm the observation of HBA. First, we characterized the temperature dependence of the C2PA signal for LDS798 encapsulated in a poly-dimethylsiloxane (PDMS) matrix. A rigid polymer rather than a liquid was selected to ensure that changes in solvent viscosity with temperature do not influence the radiationless relaxation rate of LDS798 and thus its fluorescence quantum yield.~\cite{Doan2017} The fluorescence signal in PDMS was observed to increase nearly four-fold with increasing temperature from 283~K to 323~K, and is well fit by a Boltzmann function (Fig.~S4~b (SI)). The experiment was repeated with SPDC excitation (Fig.~S4~c (SI)). The measured signal scales with temperature in the same manner. In addition, we used an independent setup designed for characterizing absorption cross sections~\cite{2020Drobizhev} to measure the HBA cross section of LDS798 as a function of wavelength in the 680 to 900~nm region. We compared this to the cross section we derived from the data shown in Fig.~\ref{fig:C2PEF}~b. The cross sections in the red tail region, including our 1060~nm data point, fit to a Boltzmann function (Fig.~S8 (SI)), which is consistent with HBA theory (Eq.~6 (SI)). Finally, we note that a model entirely based on HBA without any adjustable parameters is consistent with both the laser-excited and SPDC-excited fluorescence signals (Fig.~S7 (SI)).
Several important points can be concluded from this study.
We have shown that even when the excitation wavelengths are detuned hundreds of nanometers from the 1PA peaks of a chromophore, vibronic states can still be excited via HBA. Although this effect is known from previous reports on C2PA, it has not been discussed in previous studies of E2PA. Explaining the origin of the inconsistency among different experiments is the most significant challenge currently facing the development of E2PA spectroscopy and its applications. As shown here for LDS798, the HBA signal can partially mimic the power scaling of E2PA. It seems likely that this mechanism could be contributing to E2PA measurements on other chromophores as well. Potential HBA contributions should be carefully quantified since they could lead to a significant overestimate of the quantum enhancement of the 2PA efficiency. Our results underline a critical need to perform stringent tests for unique signatures of E2PA in measured signals with SPDC excitation to distinguish one-photon processes from E2PA.
In particular, to confirm a signal is from E2PA as opposed to other potential mechanisms, the proper validation procedure is to vary the incident power from the entangled photon source both by attenuating the power input to the SPDC crystal and also by attenuating the power afterwards. To demonstrate E2PEF, these two methods of varying the incident power must show different fluorescence power dependencies.
\section{Supporting Information}
Details of the experimental setup, applied measurement procedures, $\sigma_{\text{C2PA}}$ calculations, temperature dependence measurements, modeling the measured HBA signal and measurements of HBA cross section as a function of wavelength are available in the supporting information section.
\begin{acknowledgement}
We acknowledge Mikhail Drobizhev (Montana State University) for suggesting that we closely examine the HBA contributions and for the technical help with some of the measurements. Some of the experimental results were obtained using his resource for multiphoton characterization of genetically encoded probes, supported by the NIH/NINDS grant U24NS109107.
AM and KMP thank Srijit Mukherjee for assisting with the 1PA measurements and for valuable discussions in preparation of the manuscript. This work was supported by NIST and by the NSF Physics Frontier Center at JILA (PHY 1734006) and by the NSF-STROBE center (DMR 1548924).
Certain commercial equipment, instruments, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.
\end{acknowledgement}
\section{Present Addresses}
A.M.: Max Planck Institute for the Science of Light, Staudtstrasse 2, 91058 Erlangen, Germany.
\beginsupplement
\begin{suppinfo}
\section{Experimental setup and measurement methods}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{setup1.png}
\caption{Schematic of the experiment setup. The 1060~nm laser, SPDC pump and SPDC beams are indicated by solid red, solid green and dashed red lines respectively. See text for details.}
\label{fig:setup}
\end{figure}
As schematically shown in Fig.~\ref{fig:setup}, the experimental setup includes two excitation sources. The fiber laser provides CW 1060~nm radiation (solid red line) for the C2PEF measurements. Its power is controlled with a half-wave plate (HWP1) placed inside a motorized rotational stage and a Glan-Thompson polarizer cube (PBS1). Back reflections from the power controlling optics are blocked from re-entering the fiber output with a Faraday isolator (not shown). Two lenses forming a telescope (L1) are used to adjust the laser beam size. At the polarizing cube (PBS2) the laser beam is overlapped with the SPDC beam (dashed red line).
A CW laser at 532~nm (solid green line) with a maximum output of 5~W is used as the pump. The laser power is controlled with a half-wave plate (HWP2) and a polarizing cube (PBS3). A band-pass filter (F1) and 3 dichroic mirrors are added to remove harmonics generated by the strong pump beam anywhere in the optical system. The pump polarization is controlled with a half-wave plate (HWP3) before entering a 12~mm periodically poled potassium titanyl phosphate (ppKTP) crystal designed for Type-0 SPDC. The pump laser beam is focused with a lens (L2) to full width at half maximum (FWHM) $\approx30$~$\mu$m inside the crystal. The ppKTP crystal is placed inside a custom-made holder, and its temperature is stabilized at 25~$^{\circ}\mathrm{C}$ with a recirculating chiller. The pump polarization and the crystal temperature are adjusted to optimize the SPDC generation, see Fig.~\ref{fig:SPDCspectrum}~a. The generated SPDC light has a spectral FWHM of 128.9~nm centered at 1077.4~nm (Fig.~\ref{fig:SPDCspectrum}~b). The SPDC spectrum is measured with an InGaAs spectrometer used with a fiber-coupled input. The SPDC light is collimated with a lens (L3). The remaining pump is filtered out with a series of dichroic mirrors (M5--M7) and several long-pass interference filters (F2). The total OD at 532~nm is greater than 27. The accumulated group velocity dispersion (GVD) for the SPDC beam that traveled through the setup to the sample position is approximately 1600~$\text{fs}^2$ at the central wavelength. This GVD is compensated via multiple reflections between four chirped mirrors (M8--M11). The GVD is fine-tuned with a 4~mm-thick YAG window. We estimate the short- and long-wavelength edges of the SPDC spectrum have $\approx 150$~fs$^2$ of uncompensated GVD. The polarization of the SPDC is controlled with a half-wave plate (HWP4) before PBS2. The maximum pump power corresponds to 1.3~$\mu$W of output SPDC power at the sample, a value similar to previous E2PA reports~\cite{2021Tabakaev,2021Landes}. While characterizing the entanglement and the photon pair production rate of this source is beyond the scope of this study, the linear losses in the system provide an estimated 17.6\% transmission efficiency for each SPDC photon by the time it reaches the sample, resulting in an estimated 40~nW of SPDC power (3.15\% transmission for pairs).
The 1060~nm laser and SPDC beams are focused with a lens (L4) into the sample. At the focus inside the sample cuvette, the laser and SPDC beam widths are 55.5~$\mu$m and 67~$\mu$m FWHM, respectively. Several silver mirrors (M1--M4 and M12) are used to steer the beams to the sample position. The sample is contained in a 2$\times$10~mm spectroscopic quartz cuvette. Fluorescence is collected perpendicular to the beam propagation direction, in a similar manner to that detailed in Parzuchowski \textit{et al.}~\cite{2021Parzuchowski} A spherical mirror (SM) is used to increase the fluorescence collection. From ray-tracing simulations using the Zemax OpticStudio software, we estimate that the geometrical fluorescence collection efficiency is approximately 4.2$\%$. The fluorescence is detected with a photon counting photomultiplier tube (PMT) cooled to 5~$^{\circ}\mathrm{C}$ using a thermoelectric chiller. Details regarding filters placed in front of the PMT (F3), the sample emission, and the PMT quantum efficiency are provided in Fig.~\ref{fig:overplap}. The digitized counts are measured with a counter. The laser and the SPDC powers are measured with a silicon photodiode power sensor calibrated relative to a germanium photodiode. The SPDC beam is attenuated with ND filters introduced right after the F2 filters. The PMT background counts are measured by blocking the excitation beams before the sample. The fluorescence and the background counts at each power are averaged over 100~seconds total. For the regime when the fluorescence counts are 10 cnt/s or lower this is increased to 1000~sec to reduce the standard deviations. The measurements and the data acquisition are controlled with LabVIEW.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{SPDCspectrum.png}
\caption{Measured SPDC light spectrum. The spectrum as a function of the ppKTP crystal temperature is shown in panel a. The chosen temperatures are shown in the legend. At 25~$^{\circ}\mathrm{C}$ the spectrum is symmetric with the central wavelength $\approx1077.4$~nm and FWHM of 128.9~nm (panel b) determined from a Gaussian fit (solid green curve). }
\label{fig:SPDCspectrum}
\end{figure}
Samples of Rh6G (SigmaAldrich) and LDS798 (Luxottica) were prepared in methanol (Thermo Fisher Scientific, ACS grade) and $\text{CDCl}_{3}$ (Cambridge Isotope Laboratories, 99.8$\%$) respectively.
Rh6G and LDS798 concentrations were 1.1~mM and 0.3~mM, respectively.
Chromophore concentrations were determined spectrophotometrically. We observed about 10\% maximum change in the concentration over the course of the measurements due to solvent evaporation or photobleaching. For the sample concentrations used in our study the linear absorption spectra closely agreed with available literature data. For Rh6G, a previous study found that formation of aggregates at this concentration is negligible.~\cite{1987Penzkofer} We found no evidence of aggregate formation for these sample concentrations.
\section{Rh6G and LDS798 emission, PMT quantum efficiency and transmission spectra of optical filters}
Here we provide the emission spectra of the samples, PMT quantum efficiency and transmission spectra of the filters. These spectra are shown in a single plot for each of the samples in Fig.~\ref{fig:overplap}. The SPDC spectrum (dashed magenta) and the samples' absorption (dashed red) and emission (dashed blue) spectra are not shown to scale (relative shapes). The PMT sensitivity curve (black) and various used Semrock filters (see legends for the color mapping) are also shown. The left vertical axis indicates the PMT's quantum efficiency (in \%) whereas the right vertical axis shows optical density, OD, of the filters used in the experiment. Note for panel b: a FF01-795/188 filter was originally used for the C2PEF measurements of LDS798 and was later replaced with a FF01-709/167 for the E2PEF study.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{filtersandsamplesoverlap.png}
\caption{Overlap of the samples' absorption (dashed red) and emission spectra (dashed blue), filter spectra (green, orange, brown), PMT quantum efficiency (black) and SPDC spectrum (dashed magenta) for Rh6G (a) and LDS798 (b) experiments.}
\label{fig:overplap}
\end{figure}
\section{Calculation of C2PA cross sections}
Here we detail how $\sigma_{\text{C2PA}}$ for Rh6G and LDS798 are calculated using the experimental data. Assuming a Gaussian spatial profile for the excitation CW laser beam we can express the collected fluorescence signal (in cnt/s) excited via C2PA as \cite{2011Makarov,2021Parzuchowski}
\begin{equation}
\label{SIF2}
F_2= 2^{1/2}\times \left(\frac{\log{2}}{\pi}\right)^{3/2} \times \gamma \times \kappa \times \eta \times N_{\text{mol}} \times L \times \sigma_{\text{C2PA}} \times \frac{1}{S} \times {N_\text{ph}}^2
\end{equation}
where
$\kappa$ and $\gamma$ (both dimensionless) are geometrical (setup specific) and optical (setup and sample specific) fluorescence collection efficiencies, respectively; the latter is a function of the emission frequency $\nu_{\text{em}}$; the product of these parameters is the absolute fluorescence collection efficiency (it shows what portion of the emitted photons is registered); $\eta$ is the sample's quantum yield (dimensionless); $N_{\text{mol}}$ (in $\text{m}^{-3}$) is the number density of molecules and $L$ is the sample's thickness; $S$ is the beam area and $N_{\text{ph}}$ is the number of excitation photons incident on the sample per second. For convenience $N_\text{mol}$ can be re-written in terms of molar concentration $C_m$ (in mol~$\text{L}^{-1}$) and Avogadro's number $N_A$ as $N_\text{mol}=N_A \times C_m \times 10^3$. For the CW excitation regime, $N_{\text{ph}}$ is simply the excitation power $P$ (in W) per excitation photon energy $h \nu$ (in J). The values of the experimental parameters are measured or calculated, and the cross section can be calculated from here as
\begin{equation}
\sigma_{\text{C2PA}}= \frac{F_2}{2^{1/2}\times \left(\frac{\log{2}}{\pi}\right)^{3/2} \times \gamma \times \kappa \times \eta \times N_A \times C_m \times 10^3 \times L \times \frac{1}{S}\times \left(\frac{P}{h \nu}\right)^2}
\end{equation}
Now we can insert numbers for our experiment.
The beam sizes measured at the sample position in the vertical and horizontal projections have FWHMs of 55 and 57~$\mu$m. We approximate the beam area $S$ (in $\text{m}^2$) as
\begin{equation}
S\simeq \pi \times \frac{55}{2} \times \frac{57}{2} \times 10^{-12}
\end{equation}
The Rayleigh range of the laser was measured to be 6400 $\mu$m, thus the beam area changes by less than a factor of 2 within the 1~cm cuvette. We obtain the geometrical collection efficiency $\kappa$ using a Zemax simulation similar to those detailed in Parzuchowski \textit{et al.}~\cite{2021Parzuchowski}. In this simulation we assume that the beam is of uniform area (55~$\mu$m FWHM) throughout the cuvette and arrive at $\kappa \approx$~0.042. We did not measure $\kappa$, and may expect a similar deviation of the experimental and simulated values as that found in Parzuchowski \textit{et al.}, where the experimental $\kappa$ was $\approx 23 \%$ lower than the simulated value.
Next we proceed with the sample specific parameters.
First, consider Rh6G.
The value of $\gamma$ is found from the multiplication of the normalized emission profile of Rh6G, absolute transmission curves for used fluorescence filters and the PMT's quantum efficiency at the corresponding wavelength (Fig.~\ref{fig:overplap}). This gives $\gamma \approx 0.075$. The quantum yield $\eta$ value is $\approx 0.9$~\cite{1987Penzkofer}. For the measurement we used a Rh6G sample of concentration $C_m \approx 1.1$~mM.
To calculate an averaged value of $\sigma_{\text{C2PA}}$ we fit the fluorescence signals measured over the range of 0.1-2~mW shown in Fig.~2~a (main text). From here we estimate $\sigma_{\text{C2PA}} \approx 9.9$~GM.
Next we repeat the same for the LDS798 sample. In this case we determine $\gamma \approx 0.018$. The quantum yield for LDS798 in $\text{CDCl}_{3}$ is measured using LDS798 in EtOH as a reference, which has a quantum yield of 0.011~\cite{2009Luchowski}. The determined $\eta$ value is $\approx 0.054$. For the C2PA measurement we used a LDS798 sample of concentration $C_m \approx 0.1$~mM. For averaging we use the quadratic portion of the measured fluorescence signals corresponding to 50-500~mW (Fig.~2~b (main text)). We estimate $\sigma_{\text{C2PA}} \approx 220$~GM.
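The arithmetic above can be summarized in a short sketch (SI units throughout, with $C_m$ in mol/L; interpreting the logarithm in the prefactor as natural, the output in $\text{m}^4\,$s converts to GM via $1~\text{GM}=10^{-58}~\text{m}^4\,$s):
\begin{verbatim}
import numpy as np

H, C, NA = 6.626e-34, 2.998e8, 6.022e23  # Planck, c, Avogadro

def sigma_c2pa(F2, gamma, kappa, eta, Cm, L, S, P, wavelength=1060e-9):
    """Invert the F_2 expression above for the C2PA cross section."""
    N_mol = NA * Cm * 1e3                 # molecules per m^3
    N_ph = P * wavelength / (H * C)       # excitation photons per second
    pref = 2**0.5 * (np.log(2) / np.pi)**1.5
    return F2 * S / (pref * gamma * kappa * eta * N_mol * L * N_ph**2)
\end{verbatim}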
\section{Sample temperature dependence of HBA}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{PowerExponentsVsWave_HBASignal.png}
\caption{Panel a: fluorescence counts (in cnt/s) versus the sample temperature (in K) obtained using 10~mW of the 1060~nm laser power. The red curve shows the exponential dependence of the fluorescence verifying its HBA origin in this regime. Panel b: the same as the panel a except the laser source is replaced with the SPDC. Panel c: the same as the panel b except a 1055~nm short-pass filter was placed in the SPDC beam path.}
\label{fig:ExpVsWavel}
\end{figure}
To further confirm that the fluorescence signal that we measure is HBA, we vary the LDS798 sample temperature. For this experiment LDS798 was encapsulated in a molded PDMS slab. To prepare this sample, a pristine slab of PDMS was cast to fit the cuvette holder, and then soaked in a 2.3~mM solution of LDS798 in chloroform. The temperature of the solid sample was measured using a thermistor cast directly into the PDMS sample. This allowed for simultaneous measurement of temperature and fluorescence rates. We used the 1060~nm laser at 1~mW of power. This power is low enough that the C2PEF contribution should be negligible (see Fig.~2~b (main text)). The temperature was first held at 50~$^{\circ}\mathrm{C}$, then ramped down to 10~$^{\circ}\mathrm{C}$, and finally ramped back up to 50~$^{\circ}\mathrm{C}$ to ensure no sample degradation occurred.
The population of LDS798 vibronic states is expected to follow Boltzmann statistics. Therefore, increasing or decreasing the temperature should result in an increase or decrease of HBA, respectively. The measured fluorescence counts versus temperature for laser excitation (Fig.~\ref{fig:ExpVsWavel}~a) illustrate precisely this behavior. The fit (red curve) for the fluorescence signal $F$ is of the form $F = A \times \exp{(-E/kT)} + C$, where $E$ is the energy of a 1064~nm photon, $k$ is the Boltzmann constant, and $A$ and $C$ are the fitting parameters. The same experiment is then repeated using SPDC excitation (Fig.~\ref{fig:ExpVsWavel}~b) and SPDC excitation filtered with a 1055~nm short-pass filter (Fig.~\ref{fig:ExpVsWavel}~c). In both cases the signal follows a Boltzmann temperature dependence. These results indicate the fluorescence originates from HBA for both laser and SPDC excitation. Alternatively, to confirm that HBA occurs when the sample is excited by SPDC photon pairs, one could measure the fluorescence signal as a function of spectral width at a fixed temperature. In this case, one would expect the signal to decrease as the photon spectrum becomes narrower.
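A sketch of the fit used for these temperature scans ($T$ in kelvin, $F$ in cnt/s; the starting guess is an illustrative choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

KB, H, C = 1.380649e-23, 6.62607015e-34, 2.998e8
E_PH = H * C / 1064e-9  # energy of a 1064 nm photon, in J

def boltzmann(T, A, offset):
    """F = A*exp(-E/kT) + C with the photon energy E held fixed."""
    return A * np.exp(-E_PH / (KB * T)) + offset

# T, F = arrays from the measured temperature scan
# A0 = F.max() / np.exp(-E_PH / (KB * T.max()))  # rough starting guess
# popt, pcov = curve_fit(boltzmann, T, F, p0=(A0, 0.0))
\end{verbatim}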
\section{Simulations of HBA signals}
We calculate the fluorescence signals expected from HBA with laser and SPDC excitation.
The collected one-photon excited fluorescence signal (in cnt/s), using excitation photons of frequency $\nu$ (in Hz) spread out over the range of $\Delta\nu$ (in Hz), can be written as
\begin{equation}
\label{SIF1}
F_1=\gamma \times \kappa \times \eta \times N_{\text{mol}} \times L\times \int\displaylimits_{\Delta \nu} \sigma_1 (\nu) \times \mathcal{F}(\nu) d\nu
\end{equation}
where $\sigma_1$ is the 1PA cross section (in $\text{cm}^2$), $\mathcal{F}(\nu)$ is the flux spectral density (in photons per Hz per s) and the remaining notation is the same as in Eq.~\ref{SIF2}. For the CW case
\begin{equation}
\mathcal{F} (\nu)=\frac{\mathcal{P} (\nu)}{h\nu}
\end{equation}
where $\mathcal{P}(\nu)$ is a power spectral density (in W per Hz), which can be calculated from the measured total power $P$ (in W) and a dimensionless spectral power density function, $f(\nu)$, obtained from the simulated SPDC spectrum (Fig.~\ref{fig:mirroredSpectrumHBA}~a).
For HBA the value of $\sigma_1$ can be written as~\cite{2003Drobizhev,2005Kachynski}
\begin{equation}
\label{SIsigmaHBA}
\sigma_1=\sigma_{\text{max}} \times \exp{\left[ \frac{-h (\nu_{\text{max}}-\nu)}{kT}\right]} \times \text{FC}(\nu)
\end{equation}
where $\sigma_{\text{max}}$ is $\sigma_1$ at the frequency of the ``0--0" transition $\nu_{\text{max}}$; $k$ is Boltzmann's constant and $T$ is the sample's temperature. FC is a normalized Franck-Condon factor (see below). The value of $\sigma_{\text{max}}$ (in $\text{cm}^2$) is calculated from the extinction coefficient $\epsilon_{\text{max}}$ (in $\text{M}^{-1} \text{cm}^{-1}$) at $\nu_{\text{max}}$ as $\sigma_{\text{max}}= \epsilon_{\text{max}} \times 3.82 \times 10^{-21}$~\cite{2006Lakowicz}.
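Eq.~\ref{SIsigmaHBA} and this unit conversion are straightforward to evaluate numerically. A minimal Python sketch follows (illustrative; the FC factor is passed in as a number read off Fig.~\ref{fig:FrankCondon}~b):
\begin{verbatim}
# Illustrative sketch: HBA cross section of Eq. (SIsigmaHBA), in cm^2.
import numpy as np

h = 6.62607015e-34   # J s
kB = 1.380649e-23    # J/K
c0 = 2.99792458e8    # m/s

def sigma_hba(nu, nu_max, eps_max, fc, T):
    sigma_max = eps_max * 3.82e-21          # cm^2 from M^-1 cm^-1
    return sigma_max * np.exp(-h * (nu_max - nu) / (kB * T)) * fc

# Example with the LDS798 values quoted below: nu_max at ~672 nm,
# eps_max ~ 1.54e4 M^-1 cm^-1, excitation at 1060 nm, room temperature.
print(sigma_hba(c0 / 1060e-9, c0 / 672e-9, 1.54e4, 1.0, 293.0))
\end{verbatim}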
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{mirroredSpectrumAndAbsorption.png}
\caption{Panel a: The measured SPDC spectrum (blue dashed) overlaid with the mirrored spectrum (black solid) as a function of wavelength. The mirrored spectrum is constructed by reflecting the red side of the spectrum about the central SPDC wavelength in order to obtain an approximation for the tail on the blue side. Panel b: The reflected portion of the SPDC spectrum (black) overlaid on the red tail of the LDS798 absorption spectrum. The reflected spectrum is cut off around 850~nm due to a long-pass filter in place before the sample.}
\label{fig:mirroredSpectrumHBA}
\end{figure}
Here $\gamma \approx 0.082$ for Rh6G and $\gamma \approx 0.025$ for LDS798; the latter is slightly larger for this measurement than for the C2PEF measurements described above because a different filter was employed (see Fig.~\ref{fig:overplap}~b).
The beam size of the SPDC beam is measured to be 65~$\mu$m FWHM at its focus in the sample, and the Rayleigh range is 920~$\mu$m. This beam is significantly more divergent than the laser beam, leading to a different shape of the excitation volume within the cuvette. Since the collection optics collect fluorescence most efficiently at the center of the cuvette (as shown in Parzuchowski \textit{et al.}~\cite{2021Parzuchowski}) where the excitation volumes look much more similar, we assume that $\kappa \approx 0.042$, which was simulated for laser excitation, can be used here as well.
For Rh6G, $\eta \approx 0.9$, and the measurement used a sample concentration of $C_m \approx 1.1$~mM and a path length of 1~cm. The simulated HBA fluorescence signal from 1~mW of classical excitation is $8.9\times 10^{-3}$~cnt/s, which is consistent with the lack of linear dependence in Fig.~2~a. Using Eq.~4, we estimated that the signal due to HBA with 1.3~$\mu$W SPDC power is 2 orders of magnitude below background level. Therefore, 130~$\mu$W would be necessary to generate a measurable signal with SPDC excitation. Based on our calculations, a sample temperature above $80^{\circ}$C would be necessary to observe a linear regime with classical excitation or, similarly, to observe a signal above background with SPDC excitation.
For LDS798 $\eta \approx$~0.054. For the measurement we used a LDS798 sample of concentration $C_m \approx 0.3$~mM. As before we approximate the value $L$ to be equal to the cuvette length (1~cm).
The FC($\nu$) value can be estimated from the ratio of the sample absorption spectrum at $2\nu_{\text{max}}-\nu$ and $\nu_{\text{max}}$~\cite{2003Drobizhev}. The value of $\nu_{\text{max}}$ can be determined from the intersection of the properly normalized emission and absorption profiles~\cite{2006Lakowicz} and is found to correspond to approximately 672~nm, Fig.~\ref{fig:FrankCondon}~a. The corresponding extinction coefficient is $\epsilon_{\text{max}} \approx 1.54\times 10^4$~$\text{M}^{-1}\times~\text{cm}^{-1}$.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{LDS798AbsEm_FranckCondon.png}
\caption{Determination of the Franck-Condon parameters ratio (FC). The intersection of the normalized emission and absorption spectral profiles of LDS798 is used to find the $\nu_{\text{max}}$ position (a) and then to plot the FC value as a function of frequency $\nu$ (in Hz) according to Drobizhev \textit{et al.}~\cite{2003Drobizhev} (b).}
\label{fig:FrankCondon}
\end{figure}
For LDS798 we estimate the FC value falls in the range between 0.41 and 2.65 in the $\Delta \nu \approx (200-361.4)$~$\text{THz}$ range (Fig.~\ref{fig:FrankCondon}~b).
When calculating the expected fluorescence from HBA excited by SPDC, a numerical integration has to be done due to the wide bandwidth which results in significantly different contributions from the red and blue sides of the spectrum. The calculated fluorescence signal is found to be particularly sensitive to the low amplitude tail of the SPDC spectrum on the blue side of the spectrum which is difficult to measure accurately due to the limited sensitivity of the spectrometer (based on a thermoelectrically-cooled linear InGaAs array) in this spectral region. To account for this spectral content, we employ a model SPDC spectrum in which the red side of the measured spectrum is reflected about the central frequency. The spectrum of this type of degenerate SPDC source is theoretically predicted to be symmetric~\cite{2013Lerch,2021Szoke}. Then the fluorescence signal $F_{\text{HBA}}$ is given by
\begin{equation}
\label{SIF1tot}
\begin{split}
F_{\text{HBA}} = \gamma \times \kappa \times \eta \times C_m \times 10^3 \times L \times \epsilon_{\text{max}} \times \ln 10 \times \hspace{10em} \\
\qquad \times \int\displaylimits_{\Delta \nu}
\left( \exp{\left[ \frac{-h (\nu_{\text{max}}-\nu)}{kT}\right]} \times \text{FC}(\nu) \right) \times \mathcal{F}(\nu) ~d\nu
\end{split}
\end{equation}
As already mentioned, the SPDC spectral density, $\mathcal{F}(\nu)$ (in photons/sec/nm), was obtained by measuring the red side of the spectrum and reflecting it about the central wavelength (Fig.~\ref{fig:mirroredSpectrumHBA}~a). The region from 1550~nm to 1600~nm was used to ensure proper background subtraction of the SPDC spectrum, which corresponds to a spectral cutoff at 850~nm on the reflected blue side. The calculated fluorescence is broadly consistent with the measured values for SPDC-induced HBA (Fig.~3 (main text)), though nearly 24-fold lower (Fig.~\ref{fig:MeasuredVsCalcHBA}).
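To make the procedure concrete, the following Python sketch outlines both the mirroring step and the trapezoidal evaluation of Eq.~\ref{SIF1tot}. It is illustrative only: the array and function names are our own, the default collection parameters are the LDS798 values quoted above, and no attempt is made to reproduce the actual measured spectrum.
\begin{verbatim}
# Illustrative sketch: mirrored SPDC spectrum and numerical evaluation
# of the HBA signal integral (Eq. SIF1tot).  All names are assumptions.
import numpy as np
from scipy.integrate import trapezoid

h, kB, c0 = 6.62607015e-34, 1.380649e-23, 2.99792458e8

def mirror_spectrum(wl_nm, flux, wl0):
    """Reflect the red side (wl > wl0) about the central wavelength wl0."""
    red = wl_nm > wl0
    out_wl = np.concatenate([2 * wl0 - wl_nm[red], wl_nm[red]])
    out_flux = np.concatenate([flux[red], flux[red]])
    order = np.argsort(out_wl)
    return out_wl[order], out_flux[order]

def f_hba(wl_nm, flux_per_nm, fc_of_nu, nu_max, eps_max, T,
          gamma=0.025, kappa=0.042, eta=0.054, Cm=0.3e-3, L=1.0):
    """Trapezoidal evaluation of the collected HBA signal (cnt/s)."""
    nu = c0 / (wl_nm * 1e-9)
    boltz = np.exp(-h * (nu_max - nu) / (kB * T))
    pref = gamma * kappa * eta * Cm * 1e3 * L * eps_max * np.log(10)
    return pref * trapezoid(boltz * fc_of_nu(nu) * flux_per_nm, wl_nm)
\end{verbatim}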
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{MeasuredVsCalcHBA.png}
\caption{Calculated (dashed) and measured (with solid fit line) fluorescence counts (in cnt/s) versus laser power (in~$\mu$W) for both the 1060nm laser (black) and SPDC source (blue). The calculated fluorescence signal is 24-fold lower than the experimental value for the SPDC source and 3-fold lower for the classical source.}
\label{fig:MeasuredVsCalcHBA}
\end{figure}
The calculated HBA cross section is subject to uncertainty because the method used to determine $\nu_{\text{max}}$ is based on the mirror-image rule~\cite{2006Lakowicz}, and without investigating the electronic structure of LDS798 it is not clear if this holds.
At the same time, the method used to model the blue tail of the SPDC spectrum relies on the far-red tail, a spectral region which was lossy in our setup due to filters used to block the pump beam.
Because the calculation is highly sensitive to changes in the values of these parameters, it is believed that these issues are the main cause of the discrepancy between measured and calculated rates. For example, a 15~nm blue shift of the long-pass filter used would be enough to increase the calculated signal by an order of magnitude, bringing it to within 3-fold of the measured value.
It was also assumed that the collection efficiency for both the classical and SPDC excited fluorescence were identical, though in reality this is most likely not the case due to small mismatches in beam shape and divergence.
Next, the same calculation is repeated to simulate HBA with the 1060~nm laser source assuming a Gaussian spectrum with a 1~nm FWHM.
For consistency, the same spectral integration is used for the classical laser, though it is unlikely to be necessary given the narrow bandwidth.
This resulted in a calculated fluorescence signal that was 3-fold lower than the measured value (Fig.~\ref{fig:MeasuredVsCalcHBA}).
\section{Measured HBA cross section}
We compared the derived HBA cross section for LDS798 from our CW laser-based experiment (Fig.~2~b (main text)) to those obtained over a range of wavelengths using an independent experimental setup. These cross sections are plotted in Fig.~\ref{fig:1PAcrossSection}.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{1PAcrosssection_v2.png}
\caption{Experimental HBA cross section obtained using a tunable femtosecond laser (blue circles and black triangles), a spectrophotometer (green stars) and the 1060~nm CW laser (red square). The results lie along a linear fit (dashed line) with a slope in agreement with Eq.~\ref{SIsigmaHBA}.}
\label{fig:1PAcrossSection}
\end{figure}
For the excitation wavelengths of 680 to 820~nm (blue circles) and 820 to 900~nm (black triangles), a setup (described previously~\cite{2020Drobizhev}) based on a tunable femtosecond laser coupled to a photon counting spectrofluorimeter was used. For the 680 to 820~nm range, the right channel of the spectrofluorimeter which contains a monochromator was used to collect fluorescence from the sample at 760~nm. The excitation power was varied from 0.4 to 40~mW and the resulting fluorescence signal was confirmed to depend linearly on excitation power. To derive a cross section from the signals, first, the ratio of the fluorescence signals at two independent wavelengths for the same excitation power is used to find a ratio of the HBA cross sections at the two wavelengths (see Eq.~1 (main text)). Next, the absolute value of the cross section is scaled in the 750 to 780~nm range to match the values reported in literature~\cite{2011Makarov}. The red tail of the absorption spectrum was also measured in a spectrophotometer (green stars) and scaled to match literature values.
For the 820 to 900~nm range, the left channel of the spectrofluorimeter was used to collect the integrated fluorescence signal through a 770~nm shortpass filter. Fluorescence was measured as a function of peak photon flux in the range $2\times10^{26}$ to $4\times10^{27}$~photons~cm$^{-2}$~s$^{-1}$. Similar to our results shown in Fig.~2~b (main text), as the photon flux is increased the signal transitions from depending linearly to quadratically on photon flux. At each wavelength, the signal as a function of flux was fit to Eq.~1 (main text). The coefficient from the linear term was used again to derive an HBA cross section. First, the relative strength was found using ratios of the coefficients at independent wavelengths as described above. Next, the absolute value is scaled at 820~nm to match the monochromator results.
In the region of 770 to 876~nm, the cross sections obtained from these two methods are fit (black dashed line) to a linear function. The slope of the fit is found to equal $\mathrm{log}(e)hc/kT$ for $T\approx290$~K, which is consistent with the expected Boltzmann exponential dependence of the cross section (see Eq.~\ref{SIsigmaHBA}). The cross section also contains another frequency dependent term, the ratio of Franck-Condon factors (Fig.~\ref{fig:FrankCondon}~b), but here we assume these factors do not change significantly compared to the change imparted by the Boltzmann factor.
For our 1060~nm CW laser measurement (red square), the signal from Fig.~2~b (main text) is fit to Eq.~1 (main text). The ratio of the coefficients in front of the linear and quadratic components is used to derive the HBA cross section, analogous to that shown in Eq.~3 of Drobizhev \textit{et al.}~\cite{2003Drobizhev}, but for a CW beam with a Gaussian spatial profile. We used our derived value of $\sigma_{\text{C2PA}} = 220$~GM. Our derived HBA cross section lies along the fit to the HBA cross sections measured at shorter wavelengths.
\section{Introduction}
\label{sec:intro}
In exponential growth the population grows at a rate proportional to its current size. This is unrealistic, since in reality growth will not exceed some maximum, called its carrying capacity. The logistic equation \cite[Chapter 6]{BACA11} deals with this problem by ensuring that the growth rate of the population decreases once the population reaches its carrying capacity \cite{PANI14}. Statistical modelling of the logistic equation's growth and decay is accomplished with the {\em logistic distribution} \cite{JOHN95c} \cite[Chapter 22]{KRIS15}, noting that the tails of the logistic distribution are heavier than those of the ubiquitous normal distribution. The normal and logistic distributions are both symmetric, however, real data often exhibits skewness \cite{DASG10}, which has given rise to extensions of the normal distribution to accommodate for skewness, as in the skew normal \cite{AZZA14} and epsilon skew normal \cite{MUDH00} distributions. Subsequently, skew logistic distributions were also devised, as in \cite{NADA09a,SAST16}.
\smallskip
Epidemics, such as COVID-19, are traditionally modelled by compartmental models such as the SIR (Susceptible-Infected-Removed) model and its extension the SEIR (Susceptible-Exposed-Infected-Removed) model, which estimate the trajectory of an epidemic \cite{LI18}. These models typically rely on assumptions on how the disease is transmitted and progresses \cite{IOAN22}, and are routinely used to understand the consequences of policies such as mask wearing and social distancing \cite{DAVI20}. Time series models \cite{HARV21}, on the other hand, employ historical data to make forecasts about the future, are generally simpler than compartmental models, and are able to make forecasts on, for example, number of cases, hospitalisations and deaths. The SIR model can be interpreted as a logistic growth model \cite{DELA20,POST20}. However, as the data is inherently skewed, a skewed logistic statistical model would be a natural choice, although as such it does not rely on biological assumptions in its forecasts \cite{DYE20}.
\smallskip
Herein we present a novel yet simple (one may argue the simplest) three-parameter skewed extension to the logistic distribution to allow for asymmetry; c.f. \cite{DYE20}. Nevertheless, if instead of our extension we deploy one of the other skew logistic distributions (such as the one described in \cite{NADA09a}), the results would no doubt be comparable to the results we obtain herein; we, however, pursue our simpler extension, detailing its statistical properties.
\smallskip
In the context of analysing epidemics the logistic distribution is normally preferred, as it is a natural distribution to use in modelling population growth and decay. However, we still briefly mention a comparison of the results we obtain in modelling COVID-19 waves with the skew logistic distribution, to one which, instead, employs a skew normal distribution (more specifically we choose the, flexible, epsilon skew normal distribution \cite{MUDH00}). The result of this comparison implies that utilising the epsilon skew normal distribution leads, overall, to results which are comparable to those when utilising the skew logistic distribution. However, in practice, it is still preferable to make use of the skew logistic distribution as it is the natural model to deploy in this context \cite{PELIN20}, since, on the whole, it is more consistent with the data as its tails are heavier than those of a skew normal distribution.
\smallskip
Epidemics are said to come in ``waves''. The precise definition of a wave is somewhat elusive \cite{ZHAN21}, but it is generally accepted that, assuming we have a time series of the number of, say, daily hospitalisations, a wave will span a period from one valley (minimum) in the time series to another valley, with a peak (maximum) in between them; there is no strict requirement that waves do not overlap, although here, for simplicity, we will not consider any such overlap; see \cite{ZHAN21} for an attempt to give an operational definition of the concept of an epidemic wave. In order to combine waves we make use of the concept of {\em bi-logistic growth} \cite{MEYE94,FENN13}, or more generally multi-logistic growth, which allows us to sum two or more instances of logistic growth when the time series spans more than a single wave.
\smallskip
To fit the skew logistic distribution to the time series data we employ maximum likelihood, and to evaluate the goodness-of-fit we make use of the recently formulated {\em empirical survival Jensen-Shannon divergence} (${\cal E}SJS$) \cite{LEVE18,LEVE21a} and the well-established {\em Kolmogorov-Smirnov two-sample test statistic} ($KS2$) \cite[Section 6.3]{GIBB21}. The ${\cal E}SJS$ is an information-theoretic goodness-of-fit measure of a fitted parametric continuous distribution, which overcomes the inadequacy of the {\em coefficient of determination}, $R^2$, as a goodness-of-fit measure for nonlinear models \cite{SPIE10}. The $KS2$ statistic also satisfies this criterion regarding $R^2$; however, we observe that the 95\% bootstrap confidence intervals \cite{EFRO93} we obtain for the ${\cal E}SJS$ are narrower than those for the $KS2$, suggesting that the ${\cal E}SJS$ is more powerful \cite{COLE21} than the $KS2$. Another well-known limitation of the $KS2$ statistic is that it is less sensitive to discrepancies at the tails of the distribution than the ${\cal E}SJS$ statistic is, in the sense that, as opposed to the ${\cal E}SJS$, it is ``local'', i.e. its value is determined by a single point \cite{BEND15}.
\smallskip
The rest of the paper is organised as follows.
In Section~\ref{sec:sl}, we introduce a skew logistic distribution, which is a simple extension of the standard, symmetric, logistic distribution obtained by adding to it a single skew parameter, and derive some of its properties.
In Section~\ref{sec:ml}, we formulate the solution to the maximum likelihood estimation of the parameters of the skew logistic distribution. In Section~\ref{sec:bilog}, we make use of an extension of the skew logistic distribution to the bi-skew logistic distribution to model a time series of COVID-19 data items having more than a single wave. In Section~\ref{sec:data} we provide an analysis of daily COVID-19 deaths in the UK from 30/01/20 to 30/07/21, assuming the skew logistic distribution as an underlying model of the data. The evaluation of the goodness-of-fit of the skew logistic distribution to the data makes use of the recently formulated ${\cal E}SJS$, and compares the results to those when employing the $KS2$ instead. We observe that the same technique, which we applied to the analysis of COVID-19 deaths, can be used to model new cases and hospitalisations. Finally, in Section~\ref{sec:conc}, we present our concluding remarks.
It is worth noting that in the more general setting of information modelling, being able to detect epidemic waves may help supply chains in planning increased resistance to such adverse events \cite{SEME22}. We note that all computations were carried out using the Matlab software package.
\section{A skew logistic distribution}
\label{sec:sl}
Here we introduce a novel {\em skew logistic distribution}, which extends, in a straightforward manner, the standard two-parameter logistic distribution \cite{JOHN95c} \cite[Chapter 22]{KRIS15} by adding to it a skew parameter. The rationale for introducing the distribution is that, apart from its simple formulation, we believe its maximum likelihood solution, presented below, is also simpler than those derived for other skew logistic distributions, such as the ones investigated in \cite{NADA09a,SAST16}. This point provides further justification for our skew logistic distribution when introducing the bi-skew logistic distribution in Section~\ref{sec:bilog}.
\smallskip
Now, let $\mu$ be a location parameter, $s$ be a scale parameter and $\lambda$ be a skew parameter, where $s > 0$ and $0 < \lambda < 2$. Then, the probability density function of the skew logistic distribution at a value $x$ of the random variable $X$, denoted as $f(x;\lambda,\mu,s)$, is given by
\begin{equation}\label{eq:pdf}
f(x;\lambda,\mu,s) = \frac{\kappa_\lambda \ \exp\left(- \lambda \ \frac{x-\mu}{s} \right)}{s \left( 1+ \exp\left(- \frac{x-\mu}{s} \right) \right)^2},
\end{equation}
noting that for clarity we write $x-\mu$ above as a shorthand for $\left( x-\mu \right)$, and $\kappa_\lambda$ is a normalisation constant, which depends on $\lambda$.
\smallskip
When $\lambda = 1$, the~skew logistic distribution reduces to the standard logistic distribution as in~\cite{JOHN95c} and \cite{KRIS15} (Chapter 22), which is symmetric. On~the other hand, when $0 < \lambda < 1$, the~skew logistic distribution is positively skewed, and~when $1 < \lambda < 2$, it is negatively~skewed. So, when $\lambda = 1$, $\kappa_\lambda = 1$, and, for example, when $\lambda = 0.5$ or $1.5$, $\kappa_\lambda = 2/\pi$. For simplicity, from now on, unless necessary, we will omit to mention the constant $\kappa_\lambda$ as it will not affect any of the results.
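\smallskip

For concreteness, $\kappa_\lambda$ and the density in (\ref{eq:pdf}) are easy to evaluate numerically. The following sketch is in Python purely for illustration (the computations in this paper were carried out in Matlab), and the function names are our own:
\begin{verbatim}
# Illustrative sketch: the skew logistic density of Eq. (pdf), with the
# normalisation constant kappa_lambda obtained by numerical integration.
import numpy as np
from scipy.integrate import quad

def _core(x, lam):
    # numerically stable exp(-lam*x) / (1 + exp(-x))^2
    return np.exp(-lam * x - 2.0 * np.logaddexp(0.0, -x))

def kappa(lam):
    val, _ = quad(lambda x: _core(x, lam), -np.inf, np.inf)
    return 1.0 / val

def skew_logistic_pdf(x, lam, mu=0.0, s=1.0):
    return kappa(lam) * _core((x - mu) / s, lam) / s

print(kappa(1.0))  # 1.0: the standard (symmetric) logistic case
print(kappa(0.5))  # ~0.6366 = 2/pi, matching the value quoted above
\end{verbatim}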
\smallskip
The {\em skewness} of a random variable $X$ \cite{DASG10,KRIS15}, is defined as
\begin{displaymath}
{\rm E}\left[ \left( \frac{X-\mu}{s} \right)^3 \right],
\end{displaymath}
and thus, assuming for simplicity of exposition (due to the linearity of expectations \cite{DASG10}) that $\mu=0$ and $s=1$, the skewness of the skew logistic distribution, denoted by $\gamma(\lambda)$, is given by
\begin{equation}\label{eq:skew1}
\gamma(\lambda) = \int_{-\infty}^\infty x^3 \ \frac{\exp\left(- \lambda x \right)}{\left( 1+ \exp\left(- x \right) \right)^2} \ dx.
\end{equation}
First, we will show that letting $\lambda_1 = \lambda$, with $0 < \lambda_1 < 1$, we have $\gamma(\lambda_1) > 0$, that is $f(x;\lambda_1,0,1)$ is positively skewed. We can split the integral in (\ref{eq:skew1}) into two integrals for the negative part from $-\infty$ to $0$ and the positive part from $0$ to $\infty$, noting that when $x=0$, the expression to the right of the integral is equal to $0$. Then, on setting $y=-x$ for the negative part, and $y=x$ for the positive part, the result follows, as by algebraic manipulation it can be shown that
\begin{equation}\label{eq:skew2}
\frac{\exp(- \lambda_1 y)}{\left( 1 + \exp(-y) \right)^2} > \frac{\exp(\lambda_1 y)}{\left( 1 + \exp(y) \right)^2},
\end{equation}
implying that $\gamma(\lambda_1) > 0$ as required.
\smallskip
Second, in a similar fashion to above, on letting $\lambda_2 = \lambda_1 +1 = \lambda$, with $1 < \lambda_2 < 2$, it follows that $\gamma(\lambda_2) < 0$, that is $f(x;\lambda_2,0,1)$ is negatively skewed. In particular, by algebraic manipulation we have that
\begin{equation}\label{eq:skew3}
\frac{\exp \left(- \lambda_2 y \right)}{\left( 1 + \exp(-y) \right)^2} < \frac{\exp \left( \lambda_2 y \right)}{\left( 1 + \exp(y) \right)^2},
\end{equation}
implying that $\gamma(\lambda_2) < 0$ as required.
\medskip
The cumulative distribution function of the skew logistic distribution at a value $x$ of the random variable $X$ is obtained by integrating $f(x;\lambda,\mu,s)$, to obtain $F(x;\lambda,\mu,s)$, which is given by
\begin{align}\label{eq:cdf}
F(x;\lambda,\mu,s) = \kappa_\lambda \ \exp\left( -(\lambda-2) \ \frac{x-\mu}{s} \right) & \left( \frac{1}{\left(1+\exp \left(\frac{x-\mu}{s}\right)\right)} \right. - \nonumber \\
& \left. \ \ \frac{\lambda-1}{\lambda-2} \ {}_2F_1\left(1,2-\lambda;3-\lambda;-\exp\left(\frac{x-\mu}{s}\right)\right)\right),
\end{align}
where ${}_2F_1(a,b;c;z)$ is the {\em Gauss hypergeometric function} \cite[Chapter 15]{ABRA72}; we assume $a, b$ and $c$ are positive real numbers, and that $z$ is a real number extended outside the unit disk by analytic continuation \cite{PEAR17}.
\smallskip
The hypergeometric function has the following integral representation \cite[Chapter 15]{ABRA72},
\begin{equation}\label{eq:hyperg1}
{}_2F_1(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(b) \Gamma(c-b)} \int_0^1 \frac{t^{b-1} (1-t)^{c-b-1}}{(1 - tz)^a} \ dt,
\end{equation}
where $c > b$. Now, assuming without loss of generality that $\mu=0$ and $s=1$, we have that
\begin{equation}\label{eq:hyperg2}
{}_2F_1\left(1,2-\lambda;3-\lambda;-\exp (x) \right) = \left( 2-\lambda \right) \int_0^1 \frac{t^{1-\lambda}}{\left( 1 + t \ \exp(x) \right)} \ dt,
\end{equation}
where $x$ is a real number.
\smallskip
Therefore, from (\ref{eq:hyperg2}) it can be verified that: (i) ${}_2F_1 (1,2-\lambda;3-\lambda;-\exp (x))$ is monotonically decreasing with $x$, (ii) as $x$ tends to plus infinity, ${}_2F_1 (1,2-\lambda;3-\lambda;-\exp (x))$ tends to $0$, and (iii) as $x$ tends to minus infinity, ${}_2F_1 (1,2-\lambda;3-\lambda;-\exp (x))$ tends to $1$, since
\begin{displaymath}
\left( 2 - \lambda \right) \int_0^1 t^{1- \lambda} \ dt = 1.
\end{displaymath}
\section{Maximum likelihood estimation for the skew logistic distribution}
\label{sec:ml}
We now formulate the maximum likelihood estimation \cite{WARD18} of the parameters $\mu, s$ and $\lambda$ of the skew logistic distribution. Let $\{x_1, x_2, \ldots, x_n\}$ be a random sample of $n$ values from the density function of the skew logistic distribution in (\ref{eq:pdf}). Then, the log likelihood function of its three parameters is given by
\begin{equation}\label{eq:mle}
\ln L(\lambda,\mu,s) = - n \ln(s) - \frac{\lambda}{s} \sum_{i=1}^n (x_i - \mu) - 2 \sum_{i=1}^n \ln \left( 1+ \exp \left( - \frac{x_i - \mu}{s} \right) \right).
\end{equation}
\smallskip
In order to solve the log likelihood function, we first partially differentiate $\ln L(\lambda,\mu,s)$ as follows:
\begin{align}\label{eq:mle-eqs}
\frac{\partial \ln L(\lambda,\mu,s)}{\partial \lambda} &= \sum_{i=1}^n \frac{\mu - x_i}{s}, \nonumber \\
\frac{\partial \ln L(\lambda,\mu,s)}{\partial \mu} &= \frac{\lambda n}{s} - \frac{2}{s} \sum_{i=1}^n \frac{1}{1 + \exp \left( \frac{x_i - \mu}{s} \right)} \ {\rm and} \nonumber \\
\frac{\partial \ln L(\lambda,\mu,s)}{\partial s} &= - \frac{n}{s} + \frac{1}{s^2} \sum_{i=1}^n \left( x_i - \mu \right)
\left( \lambda - \frac{2}{1 + \exp \left( \frac{x_i - \mu}{s} \right)} \right).
\end{align}
\smallskip
It is therefore implied that the maximum likelihood estimators are the solutions to the following three equations:
\begin{align}\label{eq:mle-sol}
\mu &= \frac{\sum_{i=1}^n x_i}{n}, \nonumber \\
\lambda &= \frac{2}{n} \sum_{i=1}^n \frac{1}{1 + \exp \left( \frac{x_i - \mu}{s} \right)} \ {\rm and} \nonumber \\
s &= \frac{1}{n} \sum_{i=1}^n \left( x_i - \mu \right) \left( \lambda - \frac{2}{1 + \exp \left( \frac{x_i - \mu}{s} \right)} \right),
\end{align}
which can be solved numerically.
\smallskip
We observe that the equation for $\mu$ in (\ref{eq:mle-sol}) does not contribute to solving the maximum likelihood, since the location parameter $\mu$ is equal to the mean only when $\lambda=1$. We thus look at an alternative equation for $\mu$, which involves the mode of the skew logistic distribution.
\smallskip
To derive the mode of the skew logistic distribution we solve the equation,
\begin{equation}\label{eq:mode1}
\frac{\partial}{\partial x} \frac{\exp\left(- \lambda \ \frac{x-\mu}{s}\right)}{s \left( 1+ \exp\left(- \frac{x-\mu}{s} \right) \right)^2} = 0,
\end{equation}
to obtain
\begin{equation}\label{eq:mode2}
\mu = x - s \ \log \left( - \frac{\lambda-2}{\lambda} \right).
\end{equation}
\smallskip
Thus, motivated by (\ref{eq:mode2}) we replace the equation for $\mu$ in (\ref{eq:mle-sol}) with
\begin{equation}\label{eq:mode3}
\mu = m - s \ \log \left( - \frac{\lambda-2}{\lambda} \right),
\end{equation}
where $m$ is the mode of the random sample.
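\smallskip

The resulting system can be solved, for example, by simple fixed-point iteration on the equations for $\lambda$ and $s$ in (\ref{eq:mle-sol}) together with (\ref{eq:mode3}). The sketch below is in Python for illustration (the paper's computations used Matlab); the histogram-based mode estimate and the function names are our own assumptions, and convergence is not guaranteed in general:
\begin{verbatim}
# Illustrative sketch: fixed-point iteration for the modified ML system,
# with the mode-based equation (eq:mode3) replacing the mean equation.
import numpy as np

def fit_skew_logistic(x, n_iter=500):
    x = np.asarray(x, dtype=float)
    # crude mode estimate from a histogram (an assumption of this sketch)
    hist, edges = np.histogram(x, bins='auto')
    i = np.argmax(hist)
    m = 0.5 * (edges[i] + edges[i + 1])
    mu, s, lam = np.median(x), np.std(x), 1.0
    for _ in range(n_iter):
        z = np.clip((x - mu) / s, -500.0, 500.0)
        g = 1.0 / (1.0 + np.exp(z))              # 1/(1 + exp((x-mu)/s))
        lam = np.clip(2.0 * np.mean(g), 1e-6, 2.0 - 1e-6)
        s = np.mean((x - mu) * (lam - 2.0 * g))
        mu = m - s * np.log((2.0 - lam) / lam)   # eq:mode3
    return lam, mu, s
\end{verbatim}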
\section{The bi-skew logistic distribution for modelling epidemic waves}
\label{sec:bilog}
We start by defining the bi-skew logistic distribution, which will enable us to model more than one wave of infections at a time. We then discuss how we partition the data into single waves, in a way that we can apply the maximum likelihood from the previous section to the data in a consistent manner.
\smallskip
We present the {\em bi-skew logistic distribution}, which is described by the sum,
\begin{displaymath}
f(x;\lambda_1,\mu_1,s_1) + f(x;\lambda_2,\mu_2,s_2),
\end{displaymath}
of two skew logistic distributions. It is given in full as
\begin{equation}\label{eq:bi-skew}
\frac{\exp \left( -\lambda_1 \frac{x - \mu_1}{s_1} \right)}{s_1 \left( 1+ \exp \left(- \frac{x - \mu_1}{s_1} \right) \right)^2} + \frac{\exp \left( -\lambda_2 \frac{x - \mu_2}{s_2} \right)}{s_2 \left( 1+ \exp \left(- \frac{x - \mu_2}{s_2} \right) \right)^2},
\end{equation}
which characterises two distinct phases of logistic growth (c.f. \cite{MEYE94,SHEE04}). We note that (\ref{eq:bi-skew}) can be readily extended to the general case of the sum of multiple skew logistic distributions; however, for simplicity we only present the formula for the bi-skew logistic case. Thus, while the (single) skew logistic distribution can only model one wave of infected cases (or deaths, or hospitalisations), the bi-skew logistic distribution can model two waves of infections, and in the general case any number of waves.
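As an unnormalised illustration, reusing the \texttt{skew\_logistic\_pdf} sketch above (the $0.5$ mixture scaling discussed below is deliberately omitted):
\begin{verbatim}
# Illustrative sketch: the bi-skew logistic density as the sum of two
# skew logistic phases (unnormalised, as in Eq. (bi-skew)).
def bi_skew_logistic_pdf(x, phase1, phase2):
    (lam1, mu1, s1), (lam2, mu2, s2) = phase1, phase2
    return (skew_logistic_pdf(x, lam1, mu1, s1)
            + skew_logistic_pdf(x, lam2, mu2, s2))
\end{verbatim}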
\smallskip
In the presence of two waves the maximum likelihood solution to (\ref{eq:bi-skew}) would give us access to the necessary model parameters, and solving the general case in presence of multiple waves, when the sum in (\ref{eq:bi-skew}) may have two or more skew logistic distributions, is evidently even more challenging. Thus we simplify the solution for the multiple wave case, and concentrate on an approximation assuming a sequential time series when one wave strictly follows the next. More specifically, we assume that each wave is modelled by a single skewed logistic distribution describing the growth phase until a peak is reached, followed by a decline phase; see \cite{CLIFF82} who considers epidemic waves in the context of the standard logistic distribution. Thus a wave is represented by a temporal pattern of growth and decline, and the time series as whole describes several waves as they evolve.
\smallskip
To provide further clarification of the model, we mention that the bi-skew logistic distribution is {\em not} a {\em mixture model} per se, in which case there is a mixture weight for each distribution in the sum, as in, say, a Gaussian mixture \cite[Chapter 9]{BISH06}. In the bi-skew logistic distribution case we do not have mixture weights; rather, we have two phases or, in our context, waves, which are sequential in nature, possibly with some overlap, as can be seen in Figure~\ref{fig:deaths} (c.f. \cite{MEYE94,SHEE04}); however, strictly speaking, the bi-skew logistic distribution can be viewed as a mixture model where the mixture weights are each $0.5$ and a scaling factor of $2$ is applied. Thus, as an approximation, we add a preprocessing step where we segment the time series into distinct waves, resulting in a considerable reduction to the complexity of the maximum likelihood estimation. We do, however, remark that the maximum likelihood estimation for the bi-skew logistic distribution is much simpler than that of a corresponding mixture model, due to the absence of mixture weights. In particular, although we could, in principle, make use of the EM (expectation-maximisation) algorithm \cite{REDN84} \cite[Chapter 9]{BISH06} to approximate the maximum likelihood estimates of the parameters, this would not be strictly necessary in the bi-skew logistic case, cf. \cite{MCDO21}. The only caveat, which holds independently of whether the EM algorithm is deployed or not, is the additional number of parameters present in the equations being solved. We leave this investigation as future work, and focus on our approximation, which does not require the solution to the maximum likelihood of (\ref{eq:bi-skew}); the details of the preprocessing heuristic we apply are given in the following section.
\section{Data analysis of COVID-19 deaths in the UK}
\label{sec:data}
Here we provide a full analysis of COVID-19 deaths in the UK from 30/01/20 to 30/07/21, employing the ${\cal E}SJS$ goodness-of-fit statistic and comparing it to the $KS2$ statistic. The daily UK COVID-19 data we used was obtained from \cite{GOV21}.
\smallskip
As a proof of concept of the modelling capability of the skew logistic distribution, we now provide a detailed analysis of the time series of COVID-19 deaths in the UK from 30/01/20 to 30/07/21.
\smallskip
To separate the waves we first smoothed the raw data using a moving average with a centred sliding window of 7 days.
We then applied a simple heuristic, where we identified all the minima in the time series and defined a wave as a consecutive portion of the time series, of at least 72 days, with the endpoints of each wave being local minima, apart from the first wave, which starts from day 0. The resulting four waves in the time series are shown in Figure~\ref{fig:deaths}; see the last column of Table~\ref{table:sl} for the endpoints of the four waves. It would be worthwhile, as future work, to investigate other heuristics, which may, for example, allow overlap between the waves to obtain more accurate start and end points, and to distribute the number of cases between the waves when there is overlap between them.
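A minimal Python rendering of this preprocessing heuristic is given below for illustration (the smoothing window and the 72-day threshold are as above; everything else, including the function names, is our own):
\begin{verbatim}
# Illustrative sketch: 7-day centred moving average, then local minima
# at least 72 days apart taken as the endpoints of the waves.
import numpy as np
from scipy.signal import argrelmin

def split_waves(daily_counts, min_len=72):
    smooth = np.convolve(daily_counts, np.ones(7) / 7.0, mode='same')
    ends, last = [], 0
    for t in argrelmin(smooth)[0]:
        if t - last >= min_len:      # enforce the minimum wave length
            ends.append(int(t))
            last = t
    return smooth, ends              # waves are the segments between ends
\end{verbatim}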
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{deaths_minima.jpg}
\caption{\label{fig:deaths} Reported daily COVID-19 deaths from 30/01/20 to 30/07/21 and their minima labelled `*', resulting in four distinct waves; a moving average with a centred sliding window of 7 days was applied to the raw data.}
\end{center}
\end{figure}
\smallskip
In Table~\ref{table:sl} we show the parameters resulting from maximum likelihood fits of the skew logistic distribution to the four waves. Figure~\ref{fig:waves} shows histograms of the four COVID-19 waves, each overlaid with the curve of the maximum likelihood fit of the skew logistic distribution to the data. Pearson's moment and median skewness coefficients \cite{DOAN11} for the four waves are recorded in Table~\ref{table:skew}. It can be seen that the correlation between these and $1-\lambda$ is close to $1$, as we would expect.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|}\hline
\multicolumn{5}{|c|}{Fitted parameters for the skew logistic distribution} \\ \hline
Wave & $\lambda$ & $\mu$ & $s$ & End \\ \hline \hline
1 & 0.2150 & 3.5137 & 3.8443 & 71 \\ \hline
2 & 1.0741 & 196.5157 & 14.4323 & 239 \\ \hline
3 & 0.2297 & 243.0709 & 4.5882 & 334 \\ \hline
4 & 1.7306 & 502.2758 & 7.0195 & 532 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:sl} Parameters from maximum likelihood fits of the skew logistic distribution to the four waves, and the day of the local minimum (End), which is the end point of the wave.}
\end{table}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.75]{waves_histograms.jpg}
\caption{\label{fig:waves} Histograms for the four waves of COVID-19 deaths from 30/01/20 to 30/07/21, each overlaid with
the curve of the maximum likelihood fit of the skew logistic distribution to the data.}
\end{center}
\end{figure}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|}\hline
\multicolumn{4}{|c|}{Skewness} \\ \hline
Wave & $1-\lambda$ & moment & median \\ \hline \hline
1 & 0.7850 & 0.9314 & 0.2939 \\ \hline
2 & -0.0741 & -0.7758 & -0.0797 \\ \hline
3 & 0.7703 & 0.9265 & 0.1939 \\ \hline
4 & -0.7306 & -1.5555 & -0.2413 \\ \hline
\multicolumn{2}{|c|}{Correlation} & 0.9931 & 0.9826 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:skew} Pearson's moment and median skewness coefficients for the four waves, and the correlation between $1-\lambda$ and these coefficients.}
\end{table}
\smallskip
We now turn to the evaluation of goodness-of-fit using the ${\cal E}SJS$ (empirical survival Jensen-Shannon divergence) \cite{LEVE18,LEVE21a}, which generalises the Jensen-Shannon divergence \cite{LIN91} to survival functions, and the well-known $KS2$ (Kolmogorov-Smirnov two-sample test statistic) \cite[Section 6.3]{GIBB21}. We will also employ 95\% bootstrap confidence intervals \cite{EFRO93} to measure the improvement in the ${\cal E}SJS$ and $KS2$, goodness-of-fit measures, of the skew-logistic over the logistic and normal distributions, respectively. For completeness we formally define the ${\cal E}SJS$ and $KS2$.
\smallskip
To set the scene we assume a time series \cite{CHAT19}, ${\bf x} = \{x_1, x_2, \ldots, x_n\}$, where $x_t$, for $t=1,2,\ldots, n$ is a value indexed by time, $t$, in our case modelling the number of daily COVID-19 deaths. We are, in particular, interested in the marginal distribution of ${\bf x}$, which we suppose comes from an underlying parametric continuous distribution $D$.
\smallskip
The {\em empirical survival function} of a value $z$ for the time series ${\bf x}$, denoted by $\widehat{S}({\bf x})[z]$, is given by
\begin{equation}\label{eq:emp}
\widehat{S}({\bf x})[z] = \frac{1}{n} \sum_{i=1}^n I_{\{x_i> z\}},
\end{equation}
where $I$ is the indicator function. In the following we will let $\widehat{P}(z) = \widehat{S}({\bf x})[z]$ stand for the empirical survival function $\widehat{S}({\bf x})[z]$, where the time series ${\bf x}$ is assumed to be understood from context; we will generally be interested in the empirical survival function $\widehat{P}$, which we suppose arises from the survival function $P$ of the parametric continuous distribution $D$, mentioned above.
\smallskip
The {\em empirical survival Jensen-Shannon divergence} (${\cal E}SJS$) between two empirical survival functions, $\widehat{Q}_1$ and $\widehat{Q}_2$ arising from the survival functions $Q_1$ and $Q_2$, is given by
\begin{equation}\label{eq:ejs}
{\cal E}SJS(\widehat{Q}_1,\widehat{Q}_2) = \frac{1}{2} \ \int_0^\infty \ \widehat{Q}_1(z) \ \log \left( \frac{\widehat{Q}_1(z)}{\widehat{M}(z)} \right) \ + \ \widehat{Q}_2(z) \ \log \left( \frac{\widehat{Q}_2(z)}{\widehat{M}(z)} \right) {\rm d} z,
\end{equation}
where
\begin{displaymath}
\widehat{M}(z) = \frac{1}{2} \ \left( \widehat{Q}_1(z) \ + \ \widehat{Q}_2(z) \right).
\end{displaymath}
\smallskip
We note that the ${\cal E}SJS$ is bounded and can thus be normalised, so it is natural to assume its values are between $0$ and $1$; in particular, when $\widehat{Q}_1 = \widehat{Q}_2$ its value is zero. Moreover, its square root is a metric \cite{NGUY15}, cf. \cite{LEVE18}.
\smallskip
The {\em Kolmogorov-Smirnov} two-sample test statistic between $\widehat{Q}_1$ and $\widehat{Q}_2$ as above, is given by
\begin{equation}\label{eq:ks2}
KS2(\widehat{Q}_1, \widehat{Q}_2) = \max_{z} \, | \widehat{Q}_1(z) - \widehat{Q}_2(z) |,
\end{equation}
where $\max$ is the maximum function, and $|v|$ is the absolute value of a number $v$.
We note that $KS2$ is bounded between $0$ and $1$, and is also a metric.
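\smallskip

Both statistics are straightforward to compute on a common grid of values; an illustrative Python sketch (names our own, with the integral in (\ref{eq:ejs}) approximated by the trapezoidal rule):
\begin{verbatim}
# Illustrative sketch: ESJS (eq:ejs) and KS2 (eq:ks2) between two
# survival functions evaluated on a common grid z.
import numpy as np
from scipy.integrate import trapezoid

def empirical_survival(sample, z):
    sample = np.asarray(sample, dtype=float)
    return np.array([(sample > zi).mean() for zi in z])

def esjs(S1, S2, z):
    M = 0.5 * (S1 + S2)
    with np.errstate(divide='ignore', invalid='ignore'):
        t1 = np.where(S1 > 0, S1 * np.log(S1 / M), 0.0)
        t2 = np.where(S2 > 0, S2 * np.log(S2 / M), 0.0)
    return 0.5 * trapezoid(t1 + t2, z)

def ks2(S1, S2):
    return np.max(np.abs(S1 - S2))
\end{verbatim}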
\smallskip
For a parametric continuous distribution $D$, we let $\phi = \phi(D,\widehat{P})$ be the parameters that are obtained from fitting $D$ to the empirical survival function, $\widehat{P}$, using maximum likelihood estimation. In addition, we let $P_\phi = S_\phi({\bf x})$ be the survival function of ${\bf x}$, for $D$ with parameters $\phi$.
Thus, the empirical survival Jensen-Shannon divergence and the Kolmogorov-Smirnov two-sample test statistic, between $\widehat{P}$ and $P_\phi$, are given by ${\cal E}SJS(\widehat{P}, P_\phi)$ and $KS2(\widehat{P},P_\phi)$, respectively, where $\widehat{P}$ and $P_\phi$ are omitted below as they will be understood from context. These values provide us with two measures of goodness-of-fit for how well $D$ with parameters $\phi$ is fitted to ${\bf x}$ \cite{LEVE21a}.
\smallskip
We are now ready to present the results of the evaluation.
In Table~\ref{table:esjs} we show the ${\cal E}SJS$ values for the four waves and the said improvements, while in Table~\ref{table:ks2} we show the corresponding $KS2$ values and improvements. In all cases the skew logistic is a preferred model over both the logistic and normal distributions, justifying the addition of a skewness parameter as can be see in
Figure~\ref{fig:waves}. Moreover, in all but one case is the logistic distribution preferred over the normal distribution; this is for wave 3, where the $KS2$ statistic of the normal distribution is smaller than that of the logistic distribution. We observe that, for the second wave, the ${\cal E}SJS$ and $KS2$ values for the skew logistic and logistic distribution are the closest, since as can be seen from Table~\ref{table:sl} the second wave was more or less symmetric, in which case the skew logistic distribution reduces to the logistic distribution.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}\hline
\multicolumn{6}{|c|}{${\cal E}SJS$ values for SL, Logit and Norm distributions} \\ \hline
Wave & SL & Logit & SL-Logit & Norm & SL-Norm \\ \hline \hline
1 & 0.0419 & 0.0583 & 28.25\% & 0.0649 & 35.54\% \\ \hline
2 & 0.0392 & 0.0448 & 12.52\% & 0.0613 & 36.17\% \\ \hline
3 & 0.0316 & 0.0387 & 18.38\% & 0.0423 & 25.38\% \\ \hline
4 & 0.0237 & 0.0927 & 74.47\% & 0.0939 & 74.79\% \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:esjs} ${\cal E}SJS$ values for the skew logistic (SL), logistic (Logit) and normal (Norm) distributions, and the improvement percentage of the skew logistic over the logistic (SL-Logit) and normal (SL-Norm) distributions, respectively.}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}\hline
\multicolumn{6}{|c|}{$KS2$ values for SL, Logit and Norm distributions} \\ \hline
Wave & SL & Logit & SL-Logit & Norm & SL-Norm \\ \hline \hline
1 & 0.0621 & 0.1245 & 50.14\% & 0.1280 & 51.50\% \\ \hline
2 & 0.0357 & 0.0391 & 8.57\% & 0.0420 & 15.01\% \\ \hline
3 & 0.0571 & 0.0930 & 38.66\% & 0.0854 & 33.18\% \\ \hline
4 & 0.0098 & 0.0817 & 87.98\% & 0.1046 & 90.61\% \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:ks2} $KS2$ values for the skew logistic (SL), logistic (Logit) and normal (Norm) distributions, and the improvement percentage of the skew logistic over the logistic (SL-Logit) and normal (SL-Norm) distributions, respectively.}
\end{table}
\smallskip
In Tables \ref{table:boot-jsd} and \ref{table:boot-ks2} we present the bootstrap 95\% confidence intervals of the ${\cal E}SJS$ and $KS2$ improvements, respectively, using the {\em percentile} method, while in Tables \ref{table:bca-jsd} and \ref{table:bca-ks2} we provide the 95\% confidence intervals of the ${\cal E}SJS$ and $KS2$ improvements, respectively, using the {\em bias-corrected and accelerated} (BCa) method \cite{EFRO93}, which adjusts the confidence intervals for bias and skewness in the empirical bootstrap distribution. In all cases the mean of the bootstrap samples is above zero with a very tight standard deviation. As noted above, the second wave is more or less symmetric, so we expect that the standard logistic distribution will provide a fit to the data which is as good as the skew logistic fit. It is thus not surprising that in this case the improvement percentages are, generally, not significant. In addition, the improvements for the third wave are also, generally, not significant, which may be due to the starting point of the third wave, given our heuristic, being close to its peak; see Figure~\ref{fig:deaths}. We observe that, for this data set, it is not clear whether deploying the BCa method yields a significant advantage over simply deploying the percentile method.
\smallskip
In Table~\ref{table:mean-std} we show the mean and standard deviation statistics of the confidence interval widths, of the metrics we used to compare the distributions, implying that, in general, the ${\cal E}SJS$ goodness-of-fit measure is more powerful than the $KS2$ goodness-of-fit measure. This is based on the known result that statistical tests using measures resulting in smaller confidence intervals are normally considered to be more powerful, implying that a smaller sample size may be deployed \cite{LIU13a}.
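\smallskip

For illustration, the percentile intervals reported above can be produced with a generic bootstrap routine of the following form (Python sketch; the statistic passed in would refit both models to each resample and return the goodness-of-fit improvement):
\begin{verbatim}
# Illustrative sketch: percentile bootstrap CI for an improvement
# statistic, e.g. ESJS(logistic fit) - ESJS(skew logistic fit).
import numpy as np

def bootstrap_ci(sample, statistic, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    stats = np.array([
        statistic(rng.choice(sample, size=len(sample), replace=True))
        for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi, stats.mean(), stats.std()
\end{verbatim}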
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}\hline
\multicolumn{6}{|c|}{Percentile confidence intervals for ${\cal E}SJS$ improvement} \\ \hline
Wave/Diff & LB of CI & UB of CI & Width of CI & Mean & STD \\ \hline \hline
1/SL-Logit & 0.0093 & 0.0317 & 0.0224 & 0.0211 & 0.0063 \\ \hline
1/SL-Norm & 0.0170 & 0.0382 & 0.0212 & 0.0278 & 0.0063 \\ \hline
2/SL-Logit & {\em -0.0010} & 0.0066 & 0.0076 & 0.0034 & 0.0049 \\ \hline
2/SL-Norm & 0.0154 & 0.0232 & 0.0078 & 0.0201 & 0.0051 \\ \hline
3/SL-Logit & {\em -0.0028} & 0.0112 & 0.0140 & 0.0083 & 0.0022 \\ \hline
3/SL-Norm & 0.0021 & 0.0149 & 0.0128 & 0.0120 & 0.0022 \\ \hline
4/SL-Logit & 0.0549 & 0.0810 & 0.0261 & 0.0714 & 0.0068 \\ \hline
4/SL-Norm & 0.0560 & 0.0821 & 0.0261 & 0.0722 & 0.0070 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:boot-jsd} Results from the percentile method for the confidence interval of the difference of the ${\cal E}SJS$ between the logistic (Logit) and skew logistic (SL), and between the normal (Norm) and skew logistic (SL) distributions, respectively; Diff, LB, UB, CI, Mean and STD stand for difference, lower bound, upper bound, confidence interval, mean of samples and standard deviation of samples, respectively.}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}\hline
\multicolumn{6}{|c|}{Percentile confidence intervals for $KS2$ improvement} \\ \hline
Wave/Diff & LB of CI & UB of CI & Width of CI & Mean & STD \\ \hline \hline
1/SL-Logit & 0.0438 & 0.0760 & 0.0322 & 0.0621 & 0.0073 \\ \hline
1/SL-Norm & 0.0411 & 0.0821 & 0.0410 & 0.0684 & 0.0078 \\ \hline
2/SL-Logit & 0.0003 & 0.0047 & 0.0044 & 0.0033 & 0.0009\\ \hline
2/SL-Norm & 0.0007 & 0.0092 & 0.0085 & 0.0065 & 0.0017 \\ \hline
3/SL-Logit & {\em -0.0073} & 0.0441 & 0.0514 & 0.0343 & 0.0082\\ \hline
3/SL-Norm & {\em -0.0142} & 0.0365 & 0.0507 & 0.0267 & 0.0080 \\ \hline
4/SL-Logit & 0.0474 & 0.0728 & 0.0254 & 0.0680 & 0.0046\\ \hline
4/SL-Norm & 0.0710 & 0.0962 & 0.0252 & 0.0905 & 0.0048 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:boot-ks2} Results from the percentile method for the confidence interval of the difference of the $KS2$ between the logistic (Logit) and skew logistic (SL), and between the normal (Norm) and skew logistic (SL) distributions, respectively; Diff, LB, UB, CI, Mean and STD stand for difference, lower bound, upper bound, confidence interval, mean of samples and standard deviation of samples, respectively.}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}\hline
\multicolumn{6}{|c|}{BCa confidence intervals for ${\cal E}SJS$ improvement} \\ \hline
Wave/Diff & LB of CI & UB of CI & Width of CI & Mean & STD \\ \hline \hline
1/SL-Logit & 0.0087 & 0.0260 & 0.0173 & 0.0210 & 0.0062 \\ \hline
1/SL-Norm & 0.0165 & 0.0333 & 0.0168 & 0.0275 & 0.0063 \\ \hline
2/SL-Logit & {\em -0.0009} & 0.0258 & 0.0267 & 0.0036 & 0.0053 \\ \hline
2/SL-Norm & 0.0153 & 0.0425 & 0.0272 & 0.0201 & 0.0050 \\ \hline
3/SL-Logit & {\em -0.0024} & 0.0095 & 0.0119 & 0.0084 & 0.0023 \\ \hline
3/SL-Norm & {\em -0.0027} & 0.0135 & 0.0162 & 0.0119 & 0.0024 \\ \hline
4/SL-Logit & 0.0308 & 0.0703 & 0.0395 & 0.0708 & 0.0074 \\ \hline
4/SL-Norm & 0.0554 & 0.0713 & 0.0159 & 0.0726 & 0.0069 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:bca-jsd} Results from the BCa method for the confidence interval of the difference of the ${\cal E}SJS$ between the logistic (Logit) and skew logistic (SL), and between the normal (Norm) and skew logistic (SL) distributions, respectively; Diff, LB, UB, CI, Mean and STD stand for difference, lower bound, upper bound, confidence interval, mean of samples and standard deviation of samples, respectively.}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}\hline
\multicolumn{6}{|c|}{BCa confidence intervals for $KS2$ improvement} \\ \hline
Wave/Diff & LB of CI & UB of CI & Width of CI & Mean & STD \\ \hline \hline
1/SL-Logit & 0.0428 & 0.0801 & 0.0373 & 0.0624 & 0.0074 \\ \hline
1/SL-Norm & 0.0444 & 0.0777 & 0.0333 & 0.0683 & 0.0078 \\ \hline
2/SL-Logit & 0.0005 & 0.0047 & 0.0042 & 0.0033 & 0.0008 \\ \hline
2/SL-Norm & 0.0001 & 0.0089 & 0.0088 & 0.0064 & 0.0017 \\ \hline
3/SL-Logit & 0.0013 & 0.0445 & 0.0432 & 0.0346 & 0.0077 \\ \hline
3/SL-Norm & {\em -0.0111} & 0.0368 & 0.0479 & 0.0263 & 0.0082 \\ \hline
4/SL-Logit & 0.0491 & 0.0739 & 0.0248 & 0.0676 & 0.0047 \\ \hline
4/SL-Norm & 0.0685 & 0.0985 & 0.0300 & 0.0908 & 0.0046 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:bca-ks2} Results from the BCa method for the confidence interval of the difference of the $KS2$ between the logistic (Logit) and skew logistic (SL), and between the normal (Norm) and skew logistic (SL) distributions, respectively; Diff, LB, UB, CI, Mean and STD stand for difference, lower bound, upper bound, confidence interval, mean of samples and standard deviation of samples, respectively.}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|c|c|c|c|}\hline
\multicolumn{5}{|c|}{Summary statistics for the CI widths} \\ \hline
Statistic & ${\cal E}SJS$-P & $KS2$-P & ${\cal E}SJS$-BCa & $KS2$-BCa \\ \hline \hline
Mean & 0.0172 & 0.0298 & 0.0214 & 0.0287 \\ \hline
STD & 0.0077 & 0.0176 & 0.0091 & 0.0155 \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:mean-std} Mean and standard deviation (STD) statistics for the confidence interval (CI) widths using the percentile (P) and BCa methods.}
\end{table}
\smallskip
As mentioned in the introduction, we obtained comparable results to the above when modelling epidemic waves with the epsilon skew normal distribution \cite{MUDH00} as opposed to using the skew logistic distribution; see also \cite{KAZE15} for a comparison of skew logistic and skew normal distributions in the context of insurance loss data, showing that the skew logistic distribution performed better than the skew normal distribution for fitting the data sets tested. Further to the note in the introduction that the skew logistic distribution is the more natural one to deploy in this case due to its heavier tails, we observe that in an epidemic scenario the number of cases counted can only be non-negative, while the epsilon skew normal also supports negative values.
\section{Concluding remarks}
\label{sec:conc}
We have proposed the skew logistic and bi-skew logistic distributions as models for single and multiple epidemic waves, respectively. The model is a simple extension of the symmetric logistic distribution, which can readily be deployed in the presence of skewed data that exhibits growth and decay. We provided validation for the proposed model using the ${\cal E}SJS$ as a goodness-of-fit statistic, showing that it is a good fit to COVID-19 data in the UK, and that it is more powerful than the alternative $KS2$ statistic. As future work, we could use the model to compare the progression of multiple waves across different countries, extending the work of \cite{DYE20}.
\section{Introduction}
Autonomous systems primarily work through uncertain environments, and as the environment changes, they must autonomously decide on adaptive actions. To include all aspects of the system in decision-making, one can use a model-driven approach~\cite{model} to rigorously analyze the local system and the environment for changes and decide which action better adapts the autonomous system with respect to formal specifications. The main obstacle against runtime analysis of autonomous systems is the size of the model, which is relatively large for a resource-constrained system to meet timing limitations~\cite{limitation}. Therefore, we need to verify the system at runtime efficiently.
An early solution to this problem is to use approximation~\cite{abate1, abate2}. As the model changes at runtime, the approximation must be repeated, which adds overhead to the verification process. One can use incremental approximation~\cite{man1} to enhance the reusability of previously verified parts of the model~\cite{incremental}. In this approach, we partition the model into a set of independent components. At runtime, if a change occurs, we need to re-approximate/re-verify only those components that are affected by the changes. This improves the verification process, given a certain upper bound on errors and a correct aggregation algorithm for a central decision-making configuration.
Given the stochastic nature of the environment and the ability to choose an adaptive action in autonomous systems, we can deploy the Markov decision process (MDP) to model the system~\cite{man2}. The changes are brought into the model as a set of parameters. Then, we use a parametric MDP (pMDP)~\cite{pmdp} for capturing the changes. In this research, we use a case study on an energy-harvesting system that uses a MAPE-K loop for adaptation purposes~\cite{man3}. MAPE-K stands for monitoring, analyzing, planning, executing, and knowledge. In this research, we focus on the analyzing phase of the MAPE-K loop.
In this research, we tackle the problem of efficiently deciding about the changes in autonomous systems at runtime. We use a pMDP as the modeling construct and apply incremental approximation logic to the model. It means that we aim at partitioning the model into a set of fine-grained components to ease the process of incremental approximation. In this regard, we propose two metrics to evaluate which system policy results in the best partitions according to the size and the number of generated components. We categorize the policies into available and unavailable subsets and elaborate a hierarchy of elimination for available policies, indicating how we can achieve the best partitioning policy. In this paper, we investigate the metrics both theoretically and experimentally. The evaluation of the metrics is calculated using an offline approach, and the outcome is deployed at runtime.
\section{System Model and Problem Definition}
Autonomous systems are affected by changes coming mainly from the environment. As the model is variable, it is updated at runtime, and it is composed of two independent parts: an environmental part and a local part. The environmental part works under uncertainty, and it may face a few changes during its operation. The local part must adjust its behavior to keep the system's functionalities above an acceptable threshold. Hence, the system needs to keep track of the model, be informed about the latest changes, and repeatedly decide about triggering an adaptation action in response to changes. In a model-driven approach, the model of the system is composed of both local and environmental parts. The local system model needs non-determinism to reflect the choice among actions, and the environment requires probability to describe uncertainty. The best model for describing this type of autonomous system deploys a pMDP, which provides sequential decision-making among actions and models the changes via a set of parameters.
\begin{definition}\label{def:mdp}
A pMDP is denoted by a tuple $M = (S,A,V,P,R)$ where $S=\{s_1,s_2,\ldots, s_n\}$ is the finite set of states, $A=\{a_1,a_2,\ldots, a_m\}$ is the finite set of actions, $V$ is a finite set of variables, $P:S \times A \times S \rightarrow \mathcal{F}_V$ is the parametric transition probability function that, for each evaluation of the variables $v\in V$, maps each transition to a real number in $[0,1]$, and $R:S\rightarrow\mathbb{R}$ is the reward function that maps each state to a real number. $\Box$
\end{definition}
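As a minimal illustration of Definition~\ref{def:mdp}, a pMDP can be represented with transition probabilities stored as functions of a parameter valuation (Python sketch; all names are our own assumptions):
\begin{verbatim}
# Illustrative sketch: a minimal pMDP container per Definition 1.
from dataclasses import dataclass
from typing import Callable, Dict, Set, Tuple

Valuation = Dict[str, float]

@dataclass
class PMDP:
    states: Set[str]
    actions: Set[str]
    variables: Set[str]
    # (s, a, s') -> function mapping a valuation of V into [0, 1]
    trans: Dict[Tuple[str, str, str], Callable[[Valuation], float]]
    reward: Dict[str, float]

    def instantiate(self, v: Valuation) -> Dict[Tuple[str, str, str], float]:
        """Evaluate every parametric transition to get a concrete MDP."""
        return {key: f(v) for key, f in self.trans.items()}
\end{verbatim}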
The autonomous system verifies the model against a few properties at runtime. However, the efficiency of the verification is affected by the large state space of the model. A solution to this issue is to partition the model into a set of components $C=(c_1,c_2,..., c_n)$. Using an incremental approach~\cite{man1}, the components, e.g. Strongly Connected Components (SCCs), can be analyzed and formally verified independently. When a few components are affected by the changes, we need to only re-verify the affected components. Choosing the best partitioning policy leads to efficient verification, and consequently, efficient decision-making mechanism in autonomous systems. We define a policy by Definition~\ref{def:policy}.
\begin{definition}\label{def:policy} A policy $\pi$ for an MDP $M$ is defined as a function $\pi : S \rightarrow \mathcal{P}(A)$, where $\mathcal{P}(A)$ is the set of probability distributions on $A$; that is, a policy $\pi$ is a mapping from any given state $s\in S$ to a distribution over actions. $\Box$
\end{definition}
In each state $s$ with the available actions $a,b,c\in A$, a policy $\pi$ is formulated as
$\pi(s): p(a) \times a :+ p(b) \times b :+ p(c) \times c$, where $p(a)+ p(b)+ p(c) = 1$ and the $+$ operator denotes a choice. There are two types of policy selection criteria~\cite{policy}: the first approach is based on the probability distribution of actions, and the second is choosing a certain action among the available actions. In this research, we mostly follow the second approach because, in each situation, the autonomous system must deterministically resolve the non-determinism among the available actions by applying a policy.
The main problem of this research is to find the best policy for partitioning the model into a set of components. On the one hand, the policy must fulfill the formal requirements of the system. On the other hand, it must lead the system toward the best partitioning of the model, in which the outcome is a set of fine-grained components (see Section~\ref{sec:theory}).
\section{Theoretical Foundations}\label{sec:theory}
An autonomous system may have countless policies to run and, under different situations, decides which policy should be selected. However, the system cannot choose among all possible policies because of environmental changes or internal system constraints. In each situation, a subset of policies cannot be applied. We divide the policies into two categories: available ($\Pi_{a}$) and unavailable ($\Pi_{u}$) policies. Eliminating unavailable policies makes a subset of states unreachable~\cite{abate3}, so some parts of the model are pruned in each situation. This has two effects on the efficiency of the model: the first concerns the state-space reduction of the model, and the second is the elimination of a subset of transitions, which leads to forming more independent components~\cite{incremental2}. In this section, we investigate and formulate the theoretical foundations for selecting the best policy among the available policies. The outcome is expected to increase the number of fine-grained components. To assist the autonomous system in deciding on the best available policy, we propose two quantitative metrics.
The policies of the system are categorized into available ($\Pi_a$) and unavailable ($\Pi_u$) subsets, for which $\Pi = \Pi_a \cup \Pi_u$ holds. In each situation, we aim at pruning the model by eliminating $\Pi_u$ from the choices and deciding on the best policy from $\Pi_a$. The pruning process includes two steps. First, we eliminate the unavailable policies. As a result, the total state space $S$, which consists of reachable states $S_a$ and unreachable states $S_{u1}$ ($S = S_a \cup S_{u1}$), changes into a new subset of $S$ represented by $S'= S - S_{u1}$. Similarly, the unavailable transitions $T_{u1}$ are removed from the total set of transitions $T$, giving $T' = T - T_{u1}$. In the second step, a specific policy $\pi_{i}\in \Pi_{a}$ is selected based on the partitioning results. This time, a subset of $S'$, namely $S_{u2}$, becomes unreachable, and the set of reachable states changes into $S'' = S' - S_{u2}$. Likewise, the set of transitions $T'$ changes into $T'' = T' - T_{u2}$.
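The two-step pruning can be sketched as follows (a simplified Python illustration under our own data layout, not the PRISM representation):
\begin{verbatim}
from collections import defaultdict

def prune(initial, transitions, excluded_actions):
    # One elimination round: drop the transitions enabled by the excluded
    # actions, then keep only the states still reachable from `initial`,
    # i.e. S' = S - S_u1 and T' = T - T_u1.
    # `transitions` maps (state, action) -> iterable of successor states.
    by_state = defaultdict(list)
    for (s, a), succs in transitions.items():
        if a not in excluded_actions:
            by_state[s].append(((s, a), succs))

    reachable, frontier = {initial}, [initial]
    while frontier:                      # forward reachability search
        s = frontier.pop()
        for _, succs in by_state[s]:
            for t in succs:
                if t not in reachable:
                    reachable.add(t)
                    frontier.append(t)

    pruned = {key: succs for s in reachable for key, succs in by_state[s]}
    return reachable, pruned
\end{verbatim}
Calling \texttt{prune} once with the actions excluded by $\Pi_u$ yields $(S',T')$; calling it again with the actions discarded by the selected policy $\pi_i$ yields $(S'',T'')$.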
Consider a model $M$ with state space $S$ that is partitioned into a set of components $C$ (e.g. the set of SCCs of $M$). In the first elimination round, $S\rightarrow S'$ is applied and the model is partitioned into a new set of components $C'$. In the second round of elimination, the model is divided into the components $C''$ that result from applying a specific policy as $S'\rightarrow S''$. We expect $C''$ to include the best available partition of the model in terms of size and number. To define the theoretical criteria for deciding on the best policy, we propose two metrics: Balancing and Variation.
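The components themselves can be obtained with any standard SCC algorithm; the sketch below uses Kosaraju's two-pass method (in practice an existing graph library would do):
\begin{verbatim}
from collections import defaultdict

def sccs(states, succ):
    # Kosaraju's algorithm on the pruned transition graph;
    # `succ` maps every state to the set of its successor states.
    order, seen = [], set()

    def dfs(graph, root, post):
        stack = [(root, iter(graph[root]))]
        seen.add(root)
        while stack:                     # iterative DFS with post-order
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                stack.pop(); post.append(v)
            elif w not in seen:
                seen.add(w)
                stack.append((w, iter(graph[w])))

    for s in states:                     # pass 1: finishing order on G
        if s not in seen:
            dfs(succ, s, order)

    pred = defaultdict(set)              # transposed graph
    for u in states:
        for v in succ[u]:
            pred[v].add(u)

    seen.clear()
    components = []
    for s in reversed(order):            # pass 2: DFS on the transpose
        if s not in seen:
            comp = []
            dfs(pred, s, comp)
            components.append(comp)
    return components
\end{verbatim}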
\noindent \textbf{Balancing.} This metric evaluates the effect of a policy on the components by considering the ratio of the distribution of multi-state components to the total number of components. As denoted in~(\ref{eq:bal}), we write the metric as $Bal(C)$: it takes the components resulting from applying a policy $\pi_{i}\in \Pi_{a}$ and returns a quantitative evaluation that reveals how balanced the components are in terms of size. Here, $max$ denotes the maximum number of states in any component, $i$ represents the number of states in a component, and $|C_i|$ indicates how many components with $i$ states exist. As denoted in the denominator, the summation starts at $i=2$ because the formula does not count single-state components. As the number of single-state components decreases, the system obtains lower values of $Bal(C)$. The Balancing metric evaluates the effect of a policy on the components locally, via inter-component comparison.
\begin{equation}\label{eq:bal}
Bal(C)=\frac{\sum_{i=1}^{max} |C_i|}{\sum_{i=2}^{max} \frac{1}{(max - i) + 1} \times |C_i|}
\end{equation}
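For concreteness, $Bal(C)$ can be computed directly from the component sizes (a Python sketch; \texttt{components} is the partition produced by a policy, each component given as its collection of states):
\begin{verbatim}
from collections import Counter

def balancing(components):
    sizes = Counter(len(c) for c in components)   # sizes[i] = |C_i|
    size_max = max(sizes)
    total = sum(sizes.values())                   # numerator
    weighted = sum(count / ((size_max - i) + 1)   # denominator: i >= 2
                   for i, count in sizes.items() if i >= 2)
    return float("inf") if weighted == 0 else total / weighted
\end{verbatim}
A partition consisting solely of single-state components has an empty denominator, hence the value infinity (cf. the \texttt{infinite} entry in Table~\ref{tab:res}).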
\noindent \textbf{Variation.} The second metric extends the ability of the autonomous system to evaluate the effect of a policy in a broader sense. Balancing focuses only on size, whereas Variation globalizes the analysis to the parameters of the components. In our proposed configuration, the autonomous system uses incremental verification and reuses the previous results of the components in the next verification steps if those components are not affected by the changes. In other words, as the variety of parameters in a component decreases, the system achieves more efficient verification against the changes. As the value of $Var(C,\theta(V))$ decreases, the components are more resilient against the changes and the variation among the components is lower. Formula (\ref{eq:var}) gives the criteria for calculating the Variation for a valuation $\theta$ of a set of parameters $V$ including $(p_1, \cdots,p_n)$. In all parts of the formula, the terms $|C_{(\cdots)}|$ denote the number of components that are affected by the changes in the worst-case scenario, grouped by the subset of parameters they contain.
\begin{equation}\label{eq:var}
Var(C, \theta(V))=\frac{\sum_{i=1}^{n} [p_{i}\times |C_{i}|] + \sum_{i=1}^{n-1} [(p_{i}+p_{i+1})\times |C_{(i,i+1)}|] +\cdots + \left[\sum_{i=1}^{n} p_{i}\right]\times |C_{(1,\cdots,n)}|} {\sum_{i=1}^{n} p_i\times \sum_{i=1}^{max} |C_i|}
\end{equation}
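Operationally, the grouped sums in the numerator of (\ref{eq:var}) equal a per-component sum of the probability masses of the parameters that each component contains, which suggests the following sketch (\texttt{comp\_params}, mapping each component to the set of parameters occurring on its transitions, is a data layout of our own):
\begin{verbatim}
def variation(comp_params, p):
    # Each component is weighted by the total probability mass of the
    # parameters it contains (its worst-case exposure to a change);
    # `p` maps parameter name -> p_i.
    exposure = sum(sum(p[v] for v in deps)
                   for deps in comp_params.values())
    return exposure / (sum(p.values()) * len(comp_params))
\end{verbatim}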
We argue that the autonomous system finds the best partitioning policy among the set of available policies by calculating the Balancing and Variation metrics: the smallest additive value determines the best partitioning policy $\pi_{best}\in\Pi_{a}$. To this end, we propose Lemma~\ref{lemma} and prove its correctness in Appendix I.
\begin{lemma}\label{lemma}
The additive value of Balancing and Variation determines the best partitioning policy. $\Box$
\end{lemma}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{img/MDP.png}
\caption{The parts of the autonomous system model: (a) sensors, (b) battery model, and (c) the model of the environment. All states of the sensor and battery models have self-loops, which are omitted in the figure.}\label{fig:model}
\end{figure}
\section{Finding the Best Policy in Practice}\label{sec:res}
To investigate the theoretical findings in practice, we use a case study on an energy harvesting system, propose the steps toward reaching the best partitioning policy, and measure the results.
\subsection{Case Study on Self-Adaptive Energy Harvesting Systems}\label{sec:case}
The case study is a self-adaptive solar energy harvesting system, which consists of an environmental part and a local part. The environment model captures how much energy can be harvested, as shown in Figure~\ref{fig:model}-(c), in which each state determines the expected energy harvest on an hourly basis~\cite{man3}. The local part is a sensor network with a central battery to store the harvested energy, shown in Figure~\ref{fig:model}-(a) and (b), respectively. Each sensor can work under four operational modes: busy, idle, standby, and sleep, and the self-adaptive system controls the operating mode to balance the harvested energy and the battery level. We use the PRISM model-checker for modeling and verification~\cite{prism}; the repository including the code, models, and sample results is accessible via~\cite{repo}. As mentioned, we use a pMDP to model the autonomous system; if any change occurs, the parameters of the model capture it.
\subsection{The Proposed Solution}\label{sec:sys}
An overview of the proposed system is shown in Figure~\ref{fig:system}. The system starts by constructing the autonomous system model from the monitoring step and collaborates with MAPE-K to collect information. Then, the system filters the available policies $\Pi_{a}$ and constructs a hierarchy of policies according to the environmental situations and battery levels. The nine resulting categories of policies are investigated to find the best partitioning policy under each situation. Afterward, we analyze the components generated by executing the different types of policies. The components are evaluated by the Balancing and Variation metrics, and the best (lowest) additive result in each category determines the best policy. As an extension of this process, we plan to use the best policies for training a reinforcement-learning-assisted component for decisions beyond these known categories. Note that the entire analysis is performed once; the autonomous system then deploys the results at runtime. For instance, if the harvested energy is under 200 watt-hours and the battery level is low, the sensors are only allowed to operate in the standby and sleep modes, and the best previously investigated policy is selected.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{img/fig2.png}
\caption{The process of calculating the best partitioning policy under each situation of the autonomous system.}\label{fig:system}
\end{center}
\end{figure}
\begin{table}[h]
\scriptsize
\centering
\caption{The results of evaluations for finding the best partitioning policies by the metrics.\label{tab:res}}
\begin{tabular}{!{}l!{}l!{}l!{}l!{}l!{}l!{}l!{}l!{}l!{}l!{}l!{}}
\hline
$\pi_i$ & $p_{2}, p_{3}, p_{4}, p_0$ & $p_5, p_6, p_0$ & $p_7, p_8, p_0$ & $p_2, p_3, p_4, p_0$ & $p_5, p_6, p_0$ & $p_7, p_8, p_0$ & $\#C$ & $\#SS$ & $S:\#C$ & $Bal+Var$
\\
\hline
$b_1$& 0,0.8,0.2,0& 0,0.7,0.3& 0,0.7,0.3& 0,0.5,0.5,0& 0,1,0& 0,0,1& 1584& 1056& 2:528& 4.26\\
$w_1$& 0,0.5,0.5,0& 0,1,0& 0,0,1& 0,0.5,0.5,0& 0,1,0& 0,0,1& 2112& 2112& 1:2112& infinite \\
\hline
$b_2$& 0,0.8,0.2,0& 0,0.5,0.5& 0,0,1& 0.5,0,0,0.5& 1,0,0& 1,0,0& 528& 0& 2:528& 2.33\\
$w_2$& 0,0.5,0.5,0& 0,0.8,0.2& 0,0.9,0.1& 1,0,0,0& 0.9,0.1,0& 0.9,0,0.1& 396& 0& 2:264& 5.115\\
& & & & & & & & &
4:132& \\
\hline
$b_3$& 0.9,0,0,0.1& 1,0,0& 1,0,0& 0.2,0,0,0.8& 1,0,0& 1,0,0& 66& 0& 4:66& 4.31\\
$w_3$& 1,0,0,0& 1,0,0& 1,0,0& 1,0,0,0& 1,0,0& 1,0,0& 66& 0& 8:66& 8.33\\
\hline
$b_4$& 0,0,1,0& 0,0,1& 0,1,0& 0,0.7,0.3,0& 0,0.8,0.2& 0,1,0& 6672& 4448& 2:2224& 4.316\\
$w_4$& 0,0.5,0.5,0& 0,0.8,0.2& 0,0.9,0.1& 0,0,0.3,0.7& 0,0.8,0.2& 0,1,0& 5004& 2224& 2:2224& 7.227\\
& & & & & & & & &
4:556& \\
\hline
$b_5$& 1,0,0,0& 1,0,0& 1,0,0& 0,0.7,0.3,0& 0.1,0.8,0.1& 0.1,0.8,0.1& 1112& 0& 2:556& 1.976\\
& & & & & & & & &
6:556& \\
$w_5$& 0.8,0,0,0.2& 1,0,0& 1,0,0& 0,0.7,0.3,0& 0,0.8,0.2& 0,1,0& 1668& 0& 2:1112 & 2.117\\
& & & & & & & & &
4:556& \\
\hline
$b_6$& 0.5,0,0,0.5& 1,0,0& 1,0,0& 0.9,0,0,0.1& 1,0,0& 1,0,0& 556& 0& 4:556& 1.32 \\
$w_6$& 0,0,0,1& 0.9,0,0.1& 1,0,0& 0.4,0.1,0.1,0.4& 1,0,0& 1,0,0& 1112& 0& 4:1112& 1.34\\
\hline
$b_7$& 0.1,0.8,0,0.1& 0,0.5,0.5& 0,0.5,0.5& 0.1,0.7,0.1,0.1& 0,0.8,0.2& 0,0.9,0.1& 2224& 0& 4:1112& 1.983\\
& & & & & & & & &
8:1112& \\
$w_7$& 0,0.5,0.5,0& 0,0.5,0.5& 0,0.5,0.5& 0,0.5,0.5,0& 0,0.5,0.5& 0,0.5,0.5& 5004& 1112& 2:2224& 6.036\\
& & & & & & & & &
4:1390 & \\
& & & & & & & & &
8:278& \\
\hline
$b_8$& 0.9,0,0,0.1& 0.8,0.1,0.1& 1,0,0& 0.1,0.7,0.1,0.1& 0,0.8,0.2& 0,0.9,0.1& 1112& 0& 4:556& 1.956\\
& & & & & & & & &
8:556& \\
$w_8$& 0.5,0,0,0.5& 1,0,0& 1,0,0& 0,0,0.5,0.5& 0,0.8,0.2& 0,0.9,0.1& 1668& 0& 2:556& 3.461\\
& & & & & & & & &
4:834& \\
& & & & & & & & &
8:278& \\
\hline
$b_9$& 0.5,0,0,0.5& 1,0,0& 1,0,0& 1,0,0,0& 1,0,0& 1,0,0& 278& 0& 8:278& 1.27\\
$w_9$& 0.5,0,0,0.5& 1,0,0& 1,0,0& 0.5,0,0,0.5& 1,0,0& 1,0,0& 556& 0& 4:278& 1.97\\
& & & & & & & & &
8:278& \\
\hline
\end{tabular}
\end{table}
\subsection{Analytical Results}\label{sec:results}
Table~\ref{tab:res} presents the quantitative results of applying the proposed system to the case study on the energy harvesting system. For each subset of available policies we report two evaluations, $b_i$ and $w_i$, representing the best and the worst analyzed policy in terms of the additive value of the metrics, denoted $Bal+Var$. The valuations of the parameters are reported for the two sensors. In addition, other relevant characteristics of the system model are included in the results, such as the number of components $\#C$, the number of single states $\#SS$, and the number of states per component $S:\#C$. For the parameter valuations we mostly used deterministic 0/1 policies; however, in some positions we used a distribution over the parameters and granted the system the ability to choose, because in those situations the valuation of a few parameters does not affect the size of the components. As the results show, Lemma~\ref{lemma} is supported in practice: the best partitioning policies correspond to the lowest additive values of the metrics, and we can intuitively observe that better component structures correspond to better additive values.
\section{Conclusion and Future Direction}\label{sec:con}
In this research, we tackled the problem of efficient decision-making in autonomous systems. We used the incremental approximation approach as our base model and proposed a new method for partitioning the model into a set of independent components. In this regard, we proposed a hierarchy of policies by determining the available policies and then investigating the best partitioning policy among the available ones. To evaluate which policy better partitions the model, we proposed two metrics, Balancing and Variation, and argued that the additive value of these metrics determines the best partitioning policy. We investigated the proposed approach both theoretically and experimentally. As future work, we have already started integrating the proposed system with a reinforcement-learning-assisted subsystem to extend it to similar applications in which the autonomous system may face unprecedented situations.
\subsection*{Appendix I: Proof of the Lemma}\label{sec:app}
The best partitioning policy depends on two aspects of the components: the first is the size of the components, and the second is the number of generated components. The first aspect is fulfilled if formula~(\ref{eq:bal}) guarantees the optimization of balancing, so we discuss how this equation minimizes the quantitative measure of balancing. Consider a system whose partition contains components of sizes $1,\cdots,max$, where $max$ denotes the maximum size of the components; the total number of components is divided into the cumulative counts of this sequence. Looking at the formula, the best balancing value is obtained when all components have the same size, in which case the minimum value of 1 is attained; any other scenario increases the outcome above 1. Therefore, the best value of balancing is 1, and it is the minimum possible value. We aim to generate components that are as large as possible: as the size of the components grows, we expect lower values of the balancing metric, because the overhead of approximation decreases. This is reflected in the formula by a coefficient in the range $(0,1]$, whose denominators $(max-i)+1$ form an arithmetic sequence: the smallest coefficients belong to the smallest components, and the value one is assigned to $C_{max}$. The validity of Equation~(\ref{eq:bal}) can be proved by induction. The range of this metric is between 1 and infinity, where infinity is the worst case and, according to the experimental results, rarely occurs.
Variation takes a different, more global view of the components, investigating the impact of each component with respect to changes in its probabilistic variables. A larger number of probabilistic variables in a component makes the system more vulnerable to changes, as more verification results must be re-evaluated. The minimum value is achieved when the components do not contain any probabilistic variable, in which case the outcome is 0. Conversely, the maximum value measured by Equation~(\ref{eq:var}) is 1, attained when all components contain all parametric variables, which rarely (if ever) happens. The validity of this equation can likewise be established by induction.
The best policy with respect to the metrics is achieved when both attain their minimum values, 1 and 0 for Balancing and Variation, respectively. As the metrics work in different ranges of values, we scale Variation by 10, so that its possible values lie in the range $[0,10]$. As stated in Lemma~\ref{lemma}, the additive value of these metrics, evaluated by its minimum, determines the best partitioning policy.
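Operationally, the selection rule of Lemma~\ref{lemma} amounts to the following sketch, reusing the \texttt{balancing} and \texttt{variation} helpers above together with the factor-10 scaling of Variation:
\begin{verbatim}
def best_policy(candidates):
    # `candidates` maps a policy name to the triple
    # (components, comp_params, p) of its resulting partition;
    # pi_best is the candidate with the smallest Bal + 10*Var.
    def score(name):
        components, comp_params, p = candidates[name]
        return balancing(components) + 10.0 * variation(comp_params, p)
    return min(candidates, key=score)
\end{verbatim}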
\nocite{*}
\section{Introduction}
Cyber-physical systems (CPSs) work under uncertain conditions and are expected to handle changes autonomously. A CPS may encounter a number of changes during its operating life-cycle: each fluctuation in resource access or variation in requirements is considered a change. If a change occurs, the system must adapt itself to the new situation without human intervention, which is why self-adaptation is required. Self-adaptation is the process of deciding about possible actions against incoming changes when the analysis of the system reveals that the change may cause a violation of what the system is intended to do, or that a better functionality is available [], as described by a set of predefined properties. A self-adaptive system is composed of the environment (i.e. the components over which we have no control), the local system (i.e. controllable resources), and a feedback control loop. The changes mainly come from the environment (and from a few parts of the controllable resources), and the local system is affected by those changes. One may deploy rigorous techniques such as model checking in order to verify the compliance of the system with the predefined properties.
Model-driven engineering for autonomous and self-adaptive systems is widely adopted by the community [] because the behavior of the system can be analyzed using rigorous techniques. For a suitable adaptation, we therefore need a model with which to analyze the whole local system and make an appropriate decision for the system. In self-adaptive systems, a closed loop is responsible for controlling the entire adaptation process, including the Monitoring, Analyzing, Planning, and Executing steps, commonly called the MAPE loop. The model is constructed in the monitoring phase and the system decides about the model in the analyzing phase. The decision-making process is the main result of the analyzing step, and it aims at selecting a number of adaptation actions against the changes. In fact, the self-adaptive system verifies the model against a set of properties and synthesizes a number of actions for the next rounds. As we need sequential decision-making in the local system, we model it using a Markov Decision Process (MDP). Further, we reflect the changes from the environment as the valuation of a set of parameters. Since the environment model has probabilistic variables that are subject to change, we need a parametric MDP to model the entire self-adaptive system.
With every change, we have to reconstruct the model because the parameters of the MDP change. Each evaluation of the parameters leads us to reconstruct the model and, consequently, to re-verify it against the properties. As we work with a changing model, we need to perform quantitative verification at runtime []. But runtime verification is difficult to apply given the well-known state-space explosion problem [], resource constraints, and timing limitations for decision making. As the number of states in real-world applications of self-adaptive systems is relatively large, deploying approximation/efficiency techniques is necessary []. One may use an incremental approximation technique that divides the concrete model into a set of strongly connected components (SCCs) [] and approximates each SCC independently. The main contribution of this research is to improve the scalability of the approximation technique so that the number of SCCs in the model is increased. We do this using a strategy synthesis technique that aims at refining the number of available actions in each state of the MDP. As a result of this technique, in some situations a number of actions is excluded, which leads to eliminating a set of transitions through the MDP. It works like a pruning technique and helps to increase the number of SCCs in the model. To this end, the contributions of the research are as follows:
\begin{itemize}
\item We argue that by increasing the number of SCCs, the scalability of the verification at runtime is improved.
\item We synthesize a dynamic strategy for the self-adaptive system model that increases the number of extracted SCCs.
\item The dynamic strategy may introduce some error into the verification results. We argue that this error is controllable.
\item We synthesize the best strategy, taking into account the error and the requirements of the system, with the minimum error on decisions.
\end{itemize}
\section{Background}
This section will be written later with formal definitions and background.
\section{Application Description}
To implement these ideas, and to check the final result in order to make a proper decision for the whole system in the next few steps, we need an application. This application must exhibit either environmental or local changes, although in this research we mainly need environmental changes. These changes make the system uncertain, which highlights the role of self-adaptation. In this section, we briefly introduce the application.
Suppose we have an energy site and a set of sensors that use central solar batteries as their source of energy. It is assumed that there is no energy waste and that all of the harvested energy is saved in the battery. The harvested energy fluctuates over the hours of the day and the months of the year, and the system should use it throughout the day. Considering the amount of solar irradiance at different times of the day, if the system wants to use the harvested energy as its energy source, it must be able to control the usage. Each sensor can operate in several operational modes that can be adjusted based on the energy expected to be harvested from the environment. If the expected harvested energy changes, the self-adaptive system must adjust the power modes of the sensors in order to balance the power consumption of the sensors with the expected energy harvest. The environment thus imposes the change on the local system: the difference in solar irradiance is the change in this system. The sensors should change their power modes in order to save energy, for example for the next 2 hours.
The model of the environment is expected to show how much energy can be harvested from the solar irradiance. Each state in the model denotes the expected value of the solar irradiance. The model of the environment is a DTMC with parameters on its transitions; each transition occurs on an hourly basis. The parameters are updated according to the newly monitored data from the environment. The first state is known and has a real-valued estimate, while the other states are estimated based on the previous data. Since the algorithm for discretizing the model is accurate for at most 3 steps, it is recommended to predict the harvested energy for the next 3 hours.
Furthermore, the local model is composed of a number of sensors and a central battery that can store the energy harvested from the environment for long-term use. Each sensor has four operational modes: Busy, Idle, Standby, and Sleep, each with a specific power consumption value. Transitions between modes also consume some energy. Sleep is the lowest energy-consumption mode and Busy is the highest. The local system is modelled by an MDP to reflect sequential decision-making and non-determinism. Ideally, all sensors operate in the high power modes (Busy and Idle): if the harvested energy is high, all sensors can work in their high power modes, but otherwise some sensors should work in their low power modes to balance the whole system.
The battery model consists of three operational states, categorized according to the current charge of the battery: High level, Regular level, and Low level. We always want to keep the system in the regular mode to maintain the trade-off between the energy consumed and the energy harvested.
Keeping this information about the environment and the local models in mind, the system states are composed of the combination of the environment, the local sensors, and the battery model. This model is a parametric MDP whose parameters are evaluated by the environment model and updated with the new data collected in each round of the MAPE loop. If we consider 20 sensors in the local system, a very simple calculation shows that we face a state-space explosion. This is exactly the setting our research topic applies to: by partitioning the states into more SCCs, we can increase the efficiency of the system. []
\section{Problem Definition}
Assumptions:
\begin{itemize}
\item Parametric MDP ($\theta$) $\rightarrow$ parameter $\rightarrow$ change
\item Approximation $\rightarrow$ SCC/sub-model $\rightarrow$ incremental approximation
\item Verification result $\rightarrow$ decision-making $\rightarrow$ strategy
\item Verification property $\rightarrow$ maximizing the total reward, i.e. the utility (total reward), in the next few workflows
\end{itemize}
Problem Formulation:
Consider an MDP $M(S,A,V,R,P)$ with a strategy $\sigma \colon S \to \mathcal{D}(A)$. Assume that the MDP is divided into a set of individual components $C=(C_1,C_2,\ldots, C_i)$, e.g. strongly connected components (SCCs). There is an aggregation mechanism for computing the final verification result of each component, calculated by $RQV(M,\Phi_i)$ []. To deal with the dynamicity of the environment and the local system, we need to define some parameters (e.g. $\theta$) to analyze the system properly; every variation in $\theta$ is a sign of a change in the system. Suppose a parametric MDP $M(S,A,V,R,P)$ models a system with parameter $\theta$. Each evaluation of the variables $V$ gives a new inference of $\theta$ in each round of execution of the model. To decide on a new strategy $\sigma$ when confronting a change, we again calculate $RQV(M,\Phi_i)$. Since the size of the model is large, we approximate it; in other words, approximation techniques help us reduce the size of the model so that it becomes possible to verify the approximate model at runtime. In order to verify the model, we only have to use the verification result of each component with respect to the verification property. In addition, a utility $U$ is defined for each state; by transiting to other possible states, this value may change and affects the total reward.
We define $\beta$ as a subset of $A$ from which the MDP is allowed to exclude a few or all actions from its choices, since this can control the value of the utility of the system: if the utility deviates from what we expect and decreases, we can use this subset to eliminate some actions. In fact, choosing a strategy for the MDP imposes some restrictions on how the MDP chooses the action(s) in each state from the set $\beta$. For example, if the strategy excludes action $a\in A$ in state $s_1$, it may cause some transitions to be eliminated. This has a certain impact on the verification result: if the eliminated transition(s) belong to the path that maximizes the total reward of the system, it might cause problems for the whole system. The main problem of the research is to synthesize an appropriate strategy $\sigma$ with respect to the excluded action set $\beta \subset A$ such that the number of components is increased.
Problems:
\begin{enumerate}
\item Consider a parametric MDP $M$ with a set of components $C$ and a set of excluded action(s) $\beta$; propose a strategy $\sigma$ for eliminating some paths through the MDP so as to increase the number of SCCs.
\item Given a strategy $\sigma$ for an MDP $M$, how much error $E(\beta)$ does the strategy introduce into the verification result $U^C$, where $E(\beta)=U^C-U^\sigma$?
\end{enumerate}
\bibliographystyle{plain}
\section{Introduction}
Classical lattice models attract attention nowadays for several reasons. The classical Heisenberg model
is frequently used in Monte Carlo simulations of nonlinear sigma models \cite{Justin},
and also for modeling real compounds \cite{EuX,EuX2,Stenli} and other systems \cite{ptice,grain}.
In the recent paper \cite{nas}, a multipath Metropolis simulation of
the $O(3)$ classical Heisenberg model was introduced.
Since the multipath approach is embarrassingly parallelizable, it easily utilizes
the computing power of any number of computing elements and provides
normally distributed results with the desired precision.
One of the main advantages of the multipath Metropolis simulation is its applicability
to many different classical lattice models,
such as the Ising \cite{Montrol,San,Izing} and Potts \cite{Wu,Glumac} models.
The multipath approach also allows complete control over the simulation, in the sense that
it is possible to conduct a ``short simulation''\footnote{A simulation with just a few
simulation paths that can be conducted in a short period of time.}
in order to make a reasonable estimate.
The simulation precision can later be improved incrementally with additional,
subsequently computed results.
This is of great practical importance, as it turns out that the optimal simulation parameters
(the number of lattice sweeps and the number of simulation paths)
strongly depend on the temperature and the lattice size.
The simulation results presented in this paper were
computed using the free C++ software library ``Hypermo'' \cite{web-Hypermo}
on the computing services of the
Supercomputing Center of Galicia (CESGA) \cite{Cesga}.
The figures were created using the ``Tulipko'' \cite{web-tulip}
interactive visualization tool.
\section{Model and simulation}
The Hamiltonian of classical $O(3)$ Heisenberg model is
\begin{equation}
H=-\frac{J}{2}\sum_{\bm{n},\bm{\lambda}}\bm{S}_{\bm{n}}\cdot\bm{S}_{\bm{n}+\bm{\lambda}}
\label{ham},\end{equation}
where the summation is taken over all sites $\{\bm{n}\}$ of a simple cubic lattice with $N=L^3$ sites in total, and $\bm{\lambda}$
connects a given site to its nearest neighbors.
The convenient energy scale is set by $J=k_{\mathsf{B}}=1$, and
we use the standard spherical parametrization for the spin vectors
\begin{equation} \bm{S}_{\bm{n}}=[\sin \theta_{\bm{n}} \cos \varphi_{\bm{n}},\sin \theta_{\bm n}\sin \varphi_{\bm{n}},\cos \theta_{\bm{n}}]^{\mathsf{T}}.\end{equation}
The quantities of interest are the total spin
\begin{equation} \bm{M}=\frac{1}{N}\sum_{\bm{n}}\bm{S_{\bm{n}}},\end{equation}
whose average value is the magnetization $\langle \bm{M}\rangle$,
the internal energy of the system $\langle H \rangle$, the magnetic susceptibility
\begin{equation} \chi(T)=\frac{L^3}{T} \left[\langle |\bm M|^2 \rangle-\langle |\bm M| \rangle^2 \right],\label{chi} \end{equation}
and the heat capacity
\begin{equation} C_V(T)=\frac{L^3}{T^2}\left[\langle H^2 \rangle-\langle H \rangle^2 \right].\label{cv}\end{equation}
Since there can be no spontaneous symmetry breaking on finite
lattices, the magnetic susceptibility is defined using the magnitude
\begin{equation} |\bm{M}|=\frac{1}{N}\Big{|}\sum_{\bm{n}}\bm{S_{\bm{n}}}\Big{|}.\label{magnetic-susc}\end{equation}
In the multipath approach, each simulation consists of a certain number $\mathcal{N}$ of
simulation paths (SPs). Each SP produces an output, and the outputs of all
$\mathcal{N}$ SPs together form the simulation output (SO). Monte Carlo averages
are then computed as
\begin{equation} \langle A \rangle=\frac{1}{\mathcal{N}}\sum_{i=1}^{\mathcal{N}}A_i \label{MC}\end{equation}
and $\chi$ and $C_V$ are calculated from (\ref{chi}) and (\ref{cv}).
It should be noted that all thermodynamic quantities in the paper are
calculated per lattice site.
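For reference, the core of a single SP can be sketched in a few lines of Python (a minimal illustration, independent of the Hypermo library; no optimization is attempted):
\begin{verbatim}
import numpy as np

def random_spins(L, rng):
    # uniformly distributed unit vectors on the two-sphere, one per site
    phi = rng.uniform(0.0, 2.0 * np.pi, (L, L, L))
    cos_t = rng.uniform(-1.0, 1.0, (L, L, L))
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.stack([sin_t * np.cos(phi),
                     sin_t * np.sin(phi), cos_t], axis=-1)

def sweep(S, T, rng):
    # one Metropolis lattice sweep for H = -(J/2) sum S_n . S_{n+lambda},
    # with J = k_B = 1 and periodic boundary conditions
    L = S.shape[0]
    for _ in range(L**3):
        i, j, k = rng.integers(0, L, 3)
        h = (S[(i+1) % L, j, k] + S[(i-1) % L, j, k]
           + S[i, (j+1) % L, k] + S[i, (j-1) % L, k]
           + S[i, j, (k+1) % L] + S[i, j, (k-1) % L])
        new = random_spins(1, rng)[0, 0, 0]    # trial direction
        dE = -np.dot(new - S[i, j, k], h)      # energy change
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            S[i, j, k] = new

def simulation_path(L, T, n_sweeps, seed, ordered=False):
    # one SP: returns the total spin M of the final lattice state
    rng = np.random.default_rng(seed)
    if ordered:
        S = np.zeros((L, L, L, 3)); S[..., 2] = 1.0
    else:
        S = random_spins(L, rng)
    for _ in range(n_sweeps):
        sweep(S, T, rng)
    return S.reshape(-1, 3).mean(axis=0)
\end{verbatim}
Each call with a distinct seed yields one statistically independent contribution to (\ref{MC}).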
A multipath Metropolis simulation can be easily visualized
in the phase space of the lattice, which is
the direct product of the two-spheres $\mathcal{S}^{\mathbf 2}$
located
at the lattice sites\footnote{The state of each site is determined by
two angles, $\varphi \in[0,2\pi]$
and $\theta \in [0,\pi]$, and thus the
dimension
of the phase space is $\dim(PS)=2L^3$.}.
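Given the final states of the $\mathcal{N}$ independent paths, the estimators (\ref{MC}), (\ref{chi}) and (\ref{cv}) follow directly (again only a sketch; \texttt{M\_samples} and \texttt{E\_samples} collect the per-site total spin and energy of each final state):
\begin{verbatim}
import numpy as np

def observables(M_samples, E_samples, L, T):
    # multipath estimators: plain averages over the N independent SP
    # outputs, plus the fluctuation formulae for chi and C_V
    Mabs = np.linalg.norm(np.asarray(M_samples), axis=1)  # |M| per path
    E = np.asarray(E_samples)
    chi = L**3 / T * (np.mean(Mabs**2) - np.mean(Mabs)**2)
    c_v = L**3 / T**2 * (np.mean(E**2) - np.mean(E)**2)
    return np.mean(Mabs), np.mean(E), chi, c_v
\end{verbatim}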
\begin{figure}[ht]
\includegraphics[width=7cm]{ps.pdf}
\caption{(Color online) Illustration of the lattice phase space trajectories
in the multipath simulation at low temperature for random initial state.
Each line represents one path.}
\label{Fig:multipath}
\end{figure}
Figure \ref{Fig:multipath} illustrates a
multipath simulation in the lattice phase space ($PS$) at
low temperature with a random initial state.
Every curve represents a single path through the lattice phase space.
Each path starts from some random state of the lattice and contributes
a single result (the final state of that path) to (\ref{MC}).
In contrast to the single--path simulation, there is no correlation
between the multipath SP outputs,
so standard statistical analysis can be applied to them (see \cite{nas}
for a detailed discussion).
Note that the existence of two limit points
in phase space is a consequence of the finite lattice size \cite{nas,Binder81}.
\begin{figure}[H]
\includegraphics [width=7.5cm]{Fig1.pdf}
\centering
\caption{(Color online) Magnetization as a function of temperature for $L=10$ in the single--path
and multipath approach. }
\label{Fig:mag}
\end{figure}
\section{Results and discussion}
All simulations were conducted for a system of linear size
$L=10$ with
periodic boundary conditions, in both the single--path and the multipath approach.
In the single--path approach we used $2\times 10^6$ lattice sweeps to achieve
thermal equilibrium over the whole temperature range; afterwards, only one out
of every five lattice sweeps was used to calculate the averages of the
physical quantities \cite{Kitaev}. At every temperature, $5 \times 10^5$ measurements were averaged.
To make sure that reliable results are generated by the multipath simulation, it is prepared
in two different setups. In the first one, referred to in the text as the random initial state simulation,
both angles $\theta$ and $\varphi$ are taken to be arbitrary at every lattice site. In the second one,
denoted as the ordered initial state simulation, all spins are taken to point along the z-axis, with no restriction
on the second spherical angle $\varphi$.
We have to bear in mind, however, that multipath simulations naturally split
into three temperature domains, in which different numbers of lattice sweeps/simulation paths are needed.
In the low temperature region, more lattice sweeps are needed for simulation convergence
(see \cite{nas}), since all paths start
from some random state of the lattice.
(The simulation speed can be optimized if
an ordered state is taken to be the ``starting point'' of all paths.)
On the other hand, the high temperature region
requires more simulation paths. In the critical region
we take a sufficiently large number of lattice sweeps and simulation paths, since the results
are affected by the overlapping of the two different output distributions \cite{Binder81}.
\begin{figure}[H]
\centering
\includegraphics [width=7.5cm]{Fig2.pdf}
\caption{(Color online) Energy as a function of temperature for
$L=10$ in the single--path and multipath approach. }
\label{Fig:e}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics [width=6.0cm]{Fig3.pdf}
\caption{(Color online) Magnetic susceptibility as a function of temperature for
$L=10$ in the single--path and multipath approach. }
\label{Fig:susc}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics [width=6.0cm]{Fig5.pdf}
\caption{(Color online) Heat capacity as a function of temperature for
$L=10$ in the single--path and multipath approach. }
\label{Fig:tcap}
\end{figure}
From Figs. 2--5, we note that the differences in the thermodynamic characteristics
obtained by the single--path and multipath approaches are negligible.
\begin{figure}[H]
\includegraphics [width=5.0cm]{Fig6.pdf}
\centering
\caption{(Color online) Magnetization, calculated starting
from both ordered and disordered states, as a function
of number of lattice sweeps for $\mathcal{N}=10^4$, at $T=1$.}
\label{Fig:ee}
\end{figure}
\begin{figure}[H]
\includegraphics [width=8.5cm]{Fig7.pdf}
\centering
\caption{(Color online) Magnetization, calculated starting
from ordered state, as a function
of number of lattice sweeps for different number of simulation paths, at $T=1$, for
$\mathcal{N}=10,100,1000$ and $10000$.
}
\label{Fig:eee}
\end{figure}
The number of lattice sweeps needed for a lattice to reach its representative state
(also called the burn-in or warm--up phase) is not known in advance.
It depends on many parameters and can vary substantially, and an
insufficient number of lattice sweeps causes inaccurate simulation results.
To overcome this problem, for each temperature
half of the simulation paths are computed from the random initial state,
while the other half start from the ordered state.
These two sets are averaged using (\ref{MC}), but the results from each half are kept separate.
When both halves produce the same result (Figure \ref{Fig:ee}), we can be reasonably certain
that it is an accurate value.
The total spin distribution at $T=1$, with
$5\times 10^3$ lattice sweeps and $10^4$ simulation paths, is presented in Figure \ref{Fig:goredole},
where every path starts from a random lattice configuration. From all those measurements the magnetization
is obtained (see the gray line in Figure \ref{Fig:ee}).
In contrast, the total spin
distributions in Figs.~\ref{Fig:gore} and \ref{Fig:dole} are
obtained from multipath simulations in which every path started
from an ordered state. Both sets of measurements,
one from Figure \ref{Fig:gore}
and the other from Figure \ref{Fig:dole}, give the same value of the magnetization
(blue line in Figure \ref{Fig:ee}).
The multipath simulation of the $O(3)$ classical Heisenberg model
shows a phase transition from the ordered ferromagnetic phase to the
paramagnetic phase at the temperature $T_c=1.442(20)$ (see \cite{nas}).
%
\begin{figure}[H]
\includegraphics [width=14.0cm]{goredole1.pdf}
\centering
\caption{(Color online) Distribution of total spin
at $T=1$, for $5\times 10^3$ lattice sweeps and $10^4$ simulation paths. Every path
started from a random spin configuration, where both angles
$\theta$ and $\varphi$ are taken to be random. }
\label{Fig:goredole}
\end{figure}
%
\begin{figure}[hbt]
\includegraphics [width=7.0cm]{gore1.pdf}
\centering
\caption{(Color online) Distribution of total spin
at $T=1$, for $5\times 10^3$ lattice sweeps and $10^4$ simulation paths.
Every path started from an ordered configuration, with $\theta=0$ and
$\varphi$ arbitrary. }
\label{Fig:gore}
\end{figure}
\begin{figure}[hbt]
\includegraphics [width=13.0cm]{dole1.pdf}
\centering
\caption{(Color online) Distribution of total spin at
$T=1$, for $5\times 10^3$ lattice sweeps and $10^4$ simulation paths.
Every path started from an ordered configuration, with $\theta=\pi$ and
$\varphi$ arbitrary. }
\label{Fig:dole}
\end{figure}
To demonstrate the applicability of the multipath approach, we examined the
thermodynamic properties of the classical Heisenberg model and
compared them with the results obtained from the conventional single--path approach.
As expected, the results are in good agreement.
The multipath approach produces statistically independent results
to which standard statistical methods can be applied \cite{nas}.
Therefore, it is possible to conduct a ``short simulation''
for a quick qualitative analysis (Figure \ref{Fig:eee}),
which can be of great importance in the study of new models.
\section*{Acknowledgments}
This work was supported by the Serbian Ministry of
Education and Science under Contract No. OI-171009.
The authors acknowledge the use of the Computer Cluster of the
Galicia Supercomputing Centre (CESGA).
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
In a recent paper [\citet{lfl1}, hereafter Paper I], we
proposed a novel mechanism for producing misalignment between the spin
axis of a protostar and the normal vector of its circumstellar
disc. Our work was motivated by recent measurements of the
sky-projected stellar obliquity using the Rossiter-McLaughlin effect
in transiting exoplanetary systems, which showed that a large fraction
of the systems containing hot Jupiters have misaligned stellar spin
with respect to the planetary angular momentum axis [see \citet{triaud};
\citet{win3} and references therein]. Additional evidence for
nonzero stellar obliquity came from the statistical analysis of the
apparent rotational velocities ($v\sin i_\star$) of planet-bearing
stars \citep{sch1}.
The basic mechanism (``Magnetically driven misalignment'') for
producing spin -- disc misalignment in accreting protostellar systems
can be summarized as follows (Paper I). The magnetic field of a
protostar (with $B_\star\go 10^3$~G) penetrates the inner region of
its accretion disc. These field lines link the star and the disc in a
quasi-cyclic fashion (e.g., magnetic field inflation followed by
reconnection; see~\cite{bou} and~\cite{ale} for observational evidence). Differential rotation between the star and the disc
not only leads to the usual magnetic braking torque on the disc, but
also a warping torque which tends to push the normal axis of the
inner disc away from the spin axis\footnote{The warping torque vanishes
if the angular momentum of the disc is exactly aligned with the stellar spin, but exists for
arbitrarily small angles. A flat disc in the aligned configuration is thus in an unstable equilibrium.}. Hydrodynamical
stresses in the disc, on the other hand, tend to inhibit significant disc warping. The
result is that, for a given disc orientation imposed at large radii
(e.g., by the angular momentum of the accreting gas falling onto the
disc), the back-reaction of the warping torque can push the stellar
spin axis toward misalignment with respect to the disc normal vector.
Planets formed in the disc will then have a misaligned
orbital normal axis relative to the stellar spin axis, assuming that
no evolution mechanism occurring after the dissipation of the disc forces the alignment of the system.
The process of planetary system formation can be roughly divided into
two stages \citep{jt1}. In the first stage, which lasts a
few million years until the dissipation of the gaseous protoplanetary
disc, planets are formed and undergo migration due to tidal
interactions with the gaseous disc [\citet{lin1}; see \citet{pap1} for a
review]. The second stage, which lasts from when the disc has
dissipated to the present, involves dynamical gravitational
interactions between multiple planets, if they are produced in the
first stage in a sufficiently close-packed configuration \citep{jt1,cha1},
and/or secular interactions with a distant
planet or stellar companion \citep{eke1,wm1,ft1,wu1,nib1}.
The eccentricity distribution of exoplanetary systems and
the recent observational results on the spin -- orbit misalignment
suggest that the physical processes in the second stage
play an important role in determining the properties of
exoplanetary systems. Nevertheless, the importance of the first
stage cannot be neglected as it sets the initial condition for the possible
evolution in the second stage.
Our result in Paper I shows that at the end of the first stage,
the symmetry axis of the planetary orbit may be inclined with respect to
the stellar spin axis.
At first sight, it may seem strange that the magnetic field effects
can drive the stellar spin axis toward misalignment with respect to
the disc symmetry axis, given that the spin angular momentum of the
star ultimately comes from the disc and the disc contains a large
reservoir of angular momentum. The key to understanding this is to
realize that when the gas reaches the magnetosphere boundary, its
angular momentum is much smaller than in the outer disc
(the specific angular momentum of the disc is $j_{\rm disc}(r) = \sqrt{GMr}$
for a Keplerian disc), and any magnetic torque, which in
general can break the axisymmetry of the system, is of the same order
of magnitude as the accretion torque on the star.
A key assumption adopted in Paper I for the calculation of
the magnetic torque on the star from the disc is that the disc is flat.
This is a nontrivial assumption. Indeed, the magnetic coupling between the
star and the disc operates only in the innermost disc region (e.g., between
the inner radius $r_{\rm in}$ and $r_{\rm int} \approx 1.5r_{\rm in}$), and this region has a much
smaller moment of inertia than the star. Therefore, if there were no coupling
between this inner disc region and the outer disc,
the inner disc would be significantly warped on a timescale much shorter
than the timescale for changing the stellar spin
\citep{pl1}. If there is any secular
change in the stellar spin direction, the inner disc warp would then follow
the varying spin axis.
Clearly, in order to determine the long-term spin evolution of the star,
it is important to understand the dynamics of the warped disc, taking into
account the magnetic torques on the inner disc and the hydrodynamical
coupling between different disc regions. This is the goal of our paper.
To be more specific, there is a hierarchy of timescales related to the combined
evolution of the stellar spin and the disc warp:
(i) The dynamical time $t_{\rm dyn}$
associated with the spin frequency $\omega_s$, disc rotation frequency $\Omega$ and
the beat frequency $|\omega_s-\Omega|$. This is much shorter than the effects
(steady-state disc warping and spin evolution) we study in this paper.
(ii) The warping/precession timescale of the inner disc
[see Eq.~(\ref{eqn:Gamma_w})]
\begin{eqnarray}
&&t_w\sim \Gamma_w^{-1}=\left(92\,{\rm days}\right) \left({1~{\rm kG}\over
B_\star}\right)^{\!2} \left({2R_\odot\over R_\star}\right)^{\!6}
\left({M_\star\over 1\,M_\odot}\right)^{\!1/2}\nonumber\\
&&\qquad
\times \left({r_{\rm in}\over 8R_\odot}\right)^{\!11/2}
\left({\Sigma\over 10\,{\rm g\,cm}^{-2}}\right)
\left(\zeta\cos\theta_\star\right)^{-1},
\label{eq:twarp}\end{eqnarray}
where $M_\star,\,R_\star,\,B_\star$ are the mass, radius and surface (dipole)
magnetic field of the protostar, respectively, $\theta_\star$ is the
inclination angle of the stellar dipole relative to the spin,
$\Sigma$ is the disc surface density, and $\zeta$ is a dimensionless magnetic twist parameter
of order unity related to the strength of the azimuthal magnetic field generated by star-disc twist.
(iii) The disc warp evolution timescale $t_{\rm disc}$. This is the time for the
disc to reach a steady-state under the combined effects of magnetic torques
and internal fluid stresses (see Section 5). For high-viscosity discs, $t_{\rm disc}$
is the viscous diffusion time for the disc warp [see Eq.~(\ref{eq:tvis})] and depends
on the viscosity parameter $\alpha$ and the disc thickness $\delta=H/r$:
\begin{equation}
t_{\rm vis}\sim (3000\,{\rm yrs})\left({\alpha\over 0.1}\right)
\left({\delta\over 0.1}\right)^{-2}\!\!\left({r\over 100\,{\rm AU}}\right)^{3/2}.
\end{equation}
For low-viscosity discs ($\alpha\lo \delta$), $t_{\rm disc}$ is the propagation time
of bending waves across the whole disc and depends on the sound speed.
In general, $t_{\rm disc}$ can be several orders of magnitude larger than
$t_w$.
(iv) The stellar spin evolution timescale. The magnetic misalignment
torque on the star is of order $\mu^2/r_{\rm in}^3$ ($\mu$ being the magnetic dipole moment of
the star), which is comparable to the fiducial accretion torque, given for Keplerian discs by ${\cal N}_0=\dot M\sqrt{GM_\star r_{\rm in}}$. Assuming the spin angular momentum
$J_s=0.2M_\star R_\star^2\omega_s$ (the value for a $\Gamma=5/3$ polytrope,
representing a convective star), we find the spin evolution time
\begin{eqnarray}
&&t_{\rm spin}={J_s\over {\cal N}_0}
=(1.25\,{\rm Myr})\left(\!{M_\star\over 1\,M_\odot}\!\right)
\!\!\left({{\dot M}\over 10^{-8}{M_\odot}{\rm yr}^{-1}}\right)^{\!-1}
\nonumber\\
&&\qquad \times \left(\!{r_{\rm in}\over 4R_\star}\!\right)^{\!-2}
\!\!{\omega_s\over\Omega(r_{\rm in})}.
\label{eq:tspin}\end{eqnarray}
In general $t_{\rm spin}\gg t_{\rm disc}$. In this paper we will study
the evolution of the disc warp on timescales ranging from $t_w$ to $t_{\rm disc}$,
and the evolution of the stellar spin direction on timescales of order $t_{\rm spin}$.
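As a quick numerical illustration of this hierarchy (a sketch, not part of the analysis), the fiducial values in Eqs.~(\ref{eq:twarp})--(\ref{eq:tspin}) can be reproduced in a few lines of Python; the closure $\nu_2\simeq \nu_1/(2\alpha^2)$ adopted for the warp diffusion time is an assumption consistent with the prefactor quoted above:
\begin{verbatim}
import math

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10     # cgs units
day, yr = 86400.0, 3.156e7

def t_warp(B=1e3, Rstar=2*Rsun, M=Msun, r_in=8*Rsun, Sigma=10.0):
    # inner-disc warping time t_w = Gamma_w^{-1}, with the zeta and
    # cos(theta_star) factors set to unity
    mu = B * Rstar**3                            # stellar dipole moment
    Omega = math.sqrt(G * M / r_in**3)           # Keplerian rate at r_in
    return 4.0 * math.pi * r_in**7 * Omega * Sigma / mu**2 / day

def t_vis(alpha=0.1, delta=0.1, r=100 * 1.496e13, M=Msun):
    # warp diffusion time t ~ r^2/nu_2 with nu_2 ~ nu_1/(2 alpha^2),
    # i.e. t ~ 2 alpha / (delta^2 Omega)
    return 2.0 * alpha / (delta**2 * math.sqrt(G * M / r**3)) / yr

def t_spin(M=Msun, Mdot=1e-8 * Msun / yr, Rstar=2*Rsun, r_in=8*Rsun):
    # spin evolution time J_s/N_0 for omega_s = Omega(r_in), with
    # J_s = 0.2 M Rstar^2 omega_s and N_0 = Mdot sqrt(G M r_in)
    return 0.2 * M * Rstar**2 / (Mdot * r_in**2) / yr

print(t_warp(), t_vis(), t_spin())   # ~92 d, ~3.2e3 yr, ~1.25e6 yr
\end{verbatim}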
It is important to note that we are not interested
in disc warpings that vary on the dynamical timescale $t_{\rm dyn}$ in this paper.
In general, when the stellar dipole axis is inclined with respect to the
spin axis, there will be periodic vertical forces at the rotation
frequency of the star acting on the inner disc\footnote{The forcing frequency
may also be twice of the spin frequency under certain conditions
(e.g., when the disc is partially diamagnetic); see \citet{lz1}.}.
These periodic forces will lead to the warping of the disc, particularly
for low-viscosity discs in which bending waves propagate
\citep{tp1,lz1}. Indeed, there
is observational evidence for such magnetically-warped discs.
For example, the recurrent luminosity dips observed in the classical T Tauri
star AA Tauri has been attributed to the periodic occultation of the central
star by a warped inner disc \citep{bou}.
However, such dynamical disc warps average exactly to zero over a rotation
period and have no effect on the secular evolution of the system.
The remainder of the paper is organized as follows. In Section
\ref{sec:analytic}, we summarize our analytical model of magnetosphere
-- disc interaction and derive the equation for the evolution of the
stellar spin axis when the disc is warped. In Section 3 we present
theoretical formalisms for determining the steady-state and time
evolution of warped discs, for both high-viscosity regime (where warps
propagate diffusively) and low-viscosity regime (where warps propagate
as bending waves). An approximate analytical expression for the
steady-state linear warp is also derived (see Section 3.2.2).
In Section 4 we present numerical results for the steady-state disc warp profiles
under various conditions and in Section 5 we study the time evolution of disc warps.
We examine in Section 6 how the inner disc warp and the stellar spin evolution
respond to variations of the outer disc, and discuss in Section 7 how this could, in principle, lead
to anti-aligned planetary orbits, even for discs with initial angular momentum nearly aligned
with the stellar spin. We conclude in Section 8 with
a discussion of our results.
\section{Analytic Model of the Disc -- Magnetic Star System}
\label{sec:analytic}
\subsection{Magnetic Torques on the Disc}
The interaction between a magnetic star and a disc is complex
(see references in Paper I).
However, the key physical effects of this interaction on the disc
can be described robustly in a parametrized manner. The model
used throughout this paper is detailed in Paper I. Here, we will limit
ourselves to a brief summary of the magnetic torques acting on the disc.
The stellar magnetic field disrupts the accretion disc at the
magnetospheric boundary, where the magnetic and plasma stresses balance.
For a dipolar magnetic field with magnetic moment $\mu$, we have
\begin{equation}
r_{\rm in}=\eta \left({\mu^4\over GM_\star\dot M^2}\right)^{1/7},
\label{alfven}\end{equation}
where $\eta$ is a dimensionless constant somewhat less than unity ($\eta
\sim 0.5$ according to recent numerical simulations; see Long et
al.~2005~\footnote{In the notation of Long et al., $\eta=k_A/2^{1/7}$ and $k_A=1$
corresponds to the solution for spherical accretion}). We take $r_{\rm in}$ to be the inner edge
of the disc. Before
being disrupted, the disc generally experiences nontrivial magnetic
torques from the star (Lai 1999; Paper I).
Consider a cylindrical coordinate
system $(r,\phi,z)$, with the vertical axis $Oz$ orthogonal to the plane of the disc.
The magnetic torques are of two types:
(i) A warping torque ${\bf N}_w$ which acts in a small interaction
region $r_{\rm in} < r < r_{\rm int}$, where some of the stellar field
lines are linked to the disc in a quasi-cyclic fashion (involving
field inflation and reconnection).
These field lines are twisted by
the differential rotation between the star and the disc, generating
a toroidal field $\Delta B_\phi=\mp \zeta B_z^{(s)}$ from the quasi-static
vertical field $B_z^{(s)}$ threading the disc, where $\zeta\sim 1$~\citep{aly2,lov} and
the upper/lower sign refers to the value above/below the disc
plane. Since the toroidal field from the stellar dipole
$B_\phi^{(\mu)}$ is the same on both sides of the disc plane, the net
toroidal field $B_\phi=B_\phi^{(\mu)}+\Delta B_\phi$ differs above and below the disc plane, giving rise to
a vertical force on the disc. While the mean force (averaging over the
azimuthal direction) is zero, the uneven distribution of the force
induces a net warping torque which tends to push the orientation of
the disc angular momentum $\hat{\mbox{\boldmath $l$}}$ away from the stellar spin axis
$\hat{{\mbox{\boldmath $\omega$}}}_s$ (see Paper I for a simple model for this effect, involving a metal plane in an
external magnetic field). (ii) A precessional torque ${\bf N}_p$ which
arises from the screening of the azimuthal electric current induced in the
highly conducting disc. This results in a difference in the radial
component of the net magnetic field above and below the disc plane
and therefore in a vertical force on the disc. The resulting
torque tends to cause $\hat{\mbox{\boldmath $l$}}$ to precess around
$\hat{\mbox{\boldmath $\omega$}}_s$. In Paper I, we parametrized the two magnetic torques
(per unit area) on the disc as
\begin{eqnarray}
{\bf N}_w &=& -(\Sigma r^2\Omega)\cos\beta\,
\Gamma_w \,\hat{\mbox{\boldmath $l$}}\times(\hat{{\mbox{\boldmath $\omega$}}}_s\times\hat{\mbox{\boldmath $l$}}),\label{eq:torquew}\\
{\bf N}_p&=&(\Sigma r^2\Omega)\cos\beta\,
\Omega_p \,\hat{{\mbox{\boldmath $\omega$}}}_s\times\hat{\mbox{\boldmath $l$}},
\label{eq:torque}\end{eqnarray}
where $\Sigma(r)$ is the surface density, $\Omega(r)$ the
rotation rate of the disc, and $\beta(r)$ is the disc tilt angle (the angle
between $\hat{\mbox{\boldmath $l$}} (r)$ and the spin axis $\hat{\mbox{\boldmath $\omega$}}_s$).
The warping rate and precession angular frequency at radius $r$ are
given by
\begin{eqnarray}
&&\Gamma_w (r)=\frac{\zeta\mu^2}{4\pi r^7\Omega(r)\Sigma(r)}\cos^2\theta_\star,
\label{eqn:Gamma_w}\\
&&\Omega_p (r)=\frac{\mu^2}{\pi^2 r^7\Omega(r)\Sigma(r)
D(r)}F(\theta_\star),
\label{eqn:Omega_p}
\end{eqnarray}
where $\theta_\star$ is the angle between the magnetic dipole axis and the
spin axis, and the dimensionless function $D(r)$ is given by
\begin{equation}
D(r)={\rm max}~\left(\sqrt{r^2/r^2_{\rm in}-1}, \sqrt{2H(r)/r_{\rm in}}\right),
\label{eqn:D(r)}
\end{equation}
with $H(r)$ the half-thickness of the disc.
The function $F(\theta_\star)$ depends on the dielectric properties of the disc.
We can write
\begin{equation}
F(\theta_\star)=2f\cos^2\theta_\star-\sin^2\theta_\star.
\end{equation}
If the stellar vertical field is entirely
screened out by the disc, the parameter $f=1$; if only the time-varying component
of that field is screened out, we get $f=0$. In reality, $f$ lies
between 0 and 1.
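To illustrate how strongly these torques are concentrated near the inner edge, the following Python sketch evaluates $\Gamma_w(r)$ and $\Omega_p(r)$ from Eqs.~(\ref{eqn:Gamma_w})--(\ref{eqn:Omega_p}) for a Keplerian disc, taking for simplicity a constant surface density and aspect ratio (illustrative inputs, not model predictions):
\begin{verbatim}
import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10     # cgs units

def torque_rates(r, mu, M=Msun, Sigma=10.0, r_in=8*Rsun,
                 delta=0.1, zeta=1.0, theta_star=0.0, f=0.5):
    Omega = np.sqrt(G * M / r**3)                # Keplerian rotation
    cos2 = np.cos(theta_star)**2
    Gamma_w = zeta * mu**2 * cos2 / (4.0 * np.pi * r**7 * Omega * Sigma)
    D = np.maximum(np.sqrt(np.clip(r**2 / r_in**2 - 1.0, 0.0, None)),
                   np.sqrt(2.0 * delta * r / r_in))   # H(r) = delta * r
    F = 2.0 * f * cos2 - np.sin(theta_star)**2
    Omega_p = mu**2 * F / (np.pi**2 * r**7 * Omega * Sigma * D)
    return Gamma_w, Omega_p

# both rates fall off roughly as r^{-11/2}: the torques act near r_in
mu = 1e3 * (2.0 * Rsun)**3                       # B_* = 1 kG, R_* = 2 Rsun
r = np.linspace(1.0, 2.0, 5) * 8.0 * Rsun
print(torque_rates(r, mu))
\end{verbatim}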
The magnetic torque formulae given above
contain uncertain parameters (e.g., $\zeta$, which
parametrizes the amount of azimuthal twist of the magnetic field
threading the disc); this is inevitable given the complicated nature
of magnetic field -- disc interactions.
Also, while the expression for the
warping torque [eq.~(\ref{eq:torquew})]
is formally valid for large disc warps, the expression for
the precessional torque was derived under the assumption that the disc is
locally flat [eq.~(\ref{eqn:Omega_p}) is strictly valid only for a
completely flat disc \citep{aly1}]; when this assumption breaks
down (i.e., when $|\partial\hat{\mbox{\boldmath $l$}}/\partial\ln r|$ is large), we expect
a similar torque expression to hold, but with modified numerical
factors (e.g. the function $D(r)$ in eq.~(\ref{eqn:Omega_p}) will be
different). In the application discussed in the following sections,
we find that the condition $|\partial\hat{\mbox{\boldmath $l$}}/\partial\ln r| \lo 1$ is always
satisfied. Thus we believe that our simple formulae
capture the qualitative behavior of accretion discs subject to
magnetic torques.
It is also worth noting that the expressions~(\ref{eq:torquew}-\ref{eq:torque}) for the
torques only correspond to the zero-frequency component of the magnetic forces acting on the
disc. The time varying components of these forces can also have significant effects. In particular,
\citet{lz1} discussed how the components of the magnetic forces varying at the stellar spin frequency and at twice that frequency can excite bending waves in discs, while \citet{tp1} showed
that if the star has a dipole field misaligned with its rotation axis, magnetic effects create a steady-state warp in a frame corotating with the star. However, these ``dynamical waves''
average to zero over the stellar rotation period and do not affect the secular evolution of the stellar spin. In this paper, we concern ourselves only with long-term effects, effectively studying a disc profile averaged over multiple stellar rotations.
\subsection{Spin Evolution of the Star}
\label{sec:spinevol}
The effects of the magnetic torques on the
evolution of the star -- disc system are twofold. First, they will
cause the orientation of the disc $\hat{\mbox{\boldmath $l$}}(r)$ to deviate from a flat
disc profile $\hat{\mbox{\boldmath $l$}}(r)=\hat{\mbox{\boldmath $l$}}_{\rm out}=\hat{\mbox{\boldmath $l$}}(r_{\rm out})$, set at the
outer disc radius $r_{\rm out}$. These deviations will be studied in
detail for different disc parameters in Sections 3-5.
Second, the back-reaction of the torques will
change the orientation of the stellar spin axis on a longer timescale.
The secular evolution of the stellar spin under the combined
effects of matter accretion and star -- disc interactions is
explored in Paper I in the case of flat discs. Here we
generalize the basic formulae derived in Paper I to warped discs.
In general, the spin angular momentum of the star, $J_s\hat{\mbox{\boldmath $\omega$}}_s$, evolves
according to the equation
\begin{equation}
{d\over dt}\left(J_s\hat{\mbox{\boldmath $\omega$}}_s\right)={\mbox{\boldmath ${\cal N}$}}=
{\mbox{\boldmath ${\cal N}$}}_l+{\mbox{\boldmath ${\cal N}$}}_s+{\mbox{\boldmath ${\cal N}$}}_w+{\mbox{\boldmath ${\cal N}$}}_p.
\label{spin}\end{equation}
Here ${\mbox{\boldmath ${\cal N}$}}_l$ represents the torque component that is aligned with the
inner disc axis $\hat{\mbox{\boldmath $l$}}(r_{\rm in})=\hat{\mbox{\boldmath $l$}}_{\rm in}$. We
parametrize ${\mbox{\boldmath ${\cal N}$}}_l$ by
\begin{equation}
{\mbox{\boldmath ${\cal N}$}}_l=\lambda\dot M (GM_\star r_{\rm in})^{1/2}\,\hat{\mbox{\boldmath $l$}}_{\rm in}
=\lambda {\cal N}_0\,\hat{\mbox{\boldmath $l$}}_{\rm in}.
\label{eq:Nl}\end{equation}
Equation~(\ref{eq:Nl}) includes
not only the accretion torque carried by the accreting
gas onto the star, $\dot M_{\rm acc} (GM_\star r_{\rm in})^{1/2}\hat{\mbox{\boldmath $l$}}_{\rm in}$ (where
$\dot M_{\rm acc}$ may be smaller than $\dot M$, the disc accretion rate), but
also the magnetic braking torque associated with
the disc -- star linkage, as well as any angular momentum
carried away by the wind from the magnetosphere boundary~\citep{shu,rom1}.
All these effects are parametrized by the parameter $\lambda \lo 1$. In particular,
if a wind carries away most of the angular momentum of the inner disc, we may get
$\lambda \ll 1$.
The term ${\mbox{\boldmath ${\cal N}$}}_s=-|{\cal N}_s| \hat{\mbox{\boldmath $\omega$}}_s$ represents a spindown torque
carried by a wind/jet from the open field lines region of the star (e.g. Matt \& Pudritz 2005).
The terms ${\mbox{\boldmath ${\cal N}$}}_w$ and ${\mbox{\boldmath ${\cal N}$}}_p$ represent the back-reactions
of the warping and precessional torques:
\begin{equation}
{\mbox{\boldmath ${\cal N}$}}_{w,p}=-\int_{r_{\rm in}}^{r_{\rm out}}\! 2\pi r {\bf N}_{w,p}\,dr.
\end{equation}
Since both ${\bf N}_w$ and ${\bf N}_p$ decrease rapidly with radius (as $r^{-5}$),
the integral can be carried out approximately, giving
\begin{equation}
{\mbox{\boldmath ${\cal N}$}}_p+{\mbox{\boldmath ${\cal N}$}}_w \approx {\cal N}_0\left[n_p\hat{\mbox{\boldmath $\omega$}}_s\times\hat{\mbox{\boldmath $l$}}_{\rm in}
+n_w\hat{\mbox{\boldmath $l$}}_{\rm in}\times (\hat{\mbox{\boldmath $\omega$}}_s\times\hat{\mbox{\boldmath $l$}}_{\rm in})\right],
\label{eq:Nev}\end{equation}
with
\begin{eqnarray}
&&n_p=-{4\over 3}{1 \over \pi\eta^{7/2}}F(\theta_\star)\,\cos\beta_{\rm in},
\label{eq:np}\\
&&n_w={\zeta [1-(r_{\rm in}/r_{\rm int})^3]
\over 6\eta^{7/2}}\cos^2\!\theta_\star\,\cos\beta_{\rm in},
\end{eqnarray}
where $\cos\beta_{\rm in}=\hat{\mbox{\boldmath $\omega$}}_s\cdot\hat{\mbox{\boldmath $l$}}_{\rm in}$. Note that both ${\cal N}_0 n_p$
and ${\cal N}_0 n_w$ are of order $\mu^2/r_{\rm in}^3$.
For a fixed outer disc orientation $\hat{\mbox{\boldmath $l$}}_{\rm out}$, the
inclination angle of the stellar spin relative to the outer disc, $\beta_\star=\beta_{\rm out}$,
evolves according to the equation
\begin{eqnarray}
&&J_s{d\over dt}\cos\beta_\star={\mbox{\boldmath ${\cal N}$}}\cdot\hat{\mbox{\boldmath $l$}}_{\rm out}-\cos\beta_\star
\,({\mbox{\boldmath ${\cal N}$}}\cdot\hat{\mbox{\boldmath $\omega$}}_s)\nonumber\\
&&\qquad \approx {\cal N}_0\Bigl[\lambda \,(\hat{\mbox{\boldmath $l$}}_{\rm in}\cdot\hat{\mbox{\boldmath $l$}}_{\rm out}
-\cos\beta_\star\,\cos\beta_{\rm in})\nonumber\\
&&\qquad ~~ +n_w\,(\cos\beta_\star-\cos\beta_{\rm in}\,\hat{\mbox{\boldmath $l$}}_{\rm in}\cdot
\hat{\mbox{\boldmath $l$}}_{\rm out}-\cos\beta_\star \sin^2\!\beta_{\rm in})\nonumber\\
&&\qquad ~~ +n_p\,\hat{\mbox{\boldmath $\omega$}}_s\cdot (\hat{\mbox{\boldmath $l$}}_{\rm in}\times\hat{\mbox{\boldmath $l$}}_{\rm out})
\Bigr].
\label{eq:warpedspinevol}
\end{eqnarray}
Note that this does not depend on the specific form of ${\mbox{\boldmath ${\cal N}$}}_s$:
since ${\mbox{\boldmath ${\cal N}$}}_s$ is antiparallel to $\hat{\mbox{\boldmath $\omega$}}_s$, its contributions
to the two terms on the first line of equation~(\ref{eq:warpedspinevol}) cancel exactly.
For flat discs, equation (\ref{eq:warpedspinevol})
reduces to (Paper I)
\begin{equation}
\left(\frac{d}{dt} \cos{\beta_\star}\right)_{\rm flat}
= \frac{\mathcal{N}_0}{J_s} \sin^2{\beta_\star}
\left(\lambda - \tilde{\zeta} \cos^2\!{\beta_\star} \right),
\label{eq:flatspinevol}
\end{equation}
with
\begin{equation}
\tilde{\zeta} = \frac{\zeta [ 1 - (r_{\rm in}/r_{\rm int})^3 ]
\cos^2{\theta_\star}}{6 \eta^{7/2}}.
\end{equation}
In the flat-disc approximation, the star -- disc systems can thus be
divided into two classes with very different long-term spin evolution
(see Fig.~\ref{f1}). If $\tilde{\zeta} < \lambda$,
$\cos{\beta_\star}$ always increases in time and the system will be
driven towards the aligned state ($\beta_\star=0$). On the other
hand, if $\tilde{\zeta} > \lambda$, there are two ``equilibrium''
misalignment angles (defined by $d\beta_\star/dt=0$):
\begin{equation}
\cos{\beta_{\pm}} = \pm \sqrt{\frac{\lambda}{\tilde{\zeta}}}.
\end{equation}
The smaller angle $\beta_+$ corresponds to a stable equilibrium, while
$\beta_-$ is unstable. Thus, the final state of the systems depends on
the initial misalignment angle $\beta_\star(t=0)$. If
$\beta_\star(t=0) < \beta_-$, the system will be driven towards a
moderate misalignment $\beta_+<90^\circ$; otherwise it will evolve
towards a completely anti-aligned configuration ($\beta_\star=180^\circ$). From
these results, we can see that, according to the flat-disc approximation, if $\lambda \ll 1$ a misaligned configuration is
strongly favored.
The probability distribution of the different cases for astrophysical systems will thus depend on the unknown value of the parameters of our model, as well as on $\beta_\star(t=0)$ --- which depends
on the formation history of the star -- disc system and is quite uncertain~\citep{blp}. For example, for an isotropic distribution of $\hat{\mbox{\boldmath $l$}}_{\rm out}$ on the unit sphere and $\tilde{\zeta} > \lambda$, a fraction $0.5(1-\sqrt{\frac{\lambda}{\tilde \zeta}})$ of the systems would be anti-aligned, while the rest would tend towards a misalignment $\beta_+$. The real
distribution of disc inclination is certainly more complex, as $\lambda$ and $\tilde \zeta$ will vary
from system to system, and the distribution of the initial misalignment $\beta_\star (t=0)$ is probably not isotropic. Additionally, the orientation of the outer disc might vary in time. For more details on the distribution of final inclination angles $\beta_\star$, see
Paper I (Sec. 5), as well as Section~\ref{sec:aa} of this paper, which discusses a process to reach
anti-alignment starting from $\beta_\star (t=0) < \beta_-$.
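These flat-disc results are simple enough to evaluate directly; a minimal
Python sketch (our own illustration; \texttt{lam} and \texttt{zt} stand for
$\lambda$ and $\tilde\zeta$) reads:
\begin{verbatim}
import numpy as np

def beta_equilibria(lam, zt):
    # fixed points of the flat-disc spin evolution; None if zt <= lam,
    # in which case alignment (beta = 0) is the only attractor
    if zt <= lam:
        return None
    c = np.sqrt(lam / zt)
    return np.degrees(np.arccos(c)), np.degrees(np.arccos(-c))

def antialigned_fraction(lam, zt):
    # fraction driven to beta = 180 deg for isotropic initial l_out
    return 0.0 if zt <= lam else 0.5 * (1.0 - np.sqrt(lam / zt))

# e.g. lam = 0.5, zt = 1: beta_+ = 45 deg, beta_- = 135 deg,
# and ~15 per cent of an isotropic population anti-aligns.
\end{verbatim}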
\begin{figure}
\begin{centering}
\includegraphics[width=7cm]{Fig1-s}
\caption{Time derivative of the inclination angle of the stellar spin
(in degrees) relative to the outer disc for a flat (unwarped) disc.
The upper panel is for
$\tilde{\zeta}/\lambda=0.5<1$, in which case
the spin evolves towards alignment. The lower panel is for
$\tilde{\zeta}/\lambda=\sqrt{2}>1$, in which case
the spin either evolves toward $\beta_+\neq 0$ or toward
$\beta_\star=180^\circ$, depending on the initial value of
$\beta_\star$. The quantity $t_{\rm spin}$ is defined in equation~(\ref{eq:tspin}).
The arrows show the direction of the evolution of $\beta_\star$
in different regions of the parameter space.}
\label{f1}
\end{centering}
\end{figure}
In general, the magnetic torques induce disc warping so that $\hat{\mbox{\boldmath $l$}}$
depends on $r$, and equation (\ref{eq:warpedspinevol}) must be used to
determine the long-term spin evolution. Since the disc warp evolution
timescale is much shorter than the stellar spin evolution timescale,
the steady-state warp profile $\hat{\mbox{\boldmath $l$}}(r)$ must be solved before equation
(\ref{eq:warpedspinevol}) can be applied. In the following sections,
we will show that for most (but not all) realistic choices of the free
parameters in our model, using equation~(\ref{eq:flatspinevol})
instead of equation~(\ref{eq:warpedspinevol}) does not significantly
change our qualitative description of the long term behavior of the
system. We will measure deviations from the flat-disc approximation
through the parameter $\xi$ defined by
\begin{equation}
\label{eq:paramangevol}
\frac{d}{dt} \cos{\beta_\star} = \xi
\left(\frac{d}{dt} \cos{\beta_\star}\right)_{\rm flat},
\end{equation}
where the left-hand side is computed using the first line of
equation~(\ref{eq:warpedspinevol}).
\section{Description of Warped Discs: Theory}
As noted in Section 1, the evolution of the coupled star -- disc
system occurs over two different timescales. The first, $t_{\rm disc}$,
characterizes the evolution of the disc under the magnetic torques and
internal stresses towards a warped steady-state configuration,
assuming that the spin of the star $\hat{{\mbox{\boldmath $\omega$}}}_s$ is fixed. The second,
$t_{\rm spin}$, determines the evolution of $\hat{{\mbox{\boldmath $\omega$}}}_s$ due to the
combined effects of mass accretion and magnetic torques. Since we
expect $t_{\rm disc} \ll t_{\rm spin}$, if the orientation of the
outer disc is fixed we can consider that, at all times, the disc is in
a steady-state $\hat{\mbox{\boldmath $l$}}_{\rm eq}(r;\hat{\mbox{\boldmath $\omega$}}_s)$. The evolution of
the system is then described by a sequence of steady-state profiles
$\hat{\mbox{\boldmath $l$}}_{\rm eq}(r,\hat{\mbox{\boldmath $\omega$}}_s(t))$ where $\hat{\mbox{\boldmath $\omega$}}_s(t)$ evolves
according to equation (\ref{eq:warpedspinevol}) applied to $\hat{\mbox{\boldmath $l$}}(r)=
\hat{\mbox{\boldmath $l$}}_{\rm eq}(r;\hat{\mbox{\boldmath $\omega$}}_s(t))$.
As discussed before, the disc itself will always show variations
on shorter timescales (of the order of the stellar rotation period), which
do not affect the secular evolution of the stellar spin and are averaged over in our
description of the system.
Here we describe our method to calculate the evolution and
steady-state of warped discs.
Systematic theoretical study on warped discs began with the work of
\citet{pp1} and \citet{pl2}, who showed
that there are two dynamical regimes for warp propagation in linear
theory (for sufficiently small warps). For high viscosity Keplerian
discs with $\alpha\go \delta\equiv H/r$ (where $\alpha$ is the Shakura-Sunyaev
parameter so that the viscosity is $\nu=\alpha H^2\Omega$), the warp
satisfies a diffusion-type equation with diffusion coefficient
$\nu_2=\nu/(2\alpha^2)$. For low-viscosity discs, on the other hand,
the warp satisfies a wave-like equation and propagates with speed
$\Omega H/2$. In the diffusive regime, the linear theory
of \citet{pp1} was generalized to large inclination angles by \citet{pr1}
in the limit of small local variations of the disc inclination.
A fully nonlinear theory was derived by \citet{og1},
with prescriptions for arbitrary variations of the inclination.
The basic features of the theory
were recently confirmed by the numerical simulations of
\citet{lp1}. For low-viscosity Keplerian
discs ($\alpha\lo H/r$), the linearized
equations for long wavelength bending waves were derived by
\citet{lo1} and \citet{lop1}, and a theory for
non-linear bending waves was developed by \citet{og2}.
For protostellar discs, recent work by \citet{ter1} suggests that
far away from the star the disc could have a very small
viscosity parameter ($\alpha \sim 10^{-2}-10^{-4}$), and would
thus be described by the formalism of~\citet{lo1}. However,
close to the star (around a few stellar radii) where magnetic effects
are most important and the disc warp can develop,
the value of the effective viscosity is unknown. Thus in this paper,
we will study both high-viscosity discs and low-viscosity discs.
\subsection{High-Viscosity Discs}
\subsubsection{Evolution Equations}
For viscous discs satisfying $\alpha\go H/r$, we start from the equations
derived by \citet{og1}. The main evolution equations for the disc are
the conservation of mass
\begin{equation}
\label{MassCon}
\frac{\partial \Sigma}{\partial t} + \frac{1}{r} \frac{\partial}{\partial r} \left( r\Sigma V_{R}\right) =0
\end{equation}
and angular momentum
\begin{eqnarray}
&&\frac{\partial}{\partial t}\left(\Sigma r^2 \Omega \hat{\mbox{\boldmath $l$}} \right) +\frac{1}{r}\frac{\partial}{\partial r}\left(\Sigma V_{R}r^3\Omega \hat{\mbox{\boldmath $l$}}\right)=\frac{1}{r}\frac{\partial}{\partial r}\left( Q_1Ir^2\Omega^2\hat{\mbox{\boldmath $l$}}\right) \nonumber \\
&& + \frac{1}{r}\frac{\partial}{\partial r}\left(Q_2Ir^3\Omega^2\frac{\partial \hat{\mbox{\boldmath $l$}}}{\partial r} + Q_3Ir^3\Omega^2\hat{\mbox{\boldmath $l$}}\times\frac{\partial \hat{\mbox{\boldmath $l$}}}{\partial r}\right) + {\bf N}_{m},
\label{MomCon}
\end{eqnarray}
where $V_{R}$ is the average radial velocity of the fluid at a given radius.
The coefficients $Q_{1,2,3}$ characterize the magnitude of the various viscous interactions, while
\begin{equation}
I = \frac{1}{2\pi}\int_0^{2\pi} {\rm d}\phi \int_{-\infty}^{\infty} \rho z^2 {\rm d}z
\end{equation}
depends on the vertical density profile of the disc. The term ${\bf N}_m={\bf N}_w+{\bf N}_p$
is the external magnetic torque per unit area.
In general, the viscous coefficients $Q_{1,2,3}$ are functions of the
viscosity parameter $\alpha$, the warp amplitude $\psi^2\equiv
|\partial\hat{\mbox{\boldmath $l$}}/\partial\ln r|^2$, and the
disc rotation law $\Omega$. Their values can be obtained through numerical
integration of a set of coupled ODEs \citep{og1}.
In the limit $\psi^2 \rightarrow 0$, the viscous
coefficients are given by equations [141-143] of \citet{og1}:
\begin{eqnarray}
Q_1 &=& -\frac{3 \alpha}{2} + \frac{1}{16 \alpha}\psi^2 +O(\psi^4) \label{Q1}\\
Q_2 &=& \frac{1}{4\alpha} + O(\psi^2)\\
Q_3 &=& \frac{3}{8} + O(\psi^2).
\end{eqnarray}
For $\psi^2=0$ and $Q_3=0$, this is equivalent to the formalism of
\citet{pr1}: the viscosities $\nu_1=\nu$ and $\nu_2$ used by \citet{pr1},
which correspond respectively to the shear viscosity usually
associated with flat discs and the viscous torque working against the
warping of the disc, are proportional to $Q_1$ and $Q_2$. The
additional term $Q_3$ was discovered by \citet{og1}, and contributes
to the precession of a warped disc. For the disc configurations considered
in this paper, the effects of finite $\psi^2$ are small --- hence, our
numerical results will be computed in the limit $\psi^2\ll 1$. However,
we do consider the effects of non-zero $Q_3$.
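In the $\psi^2\ll 1$ limit adopted here, the coefficients reduce to the
expansions above; as a trivial helper (our own sketch, not the full
calculation of \citet{og1}):
\begin{verbatim}
def q_coefficients(alpha, psi2=0.0):
    # leading-order expansions of the viscous coefficients
    Q1 = -1.5 * alpha + psi2 / (16.0 * alpha)   # + O(psi^4)
    Q2 = 1.0 / (4.0 * alpha)                    # + O(psi^2)
    Q3 = 3.0 / 8.0                              # + O(psi^2)
    return Q1, Q2, Q3
\end{verbatim}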
To obtain numerical solutions to equations
(\ref{MassCon})-(\ref{MomCon}), it is convenient to switch to the
logarithmic coordinate $\rho=\ln{(r/r_{\rm in})}$. We then define the
logarithmic derivative $'=\partial/\partial \ln{r}=\partial/\partial
\rho$ and the warp amplitude $\psi^2=|\hat{\mbox{\boldmath $l$}}'|^2$.
From equations (\ref{MassCon}-\ref{MomCon}), we can derive the
radial velocity
\begin{equation}
\label{VR}
V_R = \frac{1}{\Sigma r (r^2 \Omega)'} \left[ (IQ_1r^2\Omega^2)'-IQ_2r^2\Omega^2 \psi^2\right].
\end{equation}
Using equation (\ref{VR}) in (\ref{MassCon})-(\ref{MomCon}) then yields
\begin{equation}
r^2 \frac{\partial \Sigma}{\partial t} = \left[\frac{(IQ_1r^2\Omega^2)'-IQ_2r^2\Omega^2\psi^2}{(r^2\Omega)'}\right]'
\end{equation}
and
\begin{eqnarray}
&&r^2\frac{\partial}{\partial t}\left(\Sigma r^2 \Omega \hat{\mbox{\boldmath $l$}} \right)+
\Bigl[ \frac{r^2\Omega}{(r^2\Omega)'} \left( (IQ_1r^2\Omega^2)'-IQ_2r^2\Omega^2 \psi^2\right) \hat{\mbox{\boldmath $l$}}\nonumber \\
&&-Ir^2\Omega^2\left(Q_1\hat{\mbox{\boldmath $l$}} + Q_2\hat{\mbox{\boldmath $l$}}' + Q_3\hat{\mbox{\boldmath $l$}}\times \hat{\mbox{\boldmath $l$}}'\right)
\Bigr]' = r^2 {\bf N}_{m}.
\end{eqnarray}
\subsubsection{Disc Model}
For our numerical calculations, we consider Keplerian discs.
If we compare
the projection of (\ref{MomCon}) along $\hat{\mbox{\boldmath $l$}}$ for $\hat{\mbox{\boldmath $l$}}'=0$ with the
standard flat-disc equation
\begin{equation}
\frac{\partial}{\partial t} (\Sigma r^2 \Omega) + \frac{1}{r}\frac{\partial}{\partial r}\left(\Sigma V_R r^3 \Omega - \nu \Sigma r^3 \frac{\partial \Omega}{\partial r} \right)=0,
\end{equation}
we see that
\begin{equation}
Q_1[\psi^2=0] I r^2 \Omega^2 = \nu \Sigma r^3 \frac{\partial \Omega}{\partial r}.
\end{equation}
Using $Q_1[\psi^2=0]=-3\alpha/2$ and $\Omega = \sqrt{GMr^{-3}}$, we then have
\begin{equation}
I = \Sigma H^2.
\end{equation}
We also rescale the time coordinate by the viscous time evaluated
on the inner edge of the disc: $\tau=t/t_{\rm vis}(r_{\rm in})$,
where
\begin{equation}
\label{eq:tvis}
t_{\rm vis}={r^2\over \nu_2},
\end{equation}
and
\begin{equation}
\nu_2=2Q_2 H^2\Omega\simeq {1\over 2\alpha}H^2\Omega
\end{equation}
is the viscosity associated with the vertical shear in the disc. By
projecting the evolution equation of the disc angular momentum
onto directions parallel and orthogonal to $\hat{\mbox{\boldmath $l$}}$, we find
\begin{eqnarray}
\label{eq:dtlambda}
\frac{\partial}{\partial \tau}\sigma &=& -\rho^{-3/2} \frac{Q_1}{Q_2} \left(
{\bf S}'\cdot\hat{\mbox{\boldmath $l$}} \right) \\
\label{eq:dtl}
\frac{\partial}{\partial \tau} \hat{\mbox{\boldmath $l$}} &=& - \rho^{-3/2} \frac{Q_1}{\sigma Q_2}
\bigg[ \left[{\bf S}'-({\bf S}'\cdot\hat{\mbox{\boldmath $l$}})\hat{\mbox{\boldmath $l$}}\right]+ \\
&&\frac{(\hat{\mbox{\boldmath $\omega$}}_s \cdot \hat{\mbox{\boldmath $l$}}) }{\rho^{3}\eta^{7/2}}
\left( \frac{F(\theta_\star)}{\pi D(\rho)}
\hat{{\mbox{\boldmath $\omega$}}}_s \times \hat{\mbox{\boldmath $l$}} - \frac{\zeta \cos^2{\theta_\star}}{4} \hat{\mbox{\boldmath $l$}} \times ( \hat{{\mbox{\boldmath $\omega$}}}_s
\times \hat{\mbox{\boldmath $l$}} )
\right)\bigg]
\nonumber
\end{eqnarray}
where the new variables $\sigma$, ${\bf S}$ and $\rho$ are defined by
\begin{eqnarray}
\sigma &=& \frac{\Sigma r}{(r \Sigma_{\rm flat})|_{r=r_{\rm in}}}\\
\label{eq:defS}
{\bf S} &=& \left( \sigma' - \frac{\sigma}{2} - \frac{Q_2}{Q_1}\sigma \psi^2 \right) \hat{\mbox{\boldmath $l$}}
-\frac{Q_2}{2Q_1}\sigma \hat{\mbox{\boldmath $l$}}' - \frac{Q_3}{2Q_1} \sigma \hat{\mbox{\boldmath $l$}} \times \hat{\mbox{\boldmath $l$}}'\\
\rho &=& \frac{r}{r_{\rm in}},
\end{eqnarray}
and $\Sigma_{\rm flat}$ is the surface density of a flat disc
\begin{equation}
\label{eq:sflat}
\Sigma_{\rm flat}=\frac{\dot{M}}{3\pi \nu}
=\frac{\dot{M}}{3\pi \alpha H^2\Omega}.
\end{equation}
Equations (\ref{eq:dtlambda}) and (\ref{eq:dtl}) form our model for
the evolution of viscous discs interacting with a magnetic star. Note
that as $\hat{\mbox{\boldmath $l$}}$ is a unit vector, it only corresponds to two degrees of freedom in the
system. Accordingly, equation~(\ref{eq:dtl}) guarantees that $\partial \hat{\mbox{\boldmath $l$}} / \partial \tau$ is orthogonal to $\hat{\mbox{\boldmath $l$}}$. In practice, to avoid introducing a preferred direction in the system (as we want to allow arbitrary inclination angles for the disc), we evolve all 3 components of $\hat{\mbox{\boldmath $l$}}$, but
renormalize $\hat{\mbox{\boldmath $l$}}$ at each timestep to prevent the accumulation of numerical errors.
\subsubsection{Steady-State Equations}
\label{sec:warpsse}
From equations (\ref{eq:dtlambda}), (\ref{eq:dtl}) and
(\ref{eq:defS}), it is fairly easy to derive the equations defining
the steady-state configuration of the disc. If we set
$\partial \hat{\mbox{\boldmath $l$}}/ \partial \tau = 0$ and $\partial \sigma / \partial \tau = 0
= {\bf S}'\cdot\hat{\mbox{\boldmath $l$}}$ in (\ref{eq:dtl}), we obtain
\begin{equation}
{\bf S}' = \frac{(\hat{\mbox{\boldmath $\omega$}}_s \cdot \hat{\mbox{\boldmath $l$}}) }{\rho^{3}\eta^{7/2}}
\left( \frac{\zeta \cos^2{\theta_\star}}{4} \hat{\mbox{\boldmath $l$}} \times( \hat{{\mbox{\boldmath $\omega$}}}_s
\times \hat{\mbox{\boldmath $l$}} )-\frac{F(\theta_\star)}{\pi D(\rho)}
\hat{{\mbox{\boldmath $\omega$}}}_s \times \hat{\mbox{\boldmath $l$}} \right).
\end{equation}
Equation (\ref{eq:defS}) projected onto $\hat{\mbox{\boldmath $l$}}$ gives
\begin{equation}
\label{eq:sslambda}
\sigma' = \sigma \left(\frac{1}{2} + \frac{Q_2}{Q_1} \psi^2 \right)
+ {\bf S}\cdot\hat{\mbox{\boldmath $l$}},
\end{equation}
and projected in the plane orthogonal to $\hat{\mbox{\boldmath $l$}}$ gives
\begin{equation}
\label{eq:ssl}
\hat{\mbox{\boldmath $l$}}' = \frac{2Q_1}{Q_2 \sigma}\left[({\bf S}\cdot\hat{\mbox{\boldmath $l$}})\hat{\mbox{\boldmath $l$}} - {\bf S}\right] - \frac{Q_3}{Q_2} (\hat{\mbox{\boldmath $l$}} \times \hat{\mbox{\boldmath $l$}}').
\end{equation}
For $Q_3=0$, we thus have a set of first-order differential equations of the form
${\bf U}' = {\bf F}({\bf U})$. Given appropriate boundary conditions at $r_{\rm in}$, it can easily be
solved by numerical integration. For $Q_3\neq 0$, we can still perform numerical integration if we
consider (\ref{eq:ssl}) as an implicit equation for $\hat{\mbox{\boldmath $l$}}'$ which has to be solved at each step of the
integration algorithm.
In practice however, the boundary conditions are imposed partly at the inner edge $r_{\rm in}$ and
partly at the outer edge $r_{\rm out}$. Indeed, we consider the orientation of the outer disc to be
fixed
\begin{equation}
\hat{\mbox{\boldmath $l$}}(r_{\rm out}) = \hat{\mbox{\boldmath $l$}}_{\rm out}
\end{equation}
and the mass accretion rate to be constant
\begin{equation}
\label{Mdot0}
\dot{M} = -2\pi r V_R \Sigma
\end{equation}
(the sign is chosen so that $\dot M > 0$ for $V_R<0$). We also impose
a zero-torque boundary condition at the inner edge
\begin{equation}
\hat{\mbox{\boldmath $l$}}'(r_{\rm in}) = 0
\end{equation}
and set the surface density there to
\begin{equation}
\label{SBC}
\Sigma(r_{\rm in}) = \sigma_{\rm in} \Sigma_{\rm flat}(r_{\rm in})
\end{equation}
for some freely specifiable scalar $\sigma_{\rm in}$. Combining
(\ref{Mdot0}) with the zero-torque boundary condition and equations
(\ref{VR}) and (\ref{eq:sflat}), we obtain a simple boundary condition on
$\sigma'$ at $r_{\rm in}$:
\begin{equation}
\sigma'[r_{\rm in}]=\frac{1}{2},
\end{equation}
while (\ref{SBC}) gives the value of $\sigma[r_{\rm in}]$:
\begin{equation}
\sigma[r_{\rm in}] = \sigma_{\rm in}.
\end{equation}
We thus have 4 boundary conditions at $r_{\rm in}$ (on $\hat{\mbox{\boldmath $l$}}'$,
$\sigma$ and $\sigma'$) and 2 at $r_{\rm out}$ (on $\hat{\mbox{\boldmath $l$}}$). To
solve the system numerically we use a shooting method starting at
$r_{\rm in}$. Writing $\hat{\mbox{\boldmath $l$}} = (\sin{\beta} \cos{\gamma}, \sin{\beta}
\sin{\gamma}, \cos{\beta})$ and $\hat{{\mbox{\boldmath $\omega$}}}_s=(0,0,1)$, we use a
2-D Newton-Raphson method to solve for the values of $\beta[r_{\rm
in}]$ and $\gamma[r_{\rm in}]$ leading to a solution satisfying
$\hat{\mbox{\boldmath $l$}} (r_{\rm out})=\hat{\mbox{\boldmath $l$}}_{\rm out}$. The system of first-order ODEs
which has to be solved at each iteration of the Newton-Raphson
algorithm is treated using the 5th order {\it StepperDopr5} method of
\citet{NR}, and the integration is performed under the constraint
$|\hat{\mbox{\boldmath $l$}}|=1$.
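The logic of this shooting procedure is illustrated by the self-contained
Python sketch below. The right-hand side is a deliberately simplified,
torque-like toy model (the parameters \texttt{k} and \texttt{q} are
illustrative, not model inputs); the production solver replaces it with the
full steady-state equations and the {\it StepperDopr5} integrator, but the
shooting structure is the same:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

omega_s = np.array([0.0, 0.0, 1.0])                  # stellar spin axis
l_out = np.array([np.sin(0.17), 0.0, np.cos(0.17)])  # target, ~10 deg
x_out = np.log(1.0e4)                                # ln(r_out / r_in)

def rhs(x, l, k=0.05, q=0.5):
    # toy torque decaying as (r_in/r)^3, with precession and warping parts
    t = np.cross(omega_s, l)
    dl = k * np.exp(-3.0 * x) * (t + q * np.cross(l, t))
    return dl - l * np.dot(l, dl)                    # keep |l| = 1

def shoot(angles):
    beta, gamma = angles
    l0 = np.array([np.sin(beta) * np.cos(gamma),
                   np.sin(beta) * np.sin(gamma), np.cos(beta)])
    sol = solve_ivp(rhs, (0.0, x_out), l0, rtol=1e-8, atol=1e-10)
    return sol.y[:2, -1] - l_out[:2]  # match two components of l(r_out)

beta_in, gamma_in = fsolve(shoot, x0=[0.17, 0.0])
print(np.degrees(beta_in), np.degrees(gamma_in))
\end{verbatim}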
\subsection{Low-Viscosity Discs}
\subsubsection{Evolution Equations}
\label{sec:evollv}
For discs with a viscosity parameter small compared to the thickness
($\alpha\lo \delta=H/r$), we can no longer use the evolution
equations of \citet{og1}. In this case, disc warps
propagate as bending waves. In the linear regime,
the warp evolution equations were derived by \citet{lo1}:
\begin{eqnarray}
\label{eq:dtlvl}
&&\Sigma r^2 \Omega \frac{\partial \hat{\mbox{\boldmath $l$}}}{\partial t} = \frac{1}{r} \frac{\partial {\bf G}}{\partial r} + {\bf N}_m, \\
\label{eq:dtG}
&&\frac{\partial {\bf G}}{\partial t} = \left(\!\frac{\Omega^2 - \Omega_r^2}
{2\Omega}\!\right) \hat{\mbox{\boldmath $l$}} \times {\bf G} - \alpha \Omega {\bf G}
+ \frac{\Sigma r^3c_s^2 \Omega}{4}
\frac{\partial \hat{\mbox{\boldmath $l$}}}{\partial r},
\end{eqnarray}
where $c_s=H\Omega_z$ is the disc sound speed,
$\Omega_r$ and $\Omega_z$ are the radial epicyclic frequency and
the vertical oscillation frequency associated with circular orbits at
a given radius from the star, ${\bf G}$ is the internal torque of the
disc, and $\Sigma =\Sigma_{\rm flat}$ is the surface density.
These equations are only valid for
$\alpha\lo \delta$, $|\Omega_r^2-\Omega^2|<\delta \Omega^2$ and
$|\Omega_z^2-\Omega^2|<\delta \Omega^2$. In the following, we shall use
$\Omega_r=\Omega_z=\Omega$, although we verified that small deviations
from these equalities do not significantly modify our results.
Equations (\ref{eq:dtlvl})-(\ref{eq:dtG}) admit wave solutions.
We define a Cartesian coordinate system so that
$\hat l_z \simeq 1$ and $|\hat l_{x,y}|\ll 1$, and
the internal torque ${\bf G}$ acts in the $xy$-plane.
Consider a local (WKB) wave with $\hat{\mbox{\boldmath $l$}}_{x,y}, {\bf G}\propto
e^{ikr-i\omega t}$ in a Keplerian disc
with ${\bf N}_m=0$. For $\omega\ll \Omega$, the dispersion
relation of the wave is, neglecting the damping term $\alpha \Omega {\bf G}$
in~(\ref{eq:dtG}),
\begin{equation}
{\omega\over k}=\pm {c_s\over 2}=\pm \frac{H\Omega}{2},
\end{equation}
with the eigenmodes satisfying
\begin{equation}
\hat{\mbox{\boldmath $l$}}_{x,y}=(\hat{\mbox{\boldmath $l$}}_{x,y})_\pm
\equiv \mp {2\over r^3 c_s\Omega\Sigma}{\bf G}_{x,y}
=\mp {6\pi\alpha \delta\over \dot M r^2\Omega}{\bf G}_{x,y}.
\end{equation}
The $+$ mode and $-$ mode correspond to the outgoing and ingoing bending waves,
respectively.
A generic warp perturbation will not behave as pure eigenmodes.
For numerical evolutions, it is convenient to define the variables
\begin{equation}
{\bf V}_{\pm x,y} = \hat{\mbox{\boldmath $l$}}_{x,y} \mp \sqrt{\frac{r_{\rm in}}
{r}} \frac{6\pi \alpha \delta}{\dot{M} r_{\rm in}^2
\Omega[r_{\rm in}]} {\bf G}_{x,y}.
\end{equation}
Then, the evolution equations for the disc can be written as
\begin{eqnarray}
\label{eq:Vwave}
&&\frac{\partial}{\partial \tau}{\bf V}_{\pm} = \frac{1}{2\rho^{3/2}}
\bigg[ \mp {\bf V}'_{\pm}
+ ({\bf V}_- - {\bf V}_+)
\left( \frac{1}{4} \pm \frac{\alpha}{\delta} \right)
\bigg] \\
\nonumber
&& + \frac{\cos{\beta}}{\rho^5 \eta^{7/2}} 3\alpha \delta \left[ \frac{F(\theta_\star)}{\pi D(\rho)}
\hat{{\mbox{\boldmath $\omega$}}}_s \times \hat{\mbox{\boldmath $l$}} - \frac{\zeta \cos^2{\theta_\star}}{4} \hat{\mbox{\boldmath $l$}} \times (\hat{{\mbox{\boldmath $\omega$}}}_s
\times \hat{\mbox{\boldmath $l$}}) \right].
\end{eqnarray}
Here the dimensionless time $\tau = t \delta \Omega(r_{\rm in})$ and
length $\rho = r/r_{\rm in}$ are chosen so that the sound speed
at the inner edge of the disc is $c_s(r_{\rm in})=H\Omega_z=1$.
For the computation of the magnetic torque, we use the following
approximations, accurate to first order in $l_{x,y}$:
\begin{eqnarray}
\hat{{\mbox{\boldmath $\omega$}}}_s \times \hat{\mbox{\boldmath $l$}} &=& -\cos{\beta}\hat{l}_y \hat{e}_x+(\sin{\beta} + \cos{\beta} \hat{l}_x) \hat{e}_y\\
\hat{\mbox{\boldmath $l$}} \times (\hat{{\mbox{\boldmath $\omega$}}}_s \times \hat{\mbox{\boldmath $l$}}) &=& -(\sin{\beta}+\cos{\beta} \hat{l}_x )\hat{e}_x
-\cos{\beta}\hat{l}_y \hat{e}_y.
\end{eqnarray}
The boundary conditions are particularly simple to implement for this
choice of variables. At the outer edge of the disc, we require the
ingoing mode to vanish
\begin{equation}
{\bf V}_-(r_{\rm out})=0.
\end{equation}
At the inner edge, we impose the zero-torque boundary
condition $\hat{\mbox{\boldmath $l$}}'=0$, ${\bf G}=0$, which can be written in terms of our
evolution variables as
\begin{eqnarray}
{\bf V}_-(r_{\rm in})&=&{\bf V}_+(r_{\rm in}),\\
{\bf V}_-'(r_{\rm in})&=&-{\bf V}_+'(r_{\rm in}).
\end{eqnarray}
In terms of the propagation of bending waves, this corresponds to the
requirement that the waves be reflected at the inner edge of the disc.
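On a discretized grid these conditions are easy to impose. A first-order
sketch of our own (the Appendix describes the scheme actually used) is, for
one Cartesian component of the characteristic variables:
\begin{verbatim}
import numpy as np

def apply_boundary_conditions(V_plus, V_minus):
    # outer edge: no ingoing wave
    V_minus[-1] = 0.0
    # inner edge: G = 0 and l' = 0, i.e. V_- = V_+ and V_-' = -V_+';
    # with one-sided differences these combine to give
    V_plus[0] = 0.5 * (V_plus[1] + V_minus[1])
    V_minus[0] = V_plus[0]
    return V_plus, V_minus
\end{verbatim}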
\subsubsection{Steady-State Warp}
\label{sec:sslv}
The steady-state profile of low-viscosity discs can be obtained by
numerical integration of equation (\ref{eq:Vwave}) or equations
(\ref{eq:dtlvl})-(\ref{eq:dtG}), by setting $\partial / \partial \tau
= 0$. In practice however, the steady-state profile of a low-viscosity
disc is nearly always very well approximated by a flat disc
profile. The amount of disc warping can then be
evaluated analytically. Noting
that $\Sigma r^3 c_s^2\propto r^{3/2}$, equations
(\ref{eq:dtlvl})-(\ref{eq:dtG}) can be combined to give
\begin{equation}
\frac{\partial}{\partial r} \left(\rho^{3/2} \frac{\partial}{\partial r} \hat{\mbox{\boldmath $l$}} \right)
\simeq -\frac{4\alpha r {\bf N}_m}{(r^3\Sigma c_s^2)_{\rm in}}.
\end{equation}
Since ${\bf N}_m$ is falling rapidly with $r$ (${\bf N}_m \sim r^{-5}$),
and $\partial\hat{\mbox{\boldmath $l$}}/\partial r=0$ at $r=r_{\rm in}$, we integrate the above equation
from $r_{\rm in}$ to $r$:
\begin{equation}
\frac{\partial}{\partial r} \hat{\mbox{\boldmath $l$}} \simeq
{4\alpha\over 3} {(r^2{\bf N}_m)-(r^2{\bf N}_m)_{\rm in}
\over \rho^{3/2}(r^3\Sigma c_s^2)_{\rm in}}.
\label{eq:lindl}
\end{equation}
Integrating from $r_{\rm out}$ to $r_{\rm in}$, we then obtain
\begin{equation}
\hat{\mbox{\boldmath $l$}}_{\rm in}-\hat{\mbox{\boldmath $l$}}_{\rm out}\simeq \left({16\alpha {\bf N}_m\over 7\Sigma c_s^2}
\right)_{\rm in}.
\label{eq:deltal}
\end{equation}
Using Eqs.~(\ref{eq:torquew})-(\ref{eq:torque}), we have
\begin{equation}
|\hat{\mbox{\boldmath $l$}}_{\rm in}-\hat{\mbox{\boldmath $l$}}_{\rm out}|\simeq
{4\over 7}\Bigl[t_{\rm vis}(|\Gamma_w|+|\Omega_p|)\sin(2\beta)\Bigr]_{\rm in},
\end{equation}
where $t_{\rm vis}=r_{\rm in}^2/\nu_2$ is the viscous timescale for the warp,
with $\nu_2=c_sH/(2\alpha)$. Thus, the distortion of the disc can be
seen as arising from the warping and precessional torques acting over
the disc during a time of order the viscous time scale at the inner
disc edge (where the magnetic torques are the strongest).
Projecting Eq.~(\ref{eq:deltal}) in the direction of the stellar spin axis
$\hat{\mbox{\boldmath $\omega$}}_s$ and using Eqs.~(\ref{eq:torquew})-(\ref{eq:torque}), we have
\begin{equation}
\cos\beta_{\rm in}-\cos\beta_{\rm out}=-{8\over 7}\left(t_{\rm vis}\Gamma_w
\cos\beta\sin^2\beta\right)_{\rm in}.
\label{eq:cosbeta}\end{equation}
Since
\begin{equation}
\label{eq:tvisgam}
\left(t_{\rm vis}\Gamma_w\right)_{\rm in}={3\alpha^2\zeta\over 2\eta^{7/2}}
\cos^2\theta_\star,
\end{equation}
we see that as long as $\alpha^2\zeta\ll\eta^{7/2}$, a condition satisfied for
most parameters, the warp across the whole disc is small:
\begin{equation}
|\beta_{\rm in}-\beta_{\rm out}|\simeq
\frac{6\alpha^2\zeta \sin{(2\beta)}\cos^2{\theta_\star}}{7\eta^{7/2}}.
\label{eq:approxwarp}
\end{equation}
For example, with $\eta\go 0.5$, we find that for all discs
$|\beta_{\rm in}-\beta_{\rm out}|\ll 1$ if $\alpha\lo 0.15$.
This is almost certainly true for discs in which bending waves can propagate.
It is important to note that, although the approximate analytical
expression of the global disc distortion derived above is based on
low-viscosity discs, our result for $|\beta_{\rm in}-\beta_{\rm out}|$
is also valid for higher-viscosity discs. Indeed, in the linear
regime and for Keplerian discs, the steady-state equations are
identical regardless of the viscosity regime considered.
\section{Steady-State Profile of Warped Discs and Back-Reaction on Stellar Spins}
Using the numerical scheme presented in Section \ref{sec:warpsse}, we can now determine
the time-averaged steady-state profile of the disc under the influence of the torques exerted by a magnetic star.
The characteristics of the warped disc will of course vary with the choice of the free parameters
included in our theoretical model. We begin our study by showing results for two standard
discs, chosen so that they belong to the two classes of long term stellar spin evolution predicted in
Section \ref{sec:spinevol} when the accretion parameter defined in
equation~(\ref{eq:Nl}) is $\lambda \approx 0.5$ (a typical value in the allowed range
$0 \le \lambda \le 1$).
We then vary the disc parameters, and discuss their
influence on the disc profile, and on the spin evolution.
Finally, we check that, as predicted in section~\ref{sec:sslv}, low-viscosity discs,
which follow the different evolution equations described in section~\ref{sec:evollv}
(valid for $\alpha \lo \delta=H/r$),
have only negligible steady-state warps and are for all practical purposes well described by the flat-disc
approximation.
Our base models are discs with viscosity $\alpha=0.15$ and thickness $\delta=0.1$. We fix the
surface density at the inner boundary by setting $\sigma_{\rm in}=1.0$ so that it is equal to the surface density of a flat disc~(\ref{eq:sflat}), choose the inclination
angle of the outer disc $\beta_{\rm out}=\beta_\star=10^{\circ}$ and the magnetic inclination angle $\theta_
\star=30^{\circ}$ with respect to the spin $\hat{\mbox{\boldmath $\omega$}}_s$. The star is assumed to have mass $M_
\star= M_{\odot}$ and radius $R_\star = 2 R_{\odot}$. The strength of the magnetic field is chosen
so that $r_{\rm in}=2.5 R_\star$ (corresponding to $B_\star\sim 1~{\rm kG}$ for typical parameters, see
Eq.~[\ref{alfven}]), and the action of the torque ${{\bf N}}_m$ is limited to the region $r_{\rm in} \leq r \leq r_{\rm int}=1.5r_{\rm in}$.
The accretion rate is $\dot{M}=10^{-8} M_{\odot}/{\rm yr}$, and we put
the outer disc boundary at $r_{\rm out} = 10^4 r_{\rm in}$ (corresponding to $r_{\rm out} \approx
250 {\rm AU}$, a size typical of the observed protoplanetary discs). This disc has small values of $\psi^2$
everywhere, and accordingly we neglect the nonlinear terms in $Q_i$ (but we keep $Q_3=3/8$).
The other parameters are chosen to be $\zeta=1$, $f=0$, and either $\eta=1$
(so that the long-term evolution of the system aligns the spin axis $\hat{\mbox{\boldmath $\omega$}}_s$ with the disc
axis) or $\eta=0.5$ (for
which the flat-disc approximation predicts a long term misalignment toward
$\beta_+ \approx 45^{\circ}$ if
the initial disc has $\beta_\star \leq 135^\circ$).
We should note that these parameters are purposefully chosen to test the limits of
the flat disc approximation. Our choices of $M_\star$, $R_\star$, $B_\star$, $\delta$ and
$\dot M$ are relatively standard values for protoplanetary discs around T-Tauri stars
[see~\cite{bourev} and references therein],
while there are no particular reasons to prefer any specific orientation of the magnetic dipole
$\theta_\star$. But $\alpha=0.15$ is larger than recent estimates of the viscosity in the outer parts of
the disc~\citep{ter1}, and probably on the high end of what can be expected in the inner disc.
However, we have shown that smaller values of $\alpha$ lead to smaller amplitudes of the
steady-state warp (the warp amplitude is proportional to $\alpha^2$).
Thus, the flat disc approximation is more likely to be satisfied at low
viscosities.
In order to analyze the radial variations of the disc warp profile, we define the tilt $\beta[r]$ and the twist
$\gamma[r]$ by
\begin{equation}
\hat{\mbox{\boldmath $l$}}[r]=(\sin\beta[r] \cos\gamma[r], \sin\beta[r] \sin\gamma[r],\cos\beta[r]),
\end{equation}
with
the convention that $\gamma[r_{\rm out}]=0$. Some parameters of the system can be varied without modifying the dimensionless solution for the profile of the surface density
$\sigma(\rho)$ and the orientation of the disc $\hat{\mbox{\boldmath $l$}}(\rho)$: modifications of $\dot M$, $M_\star$, $R_\star$ or $r_{\rm in}$
(at constant $\eta$, $r_{\rm out}/r_{\rm in}$ and $r_{\rm int}/r_{\rm in}$) will influence the values of the timescales $t_{\rm vis}$ and $t_{\rm spin}$, but not $\beta[\rho]$ or
$\gamma[\rho]$. Thus, the steady-state profile can be solved while keeping these parameters fixed
without any loss of generality. The disc profile in physical units [$\Sigma(r)$, $\hat{\mbox{\boldmath $l$}}(r)$]
can easily be retrieved from the dimensionless solution [$\sigma(\rho)$, $\hat{\mbox{\boldmath $l$}}(\rho)$].
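In practice, recovering the tilt and twist from a computed profile is a
two-line operation; a sketch of our own, assuming $\hat{\mbox{\boldmath $l$}}(\rho)$ is stored
as an $N\times 3$ array ordered from $r_{\rm in}$ to $r_{\rm out}$:
\begin{verbatim}
import numpy as np

def tilt_and_twist(l_hat):
    # beta from the z-component, gamma from the xy-components,
    # shifted so that gamma(r_out) = 0 (the last grid point)
    beta = np.degrees(np.arccos(np.clip(l_hat[:, 2], -1.0, 1.0)))
    gamma = np.degrees(np.arctan2(l_hat[:, 1], l_hat[:, 0]))
    return beta, gamma - gamma[-1]
\end{verbatim}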
Additionally, the four parameters $(\eta,\theta_\star,f,\zeta)$
correspond to only two degrees of freedom in the model, through the quantities
\begin{eqnarray}
c_1 &=&{ \zeta \cos^2{\theta_\star} \over \eta^{7/2}} \\
c_2 &=& {F(\theta_\star) \over \eta^{7/2}} = {2f \cos^2{\theta_\star} - \sin^2{\theta_\star}
\over \eta^{7/2}}.
\end{eqnarray}
We will thus limit ourselves to variations of $\zeta$ and
$f$. Varying the thickness $\delta$ of the disc has very similar effects: it changes the value of the
function $D(r)$ at small radii, effectively modifying the value of $c_2$ close to $r_{\rm in}$. As the
magnetic torques mostly affect the region close to the inner edge of the disc, the influence of
$\delta$ is similar to that of $F(\theta_\star)$.
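Explicitly (a trivial helper of our own; the angle is in radians):
\begin{verbatim}
import numpy as np

def torque_strengths(eta, theta_star, f, zeta):
    # the two combinations through which (eta, theta_star, f, zeta)
    # enter the dimensionless warp equations
    c1 = zeta * np.cos(theta_star)**2 / eta**3.5
    c2 = ((2.0 * f * np.cos(theta_star)**2 - np.sin(theta_star)**2)
          / eta**3.5)
    return c1, c2
\end{verbatim}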
Finally, we are also free to modify the boundary conditions used, and in particular the choices
of $r_{\rm out}$ and $\sigma_{\rm in}$. Varying $r_{\rm out}$ seems to have only negligible effects, as
long as $r_{\rm out}/r_{\rm in}$ is large enough for a steady-state solution to exist. Decreasing
$\sigma_{\rm in}$, on the other hand, leads to more significant changes in the warp profile.
A small $\sigma_{\rm in}$ favors disc warping, so that a decrease of
$\sigma_{\rm in}$ has an effect similar to increasing
both $c_1$ and $c_2$.
Thus, the effects of varying various parameters of the system can be examined with our
standard discs, by varying only two parameters, $\zeta$ (or $c_1$) and $f$ (or $c_2$).
In section~\ref{sec:stddisc}, we present our results for our two standard discs, which are similar except for the value of the parameter $\eta$ (changing $\eta$ corresponds to a rescaling of
both the warping and the precessional torque). Then, in section~\ref{sec:nwinf}, we study
variations of the warping torque alone, by modifying the value of the parameter $\zeta$
characterizing the strength of the toroidal field in the disc. The influence of the precessional
torque is studied in more detail in section~\ref{sec:npinf}, through variations of the parameter $f$
(related to the ability of the time-varying component of the vertical magnetic field to penetrate
the disc). Finally, in section~\ref{sec:Q3} we comment on the influence of the
parameter $Q_3$, which was usually neglected in previous studies of warped discs.
\subsection{Standard disc results}
\label{sec:stddisc}
The profiles of the tilt and twist angles of our standard
configurations (Fig. \ref{fig:tiltstd}) show a relatively weak warping of the disc. For the disc with weaker magnetic
interactions ($\eta=1$), the difference in tilt
between the inner and outer edges is about $0.17^{\circ}$ and the twist over the whole disc is
$1.2^{\circ}$, while for stronger interactions ($\eta=0.5$) the disc is tilted by $1.8^\circ$ and
twisted over $16^\circ$.
These warps are comparable in magnitude to what we could have predicted using the
approximate equations~(\ref{eq:deltal})
and~(\ref{eq:approxwarp}). In particular, formula~(\ref{eq:approxwarp}) applied to these two
choices of parameters predicts tilts of $0.28^\circ$ and $3.2^\circ$, respectively,
with most of the difference between the approximate formula and the numerical results due to the cutoff applied to the magnetic torques at $r=r_{\rm int}$, neglected in the derivation
of~(\ref{eq:approxwarp}).
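For reference, these estimates follow directly from
equation~(\ref{eq:approxwarp}); a short numerical check (our own arithmetic,
with the standard parameters $\alpha=0.15$, $\zeta=1$, $\theta_\star=30^\circ$,
$\beta=10^\circ$) reproduces them:
\begin{verbatim}
import numpy as np

def warp_amplitude_deg(alpha, zeta, eta, theta_star, beta):
    # |beta_in - beta_out| from eq. (approxwarp), in degrees
    dbeta = (6.0 * alpha**2 * zeta * np.sin(2.0 * beta)
             * np.cos(theta_star)**2) / (7.0 * eta**3.5)
    return np.degrees(dbeta)

for eta in (1.0, 0.5):   # prints ~0.28 and ~3.2 degrees
    print(eta, warp_amplitude_deg(0.15, 1.0, eta,
                                  np.radians(30.0), np.radians(10.0)))
\end{verbatim}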
Note that we choose to vary the parameter $\eta$ defined in equation~(\ref{alfven})
as it conveniently modifies the effective strength of both magnetic torques in our model.
In practice, $\eta$ is determined by the
geometry of the accretion flow, while unknown physical parameters such as the dipole strength
$\mu$, its orientation $\theta_\star$, the surface density at the inner edge of the disc
$\sigma_{\rm in}$ or the magnetic twist parameter $\zeta$ will vary from
system to system.
Given the small warp, the evolution of the misalignment angle $\beta_\star$
between $\hat{{\mbox{\boldmath $\omega$}}}_s$ and ${\hat{\mbox{\boldmath $l$}}}_{\rm out}$ is well approximated
by equation~(\ref{eq:warpedspinevol}) with ${\hat{\mbox{\boldmath $l$}}}_{\rm in} = {\hat{\mbox{\boldmath $l$}}}_{\rm out}$: if we
compute ${\mbox{\boldmath ${\cal N}$}}$ from the steady-state profile $\hat{\mbox{\boldmath $l$}}(r)$, we find that the parameter $\xi$ in
equation (\ref{eq:paramangevol}), which parametrizes deviations from the flat
disc approximation ($\xi=1$ for a flat disc) is $\xi = 0.997$ for $\eta=1$ and
$\xi = 0.86$ for $\eta=0.5$, if we set the accretion parameter $\lambda$ to 0
(we choose $\lambda=0$ when computing $\xi$ in order to measure directly differences in the
effect of the back-reaction magnetic torques between the flat-disc model and the warped disc
steady-state, disentangled from the effect of angular momentum
accretion).
However, even a small disc warp can significantly change the critical angles $\beta_\pm$ for which
$d\beta_\star/dt=0$. By varying $\beta_\star=\beta(r_{\rm out})$, we can determine the values of
$\beta_\pm$ numerically. For $\eta=0.5$ and $\lambda=0.5$, we find $\beta_+ = 32^\circ$ and
$\beta_-=148^\circ$ which are quite different from the prediction of the flat-disc approximation
($\beta_+=45^\circ$, $\beta_-=135^\circ$). This is mainly due to the effect of the twist of the
disc. In the flat-disc approximation, $\gamma=0$ and the back-reaction due to the precession
torque has no effect on the evolution of the stellar spin. But as long as $\gamma_{\rm in}<\pi$,
that back-reaction will tend to align the stellar spin and the disc orbital angular momentum, and
this effect can be large enough to significantly shift the value of $\beta_\pm$ (see also
subsection~\ref{sec:npinf}).
The same effect can also modify the qualitative behavior of systems for which the predicted misalignment angle $\beta_+$ is close to $0$, in such a way that the stable configuration at $\beta_+$ no longer exists. The orbital angular momentum of the disc would then
be expected to align with the direction of the stellar spin.
\begin{figure}
\includegraphics[width=8cm]{ProfEta}
\caption{{\it Upper panel:} Disc tilt angle $\beta$ for our standard disc, with $\eta=1$ and $\eta=0.5$. {\it Lower panel: } Twist angle $\gamma$ for the same parameters.}
\label{fig:tiltstd}
\end{figure}
\subsection{Large warping torques}
\label{sec:nwinf}
Realistic discs are expected to have parameters $\zeta \sim 1$ (characterizing the azimuthal magnetic twist) and $\eta \sim 0.5$ (characterizing the inner disc radius; see Eq.~[\ref{alfven}]), but the exact values of those
parameters are unknown (see Section~\ref{sec:analytic}). By increasing $\zeta$ we see that the approximation $
\hat{\mbox{\boldmath $l$}}(r_{\rm in}) = \hat{\mbox{\boldmath $l$}}(r_{\rm out})$ can break down for high-viscosity discs.
In Fig. \ref{fig:zetaseqtilt}, we show the variation of the steady-state disc profile when $\zeta$ is varied
between 1 and 5
for $\eta=0.5$ and our standard disc parameters. Clearly, there can be large
differences between the orientation of the disc at its inner and outer edges when $\zeta \go 1$.
In Fig. \ref{fig:xizeta}, we show the value of $\xi$ [Eq.~(\ref{eq:paramangevol})]
for various choices of $\zeta$. At low
$\zeta \lo 4$ and for the choice of accretion parameter $\lambda=0$,
the flat-disc approximation predicts the magnitude of the back-reaction
torques acting on the star within a factor of 2. Deviations at low $\zeta \approx 0.5$ are due
to the relatively large influence of the precessional torque (which is independent of $\zeta$)
when the warping torque becomes small.
The long-term evolution of the stellar spin direction will remain similar to the flat-disc
predictions, with a stable configuration at
some misalignment angle $\beta_+ \neq 0$ for most values of the accretion parameter
$\lambda$. But for $\zeta \go 4.5$, the twist is so large that the
behavior is the opposite of what would be predicted by our approximate flat-disc formula:
the back-reaction
tends to align the disc and the spin of the star. This shows that at large $\zeta$ we must
determine for each set of parameters the profiles $\beta[r]$ and $\gamma[r]$ in order to predict the long term evolution of the stellar spin.
\begin{figure}
\includegraphics[width=8cm]{ProfZeta}
\caption{{\it Upper panel:} Disc tilt angle $\beta$ for different values of $\zeta$, all with $\eta=0.5$. {\it Lower panel: } Twist angle $\gamma$ for the same disc parameters. The outer edge
of the disc is at $r_{\rm out}=10^4r_{\rm in}$.}
\label{fig:zetaseqtilt}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{XiZeta}
\caption{Variation of the parameter $\xi$ characterizing the deviations from the flat-disc
approximation [see Eq.~(\ref{eq:paramangevol})] as a function of $\zeta$, for a sequence
of discs with $\eta=0.5$ and the choice
$\lambda=0$ for the accretion parameter [see Eq.~(\ref{eq:Nl})].}
\label{fig:xizeta}
\end{figure}
As an example, we construct sequences of steady-state disc configurations for a fixed $\zeta$,
varying the inclination angle of the outer disc $\beta_\star$. In Figs. \ref{fig:zeta1seq}-\ref{fig:zeta5seq}, we show the resulting $d\cos{\beta_\star}/dt$ for $\zeta=1,3$ and $5$, and compare with the predictions of the flat-disc approximation. For $\zeta=1,3$, the general behavior is similar to what the flat-disc approximation predicts. As seen in subsection~\ref{sec:stddisc},
the precessional torque will favor alignment of the stellar spin with the disc orbital angular momentum,
so that the numerical results usually show that $\beta_{+,{\rm Num}} \leq \beta_{+,{\rm Flat}}$ --- at least
as long as $F(\theta_\star)$ is of order unity.
For $\zeta=5$, however, significant differences become visible. At small inclination angles, the system will evolve towards $\beta_\star=0$, while at large inclinations, the system will evolve towards $\beta_\star \approx 165^\circ$. In the intermediate region
$15^\circ \lo \beta_\star \lo 135^\circ$, the system will evolve towards
$\beta_\star \approx 50^\circ$ (for $\lambda=0.5$).
Finally, for larger $\zeta$ we are in a completely different regime: for some inclinations, two steady-state solutions exist. Clearly, to determine which of those steady-state solution is relevant
requires numerical integration of the time evolution of the star-disc system.
\begin{figure}
\includegraphics[width=8cm]{Zeta1}
\caption{Secular evolution rate of the spin-disc inclination angle $\beta_\star$ for discs
with $\zeta=1$.
The time derivative of $\cos{\beta_\star}$ is given for the flat-disc approximation (Flat) and for our
numerical results for warped discs (Num), as well as for 3 different values of the accretion
parameter $\lambda=0,0.5,1$ (see equation~\ref{eq:Nl}).
The angle $\beta_\star$ will increase if $d\cos{\beta_\star}/dt<0$.}
\label{fig:zeta1seq}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{Zeta3}
\caption{Same as Fig. \ref{fig:zeta1seq}, except that we use $\zeta=3$.}
\label{fig:zeta3seq}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{Zeta5}
\caption{Same as Fig. \ref{fig:zeta1seq}, except that we use $\zeta=5$. Note that when the disc is
nearly aligned or nearly anti-aligned, the qualitative behavior of the solution is different from
the predictions of the flat-disc approximation.}
\label{fig:zeta5seq}
\end{figure}
Our current understanding of the effects of magnetic fields close to the inner edge of
the accretion disc is not sufficient to determine with certainty the range of realistic values of the
parameters $\zeta$ and $\eta$. However, their favored values lie in a region of parameter space where the flat disc approximation appears to hold relatively well ($\zeta \sim 1$, $\eta \sim 0.5$).
The large deviations from the flat disc model observed at high $\zeta$ are thus unlikely to be
encountered in astrophysical systems, though they cannot be entirely ruled out. Thus, these results
imply that the flat disc approximation is likely to be justified, with the caveat that it tends to
overestimate the value of the misalignment angle $\beta_+$.
\subsection{Varying the precessional torque}
\label{sec:npinf}
The results presented in previous subsections were all obtained with $f=0$, thus
fixing the choice of the
function $F(\theta_\star)$ characterizing the magnetically driven disc precession rate. Using
different values of $f$, even at low $\zeta$ it is possible to find discs which require numerical
solutions to determine their warp profiles. For example, if we choose $f=1$ instead of $f=0$, the sign and
magnitude of $\Omega_p$ will change. The twist of the disc becomes more important, so that even
for $\zeta=1$, $\eta=0.5$, there is a significant deviation from the behavior of the flat-disc
configuration. Comparisons between the disc profiles for different $f$ can be found in
Fig. \ref{fig:ftilt}. The most important feature of these profiles is that, for the larger values of $f$, we have
a large twist $\gamma(r_{\rm in})$. Hence, the precession term in
equation~(\ref{eq:warpedspinevol})
(proportional to $n_p$), which does not contribute to the evolution of $\beta_\star$
in the flat-disc approximation, now has a significant impact. For a twist $\gamma$ such that $\sin
(\gamma-\gamma[r_{\rm out}]) F(\theta_\star)>0$, the precession term directly contributes to the
alignment of the outer disc axis with the stellar spin. This is always the case for disc
twists $|\gamma_{\rm in}| \leq 180^\circ$, as a positive $F(\theta_\star)$ causes the inner disc
to precess in the prograde direction, while $F(\theta_\star) \leq 0$ causes a retrograde precession.
If the precessional torque becomes large enough compared to the warping torque
(proportional to $n_w$), the long-term evolution of the stellar spin direction will be modified. For our standard parameters and the choices of $\eta=0.5$ and $\lambda=0.5$, we find that discs with
$f \go 0.5$ will always lead to spin-disc alignment, contradicting the flat
disc predictions (see Fig. \ref{fig:F5seq}). However, it is worth noting that some configurations with
high $f$ still allow for
long term misalignments: for example, for $f=0.5$, increasing the strength of the azimuthal B-field
to $\zeta=3$ leads to a behavior very similar to what we found for $f=0$, $\zeta=3$ (see Fig. \ref
{fig:zeta3seq}), while decreasing the viscosity parameter to $\alpha=0.015$ (and choosing
$\delta=0.01$) limits the twist of the disc, so that the flat-disc approximation remains valid.
\begin{figure}
\includegraphics[width=8cm]{Proff}
\caption{{\it Upper panel: }Disc tilt angle $\beta$ for different choices of $f$.
{\it Lower panel: } Twist angle $\gamma$ for the same disc parameters. The outer edge of
the disc is fixed at $r_{\rm out}=10^4r_{\rm in}$.}
\label{fig:ftilt}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{Fseq}
\caption{Same as Fig. \ref{fig:zeta1seq}, except that we set $\lambda=0.5$ and choose
$f=0,0.16,0.5,1$. For $f=0.16$, the disc twist is very small, and our numerical result matches the
flat-disc approximation better than for $f=0$. For larger values of $f$, the disc twist is large, and
the disc will align with the stellar spin regardless of the initial value of $\beta_\star$.}
\label{fig:F5seq}
\end{figure}
The above results show that the precessional torque can in principle cause non-negligible deviations from the flat-disc model. Nevertheless, for the largest part of the favored parameter
space (small $\alpha$, or large $\alpha$ with small precessional torque), the flat-disc approximation
is justified.
\subsection{Influence of $Q_3$}
\label{sec:Q3}
As mentioned before, most previous works on warped discs have been done using the formalism
of \citet{pr1}, which corresponds to $Q_3=0$ in the formalism of \citet{og1}. This is a good approximation, as long as the influence of the small precessional torque due to $Q_3 \neq 0$ is negligible. For the system studied here a small change in the twist of the disc can affect whether a configuration will align over time, or be driven towards a stable misaligned steady-state. In Fig.~\ref{fig:tiltq}, we show the difference in the disc tilt and twist for our standard disc with $\eta=0.5$, using both the formalism of \citet{og1} and \citet{pr1}. Differences in the warp of the disc of a few degrees are observed, though the warps are small in both cases. Because the precessional torque acting on the disc using $Q_3=0$ is smaller, it will be less twisted. This leads to a behavior slightly closer to what the flat-disc approximation predicts. If we choose the accretion parameter $\lambda=0.5$ (equation~\ref{eq:Nl}), then the flat-disc approximation predicts a stable misaligned configuration at $\beta_+=45^\circ$. For warp discs, we find that the misalignment angle is significantly smaller, $\beta_+=32^\circ$. The
difference in $\beta_+$ between profiles obtained using $Q_3=3/8$ and $Q_3=0$ is only
$0.5^\circ$, which is negligible at the level of accuracy our model can achieve.
\begin{figure}
\includegraphics[width=8cm]{TiltQ}
\caption{{\it Upper panel: }Disc tilt angle $\beta$ for different choices of $Q_3$.
{\it Lower panel: } Twist angle $\gamma$ for the same disc parameters. The outer edge of
the disc is fixed at $r_{\rm out}=10^4r_{\rm in}$.}
\label{fig:tiltq}
\end{figure}
$Q_3$ can also influence the qualitative behavior of the steady-state solutions at high $\zeta$. For strongly warped discs, it is sometimes possible to have two solutions satisfying the steady-state equations. Choosing $Q_3 \neq 0$ seems to limit the size of the region of parameter space where this happens. For example, for $f=0$, $\eta=0.5$, $\theta_\star=10^{\circ}$ and $\zeta=5.5$, two profiles are acceptable steady-state solutions if we choose $Q_3=0$, while for $Q_3=3/8$ the same parameters lead to a unique solution (see Fig.~\ref{fig:2sol}).
\begin{figure}
\includegraphics[width=8cm]{TiltTwoProfs}
\caption{Disc tilt angle $\beta$ for $\zeta=5.5$ and $Q_3=0,3/8$. For $Q_3=0$, the steady-state
equations admit two solutions.}
\label{fig:2sol}
\end{figure}
\subsection{Low-Viscosity Discs}
In the linear regime, the equations determining the steady-state profile of the disc are identical
for the $\alpha \leq \delta$ and $\alpha \geq \delta$ cases. In the previous subsections, we have seen that our approximate formulae for the amplitude of the warp, equations~(\ref{eq:deltal})
and~(\ref{eq:approxwarp}), give relatively good results for $\alpha \sim \delta = 0.1$.
We also confirmed numerically the $\alpha^2$ dependence of the
warp of the disc, shown in Figure~\ref{fig:tiltalpha}.
\begin{figure}
\includegraphics[width=8cm]{TiltAlpha}
\caption{Disc tilt angle $\beta$ for $\alpha=0.15,0.015,0.0015$. To check the $\alpha^2$
dependence of $\beta$, the deviation from a flat disc is multiplied by $10^2$ and $10^4$
for $\alpha=0.015$ and $\alpha=0.0015$ respectively.}
\label{fig:tiltalpha}
\end{figure}
For smaller viscosities, $\alpha \leq \delta$, we expect the warp to be even smaller, and the linear approximation more accurate. Hence, we can
immediately deduce that the time-averaged warp of low-viscosity discs will be extremely small. For such discs, the flat-disc approximation will nearly always give accurate results for the secular evolution of the stellar spin.
\section{Time Evolution of Disc Warp Toward Steady-State}
Having established the steady-state of warped discs, we now study
their time evolution starting from some generic initial conditions,
when the symmetry axis of the outer disc is misaligned with the
stellar spin. To this end, we evolve
equations~(\ref{eq:dtlambda})-(\ref{eq:dtl}) for high-viscosity discs
and~(\ref{eq:Vwave}) for low-viscosity discs. Since the timescale to
reach steady state is generally much longer than the local disc
warp/precession time $\Gamma_w^{-1}\sim \Omega_p^{-1}$ [see
Eq.~(\ref{eq:twarp})], an implicit evolution scheme is
necessary. Our numerical method is detailed in the Appendix.
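The need for an implicit scheme comes from the stiffness of the
diffusion-type warp equation: an explicit update would be limited to
timesteps far shorter than the timescales of interest. A backward-Euler
sketch of the general idea (our own illustration, not the scheme of the
Appendix) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import newton_krylov

def implicit_step(U_old, dt, F):
    # solve U_new - U_old - dt * F(U_new) = 0 for U_new
    return newton_krylov(lambda U: U - U_old - dt * F(U), U_old)

# stiff toy problem, F(U) = A U: stable even for dt >> 1/1000
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
U = np.array([1.0, 1.0])
for _ in range(10):
    U = implicit_step(U, 0.1, lambda u: A @ u)
print(U)
\end{verbatim}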
\subsection{High-Viscosity Discs}
\label{subsec:highvis}
For viscous discs with $\alpha\go \delta=H/r$, we expect the evolution of
the system to occur over the timescale $t_{\rm vis}(r) =
r^2/\nu_2=2\alpha/(\delta^2\Omega)$.
Note that at the disc inner edge, $t_{\rm vis}(r_{\rm in})=(3\alpha^2\zeta
\cos^2\theta_\star/2\eta^{3.5})\Gamma_w^{-1}(r_{\rm in})$ [see Eq.~(\ref{eq:tvisgam})] is
smaller than the warping timescale for typical parameters.
In terms of the dimensionless time $\tau= t/t_{\rm vis}(r_{\rm in})$,
we expect the disc to reach the steady-state profile
at radius $r$ within a time of order $\tau \sim (r/r_{\rm in})^{3/2}$
(assuming constant $\delta$).
To test this expectation, we
evolve our standard disc model (see Section 4) for $\eta=0.5$ and
different locations of the outer radius ($r_{\rm out}=100 r_{\rm in}$
and $r_{\rm out}=1000r_{\rm in}$), as well as for a more viscous disc
with $\alpha=0.3$. The disc is initialized in a flat configuration
with $\hat{\mbox{\boldmath $l$}}=\hat{\mbox{\boldmath $l$}}_{\rm out}$ and we
observe its evolution towards the steady-state profile. In
Figs.~\ref{fig:evolvis1}-\ref{fig:evolvis3}, we plot the disc warp
profiles at times $\tau=10^{3n/4}$ for $n=0,1,...,4$ ---
by which point the viscous forces
should have brought the disc into its steady state up to
radius $r\sim 10^{n/2} r_{\rm in}$.
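As a concrete illustration of this scaling (our own numerical sketch, not part of the original calculation), the dimensionless settling times implied by $\tau \sim (r/r_{\rm in})^{3/2}$ can be tabulated directly; the values reproduce the $\tau\sim 1000$ and $\tau\sim 10^{4.5}$ figures quoted below for $r=100\,r_{\rm in}$ and $r=1000\,r_{\rm in}$:
\begin{verbatim}
# Sketch: settling time tau ~ (r/r_in)^(3/2) in units of t_vis(r_in),
# valid for constant delta = H/r, so that t_vis(r) scales as r^(3/2).
for r_over_rin in [10, 100, 1000]:
    print(r_over_rin, round(r_over_rin ** 1.5))
# -> 10 32, 100 1000, 1000 31623 (i.e. ~10^4.5)
\end{verbatim}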
\begin{figure}
\includegraphics[width=8cm]{R2A15}
\caption{Time evolution of the disc tilt angle profile for standard discs with
$\alpha=0.15$ and $r_{\rm out}=100r_{\rm in}$. Time is in units of
$t_{\rm vis}(r_{\rm in})$.}
\label{fig:evolvis1}
\end{figure}
In all cases, we see that the evolution occurs approximately on the
expected timescales: the local distortion of the disc (i.e.
$\partial\hat{\mbox{\boldmath $l$}}/\partial\ln r$) up to radius $r$ does not vary much
past the viscous timescale at that radius. The orientation of the disc
($\hat{\mbox{\boldmath $l$}}$), on the other hand, continues to change to accommodate the
evolution of the disc at larger radii. Overall, the disc will
reach its equilibrium profile within the viscous timescale
$t_{\rm vis}(r_{\rm warp})$, where $r_{\rm warp}$ is defined as the
largest radius at which the warp $|\partial\hat{\mbox{\boldmath $l$}}/\partial\ln r|$
is significant.
\begin{figure}
\includegraphics[width=8cm]{R3A15}
\caption{Same as Fig.~\ref{fig:evolvis1}, except for $r_{\rm out}=1000r_{\rm in}$.}
\label{fig:evolvis2}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{R2A3}
\caption{Same as Fig.~\ref{fig:evolvis1}, except for $\alpha=0.3$.}
\label{fig:evolvis3}
\end{figure}
For the two simulations with the outer disc boundary at
$r_{\rm out}=100r_{\rm in}$, we find that $r_{\rm warp} \sim r_{\rm out}$
and the disc reaches its steady-state profile within $\tau \sim 1000$.
At later times, the evolution of the profiles becomes negligible. For the larger disc
($r_{\rm out}=1000r_{\rm in}$), the situation is slightly
different. At $\tau=1000$, the disc has reached its steady-state
distortion up to $r=100r_{\rm in}$. The disc will still evolve up to
$\tau \sim 10^{4.5}$, but as the warp is very small for $r\go 100r_{\rm in}$,
the changes in the profile are minimal. As most discs studied
in this paper show negligible warps for $r>(10^2-10^3)r_{\rm in}$, we
expect the steady-state to be reached within at most
\begin{equation}
t_{\rm vis}(10^3 r_{\rm in}) \sim 10^{4.5} \left(\frac{r^2}{\nu_2}
\right)_{\rm in}\sim 500\,\left({\alpha\over 0.15}\right)
\left({\delta\over 0.1}\right)^{-2}~{\rm yrs},
\end{equation}
regardless of the outer radius of the disc. As this is much
smaller than the evolution timescale for the spin of the star, we are
justified to consider only the steady-state configuration of the disc
when attempting to determine the long-term evolution of the
misalignment between the stellar spin and the orientation of the
outer disc.
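As a rough consistency check on this estimate (our sketch; the stellar mass and inner radius are assumed values typical of a magnetic T Tauri system, not quantities taken from the equations above), the prefactor can be recovered as follows:
\begin{verbatim}
import math
# Sketch: t_vis(r_in) = 2*alpha/(delta^2*Omega(r_in)), with Keplerian
# Omega = 2*pi*(r/AU)^(-3/2) rad/yr for an assumed 1 M_sun star and an
# assumed inner radius r_in ~ a few stellar radii.
alpha, delta, r_in_AU = 0.15, 0.1, 0.02
Omega_in = 2 * math.pi * r_in_AU ** -1.5       # rad/yr
t_vis_in = 2 * alpha / (delta**2 * Omega_in)   # ~0.014 yr
print(10**4.5 * t_vis_in)  # ~4e2 yr, of order the ~500 yr quoted above
\end{verbatim}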
\subsection{Low-Viscosity Discs}
The evolution of low-viscosity discs ($\alpha\lo \delta$) is
qualitatively different from high-viscosity discs.
According to equation~(\ref{eq:Vwave}), perturbations around the steady-state
propagate as bending waves, at roughly half the local
sound speed. Thus, we expect the disc to settle to an equilibrium
within the propagation timescale of these waves,
\begin{eqnarray}
\label{eq:twave}
t_{\rm wave} &=& \int_{r_{\rm in}}^{r_{\rm out}} \frac{2dr}{c_s}
\sim \frac{4}{3\delta \Omega(r_{\rm out})}\\
\nonumber
&\approx&(2\times10^3~{\rm yrs})\left(\frac{0.1}{\delta}\right)
\left(\frac{r_{\rm out}}{100~{\rm AU}}\right)^{3/2}.
\end{eqnarray}
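This estimate is easy to verify numerically (our sketch; the Keplerian $\Omega$ assumes a solar-mass star, which is not specified by the formula itself):
\begin{verbatim}
import math
# Sketch: t_wave = 4/(3*delta*Omega(r_out)) for an assumed 1 M_sun
# star, with Omega = 2*pi*(r/AU)^(-3/2) rad/yr.
delta, r_out_AU = 0.1, 100.0
Omega_out = 2 * math.pi * r_out_AU ** -1.5
t_wave = 4.0 / (3.0 * delta * Omega_out)
print(t_wave)      # ~2.1e3 yr, matching the estimate above
print(2 * t_wave)  # round-trip (reflected-wave) time, ~4e3 yr
\end{verbatim}
The doubled value anticipates the $\approx 4000$\,yrs condition quoted below for a bending wave that must cross the disc and return after reflection at the inner edge.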
In Figures~\ref{fig:wave1}-\ref{fig:wave3}, we show the evolution of
the disc tilt profile $\beta$ as the bending wave propagates across the disc,
using our standard choice of parameters for the magnetic torques. We consider
different discs: the first two use $\alpha=0.01$ and have their
outer boundaries at $r_{\rm out} = 100 r_{\rm in}$
(Fig.~\ref{fig:wave1}) and $r_{\rm out} = 1000 r_{\rm in}$
(Fig.~\ref{fig:wave2}). The third has a higher viscosity
$\alpha=0.05$, and $r_{\rm out} = 1000 r_{\rm in}$
(Fig.~\ref{fig:wave3}). All three simulations are started from a flat
disc configuration, and show the same behavior: the magnetic torques
perturb the inner disc, and the perturbation propagates outwards over
the timescale $t_{\rm wave}$. Again, this timescale is much less
than the spin evolution timescale.
\begin{figure}
\includegraphics[width=8cm]{WaveR2Std}
\caption{Evolution of the disc tilt angle profile for discs with
$\alpha=0.01$ and $r_{\rm out}=100r_{\rm in}$.
The unit of time is $(\delta \Omega(r_{\rm in}))^{-1}$.}
\label{fig:wave1}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{WaveR3Std}
\caption{Same as Fig.~\ref{fig:wave1} except
for $r_{\rm out}=1000r_{\rm in}$.}
\label{fig:wave2}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{WaveR3A5}
\caption{Same as Fig.~\ref{fig:wave1} except
for $\alpha=0.05$ and $r_{\rm out}=1000r_{\rm in}$.}
\label{fig:wave3}
\end{figure}
The location of the outer disc radius apparently does not have a
significant influence on the final state of the system. The viscosity,
on the other hand, affects the disc warp amplitude as predicted by
equations~(\ref{eq:deltal}) and~(\ref{eq:approxwarp}): the warp is
proportional to $\alpha^2$, with the amplitude
$|\beta_{\rm out}-\beta_{\rm in}|$ given
by Eq.~(\ref{eq:approxwarp}) to within a factor of two.
\section{Variations of the Outer Disc Orientation}
In the previous section, we have studied the time evolution of warped
discs under the assumption that the orientation of the outer disc is
fixed. However, a protoplanetary disc is formed inside the star forming core
of a turbulent molecular cloud [e.g., \citet{mo1}].
Thus, in general, we expect the orientation of the outer disc
to vary in time. In this section, we study
how the warped disc and particularly the inner disc orientation respond
when the outer disc orientation varies by some finite amplitude
(chosen to be $20^\circ$) over a period of time
short compared to the evolution timescale of the disc, and how
such variations affect the secular evolution of the stellar spin direction.
\subsection{High-Viscosity Discs}
\begin{figure}
\includegraphics[width=8cm]{ViscousEvolBetaOut}
\caption{Time evolution of the disc tilt angle profile $\beta$
for $\alpha=0.15$ and $r_{\rm out}=1000r_{\rm in}$, when
the outer disc orientation is changed from $\beta(r_{\rm out})
=10^\circ$ at $t=t_0-\Delta t$ to $\beta(r_{\rm out})=30^\circ$ at $t=t_0$,
with $\Delta t=10^3t_{\rm vis}(r_{\rm in})$.
Time is in units of $t_{\rm vis}(r_{\rm in})$.}
\label{fig:visevol}
\end{figure}
We first consider a viscous disc with $\alpha=0.15$ and
$r_{\rm out}=1000r_{\rm in}$. We choose to vary the outer disc orientation
over $\Delta t=1000 t_{\rm vis}(r_{\rm in}) \sim t_{\rm vis}(100r_{\rm in})$.
As in the case of the evolution towards the steady-state,
the evolution of the disc occurs on the viscous timescale $t_{\rm vis}$
(see Fig.~\ref{fig:visevol}). However, as significant changes
now take place at the outer radius, the new steady-state configuration
will be reached in a time of order the viscous timescale at the
outer radius $t_{\rm vis}(r_{\rm out})$, which is larger than $t_{\rm vis}
(r_{\rm warp})$ (see Section \ref{subsec:highvis}).
Nevertheless, even though the steady-state is likely to be reached over
a longer timescale than when the outer orientation is fixed, we still
expect $t_{\rm vis}(r_{\rm out})$ to be significantly less than
the evolution time for the stellar spin $t_{\rm spin}$.
Thus, if the variation of the orientation of the outer disc occurs on a
timescale shorter than $t_{\rm spin}$, the evolution of the stellar
spin is well described by the approximation in which the disc is
assumed to be in its steady-state configuration at all times,
adapting instantaneously to modifications of its orientation at the
outer boundary.
\subsection{Low-Viscosity Discs}
\begin{figure}
\includegraphics[width=8cm]{BetaEvolWave}
\caption{Same as Fig.~\ref{fig:visevol} except for
a disc with $\alpha=0.01$ and $r_{\rm out}=1000r_{\rm in}$,
and the outer disc orientation varies by $20^\circ$
over $\Delta t=t_{\rm wave}(100r_{\rm in})$.}
\label{fig:lvevol}
\end{figure}
The same type of evolution can also be studied for low-viscosity
discs. If we choose the viscosity parameter $\alpha=0.01$, and change
the orientation of the disc by $20^\circ$ over a timescale $\Delta t =
t_{\rm wave}(100r_{\rm in})$, where $t_{\rm wave}(r)$ is defined by
equation~(\ref{eq:twave}) with $r_{\rm out}$ replaced by $r$,
we obtain the evolution shown in
Fig.~\ref{fig:lvevol}. We see that a bending wave created at the
outer boundary propagates inward, until it reaches the inner edge
of the disc where it is reflected. The total time required for the
disc to reach a new steady-state is thus twice the crossing time of
the bending wave, $\sim 8/(3\delta \Omega(r_{\rm out}))$. For
low-viscosity discs, the condition for the steady-state approximation
to be valid when the orientation of the outer disc is allowed to
change over time is thus
\begin{equation}
t_{\rm spin} \go
\frac{8}{3\delta\Omega(r_{\rm out})} \approx 4000~{\rm yrs}.
\end{equation}
As $t_{\rm spin} \approx 1~{\rm Myr}$ [see Eq.~(\ref{eq:tspin})], this condition is easily satisfied.
Also note that the evolution equations of bending waves
adopted in our analysis are based on the flat-disc approximation. When the outer
disc boundary evolves as fast as shown in Fig.~\ref{fig:lvevol}, this approximation
is no longer valid. Thus in practice, we should also require $\Delta t \gg
8/[3\delta\Omega(r_{\rm out})]$.
\section{Application: Anti-Aligned Exoplanetary Orbits}
\label{sec:aa}
Our calculations in Sections 3-5 show that for the most likely
physical parameters that characterize a magnetic star -- disc system,
the disc warp is small. Therefore
the long-term evolution of the stellar spin is generally well-described by
equation~(\ref{eq:flatspinevol}), as long as the orientation of the
outer disc is kept constant. According to~(\ref{eq:flatspinevol}),
three types of spin evolution trend are possible, depending on the parameters of
the system and the initial conditions (Paper I).
If $\tilde\zeta<\lambda$, the stellar spin and the disc axis will
always align (given enough time) regardless of their initial relative
inclination. If $\tilde\zeta>\lambda$, misalignment between the disc and the
stellar spin will develop, evolving towards one of the two possible
final states: either $\beta_\star=\beta_+<90^\circ$, or a perfectly
anti-aligned configuration. The second configuration can only be
reached if the initial disc has a retrograde rotation with respect to the stellar spin, with
$\beta(t=0) > 180^\circ - \beta_+ = \beta_-$. In this case, to explain the observed
exoplanetary systems with retrograde orbits relative to the stellar spin
\citep{triaud}, we have to require that the disc rotates in a very different
direction from the stellar rotation axis during the time of planet formation.
As discussed in Paper I [called scenario (2) in Section 5 of Paper I], this
is certainly possible if we consider the complex nature of star formation
in molecular clouds and in star clusters [see also~\citet{blp}].
In Paper I, we describe another potential pathway to create retrograde
exoplanetary systems [called scenario (1)] starting from
prograde-rotating discs. If the disc axis and stellar spin axis are
initially nearly (but not perfectly) aligned, and the magnetic torques are such that the aligned
configuration is unstable, then the misalignment angle will tend towards
$\beta_+$, and no retrograde planets can be produced.
However, this is only true if the orientation of the outer disc does not vary.
If instead we assume that the outer disc experiences a
change of its orientation $\Delta \beta > \beta_- - \beta_+$ over a timescale
$\Delta t$ sufficiently long that this change can propagate to the inner disc,
but short enough that the stellar spin direction does not significantly evolve
over $\Delta t$, then the star -- disc inclination can jump to
$\beta>\beta_-$, and continue to evolve towards anti-alignment.
These conditions can be summarized as:
\begin{eqnarray}
\Delta \beta &>& \beta_- - \beta_+=180^\circ-2\cos^{-1}\!\sqrt{\lambda\over
\tilde\zeta},\\
t_{\rm disc} & \lo & \Delta t \lo t_{\rm spin},
\end{eqnarray}
where the disc warp evolution time $t_{\rm disc} \sim t_{\rm vis}(r_{\rm out})$
if $\alpha\go \delta$ (high-viscosity disc) and $t_{\rm disc} \sim t_{\rm wave}$ for
$\alpha\lo\delta$ (low-viscosity disc). As we have seen in Sections 6.1-6.2,
the second and third conditions are fairly easy to satisfy, as $t_{\rm disc}$
is at most of order $10^4~{\rm yrs}$ for a viscous
disc with $r_{\rm out} \sim 10^4r_{\rm in}$ (and $t_{\rm disc}$
would be significantly shorter for a smaller outer disc radius),
while $t_{\rm spin} \sim 10^6 {\rm yrs}$ for typical parameters
[See Eq.~(\ref{eq:tspin})]. The potential to satisfy
the first condition, on the other hand, will
depend on the fraction of the disc angular momentum which is
accreted by the star (the parameter $\lambda$ in
equation~\ref{eq:Nl}) and the magnetic warp efficiency (the parameter
$\tilde\zeta$).
If the star only accretes a small fraction of the angular momentum
($\lambda\ll 1$), then the angles $\beta_{\pm}$ are both close to $90^\circ$,
and small variations of the outer disc are sufficient
to allow the system to jump to the retrograde state and eventually evolve towards
the anti-aligned configuration.
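The first condition is straightforward to evaluate (our sketch, using the flat-disc fixed points implied by the expression above, i.e. $\cos\beta_+=\sqrt{\lambda/\tilde\zeta}$ with $\beta_-=180^\circ-\beta_+$):
\begin{verbatim}
import math
# Sketch: required outer-disc reorientation
# Delta_beta > beta_- - beta_+ = 180 - 2*acos(sqrt(lambda/zeta)) deg.
def dbeta(lam, zeta):
    return 180.0 - 2.0 * math.degrees(math.acos(math.sqrt(lam / zeta)))
print(dbeta(0.5, 1.0))  # 90 deg: a large jump is needed
print(dbeta(0.1, 1.0))  # ~37 deg: small lambda -> a modest jump suffices
\end{verbatim}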
\section{Discussion}
The main finding of our paper is that although magnetic interactions
between a protostar and its disc have a strong tendency to induce
warping in the inner disc region, internal stresses in the disc
tend to suppress the warping under most circumstances. The result is that
in steady-state, the whole protoplanetary disc approximately lies
in a single plane, which is determined by the disc angular momentum at large radii
(averaging out the dynamical warps which vary on timescales of order the stellar rotation period ---
such dynamical warps do not affect the secular evolution of the stellar spin).
The reason for the small steady-state disc warp is that
the effective viscosity acting to suppress disc warp,
$\nu_2\simeq \nu_1/(2\alpha^2)$, is much larger than the viscosity
($\nu_1=\alpha H c_s$) responsible for angular momentum transfer within the
disc \citep{pp1,og1}. In fact,
our analysis of the steady-state magnetically driven disc warp shows that,
in the linear regime, the disc inclination angle (relative to the
stellar spin axis) varies from the outer disc to inner disc by the amount
[see Eqs.~(\ref{eq:cosbeta}) and (\ref{eq:approxwarp})]
\begin{equation}
|\beta_{\rm in}-\beta_{\rm out}|\sim
\left(t_{\rm vis}\Gamma_w\sin 2\beta\right)_{\rm in}
\sim {\alpha^2\zeta\sin (2\beta_{\rm in})\over\eta^{7/2}},
\end{equation}
where $t_{\rm vis}=r^2/\nu_2$ is the viscous time and
$\Gamma_w$ is the warping rate due to the magnetic torque.
This result is valid regardless of whether the warp perturbations
propagate diffusively (for $\alpha\go H/r$, high-viscosity discs) or as bending waves
(for $\alpha\lo H/r$, low-viscosity discs). Thus, for the preferred values of the parameters
$\eta\sim 0.5$, $\zeta\sim 1$, we find
$|\beta_{\rm in}-\beta_{\rm out}|\ll 1$ for $\alpha\ll 0.3$.
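Evaluating this scaling numerically makes the $\alpha\ll 0.3$ criterion explicit (our sketch; since the relation is an order-of-magnitude estimate, prefactors of order unity are ignored):
\begin{verbatim}
import math
# Sketch: |beta_in - beta_out| ~ alpha^2*zeta*sin(2*beta_in)/eta^(7/2)
def warp(alpha, zeta=1.0, eta=0.5, beta_in_deg=45.0):
    return alpha**2 * zeta * math.sin(2*math.radians(beta_in_deg)) / eta**3.5
print(warp(0.15))  # ~0.25 rad: a modest warp
print(warp(0.3))   # ~1 rad: the warp is of order unity and linear
                   # theory becomes marginal
\end{verbatim}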
Moreover, our analysis of the time evolution of warped discs shows
that, starting from a generic initial condition, the steady-state
can be reached quickly, on a timescale shorter than the characteristic timescale
for the evolution of the stellar spin orientation.
Overall, our study of magnetically driven warped discs presented in
this paper justifies the approximate analysis (based on the flat-disc
approximation) of the long-term evolution
of spin-disc misalignment presented in Paper I. Nevertheless,
we note that even relatively small disc warps can modify the
``equilibrium'' spin -- disc inclination angles $\beta_\pm$
(see Fig.~1) from the flat-disc values, thereby affecting the
``attractors'' of the long-term evolution of the
spin -- disc inclination angle. If we allow for more extreme
parameters (but still reasonable by physical considerations)
for the disc -- star systems, much larger disc warps become
possible and qualitatively different evolutionary trends
for $\beta$ may be produced (see Figs.~\ref{fig:zeta1seq}-\ref{fig:zeta5seq}
and~\ref{fig:F5seq}).
Taken together, the results of this paper and Paper I demonstrate that
at the end of the first stage of the planetary system formation (see
Section 1), the inclination angle between the stellar spin and the
angular momentum axis of the planetary orbit may have a wide range of values,
including alignment and anti-alignment (see also Section~\ref{sec:aa}). Dynamical processes (e.g.,
planet-planet scatterings and Kozai interactions) in the second stage,
if they exist, would further change the spin -- orbit misalignment angle.
More work is needed to determine the relative importance of the two
stages in shaping the properties of planetary systems. Currently, the
orbital eccentricity distribution of exoplanetary systems
suggests that the second stage is important (e.g., Juric \& Tremaine
2008). On the other hand, as noted in paper I, the $7^\circ$ misalignment between the ecliptic plane of the solar system and the sun's equatorial plane may be explained by the magnetically
driven misalignment effect studied in this paper. Also, the
recent discovery of Kepler-9 (Holman et al.~2010), a planetary system
with two or three planets that lie in the same orbital plane, seems to
suggest that at least some planetary systems are formed in a ``quiet''
manner without violent multi-body interactions. Obviously, measuring
the stellar obliquity of such ``quiet'' systems would be most
valuable.
\section*{Acknowledgments}
DL thanks Doug Lin, Gordon Ogilvie and other participants of the KITP
Exoplanet program (Spring 2010) for useful discussions, and
acknowledges the hospitality of the Kavli Institute for Theoretical
Physics at UCSB (funded by the NSF through Grant PHY05-51164).
FF thanks Harald Pfeiffer for useful discussions on the numerical evolution of warped
discs, as well as for access to his evolution code for comparison tests.
We thank the referees for useful comments which improved the paper.
This work has been supported in part by NASA Grant No NNX07AG81G and NSF
Grant No AST 1008245.
\section{Introduction}
Relaxor ferroelectrics have received a great deal of attention from the physics and materials science communities because of their exceptional dielectric properties and potential for use as piezoelectric devices.~\cite{Park97:82,Cowley11:60,Xu10:79} Pb(Mg$_{1/3}$Nb$_{2/3}$)O$_3$ (PMN), Pb(Zn$_{1/3}$Nb$_{2/3}$)O$_3$ (PZN), and Pb(Zr$_{1-x}$Ti$_{x}$)O$_{3}$ (PZT) are prototypical relaxors that display a broad and frequency-dependent peak in the dielectric response.~\cite{Ye98:81,Ye09:34} Single crystals of solid solutions between PMN, PZN, or PZT and ferroelectric PbTiO$_3$ (PT) exhibit piezoelectric coefficients and electro-mechanical coupling factors that are much larger than those of conventional piezoelectric materials. Despite many theoretical and experimental studies of these and other relaxor ferroelectrics, neither the relaxor ground state nor the relaxor transition is well understood, and there are few materials with comparable dielectric properties.~\cite{Hirota06:75}
The structural properties of PMN and PZN display several common features. At a temperature $T_d$, far above the critical temperature $T_C$ below which long range ferroelectric order can be induced in both PMN and PZN by cooling in a sufficiently strong external electric field, evidence of the formation of local regions of polar order, also known as polar nanoregions, has been obtained from refractive index measurements and neutron scattering pair distribution function analysis.~\cite{Burns83:48,Jeong05:94} The existence of such locally ordered regions implies that short-range, polar correlations are present, and these are manifested in the form of strong neutron and x-ray diffuse scattering intensity located in the vicinity of Bragg peaks (Ref.~\onlinecite{Vak95:37,Hirota02:65,Welberry05:38}). The geometry of this diffuse scattering in reciprocal space is illustrated for two different Brillouin zones in PMN at room temperature in Fig.~\ref{diffuse_summary} (identical results have been obtained for PZN as well). In contrast to the sharp, resolution-limited, Bragg peaks that characterize long-range ordered structures, the diffuse scattering originating from short-range, polar correlations is broad in reciprocal space and forms rods that extend along $\langle 110\rangle$.~\cite{Xu03:70,Xu04:69} Several models have been proposed to explain this reciprocal space structure, including pancake-shaped regions in real space (Ref.~\onlinecite{Xu03:70,Welberry05:38,Welberry06:74}), polar domain walls oriented along $\langle 110 \rangle$ (Ref.~\onlinecite{Pasciak07:76}), correlations between chemically-ordered regions (Ref.~\onlinecite{Ganesh10:81}), Huang scattering (Ref.~\onlinecite{Vak05:7,Vak10:400}), and relatively isotropic displacements of the lead cations (Ref.~\onlinecite{Bosak11:xx}). There is thus no consensus on, or satisfactory description of, the real-space structure of the short-range, polar correlations that give rise to the large and temperature-dependent elastic diffuse scattering cross section in these relaxor materials.
The similarity of the reciprocal space geometries of the diffuse scattering in PMN and PZN demonstrates that they are structurally similar on short length scales. Both compounds also exhibit similar long-range ordered, average cubic crystal structures. At high temperatures both systems possess cubic Pm$\overline{3}$m symmetry, while at low temperatures an average cubic unit cell is retained, at least insofar as neither compound undergoes a bulk structural phase transition as would be revealed by the splitting of a Bragg peak. This low-temperature, average cubic crystal structure has been well established for PMN using both x-ray and neutron scattering techniques.~\cite{Bonneau89:24,Bonneau91:91,deMathan91:03} However the case of PZN is controversial. Variable energy x-ray diffraction studies designed to probe the bulk and near-surface regions of single crystal PZN observed a cubic phase in the bulk and a rhombohedral phase in the ``skin'', which led to the discovery of an anomalous skin effect in relaxors.~\cite{Xu03:67} This discovery was later challenged by other researchers who, based on neutron powder diffraction measurements, concluded that PZN instead exhibits a uniform rhombohedral ground-state structure.~\cite{Kisi05:17} On this subject the weight of experimental evidence seems to favor an average cubic ground-state structure for bulk PZN because neutron diffraction measurements on pure PZN are plagued by enormous extinction effects and thus effectively probe only the near-surface region, and because similar skin effects were subsequently reported by x-ray diffraction studies of single crystal PZN doped with 4.5\% PT and 8\% PT (PZN-$x$PT).~\cite{Xu04:84,Xu06:79} Moreover, various diffraction studies of single crystal PMN doped with 10\% and 20\% PT (PMN-$x$PT) show that while the near-surface regions (probed by x-rays) are rhombohedrally distorted,~\cite{Dkhil01:65} the interior or bulk of these crystals (probed by neutrons) remains metrically cubic down to low temperatures, thus confirming the skin effect in this system.~\cite{Gehring04:16,Xu03:68} Subsequent neutron-based strain experiments found a significant skin effect in large single crystals of PMN as well that has since been observed with x-rays,~\cite{Conlon04:70,Stock:unpub,Xu06:79,Pasciak11:226} and local regions of polarization have been directly imaged near the surface using piezoresponse force microscopy.~\cite{Shvartsman11:108} Under the application of strong electric fields the bulk unit cell in PMN remains cubic, whereas the structure of the near-surface region distorts; however the intensity of the diffuse scattering arising from short-range, polar correlations can be suppressed by an external electric field, but only below the critical temperature $T_C \sim 210$\,K, which is indicative of a more ordered structure in the material.~\cite{Stock:unpub,Vak98:40}
In view of these results we believe that the dielectric properties of both PMN and PZN can be described in terms of just two temperature scales: a high-temperature scale $T_d$, below which static, short-range, polar correlations first appear, and a low-temperature scale $T_C$, below which long-range, ferroelectric correlations can be induced by cooling in the presence of a sufficiently strong electric field. Each of these temperature scales is reflected in the neutron elastic diffuse scattering cross section in PMN when measured as a function of temperature and electric field,~\cite{Stock:unpub} as shown schematically in the upper panel of Fig.~\ref{diffuse_cte}. The high-temperature scale $T_d$ is defined as the temperature at which the elastic diffuse scattering cross section becomes non-zero, and the low-temperature scale $T_C$ is defined as the temperature at which the elastic diffuse scattering can be suppressed by electric fields and ferroelectric domains are formed. As shown by a recent high-resolution spin-echo study of the elastic diffuse scattering in PMN,~\cite{Stock10:81} the dynamics of relaxors also reflects these same two temperature scales. The onset of diffuse scattering at $T_d$ was found to coincide with the appearance of static (at least on gigahertz timescales) polar nanoregions that coexist with dynamic, short-range, polar correlations. $T_C$ was found to coincide with the temperature at which all short-range, polar correlations are static (within experimental resolution).~\cite{Stock10:81} These two temperature scales are also clearly seen in measurements of the bulk polarization in La-doped PbZr$_{1-x}$Ti$_x$O$_3$ (PLZT) where $T_C$ marks the temperature at which the field-cooled and zero-field-cooled polarizations diverge.~\cite{Viehland92:46} These two temperature scales can be understood in terms of random dipolar fields that are introduced through the disorder that is inherently present on the B-site of lead-based perovskite relaxors.~\cite{Stock04:69,Gvas05:17,Westphal92:68,Fisch03:67} As a function of Ti content $x$, it has also been shown that $T_d$ and $T_C$ become equal at the morphotropic phase boundary (MPB), where the relaxor properties are suppressed in favor of those associated with a well-defined ferroelectric phase.~\cite{Cao08:78}
While the static structure of relaxors displays a range of interesting properties, the dynamics associated with the relaxor transition have many unusual features as well. The conventional picture of a (displacive) phase transition from a paraelectric state to a ferroelectric state is characterized by a long-wavelength ($q=0$) soft transverse optic (TO) phonon for which the frequency approaches a minimum at $T_C$, as happens in the well-known paraelectric-cubic to ferroelectric-tetragonal phase transition in PbTiO$_3$.~\cite{Shirane70:2,Dove:book} The frequency of the soft mode $\Omega_{TO}$ is directly related to the dielectric constant $\epsilon$ determined from bulk measurements via the Lyddane-Sachs-Teller (LST) relationship, which states that $1 / \epsilon \propto (\hbar \Omega_{TO})^2$. Therefore, neutron inelastic scattering measurements of the lattice dynamics provide an important tool with which to investigate ferroelectricity. In relaxors, the pioneering work on the lattice dynamics of PMN was conducted by Naberezhnov \textit{et al.}~\cite{Nab99:11} Experiments performed by Gehring \textit{et al.} and Wakimoto \textit{et al.} demonstrated the existence of a soft TO mode in PMN and tracked its temperature dependence.~\cite{Gehring01:87,Wakimoto02:65} The soft mode in PMN reaches a minimum energy around 400\,K and then recovers linearly (i.\ e.\ $\Omega_{TO}^2 \propto |T - T_d|$) at lower temperatures. As shown in Fig.~\ref{diffuse_cte}, 400\,K closely matches the onset temperature of the elastic (i.\ e.\ static) diffuse scattering cross section in PMN when measured on gigahertz timescales using neutron spin-echo and neutron backscattering methods.~\cite{Stock10:81} Motivated by these results, Gehring \textit{et al.} proposed that the upper temperature scale $T_d$, more commonly known as the Burns temperature, should be revised downward from the value of 620\,K determined from measurements of the refractive index in PMN, to $420 \pm 20$\,K, where the soft TO mode reaches a minimum and static ferroelectric correlations (i.\ e.\ polar nanoregions) first appear.~\cite{Gehring09:79}
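The soft-mode/LST logic can be summarized in a few lines (an illustrative sketch of ours; the slope $A$ is chosen only to give soft-mode energies of the observed order of magnitude and is not fit to data):
\begin{verbatim}
import math
# Sketch: below T_d the squared soft-mode energy recovers linearly,
# (hbar*Omega_TO)^2 = A*(T_d - T), and the LST relation
# 1/epsilon ~ (hbar*Omega_TO)^2 then implies a Curie-Weiss-like
# dielectric response, epsilon ~ 1/(T_d - T).
T_d, A = 420.0, 0.4   # K, meV^2/K (A is illustrative only)
for T in (100.0, 200.0, 300.0, 400.0):
    E2 = A * (T_d - T)                         # meV^2
    print(T, round(math.sqrt(E2), 1), round(1.0 / E2, 4))
\end{verbatim}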
An important anomalous feature found in these studies concerns the TO mode measured near the Brillouin zone center, which is unusually broad in energy compared to that in PbTiO$_3$ and even becomes overdamped, i.\ e.\ the half-width at half-maximum (HWHM) of the phonon energy lineshape $\Gamma$ exceeds the phonon energy $\hbar \Omega_{TO}$ over a range of temperatures and wavevectors; this indicates that some mechanism is present that shortens the lifetime of long-wavelength TO lattice fluctuations. The term ``waterfall" was coined to describe this effect, and originally the broadening was believed to be the result of scattering of TO phonons from polar nanoregions.~\cite{Tsurumi94:33,Gehring00:84} However, the same effect is seen in ferroelectric PMN-60\%PT, which exhibits no diffuse scattering and is not a relaxor.~\cite{Stock06:73} This proves that the anomalous TO broadening cannot be associated with the relaxor phase but may be the result of the disorder introduced through the heterovalent nature of the B-site cations. Another explanation for the broadening was expressed in terms of coupling between the TO and TA (transverse acoustic) modes as is observed in BaTiO$_3$;~\cite{Hlinka03:91} more recently this same idea has been used to explain the apparent ``waterfall" effect in PbTe.~\cite{Delaire11:10} However a series of other papers have reported the existence of a second, quasi-optic mode at low-energies in PMN, which would then complicate the dynamics near the zone center, thus giving the appearance of a broadening or an extra neutron inelastic scattering cross section at low energies.~\cite{Vak02:66,Zein10:105} This interpretation is particularly attractive as it could reconcile the apparent discrepancies in the dielectric response with the frequency of the low-energy TO mode expected from the LST relation described above as well as the measured deviations from the expected Curie-Weiss behavior.~\cite{Viehland92_2:46} Still other groups claim that the extra spectral weight at low energies originates from quasi-elastic scattering from short-range, ferroelectric ordering.~\cite{Gvas05:17,Gvas04:49}
It is quite clear given the many different aforementioned studies that the low-energy neutron scattering cross section is neither well understood nor well characterized in relaxors. In this paper, we provide a detailed study of the Brillouin zone and temperature dependence of the transverse acoustic phonons in PMN over the entire Brillouin zone. We observe highly anisotropic acoustic lattice fluctuations that reflect the existence of a coupling between the acoustic harmonic modes and the relaxational diffuse scattering. We also provide measurements of the elastic constants obtained on the terahertz timescale and compare these with those made using other techniques and those of the canonical ferroelectric PbTiO$_3$. Our results point to a reduced value of C$_{11}$ and C$_{12}$ in PMN compared to those in PbTiO$_3$. Further, marked changes in the elastic anisotropy (2C$_{44}$/(C$_{11}$-C$_{12}$)) and differences in C$_{11}-$C$_{12}$ are reported here for PMN versus those in PbTiO$_3$. We discuss these results in terms of the models mentioned earlier in order to develop a more complete description and understanding of the low-energy lattice fluctuations in PMN.
\section{EXPERIMENTAL DETAILS}
All of the neutron scattering data presented in this paper were obtained using the C5 and N5 thermal-neutron, triple-axis spectrometers located at Chalk River Laboratories (Chalk River, Ontario, Canada), and the SPINS cold-neutron, triple-axis and DCS time-of-flight spectrometers located at the National Institute of Standards and Technology (NIST) Center for Neutron Research (Gaithersburg, Maryland, USA). Measurements made on C5 concentrated on the low-temperature phonons. Inelastic data were measured using a variable vertical-focus, pyrolytic graphite (PG) monochromator and a flat PG analyzer. The (002) Bragg reflections of the monochromator and analyzer crystals were used to analyze the incident and scattered neutron energies, respectively. For elastic measurements, which characterize the nuclear Bragg peaks, the PG analyzer was replaced by a nearly perfect (mosaic $\sim 0.06^{\circ}$ FWHM), single-crystal, germanium (220) analyzer. A PG filter was inserted into the scattered beam to remove higher order neutrons for all thermal neutron measurements. A sapphire filter was placed before the monochromator to remove high-energy fast neutrons. The inelastic data have been corrected for contamination of the incident beam monitor as described elsewhere (see Ref.~\onlinecite{Shirane:book} and the appendix of Ref.~\onlinecite{Stock04_2:69}). The horizontal beam collimations were either 33$'$-33$'$-$S$-29$'$-72$'$ or 12$'$-33$'$-$S$-29$'$-72$'$ ($S$ = sample). The final energy was fixed to either $E_f=14.6$\,meV or 14.8\,meV. Data collected on N5 were taken using a flat PG monochromator and analyzer with a PG filter inserted into the scattered beam to remove higher order neutrons. Horizontal beam collimations were set to 33$'$-26$'$-$S$-24$'$-open with a final energy of $E_f=14.6$\,meV. These configurations provided an energy resolution at the elastic line of 0.95 $\pm$ 0.10 meV. A cryofurnace was used to control the sample temperature on both C5 and N5, and helium exchange gas was inserted into the sample space to ensure good thermal contact and equilibrium at all temperatures. A thermometer was also placed on the sample stick near the sample position in order to verify and monitor the sample temperature.
\begin{figure}[t]
\includegraphics[width=80mm]{diffuse_summary.eps}
\caption{Elastic diffuse scattering intensity contours measured near $a)$ $\vec{Q}=(0,0,1)$ and $b)$ $\vec{Q}=(1,1,0)$ at $T=300$\,K on SPINS ($E_f=4.5$\,meV). The sample (S-II) was aligned in the (HHL) scattering plane. }
\label{diffuse_summary}
\end{figure}
Data on SPINS were obtained using a variable vertical-focus PG monochromator. A flat PG analyzer was inserted into the scattered beam in series with a beryllium filter to remove higher order neutrons, and the final energy was fixed to $E_f=4.5$\,meV. The collimations were set to guide-80$'$-$S$-80$'$-open for inelastic measurements and guide-10$'$-$S$-10$'$-open for studies of the nuclear Bragg peaks. The energy resolution for the inelastic configuration was 0.28 $\pm$ 0.05 meV. The sample was placed in a high-temperature displex so that temperatures from 50\,K to 600\,K could be reached.
Data collected on DCS were taken using a fixed incident energy of 6.0\,meV, and the time between pulses at the sample was set to 9\,ms, which effectively eliminates frame overlap. Time-of-flight spectra were collected for $\sim 45$ different crystal orientations spaced $0.5^{\circ}$ apart at each temperature using the 325 detectors with an angular coverage from $5-140^{\circ}$. More details about the DCS instrument can be found elsewhere.~\cite{Copley00:283} The energy resolution at the elastic line was measured to be 0.28 $\pm$ 0.03 meV. The sample was mounted in a high-temperature, closed-cycle $^4$He-refrigerator such that temperatures between 50\,K and 600\,K could be reached.
The data presented in this paper were obtained from two different single crystals of PMN, which were aligned in either the (HK0) or (HHL) scattering planes. The room-temperature, cubic lattice constant is $a = 4.04$\AA, so one reciprocal lattice unit (rlu) equals $2\pi/a = 1.56$\,\AA$^{-1}$. The crystal labeled S-I is 9\,cc in volume, and the second crystal labeled S-II is 5\,cc in volume. We found no evidence of a structural distortion in either sample. Crystal S-I contains a very small amount of PbTiO$_3$ that was intentionally added during sample preparation to facilitate the growth of large single crystals; the sample preparation method has been described previously.~\cite{Luo00:39} Crystal S-I is the same sample as that used in a previous study of the low-energy phonons (see Ref.~\onlinecite{Stock05:74}). We emphasize that the phonons we measured are identical across all samples studied. We compare our measurements on these PMN crystals to those on PMN-60\%PT, which is \textit{not} a relaxor and which undergoes a well-defined, cubic-to-tetragonal phase transition just below 550\,K.~\cite{Stock06:73} We also compare our results to published and unpublished measurements made on a smaller PMN single crystal that was studied in a series of earlier publications.~\cite{Gehring09:79,Wakimoto02:65,Waki02:66}
\section{ELASTIC SCATTERING: ANISOTROPIC DIFFUSE SCATTERING AND SHORT-RANGE, POLAR CORRELATIONS}
In this section, we discuss the temperature dependence of the diffuse scattering cross section as well as its dependence on wavevector when projected onto the (HHL) scattering plane. Our results confirm previous experimental findings that the diffuse scattering is highly anisotropic in reciprocal space. We also discuss how the two temperature scales defined in the Introduction are manifested in the diffuse scattering data and the consequences for the polar and ferroelectric correlations.
The temperature and wavevector dependences of the neutron diffuse scattering cross section in PMN have been reported in several studies in which measurements were made mainly in the (HK0) scattering plane.~\cite{Hiraka04:70,Xu04:69} These studies found that the distribution of diffuse scattering intensity in reciprocal space is composed of rods that are elongated along $\langle 110 \rangle$, which agrees with the results of the three-dimensional mapping study conducted with high-energy x-rays on PZN doped with PbTiO$_3$.~\cite{Xu04:70} The reciprocal space anisotropy revealed by these measurements has been interpreted in terms of an underlying two-dimensional, real space structure in which the long axis of the diffuse rod reflects short-range, polar correlations while the shorter, perpendicular axes reflect longer-range correlations. Fig.~\ref{diffuse_summary} shows data obtained on the cold-neutron, triple-axis spectrometer SPINS that illustrate how the neutron diffuse scattering cross section in PMN looks when projected onto the (HHL) scattering plane. Panel $(a)$ shows constant-intensity contours measured near $\vec{Q}=(0,0,1)$ while panel $(b)$ shows contours measured near $\vec{Q}=(1,1,0)$. In both cases a butterfly-shaped pattern is seen that appears to be slightly elongated roughly along $\langle 111 \rangle$. This is consistent with neutron measurements performed in the (HK0) scattering plane as well as those from the three-dimensional x-ray mapping study, which observed rods extending along $\langle 110 \rangle$. The slight elongation along $\langle 111 \rangle$ results from rods that protrude out of the scattering plane along [1$\overline{1}$0], but which nonetheless contribute to the total diffuse scattering pattern because of the non-zero vertical (i.\ e.\ out-of-plane) instrumental $Q$-resolution. These results demonstrate that the diffuse scattering cross section is highly anisotropic in reciprocal space. In particular, Fig.~\ref{diffuse_summary} $(a)$ shows that the diffuse scattering measured near $\vec{Q}=(0,0,1)$ is slightly narrower along [001] than it is along [110] while the opposite is true for the diffuse scattering measured near $\vec{Q}=(1,1,0)$ (Fig.~\ref{diffuse_summary} $(b)$). As mentioned in the Introduction, this reciprocal space anisotropy has been interpreted in terms of a correspondingly anisotropic real space structure composed of local, two-dimensional correlations.
The temperature dependence of the diffuse scattering cross section measured at $\vec{Q}=(0.025,0.025,1.05)$ is summarized in Fig.~\ref{diffuse_cte} $(a)$; this value of $\vec{Q}$ corresponds to the peak in the diffuse scattering intensity when the wavevector is scanned along (H,H,1.05). These measurements were performed using cold neutrons on SPINS with a final neutron energy $E_f=4.5$\,meV, which provides an excellent energy resolution (FWHM) of $2\delta E \sim 0.23$\,meV. The data show a well-defined onset of the diffuse scattering intensity at $T_d \sim 420 \pm 20 $\,K that matches the onset measured using far better energy resolution ($2\delta E \sim 1$\,$\mu$eV) provided by neutron backscattering techniques.~\cite{Gehring09:79} The issue of energy resolution is extremely relevant here because when the diffuse scattering is measured using a technique (e.\ g.\ x-rays) that provides a substantially coarser energy resolution, the resulting intensity will necessarily include an integration over the low-frequency (quasielastic) component of the diffuse scattering cross section, and this will lead to an artificially higher (incorrect) onset temperature.~\cite{Hiraka04:70} Given the invariance of the onset temperature of the elastic diffuse scattering when changing the instrumental energy resolution by more than two orders of magnitude, we conclude that $T_d \sim 420 \pm 20 $\,K represents an intrinsic and physically-meaningful temperature scale in PMN. The same onset temperature was obtained for several different values of $\vec{Q}$ on different backscattering and neutron spin echo spectrometers; hence $T_d$ is not wavevector dependent. We further note that this temperature matches the Curie-Weiss temperature derived from linear fits of the inverse dielectric susceptibility measured at a frequency of 100\,kHz;~\cite{Viehland92_2:46} it also matches the temperature at which the soft TO mode energy in PMN reaches a minimum value.~\cite{Wakimoto02:65,Stock05:74,Stock10:81}
\begin{figure}[t]
\includegraphics[width=80mm]{diffuse_cte.eps}
\caption{$a)$ Elastic diffuse scattering intensity measured in zero field at $\vec{Q}=(0.025,0.025,1.05)$ plotted versus temperature. The data were measured on SPINS using a final energy $E_f=4.5$\,meV. The sample (S-II) was aligned in the (HHL) scattering plane. The dashed line is a schematic representation of the field-cooled data presented in Ref.~\onlinecite{Stock:unpub}. $b)$ The thermal expansion coefficient is plotted as a function of temperature, illustrating no observable structural distortion to a sensitivity of $\alpha \sim 10^{-4}$. The data were taken on C5 with $E_f=14.6$\,meV using a perfect germanium analyzer for improved $Q$-resolution. The sample (S-II) was aligned in the (HK0) scattering plane.}
\label{diffuse_cte}
\end{figure}
Cooling PMN below $T_d$ does not induce a well-defined phase transition to a long-range ordered structure; this is evident from Fig.~\ref{diffuse_cte} $b)$, where the thermal expansion coefficient of the S-II PMN crystal is plotted from just below 200\,K to 600\,K. Near a conventional ferroelectric phase transition, an anomaly is observed in the temperature dependence of the lattice constant like that seen at the ferroelectric transition of PbTiO$_3$;~\cite{Shirane51:6} however no such anomaly is seen in PMN. In the relaxor PZN, the coefficient of thermal expansion exhibits an anomaly of $\alpha \sim 5 \times 10^{-4}$ at $T_C=400$\,K in the near-surface region.~\cite{Xu04_2:70} Efforts to locate a structural distortion in PMN were carried out using the C5 thermal-neutron and the SPINS cold-neutron, triple-axis spectrometers. A configuration that provided excellent wavevector resolution was obtained on C5 by closely matching the $d$-spacing of the sample and that of a nearly perfect Germanium (220) analyzer crystal. This configuration and the resulting resolution function are reviewed elsewhere.~\cite{Xu03:xx} The data shown in Fig.~\ref{diffuse_cte} $b)$ provide no sign of a change in the dimensions of the unit cell; instead they imply an Invar-like behavior of the lattice constant up to 600\,K. Therefore, even though $T_d$ represents a meaningful temperature scale in PMN that is associated with the onset of static, short-range, polar order and the softening of the optic mode, it is not associated with a structural phase transition to long-range, polar order.
In the bulk of single crystal PMN and PZN there is no evidence for a well-defined structural transition. Several powder diffraction studies of pure PMN have seen subtle indications of a structural distortion at temperatures below 200\,K, but these are well-modeled by a disordered lattice with local atomic shifts.~\cite{Bonneau89:24,Bonneau91:91,Dkhil01:65} The temperature of 200\,K corresponds closely to that at which a sufficiently strong electric field applied along [111] is able to suppress the diffuse scattering cross section in PMN (see Fig. \ref{diffuse_cte} $(a)$) while simultaneously inducing a structural distortion in the near-surface region.~\cite{Stock:unpub} This temperature scale is denoted as $T_C$ in Fig.~\ref{diffuse_cte} $(a)$. Other studies of single crystal PMN and PMN substituted with 10\% PbTiO$_3$ (PMN-10\%PT) using neutron scattering have found no evidence of any anomaly in the lattice constant at low temperatures, but they do observe a significant change above $\sim 350$\,K (Ref.~\onlinecite{Gehring09:79,Gehring04:16}). These findings are consistent with x-ray diffraction studies of ceramic samples of PMN-10\%PT,~\cite{King04:31} but they differ from the result reported here in Fig.~\ref{diffuse_cte} $b)$. Strain-scanning experiments performed on single crystals using neutrons have found evidence for a significant near-surface effect in PMN and even large changes of the coefficient of thermal expansion as a function of depth in the crystal.~\cite{Conlon04:70,Xu06:79} We therefore conclude that no well-defined structural transition occurs at $T_d \sim 420$\,K, and that the apparent thermal expansion at higher temperatures is sample dependent and likely linked to the effects of a skin layer in these materials. However, the Invar-like response in the bulk at temperatures below $\sim 350$\,K is consistent across a series of PMN samples and all known studies.
Based on these data and those presented in previous studies, we conclude that the diffuse scattering cross section is highly anisotropic in reciprocal space. This conclusion is consistent with previous measurements that describe the diffuse scattering as being composed of rods oriented along $\langle 110 \rangle$.~\cite{Xu04:70} Elastic diffuse scattering first appears below $T_d \sim 420$\,K,~\cite{Gehring09:79} a temperature scale that does not reflect a phase transition to a long-range ordered structure, but rather to the onset of static, short-range, polar correlations. A longer-range ferroelectric phase can be induced by cooling in the presence of a sufficiently strong electric field, but only below $T_C \sim 210$\,K.~\cite{Stock:unpub}
\section{INELASTIC SCATTERING: ANISOTROPIC TRANSVERSE ACOUSTIC PHONON DAMPING }
\begin{figure}[t]
\includegraphics[width=85mm]{waterfall_figure.eps}
\caption{DCS neutron scattering data measured on PMN sample S-II in the (HK0) scattering plane near (200) and along [010] at $T=300$\,K. The constant inelastic scattering intensity contours reveal well-defined TA$_1$ and TO modes near the zone boundary at $(2,\pm0.5,0)$, but overdamped TO modes near the zone center. The vertical dashed lines indicate where constant-$\vec{Q}$ scans were made; these are discussed in the next figure.}
\label{waterfall}
\end{figure}
So far we have reviewed the wavevector and temperature dependence of the neutron elastic diffuse scattering cross section in PMN, which reflects the presence of highly anisotropic, static, short-range, polar correlations that develop below $T_d \sim 420$\,K. We now present extensive neutron inelastic scattering measurements of the transverse acoustic (TA) phonons that show how the anisotropy of these short-range, polar correlations is manifested in the damping of acoustic lattice fluctuations. To this end, we discuss the TA phonon scattering cross section measured in three different Brillouin zones and the temperature dependence observed in each case.
We chose to make our neutron scattering measurements in the (HK0) scattering plane because they are then sensitive to two different TA phonons, denoted by TA$_1$ and TA$_2$. This sensitivity is governed by the neutron scattering phonon cross section, which is proportional to $(\vec{Q} \cdot \vec{\xi})^2$, where $\vec{Q}$ is the total momentum transfer, or scattering vector, and $\vec{\xi}$ is the eigenvector of the phonon. Measurements made at reduced wavevectors $\vec{q}$ transverse to (100) or (200), e.\ g.\ for $\vec{Q} = (1,q,0)$, are sensitive to TA$_1$ acoustic phonons, which propagate along [010] and are polarized along [100]. The square of the limiting slope of the phonon dispersion associated with this mode as $q \rightarrow 0$ is proportional to the C$_{44}$ elastic constant. Measurements made at $\vec{q}$ transverse to (110) or (220), e.\ g.\ for $\vec{Q} = (1+q,1-q,0)$, are sensitive to TA$_2$ phonons, which propagate along [1$\overline{1}$0] and are polarized along [110]. The square of the limiting slope of this phonon dispersion is proportional to ${ 1\over2}($C$_{11}-$C$_{12})$. A detailed list of the elastic constant dependence based on scan direction can be found in Ref.~\onlinecite{Dove:book} and Ref.~\onlinecite{Noda89:40}.
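The $(\vec{Q} \cdot \vec{\xi})^2$ selection rule is pure geometry and can be made explicit with a short sketch (ours; $\vec{Q}$ is in reciprocal lattice units of $2\pi/a$ and $\vec{\xi}$ is taken as a unit polarization vector):
\begin{verbatim}
import numpy as np
# Sketch: cross-section weight (Q.xi)^2 in units of (2*pi/a)^2.
def weight(Q, xi):
    xi = np.asarray(xi, float)
    return float(np.dot(Q, xi / np.linalg.norm(xi)) ** 2)
q = 0.2
print(weight([1.0, q, 0.0], [1, 0, 0]))        # TA1 near (100): 1.0
print(weight([1 + q, 1 - q, 0.0], [1, 1, 0]))  # TA2 near (110): 2.0
# Long-wavelength velocities then follow from the elastic constants:
# v_TA1 = sqrt(C44/rho) and v_TA2 = sqrt((C11 - C12)/(2*rho)).
\end{verbatim}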
In this section we will first consider the TA$_1$ and TA$_2$ acoustic phonons measured in the (200) and (220) Brillouin zones as these zones exhibit relatively weak neutron diffuse scattering cross sections but strong acoustic and optic phonon scattering cross sections. We will then examine the TA$_1$ and TA$_2$ acoustic phonons measured in the (100) and (110) Brillouin zones where strong diffuse scattering cross sections are present. This fact has been demonstrated quantitatively in PMN at room temperature by Vakhrushev \textit{et al}., who found that the ratio of the diffuse scattering intensities measured in the (110) and (220) Brillouin zones was $I_d(110)/I_d(220) > 63/4 \sim 16$ and that between the (300) and (200) Brillouin zones was $I_d(300)/I_d(200) > 87/4 \sim 22$.~\cite{Vak95:37} These ratios should be viewed as lower bounds because the experimental determinations of the weak diffuse scattering cross sections in the (200) and (220) zones were limited by experimental uncertainties. We mention that, although our data cover a wide range of $q$ that spans each Brillouin zone, our discussion will not include the highly damped, soft optic phonons that are observed at various zone boundaries, because they have already been studied in detail in Ref.~\onlinecite{Swainson09:79}.
\subsection{$\vec{Q}=(2,0,0)$ - TA$_1$ acoustic phonons in the presence of weak diffuse scattering}
\begin{figure}[t]
\includegraphics[width=90mm]{T2_Q200.eps}
\caption{Temperature dependence of TA$_1$ and TO phonons measured at two different wavevectors on PMN sample S-II in the (HK0) scattering plane using the DCS spectrometer. The constant-$\vec{Q}$ scans integrate over the regions $(2 \pm 0.025, 0.10 \pm 0.02, 0)$ and $(2 \pm 0.025, 0.20 \pm 0.025, 0)$. The vertical dashed lines indicate the average position in energy of the TA$_1$ phonon.}
\label{T1_Q200}
\end{figure}
While the (200) neutron diffuse scattering cross section is non-zero, as pointed out in Ref.~\onlinecite{Gehring09:79}, it is more than one order of magnitude smaller than those in the (100), (110), and (300) Brillouin zones, which is consistent with the structure factor calculations discussed in Ref.~\onlinecite{Hirota02:65} and Ref.~\onlinecite{Hiraka04:70}. We therefore begin our discussion of the phonons in PMN in the (200) zone, where the acoustic and optic phonon scattering cross sections are strong and the diffuse scattering cross section is weak.
Fig.~\ref{waterfall} shows a contour map of the inelastic scattering intensity measured at 300\,K along [010] that spans the entire (200) Brillouin zone and covers the energy range from -20\,meV to 0\,meV. The negative energy scale merely indicates that these data were obtained using a neutron energy gain configuration in which energy is transferred from the lattice to the neutron during the scattering process; this configuration was chosen because it provides a broader detector coverage of momentum-energy phase space. At this temperature, well-defined TA$_1$ and TO phonon branches are seen at large $q$. The TA$_1$ mode reaches a maximum energy of $\sim 6$\,meV at the X-point zone boundary ($\vec{Q}=(2,\pm0.5,0)$), whereas the TO mode is observed at higher energies and extends to $\sim 20$\,meV. However for wavevectors approaching the zone center ($q=0$) the TO mode becomes overdamped while the TA$_1$ remains well-defined. This behavior has been termed the ``waterfall" effect and was discussed in the Introduction.~\cite{Gehring00:84} Previous studies have shown that the TO phonon broadens and softens near the zone center as a function of temperature, reaching a minimum energy at $T_d \sim 420$\,K.~\cite{Gehring01:87,Wakimoto02:65} At temperatures below $T_d$ the TO phonon hardens and the linewidth narrows.
This effect is more readily seen in Fig.~\ref{T1_Q200}, which displays constant-$\vec{Q}$ scans measured at $q=0.1$\,rlu and $q=0.2$\,rlu at 600\,K, 300\,K, and 100\,K; these two constant-$\vec{Q}$ scans correspond to the two vertical dashed lines shown in Fig.~\ref{waterfall}. All of the data were fit to two uncoupled harmonic oscillators in order to describe the TO mode and the TA$_1$ mode; the excellent fits are indicated by the solid curves. For both small ($q=0.1$\,rlu) and large ($q=0.2$\,rlu) wavevectors, and at all temperatures, a strong, sharp, TA$_1$ mode is observed. By contrast, the TO mode is only well-defined at large wavevector; at $q=0.1$\,rlu the TO mode is overdamped due to the waterfall effect. On cooling from 600\,K the intensity of both modes decreases in line with expectations based on the Bose factor. A detailed analysis of the temperature dependence of the structure factors and intensities of the TA modes in PMN has already been published.~\cite{Stock05:74} On the basis of that study it was concluded that the TA-TO mode coupling in PMN is minimal and does not give rise to the waterfall effect. The lack of TA-TO mode coupling is demonstrated by the vertical lines in Fig.~\ref{T1_Q200}, which show that the TA$_1$ phonon energy does not shift with temperature despite the softening and hardening of the optic mode. If strong TA-TO mode coupling were present in PMN, as it is in KTaO$_3$ (Ref.~\onlinecite{Axe70:1}), then a large change in the acoustic mode energy position should occur with temperature that mimics the change in the TO mode frequency. We observe no such change in our experiments. We note that this conclusion differs from those of some other studies,~\cite{Hlinka03:91,Vak02:66} and we will discuss these differences at the end of this paper.
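For reference, the kind of lineshape used in such fits can be sketched as follows (ours; this is one common damped-harmonic-oscillator convention, with the resolution convolution and absolute amplitude calibration omitted, and with conventions for the damping constant $\Gamma$ varying in the literature):
\begin{verbatim}
import math
# Sketch: damped-harmonic-oscillator response times the detailed-
# balance (Bose) factor, S(E) ~ [n(E)+1]*chi''(E), where
# chi''(E) = A*Gamma*E / ((E^2 - E0^2)^2 + (Gamma*E)^2).
def dho(E, E0, Gamma, A, T, kB=0.08617):   # kB in meV/K; E != 0
    bose = 1.0 / (1.0 - math.exp(-E / (kB * T)))
    return bose * A * Gamma * E / ((E**2 - E0**2)**2 + (Gamma * E)**2)
for E in (2.0, 6.0, 10.0):                 # meV; E0 = 6 meV TA phonon
    print(E, dho(E, E0=6.0, Gamma=1.0, A=1.0, T=300.0))
\end{verbatim}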
Independent of the issue of mode coupling, Fig.~\ref{T1_Q200} demonstrates that the TA$_1$ mode measured near (200) is underdamped (i.\ e.\ $\Gamma < \hbar \Omega_0$) at all wavevectors and temperatures studied, and that it does not shift significantly in energy with temperature. As demonstrated in Ref.~\onlinecite{Stock05:74}, the TA phonon intensities are in reasonable agreement with harmonic theory. We note that the TA$_2$ acoustic phonons measured near (220) also exhibit little to no shift in energy with temperature and are also underdamped at all temperatures studied, i.\ e.\ between 100\,K and 600\,K. In both the (200) and (220) Brillouin zones the diffuse scattering cross section is small in comparison to those in other zones such as (100) and (110). We now move on to discuss the TA phonon cross section in these zones, where the diffuse scattering cross section is very strong.
\subsection{$\vec{Q}=(1,1,0)$ - TA$_2$ acoustic phonons in the presence of strong diffuse scattering}
\begin{figure}[t]
\includegraphics[width=80mm]{summary_110.eps}
\caption{Elastic diffuse scattering intensity contours measured near (110) on SPINS ($E_f=4.5$\,meV) in the (HK0) scattering plane. The white dashed line represents the direction in reciprocal space where the TA$_2$ and TO phonons were measured. The open black circles represent typical reciprocal lattice points where constant-$Q$ scans were measured.}
\label{summary_110}
\end{figure}
The white, dashed line in Fig.~\ref{summary_110} represents the set of reciprocal lattice points where constant-$\vec{Q}$ scans measured in the (110) Brillouin zone are sensitive to TA$_2$ phonons; as explained earlier, constant-$\vec{Q}$ scans directly probe TA$_2$ phonons when measured along a direction $\vec{q}$ transverse to $\vec{Q}=(1,1,0)$. This figure also illustrates the relationship between these reciprocal lattice points and the elastic diffuse scattering cross section measured at 300\,K. The diffuse scattering intensity contours were measured by Hiraka \textit{et al.} and are taken from Ref.~\onlinecite{Hiraka04:70}. The white, dashed line also corresponds to a ridge along which the diffuse scattering cross section is a maximum compared to that along all other parallel lines. Hence constant-$\vec{Q}$ scans measured at points on this dashed line will reflect TA$_2$ phonons in the presence of maximum diffuse scattering intensity.
\begin{figure}[t]
\includegraphics[width=75mm]{Q_110_figure.eps}
\caption{Constant inelastic scattering intensity contours measured on DCS in the (110) Brillouin zone are shown at (a) 600\,K and (b) 300\,K after integrating over $(1 \pm 0.1, 1 \pm 0.1,0)$. Underdamped, dispersive TA$_2$ phonons at 600\,K become heavily damped at all $q$ at 300\,K except for $q \sim 0$. The TO mode is not visible in this zone due to a weak structure factor. The wide, black streak at $E=0$\,meV is where the intensity is dominated by the incoherent, elastic scattering cross section from the sample and the aluminium sample mount. These data were obtained on the S-II PMN crystal aligned in the (HK0) scattering plane.}
\label{dispersion_110}
\end{figure}
Reciprocal space maps spanning the (110) Brillouin zone show the TA$_2$ phonon dispersion along $[1\overline{1}0]$ at 600\,K and 300\,K in Fig.~\ref{dispersion_110}. As mentioned before, our data were taken on the neutron energy gain side, indicated by the negative energy transfers, because this configuration provides better detector coverage of momentum-energy phase space. The constant inelastic scattering intensity contours in panel (a) provide evidence of a distinct and well-defined TA$_2$ phonon branch at 600\,K. The TO phonon branch that was seen in the (200) Brillouin zone is not visible here because of a very weak structure factor. As in the (200) zone, the TA$_2$ mode reaches a maximum energy of $\sim 6$\,meV at the zone boundary and appears to be well-defined (underdamped) throughout most of the Brillouin zone. The same data are shown after cooling to 300\,K in panel (b). At this temperature the contours reveal a TA$_2$ spectrum that is very different from that at 600\,K because the TA$_2$ lineshape is now overdamped throughout most of the Brillouin zone.
\begin{figure}[t]
\includegraphics[width=80mm]{figure1.eps}
\caption{Constant-$\vec{Q}$ scans measured at $\vec{Q}=(1.15,0.85,0)$ and (1.4,0.6,0) at temperatures ranging from 300-600\,K. The solid lines are fits to a damped harmonic oscillator as described in the text. The data were taken with the C5 thermal-neutron, triple-axis spectrometer located at Chalk River using $E_f=14.6$\,meV. The PMN S-I sample was aligned in the (HK0) scattering plane.}
\label{T2_highT}
\end{figure}
To better understand the temperature dependence of the TA$_2$ phonon linewidth $\Gamma$ in the (110) Brillouin zone, we measured constant-$\vec{Q}$ scans similar to those shown in Fig.~\ref{T1_Q200}. These scans are presented in Fig.~\ref{T2_highT}, which summarizes the response of the TA$_2$ acoustic phonon at temperatures well above, near, and below $T_d$, where the elastic diffuse scattering first appears. The solid lines in Fig.~\ref{T2_highT} are fits to a harmonic oscillator lineshape that obeys the principle of detailed balance.~\cite{Shirane:book} The dashed lines correspond to a Gaussian function of energy centered at $E=0$ and describe the elastic scattering cross section, which is a combination of diffuse scattering and incoherent scattering from the sample and sample mount. Details of the lineshape and the fitting have been presented previously in Ref.~\onlinecite{Stock05:74}. At 600\,K underdamped TA$_2$ phonons are observed at $\vec{Q}=(1.15, 0.85,0)$ and $(1.4,0.6,0)$, which are consistent with the contours shown in Fig.~\ref{dispersion_110}. However, after cooling to 400\,K the TA$_2$ mode at $\vec{Q}=(1.15,0.85,0)$ becomes overdamped and resembles a ``quasi-elastic" lineshape in that it overlaps with the elastic line at $E=0$\,meV. A similar lineshape is observed at $\vec{Q}=(1.4, 0.6,0)$, which lies very near the M-point zone boundary. Finally, at 300\,K, which is above $T_C$ but below $T_d$ where the soft TO mode achieves a minimum in energy, the TA$_2$ mode at both wavevectors is again described by an overdamped lineshape.
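For readers who wish to reproduce this analysis, a minimal sketch of one common parameterization of such a lineshape is given below; the prefactor conventions, parameter values, and function names are illustrative assumptions rather than the exact fitting code used in our analysis.
\begin{verbatim}
import numpy as np

def bose_factor(E, T):
    # Detailed-balance factor 1/(1 - exp(-E/kT)): reduces to <n+1> for
    # neutron energy loss (E > 0) and to <n> for energy gain (E < 0).
    kT = 0.08617 * T                      # Boltzmann constant in meV/K
    return 1.0 / (1.0 - np.exp(-E / kT))

def dho(E, E0, Gamma, A, T):
    # Damped harmonic oscillator: an odd spectral function chi''(E)
    # weighted by the Bose factor; E, E0, Gamma are in meV.
    # (Avoid evaluating exactly at E = 0.)
    chi = A * Gamma * E / ((E**2 - E0**2)**2 + (Gamma * E)**2)
    return bose_factor(E, T) * chi

E = np.linspace(-20.0, -0.5, 200)         # energy-gain side, as in the text
model = dho(E, E0=6.0, Gamma=1.0, A=1.0, T=300.0)
\end{verbatim}
A fit to a constant-$\vec{Q}$ scan would add a second oscillator for the TO mode and a Gaussian centered at $E=0$ for the elastic cross section, as described above.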
\begin{figure}[t]
\includegraphics[width=80mm]{figure2.eps}
\caption{Constant-$\vec{Q}$ scans measured at $\vec{Q}=(1.1,0.9,0)$ and $(1.25,0.75,0)$ at 600\,K, 300\,K, and 100\,K. The solid lines represent fits to a damped harmonic oscillator as described in the text. The data were taken with the C5 thermal-neutron, triple-axis spectrometer (Chalk River) using $E_f=14.6$\,meV. The PMN S-I sample was aligned in the (HK0) scattering plane.}
\label{T2_allT}
\end{figure}
Figure~\ref{T2_allT} extends our study of the temperature dependence of the TA$_2$ linewidth to temperatures below $T_C \sim 200$\,K and wavevectors $\vec{Q}=(1.1,0.9,0)$ and $(1.25,0.75,0)$. Both wavevectors exhibit an underdamped TA$_2$ lineshape at 600\,K and an overdamped lineshape at 300\,K; however at 100\,K, i.\ e.\ below $T_C$, the TA$_2$ phonon recovers an underdamped lineshape. This low-temperature recovery is more evident at low wavevector: at $\vec{Q}=(1.1,0.9,0)$ the TA$_2$ phonon lineshape is quite sharp and the energy has hardened to a value comparable with that measured at 600\,K; by contrast the lineshape at $\vec{Q}=(1.25,0.75,0)$ displays a more subtle recovery inasmuch as it still resembles an overdamped form. We point out that at 100\,K the elastic diffuse scattering cross section is at a maximum (see Fig.~\ref{diffuse_cte}).
These results provide overwhelming evidence of strong coupling between the TA$_2$ mode and the diffuse scattering centered at $E=0$. The fact that strong, temperature-dependent shifts in the TA$_2$ phonon energy and changes in the TA$_2$ linewidth are observed in the (110) zone, where the TO phonon cross section is weak, but not in the (220) zone, where the TO phonon cross section is strong, proves that these effects are \textit{not} due to coupling between acoustic and optic modes. Instead, these effects correlate directly with the strength and temperature dependence of the diffuse scattering cross section. This is further corroborated by measurements of the TA$_{2}$ mode in PZN-4.5PT which demonstrated a recovery in the acoustic mode when an electric field suppressed the diffuse scattering cross section.~\cite{Xu08:7} It has been shown in a series of studies describing the central peak in SrTiO$_3$ (Ref.~\onlinecite{Shapiro72:16}), and in the appendix of Ref.~\onlinecite{Stock05:74}, that a dynamic component of the diffuse scattering can couple to a low-energy, harmonic mode to produce an overdamped lineshape. Theoretical studies have also discussed this possibility.~\cite{Halperin76:14} Recent studies of PMN-30\%PT have applied this model to try to understand the heavily-damped lineshapes observed for the acoustic mode.~\cite{Matsuura11:80} Studies of the diffuse scattering using neutron spin-echo (NSE) techniques, which provide a considerable dynamic range in Fourier time, have documented the presence of such a dynamic component over a wide range in temperatures, which becomes static below $T_C \sim 210$\,K.~\cite{Stock10:81} Broad-band dielectric measurements suggest that the dynamics may even persist to much lower frequencies than are probed by the NSE technique.~\cite{Bovtun06:26,Kamba05:17} To further support the conjecture that the TA phonon damping documented here is the result of a coupling between the acoustic and diffuse scattering cross sections, we now present measurements of the TA$_1$ phonons in the (100) Brillouin zone.
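Schematically, and only to illustrate how such a coupling produces an overdamped lineshape (the specific parameterization below is an assumption patterned after Refs.~\onlinecite{Shapiro72:16} and \onlinecite{Halperin76:14}), coupling a harmonic phonon of bare frequency $\Omega_\infty$ and damping $\Gamma_0$ to a relaxational degree of freedom with relaxation rate $\gamma$ and coupling strength $\delta$ renormalizes the phonon response to
\[
\chi^{-1}(\omega) = \Omega_\infty^{2} - \omega^{2} - i\omega\Gamma_{0} - \frac{\delta^{2}}{1 - i\omega/\gamma},
\]
so that the quasistatic frequency is lowered to $\Omega_0^{2} = \Omega_\infty^{2} - \delta^{2}$ and, when $\delta$ is large or $\gamma$ is comparable to the phonon energy, the lineshape acquires a central component and can appear overdamped. The parameters $\delta$ and $\gamma$ here are not derived from our fits and serve only to exhibit the structure of the coupling.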
\subsection{$\vec{Q}$=(1,0,0) - TA$_1$ acoustic phonons in the presence of strong diffuse scattering}
The grey, dashed line in Fig.~\ref{summary_100} represents the set of reciprocal lattice points where constant-$\vec{Q}$ scans measured in the (100) Brillouin zone are sensitive to TA$_1$ phonons; as explained earlier, constant-$\vec{Q}$ scans directly probe TA$_1$ phonons when measured along a direction $\vec{q}$ transverse to $\vec{Q}=(1,0,0)$. This figure also illustrates the relationship between these reciprocal lattice points and the elastic diffuse scattering cross section measured at 300\,K. These diffuse scattering intensity contours were also measured by Hiraka \textit{et al.} and are again taken from Ref.~\onlinecite{Hiraka04:70}. However, unlike the white, dashed line shown in Fig.~\ref{summary_110}, the grey, dashed line here does \textit{not} correspond to a ridge along which the diffuse scattering cross section is a maximum. Thus the reciprocal lattice points used to study the TA$_1$ phonon mode in the (100) Brillouin zone are \textit{not} located on ridges of maximum diffuse scattering, as was the case for the TA$_2$ phonons in the (110) Brillouin zone; instead they are located between them where the diffuse scattering cross section is nonetheless strong.
We briefly note here that Fig.~\ref{summary_100} shows diffuse scattering intensity contours measured in the (HK0) scattering plane whereas those in Fig.~\ref{diffuse_summary} were measured in the (HHL) plane. It can be readily seen that the elastic diffuse scattering cross section is composed of rods that are extended along the [1$\overline{1}$0] and [$\overline{1}$10] directions.
\begin{figure}[t]
\includegraphics[width=80mm]{summary_100.eps}
\caption{Elastic diffuse scattering intensity contours measured near (100) on SPINS ($E_f=4.5$\,meV) in the (HK0) scattering plane. The grey, dashed line represents the direction in reciprocal space where the TA$_1$ and TO phonons were measured. The open black circles represent typical reciprocal lattice points where constant-$Q$ scans were measured.}
\label{summary_100}
\end{figure}
\begin{figure}[t]
\includegraphics[width=92mm]{Q_100_figure.eps}
\caption{Constant inelastic scattering intensity contours measured on DCS in the (100) Brillouin zone are shown at (a) 600\,K and (b) 300\,K after integrating over $(0, 1 \pm 0.15,0)$. Underdamped, dispersive TA$_1$ phonons at 600\,K remain well-defined near the zone boundary at 300 K, but become heavily damped for intermediate wavevectors. The wide, black streak at $E=0$ is where the intensity is dominated by the incoherent, elastic scattering cross section from the sample and the aluminium sample mount. These data were obtained on the S-II PMN crystal aligned in the (HK0) scattering plane.}
\label{dispersion_100}
\end{figure}
Figure~\ref{dispersion_100} displays reciprocal space maps, analogous to those in Fig.~\ref{dispersion_110}, that span the (100) Brillouin zone and show the TA$_1$ phonon dispersion along [100] at 600\,K and 300\,K. The constant inelastic scattering intensity contours in panel $(a)$ reveal a distinct and well-defined TA$_1$ phonon branch at 600\,K. The same contours in panel $(b)$ show that the TA$_1$ phonon dispersion does not change dramatically and remains well-defined at large $q$ near the X-point zone boundary ($\vec{Q}=(0.5,1,0)$). This behavior contrasts with that of the TA$_2$ phonon, which becomes overdamped at all $q$ at 300\,K except very close to the zone center. The data in panel $(b)$ also show that the energy of the TA$_1$ mode does not shift significantly between 600\,K and 300\,K at the zone boundary. There is, however, a strong indication that some extra broadening of the TA$_1$ phonon is present for wavevectors near the zone center at 300\,K.
\begin{figure}[t]
\includegraphics[width=80mm]{figure3.eps}
\caption{Constant-$\vec{Q}$ scans measured at $\vec{Q}=(1.0,0.14,0)$ and $(1.0,0.3,0)$ at 600\,K, 400\,K, 200\,K, and 100\,K. The solid lines represent fits to a damped harmonic oscillator as described in the text. The data were taken with the C5 thermal-neutron, triple-axis spectrometer located at Chalk River using $E_f=14.6$\,meV. The PMN S-I sample was aligned in the (HK0) scattering plane.}
\label{T1_allT}
\end{figure}
To explore the wavevector and temperature dependence of the damping of the TA$_1$ phonon in more detail, we examine the series of constant-$\vec{Q}$ scans shown in Fig.~\ref{T1_allT}. At 600\,K, which is above both $T_C$ and $T_d$, well-defined and underdamped TA$_1$ phonons are observed. At 400\,K, which is near $T_d = 420$\,K, i.\ e.\ the temperature where elastic diffuse scattering first appears and where the TO phonon energy reaches a minimum value, the TA$_1$ phonons at small wavevector ($q=0.14$\,rlu) become overdamped, which is consistent with the behavior suggested by the contours in Fig.~\ref{dispersion_100}. The TA$_1$ phonons at larger wavevectors ($q=0.3$\,rlu) display some evidence of a broadened linewidth; however, they remain underdamped. This is a key point of this work: the damping of the TA$_1$ phonon measured at $\vec{Q}=(1,0.3,0)$ is not nearly as dramatic as that observed for the TA$_2$ mode at a comparable wavevector measured along [1$\overline{1}$0] in the (110) Brillouin zone (see the constant-$\vec{Q}$ scans measured at $\vec{Q} = (1.25,0.75,0)$ in Fig.~\ref{T2_allT}). At temperatures below $T_C$ the TA$_1$ lineshape recovers, but the phonon is still heavily damped at small $q$.
\begin{figure}[t]
\includegraphics[width=90mm]{spins_figure.eps}
\caption{Constant-$\vec{Q}$ scans measured at $\vec{Q}=(1.0,1.0,-0.05)$ and $(1.0,1.0,-0.15)$ at 600\,K, 300\,K, and 100\,K. The solid lines represent fits to a damped harmonic oscillator plus a Lorentzian function centered at $E=0$. The data were taken with the SPINS cold-neutron, triple-axis spectrometer located at NIST using $E_f=4.5$\,meV. The PMN S-II sample was aligned in the (HHL) scattering plane.}
\label{T1_spins}
\end{figure}
The behavior of the TA$_1$ phonons measured in the (100) Brillouin zone is intermediate between that of the TA$_2$ phonons measured in the (110) Brillouin zone and that of the TA phonons measured in the (200) and (220) Brillouin zones. Both TA$_1$ phonons in the (100) Brillouin zone and TA$_2$ phonons in the (110) Brillouin zone become overdamped on cooling to temperatures near $T_d$ (but above $T_C$) for a range of wavevectors between the zone center and zone boundary. However for the TA$_2$ phonons this damping extends all the way to the zone boundary; for the TA$_1$ phonons it does not. The data presented in Fig.~\ref{dispersion_100} show that TA$_1$ phonons measured at and close to the zone boundary $\vec{Q} = (1,\pm 0.5,0)$ remain underdamped at 300\,K (i.\ e.\ below $T_d$). At the other extreme is the behavior of TA phonons in the (200) and (220) Brillouin zones, for which overdamped lineshapes are \textit{never} observed at \textit{any} temperature or wavevector.
The low-$q$ damping of the TA$_1$ phonons observed in the (100) Brillouin zone is consistent with the notion of a coupling to the diffuse scattering centered at $E=0$, and Fig.~\ref{summary_100} demonstrates that a significant diffuse scattering cross section is present near (100) at small wavevectors measured along [010]. While we have examined the TA$_1$ phonons for wavevectors $q \geq 0.14$\,rlu, it is important to investigate TA$_1$ phonons at wavevectors in the long-wavelength ($q \rightarrow 0$) limit. It was shown in Ref.~\onlinecite{Gehring09:79} that TA$_2$ phonons measured in the (110) Brillouin zone at $q=0.10$\,rlu exhibit a strong damping that persists to the Brillouin zone boundary; this finding is consistent with the data in Fig.~\ref{dispersion_110}. However TA$_2$ phonons measured at much smaller $q=0.035$\,rlu do not show the dramatic changes in linewidth and energy that are seen at larger $q$. We observe a similar behavior for the long-wavelength TA$_1$ phonons measured in the (100) Brillouin zone. Data taken with good energy and $q$-resolution are presented in Fig.~\ref{T1_spins}. These data were measured in the (HHL) scattering plane near $\vec{Q}=(1,1,0)$ and display TA$_1$ phonons at $q=0.05$\,rlu and 0.15\,rlu that can be compared to those presented in Fig.~\ref{T1_allT}. While a strongly damped TA$_1$ lineshape is confirmed for $q=0.15$\,rlu at 300\,K, which recovers upon either heating to 600\,K or cooling to 100\,K, the TA$_1$ lineshape at the smaller wavevector of $q=0.05$\,rlu shows comparatively little change in linewidth and almost no shift in energy. We therefore conclude that both TA$_1$ and TA$_2$ phonons are insensitive to the formation of polar nanoregions in the long-wavelength limit. The essential difference is that the TA$_2$ phonons measured in the (110) Brillouin zone are strongly damped all the way to the Brillouin zone boundary whereas the TA$_1$ phonons are strongly damped only for an intermediate range of momentum transfer $q$.
\section{COMPARISON BETWEEN DIFFERENT BRILLOUIN ZONES}
\begin{figure}[t]
\includegraphics[width=80mm]{figure5.eps}
\caption{The temperature dependence of the linewidth $\Gamma$ (upper panels) and energy $\Omega_0$ (lower panels) of the TA$_1$ and TA$_2$ phonons was determined by fitting a damped harmonic oscillator lineshape to constant-$\vec{Q}$ scans measured at $\vec{Q}=(1.0,0.21,0)$ and $(1.15,0.85,0)$, respectively. The horizontal dashed lines in the two upper panels represent the average TA phonon energy and thus indicate when the phonon becomes overdamped ($\Gamma \geq \hbar \Omega_0$). The error bars are derived from the least-squares fit to the data.}
\label{proc_strong_diffuse}
\end{figure}
The results from the previous section can be summarized in the following two figures in which the temperature dependences of the linewidth ($\Gamma$) and energy position ($\Omega_0$) are plotted for TA$_1$ and TA$_2$ phonons measured at the same $|\vec{q}| = \sqrt{0.15^2 + 0.15^2} \sim 0.21$\,rlu over a large temperature range. Figure~\ref{proc_strong_diffuse} presents the lineshape analysis of the TA phonons measured in the (100) and (110) Brillouin zones, where the diffuse scattering cross section is strong, while Fig.~\ref{proc_weak_diffuse} presents that for TA phonons measured in the (200) and (220) Brillouin zones, where the diffuse scattering cross section is weak.
We start with the (100) and (110) Brillouin zones. The upper panels of Fig.~\ref{proc_strong_diffuse} show how the linewidth of the TA phonons varies from 100\,K to 600\,K, and the dashed lines represent the average TA phonon energy position over the same temperature range. These lines therefore represent the threshold above which the TA phonon lineshape changes from being underdamped ($\Gamma < \hbar \Omega_{0}$) to overdamped ($\Gamma \geq \hbar \Omega_{0}$). From this one can immediately see that the TA$_1$ phonon at this wavevector, although heavily damped, remains underdamped at all temperatures. By contrast, the TA$_2$ phonon at the same $|\vec{q}|$ becomes overdamped between 200\,K and 400\,K. We emphasize that this does \textit{not} imply that the TA$_1$ phonon is never overdamped; on the contrary, the constant-$\vec{Q}$ scan measured at lower $q$, $\vec{Q}=(1,0.14,0)$, shown in Fig.~\ref{T1_allT} reveals a clearly overdamped TA$_1$ lineshape at 400\,K. The point of this analysis is to show that for a given temperature the overdamping of the TA$_2$ phonon persists to larger $q$ than does that for the TA$_1$ mode. Hence the damping mechanism affecting the TA modes in PMN is anisotropic in reciprocal space, and this anisotropy correlates with that of the diffuse scattering cross section.
The lower two panels of Fig.~\ref{proc_strong_diffuse} show that the TA phonon energy positions shift significantly with temperature. Of particular note is that the TA mode energies are both minimal at a temperature that coincides well with $T_d \sim 420$\,K (where elastic diffuse scattering first appears, see Fig.~\ref{diffuse_cte}, and the TO phonon energy is lowest). However these data also reveal a key difference in that the TA$_2$ phonon energy position is significantly lower than that for the TA$_1$ phonon over the entire temperature range.
\begin{figure}[t]
\includegraphics[width=80mm]{figure6.eps}
\caption{The temperature dependences of the linewidth $\Gamma$ (upper panels) and energy $\Omega_0$ (lower panels) of the TA$_1$ and TA$_2$ phonons measured in the (200) and (220) Brillouin zones, respectively. The values are obtained from constant-$\vec{Q}$ scans measured at exactly the same reduced wavevector $\vec{q}$ as those shown in Fig.~\ref{proc_strong_diffuse}. The error bars are derived from the least-squares fit to the data; for the energy positions the error bars are equal to the size of the data points.}
\label{proc_weak_diffuse}
\end{figure}
Having demonstrated that the TA phonons in PMN are strongly coupled to the diffuse scattering cross section, we revisit the (200) and (220) Brillouin zones where the elastic diffuse scattering cross section is weak, but non-zero. To this end we show in Fig.~\ref{proc_weak_diffuse} the temperature dependence of the linewidth ($\Gamma$) and energy positions ($\Omega_0$) of the \textit{same} phonons studied in Fig.~\ref{proc_strong_diffuse}, but measured in the (200) and (220) Brillouin zones. Figure~\ref{proc_weak_diffuse} also combines previously published and unpublished data measured on other PMN single crystals using the BT9 thermal-neutron, triple-axis spectrometer, located at NIST, with data taken on the larger S-I and S-II PMN crystals studied here. The agreement between these data sets proves that there is no sample dependence to the results discussed in this paper. They also demonstrate that both the TA$_1$ and TA$_2$ phonons measured in these zones are strongly damped. However, unlike the behavior observed in the (100) and (110) Brillouin zones, these TA phonons are never overdamped. Moreover, the TA phonon energy positions display conventional behavior in that they slowly increase with decreasing temperature. We also observe a key difference wherein the TA$_1$ phonon linewidth narrows markedly below $T_d$ while the TA$_2$ phonon remains strongly damped to the lowest temperatures. It is interesting to note that the largest TA phonon linewidths are roughly comparable to those measured in the (100) and (110) Brillouin zones; however the TA phonons in the (200) and (220) Brillouin zones never become overdamped because the energy positions are not affected to the same degree as they are in the (100) and (110) Brillouin zones.
The lower-$q$ TA phonons display strong damping below $T_d$, and it is only below $T_C$ that the TA$_1$ phonons recover; by contrast the damping of the TA$_2$ phonons persists to low temperature. This result is consistent with the highly anisotropic correlations proposed in several models involving domain walls or short-range, polar correlations existing along [1$\overline{1}$0], as TA phonons propagating along this direction will be most disrupted by the disorder. The Brillouin zone dependence of the TA phonon energy position, which results in a strongly overdamped lineshape in the (110) but not in the (220) Brillouin zone, comes about from the coupling between the acoustic mode and the diffuse scattering cross section.
The diffuse scattering in PMN has recently been shown to have an inelastic component that was directly observed in neutron spin-echo measurements~\cite{Stock10:81} and that was proposed on the basis of cold-neutron scattering experiments in Ref.~\onlinecite{Gvas04:49}. In those experiments, fits to the spectra showed that the inelastic linewidth of the diffuse scattering cross section varies as $q^2$. The energy scale was found to be comparable to the TA phonon energy, thus providing a strong channel for coupling and renormalization of the phonon energy. The diffuse scattering cross section along the direction of propagation of TA$_1$ phonons in the (100) Brillouin zone is not as large as that for TA$_2$ phonons in the (110) Brillouin zone. Since the energy scale and intensity of the diffuse scattering along $\langle 100 \rangle$ are presumably much weaker, it seems logical that it would only provide a coupling over a small range of wavevectors where the energy scale matches that of the corresponding TA$_1$ phonon. The apparent lack of coupling at the smallest wavevectors, as illustrated in Fig.~\ref{T1_spins}, would then result from the energy scale of the diffuse scattering being too small to strongly couple to the TA phonon. This physical model of coupling is consistent with the general ideas proposed for mode coupling in KTaO$_3$ (Ref.~\onlinecite{Axe70:1}), where it was suggested that the coupling constant scales linearly with the reduced wavevector $q$ measured relative to the Brillouin zone center.
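To make this matching argument concrete, suppose (as an illustrative assumption) that the dynamic component of the diffuse scattering has a linewidth $\Gamma_d(q) = Dq^2$ while the acoustic branch disperses linearly, $\hbar\Omega_{TA}(q) \approx \hbar v q$. The two energy scales then cross at
\[
D q^{*2} \approx \hbar v q^{*} \quad\Longrightarrow\quad q^{*} \approx \hbar v/D,
\]
so that an efficient coupling is expected only in a window of wavevectors around $q^{*}$; well below $q^{*}$ the relaxational energy scale is too small to couple strongly to the TA phonon, consistent with the recovery observed at the smallest wavevectors in Fig.~\ref{T1_spins}.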
It is important to compare our neutron scattering measurements of the low-energy acoustic phonons in PMN with those using other techniques, including Brillouin scattering and ultrasound. Ref.~\onlinecite{Lushnikov08:77} presents a summary of both Brillouin (measured by the authors) and ultrasound measurements (taken from Ref.~\onlinecite{Smo85:27}) covering a broad range of frequencies spanning the MHz to GHz range. Their data demonstrate a softening of the elastic constants at temperatures below 300\,K, and the magnitude of the softening is frequency dependent. The linewidth of the Brillouin peaks has also been reported in Ref.~\onlinecite{Tu95:78}: a broadening and recovery were reported for the longitudinal acoustic (LA) [001] mode, but no broadening was observed for the transverse mode. These results are generally inconsistent with the results reported here. We observe significant broadening for the TA$_1$ and TA$_2$ phonons, with the former displaying a recovery at low temperatures. We also observe that the anomalies in the energy position become less pronounced at smaller wavevectors and lower energies. We remark that our data reflect the lattice dynamics of PMN on the THz energy scale, which is nearly three orders of magnitude larger than that reported by ultrasound or Brillouin scattering experiments. We therefore conclude that the effects reported at much lower frequencies are distinct from those presented here. We also note that infrared and dielectric experiments have reported significant dynamics down to much lower frequencies than are accessible with neutron scattering techniques.
\section{ELASTIC CONSTANTS OF PMN}
Based on the slopes of the acoustic phonon dispersions near the zone center, we can calculate the elastic constants using the equations of motion outlined in Ref.~\onlinecite{Dove:book}. For a cubic crystal, the elastic constant matrix is determined by three values: C$_{11}$, C$_{44}$, and C$_{12}$. C$_{44}$ is determined uniquely from the slope of the TA$_1$ mode and C$_{11}$ is fixed by longitudinal modes polarized along [100]. The elastic constant C$_{12}$ is not uniquely determined; however, the slope of the TA$_2$ mode is determined by the difference (C$_{11}$-C$_{12}$), and the slope of the longitudinal mode polarized along [110] is set by the sum (C$_{11}$+C$_{12}$+2C$_{44}$).~\cite{Noda89:40} Therefore, C$_{12}$ can be determined from our measurements once C$_{11}$ and C$_{11}$-C$_{12}$ are known, and it can be checked for consistency between the two branches.
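As a concrete illustration of this procedure, a minimal sketch is given below; the slope values are placeholders, and the standard cubic sound-velocity relations $v_{TA_1}=\sqrt{C_{44}/\rho}$, $v_{LA[100]}=\sqrt{C_{11}/\rho}$, and $v_{TA_2}=\sqrt{(C_{11}-C_{12})/2\rho}$ are assumed.
\begin{verbatim}
import numpy as np

# Assumed inputs (placeholders): initial slopes dE/dq in meV/rlu,
# cubic lattice constant a (Angstrom), and density rho (kg/m^3).
a, rho = 4.04, 8190.0
hbar, meV = 1.0546e-34, 1.6022e-22      # J*s and J per meV

def velocity(slope_meV_per_rlu):
    # v = (1/hbar) dE/dk with k = q*(2*pi/a); a converted to metres.
    dE_dk = slope_meV_per_rlu * meV / (2.0 * np.pi / (a * 1e-10))
    return dE_dk / hbar                  # sound velocity in m/s

v_TA1 = velocity(26.0)                   # TA along [100]
v_LA  = velocity(42.0)                   # LA along [100]
v_TA2 = velocity(25.0)                   # TA along [110], polarized [1-10]

C44 = rho * v_TA1**2                     # N/m^2
C11 = rho * v_LA**2
C12 = C11 - 2.0 * rho * v_TA2**2         # from v_TA2^2 = (C11-C12)/(2 rho)
\end{verbatim}
With these placeholder slopes the sketch returns values of the same order as those listed in Table~\ref{table:sample}; the consistency check described above would compare the resulting C$_{12}$ against that extracted from the LA [110] branch.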
\begin{table}[h!]
\begin{tabular}{c|c|c|c|}
Material (Technique) & C$_{11}$ & C$_{12}$ & C$_{44}$\\
\hline
\hline
PMN(neutron) & $1.39(7)$ & $0.43(5)$ & $0.53(3)$ \\
PMN(Brillouin-Tu \textit{et al.}) & 1.49 & - & 0.68 \\
PMN(Brillouin-Ahart \textit{et al.}) & 1.56 & 0.76 & 0.685 \\
\hline
\hline
PT (neutron-Tomeno \textit{et al.}) & 2.32 & 1.06 & 0.72 \\
PT (Brillouin-Li \textit{et al.}) & 2.35 & 0.97 & 0.65 \\
\hline
\hline
\end{tabular}
\caption{A summary of the elastic constants (in units of $10^{11}$\,N/m$^{2}$) measured at 300\,K for a series of PMN and PT compounds. The Brillouin scattering results on PMN and the neutron and Brillouin scattering results on PT were taken from Refs.~\onlinecite{Tu95:78,Ahart07:75,Tomeno06:73,Li93:41}. The elastic constants were calculated assuming a density $\rho = 8.19$\,g/cm$^{3}$.}
\label{table:sample}
\end{table}
We have determined the elastic constants by measuring the slopes of the transverse and longitudinal acoustic phonons near the (200) and (220) Brillouin zone centers. We have chosen these zones because the diffuse scattering cross section is very weak and the temperature dependence of the phonon energy positions exhibits little effect from the coupling to the relaxational dynamics that are prevalent in the (100) and (110) Brillouin zones. To extract an accurate value for the limiting slope of the phonon dispersion curves we have restricted ourselves to wavevectors less than 15\% of the Brillouin zone in order to ensure linearity of the dispersion.
The values of the elastic constants for PMN at 300\,K are listed in Table~\ref{table:sample} and are compared to values obtained from Brillouin scattering measurements. The neutron and Brillouin scattering measurements agree reasonably well; the remaining discrepancy can be attributed to the fact that neutrons probe the dynamics on the THz timescale whereas Brillouin techniques study the dynamics in a significantly lower frequency range. The published values for PbTiO$_3$ are also listed for comparison. While C$_{44}$ for PMN and PbTiO$_3$ are very similar, both C$_{11}$ and C$_{12}$ are reduced in PMN. The velocity of the TA$_2$ mode (set by C$_{11}$-C$_{12}$) is also reduced, with C$_{11}$-C$_{12}$=0.88 for PMN and 1.32 for PbTiO$_3$. It is interesting to note that near the morphotropic phase boundary (MPB), which occurs when $\sim 33$\%PT is substituted into PMN, the difference C$_{11}$-C$_{12}$ reduces further and nearly approaches zero, but then increases for Ti-rich compositions beyond the MPB.~\cite{Cao04:96} This suggests the presence of an instability of relaxors to TA$_2$ phonons.~\cite{Cowley13:13} This softening of the acoustic branch may be a contributing factor to the increased coupling observed between the relaxational and harmonic modes in the (100) and (110) Brillouin zones. Another difference is seen in the values of the elastic anisotropy, defined as 2C$_{44}$/(C$_{11}$-C$_{12}$), which for PMN is 1.43 and for PbTiO$_3$ is 0.75. Again, this is indicative of a softening of the TA$_2$ branch and possibly of an instability to a homogeneous deformation along $\langle 110 \rangle$.
Figure~\ref{proc_weak_diffuse} summarizes the energy positions of the TA$_1$ and TA$_2$ modes as a function of temperature in the (200) and (220) Brillouin zones. These data show that a small increase occurs with decreasing temperature, but there is no concurrent softening of the elastic constants. The data suggest that, on the THz timescale probed by neutrons, there are only relatively modest changes in the elastic constants. This finding stands in contrast to that in Ref.~\onlinecite{Lushnikov08:77}, which presents Brillouin data measured in the MHz range that show a large softening of the elastic constants (including C$_{44}$). We emphasize that our experiments are limited by the energy resolution of the spectrometer to the THz energy range, and we are not able to access the MHz region presented in Ref.~\onlinecite{Lushnikov08:77}. As discussed in Ref.~\onlinecite{Cowley67:90}, the difference in elastic constants at these extremes in frequency may be due to anharmonic effects. Dielectric and infrared measurements have shown significant dynamics at low energies, which might be contributing to the apparent discrepancy between the MHz and THz ranges.~\cite{Bovtun09:79}
\section{COMPARISON BETWEEN THE RELAXOR PMN AND THE FERROELECTRIC PMN-60\%PT}
\begin{figure}[t]
\includegraphics[width=75mm]{PMN_60PT_figure.eps}
\caption{The TA$_2$ phonon in PMN-60\%PT measured in the (110) Brillouin zone. The data were taken using the N5 thermal-neutron, triple-axis spectrometer with the sample aligned in the (HK0) scattering plane.}
\label{PMN_60PT_figure}
\end{figure}
As demonstrated in Ref.~\onlinecite{Mat06:74}, the diffuse scattering cross section can be tuned through PbTiO$_3$ (PT) doping, and a suppression of the diffuse scattering cross section is observed for large dopings that lie beyond the MPB. Such Ti-rich compositions also show no relaxor properties; instead they exhibit well-defined ferroelectric phases at low temperatures similar to that reported in PbTiO$_3$. As further evidence that the TA phonon damping reported here in PMN is related to the polar nanoregions that are associated with the diffuse scattering cross section, we present several comparative constant-$\vec{Q}$ scans of the TA$_2$ phonons in PMN-60\%PT measured near the (110) Brillouin zone in Fig.~\ref{PMN_60PT_figure}. PMN-60\%PT exhibits no measurable diffuse scattering and displays a well-defined, first-order phase transition from a cubic to a tetragonal unit cell. Concomitant with this transition the zone center TO mode softens and then hardens in a manner similar to that measured in pure PbTiO$_3$.~\cite{Shirane70:2,Hlinka06:73} The phonons in Fig.~\ref{PMN_60PT_figure} are also well-defined and underdamped. The temperature of 590\,K at which these scans were made was chosen because it is just above the structural transition of $\sim 550$\,K and serves to demonstrate that the suppression of the diffuse scattering cross section removes the strong damping and energy renormalization reported earlier for PMN in Fig.~\ref{dispersion_110}. These results confirm that the large damping and softening of the TA$_2$ mode in pure PMN is the result of short-range, polar correlations or polar nanoregions.
\section{DISCUSSION AND CONCLUSIONS}
\begin{figure}[t]
\includegraphics[width=85mm]{cartoon.eps}
\caption{A schematic illustration of the phonon dynamics for the TA$_1$ and TA$_2$ phonons measured in Brillouin zones where the diffuse scattering is strong ((100), (110), and (300)) and weak ((200)). The TO mode in the (110) zone was too weak to provide any information on the damping. While the TO mode in the (100) zone was also prohibitively weak, the diagram assumes that the lineshape should scale as the structure factor. Further discussion of the curves is provided in the text.}
\label{cartoon}
\end{figure}
We have reported the results of a detailed study of the acoustic phonons in a number of Brillouin zones in PMN. We have observed that the TA phonons measured in the (200) and (220) Brillouin zones are well-defined and exhibit a conventional temperature dependence expected from harmonic theory in the absence of strong TA-TO mode coupling. While the TO phonon cross section in these zones is strong and readily observable, the diffuse scattering intensity is weak. The situation in the (100) and (110) Brillouin zones is completely different. In these zones the diffuse intensity is strong and the optic phonon structure factor is weak. While the TA phonons in these zones (particularly in the (110) Brillouin zone) take on a heavily damped lineshape because of a strongly renormalized energy scale, the energy linewidth is similar to that observed in the (200) and (220) Brillouin zones. Because the intrinsic lattice dynamics are identical in all Brillouin zones, we attribute the damped lineshape in these zones to a strong coupling between the diffuse scattering intensity and the TA mode.
For temperatures $T_C < T \leq T_d$, the TA$_2$ phonons studied in the (110) Brillouin zone become overdamped over nearly the entire zone. However, low-$q$ phonons representing long-wavelength excitations remain underdamped, as expected on the basis of a coupling strength that scales with $q$.~\cite{Axe70:1} The TA$_1$ phonons measured in the (100) Brillouin zone display significant damping at small wavevectors. These phonons recover below $T_C$. At larger wavevectors, however, while some damping is seen, the TA$_1$ phonons remain underdamped at all temperatures. The difference in behavior between TA$_{1}$ and TA$_{2}$ phonons is consistent with the idea of coupling between the diffuse scattering cross section and TA phonons because the former is a maximum when measured along [1$\overline{1}$0].
The behavior of the low-energy TA and TO phonons is qualitatively summarized in Fig.~\ref{cartoon}. This figure illustrates the low-energy TA and TO modes and denotes a broadening of the mode, indicative of damping, through the shading. Based on our experimental measurements, we observe a direct correlation between the damping of the TA mode and the diffuse scattering cross section; we therefore conclude that there is a strong coupling between the two. We have also reported strong (though never overdamped) TA phonon linewidths in the (200) and (220) Brillouin zones, which reflect the anisotropy of the polar correlations.
Indeed, the presence of a strong relaxational component to the diffuse scattering may help to explain recent hyper-Raman data,~\cite{Zein10:105} which were used to support the case for a second, low-energy TO mode. We note that this study never observed a well-defined, underdamped mode, but rather a strongly-damped feature that could be interpreted in terms of relaxational dynamics. Thus many of the anomalous features resulting from extra intensity at low energies in the (300) Brillouin zone could be interpreted in terms of the strong coupling between the diffuse scattering and the harmonic modes that we observe in our data, which were measured in the absence of strong optic modes.
\subsection{The waterfall effect is not due to TA-TO mode coupling}
We do not agree with the interpretation of the waterfall effect, or the temperature dependence of the TA and TO modes, in terms of coupling between TA and TO modes. Such suggestions have been motivated by a comparison with BaTiO$_3$.~\cite{Hlinka03:91} However, the lattice dynamics of BaTiO$_3$ and PMN are completely different. The coupling in BaTiO$_3$ along [110] results from the fact that the TA and soft TO mode branches cross at small wavevectors; thus, because these modes also have the same symmetry, they become strongly coupled and spectral weight is transferred between the modes, which then produces large energy shifts.~\cite{Shirane70:2} Such a situation does not occur in PMN because the lattice dynamics are relatively isotropic in comparison to BaTiO$_3$ and no mode crossing occurs. We have shown that no significant changes occur in either the TA phonon energy position or the spectral weight, both of which should happen in a strong coupling picture. While some small amount of coupling must exist owing to the fact that the low-energy TA and TO phonons have the same symmetry, we do not believe that this weak coupling dominates the physics of PMN as previously suggested based on the comparison with BaTiO$_3$.
We do see anomalies in the TA phonons measured in the (100) and (110) Brillouin zones. But we have proven that these anomalies cannot be the result of coupling to the TO mode because they are absent in the (200) and (220) Brillouin zones, precisely where the TO phonon structure factor is strong. By contrast, the TO phonon structure factor is weak in the (100) and (110) Brillouin zones and this phonon is not easily seen in a constant-$\vec{Q}$ scan. This makes the TA-TO mode coupling idea highly improbable because the coupling comes from terms proportional to $\lambda F_{TA} F_{TO}$, where $\lambda$ is the coupling constant and $F$ is the phonon structure factor.~\cite{Harada71:4} A Brillouin zone in which the TO mode cross section is weak will therefore display weak coupling effects between TA and TO phonon branches. Interpreting the phonon anomalies we have observed in the (100) and (110) Brillouin zones in terms of TA-TO mode coupling is therefore not self-consistent.
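For completeness we recall the structure of the coupled-mode cross section underlying this argument; the expression below is schematic and follows the standard two-oscillator analysis of Ref.~\onlinecite{Harada71:4}:
\[
S(\vec{Q},\omega) \propto [n(\omega)+1]\,\mathrm{Im}\sum_{j,k\in\{TA,TO\}} F_{j}(\vec{Q})\,F_{k}(\vec{Q})\left[G^{-1}(\omega)\right]_{jk},
\]
with
\[
G(\omega)=\left(\begin{array}{cc}
\Omega_{TA}^2-\omega^2-i\omega\Gamma_{TA} & \lambda \\
\lambda & \Omega_{TO}^2-\omega^2-i\omega\Gamma_{TO}
\end{array}\right).
\]
The interference term, which is responsible for any transfer of spectral weight between the branches, enters with the product $F_{TA}F_{TO}$ and is therefore strongly suppressed in any zone where the TO structure factor is weak.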
Several studies, including Ref.~\onlinecite{Hlinka03:91} and Ref.~\onlinecite{Waki02:66}, have proposed TA-TO mode coupling in order to explain anomalies observed at small wavevectors in the (300) Brillouin zone. But this zone has a very strong diffuse scattering cross section (see Ref.~\onlinecite{Hirota02:65}) and a strong TO mode, which softens to low energies at small wavevectors. It is possible that a coupling may exist between the diffuse scattering and the TO mode, and this might provide an alternate explanation of some of the anomalous features in this zone such as the observation of a different $q$-dependent TO phonon damping compared to other Brillouin zones (Ref.~\onlinecite{Hlinka03:91}) and an apparent enhancement of the TA mode intensity (Ref.~\onlinecite{Waki02:66}). It might also provide an explanation for the presence of a second ``quasi-optic" mode that was also observed in the (300) Brillouin zone.~\cite{Vak02:66}
We believe a much more likely cause of the broadening of the soft TO mode (i.\ e.\ the waterfall effect) may be the presence of strong disorder induced by random, dipolar fields. This would explain the broadening observed in both pure PMN and PMN-60\%PT, which is largely absent in the parent compound PbTiO$_3$. It is interesting to note that disorder has been suggested as the origin of the broadening of the soft TO mode in BaTiO$_3$.~\cite{Dove:book}
\subsection{A second low-energy optic mode as the cause for the anisotropic dynamics?}
The broadening of the low-energy transverse optic mode has been explained in terms of a second mode based on hyper-Raman (Ref.~\onlinecite{Zein10:105}) and some neutron scattering results (Refs.~\onlinecite{Vak02:66,Vak10:52}). We provide an alternative explanation in this paper for the low-energy spectral weight, and we do not believe that our results, particularly in the $\vec{Q}$=(1,1,0) and (1,0,0) zones, are consistent with this picture. The broadening that we observe at low energies in these Brillouin zones becomes less pronounced and appears to vanish in the limit $q \rightarrow 0$. This is consistent with coupling to a relaxational component and inconsistent with a zone center feature such as a second optic mode. Furthermore, we note that the broadening sets in near $T \sim 400$\,K, below which a static component of the diffuse scattering appears (Ref.~\onlinecite{Stock10:81}), thereby linking the anomalous broadening we observe to the relaxational dynamics characterizing the diffuse scattering. We note that we have not pursued an understanding of the $\vec{Q}$=(3,0,0) Brillouin zone (as discussed in Refs.~\onlinecite{Vak02:66,Vak10:52}) in this report, as the combination of strong optic, acoustic, and diffuse scattering is present there and will undoubtedly complicate the interpretation of the dynamics.
Hyper-Raman experiments (Ref.~\onlinecite{Zein10:105}), however, illustrate the presence of significant spectral weight at small energy transfers consistent with a second low-energy optic mode. The temperature dependence was believed to point to a softening at 400\,K, consistent with the neutron results described in Refs.~\onlinecite{Vak02:66,Vak10:52}. The interpretation in terms of two modes is further corroborated by the splitting of higher energy optic modes probed using Raman spectroscopy.~\cite{Zein11:109,Zein08:89,Hehlen07:75} The measured low-energy peaks, however, are significantly broader than the resolution and are overdamped, with the linewidth being larger than or at least comparable to the energy position. We therefore believe that the data are consistent with the results presented here in the $\vec{Q}$=(2,0,0) and (2,2,0) zones, where the zone center scattering is interpreted in terms of a single overdamped mode. The hyper-Raman results may also be consistent with the relaxational component of the diffuse scattering observed with spin-echo techniques. The comparison made by Raman techniques to higher energy split modes is compelling, and neutrons have not yet been successful in observing a splitting of these higher energy modes.
\subsection{Anisotropic polar nanoregions}
Several models of the diffuse scattering have challenged the idea that polar nanoregions are highly anisotropic in real space.~\cite{Bosak11:xx} The results presented here challenge these ideas and instead point to a damping mechanism that is highly anisotropic, given the large differences observed between the TA$_1$ and TA$_2$ phonons in the (HK0) scattering plane. The large degree of anisotropy in the damping implies some sort of real space structure that is also highly anisotropic. Phonons propagating along directions that exhibit weak polar correlations are likely to be more heavily damped than phonons propagating along directions where long-range correlations exist. Our results further suggest that short-range, polar order is responsible for the strong damping of the TA phonons and that the size of this damping is tied to the correlation length associated with the diffuse scattering. We note that the TA phonon damping reported here only reflects anisotropic correlations. Thus we are not able to favor the ``pancake" model over the [1$\overline{1}$0] domain model proposed by different groups. The results here simply point to anisotropic polar correlations.
\subsection{Dynamics, two temperature scales, and the high-temperature scale $T_d$}
Considerable debate and discussion have been devoted to the nature and existence of the well-known Burns temperature, which for PMN is purported to occur around 620\,K. Our studies here suggest that this high-temperature scale is a purely dynamic phenomenon and thus depends on the energy resolution of the experimental probe used to measure it. This is consistent with results from neutron dynamic pair density function analysis, which illustrate that the off-centering of the lead in PMN is dynamic at temperatures above $\sim 350-400$\,K.~\cite{Dmowski08:100} Static, polar regions begin to appear at 420\,K where we observe a minimum in the soft TO frequency and the appearance of static diffuse scattering on GHz timescales measured with neutron inelastic scattering. This lower temperature is also consistent with Raman studies in which a splitting of some modes was observed, although this was interpreted as evidence for a third temperature scale.~\cite{Tolouse08:369} A third temperature scale has also been postulated based on lattice constant measurements;~\cite{Dkhil09:80} however, as demonstrated here, such evidence is highly suspect given that the thermal expansion is sample dependent and possibly results from a significant skin effect.
We believe that all of the results on PMN can be broadly interpreted in terms of only two temperature scales: a high temperature scale ($T_d \sim 420$\,K), which is characterized by the appearance of static polar correlations and where a minimum in the soft mode energy exists, and a lower temperature scale ($T_C \sim 210$\,K), where domains form and a long-range ferroelectric phase can be induced through the application of a sufficiently strong electric field. We believe the Burns temperature is dynamical and should not be interpreted in terms of a well-defined phase transition. This idea was discussed in Ref.~\onlinecite{Gehring09:79} where the Burns temperature was redefined to be $T_d = 420 \pm 20$\,K for PMN, instead of the much higher value of 620\,K that had been derived from index of refraction measurements.
The two temperature scales discussed here are naturally reflected in the dynamics of the TA phonons presented above. The high temperature scale ($T_d$) is where the broadening of both the TA$_1$ and TA$_2$ phonons is a maximum and where the energy renormalization due to the coupling to the relaxational diffuse scattering is most apparent. The lower temperature scale ($T_C$) marks the recovery of the TA$_1$ phonon in terms of a narrowing of the energy linewidth. This lower temperature scale is not strongly apparent in the dynamics of the TA$_2$ phonon; however, we propose that this is due to the real space structure of the low-temperature ferroelectric domains. We emphasize that our results only demonstrate that the ferroelectric correlations in real space are highly anisotropic; this is consistent with the pancake model (Refs.~\onlinecite{Xu03:70,Welberry05:38,Welberry06:74}) and all other models that invoke anisotropic correlations (for example Refs.~\onlinecite{Pasciak07:76,Ganesh10:81}). We are not able to distinguish between these various models on the basis of our experimental data.
\subsection{Summary}
We have demonstrated an anisotropic damping and energy renormalization of the TA phonons in PMN. TA phonons in all zones display strong damping below $T_d$, where static diffuse scattering first appears, indicative of short-range, polar order. Only below $T_C$, where the diffuse scattering becomes completely static, do TA$_1$ phonons propagating along [100] recover a normal lineshape. However, the TA$_2$ phonons, which travel along [1$\overline{1}$0], do not, thereby revealing the underlying anisotropic structure of the polar correlations. TA phonons measured in zones that have a strong diffuse cross section exhibit a strong renormalization in energy, which is indicative of a coupling to the relaxational component of the diffuse scattering. This coupling might provide an explanation for the extra neutron scattering intensity measured in the (300) Brillouin zone that was used to support the ideas of TA-TO mode coupling and the presence of a second, low-energy, TO mode. We have also provided measurements of the three elastic constants in the THz energy range. These are all in reasonable agreement with published data obtained using Brillouin scattering. When compared to pure PbTiO$_3$, our data suggest an instability exists in PMN that is related to the TA$_2$ acoustic branch.
\section{ACKNOWLEDGMENTS}
We acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada, the National Research Council of Canada, and from the U.S. DOE under contract No. DE-AC02-98CH10886, and the Office of Naval Research under Grants No. N00014-02-1-0340, N00014-02-1-0126, and MURI N00014-01-1-0761.
The subgradient method plays a central role in large-scale optimization and its numerous applications.
The primary goal of the method for nonsmooth and nonconvex optimization is to find generalized critical points. For example, for a locally Lipschitz continuous function $f$, we may be interested in finding a point $x$ satisfying the inclusion $0\in \partial f(x)$, where the symbol $\partial f(x)$ denotes the Clarke subdifferential.\footnote{The subdifferential $\partial f(x)$ is the convex hull of all limits of gradients taken at points approaching $x$, and at which the function is differentiable.}
The main difficulty in analyzing subgradient-type methods is that it is unclear how to construct a Lyapunov potential for the iterates when the target function is merely Lipschitz continuous. One popular strategy to circumvent this difficulty is to pass to continuous time where a Lyapunov function may be more apparent. Indeed, for reasonable function classes, the objective itself decreases along the continuous time subgradient trajectories of the function. For example, this is the path classically followed by Bena{\"\i}m et al. \cite{BHS,BHS2}, Borkar \cite{Borkar}, Ljung \cite{Ljung}, and more recently by Davis et al. \cite{DDKL} and Duchi-Ruan \cite{duchi_ruan}. \smallskip
Setting the formalism, consider the task of minimizing a Lipschitz continuous function $f$ on ${\mathbb R}^d$ by the subgradient method. It is intuitively clear that the asymptotic performance of the algorithm is dictated by the long term behavior of the absolutely continuous trajectories $\gamma\colon[0,\infty)\to{\mathbb R}^d$ of the associated subgradient dynamical system
\begin{equation}\label{eqn:subgrad_sys}
-\dot{\gamma}(t)\in \partial f(\gamma(t))\qquad \textrm{for a.e. } t\in [0,\infty).
\end{equation}
Asymptotic convergence guarantees for the subgradient method, such as the seminal work of Nurminskii \cite{Nurminskii1973} and Norkin \cite{norkin1978nonlocal}, and the more recent works of Duchi-Ruan \cite{duchi_ruan} and Davis et al. \cite{DDKL}, rely either explicitly or implicitly on the following assumption.
\begin{enumerate}
\item[-] (Lyapunov) For any trajectory $\gamma(\cdot)$ emanating from a noncritical point of $f$, the composition $f\circ \gamma$ must strictly decrease on some small interval $[0,T)$.
\end{enumerate}
For example, it is known that the Lyapunov property holds for all convex \cite{bruck,Brezis}, subdifferentially regular \cite{DDKL,MMM}, semi-smooth, and Whitney stratifiable functions \cite{DDKL}. Since this property holds for such a wide class of functions, it is natural to ask the following question.
\begin{center}
-- Is the Lyapunov property simply true {\em for all} Lipschitz continuous functions\,?
\end{center}
In this work, we show that the answer is negative ({\em cf.} Subsection~\ref{ss3.1}).
Indeed, we will show that there exist pathological Lipschitz functions that generate subgradient curves \eqref{eqn:subgrad_sys} with surprising behavior. As the first example, we construct (see Proposition~\ref{prop3}) a Lipschitz continuous function $f\colon{\mathbb R}^2\to{\mathbb R}$ and a subgradient curve $\gamma\colon [0,T)\to{\mathbb R}^2$ emanating from a non-critical point $\gamma(0)$, such that $f\circ \gamma$ {\em increases $\alpha$-linearly}:
$$f(\gamma(t))-f(\gamma(0))\geq \alpha t\,,\qquad \textrm{for all }t\in [0,T).$$
In particular, the Lyapunov property clearly fails.
Our second example presents a Lipschitz function $f\colon{\mathbb R}^4\to{\mathbb R}$ and a {\em periodic} subgradient curve $\gamma\colon [0,\infty)\to{\mathbb R}^4$ that contains no critical points of $f$. In particular, the limiting set of the trajectory $\gamma(\cdot)$ is disjoint from the set of critical points of the function $f$ (see Theorem~\ref{Thm_A} in Subsection~\ref{ss3.2}).
Our final example returns to the discrete subgradient method with the usual nonsummable but square summable stepsize. We construct a Lipschitz function $f\colon{\mathbb R}^4\to{\mathbb R}$, for which the subgradient iterates form a limit cycle that is disjoint from the critical point set of $f$ ({\em cf.} Subsection~\ref{ss3.3}). Thus the method fails to find any critical points of the constructed function. \smallskip
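For concreteness, a minimal sketch of the scheme we have in mind is given below; the objective, the subgradient selection, and the constant $c$ are placeholder assumptions, while the stepsize $\alpha_k = c/(k+1)$ is one standard choice that is nonsummable yet square summable.
\begin{verbatim}
import numpy as np

def subgradient_method(subgrad, x0, c=1.0, iters=10000):
    # Iterate x_{k+1} = x_k - alpha_k * g_k with g_k in the Clarke
    # subdifferential of f at x_k, using alpha_k = c/(k+1).
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = subgrad(x)        # any measurable selection of the subdifferential
        x = x - (c / (k + 1)) * g
    return x

# Example: f(x) = |x_1| + |x_2| with the subgradient selection sign(x);
# the iterates here approach the unique critical point at the origin.
x_final = subgradient_method(lambda x: np.sign(x), x0=[1.0, -2.0])
\end{verbatim}
The point of our final example is that, for the pathological function constructed in Subsection~\ref{ss3.3}, this same scheme produces iterates whose limit set avoids the critical point set entirely.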
The examples we construct are built from so-called ``interval splitting sets'' (Definition~\ref{def2}). These are subsets of the real line whose intersection with any interval has neither zero nor full measure. Splitting sets have famously been used by Ekeland-Lebourg \cite{Lebourg} and Rockafellar \cite{rockafellar1981favorable} to construct a pathological Lipschitz function, for which the Clarke subdifferential is the full interval $\partial f(x)=[-1,1]$ at every point $x$. Later, it was shown that functions with such pathologically large subdifferentials are topologically \cite{BW2000} and algebraically generic \cite{DF2019} (see Section~\ref{s2}). Notice the function above does not directly furnish a counterexample to the Lyapunov property, since every point in its domain is critical. Nonetheless, in this work, we borrow the general idea of using splitting sets to define Lipschitz functions with large Clarke subdifferentials. The pathological subgradient dynamics then appear by an adequate selection of subgradients that yields a smooth vector field with simple dynamics. It is worthwhile to note that in contrast to the aforementioned works, the functions we construct trivially satisfy the conclusion of the Morse-Sard theorem: the set of the Clarke critical values has zero measure.
Although this work is purely of theoretical interest, it does identify a limitation of the subgradient method and the differential inclusion approach in nonsmooth and nonconvex optimization. In particular, this work supports the common practice of focusing on alternative techniques (e.g. smoothing \cite{ermolievnorkinwets95,nestSpok17}, gradient sampling \cite{BLO05}) or explicitly targeting better behaved function classes (e.g. weakly convex \cite{semiconcave,Nurminskii1973,MR520481}, amenable \cite{amen}, prox-regular \cite{prox_reg}, generalized differentiable \cite{norkin1978nonlocal,ermoliev2003solution,MR0461556}, semi-algebraic \cite{BDLS,MR2486055}).
\section{Notation}
\label{s2}
Throughout, we let ${\mathbb R}^d$ denote the standard $d$-dimensional Euclidean space with inner product $\langle \cdot,\cdot\rangle$ and the induced norm $\|x\|=\sqrt{\langle x,x\rangle}$. The symbol $\mathbb{B}$ will stand for the closed unit ball in ${\mathbb R}^d$.
For any set $A\subset{\mathbb R}^d$, we let $\chi_A$ denote the function that evaluates to one on $A$ and to zero elsewhere. Throughout, we let $m(\cdot)$ denote the Lebesgue measure in ${\mathbb R}$.
A function $f\colon\mathcal{U}\to \mathbb{R}$, defined on an open set $\mathcal{U}\subset{\mathbb R}^d$, is
called {\em Lipschitz continuous} if there exists a real $L>0$ such that the estimate holds:
\begin{equation}\label{eqn:lip_est}
|f(x)-f(y)|\leq L\| x-y\|\qquad \forall x,y\in \mathcal{U}.
\end{equation}
The infimum of all constants $L>0$ satisfying \eqref{eqn:lip_est} is called the Lipschitz modulus of $f$ and will be denoted by $\|f\|_{\mathrm{Lip}}$.
For any Lipschitz continuous function $f$, we let $\mathcal{D}_{f}\subset \mathcal{U}$ denote the set of points where $f$ is differentiable. The classical Rademacher's
theorem guarantees that $\mathcal{D}_{f}$ has full Lebesgue measure in $\mathcal{U}$. The {\em Clarke subdifferential of $f$ at $x$} is then defined to be the set
\begin{equation}
\partial f(x):={\mathrm{conv}} \left \{ \lim_{k\to
\infty}\ \nabla f(x_{k}): x_k\to x \textrm{ and }\{x_{k}\}\subset \mathcal{D}_{f}\right \}, \label{Clarke}
\end{equation}
where the symbol, ${\mathrm{conv}}$, denotes the convex hull operation. It is important to note that in the definition, the set $\mathcal{D}_{f}$ can be replaced by any full-measure subset $\mathcal{D}\subset\mathcal{D}_{f}$;
see \cite[Chapter~2]{Clarke} for details. It is easily seen that $\partial f(x)$ is a nonempty convex compact set, whose elements are bounded in norm by
$\|f\|_{\mathrm{Lip}}$. A point $x$ is called {\em (Clarke) critical} if the inclusion $0\in \partial f(x)$ holds. We will denote the set of all critical points of $f$ by $\mathrm{Crit}(f)$. A real number $r\in{\mathbb R}$ is called a {\em critical value} of $f$ whenever there exists some point $x\in \mathrm{Crit}(f)$ satisfying $f(x)=r$.
Though the definition of the Clarke subdifferential is appealingly simple, the behavior of the (set-valued) map $x\rightrightarrows \partial f(x)$ can be quite pathological; e.g. \cite{BMW1997,CPT2005,Dymond-Kaluza2019}. For example, there exists a $1$-Lipschitz function $f\colon{\mathbb R}\to{\mathbb R}$ having maximal possible subdifferential $\partial f(x)=[-1,1]$ at every point on the real line. The first example of such {\em Clarke-saturated} functions appears in \cite[Proposition~1.9]{Lebourg}, and is based on interval splitting sets.
\begin{definition}[Splitting set]\label{def2} A measurable set $A\subseteq \mathbb{R}$ is said to {\em split intervals} if for every nonempty interval $I \subset {\mathbb R}$ it holds
\begin{equation}
0<m(A\cap I)<m(I)\,, \label{split}
\end{equation}
where $m$ denotes the Lebesgue measure.
\end{definition}
The first definition and construction of a splitting
set go back to \cite{Kirk}, while the first examples of Clarke-saturated functions can be found in \cite{rockafellar1981favorable,Lebourg}. The basic construction proceeds as follows. For any fixed set $A\subseteq \mathbb{R}$ that splits intervals, define the univariate function
$$f(t)=\int^t_{0} \chi_{A}(\tau)-\chi_{A^c}(\tau)~d\tau\qquad \textrm{for all }t\in {\mathbb R}.$$
An easy computation shows that $f$ is Clarke-saturated, namely $\partial f(t)=[-1,1]$ for all $t\in{\mathbb R}$. We will use this observation throughout.
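For illustration only, the first stages of a Smith--Volterra--Cantor (``fat Cantor'') construction can be generated numerically, together with the induced function $f$. We stress that a genuine splitting set requires a countable dense union of such sets, and every finite-stage approximation violates \eqref{split} at sufficiently fine scales; the Python sketch below merely conveys the flavor of the construction.
\begin{verbatim}
import numpy as np

def svc_intervals(n_stages):
    """Intervals kept after n stages of a Smith-Volterra-Cantor
    construction: stage k removes an open middle gap of length
    4**(-k) from each surviving interval."""
    intervals = [(0.0, 1.0)]
    for k in range(1, n_stages + 1):
        gap = 4.0 ** (-k)
        new = []
        for a, b in intervals:
            mid = 0.5 * (a + b)
            new.append((a, mid - gap / 2.0))
            new.append((mid + gap / 2.0, b))
        intervals = new
    return intervals

def f_of_t(t, intervals):
    """f(t) = int_0^t (chi_A - chi_{A^c}) = 2*m(A cap [0,t]) - t."""
    mA = sum(max(0.0, min(b, t) - a) for a, b in intervals)
    return 2.0 * mA - t

A = svc_intervals(10)    # total measure tends to 1/2 as stages grow
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  f(t)={f_of_t(t, A):+.4f}")
\end{verbatim}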
Famously, the papers \cite{BMW1997,BW2000} established that, for the uniform topology, a ``generic'' Lipschitz function (in
the Baire sense) is Clarke saturated. Although Clarke-saturation is not
generic under the $\|\cdot\|_{\mathrm{Lip}}$-topology, it has been recently
proved in \cite{DF2019} that the set of Clarke-saturated functions contains a nonseparable Banach space.\footnote{Consequently, in the notation of \cite{G1966}, the set of Clarke-saturated functions is ``spaceable''. Moreover, if we endow the space of Lipschitz continuous functions with the $\|\cdot\|_{\mathrm{Lip}}$-seminorm, the space of pathological functions contains an isometric copy of $\ell^{\infty}$.}
We refer to \cite{AGS2005,BO2014,BPS2014,EGS2014} for recent results on the topic.
\section{Main results}
In this section we construct the three pathological examples announced in
the introduction. Our first example will make use of a splitting set that
satisfies an auxiliary property. The construction is summarized in
Lemma~\ref{L1}. The proof is essentially standard, and therefore we have placed it in the Appendix.
\begin{lemma}[Controlled splitting]
\label{L1} For every $\lambda\in(\frac{1}{2},1)$, there exists a measurable set $
A\subset {\mathbb{R}}$ that splits intervals and satisfies:\newline
\begin{equation}
m(A\cap \lbrack 0,t])\geq \lambda t\,,\quad \text{for every }t>0. \label{A}
\end{equation}
\end{lemma}
\subsection{Nondecreasing subgradient trajectories}
\label{ss3.1}
The following proposition answers in the negative the first question of the introduction, revealing that the Lyapunov property may fail along a subgradient trajectory.
\begin{proposition}
[Linear increase along orbits] \label{prop3}Let $\alpha > 0$ be arbitrary. Then, there exists a
Lipschitz continuous function $f\colon\mathbb{R}^{2}\rightarrow \mathbb{R}$ and a
subgradient orbit $\gamma:[0,+\infty)\rightarrow \mathbb{R}^{2}$ emanating from
a noncritical point, meaning
\begin{equation}
\left \{
\begin{aligned}
-\dot{\gamma}(t)&\in \partial f(\gamma
(t))~\textrm{for a.e. }t\in [0,+\infty),\\
\gamma(0)&\notin \mathrm{Crit}(f)
\end{aligned}
\right. \label{1}
\end{equation}
and satisfying the linear increase guarantee
\[
f(\gamma(t))-f(\gamma(0))\geq \alpha \,t\,,\quad \text{for every }t\in \lbrack0,+\infty).
\]
\end{proposition}
\noindent \textbf{Proof.} According to Lemma~\ref{L1}, there exists a constant
$\lambda>\frac{1}{2}$ and a set $A\subset {\mathbb R}$ satisfying \eqref{A}. Define the constant $\mu:=\sqrt{\frac{\alpha+1}{2\lambda-1}}$ and define the function $f\colon{\mathbb R}^2\to{\mathbb R}$ by
\[
f(x,y):=-x+\mu \int \limits_{0}^{y} \, \left( \chi_{A}(\tau)-\chi_{A^{c}}
(\tau)\right) \,d\tau.
\]
It is easily seen that $f$ is Lipschitz continuous and the Clarke subdifferential of $f$ is given by
\[
\partial f(x,y)=\{-1\} \times \lbrack-\mu,\mu],\quad \text{for every
}(x,y)\in \mathbb{R}^{2}.
\]
Notice that $\partial f(x,y)$ contains the
direction $u=-(1,\mu)$ at every point $(x,y)$. Taking into account $\mathrm{Crit}(f)=\emptyset$, we deduce that the
curve $\gamma(t)=-tu=(t, \mu t)$ satisfies the system (\ref{1}). Moreover, we have the estimate
\[
f(\gamma(t))=f(t,\mu t)=-t+\mu\, \left( m(A\cap \lbrack0,\mu t])-m(A^{c}\cap
\lbrack0, \mu t])\right) \geq \left( (2\lambda-1)\mu^{2}-1\right) t\, \geq
\alpha \,t\,.
\]
The proof is complete.$\qquad \hfill \Box$
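For the reader who wishes to verify the constants numerically, the short Python check below confirms that the choice of $\mu$ makes the slope $(2\lambda-1)\mu^{2}-1$ equal to $\alpha$, so that the lower bound $f(\gamma(t))\geq \alpha t$ follows from the single property $m(A\cap[0,s])\geq \lambda s$.
\begin{verbatim}
import numpy as np

for lam in (0.6, 0.75, 0.9):           # lambda in (1/2, 1)
    for alpha in (0.5, 1.0, 5.0):
        mu = np.sqrt((alpha + 1.0) / (2.0 * lam - 1.0))
        assert np.isclose((2.0 * lam - 1.0) * mu ** 2 - 1.0, alpha)
        # lower bound along gamma(t) = (t, mu*t), using m(A cap [0,s]) >= lam*s
        t = np.linspace(0.0, 10.0, 101)
        f_lower = -t + mu * ((2.0 * lam - 1.0) * mu * t)
        assert np.all(f_lower >= alpha * t - 1e-9)
print("constants of Proposition 3 verified")
\end{verbatim}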
\subsection{(Periodic) subgradient orbits without critical points}
\label{ss3.2}
Next, we present the second example announced in the introduction, namely a Lipschitz continuous function along with a periodic subgradient curve that contains no critical points of the function. We begin with the following intermediate construction. Henceforth, the symbol $\mathbb{B}_{\infty}$ will denote the closed unit $\ell_{\infty}$-ball in ${\mathbb R}^d$, and $i$ will denote the imaginary unit.
\begin{theorem}\label{thm:basic_period}
Fix an arbitrary real $M>0$ and $b\in (0,\frac{M}{2})$, and let $A\subset{\mathbb R}$ be a measurable subset that splits intervals. Define the function $\Phi\colon{\mathbb R}^2\to{\mathbb R}$ by
$$\Phi(x,y):=xy+M\, \int \limits_{0}^{y} \left( \chi_{A}(\tau
)-\chi_{A^{c}}(\tau)\right) \,d\tau.$$
Then the following are true:
\begin{itemize}
\item[\rm{(i).}] The function $\Phi$ is $2 M$-Lipschitz continuous when restricted to the ball $b\mathbb{B}_{\infty}$.
\item[\rm{(ii).}] Equality holds: $b\mathbb{B}_{\infty}\cap \mathrm{Crit}(\Phi)=[-b,b]\times \{0\}$.
\item[\rm{(iii).}] For any real $r<b$ and $\theta\in{\mathbb R}$, the curve $\gamma(t)=re^{i(t+\theta)}$
is a subgradient orbit of $\Phi$, that is $-\dot{\gamma}(t)\in
\partial \Phi(\gamma(t))$ for all $t> 0$.
\end{itemize}
\end{theorem}
\noindent \textbf{Proof.} The standard sum rule yields the expression for the subdifferential
\begin{equation}
\partial \Phi(x,y)=\left \{ (y,x+h):\,h\in \lbrack-M,M]\, \right \}
\,,\quad \text{for all }(x,y)\in {\mathbb R}^2. \label{subd-Phi}
\end{equation}
The first claim then follows immediately by noting
$$\max_{(x,y)\in b\mathbb{B}_{\infty}}\max_{v\in \partial \Phi(x,y)}\|v\|\leq 2M.$$
The second claim also follows immediately from the expression \eqref{subd-Phi}. Next, for any $(x,y)\in b\mathbb{B}_{\infty}$ we can take
$h=-2x$ in (\ref{subd-Phi}), yielding the selection $(-y,x)\in-\partial \Phi(x,y).$
Thus the inclusion $\dot{\gamma}\in-\partial \Phi(\gamma)$ holds as long as the curve $\gamma(t) =(x(t),y(t))$ satisfies the ODE
\begin{equation}\label{eqn:ode_sat}
\dot{x}(t)=-y(t),~~
\dot{y}(t)=x(t)\qquad \forall t.
\end{equation}
Clearly, the curve $\gamma(t):=re^{i(t+\theta)}$ indeed satisfies $\eqref{eqn:ode_sat}$.
\qquad $\hfill \Box$
\bigskip
Thus Theorem~\ref{thm:basic_period} provides an example of a periodic subgradient curve $\gamma(\cdot)$ for a Lipschitz continuous function $\Phi\colon{\mathbb R}^2\to{\mathbb R}$. The deficiency of the construction is that $\gamma$ passes through some critical points of $\Phi$. We will now see that by doubling the dimension, we can ensure that the periodic curve never passes through the critical point set.
\begin{theorem}\label{thm:thm2}
(Periodic subgradient orbits without critical points)\label{Thm_A} There exists
a Lipschitz continuous function $f\colon\mathcal{U}\longrightarrow \mathbb{R}$, defined on an open set $\mathcal{U}\subset \mathbb{R}^{4}$, and a periodic analytic curve $\gamma\colon {\mathbb R}\to \mathcal{U}$, satisfying
$$\dot{\gamma}(t)\in-\partial f(\gamma(t))\quad \forall t\qquad \textrm{and}\qquad \operatorname{Im}(\gamma)\, \cap \, \mathrm{Crit}(f)=\emptyset.$$
\end{theorem}
\noindent \textbf{Proof.} Let $M$, $b$, $A$, and $\Phi$ be as defined in Theorem~\ref{thm:basic_period}.
Set $\mathcal{U}:=b\mathbb{B}_{\infty}\times b\mathbb{B}_{\infty}$ and define the function
\begin{equation}
\left \{
\begin{array}
[c]{l}
f\colon\mathcal{U}\to \mathbb{R}\vspace{0.3cm}\\
f(x_{1},x_{2},x_{3},x_{4})=\Phi(x_{1},x_{2})+\Phi(x_{3},x_{4})
\end{array}.
\right. \label{f}
\end{equation}
It follows easily from Theorem~\ref{thm:basic_period} that the critical point set is given by
\begin{equation}\label{eqn:crit_set_def}
\mathrm{Crit}(f)=\left \{ (x_{1},0,x_{3},0):\, \max \{|x_{1}|,|x_{3}|\} \leq
b\, \right \}.
\end{equation}
Define the curve $\gamma\colon {\mathbb R}\to \mathcal{U}$ by
$\gamma(t)=\frac{b}{2}\left(e^{it},e^{i\left(t+\frac{\pi}{2}\right)}\right).$
Theorem~\ref{thm:basic_period} immediately guarantees the inclusion $\dot\gamma(t)\in -\partial f(\gamma(t))$ for all $t\in {\mathbb R}$, while the expression \eqref{eqn:crit_set_def} implies $\operatorname{Im}(\gamma)\, \cap \, \mathrm{Crit}(f)=\emptyset$. See Figure~\ref{fig:fig2} for an illustration.
$\hfill \Box$
\begin{figure}[h!]
\centering
\includegraphics[scale=0.8]{Fig2.pdf}
\caption{Subgradient curve $\gamma(\cdot)$ generated by $f$ is depicted in black; the set of critical points of $f$ is depicted in red. The picture on the left shows the projection of $\gamma$ onto the coordinate $(x_1,x_2)$, while the picture on the right shows the projection of $\gamma$ onto the coordinate $(x_3,x_4)$.}
\label{fig:fig2}
\end{figure}
\bigskip
It is worthwhile to note that the function $f$ in Theorem~\ref{Thm_A} trivially satisfies the conclusion of the Morse-Sard theorem, since $f(\mathrm{Crit}(f))=\{0\}$.
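A direct numerical check of this construction is also instructive; the Python snippet below (an illustration with $b=1$) confirms that the orbit keeps the constant distance $b/2$ from $\mathrm{Crit}(f)$ and satisfies the rotation ODE \eqref{eqn:ode_sat} in each planar component.
\begin{verbatim}
import numpy as np

b = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 2001)
x1, x2 = 0.5 * b * np.cos(t), 0.5 * b * np.sin(t)    # (b/2) e^{it}
x3, x4 = -0.5 * b * np.sin(t), 0.5 * b * np.cos(t)   # (b/2) e^{i(t+pi/2)}

# distance from the orbit to Crit(f) = [-b,b] x {0} x [-b,b] x {0}
dist = np.sqrt(x2 ** 2 + x4 ** 2)
print("min distance to Crit(f):", dist.min())        # equals b/2 > 0

# each planar pair solves (x', y') = (-y, x)  (finite-difference check)
assert np.allclose(np.gradient(x1, t), -x2, atol=1e-2)
assert np.allclose(np.gradient(x4, t), x3, atol=1e-2)
\end{verbatim}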
\subsection{Subgradient sequences without reaching critical points}
\label{ss3.3}
We next present the final example announced in the introduction. We exhibit a Lipschitz continuous function $f$ such that
the subgradient method, which can access $f$ only by querying subgradients, fails to detect critical points in any sense, under any choice of
(nonsummable, square summable) steps $\{t_n\}_{n\geq 1}$.
As the initial attempt at the construction, one may try applying the subgradient method to the function $f$ constructed in Theorem~\ref{thm:thm2}, since it has periodic subgradient orbits in continuous time. The difficulty is that when applied to this function, the subgradient iterates (in discrete time) quickly grow unbounded. Therefore, as part of the construction, we will modify the function $f$ from Theorem~\ref{thm:thm2} by exponentially damping its slope.
\begin{theorem}
[Subgradient method does not detect critical points]\label{Thm_B} There exists a Lipschitz continuous function $f\colon\mathbb{R}^{4}\to \mathbb{R}$, a subgradient selection $G(X)\in \partial f(X),$ and initial condition $\bar{X}
\in \mathbb{R}^{4}$ such that
\begin{equation}
\text{for every sequence\ }\{t_{n}\}_{n}\subset(0,+\infty)\; \text{with}\;
\sum_{n\geq1} t_{n}=+\infty \; \text{and} \; \sum_{n\geq1} t_{n}^{2}<+\infty \label{4}
\end{equation}
the subgradient algorithm
\begin{equation}
\left \{
\begin{aligned}
X_{n+1}&=X_{n}-t_{n}G(X_n)\\
X_{1}&= \bar{X}
\end{aligned}
\right\} \label{5}
\end{equation}
generates a bounded sequence $\{X_{n}\}_{n\geq1}$ whose accumulation points do not meet $\mathrm{Crit}(f)$.
\end{theorem}
\noindent \textbf{Proof.}
Let us first define the function $\rho:\mathbb{R}
^{2}\to \mathbb{R}$ by
\[
\rho(x,y)=\exp(-y^{2}) \int \limits_{0}^{x} \exp(-\tau^{2})\,d\tau,
\]
and notice that
\[
\nabla \rho(x,y)=\left( e^{-(x^{2}+y^{2})},\,-2y\cdot\rho(x,y)\right) .
\]
Then for $\delta>0$ and $M\geq \frac{\delta}{2}(\sqrt{\pi}+1)$, and for a measurable set $A\subset{\mathbb R}$ that splits intervals, define the function $\phi\colon\mathbb{R}^{2}\to \mathbb{R}$ by
\begin{equation}\label{phi}
\phi(x,y):=\delta \,y\, \rho(x,y)+M\, \int \limits_{0}^{y} \left( \chi_{A}(\tau
)-\chi_{A^{c}}(\tau)\right) \,d\tau.
\end{equation}
An easy calculation shows
that $\phi$ is Lipschitz continuous and its subdifferential is given
by
\begin{equation}
\partial \phi(x,y)=\left \{ \delta \,y\,e^{-(x^{2}+y^{2})}\right \} \,
\times \, \left\{\left ( \delta \,(1-2y^{2})\,e^{-y^{2}}\, \int \limits_{0}^{x}
e^{-\tau^{2}}d\tau \right ) +[-M,M]\right\}. \label{subd-phi}%
\end{equation}
It follows from (\ref{subd-phi}) that a point $(x,y)\in \mathbb{R}^{2}$ is
Clarke critical for $\phi$ if and only if $y=0,$ that is
\[
\mathrm{Crit}(\phi)=\mathbb{R}\times \{0\}
\]
We claim that for every $(x,y)\in \mathbb{R}^{2}$, we have
\begin{equation}
g(x,y):=\delta \,e^{-(x^{2}+y^{2})}\,(y,-x)\in \partial \phi(x,y).
\label{subg-field}
\end{equation}
To see this, denoting by $\pi_2:{\mathbb R}^2\rightarrow {\mathbb R}$ the projection to the second coordinate, we observe:
\begin{align*}
\left| \pi_2(g(x,y))- \delta \,(1-2y^{2})\,e^{-y^{2}}\, \int \limits_{0}^{x}
e^{-\tau^{2}}d\tau\right|&=\delta e^{-y^2}\left|(1-2y^{2})\, \int \limits_{0}^{x}
e^{-\tau^{2}}d\tau+xe^{-x^2}\right|\\
&\leq \delta e^{-y^2}|1-2y^{2}|\cdot\frac{\sqrt{\pi}}{2}+\delta\,|x|e^{-x^2}\\
&< \delta\cdot\frac{\sqrt{\pi}}{2}+\frac{\delta}{2}\leq M,
\end{align*}
where we used the elementary bounds $e^{-y^2}|1-2y^{2}|\leq 1$ and $|x|e^{-x^2}\leq \frac{1}{\sqrt{2e}}<\frac{1}{2}$.
Notice that the integral curves of the above vector field
\[
(\dot{x},\dot{y})=g(x,y)=\delta \,e^{-(x^{2}+y^{2})}\,(y,-x),
\]
are the concentric circles $x^{2}+y^{2}=r^{2}$ for any $r\geq 0.$
$\smallskip$
Consider now applying a subgradient method to $\phi$. Namely, let $\gamma_{n}:=(x_{n},y_{n})$ and consider the subgradient sequence
\begin{equation}
\gamma_{n+1}=\gamma_{n}-t_{n}g_n, \label{6}%
\end{equation}
where we set $g_n:=g(x_n,y_n)\in \partial \phi(x_n,y_n)$.
Then the norms
$r_{n}:=\|\gamma_{n}\|$ satisfy
\begin{equation}
\|g_n\|=\delta r_{n}e^{-r_{n}^{2}}\leq \frac{\delta}{\sqrt{2e}}. \label{7}%
\end{equation}
Since $g_n$ is tangent at $\gamma_{n}$ to the circle centered at
$0$ with radius $r_{n},$ we deduce easily from (\ref{6}) that the sequence
$\{r_{n}\}_{n\geq1}$ is strictly increasing. On the other hand, by the
Pythagorean theorem and (\ref{7}) we deduce:
\[
r_{n+1}^{2}\, = \,r_{n}^{2}\,+\,t_{n}^{2}\,\|g_{n}\|^{2}\leq \,r_{n}
^{2}\,+\, \left( \frac{\delta^{2}}{2e}\right) t_{n}^{2},
\]
and by induction
\[
r_{n+1}^{2}\, \leq \,r_{1}^{2}\,+\left( \frac{\delta^{2}}{2e}\right)
\sum_{n\geq1} t_{n}^{2}<+\infty.
\]
Therefore, $\{r_{n}\}_{n\geq1}$ is bounded and the sequence $\gamma
_{n}:=(x_{n},y_{n})$ has accumulation points.
The proof is not yet complete, since in principle, the accumulation points of the sequence $(x_{n},y_{n})$ may be critical. To eliminate this possibility, we proceed as in the proof of Theorem~\ref{Thm_A} by doubling the dimension.
Namely, define the function
\begin{equation}
\left \{
\begin{array}
[c]{l}
f\colon\mathbb{R}^{4}\to \mathbb{R}\vspace{0.3cm}\\
f(x,y,z,w)=\phi(x,y)+\phi(z,w)
\end{array}
\right. , \label{f-discr}
\end{equation}
and observe that $f$ is Lipschitz continuous and equality holds:
\[
\mathrm{Crit}(f)=\mathbb{R}\times \{0\} \times \mathbb{R}\times \{0\}.
\]
We shall now prescribe a subgradient selection:
\begin{equation}
G(x,y,z,w)=(g(x,y),g(z,w))\in\partial f(x,y,z,w),
\label{G--field}
\end{equation}
where $g(\cdot,\cdot)$ is defined in \eqref{subg-field}. Let us also prescribe the
initial condition
\[
X_{1}\equiv \bar{X}=(1,0,0,1)\in \mathbb{R}^{4}.
\]
Then (\ref{5}) generates a bounded sequence
\[
X_{n}:=(x_{n},y_{n},z_{n},w_{n})\text{\quad with\quad}G_{n}:=G(X_{n}
)\in \partial f(X_{n})
\]
which splits in ${\mathbb R}^2 \times {\mathbb R}^2$ as follows:
\[
X_{n}=(\gamma_{n},\tilde{\gamma}_{n}),\;G_{n}=(g_{n},\tilde{g}_{n})\in
\partial f(X_{n})\text{\quad with\quad}\gamma_{n}:=(x_{n},y_{n}
)\quad \text{and\quad \ }\tilde{\gamma}_{n}:=(z_{n},w_{n})
\]
such that
\begin{equation}
\left \{
\begin{array}
[c]{c}
\gamma_{n+1}=\gamma_{n}-t_{n}g_{n}\\
g_{n}\in \partial \phi(\gamma_{n})
\end{array}
\right\} \quad \text{and}\quad \text{ }\left \{
\begin{array}
[c]{c}
\tilde{\gamma}_{n+1}=\tilde{\gamma}_{n}-t_{n}\tilde{g}_{n}\\
\tilde{g}_{n}\in \partial \phi(\tilde{\gamma}_{n})
\end{array}
\right\}, \label{rotational}
\end{equation}
respectively. Thanks to the initial condition, for every $n\geq1$ the vector
$\tilde{\gamma}_{n}$ is a $\frac{\pi}{2}$-rotation of the vector
$\gamma_{n}$; indeed, $\tilde{\gamma}_{1}=(0,1)$ is the $\frac{\pi}{2}$-rotation of $\gamma_{1}=(1,0)$, and the updates \eqref{rotational} commute with rotations because the vector field $g(\cdot,\cdot)$ does. In particular, $\tilde{\gamma}_{n}=(-y_{n},x_{n})$, and therefore $y_{n}^{2}+w_{n}^{2}=y_{n}^{2}+x_{n}^{2}=r_{n}^{2}\geq r_{1}^{2}=1$ for all $n\geq 1$. Consequently, no accumulation point of $\{X_{n}\}_{n\geq1}$ can have its second and fourth coordinates vanish simultaneously; that is, all limit points of $\{X_{n}\}_{n\geq1}$ lie outside of $\mathrm{Crit}(f)$. $\hfill \Box$
\bigskip
It is worthwhile to note again that the function $f$ in Theorem~\ref{Thm_B} trivially satisfies the conclusion of the Morse-Sard theorem, since $f(\mathrm{Crit}(f))=\{0\}$.
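Theorem~\ref{Thm_B} is also easy to reproduce numerically. The following Python sketch runs the method \eqref{5} with the selection \eqref{G--field}, $\delta=1$, and the nonsummable, square summable steps $t_n=1/n$; the iterates remain bounded while their distance to $\mathrm{Crit}(f)$, which equals $\sqrt{y_n^2+w_n^2}$, never drops below $r_1=1$.
\begin{verbatim}
import numpy as np

delta = 1.0

def g(p):                                 # the selection (14) on R^2
    x, y = p
    return delta * np.exp(-(x * x + y * y)) * np.array([y, -x])

X = np.array([1.0, 0.0, 0.0, 1.0])        # X_1 = (1, 0, 0, 1)
min_dist = np.inf
for n in range(1, 50001):
    G = np.concatenate([g(X[:2]), g(X[2:])])
    X = X - (1.0 / n) * G                 # t_n = 1/n
    min_dist = min(min_dist, np.hypot(X[1], X[3]))

print("final radius           :", np.linalg.norm(X[:2]))  # <= sqrt(1+pi^2/(12e))
print("min distance to Crit(f):", min_dist)               # stays >= 1
\end{verbatim}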
\vspace{0.8cm}
\textbf{Acknowledgments.} A major part of this work was done during a
research visit of the first author to the University of Washington (July~2019). This author thanks the host institute for hospitality. The first author's research
has been supported by the grants CMM-AFB170001, FONDECYT 1171854 (Chile) and
PGC2018-097960-B-C22, MICINN (Spain) and ERDF (EU). The research of the second author has been supported by the NSF DMS 1651851 and CCF 1740551 awards.
\bigskip
\bibliographystyle{plain}
\section{Introduction}\label{sec:Introduction}
\IEEEPARstart{D}{irection} of arrival estimation is employed in many applications such as seismic \cite{1}, acoustic \cite{2}, sonar \cite{3}, radar \cite{4}, and wireless communications \cite{5}. However, most techniques are devoted to narrowband sources, while in many applications the received signals are wideband, for which narrowband methods would result in inaccurate and unreliable estimates. This, as a result, has motivated more research to develop appropriate techniques for wideband signals. A conventional approach is to divide the wideband output signal of an array into narrowband subbands using a filter bank or the Discrete Fourier Transform (DFT). The methods that then combine the joint information of these subbands are conventionally classified into Incoherent Signal Subspace Methods (ISSMs) and Coherent Signal Subspace Methods (CSSMs).
In the ISSM, a narrowband DOA estimator is applied to all narrowband subbands and the final DOA estimate is then calculated by incoherently averaging the respective results \cite{8,9}. These methods show poor performance for low Signal to Noise Ratio (SNR) signals and the DOA estimate is sensitive to any high-magnitude error that may occur in each subband.
In the CSSM \cite{10,11}, the center frequencies of all narrowband subbands are focused on a reference subband by using a focusing matrix. Then an accurate narrowband DOA estimator such as the MUSIC \cite{6} is applied to the reference frequency for all the focused subbands to obtain the final DOA estimate. The Rotational Signal Subspace (RSS) is one of the popular CSSMs \cite{11}. However, a major difficulty with the CSSM is how to design the corresponding focusing matrices. Moreover, most of these methods need an initial value for DOA estimation which is considered a practical drawback.
To overcome these limitations, some methods such as the Test of Orthogonality of Projected Subspaces (TOPS) \cite{12} and Weighted Squared TOPS (WS-TOPS) \cite{7} have been developed based on the signal and noise subspaces of the different narrowband subbands. Although these methods need no focusing matrix for the estimation procedure, they require a large number of snapshots to offer acceptable performance; such snapshot counts might not be available in some applications or may impose a large computational burden.
As another approach, in \cite{13} and \cite{14}, the DOAs are estimated based on Compressive Sensing (CS) theory by exploiting the joint sparsity across different frequency bins. These methods can lead to high-accuracy estimates for a low number of snapshots. To employ the CS concept, the DOA space is discretized by a grid and each DOA estimate is located on the nearest grid point. This, as a result, limits the resolution of the estimates to the resolution of the grid. It is known that in practice the true DOAs can be located off the grid cells (off-grid), which leads to an unavoidable grid mismatch error. To overcome this limitation, some off-grid methods estimate the DOAs and the grid simultaneously \cite{15,16}. An off-grid method for wideband DOA estimation has been reported in \cite{17} based on Sparse Bayesian Learning (SBL). These methods normally involve nonconvex optimization problems whose global convergence cannot be guaranteed. Moreover, they need an initial grid.
To overcome the mentioned problems, Candès and Fernandez-Granda \cite{18} extended the discrete sparse approach to a continuous case by introducing the super-resolution concept, which ultimately leads to solving a convex optimization problem. In \cite{19}, the super-resolution notion was introduced as a gridless sparse method for line spectral estimation. The application of the latter approach was later developed in \cite{20} and \cite{21} for narrowband DOA estimation. It should be noted that all of these methods are performed using regular arrays such as Uniform Linear Arrays (ULAs) or Sparse Linear Arrays (SLAs). In some applications \cite{22}, however, making use of such arrays is impossible as the sensors must be located at arbitrary positions. The employment of irregular array architectures provided a viable solution in \cite{23}. In \cite{24}, a sparse gridless arbitrary-sampling-based method is developed for line spectral estimation based on Prolate Spheroidal Wave Functions (PSWFs). This method uses the Single Measurement Vector (SMV) model and can be applied to narrowband DOA estimation with arbitrary linear arrays. We extended this method to the Multiple Measurement Vectors (MMV) case in \cite{25}.
In this paper, we propose a Super-Resolution Wideband DOA (SRW-DOA) estimator for arbitrary linear arrays that leads to a convex optimization problem. The resulting optimization problem is solved by the Semi-Definite Programming (SDP) approach developed in our previous work \cite{25}. The major point is that this method requires no focusing matrix or initial estimates. Our numerical results show that the proposed method offers high-accuracy estimates with more robustness to noise compared to the conventional ones. Moreover, this method is useful for DOA estimation of adjacent sources.
The paper is organized as follows. In Section \ref{sec:data model}, the data model is introduced. In Section \ref{sec:proposed1}, the gridless sparse method is proposed for wideband DOA estimation. Numerical results are presented in Section \ref{sec:simulations}, and Section \ref{sec:coclusion} concludes the paper.
In this work, matrices and vectors are respectively represented by uppercase and lowercase bold letters, ${\left( \cdot \right)^T}$ and ${\left( \cdot \right)^H}$ denote transpose and conjugate transpose operators, ${\left\| {\cdot} \right\|_F}$ and ${\left\| {\cdot} \right\|_{\cal A}}$ are the Frobenius and atomic norm, respectively, $\mathop {\left\| {\cdot} \right\|}\nolimits_2$ defines the $\ell _ {2} $ norm of a vector, $diag({\cdot})$ and $toep({\cdot})$ convert a vector to diagonal and Toeplitz matrices, respectively and finally, $conv({\cdot})$, $trace({\cdot})$, and $tan({\cdot})$ represent the convex hull, trace, and tangent operator, respectively.
\section{Data model}\label{sec:data model}
We consider an arbitrary linear array consisting of $M$ omnidirectional sensors located at arbitrary distances. Taking the first sensor as the reference, the distance of the $m$th sensor from the reference is ${r_m}$, with ${r_1 = 0}$ for the reference sensor. There exist $K$ sources with directions ${\theta _k}, k = 1, \ldots ,K$, emitting different wideband signals; these sources are assumed to be fixed during the observation time. By applying the DFT to the wideband signal received by each sensor, sampled at the Nyquist rate, we obtain $J$ frequency bins. The $j$th frequency bin of the $K$ received signals at the $m$th sensor is then given by
\begin{align}\label{formulation:1}
{X_{m,j}} = \mathop \sum \limits_{k = 1}^K {S_{k,j}}{\rm{exp}}\left( { - \frac{{i2\pi {r_m}\sin \left( {{\theta _k}} \right)}}{{{\gamma _j}}}} \right) + {N_{m,j}},\\
j = 1, \ldots ,J,\;\;m = 1, \ldots ,M, \nonumber
\end{align}
where ${S_{k,j}}$ shows the emitted signal by the $k$th source, ${\gamma _j}$ is the wavelength of the $j$th frequency, and ${N_{m,j}}$ is the corresponding noise at the $m$th sensor. By defining ${\alpha _j} \buildrel \Delta \over = \frac{2}{{{\gamma _j}}}$, the spatial DOA frequencies ${f_k} = \frac{1}{2}\sin \left( {{\theta _k}} \right) \in \left[ { - \frac{1}{2},\frac{1}{2}} \right]$, and using ${X_{m,j}}$ and ${N_{m,j}}$ as the elements of matrices $\bm{X} \in \mathbb{C}{^{M \times J}}$ and $\bm{N} \in \mathbb{C}{^{M \times J}}$, respectively, (\ref{formulation:1}) is arranged in matrix form as
\begin{equation}\label{formulation:2}
\bm{X} = \mathop \sum \limits_{k = 1}^K {\beta _k}\left[ {{\bm{a}_1}\left( {{f_k}} \right), \ldots ,{\bm{a}_J}\left( {{f_k}} \right)} \right]diag\left( {{\bm{c}_k}} \right) + \bm{N},
\end{equation}
where we assume $\|\bm{N}\|_F^2 \le {\sigma ^2}$,
$$
{\beta _k} = \left\| \left[ S_{k,1}, \ldots, S_{k,J} \right]^T \right\|_2 \in \mathbb{R}^{+},\qquad
{\bm{c}_k} = \frac{1}{\beta_k}\left[ S_{k,1}, \ldots, S_{k,J} \right]^T \in \mathbb{C}^{J \times 1},
$$
and the $j$th steering vector for a general spatial DOA frequency of $f$ is given by
\begin{equation}\label{formulation:3}
{\bm{a}_j}\left( f \right) = \left[ {\begin{array}{*{20}{c}}
{exp\left( { - i2\pi {\alpha _j}{r_1}f\;} \right)}\\
{\begin{array}{*{20}{c}}
{exp\left( { - i2\pi {\alpha _j}{r_2}f\;} \right)}\\
\vdots
\end{array}}\\
{exp\left( { - i2\pi {\alpha _j}{r_M}f\;} \right)}
\end{array}} \right] \in \mathbb{C}{^{M \times 1}}.
\end{equation}
To present (\ref{formulation:2}) by a model in order to incorporate in the gridless sparse problem, we define a general steering vector using $\bm{a}_j(f)$'s by defining the set,
$${\cal R} = \left\{ {{\alpha _j}{r_m} \in \mathbb{R}{^{+}}\;:\;\;j = 1, \ldots ,J,\;\;m = 1, \ldots ,M} \right\},$$
and the vector ${\left[ {{{\tilde r}_1}, \ldots ,\;{{\tilde r}_{\tilde M}}} \right]^T}$,
whose elements
$ {\tilde r_{\tilde m}}={\alpha _j}{r_m}, \tilde m = 1, \ldots ,\;\tilde M$ ($M \le \tilde M \le J \times M$) are the members of ${\cal R}$ sorted in ascending order after removing duplicate elements. Then, the general steering vector is defined as
\begin{equation}\label{formulation:4}
\bm{a}\left( f \right) = \left[ {\begin{array}{*{20}{c}}
{exp\left( { - i2\pi {{\tilde r}_1}f\;} \right)}\\
{\begin{array}{*{20}{c}}
{exp\left( { - i2\pi {{\tilde r}_2}f\;} \right)}\\
\vdots
\end{array}}\\
{exp\left( { - i2\pi {{\tilde r}_{\tilde M}}f\;} \right)}
\end{array}} \right] \in {\mathbb{C}^{\tilde M \times 1}},
\end{equation}
which corresponds to the steering vector of an arbitrary linear array with $\tilde M$ sensors spaced with the distances ${\tilde r_{\tilde m}}$ from each other. It can be seen that each steering vector $\bm{a}_j(f)$ is a sub-vector with $M$ elements of $\bm{a}(f)$.
In order to represent (\ref{formulation:2}) based on $\bm{a}(f)$, we define the map $\chi:{\mathbb{C}^{\tilde M \times J}} \to {\mathbb{C}^{M \times J}}$, which relates the matrix $\left[ {{\bm{a}_1}\left( f \right), \ldots ,{\bm{a}_J}\left( f \right)} \right] \in {\mathbb{C}^{M \times J}}$ appearing in (\ref{formulation:2}) to $\bm{a}(f)$ in (\ref{formulation:4}). For any matrix $\bm{Z} \in {{\mathbb{C}^{\tilde M \times J}}}$, this map yields
\begin{equation}\label{formulation:5}
(\chi {\left( \bm{Z} \right))_{m,j}} = {\bm{Z}_{\tilde m,j}}\,,\qquad \text{whenever}\quad {\alpha _j}{r_m} = {\tilde r_{\tilde m}}.
\end{equation}
Thus, we get
\begin{equation}\label{formulation:6000}
\left[ {{\bm{a}_1}\left( f \right), \ldots ,{\bm{a}_J}\left( f \right)} \right] = \chi \left( {\bm{a}\left( f \right){{\left( {{\bm{1}_{J \times 1}}} \right)}^T}} \right),
\end{equation}
where ${\bm{1}_{J \times 1}}$ is an all-one vector. Now, using $\chi$ in (\ref{formulation:5}) and (\ref{formulation:6000}), we can represent (\ref{formulation:2}) as
\begin{align}\label{formulation:6}
& \bm{X} = \mathop \sum \limits_{k = 1}^K {\beta _k}\chi \left( {\bm{a}\left( {{f_k}} \right){{\left( {{\bm{1}_{J \times 1}}} \right)}^T}} \right)diag\left( {{\bm{c}_k}} \right) + \bm{N}\\
& = \chi \left( {\mathop \sum \limits_{k = 1}^K {\beta _k}\bm{a}\left( {{f_k}} \right){{\left( {{\bm{1}_{J \times 1}}} \right)}^T}diag\left( {{\bm{c}_k}} \right)} \right) +\bm{N}\nonumber\\
& = \chi \left( {\mathop \sum \limits_{k = 1}^K {\beta _k}\bm{a}\left( {{f_k}} \right)\bm{c}_k^T} \right) + \bm{N}.\nonumber
\end{align}
In other words, we can present the model,
\begin{equation}\label{formulation:7}
\bm{X} = \chi \left( \bm{Z} \right) + \bm{N},\;\;\; \bm{Z} = \mathop \sum \limits_{k = 1}^K {\beta _k}\bm{a}\left( {{f_k}} \right)\bm{c}_k^T,
\end{equation}
to propose a gridless sparse method for wideband DOA estimation.
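For concreteness, the construction of the set ${\cal R}$, the general steering vector (\ref{formulation:4}), and the selection map $\chi$ in (\ref{formulation:5}) can be sketched in a few lines of Python (NumPy); the sensor positions and bin wavelengths below are placeholder inputs.
\begin{verbatim}
import numpy as np

def build_model(r, gamma, decimals=12):
    """r: (M,) sensor distances (r[0] = 0); gamma: (J,) bin wavelengths.
    Returns the sorted, duplicate-free delay vector (the tilde-r's) and
    an index table realizing the selection map chi."""
    alpha = 2.0 / np.asarray(gamma)                 # alpha_j = 2/gamma_j
    prods = np.round(np.outer(alpha, r), decimals)  # alpha_j * r_m
    R = np.unique(prods)                            # members of the set R
    index = np.searchsorted(R, prods)               # (J, M) positions in R
    return R, index

def a_general(f, R):
    """General steering vector at spatial DOA frequency f."""
    return np.exp(-2j * np.pi * R * f)

def chi(Z, index):
    """Selection map: (M_tilde x J) -> (M x J)."""
    J, M = index.shape
    return np.stack([Z[index[j], j] for j in range(J)], axis=1)

rng = np.random.default_rng(0)
r = np.concatenate([[0.0], np.sort(rng.uniform(0.1, 12.0, 7))])  # M = 8
gamma = 1500.0 / np.linspace(416.5, 583.5, 10)                   # J = 10
R, index = build_model(r, gamma)
X0 = chi(np.outer(a_general(0.2, R), np.ones(10)), index)  # K = 1, f = 0.2
\end{verbatim}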
\section{Super-Resolution Wideband DOA Estimator}\label{sec:proposed1}
In the following, we elaborate on the proposed SRW-DOA estimator. According to the structure of $\bm{Z}$ in (\ref{formulation:7}), we state that the building blocks of this matrix are the members of the following set of atoms,
\begin{equation}
\begin{array}{l}
{\cal A} = \{{\bm{A}}\left( {f,{\bm{c}}} \right) = {{\bm{a}}\left( f \right)}{\bm{c}^T}\in{\mathbb{C}^{\tilde M \times J}}\;\; {\rm{|}}\;\\
\;\;\;\;\;\;\;\;\;\;f \in \left[ { - \frac{1}{2},\frac{1}{2}} \right],{\bm{c}} \in\mathbb{C}^{J\times 1} ,\;{\left\| {\bm{c}} \right\|_2} = 1\}\nonumber.
\end{array}
\end{equation}
The atomic norm of $\bm{Z}$ is the possible minimum number of the atoms of ${\cal A}$, by which $\bm{Z}$ can be constructed. The atomic norm is defined according to \cite{21} and \cite{26} as
\begin{align}\label{formulation:8}
&{\|\bm{Z}\|_{\cal A}} = {\rm{inf}}\{ t > 0:\bm{Z} \in t\;conv\left( {\cal A} \right)\} \\
& = \mathop {\inf }\limits_{{f_k},{\beta _k},{\bm{c}_k}} \left\{ {\mathop \sum \nolimits_k {\beta _k}\;:\bm{Z} = \mathop \sum \nolimits_k {\beta _k}\bm{A}\left( {{f_k},{\bm{c}_k}} \right),\;\;{\beta _k} > 0\;} \right\}\nonumber.
\end{align}
Since $K$ is finite, the number of constructing atoms of $\bm{Z}$ will be finite, as well. As a result, to recover $\bm{Z}$, we propose the following Atomic Norm Minimization (ANM) problem,
\begin{align}\label{formulation:9}
&\mathop {\min }\limits_{\bm{Z} \in {\mathbb{C}^{\tilde M \times J}}} \;{\|\bm{Z}\|_{\cal A}}\\
&s.t. \;\;\; {\|\bm{X} - \chi\left( \bm{Z} \right)\|}_F \le \sigma \nonumber.
\end{align}
Noting that $\bm{a}\left( f \right)$ shows the steering vector of an arbitrary linear array with $\tilde M$ sensors with arbitrary distances ${\tilde r_{\tilde m}}$ , (\ref{formulation:9}) can be considered an ANM problem with arbitrary sampling, which has been solved in \cite{25} by using PSWFs as an SDP.
Considering ${\mathbb{L}^2}$ as the set of all square-integrable functions on $\left[ { - 1,1} \right]$ and $c = \pi {\tilde r_{\tilde M}}$, for any $r \in {\mathbb{L}^2}$, PSWFs are defined as the eigenfunctions of the linear map $\xi :{\mathbb{L}^2} \to {\mathbb{L}^2}$ given by
\begin{equation}\label{formulation:pswf-def}
(\xi r)(\tau)=\int^1_{-1}e^{ic\zeta \tau}r(\zeta)d\zeta, \:\:\:\:\:\:\:\forall \tau\in [-1,1].
\end{equation}
Therefore, the PSWF ${\varphi _l}$ should satisfy $\xi {\varphi _l} = {\lambda _l}{\varphi _l}$, where ${\lambda _l}$ is the $l$th eigenvalue of $\xi$. According to \cite{1000}, the magnitudes of the eigenvalues decay rapidly to zero for indices larger than $\frac{{2c}}{\pi }$, so we can limit the number of required PSWFs to $d = \min \left\{ \frac{l}{2}\;:\;\left| {{\lambda _l}} \right| < \epsilon \right\}$, where $\epsilon$ defines the desired precision \cite{27}. Then, using these PSWFs, we can represent (\ref{formulation:9}) as the following SDP \cite{25}:
\begin{align}\label{formulation:10}
\underset{\underset{v_1,\dots,v_d\in \mathbb{C},v_0\in\mathbb{R}}{\boldsymbol W\in \mathbb{R}^{J\times J},\bm{Z} \in \mathbb{C}^{\tilde M\times J}}} {min} & \:\:\:\:\frac{1}{2}(trace(\boldsymbol W)+ \bm{Q}_{1,1}) \\
s.t. \:\:\:\:\:\:\:\:& \left[
\begin{array}{cc}
\boldsymbol W & \boldsymbol Z^H \\
\boldsymbol Z & \boldsymbol Q \\
\end{array}
\right]\succeq0,\nonumber\\
& {\|\bm{X} - \chi\left( \bm{Z} \right)\|}_F \le \sigma, \nonumber\\
& \boldsymbol Q_{q,p}=\boldsymbol h_{qp}^T\boldsymbol{\Phi}^{-1}[v_d^H,\dots,v_1^H,v_0,v_1,\dots,v_d]^T, \nonumber\\
& \boldsymbol T:= toep([v_0,v_1,\dots,v_d]), \nonumber\\
&\Psi(\boldsymbol T):=\tan^2(\frac{{c}}{2 d})(\boldsymbol J_1+\boldsymbol J_2)\boldsymbol T(\boldsymbol J_1+\boldsymbol J_2)^H \nonumber \\
& -(\boldsymbol J_1-\boldsymbol J_2)\boldsymbol T(\boldsymbol J_1-\boldsymbol J_2)^H\succeq0,\nonumber
\end{align}
where ${\bf{\Phi }} \in {\mathbb{R}^{\left( {2d + 1} \right) \times \left( {2d + 1} \right)}}$ and $\bm{h}_{qp} \in {\mathbb{R}^{\left( {2d + 1} \right)}}$ are defined as
\begin{align}
& {{\bf{\Phi }}_{q,l}} = {\varphi _{l - 1}}\left( {\frac{{q - d - 1}}{d}} \right), \nonumber \\
& {\bm{h}_{qp}}\left( l \right) = {\varphi _{l - 1}}\left( {\frac{{{{\tilde r}_q} - {{\tilde r}_p}}}{{{{\tilde r}_{\tilde M}}}}} \right),\nonumber
\end{align}
and ${\bm{J}_1} = \left[ {{\bm{I}_d},{\bm{0}_{d \times 1}}} \right]$, ${\bm{J}_2} = \left[ {{\bm{0}_{d \times 1}},{\bm{I}_d}} \right]$, ${\bm{I}_d}$ is a $d \times d$ identity matrix, and ${\bm{0}_{d \times 1}}$ is an all-zero vector.
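Before assembling (\ref{formulation:10}), one needs samples of the PSWFs for ${\bf \Phi }$ and ${\bm{h}_{qp}}$, together with the eigenvalues ${\lambda _l}$ that fix $d$. A minimal way to approximate both, shown below for illustration only (it is not the exact procedure of \cite{25}), is to discretize the operator $\xi$ in (\ref{formulation:pswf-def}) on a Gauss-Legendre grid.
\begin{verbatim}
import numpy as np

def pswf_quadrature(c, n_grid=400, eps=1e-8):
    """Discretize (xi r)(tau) = int_{-1}^{1} exp(i c zeta tau) r(zeta) dzeta.
    Eigenvectors sample the PSWFs phi_l at the nodes; eigenvalues approximate
    lambda_l and give the truncation order d = min{l/2 : |lambda_l| < eps}."""
    nodes, weights = np.polynomial.legendre.leggauss(n_grid)
    K = np.exp(1j * c * np.outer(nodes, nodes)) * weights
    lam, vecs = np.linalg.eig(K)
    order = np.argsort(-np.abs(lam))
    lam, vecs = lam[order], vecs[:, order]
    d = int(np.argmax(np.abs(lam) < eps)) // 2
    return lam, vecs, nodes, d

lam, phi, tau, d = pswf_quadrature(c=np.pi * 8.0)
print("truncation order d =", d)   # |lambda_l| decays sharply past l = 2c/pi
\end{verbatim}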
By solving the SDP problem in (\ref{formulation:10}), we obtain the Toeplitz matrix $\bm{T}$, which is singular with rank $K$ \cite{25}. Therefore, using Prony's method \cite{28}, we can recover the spatial DOA frequencies ${f_k}$ and, accordingly, the DOAs ${\theta _k}$.
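For illustration, this last step can be sketched as follows: the code below recovers the normalized frequencies encoded in a rank-$K$ Hermitian Toeplitz matrix from the roots of a null-space polynomial, while the mapping of these angles back to the ${f_k}$'s and ${\theta _k}$'s follows the parameterization of \cite{25} and is omitted here.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz, eigh

def prony_from_toeplitz(v, K):
    """v = [v_0, ..., v_d]: first column of a Hermitian Toeplitz matrix of
    rank K. Roots of a null-space polynomial give the encoded frequencies."""
    T = toeplitz(v)              # scipy fills the first row with conj(v)
    evals, U = eigh(T)           # eigenvalues in ascending order
    roots = np.roots(U[:, 0][::-1])
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:K]
    return np.sort(np.angle(roots) / (2.0 * np.pi))

# self-check with two exponentials and d = 10
f_true = np.array([-0.27, 0.11])
l = np.arange(11)
v = sum(np.exp(-2j * np.pi * f * l) for f in f_true)
print(prony_from_toeplitz(v, K=2))   # approximately [-0.27, 0.11]
\end{verbatim}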
\section{Numerical simulations}\label{sec:simulations}
We compare the performance of the proposed SRW-DOA estimator with existing wideband DOA estimation methods, namely RSS \cite{11}, WS-TOPS \cite{7}, and SBL \cite{17}, in different experiments. We consider an underwater environment with a wave propagation speed of $1500 m/s$. Each source emits a random wideband signal with a bandwidth of $167 Hz$ and a center frequency of $500 Hz$. The received signal at each sensor is sampled at the rate of $2000 Hz$ with $512$ samples. An arbitrary linear array consisting of $M=8$ sensors is considered, where the ${r_m}$'s are drawn randomly from a uniform distribution on the interval $\left( {0,M\frac{{{\gamma _c}}}{2}} \right)$.
The distances of the first and last sensors from the reference sensor are ${r_1} = 0$ and ${r_M} = M\frac{{{\gamma _c}}}{2}$, respectively, where ${\gamma _c}$ denotes the wavelength at the center frequency.
The measuring noise in (\ref{formulation:1}) is zero-mean Gaussian with variance ${\sigma ^2}$ and the SNR is defined as $\frac{{\|\bm{X} - \bm{N}\|_F^2}}{{{\sigma ^2}}}$. The initial DOA estimates for the RSS are selected as the true DOAs perturbed by a uniformly distributed random error within $ \pm {2^\circ }$. The initial grid for the SBL is defined as a ${1^\circ }$ discretized uniform grid. The number of sources $K$ is assumed to be known. Accordingly, in the RSS and WS-TOPS, the $K$ largest peaks of the spatial spectrum are selected as the DOA estimates, while in the SBL and SRW-DOA, the $K$ largest components are taken as the DOAs. Simulation results are obtained by averaging $100$ independent trials of each experiment. The Root Mean-Square Error (RMSE) is defined as {\tiny $ RMSE = \frac{1}{{100}}\mathop \sum \limits_{i = 1}^{100} \left({\frac{1}{K}\mathop \sum \limits_{k = 1}^K {{\left( {{\theta _k} - {{\hat \theta }_{k,i}}} \right)}^2}}\right)^{1/2}$}, where ${\hat \theta _{k,i}}$ is the estimated DOA of the $k$th source at the $i$th trial.
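For reproducibility, the synthetic snapshots used in these experiments can be generated directly from (\ref{formulation:1}); the snippet below is a sketch of the stated setup, where the random seed, the bin placement, and the scaling of the noise to the target SNR are our own illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
c_sound, f_c, bw = 1500.0, 500.0, 167.0
M, K, J, snr_db = 8, 3, 10, 10.0
theta = np.deg2rad([-5.0, 15.0, 40.0])

gamma_c = c_sound / f_c
r = np.sort(rng.uniform(0.0, M * gamma_c / 2.0, M))
r[0], r[-1] = 0.0, M * gamma_c / 2.0          # reference and last sensor
gamma = c_sound / np.linspace(f_c - bw / 2.0, f_c + bw / 2.0, J)

S = rng.standard_normal((K, J)) + 1j * rng.standard_normal((K, J))
X = np.zeros((M, J), dtype=complex)
for j in range(J):                            # model (1), bin by bin
    A_j = np.exp(-2j * np.pi * np.outer(r, np.sin(theta)) / gamma[j])
    X[:, j] = A_j @ S[:, j]

sigma2 = np.linalg.norm(X, "fro") ** 2 / 10.0 ** (snr_db / 10.0)
N = np.sqrt(sigma2 / (2.0 * M * J)) * (rng.standard_normal(X.shape)
                                       + 1j * rng.standard_normal(X.shape))
X_noisy = X + N
\end{verbatim}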
In the first experiment, we consider $K=3$ sources with DOAs ${\theta _1} = - {5^\circ }$, ${\theta _2} = {15^\circ }$, and ${\theta _3} = {40^\circ }$. The number of frequency bins is $J=10$. A DOA estimate is counted as successful if it deviates from its true value by less than ${5^\circ }$.
The probability of successful estimation is shown in Fig. \ref{figure1} for different SNRs. As seen, both RSS and SRW-DOA considerably outperform the WS-TOPS and SBL.
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{fig1_snr_pr.pdf}
\caption{Probability of successful estimation v.s. SNR for K=3, ${\theta _1} = - {5^\circ }$, ${\theta _2} = {15^\circ }$, and ${\theta _3} = {40^\circ }$.}\label{figure1}
\end{figure}
Also, the respective RMSEs in Fig. \ref{figure2} show that the SRW-DOA generates lower errors.
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{fig2_snr.pdf}
\caption{RMSEs v.s. SNR for K=3, ${\theta _1} = - {5^\circ }$, ${\theta _2} = {15^\circ }$, and ${\theta _3} = {40^\circ }$.}\label{figure2}
\end{figure}
In the second experiment, we study the ability to distinguish two near DOAs. For this purpose, two sources are considered at ${\theta _1} = {10^\circ }$ and ${\theta _2} = {10^\circ } + \Delta\theta$, where $\Delta\theta$ increases from ${3^\circ }$ to ${20^\circ }$ and $SNR=10 dB$. The probability of successful estimation and RMSEs are presented in Figs. \ref{figure3} and \ref{figure4}, respectively.
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{fig3_teta_pr.pdf}
\caption{Probability of successful estimation v.s. $\Delta\theta$ at $SNR=10 dB$.}\label{figure3}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{fig4_teta.pdf}
\caption{RMSEs v.s. $\Delta\theta$ at $SNR=10 dB$.}\label{figure4}
\end{figure}
From both figures, one can see that the proposed SRW-DOA significantly outperforms the other methods, even for small values of $\Delta\theta$.
In the last experiment, the effect of the number of frequency bins $J$ on the DOA estimates is investigated for $K=3$ sources at ${\theta _1} = - {5^\circ }$, ${\theta _2} = {15^\circ }$, and ${\theta _3} = {40^\circ }$ and $SNR=10 dB$. The results are shown in Figs. \ref{figure5} and \ref{figure6}, respectively.
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{fig5_j_pr.pdf}
\caption{Probability of successful estimation v.s. $J$ for K=3, $SNR=10 dB$, and ${\theta _1} = - {5^\circ }$, ${\theta _2} = {15^\circ }$, and ${\theta _3} = {40^\circ }$.} \label{figure5}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=75mm]{fig6_j.pdf}
\caption{RMSEs v.s. $J$ for K=3, $SNR=10 dB$, and ${\theta _1} = - {5^\circ }$, ${\theta _2} = {15^\circ }$, and ${\theta _3} = {40^\circ }$.}\label{figure6}
\end{figure}
From Fig. \ref{figure5}, by increasing the number of frequency bins, the probability of successful estimation increases. Moreover, the RSS and the SRW-DOA offer considerably higher success probabilities compared to WS-TOPS and SBL. Also, Fig. \ref{figure6} shows that the SRW-DOA generates the lowest RMSE for all values of $J$.
\section{Conclusion}\label{sec:coclusion}
We proposed a gridless sparse method for wideband DOA estimation with no need for a focusing matrix or initial estimates. This method can be applied to all arbitrary linear arrays and the corresponding optimization problem is convex. In comparison to RSS, WS-TOPS, and SBL algorithms, the proposed SRW-DOA method showed outstanding performance by generating lower RMSEs, higher probability of successful estimation, more accurate DOA estimates, and more robustness to noise. Moreover, it showed remarkable effectiveness for estimation of the DOAs of adjacent sources.
\bibliographystyle{ieeetr}
\section{Introduction}
The stability and reactivity of molecules can be influenced by the way they vibrate. Thermal and light-induced vibrational excitations can provide molecules with enough kinetic energy to overcome activation barriers along specific reaction coordinates. Molecular vibrations explain the mechanism of important natural phenomena such as enzyme-catalyzed hydrogen transfer in biological systems \cite{Hay_2012}. They also affect the stability of atmospheric compounds that are subject to sunlight-driven vibrational excitation \cite{Vaida_2014} and provide a means to control the outcome of chemical reactions by selectively exciting vibrational modes that contribute to a desired reaction coordinate \cite{Crim_2008, Chen_2018, Crim_1999, Heyne_2019}. This is important in chemical reactions that are triggered by an abrupt change in the electronic state of a molecule. In such vibronic transitions, the change in the electronic state is usually accompanied by vibrational excitations that can initiate chemical reactions at the new electronic state \cite{Crim_1996}. The ability to engineer the vibrational state of a molecule during a vibronic transition can in principle be used to selectively break or form chemical bonds and to control the outcome of chemical reactions \cite{Crim_1996, Epshtein_2011, Grygoryeva_2019}.
A vibronic transition can be mediated by the absorption of light that excites molecules to higher-energy electronic states. The energy provided by the absorbed photons partially transfers into vibrations that can help to overcome reaction barriers of predissociative electronic states \cite{Crim_1996}. A similar mechanism has also been reported for the reactions of molecules on metal surfaces where vibrational excitations are induced by electron transfer from a scanning tunnelling microscopy tip to the adsorbed molecule \cite{Motobayashi_2014}. This charge-transfer process corresponds to a vibronic transition between two different charge states of the molecule. The vibrational excitations initiated by such transitions help to break covalent bonds in the molecule \cite{Maksymovych_2008, Chen_2019, Erpenbeck_2018, Jeong_2017, Stipe_1997} and they affect the adsorption of the molecule at the surface \cite{Pascual_2003, Sainoo_2005}. Vibrational excitation in the sudden-force regime of a mechanochemical process, which can also be considered as a vibronic transition to a force-modified potential energy surface, has been shown to provide excess energy that helps with crossing barriers along a mechanochemical reaction \cite{Rybkin_2017}. Analyses based on such vibrational excitations have been used to explain the mechanism of chemical reactions in different mechanochemical settings \cite{Rybkin_2017}.
In all of the processes mentioned, the change in the molecular electronic state is accompanied by vibrational excitations that have important effects on the reactivity of a molecule. However, predicting the probabilities of excitation to all vibrational levels is challenging for transitions that involve simultaneous changes in the vibrational and electronic states of molecules \cite{Jacob_2019}. Furthermore, the time-dependent redistribution of the vibrational energy between the localized modes of a molecule that undergoes a vibronic transition also affects the stability of specific bonds \cite{Sparrow_2018}. Simulation of such vibrational quantum dynamics can also be challenging for conventional quantum chemistry methods.
Gaussian Boson Sampling (GBS) \cite{Hamilton_2017} is a platform for photonic quantum computation that has a variety of use cases~\cite{Bradler_2018, Arrazola_2018, Banchi_2019, Jahangiri_2020, Schuld_2020, Bromley_2020}, including the simulation of vibronic spectra of molecules \cite{huh2015boson, quesada2019franck, sawaya2019quantum}. When a GBS device is programmed with the appropriate molecular parameters, the distribution of photons in the optical modes of the device can be used to obtain the distribution of vibrational quanta in the molecule during a vibronic transition. This information can also be obtained from classical algorithms, but their computational complexity increases rapidly with molecular size, rendering such classical methods inefficient for large molecules \cite{Jacob_2019}. This makes GBS a candidate for efficient simulation of vibrational excitation and vibrational quantum dynamics of molecules undergoing vibronic transitions. Simulation of such excitations allows optimizing a vibronic process such that specific target modes become populated. The ability to control the final vibrational states during a vibronic process essentially makes it possible to design chemical reactions by selectively activating modes that become reaction coordinates in a desired reaction channel.
In this work, we introduce a quantum algorithm based on GBS for simulating the excitation of vibrational modes during vibronic transitions. We also introduce quantum-inspired classical algorithms that can be employed in special cases where excitations in only a few modes are important. We use these algorithms to explore the impact of selective excitations on the dissociation of covalent bonds in pyrrole and butane during vibronic transitions. In the case of pyrrole, the transition is initiated by photoexcitation; for butane, it occurs due to a sudden-force mechanochemical excitation. We furthermore discuss the results of the algorithms with respect to the corresponding experimental observations and classical simulations. Finally, we outline procedures for selective excitation of vibrational modes by optimizing external factors that mediate a vibronic process or affect the vibrational quanta distribution during such transitions.
In Sec.~\ref{sec:theory}, we discuss the theory of vibronic transitions and Gaussian Boson Sampling. We then describe the quantum algorithm in detail and explain how quantum-inspired classical algorithms can be used when only a small number of modes are of interest. In Sec.~\ref{sec:apps}, we explore applications of the algorithm in the photoexcitation of pyrrole and in force-induced vibrational excitations of butane. Finally, we summarize and discuss the main results of the article in Sec.~\ref{sec:conc}.
\section{Theory and Algorithms}\label{sec:theory}
\subsection{Vibronic transitions}
A vibronic transition involves simultaneous changes in the electronic and vibrational states of a molecule. The Franck-Condon (FC) approximation \cite{sharp1964franck, Barone_2009} states that the probability of a vibronic transition is determined by FC factors, which are proportional to the square of the overlap between the vibrational wave functions of the initial and final states. The FC factors can also be written in terms of the Doktorov operator \cite{Doktorov_1977} $\hat{U}_{\text{Dok}}$ as \cite{huh2015boson}:
\begin{equation}
F(\bm{m},\bm{n}) = \left | \bra{\bm{m}}\hat{U}_{\text{Dok}}\ket{\bm{n}} \right | ^ 2.
\end{equation}
Here $\ket{\bm{n}}=\ket{n_1, n_2,\ldots, n_M}$ is the state denoting the number $n_i$ of vibrational quanta in the $i^\text{th}$ normal mode of the ground electronic state and $\ket{\bm{m}}=\ket{m_1, m_2,\ldots, m_M}$ is the corresponding one for the excited state. The key point of this formula is that the final vibronic state of the $M$ modes can be written in terms of the initial state and the Doktorov operator. The FC factor then determines the probability of observing an excitation to the state $\ket{\bm{m}}$. In this framework, the relation between the normal coordinates of the initial and final states and the corresponding transformation of the bosonic operators for the vibronic transition are contained in $\hat{U}_{\text{Dok}}$. The states $\ket{\bm{n}}$ and $\ket{\bm{m}}$ only contain information about the number of vibrational quanta in the initial and final states. For any given molecule, $\hat{U}_{\text{Dok}}$ can be obtained from the normal modes and vibrational frequencies of the initial and final states.
\subsection{Gaussian Boson Sampling}
Gaussian Boson Sampling (GBS) is a platform for photonic quantum computation where a Gaussian state is prepared and subsequently measured in the photon-number basis. Gaussian states are defined as states whose Wigner function is a Gaussian distribution. Similarly, Gaussian transformations are those that map Gaussian states to other Gaussian states. Importantly, Gaussian states have the property that they can be uniquely described in terms of a $2M\times 2M$ covariance matrix $V$ and a complex vector of means $\bm{\alpha}$~\cite{picinbono1996second}.
It was first shown in Ref.~\cite{huh2015boson} that GBS can be used to compute Franck-Condon profiles. The main ingredient of the algorithm is to exploit the equivalence between photons in the optical modes of a GBS device, and vibrational quanta in the normal modes of a molecule. Vibronic transitions can be simulated with GBS because the Doktorov transformation is Gaussian: from a starting Gaussian state, the final state after the transition will also be Gaussian. Therefore, it can be sampled with a GBS device. This makes it possible to encode the chemical information characterizing a vibronic transition into a GBS distribution, then sample from it to determine the statistics of the resulting vibrational excitations.
In a GBS setting, the probability $\Pr(\bm{m})$ of observing an output state $\ket{\bm{m}}=\ket{m_1, m_2, \ldots, m_M}$ is given by \cite{bjorklund2018faster,quesada2019franck,quesada2019simulating}:
\begin{align}\label{Eq: lhaf}
\Pr(\bm{m}) = \frac{\exp\left(-\tfrac{1}{2} \bm{\alpha}'^\dagger Q^{-1} \bm{\alpha}' \right)}{m_1!m_2!\cdots m_M!} \frac{\text{lhaf}(\mathcal{A}'_{\bm{m}})}{\sqrt{\text{det}(Q)}},
\end{align}
where $\text{lhaf}(\cdot)$ is the \emph{loop hafnian} introduced in Ref.~\cite{bjorklund2018faster}. Moreover, we define $Q:=V +\id/2$, $\bm{\alpha}' := (\bm{\alpha},\bm{\alpha}^*)^T$ and the matrix $\mathcal{A}'$ with components
\begin{equation}
\mathcal{A}'_{ij} := \begin{cases}
\mathcal{A}_{ij} &\text{ if } i\neq j,\\
\gamma_{i} &\text{ if } i=j,
\end{cases}
\end{equation}
where $\mathcal{A} := X \left(\id- Q^{-1}\right)$, $X := \left[\begin{smallmatrix}
0 & \id \\
\id & 0
\end{smallmatrix} \right]$, and $\gamma_{i}$ is the $i^\text{th}$ entry of $\gamma := Q^{-1} \bm{\alpha}'$. The submatrix $\mathcal{A}'_{\bm{m}}$ is constructed as follows: if $m_i=0$, the rows and columns $i$ and $i+M$ are deleted from $\mathcal{A}'$, and if $m_i>0$, these rows and columns are repeated $m_i$ times.
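For illustration, the construction of $\mathcal{A}'_{\bm{m}}$ can be written in a few lines of NumPy; evaluating the loop hafnian itself is best delegated to a specialized library (for example, The Walrus).
\begin{verbatim}
import numpy as np

def reduce_matrix(A_prime, m):
    """A_prime: (2M x 2M) matrix with gamma on its diagonal; m: (M,) photon
    pattern. Index pairs (i, i+M) with m_i = 0 are dropped and the others
    repeated m_i times, as required by the probability formula."""
    m = np.asarray(m)
    M = len(m)
    keep = np.concatenate([np.repeat(np.arange(M), m),
                           np.repeat(np.arange(M, 2 * M), m)])
    return A_prime[np.ix_(keep, keep)]

# example: pattern (2, 0, 1) on M = 3 modes yields a 6 x 6 submatrix
A_prime = np.arange(36.0).reshape(6, 6)
print(reduce_matrix(A_prime, [2, 0, 1]).shape)   # -> (6, 6)
\end{verbatim}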
The complexity of the best-known classical algorithms for sampling from the GBS distribution of Eq.~\eqref{Eq: lhaf} scale exponentially with the total number of photons and polynomially with the number of modes~\cite{quesada2020exact}. For large systems, this makes classical simulation intractable and a quantum device is needed.
\subsection{Programming a GBS device with molecular data}\label{sec:prog}
\begin{figure*}
\includegraphics[width=1.5 \columnwidth]{main_figure.pdf}
\centering
\caption{Schematic representation of the Gaussian Boson Sampling (GBS) algorithm for vibrational excitations. (A) A vibronic transition can be represented in terms of a Doktorov transformation $\hat{U}_{\text{Dok}}$, which can be determined from the normal-mode frequencies $\Omega, \Omega'$ and normal coordinates $\bm{q}, \bm{q'}$. (B) The Doktorov operator can be decomposed in terms of displacement $\hat{D}(\bm{\beta})$, squeezing $\hat{S}(\Sigma)$, and rotation $\hat{R}(U_L)$, $\hat{R}(U_R)$ operations. These can be implemented in a GBS device to prepare the final state after the transition. A time-dependent transformation $U(t)$ can also be implemented to simulate the vibrational quantum dynamics. The excitations are sampled by measurements in the photon-number basis.}\label{fig:algorithm}
\end{figure*}
Programming a GBS device for a given molecule requires determining the Doktorov operator for that molecule and mapping its parameters to the GBS device. The Doktorov operator is decomposed in terms of multi-mode displacement $\hat{D}(\beta)$, squeezing
$\hat{S}(\Sigma)$, and generalized rotation $\hat{R}(U_L)$, $\hat{R}(U_R)$ operators corresponding to quantum optical interferometers, as~\cite{quesada2019franck}:
\begin{equation}\label{Eq: Dok_Gaussian}
\hat{U}_{\text{Dok}} = \hat{D}(\bm{\beta})\hat{R}(U_L) \hat{S}(\Sigma) \hat{R}(U_R),
\end{equation}
where $U_L$ and $U_R$ are unitary matrices, $\Sigma$ is a diagonal matrix, and $\bm{\beta}$ is a complex vector. These parameters are obtained from the normal modes and frequencies of the initial and final states. The normal coordinates of the initial and final states, $\bm{q}$ and $\bm{q}'$, are related to each other via the Duschinsky transformation \cite{Duschinsky_1937}:
\begin{equation}
\bm{q'} = U_D\bm{q} + \bm{d},
\end{equation}
where $U_D$ is the Duschinsky matrix that determines the overlap between the normal coordinates and $\bm{d}$ is a real vector that describes the change in the molecular geometries of the initial and final states. The Duschinsky matrix is obtained from the eigenvectors of the initial and final state Hessian matrices, $\bm{L}_i$ and $\bm{L}_f$, respectively \cite{Reimers_2001}:
\begin{equation}
U_D = (\bm{L}_f)^T \bm{L}_i.
\end{equation}
The displacement vector $\bm{d}$ is related to the Cartesian geometry vectors of the initial and final states, $\bm{x}_i$ and $\bm{x}_f$, as \cite{Reimers_2001}:
\begin{equation}
\bm{d} = (\bm{L}_f)^T m^{1/2} (\bm{x}_i - \bm{x}_f),
\end{equation}
where $m$ is a diagonal matrix containing atomic masses. The matrices $U_L$, $U_R$, and $\Sigma$ are obtained from the singular value decomposition $J = U_L\Sigma U_R$ of the matrix $J:=\Omega' U_D\Omega^{-1}$. The diagonal matrices $\Omega$ and $\Omega'$ are respectively obtained from the ground and excited state frequencies:
\begin{align}
\Omega &= \text{diag} (\sqrt{\omega_1},...,\sqrt{\omega_k}),\\
\Omega' &= \text{diag} (\sqrt{\omega_1'},...,\sqrt{\omega_k'}).
\end{align}
Finally, the displacement vector $\bm{\beta}$ is given by $
\bm{\beta}=\hbar^{-1/2}\Omega'\bm{d}/\sqrt{2}$ where $\hbar$ is the reduced Planck constant.
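As a rough guide to implementation, these parameters can be assembled from standard electronic-structure output as follows. The NumPy sketch below is a minimal transcription of the formulas above; it assumes mass-weighted normal-mode matrices and consistent units, and it ignores practicalities such as mode reordering.
\begin{verbatim}
import numpy as np

def doktorov_params(Li, Lf, wi, wf, xi, xf, masses, hbar=1.0):
    """Li, Lf: (3N x M) initial/final normal-mode matrices; wi, wf: (M,)
    harmonic frequencies; xi, xf: (3N,) Cartesian geometries; masses: (3N,)
    atomic mass associated with each Cartesian coordinate."""
    Ud = Lf.T @ Li                                  # Duschinsky matrix
    d = Lf.T @ (np.sqrt(masses) * (xi - xf))        # displacement vector
    J = np.sqrt(wf)[:, None] * Ud / np.sqrt(wi)[None, :]  # Omega' Ud Omega^-1
    UL, s, UR = np.linalg.svd(J)                    # J = UL diag(s) UR
    beta = np.sqrt(wf) * d / np.sqrt(2.0 * hbar)    # displacement parameters
    return UL, s, UR, beta                          # s parameterizes S(Sigma)
\end{verbatim}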
Once $\hat{U}_{\text{Dok}}$ has been determined for a molecule, the quantum algorithm can be used to obtain the excitation of vibrational modes during a vibronic transition. Furthermore, as shown in Ref.~\cite{Sparrow_2018}, a photonic quantum device can also be configured to implement the unitary transformation
\begin{equation} \label{Eq: Ut}
U(t) = \hat{R}(U_l) e^{-i \hat{H} t/\hbar},
\end{equation}
to simulate the vibrational quantum dynamics of molecules. In Eq.~\eqref{Eq: Ut}, $U_l$ is a unitary matrix that converts the molecular normal modes to a set of spatially-localized vibrational modes, $\hat{R}(U_l)$ represents an interferometer configured with respect to $U_l$, $\hat{H}$ is the Hamiltonian corresponding to the harmonic normal modes, and $t$ is time.
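Since $\hat{H}$ corresponds to harmonic normal modes, $e^{-i \hat{H} t/\hbar}$ acts on the mode operators as the phase shifts $\hat{a}_j\to e^{-i\omega_j t}\hat{a}_j$, so in the matrix picture the transformation of Eq.~\eqref{Eq: Ut} is the product of $U_l$ with a diagonal phase matrix. The following sketch assumes $\hat{H}=\sum_j \hbar\omega_j \hat{a}_j^\dagger \hat{a}_j$ (global phases dropped):
\begin{verbatim}
import numpy as np

def dynamics_unitary(U_l, omega, t):
    """U_l: (M x M) localization unitary; omega: (M,) angular frequencies.
    Returns the M x M matrix implementing U(t) on the mode operators."""
    return np.asarray(U_l) @ np.diag(np.exp(-1j * np.asarray(omega) * t))
\end{verbatim}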
\subsection{Initial state preparation}\label{sec:init}
A vibronic process can be optimized by preparing initial states that have a higher probability to reach a desired final vibrational state. By using tunable incoming light, a specific initial state can be resonantly excited, as shown in the following. Importantly, the resulting state of the vibrational mode is well described by a Gaussian state, rendering the process amenable to the GBS algorithm.
We derive the evolution operator associated with a near-resonant interaction between light and a normal mode of a molecule in the harmonic approximation.
We start by writing the standard light-matter Hamiltonian in the dipole approximation
\begin{align}
\hat{H}_{\text{LM}} = \bm{\mu} \cdot \bm{E}(t),
\end{align}
where $\bm{\mu}$ is the dipole moment of the molecule that we split in terms of its electronic and nuclear parts as
\begin{align}\label{Eq: Mu}
\bm{\mu} = \bm{\mu}_e+\bm{\mu}_n = \bm{\mu}_e + \sum_i q_i \bm{x}_i.
\end{align}
In Eq.~\eqref{Eq: Mu}, $q_i$ is the charge of the $i^\text{th}$ nucleus and $\bm{x}_i$ is its position.
The electric field can be written as a classical amplitude oscillating at frequency $\omega_0$,
\begin{align}
\bm{E}(t) = \bm{E}_0 e^{-i \omega_0 t} +\bm{E}_0^* e^{i \omega_0 t}.
\end{align}
We assume that the frequency of the light is far away from any electronic resonance at this stage and thus ignore any contribution due to the electronic dipole moment $\bm{\mu}_e$. Furthermore, in the harmonic approximation, we can expand the position of any atom in terms of the centre of mass and the normal coordinates
\begin{align}
\bm{x}_i = c_i X_{\text{cm}}+\sum_{j} \bm{d}_{ij} \left(\hat{a}_j^\dagger + \hat{a}_j \right),
\end{align}
where $\bm{d}_{ij}$ are the expansion coefficients of the position of the $i^{\text{th}}$ atom in terms of the creation and destruction operators associated with the $j^{\text{th}}$ normal coordinate. We now also assume that the centre of mass, if confined, oscillates at a frequency far from $
\omega_0$, so its contribution can be neglected. We can now take the normal mode expansion in the last equation and write the light-matter Hamiltonian in the interaction picture (replacing $\hat{a}_j^\dagger \to \hat{a}_j^\dagger e^{i \omega_j t}$) to obtain
\begin{align}
H_{\text{LM}}^{I}(t) =& \sum_i q_i C_i \cdot \left( \bm{E}_0 e^{-i \omega_0 t} +\bm{E}_0^* e^{i \omega_0 t} \right),
\end{align}
where
\begin{align}
C_i = \sum_{j} \bm{d}_{ij} \left(\hat{a}_j^\dagger e^{i \omega_j t} + \hat{a}_j e^{-i \omega_j t} \right).
\end{align}
The time-evolution operator associated with this Hamiltonian is
\begin{align}
\mathcal{U}(t_0,t_1) = \mathcal{T}\exp\left( - \frac{i}{\hbar} \int_{t_0}^{t_1} dt H_{\text{LM}}^{I}(t) \right),
\end{align}
where $\mathcal{T}$ is the time-ordering operator. As shown in the Appendix, this operator can be expressed as
\begin{align}\label{Eq:time-evolution}
\mathcal{U}(t_0,t_1) = D_k\left(- \frac{i}{\hbar} \sum_i q_i \bm{d}_{ik}\cdot \bm{E}_0 \, (t_1-t_0) \right),
\end{align}
where $D_k(\beta_k) = \exp(\beta_k \hat{a}_k^\dagger -\text{H.c.})$ is a displacement in mode $k$ by an amount $\beta_k = - \frac{i}{\hbar} \sum_i q_i \bm{d}_{ik}\cdot \bm{E}_0 (t_1-t_0)$. When this operator is applied to a state with zero vibrational quanta, the result is a coherent state in mode $k$, which is a Gaussian state. In the Fock basis, a coherent state with parameter $\beta$ can be represented as:
\begin{equation}
\ket{\beta} = D(\beta)\ket{0} = e^{-\frac{|\beta|^2}{2}}\sum_{n=0}^{\infty} \frac{\beta^n}{\sqrt{n!}} \ket{n}.
\end{equation}
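The photon-number statistics of such a coherent state are Poissonian with mean $|\beta|^2$, which is straightforward to verify numerically (illustrative values only):
\begin{verbatim}
import numpy as np
from scipy.special import factorial

beta = 1.2 - 0.5j                 # illustrative displacement parameter
n = np.arange(12)
p = np.exp(-np.abs(beta)**2) * np.abs(beta)**(2 * n) / factorial(n)
print(p.sum(), p @ n)             # close to 1 and to |beta|^2 = 1.69
\end{verbatim}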
\begin{figure}
\includegraphics[width=0.95 \columnwidth]{marginals.pdf}
\centering
\caption{Algorithm for sampling from marginal distributions. The final state $\ket{\psi}$ of the vibrational modes is Gaussian, so it can be represented in terms of a covariance matrix $V$ and a vector of means $\bm{\alpha}$. In this example, the goal is to sample from the marginal distribution of modes 1 and 3 of a four-mode system. The reduced covariance matrix $V_k$ and reduced vector of means $\bm{\alpha}_k$ are obtained by keeping only the entries of columns and rows numbered 1, 3, (1+4)=5, and (3+4)=7. These can then be encoded into a two-mode GBS device to sample from the desired marginal distribution.}\label{fig:marginal}
\end{figure}
\subsection{Algorithm}
We now outline an algorithm for simulating molecular vibrational excitations during a vibronic transition. The algorithm is depicted in Fig.~\ref{fig:algorithm} and includes the following steps (a programmatic sketch follows the list):
\begin{enumerate}
\item Compute the GBS parameters $U_L$, $\Sigma$, $U_R$ and $\bm{\beta}$ from the input chemical parameters $\Omega$, $\Omega'$, $U_D$ and $\bm{d}$.
\item Use the GBS device to prepare the Gaussian state $\ket{\psi}=\hat{U}_{\text{Dok}}\ket{\psi_i}$ with covariance matrix $V$ and vector of means $\bm{\alpha}$, where $\ket{\psi_i}$ is an initial Gaussian state.
\item Implement the transformation $U(t)$ to simulate the vibrational quantum dynamics in the localized modes.
\item Measure the output state in the photon-number basis.
\item Repeat these steps sufficiently many times to obtain the desired statistics about the distribution of vibrational excitations.
\end{enumerate}
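A minimal sketch of steps 2 and 4 using the Strawberry Fields Gaussian simulator is shown below; gate signatures follow recent releases of the library and may differ across versions, \texttt{UL}, \texttt{Sigma}, \texttt{UR}, and \texttt{beta} are the parameters computed in step 1, and the squeezing parameters are taken as $\ln \sigma_i$ following Ref.~\cite{quesada2019franck}:
\begin{verbatim}
import numpy as np
import strawberryfields as sf
from strawberryfields.ops import Dgate, Interferometer, MeasureFock, Sgate

M = len(beta)                            # number of vibrational modes
prog = sf.Program(M)
with prog.context as q:
    # U_Dok = D(beta) R(U_L) S(Sigma) R(U_R), applied rightmost first
    Interferometer(UR) | q
    for i in range(M):
        Sgate(np.log(Sigma[i])) | q[i]   # squeezing parameters ln(sigma_i)
    Interferometer(UL) | q
    for i in range(M):
        Dgate(np.abs(beta[i]), np.angle(beta[i])) | q[i]
    MeasureFock() | q

eng = sf.Engine("gaussian")
samples = eng.run(prog, shots=1000).samples  # photon-number patterns
\end{verbatim}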
In certain situations, only a few of the vibrational modes are of interest and it suffices to sample from their marginal distribution. Gaussian states are uniquely specified by their covariance matrix and vector of means, so computing marginal distributions is straightforward; the reduced states can be readily obtained from the covariance matrix and vector of means.
For simplicity and without loss of generality, consider the marginal distribution of the first $k$ modes of an initial state with $M$ modes. The $2k\times 2k$ reduced covariance matrix $V_k$ is obtained by selecting the $(i, i+M)$ rows and columns of the original covariance matrix $V$, for $i=1,2,\ldots, k$. Similarly, the reduced vector of means $\bm{\alpha}_k$ is constructed by keeping the corresponding $(i, i+M)$ entries of the original vector $\bm{\alpha}$. The marginal distribution of the first $k$ modes is then also given by Eq.~\eqref{Eq: lhaf}, with the exception that all quantities are defined with respect to $V_k$ and $\bm{\alpha}_k$. This process is illustrated in Fig.~\ref{fig:marginal}.
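In code, constructing the reduced state amounts to index selection (a sketch with zero-based mode indices; the function name is ours):
\begin{verbatim}
import numpy as np

def reduced_state(V, alpha, modes, M):
    """Reduced covariance matrix and vector of means of the kept modes."""
    idx = np.concatenate([modes, np.array(modes) + M])  # rows/cols i, i+M
    return V[np.ix_(idx, idx)], alpha[idx]

# example: marginal of modes 1 and 3 (zero-based 0 and 2) of a 4-mode state
# Vk, ak = reduced_state(V, alpha, [0, 2], M=4)
\end{verbatim}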
When $k$ is sufficiently small, it is possible to employ existing classical algorithms \cite{quesada2020exact} to simulate the resulting $k$-mode GBS device, thus leading to a quantum-inspired method for simulating vibrational excitations in molecules.
\section{Applications}\label{sec:apps}
In this section, we apply the GBS algorithm to simulate and optimize the vibrational excitations in pyrrole and butane during vibronic processes mediated by photoexcitation and mechanochemical excitation. In particular, we investigate the effect of vibrational pre-excitation on the photodissociation of the nitrogen-hydrogen (N-H) bond of pyrrole and explain our results with respect to experimental investigations \cite{Grygoryeva_2019,Epshtein_2011}. In the case of butane, we explore how the change in the strengths of the external mechanical force can excite vibrational modes that help to dissociate selective carbon-carbon (C-C) bonds and compare the results with those of molecular dynamics simulations and computational analyses \cite{Smalo_2014,Rybkin_2017}. In both applications, we explore situations that require generating GBS samples and also investigate cases where implementing the quantum-inspired algorithm is sufficient.
The electronic structure of the ground and excited states of pyrrole was computed, respectively, using the Coupled-Cluster method at the level of singles and doubles excitations (CCSD) \cite{Piecuch_2002} and its extension to model the excited states, the equation-of-motion CCSD (EOM-CCSD) \cite{Piecuch_2002, Kowalski_2004, Wloch_2005}. The Pople basis set 6-31+G(d) \cite{Ditchfield_1971} was used throughout. Single-point energy calculations with Dunning's correlation-consistent basis set augmented with diffuse functions, aug-cc-pVDZ \cite{Dunning_1989}, were performed to evaluate the accuracy of the smaller basis set in predicting excitation and bond dissociation energies. The calculations for butane were performed with density functional theory \cite{Hohenberg_1964} using the hybrid density functional B3LYP \cite{Lee_1988,Becke_1993,Stephens_1994} and the aug-cc-pVDZ basis set. All electronic structure calculations were performed with the general atomic and molecular electronic structure system (GAMESS) \cite{Schmidt_1993,Gordon_2005}. The sampling algorithm is implemented by simulating GBS devices using Strawberry Fields~\cite{killoran2019strawberry} and The Walrus~\cite{gupt2019walrus}.
\subsection{Photoexcitation of pyrrole}
The photochemistry of pyrrole has been the subject of several experimental and theoretical investigations \cite{Vallet_2005,Lan_2007,Epshtein_2011,Wu_2015,Grygoryeva_2019}. Pyrrole derivatives are building blocks of important biological molecules, such as chlorophyll and heme \cite{Stenesh_1998}, and technologically-important systems including solar cells and conducting polymers \cite{Rasmussen_2013}. Pyrrole is also a prototype for investigating photochemistry and light-induced photodissociation of heteroaromatic molecules \cite{Ashfold_2006}.
The first excited singlet electronic state of pyrrole has an experimental excitation energy of 5.22 eV \cite{Flicker_1976}. The excitation energy computed with EOM-CCSD/aug-cc-pVDZ performed on the geometry optimized with EOM-CCSD/6-31+G(d) is 5.22 eV, in perfect agreement with the experimental value. This indicates that the excited state geometry of pyrrole obtained using the smaller basis set 6-31+G(d) is a good approximation for the purposes of our simulations.
The excitation of pyrrole from the ground to the excited electronic state changes the energy of the N-H bond significantly \cite{Lan_2007, Vallet_2005, Neville_2014}. The potential energy curves for stretching this bond are plotted in Fig.~\ref{fig:pyscan}. The dissociation energy at the ground state geometry is 136.9 kcal mol$^{-1}$, while at the excited state the bond is dissociative with an energy barrier of 4.2 kcal mol$^{-1}$.
\begin{figure}[t]
\includegraphics[scale=0.6]{pyscan.pdf}
\centering
\caption{Potential energy curve for stretching the N-H bond of pyrrole in the ground and excited electronic states. The calculations were performed at the level of CCSD and EOM-CCSD for the ground and excited states, respectively. The solid and dashed lines correspond to the results obtained with the 6-31+G(d) basis set, while the symbols represent the results of single-point calculations using the larger basis set aug-cc-pVDZ. The average deviation between the energies obtained with the two basis sets is 1.7 kcal mol$^{-1}$.}\label{fig:pyscan}
\end{figure}
We employ the GBS algorithm to determine the distribution of vibrational excitations of pyrrole after a vibronic transition from the ground state to the electronic excited state. The marginal distributions of the normal modes of pyrrole, which determine the probability of vibrational excitations in a single mode, are plotted in Fig.~\ref{fig:pymarg}. They are obtained from the GBS samples. The distributions in Fig.~\ref{fig:pymarg} demonstrate that the normal modes with frequencies 1518.1 cm$^{-1}$, 1080.5 cm$^{-1}$, and 882.3 cm$^{-1}$ become highly excited during the vibronic transitions. These vibrational modes are illustrated in Fig.~\ref{fig:modes}. However, the vibrational normal mode that corresponds to the stretching of the N-H bond in the final electronic state, with a vibrational frequency of 2607.7 cm$^{-1}$, is not significantly excited during the vibronic transition (cf. Fig.~\ref{fig:pymarg}). As a result, the energy required to overcome the potential energy barrier for the dissociation of the N-H bond is mainly provided by thermal excitations.
\begin{figure}[t]
\includegraphics[scale=0.62]{pymarg.pdf}
\centering
\caption{Single-mode marginal distributions of pyrrole during a vibronic transition from ground to the first excited state without (top) and with (bottom) pre-excitation of the ground state N-H stretching mode. The mode with the vibrational frequency of 2607.7 cm$^{-1}$ corresponds to the stretching of the N-H bond in the final electronic state. The number of vibrational quanta in each mode is represented by $n$.
}\label{fig:pymarg}
\end{figure}
Experimental investigations show that excitation of the N-H stretching mode at the electronic ground state, before the initiation of the vibronic transition, enhances its dissociation \cite{Grygoryeva_2019}. This experimental observation can be explained by assuming that the vibrational energy in the N-H stretching mode of the ground state is preserved during the vibronic transition. To validate this assumption, we compute marginal distributions for the vibronic transition for the case where the N-H stretching mode of the ground state is initially excited. The results, which are shown in Fig.~\ref{fig:pymarg}, demonstrate that pre-excitation of the mode leads to significant vibrational excitation of the corresponding N-H mode after the transition. Since the energy barrier is lower in the excited state, this in principle increases the rate of N-H dissociation, in agreement with experimental observations \cite{Grygoryeva_2019}.
\begin{figure}[b]
\includegraphics[scale=0.2]{modes.pdf}
\centering
\caption{Vibrational normal modes of pyrrole with frequencies of a) 1518.1 cm$^{-1}$, b) 1080.5 cm$^{-1}$ and c) 882.3 cm$^{-1}$, which become highly excited during the vibronic transition. Two modes at the excited electronic state that include stretching of the C-N bonds, with vibrational frequencies of d) 1215.7 cm$^{-1}$ and e) 1589.7 cm$^{-1}$ are excited simultaneously as a result of the pre-excitation of the ground electronic state mode with a frequency of f) 1462.2 cm$^{-1}$.}\label{fig:modes}
\end{figure}
Marginal distributions are valuable when the probability of vibrational excitations in a single mode is of interest. In such cases, the quantum-inspired classical algorithms introduced in Sec.~\ref{sec:prog} can be implemented. However, sampling from the complete probability distribution is more informative when preparation of co-excited vibrational modes in the excited electronic state is needed. We now look at the effect of vibrational pre-excitation of pyrrole modes, at the ground electronic state, on the excitation of two modes at the excited electronic state that include stretching of the carbon-nitrogen (C-N) bonds. These two modes have vibrational frequencies of 1215.7 cm$^{-1}$ and 1589.7 cm$^{-1}$ and are illustrated in Fig.~\ref{fig:modes}. The choice of the C-N modes is motivated by the ring opening reaction that involves dissociation of both C-N bonds simultaneously. The sampling results demonstrate that pre-excitation of the mode with the vibrational frequency of 1462.2 cm$^{-1}$, illustrated in Fig.~\ref{fig:modes}, leads to the simultaneous excitation of the two normal modes of the excited electronic state that involve C-N stretching. Pre-excitation of this mode, by injecting an average of one photon, increases the probability of simultaneous excitation of the two C-N stretching modes from 0.2\% to 7.1\%. Pre-excitation with a larger number of photons further increases this probability.
\subsection{Mechanochemistry of butane}
\begin{figure}[t]
\includegraphics[scale=0.6]{buscan.pdf}
\centering
\caption{Potential energy curve for stretching the terminal carbon atoms of butane with the application of an external force. In the sudden-force regime, the application of the external force is abrupt and the molecule is vertically transferred to a new force-modified potential energy surface. This transition to the new potential energy surface is associated with vibrational excitation of the molecule.}\label{fig:buscan}
\end{figure}
In mechanochemistry, a mechanical force is applied to a molecule to drive chemical reactions.
The mechanism and kinetics of a mechanochemical process might be considerably affected by the rate at which the external force is applied \cite{Smalo_2014}. In the fixed-force regime, the external force acts gradually and the molecular geometry is allowed to relax during each stage of the process. In the sudden-force regime, the external force is applied instantaneously and the molecule is transformed to the force-modified potential energy surface (FM-PES) abruptly. This makes the process of sudden-force mechanochemistry analogous to the Franck-Condon transition in vibronic spectroscopy \cite{Rybkin_2017}. The external energy applied in the sudden-force regime can excite the molecule to higher vibrational energy levels at the FM-PES. Such excitations can provide the molecule with additional kinetic energy to overcome energy barriers of reaction channels on the FM-PES.
Butane has been investigated in both fixed and sudden-force regimes as a model system for understanding the effect of molecular dynamics on the mechanochemistry of linear alkane chains \cite{Smalo_2014}.
In the case of butane, it has been shown that the gradual application of the external force leads to the dissociation of the outer C-C bonds, while in the sudden-force regime, both outer and inner C-C bonds dissociate \cite{Smalo_2014}. The change in mechanism can be explained by the vibrational excitation of the molecule due to the abrupt application of the external force \cite{Rybkin_2017}. Here we simulate the vibrational excitation of butane on different FM-PESs corresponding to different magnitudes of external force applied suddenly to the terminal carbon atoms of the molecule.
\begin{figure}[b]
\includegraphics[scale=0.23]{bu.pdf}
\centering
\caption{Optimized structure of butane when a) no stretching force is applied to the terminal carbon atoms, b) the stretching force reaches the critical value, and c) the external force dissociates the molecule to ethylene and two methyl radicals.}\label{fig:bu}
\end{figure}
The potential energy curve for stretching the terminal carbon atoms of butane is presented in Fig.~\ref{fig:buscan}, and the optimized structures of butane at different stretching points are shown in Fig.~\ref{fig:bu}. Increasing the distance between the terminal carbon atoms leads to a stretching of both outer and inner C-C bonds. At a distance of 5.0 \AA, which corresponds to the maximum stretching force, the C-C bond lengths reach their maximum values. Further increasing the distance between the terminal carbon atoms leads to the formation of an ethylene molecule and two methyl radicals. Similarly, when the external force is applied gradually to the molecule and geometry relaxation is allowed during the stretching process, formation of the stable ethylene molecule favours the dissociation of the outer C-C bonds.
\begin{figure}[t]
\includegraphics[scale=0.6]{bumarg.pdf}
\centering
\caption{Single-mode marginal distributions of butane during a vibronic transition from ground to the force-modified potential energy surface. The magnitude of the external force is 4.42 nN. The marginal distributions correspond to the inner C-C bond (top) and an outer C-C bond (bottom) of butane. The number of vibrational quanta in each mode is represented by $n$.}\label{fig:bumarg}
\end{figure}
\begin{figure}[b]
\includegraphics[scale=0.6]{budyn.pdf}
\centering
\caption{Probability of observing different numbers of vibrational quanta ($n$) in the localized modes of butane during a vibronic transition from ground to the force-modified potential energy surface plotted as a function of time. The local modes correspond to the inner (top) and an outer (bottom) C-C bond. The magnitude of the external force is 3.88 nN.}\label{fig:budyn}
\end{figure}
When the external force is applied instantaneously to the terminal carbons of butane, the molecule is vertically transformed to a FM-PES (cf. Fig.~\ref{fig:buscan}) and its vibrational modes get excited. We explore these excitations by investigating the marginal distributions of the vibrational quanta in two localized \cite{Jacob_2009} modes of butane that correspond to the stretching of the inner and outer C-C bonds after application of external forces with magnitudes up to 4.42 nN, which is about 0.75 of the critical force required to break the outer C-C bonds. The results are presented in Fig.~\ref{fig:bumarg}. When the magnitude of the external force is less than 3 nN, the vibrational excitations are very small, but the application of larger external forces leads to significant excitations in both inner and outer C-C bonds. More specifically, the localized mode that corresponds to the stretching of the inner C-C bond gets significantly excited when a force of 4.42 nN is applied (cf. Fig.~\ref{fig:bumarg}). This vibrational excitation helps break the inner C-C bond, which has a lower dissociation barrier at the FM-PES. This observation provides further evidence supporting the role of vibrational excitations in explaining the dissociation of the inner C-C bond in the sudden-force regime of butane mechanochemistry \cite{Rybkin_2017, Smalo_2014}.
The distributions in Fig.~\ref{fig:bumarg} correspond to the instantaneous vibrational excitations during the transition to the FM-PESs. However, the distribution of the vibrational quanta in the local modes is time dependent and might fluctuate with time \cite{Sparrow_2018}. We look at the probability of the vibrational excitations in the localized modes of butane during the vibronic transition when a force of 3.88 nN is applied. For this value of the force, the vibrational energy levels of the inner C-C bond are not significantly populated when the vibrational dynamics is not included (cf.~Fig.~\ref{fig:bumarg}). In Fig.~\ref{fig:budyn}, the excitation probabilities for the C-C bonds are plotted with respect to time. The probability of exciting the inner C-C bond to its first vibrational energy level roughly doubles after only 25 fs. Analogous fluctuations are also observed for excitation to the higher energy levels, as illustrated in Fig.~\ref{fig:budyn}.
We now look at the co-excitation of the C-C stretching local modes of butane by generating GBS samples for the transition mediated by an external force of 4.42 nN. In Table~\ref{table:bucoex}, the probabilities of vibrational co-excitation in the local modes of butane that correspond to the inner and outer C-C stretching are presented for different times. These data indicate that the co-excitation probability of these two modes is not significant, especially for higher excitation levels (e.g., up to only 1.7\% for double excitation in both modes). This is also in agreement with the observation that the simultaneous dissociation of both inner and outer C-C bonds is not probable \cite{Rybkin_2017, Smalo_2014}. Owing to the computational expense of including all modes in the sampling, the sampling was performed for a reduced state of butane containing ten vibrational modes, selected based on their contribution to the inner and outer C-C stretching local modes.
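Probabilities such as those reported in Table~\ref{table:bucoex} are simple sample averages; for example, the fraction of samples with exactly $n$ and $m$ quanta in two chosen modes can be estimated as follows (a sketch operating on a \texttt{samples} array like the one produced by the sampling algorithm):
\begin{verbatim}
import numpy as np

def coexcitation_probability(samples, i, j, n, m):
    """Fraction of samples with n quanta in mode i and m quanta in mode j."""
    s = np.asarray(samples)
    return np.mean((s[:, i] == n) & (s[:, j] == m))
\end{verbatim}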
\begin{table}[t]
\caption[]{Probabilities (in percentages) of simultaneously exciting the outer and inner C-C stretching modes of butane, computed from ten thousand GBS samples obtained at different times $t$ (in fs).}
\label{table:bucoex}
\begin{center}
\begin{tabular}{ c c c c c c c}
\hline
Excitation\footnotemark & [1,1] & [1,2] & [2,1] & [1,3] & [2,2] & [1,4] \\
\hline
t = 0 & 9.1 & 5.8 & 2.8 & 2.6 & 1.7 & 1.0\\
t = 20 & 8.1 & 3.8 & 0.9 & 1.4 & 0.5 & 0.4\\
t = 40 & 3.4 & 2.1 & 0.4 & 1.0 & 0.2 & 0.5\\
t = 60 & 9.7 & 6.1 & 1.7 & 2.2 & 1.1 & 0.8\\
t = 80 & 8.1 & 5.4 & 1.6 & 2.0 & 1.0 & 0.6\\
t = 100 & 8.5 & 5.8 & 1.2 & 2.4 & 1.0 & 1.0\\
\hline
\end{tabular}
\footnotetext{The number of vibrational quanta in the outer (n) and inner (m) C-C stretching modes are represented as [n,m].}
\end{center}
\end{table}
\section{Conclusions}\label{sec:conc}
Recent advances in the development of quantum computers have opened the possibility to perform quantum simulation of materials on quantum devices. This creates a demand for developing quantum algorithms that can leverage the capabilities of near-term quantum computers for simulating molecular systems. In this work, we have demonstrated the ability of Gaussian Boson Sampling, which is a model for photonic quantum computing, to predict the vibrational excitation of molecules in vibronic processes. The ability to simulate molecular excitations potentially opens a way to control the outcome of chemical reactions that occur as a result of vibronic transitions. Furthermore, predicting such excitations is important for understanding the impact of molecular vibrations on the outcome of chemical reactions.
We introduced and implemented an algorithm to simulate the vibrational excitation of pyrrole during a vibronic transition and also investigated the possibility of optimizing the number of vibrational quanta in specific vibrational modes by pre-exciting the vibrational modes of the ground electronic state.
This application is motivated by experimental observations that demonstrated an enhancement in the dissociation of the N-H bond of pyrrole as a result of pre-excitation of the vibrational modes at the ground electronic state. Furthermore, we studied the excitation of vibrational modes in butane after application of mechanical forces. Such vibrational excitations can change the mechanism of bond dissociation in butane. Our results for these two model systems demonstrate that the vibrational excitations predicted by Gaussian Boson Sampling can help to explain experimental results and the predictions of molecular dynamics simulations.
Alongside the quantum algorithm, we introduced quantum-inspired classical methods that can be implemented in cases where only a small number of modes are of interest. The computational time required to perform sampling with these algorithms increases with the number of modes, and using a Gaussian Boson Sampling device becomes necessary for large molecules and in cases where large numbers of modes need to be simulated.
We expect our results to motivate further investigations on the impact of controlled vibrational excitations in applications beyond light-induced vibronic transitions. An example of such applications is the change in the reactivity and electronic properties of molecules in single-molecule junctions.
\bibliographystyle{apsrev}
\section{Introduction}\label{motiv}
The past two decades have witnessed the emergence of data-driven models for describing various engineering systems and physical phenomena in wide-ranging applications, from improving medical diagnosis \cite{Ma2017diagnosis} to weather forecasting \cite{guhathakurta2006}, energetic material property prediction \cite{casey2020prediction}, and turbulence modeling \cite{Durasaimy2019}. Data-driven methods such as deep neural networks are becoming popular in materials science and materials engineering in applications such as fatigue life prediction \cite{lee1999fatigue}, identification of material parameters \cite{lu2020multifidelity}, systems identification \cite{wang2021variational}, and constitutive modeling \cite{Le2015,liu2020,peng2020multiscale,garikipati2020multiresolution,tac2021datadriven}.
The notion of a free energy function, and the definition of the stress by differentiation of the free energy with respect to the deformation, lies at the core of nonlinear elasticity, from hyperelastic to energy dissipating materials \cite{marsden1994mathematical}. The stress, in turn, has to be a strongly elliptic function of the deformation gradient in order to guarantee the existence of traveling waves with real wave speeds \cite{kuhl2006illustration}. To satisfy this requirement and to guarantee the existence of solutions in nonlinear elastostatics, Ball showed that it is sufficient for the stored strain energy to be a polyconvex function of the deformation gradient \cite{ball1976convexity}. Traditionally, material model development has relied on expressing the strain energy as an explicit and differentiable analytical function of the deformation \cite{gasser2005GOH,holzapfel2000HGO,fung1979}. The notion of polyconvexity has been an important factor in the development of these closed-form material models \cite{chagnon2015hyperelastic}. In particular, advances in modeling of soft tissue material behavior have prompted the development of nonlinear and anisotropic strain energies that satisfy this polyconvexity requirement \cite{ehret2007polyconvex}.
Selection of an appropriate expert-constructed material model for a specific material usually requires advanced expertise and significant trial and error. There are a large number of models in the literature designed to fit different families of materials. Even in specific fields such as skin mechanics there is no consensus on the choice of a material model \cite{limbert2019skin, jor2013computational, mueller2021reliability}. Furthermore, the analytical form of the strain energy function may be too restrictive for many applications, resulting in poor prediction performance. Sensitivity with respect to parameters is another issue, for instance when exponential models are used \cite{lee2018}.
A recent trend in material modeling has been the use of deep neural networks to describe either the strain energy or its derivatives \cite{liu2020, tac2021datadriven, leng2021, Vlassis202elastoplast}. Neural networks of sufficient complexity can be used to learn, reproduce and predict the behavior of any elastic material, which resolves many issues of expert-constructed models. However, the convexity of the strain energy function in data-driven frameworks is usually ignored or enforced with additional loss terms \cite{liu2020, leng2021, tac2021datadriven}. Using this approach does not guarantee the convexity of the strain energy. The penalty terms will push the solution space to resemble that of a convex function in the training region. However, even if convexity checks are satisfied exactly on all the training points, there is no guarantee that convexity extends outside the training points, or beyond the boundaries of the training region. Furthermore, adding loss terms to ensure convexity adds to the nonlinearity of the loss space which makes finding the global minimum a daunting task. These loss terms also result in lengthy calculations during training and may limit the flexibility of the neural networks. Thus, there is a need for data-driven methods that automatically satisfy the polyconvexity requirements but can still capture the behavior of any material.
In this study we use a new type of neural networks known as neural ordinary differential equations (N-ODE) \cite{chen2019node} to estimate the derivatives of the strain energy function with respect to invariants of deformation. Polyconvexity of the strain energy is automatically satisfied in our formulation. We train the N-ODEs using synthetic as well as experimental stress-stretch data from porcine skin biaxial experiments obtained from \cite{tac2021datadriven}. Finally, we demonstrate the applicability of the N-ODE material model in finite element simulations. The diagram in Fig. \ref{fig_diagram} shows an overview of our methodology.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{fig_diagram.png}
\caption{Workflow of the training and inference processes in the N-ODE material model (left), continuous transformation of the hidden state in the N-ODE from modified invariants $\mathbf{H}(0) = J_i$, to the derivatives of the strain energy $\mathbf{H}(1) = \Psi_i$ (right).}
\label{fig_diagram}
\end{figure}
\section{Materials and Methods}
\subsection{Construction of polyconvex strain energy functions}
The existence of physically realistic solutions to nonlinear elasticity problems requires polyconvexity of the strain energy function with respect to the deformation gradient $\mathbf{F}$ \cite{ehret2007polyconvex, ball1976convexity, schroder2010anisotropic}. A function $\Psi(\mathbf{F})$ is polyconvex in $\mathbf{F}$ if there is a function $\hat{\Psi}$ with $\Psi(\mathbf{F})=\hat{\Psi}(\mathbf{F},\mathrm{cof}\mathbf{F},\mathrm{det}\mathbf{F})$ such that $\hat{\Psi}$ is convex on the extended domain formed by $\mathbf{F},\mathrm{cof}\mathbf{F},\mathrm{det}\mathbf{F}$. Constructing general functions that satisfy this requirement can be challenging, but a sufficiently flexible subset of polyconvex functions of $\mathbf{F}$ can be obtained through an additive decomposition
\begin{equation}
\Psi(\mathbf{F}) = \Psi_{\scas{F}}(\mathbf{F})+ \Psi_{\scas{cof}}(\mathrm{cof}\mathbf{F})+\Psi_{\scas{det}}(\mathrm{det}\mathbf{F})\,
\label{eq_polyconvex_sumF}
\end{equation}
with $\Psi_{\scas{F}}$, $\Psi_{\scas{cof}}$, $\Psi_{\scas{det}}$ each a convex function. Furthermore, to guarantee objectivity of the strain energy, $\Psi$ is usually not expressed as a function of the full deformation gradient but instead of the right Cauchy Green deformation $\mathbf{C}=\mathbf{F}^\top\mathbf{F}$ . In that case the following form of the strain energy will retain polyconvexity with respect to $\mathbf{F}$,
\begin{equation}
\Psi(\mathbf{F}) = \Psi_{I_1}(I_1)+ \Psi_{I_2}(I_2) + \Psi_{J}(J)\,
\label{eq_polyconvex_sumI}
\end{equation}
provided $\Psi_{I_1}$,$\Psi_{I_2}$ are convex non-decreasing functions of their respective arguments \cite{schroder2003invariant}, with $I_1, I_2$ the first two invariants of $\mathbf{C}$, and $J=\mathrm{det}\mathbf{F}=\sqrt{\mathrm{det}\mathbf{C}}$. The reason why constructing a function based on (\ref{eq_polyconvex_sumI}) yields polyconvex functions as expressed in (\ref{eq_polyconvex_sumF}) is because the invariants $I_1,I_2$ of $\mathbf{C}$ are convex non-decreasing functions of $\mathbf{F}$ and $\mathrm{cof}\mathbf{F}$, respectively. Explicitly, the invariants of $\mathbf{C}$, $I_1$, $I_2$, are related to $\mathbf{F}$, $\mbox{\rm{cof}}\,\mathbf{F}$,
\begin{eqnarray}
I_1 &=& \mbox{\rm{tr}}\,(\mathbf{C}) = \mathbf{C}:\mathbf{I},\\
I_2 &= \mbox{\rm{tr}}\,(\mbox{\rm{cof}}\,\mathbf{C}) = \frac{1}{2}\left[I_1^2 - \mbox{\rm{tr}}\,(\mathbf{C}^2)\right].
\label{eq_invs}
\end{eqnarray}
To consider anisotropy, additional pseudo-invariants need to be considered. A popular option is $I_{4v} = \mathbf{C}:\mathbf{V}_0$, where $\mathbf{V}_0=\mathbf{v}_0\otimes \mathbf{v}_0$ is a structural tensor defined by the anisotropy direction $\mathbf{v}_0$, which usually corresponds to a fiber orientation in soft tissue in the reference configuration. It can be shown that $I_{4v}$ is a convex function of $\mathbf{F}$. Consequently, we can include additional terms that depend on $I_{4v}$ in the strain energy function retaining polyconvexity while including the effects of anisotropy. We consider two different fiber directions $I_{4v}$ and $I_{4w}$, corresponding to the vectors $\mathbf{v}_0$ and $\mathbf{w}_0$ in the reference configuration. Lastly, we note that since $I_1$, $I_{4v}$,$I_{4w}$ are convex functions of $\mathbf{F}$, and $I_2$ is convex with respect to $\mathrm{cof}\mathbf{F}$, we can compose convex non-decreasing functions of linear combinations of these invariants and retain polyconvexity. Here, we propose the following general form of the strain energy function:
\begin{multline}
\Psi(\mathbf{F}) = \Psi_{I_1}(I_1)+ \Psi_{I_2}(I_2)+ \Psi_{I_{4v}}(I_{4v}) +\Psi_{I_{4w}}(I_{4w}) + \\
\sum_{j > i} \Psi_{I_i,I_j} \left(\alpha_{ij} I_i+ (1-\alpha_{ij})I_j\right)+ \Psi_{J}(J)\,
\label{eq_polyconvex_sum_aniso}
\end{multline}
where the second to last term includes convex non-decreasing functions of all possible linear combinations of the invariants introduced so far. The weights $\alpha_{ij}$ are restricted to the interval $(0,1)$. We could include additional terms depending on the specific application, for instance, anisotropic invariants that are convex with respect to $\mathrm{cof}\mathbf{F}$, such as $I_{5v}=\mathrm{tr}(\mathrm{cof}\mathbf{C}\mathbf{V}_0)$, can be added to this framework.
Turning the attention back to (\ref{eq_polyconvex_sum_aniso}), each of the $\Psi_{I_i}$ and $\Psi_{I_i, I_j}$ needs to be convex non-decreasing. This is equivalent to the derivatives $d\Psi_{I_i}/d I_i$ being monotonic functions, with $d\Psi_{I_i}/d I_i \geq 0$ in the domain of $I_i$. In the next section we show how to leverage N-ODEs to generate monotonic derivative functions and thus polyconvex strain energies.
\subsection{Neural ordinary differential equations}
\label{methods_NODE}
We are interested in finding functions that create monotonic maps between inputs $\vec{x}$ and outputs $\vec{y}$. N-ODEs are a novel architecture of neural networks that generalizes some successful models, such as residual networks \cite{he2016deep}. The key idea is to replace the discrete sequence of layers of classical neural networks with a continuous transformation of the hidden state by a learnable function. In this sense, the concept of depth of the neural network is replaced by time. This results in the construction of an ordinary differential equation system:
\begin{equation}
\frac{\partial \mathbf{H}(t)}{\partial t} = \vec{f}(\mathbf{H}(t),t; \vec{\theta})
\end{equation}
Here $\mathbf{H}(t)$ represents the hidden state of the neural network. The function $\vec{f}(\cdot;\vec{\theta})$ is represented by a fully-connected neural network with trainable parameters $\vec{\theta}$. The relationship between the input and the output for this model can be obtained by integrating the system over an arbitrary interval of time:
\begin{equation}
\mathbf{H}(1) = \mathbf{H}(0) + \int_0^1 \vec{f}(\mathbf{H}(t),t; \vec{\theta}) dt
\end{equation}
Here, we assign the input to the initial condition $\vec{x} = \mathbf{H}(0)$ and the output to the final state $\vec{y} = \mathbf{H}(1)$.
We know from ordinary differential equation analysis that solution trajectories never intersect each other in the state space, as illustrated in Fig.~\ref{fig_ODE_mapping}. Intersection of trajectories would imply that for a given point in the state space there is more than one value of the rate of change $f(\mathbf{H}(t))$, which contravenes the definition of the ordinary differential equation system. In the one-dimensional case, this condition implies that for two different trajectories ${\rm H}_1(t)$ and ${\rm H}_2(t)$:
\begin{eqnarray}
{\rm H}_2(0) \geq {\rm H}_1(0) & \iff & {\rm H}_2(1) \geq {\rm H}_1(1)\\
{\rm H}_2(0) < {\rm H}_1(0) & \iff & {\rm H}_2(1) < {\rm H}_1(1)
\end{eqnarray}
These conditions can be succinctly written in terms of the input $x = {\rm H}(0)$ and output $y = {\rm H}(1)$ of the neural ordinary differential equations as:
\begin{equation}
(y_2 - y_1)(x_2 - x_1)\geq 0 \label{eq:nondec}
\end{equation}
which correspond exactly to the requirements for a monotonic function. To satisfy the polyconvexity requirements, we also need to ensure that these functions are non-negative in the domain of the input. To achieve this property, we need to ensure that
\begin{equation}
\int_0^1 f({\rm H}(t)) dt \geq \max\{0,-{\rm H}_{min}\}, \hspace{0.1cm} {\rm H}(0) = {\rm H}_{min}
\end{equation}
where ${\rm H}_{min}$ is the lowest possible value of the input. Although there are multiple ways to satisfy this condition, here we focus on the simplest case, where ${\rm H}_{min} = 0$ and $f(0) = 0$. This particular scenario can be achieved by shifting the inputs and removing all the bias parameters from the neural network that approximates $f$, so that the minimum shifted input corresponds to the initial condition ${\rm H}(0)=0$. In this case, the neural network that approximates the right-hand side of the ordinary differential equation can be written as:
\begin{equation}
f(x) = \mathbf{W}_n\, h(\cdots \mathbf{W}_2\, h(\mathbf{W}_1 x) \cdots)
\end{equation}
where $h(\cdot)$ is a non-linear activation function applied element-wise, $n$ is the depth of the network, and $\mathbf{W}_i$ are the learnable parameters.
With this setup, we have shown that the functions approximated by one-dimensional N-ODEs are monotonic and non-negative, and suitable to construct polyconvex strain energy functions.
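The construction above can be prototyped in a few lines. The sketch below uses a fixed-step fourth-order Runge-Kutta integrator and random, untrained weights purely to illustrate that the resulting map is monotonic and non-negative; none of the numerical choices correspond to the trained models used later:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 1))   # bias-free weights; f(0) = 0 by construction
W2 = rng.normal(size=(5, 5))
W3 = rng.normal(size=(1, 5))

def f(H):
    """Bias-free MLP f(H) = W3 tanh(W2 tanh(W1 H))."""
    return W3 @ np.tanh(W2 @ np.tanh(W1 @ H))

def node_map(x, steps=20):
    """Integrate dH/dt = f(H) from t = 0 to t = 1 with H(0) = x (RK4)."""
    H = np.atleast_2d(x).astype(float)
    h = 1.0 / steps
    for _ in range(steps):
        k1 = f(H)
        k2 = f(H + 0.5 * h * k1)
        k3 = f(H + 0.5 * h * k2)
        k4 = f(H + h * k3)
        H = H + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return H

# monotone and non-negative on a grid of shifted invariants J = I - 3 >= 0
J = np.linspace(0.0, 2.0, 50).reshape(1, -1)
y = node_map(J)
assert np.all(np.diff(y.ravel()) >= 0) and np.all(y >= 0)
\end{verbatim}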
\subsection*{Anisotropic, hyperelastic, and fully incompressible materials}
To demonstrate the potential of the proposed framework, we will attempt to learn the behavior of anisotropic, hyperelastic, and fully incompressible materials. We have introduced the strain energy, $\Psi$, which is a function of the right Cauchy-Green deformation tensor, $\mathbf{C}$, and two material direction vectors, $\mathbf{v}_0$ and $\mathbf{w}_0$. For incompressible materials, the last term of the strain energy, $\Psi_J$, is replaced by the constraint $p(J-1)$ with $p$ a Lagrange multiplier. Furthermore, in the case of hyperelastic materials the free energy has no other contribution. The second Piola-Kirchhoff stress tensor, $\mathbf{S}$, follows from the Doyle-Ericksen formula by differentiating the strain energy $\Psi$ with respect to $\mathbf{C}$ \cite{DOYLE195653},
\begin{equation}
\mathbf{S} = 2\frac{\partial\Psi}{\partial \mathbf{C}} = 2 \frac{\partial \Psi}{\partial I_1} \mathbf{I} + 2 \frac{\partial \Psi}{\partial I_2} (I_1 \mathbf{I} - \mathbf{C}) + 2 \frac{\partial \Psi}{\partial I_{4v}} \mathbf{V}_0 + 2 \frac{\partial \Psi}{\partial I_{4w}} \mathbf{W}_0 + p\mathbf{C}^{-1} \, .
\label{eq_S}
\end{equation}
Oftentimes in finite element packages, the Cauchy stress, $\mathbf{\sigma}$, is used, which can be obtained via a push forward operation.
To solve the equilibrium equations we also need the elasticity tensor:
\begin{equation}
\mathbb{C} = 2\frac{\partial \mathbf{S}}{\partial \mathbf{C} }.
\label{eq_CC}
\end{equation}
The full expansion of the elasticity tensor is shown in the Appendix. Also note that if the finite element implementation is done in the deformed configuration, the spatial version of the elasticity tensor, $\mathbb{c}$, can be obtained via a push forward operation.
Therefore, given N-ODEs describing the strain energy derivatives, these functions can be used directly to determine the stress for a given deformation by evaluating (\ref{eq_S}). For nonlinear finite element simulations, evaluation of the tangent (\ref{eq_CC}) requires the second derivatives of the strain energy, which in our case involves taking derivatives of the N-ODEs with respect to their inputs.
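As an illustration, given callables for the learned derivatives, the stress (\ref{eq_S}) can be assembled as below. The placeholders \texttt{dPsi\_dI1}, etc., stand for the trained N-ODEs; the mixed terms of (\ref{eq_polyconvex_sum_aniso}) would contribute analogous chain-rule terms and are omitted here for brevity:
\begin{verbatim}
import numpy as np

def second_PK(C, V0, W0, p, dPsi_dI1, dPsi_dI2, dPsi_dI4v, dPsi_dI4w):
    """Second Piola-Kirchhoff stress, incompressible two-fiber material."""
    I = np.eye(3)
    I1 = np.trace(C)
    I2 = 0.5 * (I1**2 - np.trace(C @ C))
    I4v = np.tensordot(C, V0)              # double contraction C : V0
    I4w = np.tensordot(C, W0)
    return (2.0 * dPsi_dI1(I1) * I
            + 2.0 * dPsi_dI2(I2) * (I1 * I - C)
            + 2.0 * dPsi_dI4v(I4v) * V0
            + 2.0 * dPsi_dI4w(I4w) * W0
            + p * np.linalg.inv(C))
\end{verbatim}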
\subsection*{Data-driven polyconvex constitutive models using neural ordinary differential equations}
We have shown that N-ODEs produce monotonic functions. We therefore use this tool to represent the derivatives of the terms in eq. (\ref{eq_polyconvex_sum_aniso}), i.e. $\partial \Psi_{I_1}/\partial I_1, \partial \Psi_{I_2}/\partial I_2$, etc. as illustrated in Fig. \ref{fig_diagram}. By producing monotonic non-negative derivative functions, we guarantee that the underlying strain energy terms are convex non-decreasing functions of the invariants, which in turn guarantees polyconvexity of the energy with respect to $\mathbf{F}$. A total of $10$ N-ODEs are used, each corresponding to a different partial derivative of the strain energy. The architecture of each of the N-ODEs is the same and summarized in Table \ref{table01}.
\begin{table}[h!]\centering
\caption{Neural network architecture}
\label{table01}
\begin{tabularx}{0.78\textwidth}{lll}
\hline
Layer & Number of nodes & Activation function\\ \hline
Input & 1 & None\\
Hidden layer 1 & 5 & tanh \\
Hidden layer 2 & 5 & tanh\\
Output & 1 & Linear\\
\hline
\end{tabularx}
\end{table}
While the monotonicity condition is guaranteed directly by using N-ODEs, to satisfy the non-negativity conditions, $\partial \Psi_{I_i}/\partial I_i \geq 0$, we first note that $I_1, I_2 \geq 3$ for an incompressible material \cite{schroder2003invariant}. For these invariants, setting $J_1=I_1 - 3, J_2=I_2 - 3$ as inputs to the N-ODE and setting biases to zero gives the initial condition $H(0)=0$. A single non-negative bias is added after the evaluation of each of these two N-ODEs, i.e. $H_1$, $H_2$ for $\partial \Psi_{I_1}/\partial I_1 , \partial \Psi_{I_2}/\partial I_2$, resulting in $\partial \Psi_{I_1}/\partial I_1 , \partial \Psi_{I_2}/\partial I_2 \geq 0$ at $J_1=0,J_2=0$. In this way, the functions $\Psi_{I_1}, \Psi_{I_2}$ are convex non-decreasing with a minimum of $H_1,H_2$ at $I_1=3,I_2=3$, which not only guarantees the polyconvexity of the strain energy but also allows the model to capture polyconvex models such as Mooney-Rivlin and neo-Hookean. We remark that this formulation can be extended to compressible materials using modified versions of $I_1, I_2$. Namely, the usual split of the deformation gradient into isochoric and volumetric parts results in the isochoric invariant $\bar{I}_1\geq 3$, such that $\bar{I}_1$ is polyconvex with respect to $\mathbf{F}$ \cite{schroder2003invariant}. The isochoric invariant $\bar{I}_2$ is not polyconvex with respect to $\mathbf{F}$. However, dividing by an appropriate power of $J$, one can obtain modified invariants $\hat{I}_2\geq3$ which are polyconvex \cite{schroder2003invariant}.
For the anisotropic terms, the domain of the input is $I_{4v},I_{4w} \geq 0$, with $I_{4v} = I_{4w} = 1$ for the identity map. We set shifted invariants $J_{4v}=I_{4v} -1, J_{4w}=I_{4w} - 1$ as the inputs of the N-ODEs and remove all biases to get $\partial \Psi_{I_{4v}}/\partial I_{4v},\partial \Psi_{I_{4w}}/\partial I_{4w} \geq 0$, and $\partial \Psi_{I_{4v}}/\partial I_{4v},\partial \Psi_{I_{4w}}/\partial I_{4w} = 0$ for $I_{4v},I_{4w}=1$. For the mixed terms in eq. (\ref{eq_polyconvex_sum_aniso}) we also set all biases to zero, but, additionally, we check whether the output of the N-ODE is non-negative and otherwise set it to zero.
The N-ODE framework outlined here is general; it will always produce monotonic derivative functions and thus convex functions of the invariants. Specifying the minimum of the energy and the stress-free state can be achieved in a variety of ways, and we show one convenient solution. Note also that it is possible to incorporate anisotropic compression terms by considering the invariants $I_{5v} = \mbox{\rm{cof}}\, \mathbf{C}: \mathbf{v}_0 \otimes \mathbf{v}_0$ and $I_{5w} = \mbox{\rm{cof}}\, \mathbf{C}: \mathbf{w}_0 \otimes \mathbf{w}_0$, which are convex with respect to $\mathrm{cof}\mathbf{F}$ and represent area changes orthogonal to the fibers. Extension to compressible materials is also straightforward. Lastly, even though we focus on hyperelasticity, polyconvex strain energies can be used as building blocks to describe dissipative mechanisms.
\subsubsection*{Model calibration and verification}
\label{methods_training_data}
The training and validation data for the N-ODEs are taken from biaxial tests on porcine skin. The data corresponds to five experimental protocols described in Table \ref{table02}. In this case, the experiments are described in terms of the principal stretches of the deformation gradient:
\begin{equation}
\mathbf{F} = \begin{bmatrix}
\lambda_{xx} & 0 & 0 \\ 0 & \lambda_{yy} & 0 \\ 0 & 0 & \lambda_{zz}
\end{bmatrix}
\end{equation}
With the assumption of plane stress and incompressible behavior, $J = \lambda_{xx}\lambda_{yy}\lambda_{zz} = 1$ leads to $\lambda_{zz} = 1/(\lambda_{xx} \lambda_{yy})$, plus a boundary condition to evaluate the pressure Lagrange multiplier $p$; a programmatic sketch of this evaluation follows Table~\ref{table02}.
\begin{table}[h!]\centering
\caption{Biaxial experimental protocols. $\lambda_{xx}$ and $\lambda_{yy}$ represent the stretches imposed in the $x$ and $y$ directions and $\sigma_{zz}$ is the stress in the $z$ direction. The parameter $\lambda$ is monotonically increased during the experiments to stretch the tissue. The last component of the deformation gradient is directly obtained from the incompressibility constraint, while the vanishing normal stress is imposed as the necessary boundary condition to solve for the pressure $p$.}
\label{table02}
\begin{tabularx}{0.4\textwidth}{llll}
\hline
Loading & $\lambda_{xx}$ & $\lambda_{yy}$ & $\sigma_{zz}$\\ \hline
Off-x & $\sqrt{\lambda}$ & $\lambda$ & 0\\
Off-y & $\lambda$ & $\sqrt{\lambda}$ & 0\\
Equibiaxial & $\lambda$ & $\lambda$ & 0\\
Strip-x & $\lambda$ & 1 & 0\\
Strip-y & 1 & $\lambda$ & 0\\
\hline
\end{tabularx}
\end{table}
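For any protocol in Table~\ref{table02}, the plane-stress condition fixes the Lagrange multiplier. A sketch of the resulting stress evaluation, reusing the hypothetical \texttt{second\_PK} helper from above, is:
\begin{verbatim}
import numpy as np

def biaxial_cauchy_stress(lxx, lyy, V0, W0, derivs):
    """Cauchy stress under plane-stress, incompressible biaxial loading."""
    lzz = 1.0 / (lxx * lyy)                  # incompressibility, J = 1
    F = np.diag([lxx, lyy, lzz])
    C = F.T @ F
    S0 = second_PK(C, V0, W0, 0.0, *derivs)  # stress with p = 0
    p = -S0[2, 2] * C[2, 2]                  # sigma_zz = 0 fixes p
    S = S0 + p * np.linalg.inv(C)
    return F @ S @ F.T                       # sigma = F S F^T since J = 1
\end{verbatim}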
We also test the N-ODE against synthetic data generated using four popular analytical models summarized next.
\subsubsection*{Holzapfel-Gasser-Ogden (HGO) }
The HGO material model proposed in \cite{holzapfel2000HGO} assumes there are two families of fibers that contribute to the energy through an exponential term. The strain energy is,
\begin{equation}
\Psi (\mathbf{C}, \mathbf{v}_0, \mathbf{w}_0) = \Psi_{\text{iso}} (\mathbf{C}) + \Psi_{\text{aniso}} (\mathbf{C}, \mathbf{v}_0, \mathbf{w}_0) + p(J-1) ,
\end{equation}
with dependence on two anisotropy directions $\mathbf{v}_0,\mathbf{w}_0$,
\begin{equation}
\Psi_{\text{aniso}} (\mathbf{C}, \mathbf{v}_0, \mathbf{w}_0) = \frac{k_1}{2k_2} \sum_{i=4v,4w} \left\{\exp \left[k_2 (I_i - 1)^2 \right] -1 \right\} \, .
\label{HGOpsianiso}
\end{equation}
The parameters controlling the anisotropic contribution are $k_1,k_2$. The strain invariants $I_{4v},I_{4w}$ are the same as defined in eq. (\ref{eq_invs}). The isotropic contribution depends on the first invariant, $I_1$, and is that of a neo-Hookean solid
\begin{equation}
\Psi_{\text{iso}}(\mathbf{C}) = \mu (I_1-3)\, ,
\end{equation}
with parameter $\mu$. The incompressibility constraint is imposed through the Lagrange multiplier $p$.
\subsubsection*{Gasser-Ogden-Holzapfel (GOH) }
The GOH material model was initially proposed to model arterial walls \cite{gasser2005GOH}. It is an extension of the HGO model that considers a single fiber family but incorporates fiber dispersion. Since then, it has been used to model other soft tissues including skin. The strain energy density function in this model is
\begin{equation}
\Psi (\mathbf{C}, \mathbf{v}_0) = \Psi_{\text{iso}} (\mathbf{C}) + \Psi_{\text{aniso}} (\mathbf{C}, \mathbf{v}_0) + p(J-1)\, ,
\end{equation}
where
\begin{align}
&\Psi_{\text{iso}}(\mathbf{C}) = \mu (I_1-3) \, ,
\label{GOHpsiiso}
\\
&\Psi_{\text{aniso}} (\mathbf{C}, \mathbf{v}_0) = \frac{k_1}{4k_2} \left[\exp \left(k_2 E^2 \right) -1 \right] \, ,
\label{GOHpsianiso}
\end{align}
and
\begin{equation}
E = \left[\kappa I_1 + (1-3\kappa) I_{4v} - 1 \right]\, .
\end{equation}
The parameters are the same as those used in the HGO model, except for the fiber dispersion parameter $\kappa$. The strain invariants, $I_1$ and $I_{4v}$ are the same as defined in (\ref{eq_invs}), with the single anisotropy direction $\mathbf{v}_0$.
\subsubsection*{Mooney Rivlin (MR)}
Originally, the MR model was proposed to capture the mechanical response of rubber-like materials at large strains \cite{Mooney1940, Rivlin1948}. There are a number of different formulations of MR models; here we use
\begin{equation}
\Psi(\mathbf{C}) = C_{10}(I_1-3) + C_{01}(I_2-3) + C_{20}(I_1-3)^2 + p(J-1) \, ,
\end{equation}
parameterized by $C_{10}, C_{01}, C_{20}$.
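For example, synthetic equibiaxial data for this model follow directly from its invariant derivatives; the sketch below reuses the hypothetical \texttt{biaxial\_cauchy\_stress} helper, with arbitrary parameter values chosen only for illustration:
\begin{verbatim}
import numpy as np

C10, C01, C20 = 0.1, 0.02, 0.05        # arbitrary illustrative parameters
mr_derivs = (lambda I1: C10 + 2.0 * C20 * (I1 - 3.0),  # dPsi/dI1
             lambda I2: C01,                           # dPsi/dI2
             lambda I4: 0.0,                           # no fiber terms
             lambda I4: 0.0)

V0 = W0 = np.zeros((3, 3))
for lam in np.linspace(1.0, 1.3, 7):   # equibiaxial protocol
    sig = biaxial_cauchy_stress(lam, lam, V0, W0, mr_derivs)
    print(lam, sig[0, 0], sig[1, 1])
\end{verbatim}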
\subsubsection*{Fung-type models }
The strain energy function proposed by Fung et al. was one of the first models of soft tissue to capture the strain-stiffening anisotropy of collagenous tissue \cite{fung1979}. Unlike the previous material models, the strain energy proposed by Fung et al. is directly in terms of the Green-Lagrange strain, $\mathbf{E}=(\mathbf{C}-\mathbf{I})/2$,
\begin{equation}
\Psi(\mathbf{C}) = \frac{c_1}{2}\exp(a_1 E_{xx}^2 + a_2 E_{yy}^2 + 2a_4 E_{xx}E_{yy})
\label{eq_fung}
\end{equation}
with parameters $a_1,a_2,a_4,c_1$. Note that equation (\ref{eq_fung}) is strictly a model of a two-dimensional material and, as originally introduced in \cite{fung1979}, it cannot be used for three-dimensional elasticity problems. However, since its original development, generalized forms of the model by Fung et al. have been developed, e.g. \cite{sun2005finite}. In this work we use the original form based on \cite{fung1979}.
\subsubsection*{Finite element implementation}
A major motivation for the data-driven framework described here is to make it readily usable in large scale finite element simulations and realistic applications. Since the N-ODEs directly encode energy derivatives, it is straightforward to implement the N-ODE material model through the UANISOHYPER subroutine in the commercial finite element package Abaqus. The implementation is available in the GitHub repository linked at the end.
\section{Results}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig_input_space.png}
\caption{Input space of the N-ODE in (a) $\lambda_{xx}-\lambda_{yy}$ space, (b) $I_1-I_2$ space, and (c) $I_1-I_2-I_{4v}$ space. Points in the input space where testing is not feasible are shown with grey markers, whereas feasible points are in shades of blue. The boundary of the feasible region is marked by the uniaxial tests in the $x$ and $y$ directions in $\lambda_{xx}-\lambda_{yy}$ space in (a). The map to the space of invariants is nonlinear and condenses the points into a narrow cone in the $I_1-I_2$ space in (b). The anisotropic invariant in (c) further shows how evenly spaced points in the $\lambda_{xx}-\lambda_{yy}$ space are nonlinearly mapped to the input domain of the N-ODEs. }
\label{fig_input_space}
\end{figure}
The model should estimate the stresses for arbitrary deformations of the material. In experimental mechanics of thin specimens such as skin, the most appropriate method to quantify material behavior is through biaxial tests in terms of the principal stretches, $\lambda_{xx},\lambda_{yy}$. However, as outlined above, the model inputs are the invariants of the right Cauchy-Green deformation tensor, $I_i$. In Fig. \ref{fig_input_space}a we show how the input space for biaxial experiments maps into the input space of the N-ODEs, Fig. \ref{fig_input_space}b,c.
The protocols in Table~\ref{table02} are shown as curves in Fig.~\ref{fig_input_space}a; the scatter points in shades of blue denote deformations in the feasible region of $\lambda_{xx}-\lambda_{yy}$ space, and the scatter points in gray denote infeasible regions. Points colored gray are not achievable during the testing of thin specimens like skin because they correspond to a compressive state under which thin membranes would buckle. The boundary between the feasible and infeasible regions is given by the uniaxial tests in the $x$ and $y$ directions.
The same points from Fig. \ref{fig_input_space}a are mapped into the invariant space. A projection onto the space $I_1-I_2$ is depicted in Fig. \ref{fig_input_space}b, and a projection onto $I_1-I_2-I_{4v}$ is shown in Fig. \ref{fig_input_space}c. The space $I_1,I_2$ corresponds to the input space for the isotropic material behavior. The equibiaxial loading protocol falls in the middle of the feasible region in the $\lambda_{xx}-\lambda_{yy}$ space, but it is at the boundary of the $I_1-I_2$ space. The uniaxial loading cases, which determine the boundary of the feasible region in $\lambda_{xx}-\lambda_{yy}$ space also continue to bound the feasible domain in the $I_1-I_2$ space. Even though the testing points in Fig. \ref{fig_input_space}a are evenly distributed in $\lambda_{xx}-\lambda_{yy}$ space, they form a narrow cone in the $I_1-I_2$ space. The map is highly nonlinear. Since the material model is a function of the invariants, the performance of data-driven models depends on sampling the invariant space, which is not necessarily achieved by simply covering evenly spaced points in $\lambda_{xx}-\lambda_{yy}$ space.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig_ODE_mapping.png}
\caption{An illustration of how the N-ODE maps points in the input space to partial derivatives of the strain energy. Time evolution of (a) $J_1$ to $\partial \Psi / \partial I_1$, (b) $J_2$ to $\partial \Psi / \partial I_2$, (c) $J_{4v}$ to $\partial \Psi / \partial I_{4v}$ and (d) $J_{4w}$ to $\partial \Psi / \partial I_{4w}$, and (e) the shape of the resulting functions $\partial \Psi / \partial I_i$. The notation $J_i$ is used to denote shifted invariants as inputs to the N-ODE in order to satisfy the initial condition $H(0)=0$, which is one solution to achieve non-negative derivatives with a vanishing stress at the identity map.}
\label{fig_ODE_mapping}
\end{figure}
In Fig.~\ref{fig_ODE_mapping} we illustrate the principle behind the N-ODE map. Points from the input space (shifted invariants $J_i$, i.e., $J_1 = I_1 - 3$) are integrated in time based on the neural network, which encodes the right-hand side of an ODE. The partial derivatives of the strain energy function are defined as the solution of a N-ODE at the fictitious time $t=1$. Only the mappings for the invariants $J_1, J_2, J_{4v}$ and $J_{4w}$ are shown in Fig.~\ref{fig_ODE_mapping}a-d. The monotonicity of the transformation is evident in these figures; it follows from the fact that curves originating at different initial conditions never intersect as they are integrated in time. Fig.~\ref{fig_ODE_mapping}e represents the direct relationship between input and output without the pseudo-time axis.
\subsection*{Comparisons against synthetic data}
We start by training the N-ODE material model with synthetic stress-stretch data generated from the four closed-form analytical models introduced in section \ref{methods_training_data}. We plot the training data as well as the predictions of the N-ODE for each of the four cases in Fig. \ref{fig_synthetic}. Even though the analytical models each have a completely different functional form, the same N-ODE architecture is capable of replicating the different synthetic datasets with high accuracy. The average absolute error is small in all cases. The highest error occurs for the Fung model, for which stresses are an order of magnitude higher than the other models for the parameters chosen.
The best fit of the N-ODE is for the Mooney-Rivlin material. Among the four synthetic datasets, the Mooney-Rivlin is the least nonlinear. The other three analytical models include an exponential term that describes a rapid strain stiffening typical of soft tissues. Concomitantly, the exponential term in these models poses challenges for model calibration due to the extreme sensitivity to parameters \cite{lee2018}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig_synthetic.png}
\caption{Synthetic training data and N-ODE predictions using (a) GOH, (b) Mooney Rivlin, (c) HGO and (d) Fung type material models. The N-ODE is able to interpolate synthetic data from closed-form constitutive equations.}
\label{fig_synthetic}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{fig_convexity.png}
\caption{Strain energy contours for synthetic data generated using (a) GOH and (c) Fung type material models, and (b), (d) the predictions of the N-ODE after training on these datasets, respectively. The training region corresponded to tensile biaxial deformations, as indicated, while the shaded region was not included during learning. When fed non-convex data, the N-ODE still produces a polyconvex output because polyconvexity is built into the formulation.}
\label{fig_convexity}
\end{figure}
Next, we demonstrate how the N-ODE automatically satisfies polyconvexity of the strain energy. We train the N-ODE material model with synthetic data from the GOH and Fung models. We show in Figs. \ref{fig_convexity}a,c the strain energy contours of the two analytical models; the corresponding N-ODE approximations are in Figs. \ref{fig_convexity}b,d. GOH is an inherently convex model \cite{gasser2005GOH}, which is reflected in the contour of the energy as a function of the Green Lagrange strain components for parameters $\mu = 0.0102, k_1 = 0.513, k_2 = 59.1, \kappa = 0.271, \theta = 1.57$ (Fig. \ref{fig_convexity}a). Naturally, the N-ODE trained with this model also predicts a convex function (Fig. \ref{fig_convexity}b). The Fung model, on the other hand, is known to exhibit non-convex behavior for some choices of parameters, for instance $c_1 = 0.00241, a_1 = -1.75, a_2 = -21.5, a_4 = 49.8$. The N-ODE model cannot interpolate these data; it can only find a convex approximation, as seen in Fig. \ref{fig_convexity}d.
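For reference, a sketch of the GOH energy used to generate these contours is shown below, assuming the standard single-fiber-family form of \cite{gasser2005GOH} with the parameter values quoted above; the tension-only switch on the fiber strain is a common modeling choice and may differ from the exact implementation used to generate the figure:
\begin{verbatim}
import numpy as np

def psi_goh(I1, I4, mu=0.0102, k1=0.513, k2=59.1, kappa=0.271):
    # Fiber strain with dispersion kappa (isotropic for kappa = 1/3)
    E = kappa * (I1 - 3.0) + (1.0 - 3.0 * kappa) * (I4 - 1.0)
    E = np.maximum(E, 0.0)   # fibers assumed inactive in compression
    return 0.5 * mu * (I1 - 3.0) \
         + 0.5 * k1 / k2 * (np.exp(k2 * E**2) - 1.0)
\end{verbatim}
Evaluating this energy on a grid of biaxial deformations and plotting its level sets produces convex contours like those in Fig. \ref{fig_convexity}a.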
\subsection*{Performance on experimental data}
The primary objective of developing N-ODE material models is being able to capture experimental data without the restrictions of closed-form energy functions. The experimental data for this example consist of five biaxial tests, as described in Table \ref{table02}, performed on porcine skin specimens. We divide the data into two sets: we train on three of the biaxial tests and validate against the remaining two.
The N-ODE reproduces the training data almost identically, with an average error of 0.011 MPa in the worst case scenario. This is expected from the results on the synthetic datasets. The error in the validation tests is naturally higher than in the training set, but it is still in a reasonable range, with average errors of 1.242 MPa for the off-x and 0.648 MPa for the off-y data.
In Fig. \ref{fig_exp2}, we train both the N-ODE and the four analytical models with data from a different porcine skin specimen subjected to the same loading as described in Table \ref{table02}. Similarly to before, we split the data into training and validation sets. The maximum principal stress predictions, $\sigma_{mp}$, from each of the analytical models as well as the N-ODE model are plotted as the graph of a surface over the $\lambda_{xx},\lambda_{yy}$ plane (Fig. \ref{fig_exp2}a-e). The experimental data are plotted as curves colored based on the absolute error between these curves and the predicted response surface. Fig. \ref{fig_exp2}f shows a box plot of the error distribution for each model. None of the analytical models are capable of capturing the behavior of the material in the training set, whereas the N-ODE replicates the behavior in that region flawlessly. Errors naturally increase overall in the validation set. Yet, even in the validation set, the N-ODE has the lowest median errors. Note also that under high equibiaxial stretches, the stresses in the GOH, HGO and Fung models increase to unreasonably high values. This is due to the exponential form of these analytical models. The N-ODE, on the other hand, maintains convexity but does not infer an exponential growth beyond the training region.
\begin{table}[h!]\centering
\caption{Mean absolute error, in MPa, for the validation set for the 4 analytical models and the N-ODE. Training/fitting is performed with the first 80\% of data points of each loading protocol while the last 20\% are held for validation.}
\label{table_val}
\begin{tabularx}{0.71\textwidth}{llllll}
\hline
Protocol & GOH & MR & HGO & Fung & N-ODE \\ \hline
Off-x & \textbf{0.083} & 0.114 & 0.245 & 0.196 & 0.084 \\
Off-y & \textbf{0.010} & 0.024 & 0.071 & 0.071 & 0.027 \\
Equibiaxial & 0.037 & 0.100 & 0.141 & 0.082 & \textbf{0.030} \\
Strip-x & 0.056 & 0.123 & 0.051 & 0.130 & \textbf{0.039} \\
Strip-y & 0.120 & 0.095 & 0.051 & 0.210 & \textbf{0.050} \\ \hline
Average & 0.062 & 0.091 & 0.112 & 0.138 & \textbf{0.046} \\
\hline
\end{tabularx}
\end{table}
Table \ref{table_val} summarizes the performances of the five models from another series of tests. In each row of Table \ref{table_val} we train the models on the first 80\% of the loading path and test against the remaining 20\%. The average error from the validation test is shown in the Table. In other words, these tests are done to measure the ability of each of the models to extrapolate away from the training region, toward larger deformations. The last row in Table \ref{table_val} contains the average of the errors among the five individual tests. The N-ODE has the lowest error in three of the five cases, and is a close second in the Off-x case. The N-ODE has the lowest average error over the five tests, at 0.046 MPa. The second best performing model, GOH, has 35\% higher error relative to the N-ODE model.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{fig_P12AC1.png}
\caption{Predictions of the N-ODE material model after training on porcine data. Test results from (a) equibiaxial, (b) Strip-x and (c) Strip-y testing protocols were used for training while (d) Off-x and (e) Off-y test results were reserved for validation.}
\label{fig_exp1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{fig_exp2.png}
\caption{Surfaces of maximum principal stress ($\sigma_{MP}$) and curves with the experimental data colored based on error for (a) GOH, (b) Mooney Rivlin, (c) HGO, (d) Fung and (e) N-ODE. Box plots of error distribution for the five models (f).}
\label{fig_exp2}
\end{figure}
\subsection*{Finite element simulations}
The results from finite element simulations in Abaqus for a range of geometries and loading scenarios using the N-ODE material model are showcased here.
First, three basic loading scenarios were applied to a 5 $\times$ 5 cm specimen, as shown in Fig. \ref{fig_FEM}. For a simple uniaxial tension test in the $x$ direction, unconstrained in $y$, the resulting stresses match the analytical evaluation (see Fig. \ref{fig_exp1}), indicating that the subroutine is functioning as desired (Fig. \ref{fig_FEM}a).
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{fig_FEMbasic.png}
\caption{Finite element simulations using the N-ODE material model. Boundary conditions on the undeformed geometry are shown in the first column, deformed geometry in the second column, and contours of $\sigma_{xx}$ and $\sigma_{yy}$ on the deformed geometry are depicted in the last two columns for (a) uniaxial tension, (b) shearing and stretching, (c) torsional loading scenarios. The material is anisotropic, with the directions of anisotropy depicted as quiver plots in the reference geometries. The red vector field corresponds to the stiffer orientation $\mathbf{v}_0$, while the black vector field corresponds to $\mathbf{w}_0$.}
\label{fig_FEM}
\end{figure}
In Fig. \ref{fig_FEM}b the left side of the specimen is held fixed while the right side is displaced in both the $x$ and $y$ directions, whereas in Fig. \ref{fig_FEM}c a torsional loading is applied. For the shearing deformation we see a band of stress for the $\sigma_{xx}$ component, and a concentration of the $\sigma_{yy}$ component at the corners, but otherwise fairly uniform and small $\sigma_{yy}$ stresses since the upper and bottom surfaces are unconstrained. When a torque is applied in Fig. \ref{fig_FEM}c, stress concentrations develop in the corners where the deformation is largest, as expected from the clamping condition on the boundary. In all cases the simulations converged quadratically, demonstrating that the N-ODE model can be used as a constitutive equation in stable, nonlinear finite element simulations.
Next, we performed tissue expansion simulations on a $10 \times 10 \times 0.3$ cm\textsuperscript{3} patch of skin (Fig.~\ref{fig_FEMexp}a). A rectangular expander of dimensions $8 \times 8$ cm\textsuperscript{2} underneath the skin is modeled by the fluid cavity feature in Abaqus. The expander is inflated to 20, 30 and 40 cm\textsuperscript{3}, resulting in the strain contours shown in Fig. \ref{fig_FEMexp}b and c.
Lastly, a surgical operation was simulated on the scalp of a cancer patient with a patient-specific geometry reported in \cite{lee2021personalized}. Figs.~\ref{fig_scalp2}a-e show the model during various stages of surgery. Fig.~\ref{fig_scalp2}a shows the initial geometry of the model, and Figs.~\ref{fig_scalp2}b-e show contours of maximum principal stress on the deformed geometry when the sutures near the ear are completed and when the sutures on top of the cranium are brought together. Fig.~\ref{fig_scalp2}f shows a photo of the patient's scalp prior to surgery and Fig.~\ref{fig_scalp2}g a post-operative photo of the same region with the predicted contours of maximum principal stress superimposed.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{fig_FEMexp.png}
\caption{Finite element simulations of tissue expansion using the N-ODE material model in UANISOHYPER. Model setup (a), and contours of strain (b and c) on the deformed geometry after the expander is inflated to 20, 40 and 60 cm\textsuperscript{3}, respectively.}
\label{fig_FEMexp}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{cranium2.png}
\caption{Virtual surgery finite element model performed in Abaqus using the N-ODE material model through the user defined UANISOHYPER subroutine. Initial geometry of the scalp (a), contours of maximum principal stress on the deformed geometry after the lower sutures are completed (b), and as the upper sutures are stretched to close the wound by 30\%, 60\% and 100\% (c-e), respectively. Photographs of the scalp of the patient before the surgery (f) and after the surgery superimposed with contours of maximum principal stress (g).}
\label{fig_scalp2}
\end{figure}
\section{Discussion}
In this paper we present the first automatically polyconvex data-driven framework for constitutive modeling of nonlinear anisotropic materials. No other data-driven material model in the literature has been able to guarantee polyconvexity \textit{a priori}. Instead, most proposed methods either do not impose convexity or rely on additional loss terms to penalize deviation from convexity during model training \cite{VLASSIS2020113299,tac2021datadriven}.
Our approach is general enough to approximate a large class of material models. Although there might be polyconvex strain energy functions that are not captured by our solution, they might not be needed in practice. In general, our approach can represent any degree of non-linearity for a given invariant and reproduce certain types of interactions between invariants. We could arbitrarily increase the complexity of the model by adding second order interactions (i.e. $I_i + I_j + I_k$) and by composing the resulting strain energy function with other convex non-decreasing functions. If we do this multiple times, we could create deep networks to approximate an even larger class of strain energy functions. However, from what we have seen in our experiments, the capacity of the functions presented in this work is enough to model the behavior of highly nonlinear and anisotropic materials, e.g. soft tissues.
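The composition step relies on the standard convexity-preserving rule: if $g$ is convex and $f$ is convex and non-decreasing, then, for smooth one-dimensional functions,
\begin{equation*}
(f\circ g)'' = f''(g)\,(g')^2 + f'(g)\,g'' \;\geq\; 0\;,
\end{equation*}
so stacking such layers keeps the resulting strain energy convex in its arguments.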
The proposed framework is based on the principal invariants of the right Cauchy-Green deformation tensor, which is commonplace in the development of anisotropic polyconvex strain energies \cite{chagnon2015hyperelastic}. However, it could be reformulated for a different set of basic invariants such as the principal stretches \cite{shariff2011physical}, or the invariants of the right stretch tensor \cite{steigmann2003isotropic,steigmann2003frame}. Specifically for anisotropy, we have restricted our attention to the pseudo-invariants $I_{4v}$ and $I_{4w}$, which are polyconvex functions of the deformation that measure the square of the stretch along the directions $\mathbf{v}_0$ and $\mathbf{w}_0$. Other anisotropic pseudo-invariants are possible. For example, as discussed in the Methods section, $I_{5v}=\mathrm{cof}\mathbf{C}:\mathbf{v}_0\otimes \mathbf{v}_0$ and $I_{5w}=\mathrm{cof}\mathbf{C}:\mathbf{w}_0\otimes \mathbf{w}_0$ are also polyconvex functions that can be used to capture energy associated with compression of the anisotropy directions \cite{schroder2003invariant}. We did not consider terms such as $I_{5v},I_{5w}$ because our data consisted of biaxial tests and because it is common to assume that fibers in soft tissues do not contribute to the energy under compression \cite{holzapfel2000HGO}. Incorporating additional directions of anisotropy, for instance based on some fiber probability distribution \cite{lanir1979structural,sacks2000biaxial}, would be straightforward. The results with only two directions were already satisfactory and we did not include additional anisotropic pseudo-invariants for that reason.
Alternative conditions of convexity have been considered in the literature. Among them, convexity of the strain energy with respect to $\mathbf{C}$ is often sought in constitutive models \cite{VLASSIS2020113299,Holzapfel2009heart}. However, the conditions under which convexity with respect to $\mathbf{C}$ guarantees global minimizers to boundary value problems in nonlinear elasticity is not fully elucidated, see for example the recent work by Gao et al. \cite{gao2017convexity}. As pointed out in \cite{sivaloganathan2018uniqueness}, convexity with respect to $\mathbf{C}$ and polyconvexity with respect to $\mathbf{F}$ are not equivalent. Thus, in this work we focused on the condition of polyconvexity established by Ball \cite{ball1976convexity}.
Although the proposed approach works excellently for the cases we tested, we acknowledge that there are alternatives to arrive at similar results. For example, any invertible neural network architecture should work, because a function that possesses an inverse in one dimension is monotonic. N-ODEs are an elegant and efficient solution to this problem, because the gradient can be computed with a system of ODEs rather than back-propagation, which can be expensive for a deep neural network. We also note that there are other architectures to guarantee convexity of functions based on similar principles \cite{amos2017input}. We think our method has two main advantages: first, one can directly incorporate domain knowledge by selecting the terms and interactions that are relevant rather than using all possible inputs (strain invariants and combinations). This approach also adds interpretability, as each of the fitted functions can be inspected and compared to existing models. Second, by approximating derivatives of the strain energy function rather than the strain energy function itself, we decrease the degree of differentiability of the neural network and can employ simpler activation functions, such as ReLU. Alternative methods to deal with the issues of differentiability are integrable neural networks or regularization terms during model training \cite{VLASSIS2020113299,teichert2019machine}. Even though we do not have direct access to the strain energy function, the fact that the derivative functions are all one-dimensional enables the use of standard quadrature rules to efficiently integrate the strain energy if needed.
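As an illustration of the last point, a one-dimensional Gauss-Legendre rule suffices to recover the energy from a fitted derivative; in this sketch \texttt{dpsi} stands for any of the learned one-dimensional derivative functions:
\begin{verbatim}
import numpy as np

def energy_from_derivative(dpsi, J, n_quad=16):
    # Psi(J) = integral of dPsi/dJ' from 0 to J, by Gauss-Legendre
    x, w = np.polynomial.legendre.leggauss(n_quad)  # nodes on [-1, 1]
    t = 0.5 * J * (x + 1.0)                         # map to [0, J]
    return 0.5 * J * np.sum(w * dpsi(t))

# check: if dPsi/dJ = 2 J then Psi(J) = J^2
print(energy_from_derivative(lambda t: 2.0 * t, 1.5))   # 2.25
\end{verbatim}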
Lastly, we remark that in this paper we have focused on incompressible, and hyperelastic behavior. This is not a limitation of the framework but simply the application explored in detail here. The framework produces polyconvex strain energies applicable to a much wider set of problems. For example, consideration of compressible behavior would require a suitable volumetric strain energy function which can also be data-driven and convex. Modeling of viscoelastic materials or elastoplastic deformations can also be formulated using data-driven polyconvex strain energies \cite{bonet2015computational,krishnan2014polyconvex,nordsletten2021viscoelastic}.
\section*{Conclusions}
We present a data-driven framework to construct automatically polyconvex strain energy functions. The formulation is based on N-ODEs. We showcase the framework by using it to model the nonlinear, anisotropic, hyperelastic, and incompressible behavior of skin. The data-driven framework outperforms closed-form constitutive models. Our results strengthen the appeal of data-driven methods to better capture experimental material response without the constraints of analytical expressions for the strain energy, while still satisfying the basic requirements of physically realistic models, i.e. the notion of polyconvexity. Additionally, the formulation is invariant-based and allows for the efficient computation of the second derivatives of the strain energy, which further enables the use of our data-driven framework in finite element simulations. We therefore anticipate that this work will further cement the use of data-driven methods in computational mechanics.
\section*{Acknowledgements}
This work was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institute of Health under award R01AR074525.
\section*{Supplementary material}
Code associated with this publication is available at a Github repository
\url{https://github.com/tajtac/NODE_v2}
\section{Introduction}\label{sec:1}
Most atomic nuclei in their ground state are deformed away from a spherical shape. Nuclear deformation arises due to the short-range strong nuclear force among the nucleons themselves, and depending on the proton and neutron numbers, the minima in the total energy of the system can be found for spherical, ellipsoidal, octupole and hexadecapole shapes~\cite{Heyde2011,Togashi:2016yzs,Heyde:2016sop,Frauendorf:2017ryj,Zhou:2016ujx}. Information about nuclear deformation is primarily extracted from spectroscopic measurements and models of the reduced transition probability $B(En)$ between low-lying rotational states, which involve nuclear experiments with energies per nucleon below a few tens of MeV. Recently, the prospects of probing nuclear deformation at much higher beam energies, with energy per nucleon exceeding hundreds of GeV, by taking advantage of the hydrodynamic flow behavior of the large number of produced final-state particles, have been discussed~\cite{Heinz:2004ir,Filip:2009zz,Shou:2014eya,Goldschmidt:2015kpa,Giacalone:2017dud,Giacalone:2018apa,Giacalone:2021uhj,Giacalone:2021udy,Jia:2021wbq,Jia:2021tzt,Bally:2021qys}, and several pieces of experimental evidence have been reported~\cite{Adamczyk:2015obl,Acharya:2018ihu,Sirunyan:2019wqp,Aad:2019xmh,jjia}.
The shape of a nucleus, including only the dominant quadrupole component, is often described by a nuclear density profile of the Woods-Saxon form,
\begin{align}\label{eq:1}
\rho(r,\theta,\phi)=\frac{\rho_0}{1+e^{\left[r-R(\theta,\phi)\right]/a_0}},\;R(\theta,\phi) = R_0\left(1+\beta_2 [\cos \gamma Y_{2,0}(\theta,\phi)+ \sin\gamma Y_{2,2}(\theta,\phi)]\right),
\end{align}
where the nuclear surface $R(\theta,\phi)$ is expanded into real form spherical harmonics $Y_{2,m}$ in the intrinsic frame. The positive number $\beta_2$ describes the overall quadrupole deformation, and the triaxiality parameter $\gamma$ controls the relative order of the three radii $r_a,r_b,r_c$ of the nucleus in the intrinsic frame. It has the range $0\leq\gamma\leq\pi/3$, with $\gamma=0$, $\gamma=\pi/3$, and $\gamma=\pi/6$ corresponding, respectively, to prolate ($r_a=r_b<r_c)$, oblate ($r_a<r_b=r_c$) or rigid triaxiality ($r_a<r_b<r_c$ and $2r_b=r_a+r_c$), see top row of Fig.~\ref{fig:1} for an illustration. Most nuclei have axially symmetric prolate or oblate shapes, and triaxiality is a rather elusive signature in nuclear structure physics. The triaxial degree of freedom is related to a number of interesting phenomena including the $\gamma$-band~\cite{bohr}, chirality~\cite{Frauendorf:1997mux} and wobbling motion~\cite{PRL.86.5866,RevModPhys.73.463}, but the extraction of the $\gamma$ value has significant experimental and theoretical uncertainties. An interesting question is if and how triaxiality may manifest itself in other fields of nuclear physics.
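The ordering of the radii follows directly from Eq.~\eqref{eq:1}: evaluating the surface $R(\theta,\phi)$ along the three intrinsic axes, with the real harmonics normalized to unity, gives
\begin{align*}
R_k = R_0\left[1+\sqrt{\frac{5}{4\pi}}\,\beta_2\cos\left(\gamma-\frac{2\pi k}{3}\right)\right],\;\;k=1,2,3,
\end{align*}
so $\gamma=0$ yields $r_a=r_b<r_c$, $\gamma=\pi/3$ yields $r_a<r_b=r_c$, and $\gamma=\pi/6$ yields three equally spaced radii with $2r_b=r_a+r_c$.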
High-energy heavy-ion collisions at RHIC and the LHC, especially head-on collisions with nearly zero impact parameter (ultra-central collisions, or UCC), provide a new way to image the shape of the nucleus. The large amount of energy deposited in these collisions leads to the formation of a hot and dense quark-gluon plasma (QGP)~\cite{Busza:2018rrf} in the overlap region, whose shape and size are strongly correlated with the nuclear deformation, as illustrated by the second row of Fig.~\ref{fig:1}. The transverse area $S_{\perp}$ (or size $R_{\perp}$) and the eccentricity of the overlap can be quantified by
\begin{align}\label{eq:2}
S_{\perp}\equiv \pi R^2_{\perp}= \pi\sqrt{\lr{x^2}\lr{y^2}}\;,\;\;\;{\bf \epsilon_2} \equiv \varepsilon_2e^{i2\Phi_2} = - \frac{\lr{r_{\perp}^2 e^{i2\phi}}}{\lr{r_{\perp}^2}},
\end{align}
where the average is over nucleons in the transverse plane $(x,y)=(r_{\perp},\phi)$ in the rotated center-of-mass frame such that $x$ ($y$) corresponds to the minor (major) axis of the ellipsoid. Within the liquid-drop model with a sharp surface, the variances of $\varepsilon_2$ and $R_{\perp}$ over many head-on collisions are directly related to $\beta_2$: $\lr{\varepsilon_2^2}=\frac{3}{2\pi}\beta_2^2$, $\lr{(\delta R_{\perp}/R_{\perp})^2} = \frac{1}{16\pi} \beta_2^2$, where $\delta R_{\perp}/R_{\perp} \equiv (R_{\perp}-\lr{R_{\perp}})/\lr{R_{\perp}}$ denotes the event-by-event fluctuation relative to the average. Driven by the large pressure-gradient forces and the subsequent hydrodynamic collective expansion, the initial shape and size information is transferred into the azimuthal and radial flow of the final-state hadrons~\cite{Heinz:2013wva}. Specifically, the particle momentum spectra in each collision event can be parameterized as $\frac{d^2N}{\pT d\pT d\phi} = N(\pT) \left[1+2v_2(\pT) \cos 2(\phi-\Psi)\right]$ in $\phi$ and transverse momentum $\pT$. The magnitude of the radial flow, characterized by the slope of the particle spectrum $N(\pT)$ or the average $[\pT]$, is positively correlated with the gradient of the nucleon density or the inverse transverse size $d_{\perp}$
\begin{align}\label{eq:2b}
d_{\perp} =\sqrt{N_{\mathrm{part}}/S_{\perp}},
\end{align}
in the overlap region~\cite{Bozek:2012fw,Schenke:2020uqq}, with $N_{\mathrm{part}}$ being the number of participating nucleons. This is because $d_{\perp}\propto 1/R_{\perp}$ is proportional to the pressure gradient and therefore is expected to be correlated with $[\pT]$. Similarly, the amplitude and orientation of the elliptic flow, characterized by $V_2=v_2e^{i2\Psi}$, is directly related to ${\bf \epsilon_2}=\varepsilon_2e^{i2\Phi}$. In fact, detailed hydrodynamic model simulations~\cite{Niemi:2015qia,Schenke:2020uqq} show good linear relations for events with fixed $N_{\mathrm{part}}$:
\begin{align}\label{eq:3}
v_2=k_2\varepsilon_2,\;\;\; \frac{\delta [\pT]}{[\pT]} = k_0\frac{\delta d_{\perp}}{d_{\perp}} = -k_0\frac{\delta R_{\perp}}{R_{\perp}} =-k_0\frac{1}{2}\frac{\delta S_{\perp}}{S_{\perp}}\;.
\end{align}
The response coefficients $k_2$ and $k_0$ capture the transport properties of the QGP and they have been well constrained theoretically~\cite{Teaney:2012ke,Bernhard:2016tnd,Bernhard:2019bmu,Nijs:2020ors}.
As indicated clearly in the second row of Fig.~\ref{fig:1}, in ultra-central collisions of prolate nuclei, the shape of the overlap falls in between the ``body-body'' and ``tip-tip'' configurations, with the long axis perpendicular or parallel to the beam, respectively. The body-body collisions have a large $\varepsilon_2$ and a larger size $R_{\perp}$, and therefore a smaller $d_{\perp}$, while the tip-tip collisions have a near-zero $\varepsilon_2$ and a larger $d_{\perp}$, i.e. the correlation between $\varepsilon_2$ and $d_{\perp}$ is negative, $\lr{\varepsilon_2^2\delta d_{\perp}}<0$~\cite{Giacalone:2019pca}. In contrast, the covariance of $\varepsilon_2$ and $d_{\perp}$ is expected to be positive for collisions of oblate nuclei, and zero for collisions of rigid triaxial nuclei. Eq.~\eqref{eq:3} would then imply that $\lr{v_2^2\delta[\pT]}<0$, $>0$ and $=0$ for collisions of prolate, oblate and rigid triaxial nuclei, respectively. In fact, we find that both $\lr{\varepsilon_2^2\delta d_{\perp}}$ and $\lr{v_2^2\delta[\pT]}$ are dominated by a $-\cos(3\gamma)$ dependence in ultra-central collisions, which is not surprising given the three-fold symmetry of the nuclear shape in the $\gamma$ angle.
\begin{figure}[!t]
\begin{center}
\hspace*{0.4cm}\includegraphics[width=0.95\linewidth]{figidea.pdf}
\end{center}
\caption{\label{fig:1} The cartoon of a nucleus with quadrupole deformation $\beta_2=0.25$ (top row) with prolate (left), rigid triaxial (middle), and oblate (right) shape, and correspondingly the overlap containing the quark-gluon plasma in the ultra-central collisions (middle row) and distribution of the transverse size relative to the mean $\delta R_{\perp}/R_{\perp}=-\delta d_{\perp}/d_{\perp}$ derived from Eq.~\eqref{eq:11} (bottom row). The distributions in the bottom are given in units of $\beta_2$.}
\end{figure}
Another interesting aspect of the deformation in heavy ion collisions, not yet discussed in the literature, concerns the nature of the event-by-event fluctuations of $R_{\perp}$ or $d_{\perp}$ in the presence of deformation and how they influence the $[\pT]$ fluctuations. As shown in the bottom row of Fig.~\ref{fig:1}, the probabilities of the various overlap configurations are not equal. In collisions of rigid triaxial nuclei, the shape of the overlap in the transverse plane falls in between the three configurations for the two axes of the ellipse: ``$r_b-r_a$'', ``$r_c-r_a$'' and ``$r_c-r_b$''. The combination ``$r_c-r_a$'' has the largest probability, and as the nucleus becomes more prolate (oblate), the middle branch merges with the right (left) branch and the distribution becomes more asymmetric. This gives rise to a skewness $\lr{(\delta d_{\perp})^3}\sim -\lr{(\delta R_{\perp})^3}\neq 0$ for $\gamma\neq \pi/6$, and the sign of $\lr{(\delta d_{\perp})^3}$ is expected to be opposite to that of $\lr{\varepsilon_2^2\delta d_{\perp}}$. Indeed, we find that $\lr{(\delta d_{\perp})^3}$ contains a large $\cos (3\gamma)$ term, which is expected to drive a similar skewness $\lr{(\delta [\pT])^3}$ in the final state. Therefore, we have identified two three-particle correlation observables, $\lr{v_2^2\delta[\pT]}$ and $\lr{(\delta [\pT])^3}$, to probe nuclear triaxiality in heavy ion collisions. The $\beta_2$ value, on the other hand, can be constrained from the two-particle correlation observables $\lr{v_2^2}$ and $\lr{(\delta [\pT])^2}$.
Several experimental studies of nuclear deformation in heavy ion collisions have been carried out at RHIC~\cite{Adamczyk:2015obl,Giacalone:2021udy} and the LHC~\cite{Acharya:2018ihu,Sirunyan:2019wqp,Aad:2019xmh}, focusing mostly on the relation between $\beta_2$ and $v_2$ in the UCC. However, the most striking evidence is provided by the recent measurement of $\lr{v_2^2\delta[\pT]}$ and $\lr{(\delta [\pT])^3}$ in $^{197}$Au+$^{197}$Au and $^{238}$U+$^{238}$U collisions at RHIC~\cite{jjia}. The large prolate deformation of $^{238}$U yields a large negative contribution to $\lr{v_2^2\delta[\pT]}$ and a large positive contribution to $\lr{(\delta [\pT])^3}$, consistent with Fig.~\ref{fig:1} discussed above. A few model studies on the feasibility of constraining triaxiality in heavy ion collisions appeared recently~\cite{Jia:2021wbq,Jia:2021tzt,Bally:2021qys}. In light of these measurements and model work, we aim to clarify, via a Monte-Carlo Glauber model and a transport model, the influence of deformation on the cumulants of $\varepsilon_2$ and $[\pT]$. Remarkably, we found that the $\beta_2$ and $\gamma$ dependencies of these observables follow a very simple parametric functional form. In particular, we find that $\lr{\varepsilon_2^2}$ and $\lr{(\delta d_{\perp})^2}$ can be well described by a function of the $a'+b'\beta_2^2$ form, while $\lr{\varepsilon_2^2\delta d_{\perp}}$ and $\lr{(\delta d_{\perp})^3}$ by a function of the $a'+(b'+c'\cos(3\gamma))\beta_2^3$ form, with $b'$ and $c'$ nearly independent of the size of the collision systems. This finding provides a motivation for a collision system scan of nuclei with similar $\beta_2$ but different $\gamma$ values, which may provide additional insight on the question of shape evolution and shape coexistence~\cite{Heyde2011}.
\section{Analytical estimate}\label{sec:2}
Let us first predict the analytical form for the $(\beta_2,\gamma)$ dependencies using a simple heuristic argument. For small deformation $\beta_2$, the values of $d_{\perp}$ and ${\bm \epsilon}_2$ in a given event are expected to have the following form:
\begin{align}\label{eq:4}
\frac{\delta d_{\perp}}{d_{\perp}} = \delta_d + p_0(\Omega_1,\Omega_2,\gamma)\beta_2 + \mathcal{O}(\beta_2^2)\;,\;{\bm \epsilon}_2 = {\bm \epsilon}_0 + {\bm p}_{2}(\Omega_1,\Omega_2,\gamma)\beta_2 + \mathcal{O}(\beta_2^2),
\end{align}
where the scalar $\delta_d$ and vector ${\bm \epsilon}_0=\varepsilon_0e^{i2\Phi_{2;0}}$ are the values without deformation effects, which in UCC collisions are dominated by the random fluctuations of nucleon positions, but in non-central collisions are also affected by the impact-parameter-dependent average shape of the overlap. The $p_0$ and ${\bm p}_{2}$ are phase space factors controlled by the Euler angles $\Omega=\phi\theta\psi$ of the two nuclei, and they also contain the $\gamma$ parameter. For example, in collisions of prolate nuclei (see the left of the middle row of Fig~\ref{fig:1}), $|p_0|$ and $|{\bm p}_{2}|$ are largest for the ``body-body'' orientation and smallest for the ``tip-tip'' orientation. Since the fluctuation of $\delta_d$ (${\bm \epsilon}_0$) is uncorrelated with $p_0$ (${\bm p}_{2}$), an average over collisions with different Euler angles is expected to give the following expressions for the variances
\begin{align}\label{eq:5}
C_{\mathrm{d}}\{2\}\equiv\lr{\left(\frac{\delta d_{\perp}}{d_{\perp}}\right)^2} = \lr{\delta_d^2} + \lr{p_0(\Omega_1,\Omega_2,\gamma)^2}\beta_2^2\;,\;
c_{2,\epsilon}\{2\}\equiv\lr{\varepsilon_2^2} = \lr{\varepsilon_0^2} + \lr{{\bm p}_{2}(\Omega_1,\Omega_2,\gamma){\bm p}_{2}^*(\Omega_1,\Omega_2,\gamma)}\beta_2^2\;.
\end{align}
The $\lr{p_0^2}$ and $\lr{{\bm p}_{2}{\bm p}_{2}^*}$ are constants obtained by averaging over $\Omega_1$ and $\Omega_2$, and their precise values depend on the functional form of Eq.~\eqref{eq:1}. This argument can be generalized to higher-order cumulants. For example, the skewness and kurtosis of inverse size, which are estimators for skewness and kurtosis of $[\pT]$ fluctuations, can be written as,
\begin{align}\nonumber
C_{\mathrm{d}}\{3\}&\equiv\lr{\left(\frac{\delta d_{\perp}}{d_{\perp}}\right)^3} = \lr{\delta_d^3} + \lr{p_0(\Omega_1,\Omega_2,\gamma)^3}\beta_2^3\;,\\\label{eq:6}
C_{\mathrm{d}}\{4\}&\equiv\lr{\left(\frac{\delta d_{\perp}}{d_{\perp}}\right)^4}-3\lr{\left(\frac{\delta d_{\perp}}{d_{\perp}}\right)^2}^2 = \lr{\delta_d^4}-3\lr{\delta_d^2}^2 + (\lr{p_0(\Omega_1,\Omega_2,\gamma)^4}-3\lr{p_0(\Omega_1,\Omega_2,\gamma)^2}^2)\beta_2^4\;.
\end{align}
The fourth-order cumulant of ${\bm \epsilon}_2$ is
\begin{align}\nonumber
c_{2,\epsilon}\{4\}\equiv\lr{\varepsilon_2^4}-2\lr{\varepsilon_2^2}^2 = \lr{\varepsilon_0^4}-2\lr{\varepsilon_0^2}^2 + \left(\lr{{\bm p}_{2}^2{\bm p}_{2}^{*2}}-2\lr{{\bm p}_{2}{\bm p}_{2}^{*}}^2\right)\beta_2^4\;,
\end{align}
where we use the fact that $\lr{{\bm p}_{2}^n{\bm p}_{2}^{*m}}=0$ for $n\neq m$. We shall skip the straightforward expressions for the higher-order cumulants of ${\bm \epsilon}_2$. A more interesting example is the three-particle mixed skewness $\lr{\varepsilon_2^2\frac{\delta d_{\perp}}{d_{\perp}}}$, a good estimator for $\lr{v_2^2\frac{\delta [\pT]}{[\pT]}}$, which can be expressed as,
\begin{align}\label{eq:7}
\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}} = \lr{\varepsilon_0^2\delta_d} +\lr{p_0{\bm p}_{2}{\bm p}_{2}^*}\beta_2^3\;.
\end{align}
The first term represents the value for collisions of spherical nuclei, while the deformation contribution has an explicit $\beta_2^3$ dependence with a coefficient that contains the $\gamma$ dependence. Note that in non-central collisions, cross-terms like $\lr{p_0({\bm p}_{2}{\bm \epsilon}_{0}^*+{\bm p}_{2}^*{\bm \epsilon}_{0})}\beta_2^2$ may not vanish due to possible alignment between ${\bm \epsilon}_{0}$ and ${\bm p}_{2}$.
This argument can be generalized to the simultaneous presence of octupole or hexadecapole deformations, for which additional axially symmetric components are added to the nuclear surface in Eq.~\eqref{eq:1},
\begin{align}\label{eq:1b}
R(\theta,\phi) = R_0\left(1+\beta_2 [\cos \gamma Y_{2,0}(\theta,\phi)+ \sin\gamma Y_{2,2}(\theta,\phi)] + \beta_3 Y_{3,0}(\theta,\phi)+\beta_4 Y_{4,0}(\theta,\phi)\right)\;,
\end{align}
as well as to the higher-order eccentricities of the overlap region in the transverse plane, defined as
\begin{align}\label{eq:9}
{\bm \epsilon}_1\equiv\varepsilon_1e^{i\Phi_1} = -\frac{\lr{r_{\perp}^3 e^{i\phi}}}{\lr{r_{\perp}^3}}\;,\;\;\;{\bm \epsilon}_n\equiv\varepsilon_ne^{in\Phi_n} = - \frac{\lr{r_{\perp}^n e^{in\phi}}}{\lr{r_{\perp}^n}}\;\mathrm{for}\; n>1\;.
\end{align}
In this case, the leading-order expressions for $\delta d_{\perp}$ and the eccentricity are $\delta d_{\perp}/d_{\perp} = \delta_d + \sum_{m=2}^{4}p_{0;m}\beta_m$ and ${\bm \epsilon}_n \approx {\bm \epsilon}_{n;0} + \sum_{m=2}^{4} {\bm p}_{n;m}(\Omega_1,\Omega_2)\beta_m$, respectively. The variances then have the following more general form
\begin{align}\label{eq:10}
\lr{\left(\frac{\delta d_{\perp}}{d_{\perp}}\right)^2} \approx \lr{\delta_d^2} + \sum_{m,m'}\lr{p_{0;m}p_{0;m'}}\beta_m\beta_{m'}\;,\;
\lr{\varepsilon_n^2}\approx \lr{\varepsilon_{n;0}^2} + \sum_{m,m'} \lr{{\bm p}_{n;m}{\bm p}^*_{n;m'}}\beta_m\beta_{m'}\;.
\end{align}
Note that the off-diagonal coefficients $\lr{p_{0;m}p_{0;m'}}_{m\neq m'}$ and $\lr{{\bm p}_{n;m}{\bm p}^*_{n;m'}}_{m\neq m'}$ may not vanish, especially in non-central collisions. These mixing contributions have been observed in our previous study of $\lr{\varepsilon_n^2}$~\cite{Jia:2021wbq}, and are expected to influence all other cumulants discussed above. We leave this interesting topic to a future study.
For a more quantitative estimation, we consider the liquid-drop model of nucleus where the nucleon density distribution has a sharp surface $\rho(r,\theta,\phi)=\rho_0$ when $r<R(\theta,\phi)$ and zero beyond. We limit the discussion to head-on collisions with nearly maximum overlap, i.e. the two nuclei not only have zero impact parameter, but are also aligned $\Omega_1=\Omega_2$ to ensure the overlap region contains all the nucleons $N_{\mathrm{part}}=2A$. In this case it is easy to show (see Ref.~\cite{Jia:2021tzt} and Appendix~\ref{sec:app1})
\begin{align}\label{eq:11}
\frac{\delta d_{\perp}}{d_{\perp}}=\sqrt{\frac{5}{16 \pi}} \beta_{2}\left(\cos \gamma D_{0,0}^{2}+\frac{\sin \gamma}{\sqrt{2}}\left[D_{0,2}^{2}+D_{0,-2}^{2}\right]\right)\;,\;{\bm \epsilon}_{2}=-\sqrt{\frac{15}{2 \pi}} \beta_{2}\left(\cos \gamma D_{2,0}^{2}+\frac{\sin \gamma}{\sqrt{2}}\left[D_{2,2}^{2}+D_{2,-2}^{2}\right]\right)\;,
\end{align}
where the $D^{l}_{m,m'}(\Omega)$ is the Wigner matrix. Similar expressions often appear, unsurprisingly, in the nuclear structure literature when discussing the projection of the quadrupole moment of an ellipsoid onto the laboratory frame (see for example Ref.~\cite{Niksic:2011sg}), except with a different coefficient. This expression gives the probability density distributions of $\delta d_{\perp}/d_{\perp}$ that are shown in the bottom row of Fig.~\ref{fig:1} (the distribution for the prolate case was previously derived in a different context~\cite{Alhassid:2014fca}). From these, we can easily integrate to obtain the expressions for cumulants of any order, e.g.:
\begin{align}\nonumber
\left\langle\left(\frac{\delta d_{\perp}}{d_{\perp}}\right)^{2}\right\rangle&=\beta_{2}^{2} \frac{5}{16 \pi} \int\left(\sum_{m} \alpha_{2, m} D_{0, m}^{2}\right)^{2} \frac{d \Omega}{8 \pi^{2}}=\frac{1}{16 \pi} \beta_{2}^{2}\;,\;\; \alpha_{2,0}\equiv \cos\gamma,\;\alpha_{2,\pm2}\equiv \frac{\sin\gamma}{\sqrt{2}}, \\\nonumber
\left\langle\left(\frac{\delta d_{\perp}}{d_{\perp}}\right)^{3}\right\rangle&=\beta_{2}^{3}\left(\frac{5}{16 \pi}\right)^{3/2} \int\left(\sum_{m} \alpha_{2, m} D_{0, m}^{2}\right)^{3} \frac{d \Omega}{8 \pi^{2}}=\frac{\sqrt{5}}{224 \pi^{3/2}} \cos (3 \gamma) \beta_{2}^{3} \\\label{eq:12}
\left\langle\varepsilon_{2}^{2} \frac{\delta d_{\perp}}{d_{\perp}}\right\rangle&=\beta_{2}^{3} \frac{15}{2 \pi} \sqrt{\frac{5}{16\pi}} \int\left(\sum_{m} \alpha_{2, m} D_{2, m}^{2}\right)\left(\sum_{m} \alpha_{2, m} D_{2, m}^{2}\right)^{*}\left(\sum_{m} \alpha_{2, m} D_{0, m}^{2}\right) \frac{d \Omega}{8 \pi^{2}}=-\frac{3 \sqrt{5}}{28 \pi^{3 / 2}} \cos (3 \gamma) \beta_{2}^{3}\;.
\end{align}
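These angular averages are straightforward to verify numerically. A short Monte-Carlo sketch of Eq.~\eqref{eq:11}, written out in terms of the two Euler angles that survive in $D^2_{0,m}$ (the third drops out), is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def delta_d_over_d(beta2, gamma, n=10**6):
    # Eq. (11): d^2_{00} = (3 cos^2(theta) - 1)/2 and
    # D^2_{0,2} + D^2_{0,-2} = sqrt(3/2) sin^2(theta) cos(2 psi)
    ct = rng.uniform(-1, 1, n)          # cos(theta), uniform on sphere
    psi = rng.uniform(0, 2*np.pi, n)
    return np.sqrt(5/(16*np.pi)) * beta2 * (
        np.cos(gamma) * 0.5*(3*ct**2 - 1)
        + np.sin(gamma) * np.sqrt(3)/2 * (1 - ct**2) * np.cos(2*psi))

b2, g = 0.28, np.pi/3
x = delta_d_over_d(b2, g)
print(np.mean(x**2), b2**2/(16*np.pi))                    # variance
print(np.mean(x**3),
      np.sqrt(5)/(224*np.pi**1.5)*np.cos(3*g)*b2**3)      # skewness
\end{verbatim}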
The results for several cumulants of interest are listed in Table~\ref{tab:1}~\footnote{The expressions for the $5^{\mathrm{th}}$- and $6^{\mathrm{th}}$-order cumulants of $d_{\perp}$ are $C_d\{5\}=-\frac{15\sqrt{5}}{9856\pi^{5/2}}\cos(3\gamma)\beta_2^5$ and $C_d\{6\}=\frac{15}{7007\times512\pi^3}(113-90\cos(6\gamma))\beta_2^6$.}. Note that, if we use the transverse nucleon density $N_{\mathrm{part}}/S_{\perp}=d_{\perp}^2$ as the estimator, as done in Ref.~\cite{Schenke:2020uqq}, the $n^{\mathrm{th}}$-order cumulant would be larger by a factor of $2^{n}$. The values of the appropriately normalized cumulants are also quoted at the lower-right of the corresponding table cells. The skewness and kurtosis of $d_{\perp}$ are conventionally normalized by the variance,
\begin{align}\label{eq:13}
S_d=\frac{C_d\{3\}}{C_d\{2\}^{3/2}}\;,\;K_d=\frac{C_d\{4\}}{C_d\{2\}^{2}}\;.
\end{align}
The fourth- and sixth-order cumulants of ${\bm \epsilon}_2$ are defined by $\mathrm{nc}_{2,\epsilon}\{4\} = (\lr{\varepsilon_2^4}-2\lr{\varepsilon_2^2}^2)/\lr{\varepsilon_2^2}^2$ and $\mathrm{nc}_{2,\epsilon}\{6\} =\left(\lr{\varepsilon_2^6}-9\lr{\varepsilon_2^4}\lr{\varepsilon_2^2}+12\lr{\varepsilon_2^2}^3\right)/(4\lr{\varepsilon_2^2}^3)$, respectively. The normalization of $\lr{\varepsilon_{2}^{2}\delta d_{\perp}/d_{\perp}}$ is defined in two different ways,
\begin{align}\label{eq:14a}
&\rho_{\mathrm{orig}}(\varepsilon_2^2,\delta d_{\perp}/d_{\perp}) = \frac{\lr{\varepsilon_{2}^{2}\delta d_{\perp}/d_{\perp}}}{\sqrt{(\lr{\varepsilon_2^4}-\lr{\varepsilon_2^2}^2)\lr{\left(\delta d_{\perp}/d_{\perp}\right)^2}}}\;,\;\rho(\varepsilon_2^2,\delta d_{\perp}/d_{\perp}) = \frac{\lr{\varepsilon_{2}^{2}\delta d_{\perp}/d_{\perp}}}{\lr{\varepsilon_2^2}\sqrt{\lr{\left(\delta d_{\perp}/d_{\perp}\right)^2}}}\;.
\end{align}
The $\rho_{\mathrm{orig}}$ is the original definition known as Pearson correlation coefficient~\cite{Bozek:2016yoj,Schenke:2020uqq}, whose normalization for $\varepsilon_2$ in the denominator, from Eqs.~\eqref{eq:4} and \eqref{eq:7}, can be expressed as,
\begin{align}\label{eq:14b}
\lr{\varepsilon_2^4}-\lr{\varepsilon_2^2}^2 \equiv \lr{\varepsilon_2^2}^2+c_{2,\varepsilon}\{4\} =\lr{\varepsilon_0^4}-\lr{\varepsilon_0^2}^2 +2\lr{\varepsilon_0^2}\lr{{\bm p}_{2}{\bm p}_{2}^{*}}\beta_2^2+\left(\lr{{\bm p}_{2}^2{\bm p}_{2}^{*2}}-\lr{{\bm p}_{2}{\bm p}_{2}^{*}}^2\right)\beta_2^4\;.
\end{align}
This expression unfortunately also contains an annoying $\beta_2^2$ term that mixes nucleon fluctuations with deformation, which becomes dominant in the mid-central and peripheral collisions. The second definition, $\rho$, preferred in this paper, avoids this analytical complication. For completeness, the values of both are quoted in Table~\ref{tab:1}.
The normalization of four-particle symmetric cumulants between $\varepsilon_{2}$ and $\delta d_{\perp}$ is defined as
\begin{align}\label{eq:16}
\mathrm{nc}(\varepsilon_2^2,\left(\delta d_{\perp}/d_{\perp}\right)^2) = \frac{\lr{\varepsilon_{2}^{2}\left(\delta d_{\perp}/d_{\perp}\right)^2} -\lr{\varepsilon_{2}^{2}}\lr{\left(\delta d_{\perp}/d_{\perp}\right)^2} }{\lr{\varepsilon_{2}^{2}}\lr{\left(\delta d_{\perp}/d_{\perp}\right)^2}}\;.
\end{align}
This correlator should be measurable with a few hundred million events in a large system. Lastly, we also calculated the three-particle mixed harmonics $\lr{ {\bm \epsilon}_2^2{\bm \epsilon}_4^*}$, whose $\beta_2^4$ dependence arises because ${\bm\epsilon}_4$ has a $\beta_2^2$ dependence~\cite{Jia:2021tzt}. Interestingly, in the presence of only quadrupole deformation, we have $\lr{ {\bm \epsilon}_2^2{\bm \epsilon}_4^*}=\lr{\varepsilon_4^2}=\frac{45}{14\pi^2}\beta_2^4$. To limit the scope of this paper, we shall skip the discussion of these two observables and of the fourth- and higher-order cumulants of $\varepsilon_2$.
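In a model study where $\varepsilon_2$ and $\delta d_{\perp}/d_{\perp}$ are available event by event, the normalized quantities of Eqs.~\eqref{eq:13}, \eqref{eq:14a} and \eqref{eq:16} reduce to simple sample averages; a minimal sketch (in data, the plain averages are replaced by multi-particle correlators):
\begin{verbatim}
import numpy as np

def normalized_observables(eps2, dd):
    # eps2, dd: per-event arrays of epsilon_2 and delta d/d
    dd = dd - dd.mean()
    c2 = np.mean(dd**2)
    Sd = np.mean(dd**3) / c2**1.5                      # Eq. (13)
    Kd = (np.mean(dd**4) - 3*c2**2) / c2**2            # Eq. (13)
    e2 = np.mean(eps2**2)
    rho = np.mean(eps2**2 * dd) / (e2 * np.sqrt(c2))   # Eq. (14), rho
    nc = (np.mean(eps2**2 * dd**2) - e2*c2) / (e2*c2)  # Eq. (16)
    return Sd, Kd, rho, nc
\end{verbatim}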
The results in Table~\ref{tab:1} are obtained with the assumption $\Omega_1=\Omega_2$. In reality, the selection of UCC events naturally encompasses a wider range of rotation angles and also a finite range of $N_{\mathrm{part}}$; therefore we also study a second case which requires zero impact parameter but independent orientations for the two nuclei. Since the contributions of the two nuclei are independent, the additive nature of the cumulants implies that the value of the $n^{\mathrm{th}}$-order cumulant of an intensive quantity is reduced by a factor of $2^{n-1}$, i.e. a factor of two smaller for $C_d\{2\}$ and $\lr{\varepsilon_2^2}$, a factor of four smaller for $C_d\{3\}$, and a factor of eight smaller for $C_d\{4\}$ and $\lr{\varepsilon_2^4}-2\lr{\varepsilon_2^2}^2$, etc. The table of values for this case is provided in Tab.~\ref{tab:2}. In a realistic model study, $\Omega_1$ and $\Omega_2$ in UCC are expected to be partially aligned, and the results for these observables are expected to lie in between the values given in Table~\ref{tab:1} and Table~\ref{tab:2}.
\begin{table}[!h]
\centering
\begin{tabular}{c|c||c|c||c|c}\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\lr{(\delta d_{\perp}/d_{\perp})^2}$}} & \multicolumn{2}{c||}{\multirow{2}{*}{$\lr{(\delta d_{\perp}/d_{\perp})^3}$}} & \multicolumn{2}{c}{\multirow{2}{*}{$\lr{(\delta d_{\perp}/d_{\perp})^4}-3\lr{(\delta d_{\perp}/d_{\perp})^2}^2$}}\\
\multicolumn{2}{c||}{}&\multicolumn{2}{c||}{}&\multicolumn{2}{c}{}\\\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\frac{1}{16\pi}\beta_2^2$}} & \multirow{2}{*}{$\frac{\sqrt{5}}{224 \pi^{3/2}}\cos(3\gamma)\beta_2^3$} & \multirow{2}{*}{$\frac{2\sqrt{5}}{7}\cos(3\gamma)$} & \multirow{2}{*}{$-\frac{3}{896 \pi^{2}}\beta_2^4$} &\multirow{2}{*}{$-6/7$}\\
\multicolumn{2}{c||}{}&&&&\\\hline\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^2}$}} & \multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^4}-2\lr{\varepsilon_2^2}^2$}} & \multicolumn{2}{c}{\multirow{2}{*}{$\left(\lr{\varepsilon_2^6}-9\lr{\varepsilon_2^4}\lr{\varepsilon_2^2}+12\lr{\varepsilon_2^2}^3\right)/4$}}\\
\multicolumn{2}{c||}{}&\multicolumn{2}{c||}{}&\multicolumn{2}{c}{}\\\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\frac{3}{2\pi}\beta_2^2$}} & \multirow{2}{*}{$-\frac{9}{7\pi^2}\beta_2^4$} & \multirow{2}{*}{$-4/7$} & \multirow{2}{*}{$\frac{27(373-25\cos(6\gamma))}{8008\pi^{3}}\beta_2^6$} & \multirow{2}{*}{$\frac{373-25\cos(6\gamma)}{1001}$} \\
\multicolumn{2}{c||}{}&&&&\\\hline\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$}} & \multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})^2}-\lr{\varepsilon_2^2}\lr{(\delta d_{\perp}/d_{\perp})^2}$}} & \multicolumn{2}{c}{\multirow{2}{*}{$\lr{ {\bm \epsilon}_2^2{\bm \epsilon}_4^*}$}}\\
\multicolumn{2}{c||}{}&\multicolumn{2}{c||}{}&\multicolumn{2}{c}{}\\\hline
\multirow{2}{*}{$-\frac{3 \sqrt{5}}{28\pi^{3/2}} \cos(3\gamma)\beta_2^3$}&\multirow{2}{*}{$-\frac{2\sqrt{5}}{7}\cos(3\gamma)$,$-\sqrt{\frac{20}{21}} \cos(3\gamma)$}& \multirow{2}{*}{$-\frac{3}{112\pi^2}\beta_2^4$}& \multirow{2}{*}{$-1/4$} & \multicolumn{2}{c}{\multirow{2}{*}{$\frac{45}{14\pi^2}\beta_2^4$}}\\
&&&& \multicolumn{2}{c}{}\\\hline
\end{tabular}
\caption{\label{tab:1} The values of various cumulants of $\varepsilon_2$ and $d_{\perp}$, calculated for a nucleus with a sharp surface by setting $a_0=0$ in Eq.~\eqref{eq:1}. The two nuclei are placed at zero impact parameter and the results are obtained by averaging over common random orientations. For many observables, we also provide the values after normalizing with the second-order cumulants (listed as the numbers at the bottom-right of the cell). In the case of $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$, we provide the values of $\rho$ (the first number) and $\rho_{\mathrm{orig}}$ (the second number) at the bottom-right of the cell.}
\end{table}
\begin{table}[!h]
\centering
\begin{tabular}{c|c||c|c||c|c}\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\lr{(\delta d_{\perp}/d_{\perp})^2}$}} & \multicolumn{2}{c||}{\multirow{2}{*}{$\lr{(\delta d_{\perp}/d_{\perp})^3}$}} & \multicolumn{2}{c}{\multirow{2}{*}{$\lr{(\delta d_{\perp}/d_{\perp})^4}-3\lr{(\delta d_{\perp}/d_{\perp})^2}^2$}}\\
\multicolumn{2}{c||}{}&\multicolumn{2}{c||}{}&\multicolumn{2}{c}{}\\\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\frac{1}{32\pi}\beta_2^2$}} & \multirow{2}{*}{$\frac{\sqrt{5}}{896 \pi^{3/2}}\cos(3\gamma)\beta_2^3$} & \multirow{2}{*}{$\frac{\sqrt{10}}{7}\cos(3\gamma)$} & \multirow{2}{*}{$-\frac{3}{7168 \pi^{2}}\beta_2^4$} &\multirow{2}{*}{$-3/7$}\\
\multicolumn{2}{c||}{}&&&&\\\hline\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^2}$}} & \multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^4}-2\lr{\varepsilon_2^2}^2$}} & \multicolumn{2}{c}{\multirow{2}{*}{$\left(\lr{\varepsilon_2^6}-9\lr{\varepsilon_2^4}\lr{\varepsilon_2^2}+12\lr{\varepsilon_2^2}^3\right)/4$}}\\
\multicolumn{2}{c||}{}&\multicolumn{2}{c||}{}&\multicolumn{2}{c}{}\\\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\frac{3}{4\pi}\beta_2^2$}} & \multirow{2}{*}{$-\frac{9}{56\pi^2}\beta_2^4$} & \multirow{2}{*}{$-2/7$} & \multirow{2}{*}{$\frac{27(373-25\cos(6\gamma))}{32\times 8008\pi^{3}}\beta_2^6$} & \multirow{2}{*}{$\frac{373-25\cos(6\gamma)}{4004}$} \\
\multicolumn{2}{c||}{}&&&&\\\hline\hline
\multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$}} & \multicolumn{2}{c||}{\multirow{2}{*}{$\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})^2}-\lr{\varepsilon_2^2}\lr{(\delta d_{\perp}/d_{\perp})^2}$}} & \multicolumn{2}{c}{\multirow{2}{*}{$\lr{ {\bm \epsilon}_2^2{\bm \epsilon}_4^*}$}}\\
\multicolumn{2}{c||}{}&\multicolumn{2}{c||}{}&\multicolumn{2}{c}{}\\\hline
\multirow{2}{*}{$-\frac{3 \sqrt{5}}{112\pi^{3/2}} \cos(3\gamma)\beta_2^3$}&\multirow{2}{*}{$-\frac{\sqrt{10}}{7}\cos(3\gamma)$,$-\sqrt{\frac{2}{7}} \cos(3\gamma)$}& \multirow{2}{*}{$-\frac{3}{896\pi^2}\beta_2^4$}& \multirow{2}{*}{$-1/8$} & \multicolumn{2}{c}{\multirow{2}{*}{$\frac{45}{56\pi^2}\beta_2^4$}}\\
&&&& \multicolumn{2}{c}{}\\\hline
\end{tabular}
\caption{\label{tab:2} Same calculation as Table~\ref{tab:1}, except assuming independent random orientations for the two nuclei.}
\end{table}
A few remarks are in order. The skewnesses $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ and $\lr{\left(\delta d_{\perp}/d_{\perp}\right)^3}$ show clear sensitivity to the triaxiality in the form of a characteristic $\cos(3\gamma)$ dependence, but with opposite signs. Therefore, if the deformation is varied from prolate to oblate, the value of $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ is expected to change from negative to positive, while the value of $\lr{(\delta d_{\perp}/d_{\perp})^3}$ is expected to change from positive to negative. In particular, the normalized skewnesses $\rho$ and $S_{\mathrm{d}}$, defined in Eqs.~\eqref{eq:14a} and \eqref{eq:13}, have equal magnitudes, suggesting a comparable sensitivity to the triaxiality. Secondly, all two- and four-particle correlators have no explicit $\gamma$ dependence, while the six-particle eccentricity cumulant contains a small $\cos(6\gamma)$ modulation. An interesting case is the normalized fourth-order cumulant of $\varepsilon_2$, $\mathrm{nc}_{2}\{4\}=\lr{v_2^4}/\lr{v_2^2}^2-2=-2/7$. Assuming the linear response relation $v_2\{2k\}=k_2\varepsilon_2\{2k\}$, one would expect a large four-particle flow signal, which in the presence of large $\beta_2$ scales like $v_2\{4\}/v_2\{2\} =\varepsilon_2\{4\}/\varepsilon_2\{2\} = (-\mathrm{nc}_{2}\{4\})^{1/4}=0.73$. This naturally explains the much larger $v_2\{4\}$ value in $^{238}$U+$^{238}$U collisions than in $^{197}$Au+$^{197}$Au collisions, due to the large $\beta_2$ of the $^{238}$U nucleus~\cite{Adamczyk:2015obl}. Another interesting case is the negative value of the four-particle cumulant $\mathrm{nc}(\varepsilon_2^2,\left(\delta d_{\perp}/d_{\perp}\right)^2)$, whose value is found to be positive in the UCC region without nuclear deformation effects. Therefore, in the presence of a large $\beta_2$, we expect its value to decrease to a value between $-1/4$ and $-1/8$ in the UCC region.
\section{Setup of the Glauber model and the AMPT model}\label{sec:3}
For a more realistic estimation of the influence of nuclear deformation, a Monte-Carlo Glauber model~\cite{Miller:2007ri} is used to simulate collisions of the $^{238}$U and $^{96}$Zr systems. These systems are chosen because the experimental collision data already exist. The setup of the model and the data used in this analysis are exactly the same as those used in our previous work~\cite{Jia:2021tzt}. The nucleons are assumed to have a hard core of 0.4 fm in radius, with a density described by Eq.~\eqref{eq:1}. The nuclear radius $R_0$ and the surface thickness $a_0$ are chosen to be $R_0=6.81$~fm and $a_0=0.55$~fm for $^{238}$U and $R_0=5.09$~fm and $a_0=0.52$~fm for $^{96}$Zr, respectively. The nucleon-nucleon inelastic cross-section is chosen to be $\sigma_{\mathrm{nn}}=42$~mb at $\mbox{$\sqrt{s_{\mathrm{NN}}}$}=200$ GeV. In each collision event, nucleons are generated in each nucleus at a random impact parameter. Each nucleus is then rotated by randomly generated Euler angles before they are set on a straight-line trajectory towards each other along the $z$ direction. From this, the nucleons in the overlap region are identified, which are used to calculate the $\varepsilon_2$ and $d_{\perp}$ defined in Eqs.~\eqref{eq:2} and \eqref{eq:2b}, and the results are studied as a function of $N_{\mathrm{part}}$. Most of the study focuses on the influence of the quadrupole deformation, but we also performed a limited study of the influence on the observables from octupole and hexadecapole deformations, for which additional axially symmetric components are added to the nuclear surface (see Eq.~\eqref{eq:1b}). A special study is performed to also investigate the presence of multiple shape components, in which two or three of $\beta_2$, $\beta_3$ and $\beta_4$ are simultaneously non-zero.
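A minimal sketch of this event generation, reduced to its essential steps (rejection sampling of the deformed Woods-Saxon profile, random rotations, and black-disk nucleon-nucleon collisions at $b=0$; the hard-core repulsion and other details of the full model are omitted), is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def sample_nucleus(A, R0, a0, beta2, gamma):
    # Rejection-sample A nucleon positions from Eq. (1)
    pts, rmax = [], R0 + 10*a0
    while len(pts) < A:
        x = rng.uniform(-rmax, rmax, 3)
        r = np.linalg.norm(x)
        if r < 1e-6: continue
        ct, phi = x[2]/r, np.arctan2(x[1], x[0])
        Y20 = np.sqrt(5/(16*np.pi)) * (3*ct**2 - 1)
        Y22 = np.sqrt(15/(16*np.pi)) * (1 - ct**2) * np.cos(2*phi)
        R = R0 * (1 + beta2*(np.cos(gamma)*Y20 + np.sin(gamma)*Y22))
        if rng.random() < 1/(1 + np.exp((r - R)/a0)):
            pts.append(x)
    return np.array(pts)

def random_rotation():
    # Haar-random rotation from the QR decomposition of a Gaussian matrix
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))     # fix column signs for uniformity
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0          # enforce a proper rotation, det = +1
    return q

def event(A=238, R0=6.81, a0=0.55, beta2=0.28, gamma=0.0, sig_nn=4.2):
    # One b = 0 event; sig_nn = 42 mb = 4.2 fm^2 (black-disk collisions)
    nA = sample_nucleus(A, R0, a0, beta2, gamma) @ random_rotation().T
    nB = sample_nucleus(A, R0, a0, beta2, gamma) @ random_rotation().T
    d2 = ((nA[:, None, :2] - nB[None, :, :2])**2).sum(-1)
    hit = d2 < sig_nn/np.pi
    xy = np.vstack([nA[hit.any(1), :2], nB[hit.any(0), :2]])
    xy -= xy.mean(0)                          # center-of-mass frame
    z = xy[:, 0] + 1j*xy[:, 1]
    eps2 = -np.mean(z**2) / np.mean(abs(z)**2)         # Eq. (2)
    zr = z * np.exp(-0.5j*np.angle(np.mean(z**2)))     # principal frame
    S = np.pi*np.sqrt((zr.real**2).mean()*(zr.imag**2).mean())
    return abs(eps2), np.sqrt(len(xy)/S), len(xy)  # eps2, d_perp, Npart
\end{verbatim}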
It is well known that particle productions in nucleus-nucleus collisions scale only approximately with $N_{\mathrm{part}}$. A better scaling can be achieved by considering the constituent quarks as effective degrees-of-freedom for particle production~\cite{Adler:2013aqf,Lacey:2016hqy,Loizides:2016djv,Bozek:2016kpf,Acharya:2018hhy}, which would naturally give rise to different $\varepsilon_2$ and $d_{\perp}$ in each event. Defining centrality with constituent quarks is also expected to change the fluctuations of eccentricity~\cite{Zhou:2018fxx}, and provides a way to quantify the centrality smearing effects (also known as volume fluctuations). For this purpose, a quark Glauber model from Ref.~\cite{Loizides:2016djv} is used. Three quark constituents are generated for each nucleon according to the ``mod'' configuration~\cite{Mitchell:2016jio}, which ensures that the radial distribution of the three constituents after re-centering follows the proton form factor $\rho_{\mathrm{proton}}(r) = e^{-r/r_0}$ with $r_0=0.234$~fm~\cite{DeForest:1966ycn}. The value of quark-quark cross-section is chosen to be $\sigma_{\mathrm{qq}}=8.2$~mb in order to match the $\sigma_{\mathrm{nn}}$. The $\varepsilon_2$ and $d_{\perp}$ are then calculated from the list of quark participants in the overlap region, $d_{\perp}=\sqrt{N_{\mathrm{quark}}/S_{\perp}}$, and the number of quark participants $N_{\mathrm{quark}}$ is used as an alternative centrality estimator.
In the presence of large deformation, the total volume of the nucleus increases slightly. Considering the quadrupole deformation only, for the largest value considered, $\beta_2=0.34$, the ratio to the original volume is approximately (exact for sharp surface nucleus) $1+\frac{3}{4\pi}\beta_2^2+\frac{\sqrt{5}}{28\pi^{3/2}}\cos(3\gamma)\beta_2^3=1.021+0.0004\cos(3\gamma)$. In order to keep the overall volume fixed, it would require less than 1\% decrease of the $R_0$, which is safely ignored in the present study.
The results for each cumulant observable are obtained in four different ways. Taking the variance $\lr{(\delta d_{\perp}/d_{\perp})^2}$ for instance, $d_{\perp}$ in each event is calculated either from nucleons or from quarks in the Glauber model, after which the averaging ``$\lr{}$'' is performed over events with the same $N_{\mathrm{part}}$ or the same $N_{\mathrm{quark}}$. The latter can produce different variances due to slightly different volume fluctuations, which can be quite important in the UCC region. Each cumulant can thus be obtained either from nucleons or from quarks and plotted as a function of $N_{\mathrm{part}}$ or $N_{\mathrm{quark}}$.
To understand the conversion from $\varepsilon_2$ and $d_{\perp}$ in the initial overlap to $v_2$ and $[\pT]$ of the final-state particles, the popular event generator ``a multi-phase transport model'' (AMPT)~\cite{Lin:2004en} is used, which is a realistic yet computationally efficient way to implement hydrodynamic response. The model starts with Monte Carlo Glauber initial conditions. The system evolution is modeled with strings that first melt into partons, followed by elastic partonic scatterings, parton coalescence, and hadronic scatterings. The collectivity is generated mainly through elastic scatterings of partons, which leads to an emission of partons preferable along the gradient of the initial-state energy density distribution, in a manner that is similar to hydrodynamic flow.
The AMPT model has been demonstrated to qualitatively describe the harmonic flow $v_n$ in $p$+A and A+A collisions~\cite{Xu:2011jm,Xu:2011fe}, so it can be used to predict the $\beta_2$ dependence of $v_n$. A previous study has demonstrated a robust simple quadratic dependence $\lr{v_2^2}=a+b\beta_2^2$ in the final state as a result of a linear response to a similar dependence in the initial state, $\lr{\varepsilon_2^2}=a'+b'\beta_2^2$~\cite{Giacalone:2021udy,Jia:2021wbq}. However, this model is known to have the wrong hydrodynamic response for the radial flow, i.e. the centrality dependence of the average transverse momentum $\llrr{\pT}\equiv\lr{[\pT]}$ and the variance $\lr{(\delta [\pT])^2}$ do not describe the experimental data~\cite{Ma:2016fve,Jia:2021wbq}. A recent modification of the model~\cite{Zhang:2021vvp} fixed the problem with the $\llrr{\pT}$, but the value of $\lr{(\delta [\pT])^2}$ is still more than a factor of 3 lower than the STAR data~\cite{Adam:2019rsf,jjia}~\footnote{Hydrodynamic model simulation based on the Trento initial condition~\cite{Giacalone:2020lbm} predicts a much larger $[\pT]$ fluctuation, but with very little sensitivity to $\beta_2$.}. This implies that the response of $[\pT]$ to $d_{\perp}$ in AMPT is much weaker than the experimental finding; this is probably also the reason why the model quantitatively underestimates the sign-change of $\lr{v_2^2\delta[\pT]}$ in U+U collisions associated with the large prolate deformation of the Uranium nucleus, observed in the STAR data~\cite{jjia}. Nevertheless, we can still study the parametric $(\beta_2,\gamma)$ dependence of $\lr{v_2^2\delta[\pT]}$ and at least compare with the trend of $\lr{\varepsilon_2^2\delta d_{\perp}}$, since the response of $v_2$ is still correct. This unfortunately cannot be said about the cumulants of $[\pT]$ fluctuations; more realistic hydrodynamic model studies are required in the future.
Following Refs.~\cite{Ma:2014pva,Bzdak:2014dia,Nie:2018xog}, we use the AMPT model v2.26t5 in string-melting mode with a partonic cross section of 3.0~mb, which we have checked to reasonably reproduce the Au+Au $v_2$ data at RHIC. The Woods-Saxon parameters in AMPT are chosen to be $R_0=6.81$~fm and $a=0.54$~fm, similar to Ref.~\cite{Heinz:2004ir}, but with different fixed values of $(\beta_2, \gamma)$. The $v_2$ and $[\pT]$ are calculated with all hadrons with $0.2<\pT<2$ GeV and $|\eta|<2$, and the event centrality is defined using either $N_{\mathrm{part}}$ or the inclusive hadron multiplicity in $|\eta|<2$, $N_{\mathrm{hadron}}$. The value of $N_{\mathrm{hadron}}$, which includes both charged and neutral particles, is about six times the charged hadron multiplicity density, i.e. $N_{\mathrm{hadron}}\approx 6\, dN_{\mathrm{ch}}/d\eta$.
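For orientation, a minimal rejection sampler for nucleon positions from an axially deformed Woods--Saxon density is sketched below, using the $R_0$ and $a$ values quoted above. This is an illustration of the geometry only (the triaxial case adds $Y_{2,\pm2}$ terms and random Euler rotations), not the actual AMPT implementation.
\begin{verbatim}
import numpy as np

def sample_nucleons(n, beta2, R0=6.81, a=0.54, seed=None):
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n:
        r  = rng.uniform(0, 3*R0)
        ct = rng.uniform(-1, 1)                    # cos(theta)
        y20 = np.sqrt(5/(16*np.pi))*(3*ct**2 - 1)
        R = R0*(1 + beta2*y20)                     # deformed surface
        w = r**2/(1 + np.exp((r - R)/a))           # includes r^2 Jacobian
        if rng.uniform(0, (3*R0)**2) < w:          # bound: w <= (3 R0)^2
            phi = rng.uniform(0, 2*np.pi)
            st = np.sqrt(1 - ct**2)
            pts.append([r*st*np.cos(phi), r*st*np.sin(phi), r*ct])
    return np.array(pts)

uranium = sample_nucleons(238, beta2=0.28)
\end{verbatim}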
\section{Results}\label{sec:4}
To highlight the general features of the $(\beta_2,\gamma)$ dependence, Fig.~\ref{fig:2} shows the correlations between $\varepsilon_2$ and $\delta d_{\perp}/d_{\perp}=-\delta R_{\perp}/R_{\perp}$~\footnote{In principle the full expression should also contain a contribution from volume fluctuations, i.e. $\delta d_{\perp}/d_{\perp}=-\delta R_{\perp}/R_{\perp}+\frac{1}{2}\delta N_{\mathrm{part}} /N_{\mathrm{part}}$. However, the second term drops out when we classify events according to $N_{\mathrm{part}}$.} calculated with the nucleon Glauber model in 0--0.1\% most central U+U collisions selected on $N_{\mathrm{part}}$, with $\beta_2=0.28$ and different $\gamma$. They can be contrasted directly with the expectations illustrated in Fig.~\ref{fig:1}. A clear anti-correlation (positive correlation) between $\varepsilon_2$ and $\delta d_{\perp}/d_{\perp}$ is observed for the prolate (oblate) deformation as expected, after taking into account the opposite sign between $\delta d_{\perp}/d_{\perp}$ and $\delta R_{\perp}/R_{\perp}$. The distribution of $\delta d_{\perp}/d_{\perp}$ also clearly indicates a positive (negative) skewness, as expected. These distributions are much broader than the ideal case in Fig.~\ref{fig:1} due to the randomness of $\Omega_1$ relative to $\Omega_2$, surface diffuseness, smearing from nucleon position fluctuations, and smearing from the centrality selection.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{enscorr0.pdf}
\end{center}
\caption{\label{fig:2} The correlation of $\varepsilon_2$ vs $\delta d_{\perp}/d_{\perp}$ for quadrupole deformation $\beta_2=0.28$ with prolate (left panel), rigid triaxial (second panel) and oblate (third panel) shapes in 0--0.1\% most central U+U collisions selected on $N_{\mathrm{part}}$. The right panel shows the distributions of $\delta d_{\perp}/d_{\perp}$ in the three cases.}
\end{figure}
The goal of this paper is to explore the $(\beta_2,\gamma)$ dependence of the various cumulants in Tabs.~\ref{tab:1} and \ref{tab:2} to provide insights into the deformation dependence of experimental measurements. The main finding is that the $(\beta_2,\gamma)$ dependence of the $n^{\rm{th}}$-order cumulant can be described by a simple equation of the following general form
\begin{align}\label{eq:17}
a'+(b'+c'\cos(3\gamma)) \beta_2^n\;,
\end{align}
including the variance $\lr{(\delta d_{\perp}/d_{\perp})^2}$ and $\lr{\varepsilon_2^2}$, the skewness $\lr{(\delta d_{\perp}/d_{\perp})^3}$ and $\lr{\varepsilon_2^2\delta d_{\perp}/d_{\perp}}$, and the kurtosis $\lr{(\delta d_{\perp}/d_{\perp})^4}-3\lr{(\delta d_{\perp}/d_{\perp})^2}^2$ and $\lr{\varepsilon_2^4}-2\lr{\varepsilon_2^2}^2$. It is remarkable that most $\gamma$ dependences can be described by a $\cos (3\gamma)$ function, while the higher-order terms allowed by symmetry, $\cos (6\gamma)$, $\cos (9\gamma)$, etc., are very small. The coefficients $a'$, $b'$ and $c'$ are functions of centrality and collision system, but are independent of $\beta_2$ and $\gamma$. The coefficient $a'$ represents the value without deformation; it is usually a strong function of centrality and of the size of the collision system. In contrast, the values of $b'$ and $c'$ are similar between the nucleon and quark Glauber models and between U+U and Zr+Zr (i.e. independent of the collision system). They also have a rather weak dependence on event centrality. These behaviors are the result of a geometrical effect: the deformation changes the distribution of nucleons in the entire nucleus, therefore the values of $b'$ and $c'$ in each collision event depend only on the Euler angles of the two nuclei and the impact parameter, and they should be insensitive to the random fluctuations of nucleons and to the size of the collision system in the Glauber model.
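Since Eq.~\eqref{eq:17} is linear in the coefficients, $a'$, $b'$ and $c'$ can be extracted from any $(\beta_2,\gamma)$ scan by ordinary least squares. The snippet below is a sketch of this extraction; the synthetic input uses the $C_{\mathrm{d}}\{3\}$ coefficients of Tab.~\ref{tab:3} purely for illustration.
\begin{verbatim}
import numpy as np

def fit_abc(beta2, gamma, cumulant, n):
    """Fit cumulant = a' + (b' + c'*cos(3*gamma))*beta2**n."""
    A = np.column_stack([np.ones_like(beta2),
                         beta2**n,
                         np.cos(3*gamma)*beta2**n])
    coef, *_ = np.linalg.lstsq(A, cumulant, rcond=None)
    return coef   # a', b', c'

# synthetic scan points; replace by Glauber results
b2 = np.repeat([0.1, 0.2, 0.28], 3)
g  = np.tile([0.0, np.pi/6, np.pi/3], 3)
c3 = (0.006 + (1.3 + 3.0*np.cos(3*g))*b2**3)*1e-4
print(fit_abc(b2, g, c3, n=3))   # recovers a', b', c'
\end{verbatim}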
The results are organized as follows. Section~\ref{sec:41} discusses the variance of $d_{\perp}$ in detail, which is the most significant observable and easy to determine experimentally via the variance of $[\pT]$. Results for the higher-order cumulants, the skewness and kurtosis of $d_{\perp}$ fluctuations, are presented in Section~\ref{sec:42}. Section~\ref{sec:43} considers the mixed cumulant between $d_{\perp}$ and $\varepsilon_2$, which is identified as the most promising observable to constrain $\gamma$. We then summarize in Section~\ref{sec:44} the Glauber results in terms of Eq.~\eqref{eq:17} and discuss the effects of volume fluctuations, and the centrality and system dependences of the results. Lastly, results from the AMPT model are discussed in Section~\ref{sec:45}, which confirm the general trends observed in the Glauber model.
\subsection{Variance of $d_{\perp}$ fluctuations}\label{sec:41}
In the hydrodynamic picture, the variance of the $d_{\perp}$ fluctuations is proportional to the variance of the $[\pT]$ fluctuations, $C_{\mathrm{d}}\{2\}=\lr{(\delta d_{\perp}/d_{\perp})^2} \propto \lr{(\delta [\pT]/\lr{\pT})^2}$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\linewidth]{paperS_c0_cum4.pdf}
\end{center}\vspace*{-0.6cm}
\caption{\label{fig:3} The variance $\lr{(\delta d_{\perp}/d_{\perp})^2}$ for several $\beta_2$ values of prolate shape $\gamma=0$ (top row) and several $\gamma$ values with $\beta_2=0.28$ (bottom row) in U+U collisions. The left column shows the $N_{\mathrm{part}}$ dependence, where markers and lines represent $d_{\perp}$ obtained with nucleons and quarks, respectively. The middle column shows results in several centrality ranges, which follow a linear function of $\beta_2^2$ (top panel) or $\cos(3\gamma)$ (bottom panel). The right column shows the coefficients $b'$ (top) and $c'$ (bottom) as a function of centrality in U+U (black) and Zr+Zr (red) systems for $d_{\perp}$ calculated from nucleons (markers) or quarks (lines). The three vertical lines in the left column mark the locations of 2\%, 1\% and 0.2\% centrality, respectively.}
\end{figure}
\begin{figure}[h!]
\begin{center}\vspace*{-0.3cm}
\includegraphics[width=0.8\linewidth]{paperS_c0_cum4_high.pdf}
\end{center}\vspace*{-0.6cm}
\caption{\label{fig:4} The variance $\lr{(\delta d_{\perp}/d_{\perp})^2}$ for several values of $\beta_3$ (top row) and $\beta_4$ (bottom row) as a function of $N_{\mathrm{part}}$ (left column) or $\beta_n^2$ (middle column) in U+U collisions. The latter dependences can be described by a simple $a'+b'\beta_n^2$ function. The right column summarizes the values of $b'$ from the middle column as a function of centrality in U+U (black) and Zr+Zr (red) systems.}
\end{figure}
The left column of Fig.~\ref{fig:3} shows the $N_{\mathrm{part}}$ dependence of $C_{\mathrm{d}}\{2\}$ for various values of $\beta_2$ (top) or various $\gamma$ with fixed $\beta_2=0.28$ (bottom) in U+U collisions, calculated from the participating nucleons. In the absence of deformation, $C_{\mathrm{d}}\{2\}$ decreases approximately as a power-law function of $N_{\mathrm{part}}$. The presence of a large $\beta_2$ increases $C_{\mathrm{d}}\{2\}$ significantly in the UCC region, but the enhancement appears over a very broad centrality range. On the other hand, the triaxiality parameter $\gamma$ has only a small influence, as reflected by the clustering of the different curves in the bottom panel. In the same panels, we also show results calculated from quark participants as solid lines, with the same color as those calculated from nucleon participants. Only small differences are observed in the UCC region, implying that the influence of deformation is insensitive to nucleon substructure.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1\linewidth]{paperRcS_t0_cum4.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:5} Ratios of $\lr{(\delta d_{\perp}/d_{\perp})^2}$ to the default as a function of $N_{\mathrm{part}}$ for several values of $\cos(3\gamma)$ with $\beta_2=0.28$ (left column), several values of $\beta_2$ with $\cos(3\gamma)=1$ (second column), several values of $\beta_3$ (third column) and $\beta_4$ (right column) in the U+U (top row) and the Zr+Zr (bottom row) collisions. The results calculated using nucleons or quarks are shown as markers and lines, respectively. The three vertical bars around unity mark the locations of 2\%, 1\% and 0.2\% centrality, respectively.}
\end{figure}
To quantify the $(\beta_2,\gamma)$ dependencies, the $C_{\mathrm{d}}\{2\}$ values obtained at fixed $N_{\mathrm{part}}$ are averaged in narrow centrality ranges and then plotted as a function of $\beta_2^2$ or $\cos(3\gamma)$ in the middle column of Fig.~\ref{fig:3}. Clear linear trends are observed in most cases, confirming Eq.~\eqref{eq:17}~\footnote{In the 0--0.2\% centrality range we also observe a significant $\cos(6\gamma)$ component in Fig.~\ref{fig:3}, but not in the quark Glauber model.}. The slopes in the middle-top panel correspond to $b'+c'$ (since $\gamma=0$) and the slopes in the middle-bottom panel correspond to $c'\beta_2^2$. The two panels in the right column summarize the centrality dependence of the coefficients $b'$ and $c'$, respectively. They are shown for $d_{\perp}$ calculated from both nucleons and quarks in U+U and Zr+Zr collisions. The values are largest in the UCC region and decrease slowly toward the mid-central region. It is quite remarkable that the values of $b'$ and $c'$ are insensitive to subnucleon structure and are similar in both collision systems, as expected, since deformation influences the global geometry of the overlap region. The value of $c'$ is about a factor of 20--30 smaller than $b'$, but is not zero. A qualitatively similar functional form was also observed between $\lr{\varepsilon_2^2}$ and $(\beta_2,\gamma)$ in a previous study~\cite{Jia:2021tzt}.
Although the axial quadrupole distortion is the nuclear deformation of primary importance, secondary contributions from octupole and hexadecapole components often coexist and can be important in some regions of the nuclear chart~\cite{Butler:2016rmu}. It is therefore interesting to study how $d_{\perp}$ is affected by the higher-order deformations $\beta_3$ and $\beta_4$. We have performed such calculations and the results are shown in Fig.~\ref{fig:4} with a layout similar to Fig.~\ref{fig:3}. These higher-order deformations have no influence on the variance of $d_{\perp}$ in the UCC region, but a significant enhancement associated with $\beta_3$ is observed in near-central and mid-central collisions, while $\beta_4$ produces only a modest enhancement in the peripheral region. Such enhancements can be described by simple $b'\beta_3^2$ or $b'\beta_4^2$ terms, as predicted by Eq.~\eqref{eq:10}. The coefficients $b'$ are shown in the right panels. These observations are qualitatively similar to the positive correlations between $\lr{\varepsilon_2^2}$ and $\beta_3$/$\beta_4$ observed in Ref.~\cite{Jia:2021tzt} (see Figs.~12--14 therein).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\linewidth]{paperRcS_c0_cum4.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:6} Ratios of $\lr{(\delta d_{\perp}/d_{\perp})^2}$ for U+U (top row) and Zr+Zr (bottom row) collisions, relative to the undeformed case, as a function of $N_{\mathrm{part}}$ for different combinations of $\beta_2$, $\beta_3$ and $\beta_4$ as indicated in the top panels; only axial deformations are considered. The left column shows results with a small $\beta_2=0.1$, while the right column shows results with a large $\beta_2=0.28$. Both $d_{\perp}$ and centrality are calculated with $N_{\mathrm{part}}$.}
\end{figure}
To better visualize and quantify the effects of deformation, Fig.~\ref{fig:5} shows the ratios of $C_{\mathrm{d}}\{2\}$ to the default in U+U (top row) and Zr+Zr (bottom row) collisions. The results in the top row are obtained directly from the data in the left columns of Figs.~\ref{fig:3} and \ref{fig:4}. These results can be used to predict the ratios of $\lr{(\delta [\pT]/[\pT])^2}$ between two systems with similar mass number but different deformation parameters. Most trends are obvious, but the results for the different $\gamma$ cases deserve some discussion. The separation between the different $\gamma$ cases increases linearly with $N_{\mathrm{part}}$, reaching a maximum around 2\% centrality, and then decreases and turns to the opposite direction in the UCC region. The maximum relative difference is about 3--4\%, which is about twice the influence of $\gamma$ on $\varepsilon_2$~\cite{Jia:2021tzt}. As mentioned in Appendix~\ref{sec:app1}, this $\gamma$ dependence may arise from the higher-order expansion of $\delta d_{\perp}/d_{\perp}$ in powers of $\beta_2$~\footnote{The derivation in Appendix~\ref{sec:app1} suggests $\lr{(\delta d_{\perp}/d_{\perp})^2} \approx a'+b'\beta_2^2+c'\cos(3\gamma)\beta_2^3$ instead of Eq.~\eqref{eq:17}. In fact, the data in the top-middle panel of Fig.~\ref{fig:3} do suggest a $\beta_2^3$ component. In practice, however, we find that Eq.~\eqref{eq:17} works rather well for the Glauber model.}. Later, we shall see that this small residual $\gamma$ dependence is important for the kurtosis of the $d_{\perp}$ fluctuations.
It is also interesting to study how the fluctuations of $d_{\perp}$ depend on the simultaneous presence of quadrupole and higher-order deformations, and in particular whether the contribution from each component to $d_{\perp}$ is independent of the others. For this exploratory study, only combinations of the axially symmetric components $Y_{n,0}, n=2,3,4$ are considered. The analysis is carried out for different combinations of $(\beta_2,\beta_3,\beta_4)$ from the values $\beta_2=\pm0.1,0$, $\beta_3=0.1,0$ and $\beta_4=0.1,0$, and the results are shown in the left column of Fig.~\ref{fig:6}. The contributions from different deformation components are not fully independent of each other. In particular, the influence of $\beta_4$, and to some extent also $\beta_3$, is enhanced when $\beta_2$ is non-zero in non-central collisions. This suggests that the mixing between different deformations, i.e. terms such as $\beta_2\beta_4$, $\beta_2\beta_3$ and $\beta_3\beta_4$ in Eq.~\eqref{eq:10}, is important, but these non-linear effects are always very small in the UCC region. The right column of Fig.~\ref{fig:6} considers a different scenario, where the quadrupole component is much larger than the octupole and hexadecapole; for this case, we increase $\beta_2$ to 0.28. Similar conclusions can be drawn.
\subsection{Skewness and kurtosis of $d_{\perp}$ fluctuations}\label{sec:42}
Figure~\ref{fig:7} shows the results for the skewness $C_{\mathrm{d}}\{3\}=\lr{(\delta d_{\perp}/d_{\perp})^3}$, which is directly related to the skewness of the transverse momentum fluctuations $\lr{(\delta [\pT]/\lr{\pT})^3}$, for different values of $\beta_2$ and $\gamma$, with a layout similar to Fig.~\ref{fig:3}. The $N_{\mathrm{part}}$ dependence in the left column shows dramatic differences between different parameter values across a broad centrality range. In particular, $C_{\mathrm{d}}\{3\}$ for large $\beta_2$ is nearly constant from mid-central to central collisions, a salient feature observed in the skewness of $[\pT]$ fluctuations in the U+U data by the STAR collaboration~\cite{jjia}. The bottom panel also shows that $C_{\mathrm{d}}\{3\}$ is largest for the prolate deformation $\cos(3\gamma)=1$ and smallest for the oblate deformation $\cos(3\gamma)=-1$. In the latter case, $C_{\mathrm{d}}\{3\}$ actually changes sign and becomes negative in central collisions. The $C_{\mathrm{d}}\{3\}$ values are plotted as a function of $\beta_2^3$ or $\cos(3\gamma)$ in the middle panels, where clear linear dependencies, described by Eq.~\eqref{eq:17}, are observed.
The right panels show the centrality dependence of the coefficients $b'$ and $c'$ for the various cases. The results are similar between U+U and Zr+Zr collisions, but the values of $b'$ obtained from the quark Glauber model are systematically larger, especially towards more peripheral collisions. The values of $c'$ are larger than $b'$ in the 0--10\% most central collisions, and smaller than $b'$ in mid-central and peripheral collisions. This should be contrasted with the expectation of the liquid-drop model, which predicts $b'=0$ in the UCC region. The strong sensitivity to $\gamma$ suggests that the skewness of the $[\pT]$ fluctuation is an excellent probe of the triaxiality of the colliding nuclei. For the smaller Zr+Zr collision system, we do not observe a sign change from prolate to oblate deformation even with $\beta_2=0.28$ (see Fig.~\ref{fig:app2} in Appendix~\ref{sec:app2}).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\linewidth]{paperS_c0_cum8.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:7} The skewness $\lr{(\delta d_{\perp}/d_{\perp})^3}$ for several $\beta_2$ values of prolate shape $\gamma=0$ (top row) and several $\gamma$ values with $\beta_2=0.28$ (bottom row). The left column shows the $N_{\mathrm{part}}$ dependence, where markers and lines correspond to $d_{\perp}$ obtained with nucleons and quarks, respectively. The middle column shows the respective results in several centrality ranges based on $N_{\mathrm{part}}$, which can mostly be described by a linear function of $\beta_2^3$ (top panel) or $\cos(3\gamma)$ (bottom panel) according to Eq.~\eqref{eq:17}. The right column summarizes the extracted coefficients $b'$ (top) and $c'$ (bottom) as a function of centrality in U+U (black) and Zr+Zr (red) systems calculated from nucleons (markers) or quarks (lines). The three vertical lines in the left column mark the locations of 2\%, 1\% and 0.2\% centrality, respectively.}
\end{figure}
A similar study is performed for the kurtosis $C_{\mathrm{d}}\{4\}=\lr{(\delta d_{\perp}/d_{\perp})^4}-3\lr{(\delta d_{\perp}/d_{\perp})^2}^2$ in Fig.~\ref{fig:8}, which can be used to provide guidance on the behavior of the kurtosis of transverse momentum fluctuations $\lr{(\delta [\pT]/\lr{\pT})^4}-3\lr{(\delta [\pT]/\lr{\pT})^2}^2$. For large prolate deformation (top row), $C_{\mathrm{d}}\{4\}$ changes sign in the UCC region. It also shows a strong dependence on $\gamma$ (bottom row), i.e. $C_{\mathrm{d}}\{4\}$ becomes more negative from prolate to oblate deformation. These dependencies can again be parameterized according to Eq.~\eqref{eq:17}. The centrality dependences of the extracted coefficients $b'$ and $c'$ are shown in the right panels. Besides the similarity between U+U and Zr+Zr, we find $b'\approx -c'$ in the case of the nucleon Glauber model, while the magnitude of $b'$ is much smaller in the quark Glauber model. This observation differs from the naive estimate of $c'=0$ from the liquid-drop model. Its origin is related to the small $\cos(3\gamma)$ dependence of $C_{\mathrm{d}}\{2\}$, which will be discussed later.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\linewidth]{paperS_c0_cum9.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:8} The kurtosis of $d_{\perp}$ for several $\beta_2$ values of prolate shape $\gamma=0$ (top row) and several $\gamma$ values with $\beta_2=0.28$ (bottom row). The left column shows the $N_{\mathrm{part}}$ dependence, where markers and lines correspond to $d_{\perp}$ obtained with nucleons and quarks, respectively. The middle column shows the respective results in several centrality ranges based on $N_{\mathrm{part}}$, which can mostly be described by a linear function of $\beta_2^4$ (top panel) or $\cos(3\gamma)$ (bottom panel). The right column shows the centrality dependence of the extracted $b'$ (top) and $c'$ (bottom) according to Eq.~\eqref{eq:17} in U+U (black) and Zr+Zr (red) systems for $d_{\perp}$ calculated from nucleons (markers) or quarks (lines). The vertical lines in the left column mark the locations of 2\%, 1\% and 0.2\% centrality, respectively.}
\end{figure}
The behavior of the high-order cumulants is often analyzed in terms of cumulant ratios. In the independent-source picture and without deformation, the cumulants of intensive quantities scale approximately as $C_{\mathrm{d}}\{k\}\sim 1/N_{\mathrm{part}}^{k-1}$. The normalized skewness $S_{\mathrm{d}}$ and normalized kurtosis $K_{\mathrm{d}}$ in Eq.~\eqref{eq:13} are then naively expected to scale as $S_{\mathrm{d}} \sim 1/\sqrt{N_{\mathrm{part}}}$ and $K_{\mathrm{d}} \sim 1/N_{\mathrm{part}}$. The Glauber model results using $N_{\mathrm{part}}$-based event averaging in Fig.~\ref{fig:7} show a clear deviation from this scaling expectation, although results obtained using $N_{\mathrm{quark}}$-based event averaging are closer to it. The presence of nuclear deformation is expected to cause further deviations from this baseline. The top row of Fig.~\ref{fig:9} shows $S_{\mathrm{d}}$ (left two panels) and $K_{\mathrm{d}}$ (right two panels) as a function of $N_{\mathrm{part}}$ for various $\beta_2$ and $\gamma$ values. In the presence of a large $\beta_2$, the values of $S_{\mathrm{d}}$ are greatly enhanced, while the values of $K_{\mathrm{d}}$ decrease more strongly and even change sign in the UCC region. As one varies $\gamma$ from prolate to oblate at fixed $\beta_2=0.28$, the trend of $S_{\mathrm{d}}$ changes from an increase with $N_{\mathrm{part}}$ into a decrease with $N_{\mathrm{part}}$, while $K_{\mathrm{d}}$ decreases nearly linearly with $N_{\mathrm{part}}$ with an increasingly larger slope. The results for $K_{\mathrm{d}}$ suggest a quite sizable $\cos(3\gamma)$ component on the order of 0.1--0.2. As mentioned earlier, its origin is related to the residual $\cos(3\gamma)$ dependence of $C_{\mathrm{d}}\{2\}$ in Fig.~\ref{fig:5}. This small $\gamma$ dependence at the level of $\Delta C_{\mathrm{d}}\{2\}/C_{\mathrm{d}}\{2\}\sim\pm0.03$ propagates to the kurtosis as $\Delta K_{\mathrm{d}} =\frac{\Delta C_{\mathrm{d}}\{4\}}{C_{\mathrm{d}}\{2\}^2}-6\frac{\Delta C_{\mathrm{d}}\{2\}}{C_{\mathrm{d}}\{2\}} \approx -4\frac{\Delta C_{\mathrm{d}}\{2\}}{C_{\mathrm{d}}\{2\}} \sim \mp0.12$, consistent with the observation.
The normalized skewness $S_{\mathrm{d}}$ and kurtosis $K_{\mathrm{d}}$, while easier to construct experimentally, mix up the contributions from nucleon fluctuations and nuclear deformation, which precludes a direct and intuitive interpretation of the results. We therefore propose a modified form of the normalized cumulants,
\begin{align}\label{eq:18}
S_{\mathrm{d,sub}} \equiv \frac{C_{\mathrm{d}}\{3\}-C_{\mathrm{d}}\{3\}_{|\beta_2=0}}{(C_{\mathrm{d}}\{2\}-C_{\mathrm{d}}\{2\}_{|\beta_2=0})^{3/2}} \equiv S_{\mathrm{d}}(\beta_2=\infty) \;,\;K_{\mathrm{d,sub}} \equiv \frac{C_{\mathrm{d}}\{4\}-C_{\mathrm{d}}\{4\}_{|\beta_2=0}}{(C_{\mathrm{d}}\{2\}-C_{\mathrm{d}}\{2\}_{|\beta_2=0})^{2}} \equiv K_{\mathrm{d}}(\beta_2=\infty)\;.
\end{align}
With this definition, the baseline contributions are subtracted in the numerator and the denominator, and the $\beta_2$ dependence is expected to cancel. The final results contain only the $\cos(3\gamma)$ dependence and can be compared directly with the normalized quantities in Tables~\ref{tab:1} and \ref{tab:2}. Another important point is that the values of the normalized cumulants are expected to lie between two limits
\begin{align}\label{eq:19}
S_{\mathrm{d}}(\beta_2=0)<S_{\mathrm{d}}(\beta_2)<S_{\mathrm{d,sub}}\;,\;\;K_{\mathrm{d,sub}}<K_{\mathrm{d}}(\beta_2)<K_{\mathrm{d}}(\beta_2=0)\;.
\end{align}
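As a concrete illustration, the following sketch evaluates Eq.~\eqref{eq:18} with the 0--1\% U+U coefficients of Tab.~\ref{tab:3} (nucleon calculation, $N_{\mathrm{part}}$ averaging) and checks the ordering of Eq.~\eqref{eq:19} for the skewness; the numbers are assumptions taken from that table, not new results.
\begin{verbatim}
import numpy as np

b2 = 0.28                                    # prolate, cos(3*gamma) = 1
c2_0, c3_0 = 0.033e-2, 0.006e-4              # undeformed baselines
c2 = c2_0 + 0.93e-2*b2**2
c3 = c3_0 + (1.3 + 3.0)*b2**3*1e-4

s_d0  = c3_0/c2_0**1.5                       # S_d without deformation
s_d   = c3/c2**1.5                           # S_d with deformation
s_sub = (c3 - c3_0)/(c2 - c2_0)**1.5         # Eq. (18)
assert s_d0 < s_d < s_sub                    # ordering of Eq. (19)
print(s_d0, s_d, s_sub)                      # ~0.10 < ~0.29 < ~0.48
\end{verbatim}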
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.49\linewidth]{paperS_sys0_c0_cum0.pdf}\includegraphics[width=0.49\linewidth]{paperS_sys0_c0_cum1.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:9} Left part: the $N_{\mathrm{part}}$ dependence of the normalized skewness $S_{\mathrm{d}}=C_{\mathrm{d}}\{3\}/C_{\mathrm{d}}\{2\}^{3/2}$ (top) and its modified version $S_{\mathrm{d,sub}}$ (bottom) for several $\beta_2$ values of prolate shape $\gamma=0$ (left) and for several $\gamma$ values at fixed $\beta_2=0.28$ (right) in U+U collisions. Right part: results for the normalized kurtosis $K_{\mathrm{d}}=C_{\mathrm{d}}\{4\}/C_{\mathrm{d}}\{2\}^{2}$ and $K_{\mathrm{d,sub}}$ with the same layout. The shaded bands indicate the predicted range from Tabs.~\ref{tab:1} and \ref{tab:2}.}
\end{figure}
The bottom panels of Fig.~\ref{fig:9} show the results for these modified quantities. The datasets for different $\beta_2$ values, shown in the first panel for $S_{\mathrm{d,sub}}$ and the third panel for $K_{\mathrm{d,sub}}$, nearly collapse onto a common curve, confirming our earlier statement that these observables are a good way to isolate the coefficients $b'$ and $c'$ in Eq.~\eqref{eq:17}. The same panels also show the range of predicted values from Tables~\ref{tab:1} and \ref{tab:2} as shaded boxes. Remarkably, the values predicted from the full Monte Carlo Glauber model fall within the ranges from the simple analytical estimates. These results suggest an approximate parametrization $S_{\mathrm{d,sub}}=b_0+c_0\cos3\gamma$, with the coefficient $c_0$ nearly independent of centrality and the coefficient $b_0$ increasing from central to peripheral collisions.
Even though $S_{\mathrm{d,sub}}$ and $K_{\mathrm{d,sub}}$ cannot be measured directly, they can be estimated by comparing results from collisions of two species $A$ and $B$ with similar mass numbers, and therefore similar values in the absence of deformations. Taking the skewness as an example, we can construct the following ratio using Eq.~\eqref{eq:17},
\begin{align}\label{eq:20}
S_{\mathrm{d,AB}}=\frac{C_{\mathrm{d}}\{3\}_{\mathrm{A}}-C_{\mathrm{d}}\{3\}_{\mathrm{B}}}{(C_{\mathrm{d}}\{2\}_{\mathrm{A}}-C_{\mathrm{d}}\{2\}_{\mathrm{B}})^{3/2}} \approx S_{\mathrm{d,sub,A}}\left(1+\frac{3}{2}x^2-\frac{b'+c'\cos(3\gamma_{\mathrm{B}})}{b'+c'\cos(3\gamma_{\mathrm{A}})}x^3+\frac{15}{8}x^4\right)\;,
\end{align}
where $x=\beta_{2B}/\beta_{2A}\ll 1$ is assumed and we have ignored the negligible $\cos(3\gamma)$ term in $C_{\mathrm{d}}\{2\}$. Here $b'$ and $c'$ refer to those of $C_{\mathrm{d}}\{3\}$, which are also expected to be the same for nuclei ``A'' and ``B''. The ideal case for Eq.~\eqref{eq:20} is a pair of isobaric systems with different amounts of deformation, such as Zr+Zr and Ru+Ru collisions.
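The accuracy of the truncated expansion in Eq.~\eqref{eq:20} is easy to verify numerically. The sketch below compares the exact ratio (in units of $S_{\mathrm{d,sub,A}}$, with the $a'$ terms canceling in the differences) against the series, using the $C_{\mathrm{d}}\{3\}$ coefficients of Tab.~\ref{tab:3} and assumed shapes $\gamma_{\mathrm{A}}=0$, $\gamma_{\mathrm{B}}=\pi/3$.
\begin{verbatim}
import numpy as np

bp, cp = 1.3, 3.0                  # b', c' of C_d{3} from Tab. 3
gA, gB = 0.0, np.pi/3              # assumed prolate A, oblate B
grel = (bp + cp*np.cos(3*gB))/(bp + cp*np.cos(3*gA))

for x in (0.1, 0.3, 0.5):          # x = beta2B/beta2A
    exact  = (1 - grel*x**3)/(1 - x**2)**1.5
    approx = 1 + 1.5*x**2 - grel*x**3 + 15/8*x**4
    print(f"x={x}: exact/approx = {exact/approx:.3f}")
# deviations stay below ~5% up to x = 0.5, as stated in the text
\end{verbatim}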
\subsection{Correlation between eccentricity and $d_{\perp}$}\label{sec:43}
Let us now turn our attention to the skewness $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ and the related experimentally measured observable $\lr{v_2^2(\delta [\pT]/[\pT])}$. These have been studied both experimentally~\cite{jjia,ATLAS-CONF-2021-001} and in models~\cite{Giacalone:2019pca,Jia:2021wbq}, and as discussed below, they have great potential in constraining the triaxiality of the colliding nuclei.
Figure~\ref{fig:10} shows the results for $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ for different values of $\beta_2$ and $\gamma$ with the usual layout. The $N_{\mathrm{part}}$ dependences show a clear hierarchy between different $\beta_2$ and/or $\gamma$ values, and the sensitivity to these parameters is clearly visible across a broad centrality range. In the absence of deformation, $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ decreases gradually from peripheral to more central collisions but remains positive. For the prolate deformation, as $\beta_2$ is increased, $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ decreases over the entire centrality range and becomes negative in the UCC region. However, for large oblate deformation, $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ increases in the UCC region. These behaviors are fully consistent with the expectations of Fig.~\ref{fig:1}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{papercov_sys0_t0.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:10} The $\lr{\varepsilon_2^2\delta d_{\perp}/d_{\perp}}$ for several $\beta_2$ values of prolate shape $\gamma=0$ (top row) and several $\gamma$ values with $\beta_2=0.28$ (bottom row). The left column shows the $N_{\mathrm{part}}$ dependence. The middle column shows the respective results in several centrality ranges based on $N_{\mathrm{part}}$, which can be mostly described by a linear function of $\beta_2^3$ (top panel) or $\cos(3\gamma)$ (bottom panel). The right column summarizes the centrality dependence of $b'$ (top) and $c'$ (bottom) obtained via Eq.~\eqref{eq:17} in U+U (black) and Zr+Zr (red) collisions for $\varepsilon_2$ and $d_{\perp}$ calculated from nucleons (markers) or quarks (lines).}
\end{figure}
The middle column shows the values of $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ as a function of either $\beta_2^3$ or $\cos(3\gamma)$ in several narrow centrality ranges. Clear linear dependencies are observed, consistent with the now familiar parameterization of Eq.~\eqref{eq:17}. The right panels show the centrality dependencies of $b'$ and $c'$ for the various cases. The results are similar between U+U and Zr+Zr collisions and between the nucleon and quark Glauber models. Both $b'$ and $c'$ are positive over the full centrality range, but the values of $c'$ are much larger than $b'$ in the 0--10\% central collisions, and smaller than $b'$ in mid-central and peripheral collisions. The strong sensitivity to $\gamma$ suggests that $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ is a very promising probe of the triaxiality. Since $c'\gg b'$ in the UCC region, a clear sign change from prolate to oblate deformation is observed even in the smaller Zr+Zr system for $\beta_2=0.28$ (see Fig.~\ref{fig:app4}). This implies that the sensitivity of $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ to $\gamma$ is stronger than that of $\lr{(\delta d_{\perp}/d_{\perp})^3}$, even though the two are clearly complementary.
Given the importance of this observable, we also investigated the influence of $\beta_3$ and $\beta_4$ (see Fig.~\ref{fig:app6} in Appendix~\ref{sec:app2}). The influence is negligible in the UCC region, but we find that $\beta_3$ enhances the value of $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ in central collisions up to 15\% centrality. In the peripheral region, both $\beta_3$ and $\beta_4$ reduce the signal; the relative change is less than 30\% as long as $\beta_3,\beta_4<0.2$. The dependences are found to be linear functions of $\beta_3^2$ and $\beta_4^2$.
The behavior of $\lr{\varepsilon_2^2(\delta d_{\perp}/d_{\perp})}$ can be analyzed using the normalized quantities $\rho_{\mathrm{orig}}(\varepsilon_2^2,\delta d_{\perp}/d_{\perp})$ and $\rho(\varepsilon_2^2,\delta d_{\perp}/d_{\perp})$ defined in Eq.~\eqref{eq:14a}. They are directly related to the analogous experimentally accessible observables based on final-state particles, $\rho_{\mathrm{orig}}(v_2^2,\delta [\pT]/[\pT])$~\cite{Bozek:2016yoj} and $\rho(v_2^2,\delta [\pT]/[\pT])$. The results for $\rho(\varepsilon_2^2,\delta d_{\perp}/d_{\perp})$ are shown in the left part of Fig.~\ref{fig:11}. The $\beta_2$ dependence (second column) is approximately linear for moderate values of $\beta_2$, but non-linear behavior shows up at small and large $\beta_2$. The reason for this complex $\beta_2$ dependence can be attributed to the $a'$ terms in the numerator and the denominator. Following the example of $S_{\mathrm{d,sub}}$, we define a modified correlator by subtracting out the baseline effects,
\begin{align}\label{eq:21}
\rho_{\mathrm{sub}}(\varepsilon_2^2,\frac{\delta d_{\perp}}{d_{\perp}}) =\frac{\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}-\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}_{|\beta_2=0}}{\left(\lr{\varepsilon_2^2}-\lr{\varepsilon_2^2}_{|\beta_2=0}\right)\sqrt{\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}-\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}_{|\beta_2=0}}}\equiv\rho(\varepsilon_2^2,\frac{\delta d_{\perp}}{d_{\perp}})_{|\beta_2=\infty}\;.
\end{align}
Just like the case of the skewness of the $d_{\perp}$ fluctuations, the $\beta_2$ dependence completely cancels, and $\rho_{\mathrm{sub}}$ contains only the $\cos(3\gamma)$ dependence. It can therefore be compared directly to the values in Tables~\ref{tab:1} and \ref{tab:2}. The $\rho$ is in general expected to lie between the value without deformation, $\rho_{|\beta_2=0}$, and $\rho_{\mathrm{sub}}$, which naturally explains the non-linear $\beta_2$ dependence seen in the top panel of the second column of Fig.~\ref{fig:11}.
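A minimal sketch of how $\rho$ and $\rho_{\mathrm{sub}}$ would be built from event ensembles is given below; the helper names and inputs are hypothetical, $\delta d_{\perp}/d_{\perp}$ is assumed to have zero mean within each centrality class, and the $\rho$ normalization follows the denominator structure of Eq.~\eqref{eq:21}.
\begin{verbatim}
import numpy as np

def rho(e2sq, dd):
    # dd: per-event delta d_perp/d_perp (zero mean within the class)
    return np.mean(e2sq*dd)/(np.mean(e2sq)*np.sqrt(np.mean(dd**2)))

def rho_sub(e2sq, dd, e2sq0, dd0):
    """Eq. (21): the '0' arrays are the beta2 = 0 baseline ensemble."""
    num = np.mean(e2sq*dd) - np.mean(e2sq0*dd0)
    den = (np.mean(e2sq) - np.mean(e2sq0)) \
          * np.sqrt(np.mean(dd**2) - np.mean(dd0**2))
    return num/den
\end{verbatim}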
The right part of Fig.~\ref{fig:11} shows the results for $\rho_{\mathrm{sub}}$. The datasets for different $\beta_2$ values nearly collapse onto a common curve, confirming our earlier statement that these modified quantities are a good way to isolate the coefficients $b'$ and $c'$. The same panels also show the range of predicted values from Tables~\ref{tab:1} and \ref{tab:2}. Remarkably, the values from the full Monte Carlo Glauber model agree well with our analytical estimates. The results suggest $\rho_{\mathrm{sub}}=b_0+c_0\cos3\gamma$, with $c_0$ nearly independent of centrality, while $b_0$ is nearly constant in the 0--5\% centrality range and starts to decrease beyond that.
Repeating the argument made for $S_{\mathrm{d,sub}}$, the value of $\rho_{\mathrm{sub}}$ can be estimated by comparing collisions of two species $A$ and $B$ with similar mass numbers, thereby canceling the baseline effects. The result is
\begin{align}\label{eq:22}
\rho_{\mathrm{AB}} &=\frac{\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}_{\mathrm{A}}-\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}_{\mathrm{B}}}{(\lr{\varepsilon_2^2}_{\mathrm{A}}-\lr{\varepsilon_2^2}_{\mathrm{B}})\sqrt{\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}_{\mathrm{A}}-\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}_{\mathrm{B}}}} \approx \rho_{\mathrm{sub,A}}(\varepsilon_2^2,\frac{\delta d_{\perp}}{d_{\perp}}) (1+\frac{3}{2}x^2-\frac{b'+c'\cos(3\gamma_{\mathrm{B}})}{b'+c'\cos(3\gamma_{\mathrm{A}})}x^3+\frac{15}{8}x^4)\;,
\end{align}
where we assume $x=\beta_{2B}/\beta_{2A}\ll 1$ and we have ignored the small $\cos(3\gamma)$ terms in $C_{\mathrm{d}}\{2\}$ and $\lr{\varepsilon_2^2}$. Here $b'$ and $c'$ are the coefficients for $\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}$, which are also expected to be the same for nuclei ``A'' and ``B''. This approximation is accurate to within 5\% for $x<0.5$, and the contribution from the $x^3$ and $x^4$ terms is less than 5\% for $x<0.3$ (the same also applies to Eq.~\eqref{eq:20}). These expressions can easily be applied to a pair of isobaric systems such as Zr+Zr and Ru+Ru collisions, but could also be used for a comparison between Au+Au and U+U systems~\footnote{Although a small correction is required to precisely subtract the $a'$ term. This can be achieved by focusing on central events with similar multiplicity, where the values of $a'$ are smallest and similar between the two systems.}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{paperrhoB_sys0_t0.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:11} Left part: $\rho(\varepsilon_2^2,\delta d_{\perp}/d_{\perp})$ for several $\beta_2$ values of prolate shape $\gamma=0$ (top row) and several $\gamma$ values with $\beta_2=0.28$ (bottom row). The left column shows the $N_{\mathrm{part}}$ dependence, and the right column shows the $\beta_2$ (top panel) and $\cos(3\gamma)$ (bottom panel) dependence. Right part: similar plots for $\rho_{\mathrm{sub}}$; the shaded band in the top-left panel indicates the predicted range from Tabs.~\ref{tab:1} and \ref{tab:2}.}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{paperrho_sys0_t0.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:12} Same as Fig.~\ref{fig:11} but calculated for the Pearson correlation coefficient $\rho_{\mathrm{orig}}(\varepsilon_2^2,\delta d_{\perp}/d_{\perp})$ defined in Eq.~\eqref{eq:14a}.}
\end{figure}
Although we do not prefer the standard normalization $\rho_{\mathrm{orig}}(\varepsilon_2^2,\delta d_{\perp}/d_{\perp})$ for deformation studies, we nevertheless carried out the same calculation since it has been widely used before. Here the correlator with the baseline effects subtracted is defined as
\begin{align}
\rho_{\mathrm{orig,sub}}(\varepsilon_2^2,\frac{\delta d_{\perp}}{d_{\perp}}) =\frac{\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}-\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}_{\beta_2=0}}{\sqrt{(\lr{\left(\delta \varepsilon_2^2\right)^2}-\lr{\left(\delta \varepsilon_2^2\right)^2}_{\beta_2=0})(\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}-\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}_{\beta_2=0})}}\equiv\rho_{\mathrm{orig}}(\varepsilon_2^2,\frac{\delta d_{\perp}}{d_{\perp}})_{\beta_2=\infty}\;.
\end{align}
We present the final results in Fig.~\ref{fig:12} without detailed explanation. These results show similar trends, and the values in the UCC region are quantitatively similar. This is expected since in central collisions $c_{2,\varepsilon}\{4\}$ approaches zero and $\lr{\left(\delta \varepsilon_2^2\right)^2}\approx \lr{\varepsilon_2^2}^2$, and therefore $\rho_{\mathrm{orig,sub}}\approx\rho_{\mathrm{sub}}$. In the more peripheral region, the two correlators are quantitatively different. The $\rho_{\mathrm{orig,sub}}$ is relatively flat towards mid-central collisions; however, its $\gamma$ dependence is dramatically weaker.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{paperslope1.pdf}
\includegraphics[width=0.9\linewidth]{paperslope2.pdf}
\end{center}
\caption{\label{fig:13} The centrality dependence of the coefficients $b'$ and $c'$ from Eq.~\eqref{eq:17} for $C_{\mathrm{d}}\{2\}$, $C_{\mathrm{d}}\{3\}$, $C_{\mathrm{d}}\{4\}$, $\lr{\varepsilon_2^2}$ and $\lr{\varepsilon_2^2\delta d_{\perp}/d_{\perp}}$, from the top row to the bottom row. In each row, the values obtained via event averaging based on $N_{\mathrm{part}}$ (left two columns) and $N_{\mathrm{quark}}$ (right two columns) are shown. In each panel, the results are compared between U+U and Zr+Zr, and between values calculated from nucleons (symbols) and quarks (lines). The data points for the following centrality ranges are plotted from left to right: 0--0.2\%, 0.2--0.5\%, 0.5--1\%, 1--2\%,..., 5--6\%, 6--8\%, 8--10\%, 10--15\%,..., and 25--30\%.}
\end{figure}
\subsection{Effects of volume fluctuations and dependence on centrality and system size}\label{sec:44}
Although $d_{\perp}$ and $\varepsilon_2$ in each event are calculated using either nucleons or quarks, the cumulants of these quantities have so far been obtained via an event-averaging procedure based on $N_{\mathrm{part}}$. As mentioned before, the averaging can also be performed over event ensembles classified via $N_{\mathrm{quark}}$. Figure~\ref{fig:13} summarizes the coefficients $b'$ and $c'$ as a function of centrality for the five quantities $C_{\mathrm{d}}\{2\}$, $C_{\mathrm{d}}\{3\}$, $C_{\mathrm{d}}\{4\}$, $\lr{\varepsilon_2^2}$ and $\lr{\varepsilon_2^2\delta d_{\perp}/d_{\perp}}$. The results based on event averaging via $N_{\mathrm{quark}}$ are shown in the right two columns, and the results based on event averaging via $N_{\mathrm{part}}$, already presented before, are shown in the left two columns.
For all observables and in almost all cases, the coefficients are very consistent between U+U and Zr+Zr, except in the UCC region where the magnitudes of the coefficients are systematically smaller in Zr+Zr collisions. Clear differences between event averaging based on $N_{\mathrm{part}}$ and on $N_{\mathrm{quark}}$ are also visible in the UCC region, reflecting the effects of volume fluctuations. These differences reach up to 20\% for $C_{\mathrm{d}}\{2\}$ and $\lr{\varepsilon_2^2\delta d_{\perp}/d_{\perp}}$; they are even larger for $C_{\mathrm{d}}\{3\}$ and $C_{\mathrm{d}}\{4\}$, but are negligible for $\lr{\varepsilon_2^2}$. This means that by selecting extremely central events, one might introduce a large smearing effect from volume fluctuations for the skewness and kurtosis observables. The optimal centrality range to maximize the deformation effects yet avoid strong volume fluctuations is therefore one that is not too central, such as 0--1\% or 0--5\%. In general, the values of $c'$ are much smaller than $b'$ for the even-order cumulants, but larger than $b'$ for the skewnesses $C_{\mathrm{d}}\{3\}$ and $\lr{\varepsilon_2^2\delta d_{\perp}/d_{\perp}}$ in central collisions. The latter confirms the earlier conclusion that three-particle correlations involving $v_2$ and $[\pT]$ in heavy-ion collisions are a unique and sensitive probe of the nuclear triaxiality. In some limited cases, such as $b'$ for $C_{\mathrm{d}}\{3\}$ and $C_{\mathrm{d}}\{4\}$, the results are quantitatively different between the calculation based on nucleons and that based on quarks (compare the symbols with the lines), suggesting that the deformation contribution to the high-order cumulants of $d_{\perp}$ (and hence those of $[\pT]$) is also sensitive to subnucleon fluctuations.
Table~\ref{tab:3} lists the values of $a'$, $b'$ and $c'$ from Eq.~\eqref{eq:17} in the 0--1\% most central collisions for the four combinations of per-event calculation and event averaging: calculation via nucleons with averaging via $N_{\mathrm{part}}$, calculation via quarks with averaging via $N_{\mathrm{part}}$, calculation via nucleons with averaging via $N_{\mathrm{quark}}$, and calculation via quarks with averaging via $N_{\mathrm{quark}}$. We see that the values of $a'$ can differ by up to a factor of 2 among the four cases. From these values, we can derive the analytical functional form of the $(\beta_2,\gamma)$ dependence for each observable, including the various normalized cumulants discussed in previous sections.
\begin{table}[!h]
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc}\hline
&\multicolumn{3}{c|}{}& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{}\\[-0.2ex]
variable calculation &\multicolumn{3}{c|}{nucleon}& \multicolumn{3}{c|}{quark} & \multicolumn{3}{c|}{nucleon} & \multicolumn{3}{c}{quark}\\[2ex]
event class for average &\multicolumn{3}{c|}{$N_{\mathrm{part}}$ }&\multicolumn{3}{c|}{$N_{\mathrm{part}}$}& \multicolumn{3}{c|}{$N_{\mathrm{quark}}$}& \multicolumn{3}{c}{$N_{\mathrm{quark}}$}\\[2ex]\hline\hline
&\multicolumn{3}{c|}{}& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{}\\[-0.2ex]
&$a'$&$b'$&$c'$&$a'$&$b'$&$c'$&$a'$&$b'$&$c'$&$a'$&$b'$&$c'$\\[2ex]\hline
$\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}\times10^2$&0.033&0.93&0.0039& 0.038&0.88&-0.015& 0.039&0.83&0.019& 0.04&0.85&0.023\\[2ex]
$a'+(b'+c'\cos(3\gamma))\beta_2^2$&\multicolumn{3}{c|}{}& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{}\\[1ex]\hline
$\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^3}\times10^4$&0.006&1.3&3.0& 0.0084&0.72&2.7& 0.012&-0.087&2.2& 0.0085&-0.43&2.4\\[2ex]
$a'+(b'+c'\cos(3\gamma))\beta_2^3$&\multicolumn{3}{c|}{}& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{}\\[1ex]\hline
$(\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^4}-3\lr{(\frac{\delta d_{\perp}}{d_{\perp}})^2}^2)\times10^5$&0.00033&-5.4&1.1& 0.00065&-5.0&0.88& 0.00064&-3.1&-0.1& 0.00052&-3.4&-0.35\\[2ex]
$a'+(b'+c'\cos(3\gamma))\beta_2^4$&\multicolumn{3}{c|}{}& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{}\\[1ex]\hline
$\lr{\varepsilon_2^2}\times10$&0.045&2.35&0.11& 0.055&2.38&0.083& 0.047&2.32&-0.19& 0.056&2.34&-0.21\\[2ex]
$a'+(b'+c'\cos(3\gamma))\beta_2^2$&\multicolumn{3}{c|}{}& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{}\\[1ex]\hline
$\lr{\varepsilon_2^2\frac{\delta d_{\perp}}{d_{\perp}}}\times10^2$&0.00051&-0.066&-1.36& 0.00070&-0.12&-1.35& 0.00097&-0.17&-1.17& 0.00084&-0.19&-1.19\\[2ex]
$a'+(b'+c'\cos(3\gamma))\beta_2^3$&\multicolumn{3}{c|}{}& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{}\\[1ex]\hline
\end{tabular}
\caption{\label{tab:3} The values of the coefficients $a'$, $b'$ and $c'$ for the $\beta_2$ and $\gamma$ dependence of each observable (Eq.~\eqref{eq:17}) in 0--1\% U+U collisions from the Glauber model. They are listed for four cases: the variables can be calculated with either nucleons or quarks, and the event averaging is also based on either nucleons or quarks.}
\end{table}
\subsection{AMPT model}\label{sec:45}
We have shown that the initial state of heavy-ion collisions is very sensitive to the quadrupole deformation and triaxiality of the colliding nuclei, and we have constructed multiple observables to constrain $\beta_2$ and $\gamma$ independently. The next crucial question, however, is how much of these initial-state sensitivities survives into the final-state particle correlations. Previous hydrodynamic model studies and data comparisons have firmly established the proportionality between $\varepsilon_2$ and $v_2$, and to a lesser extent also the positive correlation between $d_{\perp}$ and $[\pT]$~\cite{Bozek:2012fw,Bozek:2017jog} and between $\lr{\varepsilon_2^2\delta d_{\perp}}$ and $\lr{v_2^2\delta [\pT]}$~\cite{Schenke:2020uqq,Giacalone:2020dln}.
One main drawback of the AMPT model is that it underestimates the hydrodynamic response of the radial flow. For one thing, it severely undershoots the variance of the $\pT$ fluctuations in data, see the left panel of Fig.~\ref{fig:14}. The right panel shows that the AMPT model predicts a very weak dependence of $\lr{(\delta [\pT]/[\pT])^2}$ on $\beta_2$: even for a value of $\beta_2=0.28$, the increase of the $[\pT]$ variance is only 30\%. A similar observation is made for $\lr{(\delta [\pT]/[\pT])^3}$ (not shown). This is in clear contradiction with the much larger influence of deformation observed in the recent experimental results for the variance and skewness of $[\pT]$ in U+U and Au+Au collisions~\cite{jjia}. Hence, the AMPT model cannot be used to reliably study the deformation effects on the $[\pT]$ fluctuations. Instead, we shall focus on $\lr{v_2^2\delta [\pT]}$, the rationale being that even though the radial flow response is underestimated, the elliptic flow response is still correctly modeled. We hope to at least explore the qualitative features of $\lr{v_2^2\delta [\pT]}$ and compare them with $\lr{\varepsilon_2^2\delta d_{\perp}}$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\linewidth]{paperampt0.pdf}\includegraphics[width=0.4\linewidth]{paper_pt0.pdf}
\end{center}\vspace*{-0.3cm}
\caption{\label{fig:14} Left: scaled variance of the $[\pT]$ fluctuation from the AMPT model (open symbols) and the experimental data of Ref.~\cite{Adam:2019rsf} (solid symbols), as well as the scaled variance of $d_{\perp}$ (solid line) in Au+Au collisions at $\mbox{$\sqrt{s_{\mathrm{NN}}}$}=200$ GeV. Right: scaled variance of $[\pT]$ from the AMPT model in U+U collisions for different values of $\beta_2$.}
\end{figure}
The left column of Fig.~\ref{fig:15} shows the $N_{\mathrm{part}}$ dependence of $\lr{v_2^2\delta [\pT]/[\pT]}$ for several values of $\beta_2$ and $\gamma$, calculated using the multi-particle correlation framework of Ref.~\cite{Zhang:2021phk}. There is clear sensitivity to both parameters, especially in the UCC region. The values are integrated over several centrality ranges and plotted as a function of $\beta_2^3$ and $\cos(3\gamma)$ in the middle column, calculated from the corresponding data in the left column. Despite the large statistical uncertainties, linear dependences are observed, confirming the trends seen in the Glauber model:
\begin{align}\label{eq:24}
\lr{v_2^2(\delta [\pT]/[\pT])} = a+(b+c\cos(3\gamma)) \beta_2^3\;.
\end{align}
The values of $b$ and $c$ are shown in the right column as a function of centrality; the centrality-dependent trends are similar to those obtained from the Glauber model (compare to Fig.~\ref{fig:10}). However, the values of $b$ and $c$ are about a factor of 100 smaller than $b'$ and $c'$; also, $b$ is larger than 0 in central collisions, while $b'$ is less than 0 over the full centrality range. In a hydrodynamic model with the linear response assumption of Eq.~\eqref{eq:3}, we have approximately
\begin{align}\label{eq:25}
\lr{v_2^2\frac{\delta [\pT]}{[\pT]}} \approx k_2^2 k_0\lr{\varepsilon_2^2 \frac{\delta d_{\perp}}{d_{\perp}}}\;.
\end{align}
Using the value $k_2\approx0.2$ from a hydrodynamic model~\cite{Song:2010mg} and $k_0\approx0.4$ from the left panel of Fig.~\ref{fig:14} in central collisions, we expect a factor of about 60. We also repeat the same analysis using $N_{\mathrm{hadron}}$ to classify events. This gives very similar values of $b$ and $c$, as shown in the right column of Fig.~\ref{fig:15}, implying that the results are robust against volume fluctuations.
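The quoted scale factor follows from simple arithmetic on the linear-response relation above, e.g.
\begin{verbatim}
k2, k0 = 0.2, 0.4
print(1/(k2**2*k0))   # = 62.5, i.e. a factor of about 60
\end{verbatim}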
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{paperbeta0sub.pdf}\\
\includegraphics[width=0.9\linewidth]{papergamma0.pdf}
\end{center}\vspace*{-0.5cm}
\caption{\label{fig:15} The $\lr{v_2^2\delta [\pT]/[\pT]}$ for several $\beta_2$ values of prolate shape $\gamma=0$ (top row) and several $\gamma$ values with $\beta_2=0.28$ (bottom row) in U+U collisions from the AMPT model. The left column shows the $N_{\mathrm{part}}$ dependence. The middle column shows the results as a function of $\beta_2^3$ (top panel) or $\cos(3\gamma)$ (bottom panel) in several centrality ranges based on $N_{\mathrm{part}}$. The right column summarizes the coefficients $b$ (top) and $c$ (bottom) from Eq.~\eqref{eq:24} as a function of centrality based on $N_{\mathrm{part}}$ (filled symbols) or $N_{\mathrm{hadron}}$ (open symbols).}
\end{figure}
From these results, we calculate the normalized quantities $\rho(v_2^2,\frac{\delta [\pT]}{[\pT]})$ and $\rho_{\mathrm{sub}}(v_2^2,\frac{\delta [\pT]}{[\pT]})$, defined analogously to Eqs.~\eqref{eq:14a} and \eqref{eq:21}. The results are shown in Fig.~\ref{fig:16}, with the $\beta_2$ dependence in the left part and the $\gamma$ dependence in the right part. The $\rho$ follows an approximately linear dependence on $\beta_2$, similar to the Glauber model results (top panel in the second column of Fig.~\ref{fig:11}). The $\rho_{\mathrm{sub}}$ in the bottom panels is nearly independent of $\beta_2$, as expected. For the $\cos(3\gamma)$ dependence, $\rho$ shows different slopes for different centrality ranges, but the $\rho_{\mathrm{sub}}$ data follow a common slope in all centrality ranges. This means that the difference of $\rho_{\mathrm{sub}}$ between prolate and oblate is approximately independent of centrality, just as we observe in the Glauber model in the bottom-right panel of Fig.~\ref{fig:11}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\linewidth]{paperrhobeta0B.pdf}\includegraphics[width=0.5\linewidth]{paperrhogamma0B.pdf}
\end{center}\vspace*{-0.5cm}
\caption{\label{fig:16} Left Part: The $\rho(v_2^2,\delta [\pT]/[\pT])$ (top row) and $\rho_{\rm{sub}}(v_2^2,\delta [\pT]/[\pT])$ (bottom row) as a function of $N_{\mathrm{part}}$ for several $\beta_2$ values of prolate shape $\gamma=0$ (left column) and as a function of $\beta_2^3$ in several centrality ranges based on $N_{\mathrm{part}}$ (right column). Right part: The $\rho(v_2^2,\delta [\pT]/[\pT])$ (top row) and modified $\rho_{\rm{sub}}(v_2^2,\delta [\pT]/[\pT])$ (bottom row) as a function of $N_{\mathrm{part}}$ for several $\gamma$ values with $\beta_2=0.28$ (left column) and as a function of $\cos(3\gamma)$ in several centrality ranges based on $N_{\mathrm{part}}$ (right column).}
\end{figure}
\section{Summary and a proposal}\label{sec:5}
We have shown that two bulk quantities of the initial overlap of heavy-ion collisions, $\varepsilon_2$ and $d_{\perp}$, which quantify the quadrupole shape and the density gradient (or inverse size) of the overlap region, respectively, are directly related to the quadrupole deformation parameters $(\beta_2,\gamma)$ of the colliding nuclei. Aided by the hydrodynamic response in the final state, these initial quantities are transformed into the experimentally measured elliptic flow $v_2$ and average transverse momentum $[\pT]$ in each event. Using an analytical argument and a Glauber model simulation, we derive analytical relations between the cumulants of $\varepsilon_2$ and $d_{\perp}$ and $(\beta_2,\gamma)$. Remarkably, the variances depend mainly on $\beta_2$, $\lr{\varepsilon_2^2}, \lr{(\delta d_{\perp}/d_{\perp})^2}\sim a'+b'\beta_2^2$, while the skewnesses are sensitive to both parameters in a simple factorizable form, $\lr{\varepsilon_2^2 \delta d_{\perp}/d_{\perp}}, \lr{(\delta d_{\perp}/d_{\perp})^3}\sim a'+(b'+c'\cos(3\gamma))\beta_2^3$. These robust relations provide an efficient way, via a dedicated system scan, to simultaneously constrain the $\beta_2$ and $\gamma$ of atomic nuclei.
To illustrate how this can be done, we refer to the results obtained from the Glauber model for the 0--1\% most central U+U collisions in Tab.~\ref{tab:3},
\begin{align}\nonumber
\langle\varepsilon_2^2\rangle&\approx[0.02+\beta_2^2]\times0.235\\\nonumber
\langle(\delta d_{\perp}/d_{\perp})^2\rangle&\approx[0.035+\beta_2^2]\times 0.0093\\\nonumber
\langle(\delta d_{\perp}/d_{\perp})^3\rangle&\approx[0.006+(1.3+3.0\cos(3\gamma))\beta_2^3]\times10^{-4}\\\label{eq:26}
\langle\varepsilon_2^2\delta d_{\perp}/d_{\perp}\rangle&\approx[0.0005-(0.07+1.36\cos(3\gamma))\beta_2^3]\times10^{-2}
\end{align}
From these we construct the cumulant ratios $\rho(\varepsilon_2^2,\delta d_{\perp}/d_{\perp})$ and $S_{\mathrm{d}}$, as well as the baseline-subtracted ratios $\rho_{\mathrm{sub}}$ and $S_{\mathrm{d,sub}}$, to factorize the $\gamma$ dependence from the $\beta_2$ dependence (their definitions are repeated in Fig.~\ref{fig:17}). Eq.~\eqref{eq:26} can map any trajectory in the $(\beta_2,\gamma)$ diagram (top-left panel) onto new trajectories in the various correlation plots shown in the bottom panels (a)--(f). We note that the direction of the trajectory in the $(\rho,\langle\varepsilon_2^2\rangle)$ plane is opposite to that in the $(S_{\mathrm{d}},\langle\varepsilon_2^2\rangle)$ plane, and the trajectory in the $(\rho,S_{\mathrm{d}})$ plane almost collapses onto a straight line. The $\gamma$ dependences in these plots follow a simple linear function of $\cos(3\gamma)$, while the $\beta_2$ dependence is more complex due to the offsets in Eq.~\eqref{eq:26}. The correlations are much better behaved for $\rho_{\mathrm{sub}}$ and $S_{\mathrm{d,sub}}$, as shown in the bottom row of Fig.~\ref{fig:17}. In particular, the differences between prolate and oblate deformation for these quantities are independent of $\beta_2$, and they are also expected to be nearly independent of centrality, as suggested by Figs.~\ref{fig:9} and \ref{fig:11}. Therefore, we could determine the $\gamma$ angle of any nucleus with a similar mass number, once the values of $\rho_{\mathrm{sub}}$ and $S_{\mathrm{d,sub}}$ are calibrated from collisions of prolate and oblate nuclei with known $\beta_2$.
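The mapping of Fig.~\ref{fig:17} can be reproduced directly from Eq.~\eqref{eq:26}. The sketch below evaluates the observables for a few points of a $(\beta_2,\gamma)$ trajectory; it assumes the $\rho$ normalization $\lr{\varepsilon_2^2\delta d_{\perp}/d_{\perp}}/(\lr{\varepsilon_2^2}\sqrt{C_{\mathrm{d}}\{2\}})$ suggested by Eq.~\eqref{eq:21}.
\begin{verbatim}
import numpy as np

def observables(beta2, gamma):
    c3g  = np.cos(3*gamma)
    e2sq = (0.02 + beta2**2)*0.235
    c2   = (0.035 + beta2**2)*0.0093
    c3   = (0.006 + (1.3 + 3.0*c3g)*beta2**3)*1e-4
    cov  = (0.0005 - (0.07 + 1.36*c3g)*beta2**3)*1e-2
    return e2sq, c3/c2**1.5, cov/(e2sq*np.sqrt(c2))  # <e2^2>, S_d, rho

# trace a closed loop in the (beta2, gamma) plane
for b2, g in [(0.2, 0.0), (0.28, 0.0), (0.28, np.pi/3), (0.2, np.pi/3)]:
    print(b2, g, observables(b2, g))
\end{verbatim}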
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\linewidth]{figmapfinal.pdf}
\end{center}\vspace*{-0.5cm}
\caption{\label{fig:17} Glauber model prediction of the mapping of a closed trajectory in the $(\beta_2,\gamma)$ plane (top-left) onto the correlations between $\rho$ and $\langle\varepsilon_2^2\rangle$ (panel a), $S_{\mathrm{d}}$ and $\langle\varepsilon_2^2\rangle$ (panel b), as well as those for the baseline-subtracted $\rho_{\mathrm{sub}}$ and $\langle\varepsilon_2^2\rangle$ (panel d), and $S_{\mathrm{d,sub}}$ and $\langle\varepsilon_2^2\rangle$ (panel e). The definitions of these quantities are given in the top-right corner. The corresponding trajectories are also shown between $\rho$ and $S_{\mathrm{d}}$ (panel c) and between $\rho_{\mathrm{sub}}$ and $S_{\mathrm{d,sub}}$ (panel f). The results are shown for collisions of a nucleus with 238 nucleons and for the 0--1\% most central events selected based on $N_{\mathrm{part}}$. Note that the correlations with the variance $\langle(\delta d_{\perp}/d_{\perp})^2\rangle$ as the $x$-axis are similar, i.e. they only require a shift and rescaling (see text).}
\end{figure}
A few additional summarizing points can be made about these flow diagrams. 1) We can replace the $x$-axis with $\langle(\delta d_{\perp}/d_{\perp})^2\rangle$; the trajectories would be shifted and rescaled but their shapes would remain the same. 2) The results from the quark Glauber model are quite comparable; the main differences can be attributed to the offset terms in Eq.~\eqref{eq:26}. However, we caution against the extremely central region, e.g. 0--0.2\%, where the functional forms may remain the same but the coefficients can be quite different due to a selection bias on the fluctuations at the nucleon and quark level. This is especially the case for $\langle(\delta d_{\perp}/d_{\perp})^3\rangle$ (see the UCC region in Fig.~\ref{fig:9}). 3) Since the coefficients $b'$ and $c'$ are relatively insensitive to the size of the collision system, the correlations in the bottom row of Fig.~\ref{fig:17} are expected to remain largely the same. Moreover, the change of $\rho_{\mathrm{sub}}$ and $S_{\mathrm{d,sub}}$ from prolate to oblate deformation, unlike that of $\rho$ and $S_{\mathrm{d}}$, is also relatively independent of centrality. This implies that the curves in the bottom panels only shift vertically and narrow horizontally for events in mid-central collisions, while their height remains roughly the same. 4) We should be able to construct similar flow diagrams for the cumulants of $v_2$ and $[\pT]$ in the final state. This can be done using full hydrodynamic model simulations, but can also be estimated from the well-known linear relations $v_2\propto\varepsilon_2$ and $\delta [\pT]/[\pT]\propto \delta d_{\perp}/d_{\perp}$. 5) The generalization of this idea to the kurtosis and higher-order cumulants may not work well due to strong non-linear mode mixing from the lower-order cumulants.
Study of nuclear deformation, in particular the shape evolution in the $(\beta_2,\gamma)$ plane along an isobaric chain obtained by adding neutrons and protons, is one of the most important areas of research in the nuclear structure community~\cite{Heyde2011}. The skewnesses $\langle(\delta d_{\perp}/d_{\perp})^3\rangle$ and $\langle\varepsilon_2^2\delta d_{\perp}/d_{\perp}\rangle$, experimentally accessible via three-particle correlations, show remarkably strong sensitivity to triaxiality over a broad range of centrality, as well as a nearly system-size independent signal strength. The existing data from various species, in particular the recent isobar $^{96}$Zr+$^{96}$Zr and $^{96}$Ru+$^{96}$Ru collision data, provide a unique opportunity to test the methodology proposed in this paper. However, the most valuable information will ultimately arise from a scan of systems for which we already have precision knowledge from the nuclear structure community to calibrate the hydrodynamic response, followed by systems for which we do not yet have sufficient understanding.
{\bf Acknowledgements:} I am grateful for the AMPT simulation data provided by Chunjian Zhang. I thank Giuliano Giacalone, Chunjian Zhang and Somadutta Bhatta for valuable discussions. This work is supported by DOE DEFG0287ER40331.
\section{The problem}\label{Sec:The problem}
Consider the boundary-value problem
\begin{align}
\left\{\begin{array}{l}
{\rm Div}\left[{\bf L}^\epsilon({\bf X})\nabla{\bf u}^\epsilon\right]=\textbf{0},\quad {\bf X}\in {\rm \Omega}\setminus{\rm \Gamma}\vspace{0.2cm}\\
\widehat{{\rm Div}}\left[\widehat{{\bf L}}^\epsilon\widehat{\nabla}{\bf u}^\epsilon\right]-\jump{{\bf L}^\epsilon({\bf X})\nabla{\bf u}^\epsilon}\widehat{{\bf N}}=\textbf{0},\quad {\bf X}\in {\rm \Gamma}\vspace{0.2cm}\\
{\bf u}^{\epsilon}({\bf X})=\overline{{\bf u}}({\bf X}),\quad {\bf X}\in\partial{\rm \Omega}\end{array}\right. \label{BVP_epsilon}
\end{align}
for the displacement field ${\bf u}^\epsilon({\bf X})\in H^1 ({\rm \Omega};\mathbb{R}^n)$ in an open domain $\mathrm{\Omega}\subset \mathbb{R}^n$, $n=2,3$, with boundary $\partial\mathrm{\Omega}$. Ghosh and Lopez-Pamies \cite{GLP22} have recently shown that (\ref{BVP_epsilon}) are the equations that govern the mechanical response of an elastomeric matrix ($\texttt{m}$) filled with initially $n$-spherical\footnote{Employing the parlance of geometers (\cite{Coxeter73}, Section 7.3), we refer to circles as $2$-spheres and to spheres as $3$-spheres.} liquid inclusions ($\texttt{i}$) of length scale $\epsilon$ subjected to small quasistatic deformations. Here, ${\bf L}^\epsilon({\bf X})$ stands for the modulus of elasticity for the bulk ${\rm \Omega}\setminus{\rm \Gamma}$, which is comprised of the solid elastomeric matrix and the firmly embedded liquid inclusions, $\widehat{{\bf L}}^\epsilon$ denotes the modulus of elasticity for the interfaces ${\rm \Gamma}$ separating the elastomer from the inclusions, $\widehat{{\bf N}}$ is the unit normal of ${\rm \Gamma}$ pointing outwards from the inclusions towards the elastomer, and $\overline{{\bf u}}({\bf X})$ is the applied displacement boundary condition (Dirichlet boundary conditions are assumed for simplicity of presentation). In equations (\ref{BVP_epsilon}), ${\rm Div}$ stands for the bulk divergence operator, $\jump{\cdot}$ is the jump operator across the interfaces ${\rm \Gamma}$ based on the convention $\jump{f({\bf X})}=f^{(\texttt{i})}({\bf X})-f^{(\texttt{m})}({\bf X})$, where $f^{(\texttt{i})}$ (resp. $f^{(\texttt{m})}$) denotes the limit of any given function $f({\bf X})$ when approaching ${\rm \Gamma}$ from within the inclusion (resp. matrix), while $\widehat{\nabla}$ and $\widehat{{\rm Div}}$ stand for the interface gradient and divergence operators. In indicial notation, with respect to a Cartesian frame of reference $\{{\bf e}_i\}$ ($i=1,...,n$) and help of the projection tensor
\begin{equation*}
\widehat{{\bf I}}={\bf I} - \widehat{{\bf N}} \otimes \widehat{{\bf N}},
\end{equation*}
we recall that these interface operators read \cite{Carmo16,GWL98}
\begin{equation*}
\left(\widehat{\nabla}{\bf v}\right)_{ij}=\dfrac{\partial v_i}{\partial X_k}({\bf X})\widehat{I}_{kj}\qquad {\rm and}\qquad
\left(\widehat{{\rm Div}}\,{\bf S}\right)_{i}=\dfrac{\partial S_{ij}}{\partial X_k}({\bf X})\widehat{I}_{kj},\qquad {\bf X} \in{\rm \Gamma}
\end{equation*}
when applied to vector and second-order tensor fields.
\paragraph{Filled elastomers with periodic microstructure} For filled elastomers with periodic microstructure, which are the class of materials of interest in this work, the initial subdomains occupied collectively by all the inclusions can be expediently described by the characteristic function
\begin{equation}
\theta^\epsilon({\bf X})=\displaystyle\sum_{I=1}^{\texttt{N}}\theta^\epsilon_I({\bf X})\label{theta}
\end{equation}
in terms of the characteristic functions
\begin{equation}
\theta^\epsilon_I({\bf X})=\theta_I(\epsilon^{-1}{\bf X})\qquad I=1,...,\texttt{N} \label{thetaj}
\end{equation}
for each individual inclusion. Here, $\theta_I({\bf y})$ are $ Y$-periodic functions, with $ Y=(0,1)^n$, and $\texttt{N}$ denotes the number of inclusions contained in the unit cell $Y$. It immediately follows that $\theta^\epsilon({\bf X})=\theta(\epsilon^{-1}{\bf X})$, where $\theta({\bf y})$ is also $ Y$-periodic. Figure \ref{Fig1} shows a schematic of a filled elastomer with periodic microstructure in its initial configuration for an illustrative case of space dimension $n=3$ and $\texttt{N}=2$ inclusions in $Y$.
\begin{figure}
\begin{center}
\includegraphics[width=4.4in]{Fig1.eps}
\vspace{-0.2cm}
\caption{\small Schematic of the initial configuration ${\rm \Omega}\subset \mathbb{R}^3$ of a periodic suspension, of period $\epsilon$, of $3$-spherical liquid inclusions embedded in a solid elastomer and of its defining unit cell $Y=(0,1)^3$. The radii of the inclusions are denoted by $A_I^\epsilon=\epsilon A_I$ and their outward unit normal by $\widehat{{\bf N}}$. The interfaces separating the elastomer from the inclusions in ${\rm \Omega}$ are denoted by ${\rm \Gamma}$. Within the unit cell $Y$, the interfaces separating the elastomer from the inclusions are denoted by $G$.}
\label{Fig1}
\end{center}
\vspace{-0.2cm}
\end{figure}
Granted (\ref{theta})-(\ref{thetaj}), the modulus of elasticity for the bulk and the interfaces read, respectively, as
\begin{align}
{\bf L}^\epsilon({\bf X})=\left(1-\theta(\epsilon^{-1}{\bf X})\right){\bf L}^{(\texttt{m})}+
\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I(\epsilon^{-1}{\bf X})\left[ n \Lambda^{(\texttt{i})}\mbox{\boldmath$\mathcal{J}$}+r^\epsilon_{I}(\mbox{\boldmath$\mathcal{A}$}-\mbox{\boldmath$\mathcal{K}$}+(n-1)\mbox{\boldmath$\mathcal{J}$})\right]\label{Lbulk}
\end{align}
and
\begin{equation}
\widehat{{\bf L}}^\epsilon=2\,\widehat{\mu}^{\,\epsilon}\,\widehat{\mbox{\boldmath$\mathcal{K}$}}+2 (\widehat{\mu}^{\,\epsilon}+\widehat{\Lambda}^{\,\epsilon})\widehat{\mbox{\boldmath$\mathcal{J}$}}+
\widehat{\gamma}^{\epsilon}\left(\widehat{\mbox{\boldmath$\mathcal{A}$}}-\widehat{\mbox{\boldmath$\mathcal{K}$}}+\widehat{\mbox{\boldmath$\mathcal{J}$}}\right).\label{Linterface}
\end{equation}
In relation (\ref{Lbulk}), $\mbox{\boldmath$\mathcal{A}$}$, $\mbox{\boldmath$\mathcal{K}$}$, $\mbox{\boldmath$\mathcal{J}$}$ are the orthonormal\footnote{That is, $\mbox{\boldmath$\mathcal{A}$}\mbox{\boldmath$\mathcal{K}$}=\mbox{\boldmath$\mathcal{K}$}\mbox{\boldmath$\mathcal{A}$}=\mbox{\boldmath$\mathcal{A}$}\mbox{\boldmath$\mathcal{J}$}=\mbox{\boldmath$\mathcal{J}$}\mbox{\boldmath$\mathcal{A}$}=\mbox{\boldmath$\mathcal{K}$}\mbox{\boldmath$\mathcal{J}$}=\mbox{\boldmath$\mathcal{J}$}\mbox{\boldmath$\mathcal{K}$}=\textbf{0}$, $\mbox{\boldmath$\mathcal{A}$}\Atan=\mbox{\boldmath$\mathcal{A}$}$, $\mbox{\boldmath$\mathcal{K}$}\Ktan=\mbox{\boldmath$\mathcal{K}$}$, and $\mbox{\boldmath$\mathcal{J}$}\Jtan=\mbox{\boldmath$\mathcal{J}$}$.} eigentensors
\begin{align}
&\mathcal{A}_{ijkl}=\dfrac{1}{2}(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}),\label{A}\\
& \mathcal{K}_{ijkl}=\dfrac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)-\dfrac{1}{n}\delta_{ij}\delta_{kl},\label{K}\\
&\mathcal{J}_{ijkl}=\dfrac{1}{n}\delta_{ij}\delta_{kl},\label{J}
\end{align}
${\bf L}^{(\texttt{m})}$ is the modulus of elasticity of the elastomeric matrix, which satisfies the standard symmetry and positive-definiteness properties
\begin{align*}
L^{(\texttt{m})}_{ijkl}=L^{(\texttt{m})}_{klij}=L^{(\texttt{m})}_{jikl}=L^{(\texttt{m})}_{ijlk},\qquad B_{ij}L^{(\texttt{m})}_{ijkl}B_{kl}\geq \alpha B_{pq} B_{pq}\;\forall {\bf B}\in \mathbb{R}^{n\times n}
\end{align*}
and some $\alpha>0$, $\Lambda^{(\texttt{i})}\geq 0$ is the first Lam\'e constant of the liquid making up the inclusions, and
\begin{align}
r^\epsilon_{I}=-\dfrac{(n-1)\,\widehat{\gamma}^\epsilon}{A^\epsilon_I}\qquad I=1,...,\texttt{N}\label{rj}
\end{align}
denotes the initial hydrostatic stress that the $I$th inclusion is subjected to in the initial configuration. In this last expression, $\widehat{\gamma}^\epsilon$ stands for the initial surface tension on the interfaces and
\begin{align*}
A_I^\epsilon=\epsilon A_I
\end{align*}
is the radius of the $I$th inclusion, where $0<A_I<1$. In relation (\ref{Linterface}), $\widehat{\mbox{\boldmath$\mathcal{A}$}}$, $\widehat{\mbox{\boldmath$\mathcal{K}$}}$, $\widehat{\mbox{\boldmath$\mathcal{J}$}}$ are the orthonormal\footnote{In complete analogy with their bulk counterparts (\ref{A})-(\ref{J}), $\widehat{\mbox{\boldmath$\mathcal{A}$}}\widehat{\mbox{\boldmath$\mathcal{K}$}}=\widehat{\mbox{\boldmath$\mathcal{K}$}}\widehat{\mbox{\boldmath$\mathcal{A}$}}=\widehat{\mbox{\boldmath$\mathcal{A}$}}\widehat{\mbox{\boldmath$\mathcal{J}$}}=\widehat{\mbox{\boldmath$\mathcal{J}$}}\widehat{\mbox{\boldmath$\mathcal{A}$}}=\widehat{\mbox{\boldmath$\mathcal{K}$}}\widehat{\mbox{\boldmath$\mathcal{J}$}}=\widehat{\mbox{\boldmath$\mathcal{J}$}}\widehat{\mbox{\boldmath$\mathcal{K}$}}=\textbf{0}$,
$\widehat{\mbox{\boldmath$\mathcal{A}$}}\widehat{\mbox{\boldmath$\mathcal{A}$}}=\widehat{\mbox{\boldmath$\mathcal{A}$}}$, $\widehat{\mbox{\boldmath$\mathcal{K}$}}\widehat{\mbox{\boldmath$\mathcal{K}$}}=\widehat{\mbox{\boldmath$\mathcal{K}$}}$, and $\widehat{\mbox{\boldmath$\mathcal{J}$}}\widehat{\mbox{\boldmath$\mathcal{J}$}}=\widehat{\mbox{\boldmath$\mathcal{J}$}}$.} eigentensors
\begin{align*}
&\widehat{\mathcal{A}}_{ijkl}=\delta_{ik}\widehat{I}_{jl}-\dfrac{1}{2}\left(\widehat{I}_{ik}\widehat{I}_{jl}+\widehat{I}_{il}\widehat{I}_{jk}\right),\nonumber\\
&\widehat{\mathcal{K}}_{ijkl}=\dfrac{1}{2}\left(\widehat{I}_{ik}\widehat{I}_{jl}+\widehat{I}_{il}\widehat{I}_{jk}-\widehat{I}_{ij}\widehat{I}_{kl}\right),\nonumber\\
&\widehat{\mathcal{J}}_{ijkl}=\dfrac{1}{2}\widehat{I}_{ij}\widehat{I}_{kl},\label{AKJ-hat}
\end{align*}
$\widehat{\mu}^{\,\epsilon}\geq 0$ and $\widehat{\Lambda}^{\,\epsilon}\geq 0$ are the interface Lam\'{e} constants, and, again, $\widehat{\gamma}^\epsilon\geq 0$ denotes the surface tension on the interfaces in the initial configuration.
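For readers implementing (\ref{Lbulk}) and (\ref{Linterface}) numerically, the bulk eigentensors (\ref{A})--(\ref{J}) and the orthonormality properties stated in the footnote are straightforward to assemble and verify. A minimal NumPy sketch, written here for $n=3$:
\begin{verbatim}
import numpy as np

n = 3
I = np.eye(n)
Sym = 0.5*(np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I))
A = 0.5*(np.einsum('ik,jl->ijkl', I, I) - np.einsum('il,jk->ijkl', I, I))
J = np.einsum('ij,kl->ijkl', I, I)/n
K = Sym - J

dot = lambda P, Q: np.einsum('ijmn,mnkl->ijkl', P, Q)
# orthonormality and idempotence, as claimed in the footnote
assert np.allclose(dot(A, K), 0) and np.allclose(dot(A, J), 0)
assert np.allclose(dot(K, J), 0)
assert np.allclose(dot(A, A), A) and np.allclose(dot(K, K), K)
assert np.allclose(dot(J, J), J)
\end{verbatim}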
\begin{remark}
{\rm All inclusions are assumed to be made of the same liquid, hence the unique value of the Lam\'{e} constant $\Lambda^{(\texttt{i})}$ in (\ref{Lbulk}); note that the case of an incompressible liquid is recovered by setting $\Lambda^{(\texttt{i})}=+\infty$. However, because each inclusion is allowed to have its own initial size, the residual hydrostatic stresses $r^\epsilon_{I}$ in (\ref{Lbulk}) may be different for different inclusions.}
\end{remark}
\begin{remark}
{\rm The specific form of the residual hydrostatic stresses (\ref{rj}) is necessarily a direct consequence of equilibrium within the bulk of the liquid making up the inclusions and on the interfaces separating the inclusions from the elastomer in the initial configuration. Indeed, the residual hydrostatic stresses (\ref{rj}) are the solutions of the equations
\begin{equation}
\left\{\begin{array}{l}\nabla\left[ \displaystyle\sum_{I=1}^{\texttt{N}}\theta_I(\epsilon^{-1}{\bf X}) r^\epsilon_I\right]={\bf0}, \quad {\bf X}\in\mathrm{\Omega}\setminus {\rm \Gamma} \vspace{0.4cm}\\
\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I(\epsilon^{-1}{\bf X}) r^\epsilon_I=-\widehat{\gamma}^\epsilon\;{\rm tr}\,\widehat{\nabla}\widehat{{\bf N}}, \quad {\bf X}\in{\rm \Gamma}
\end{array}\right. .\label{BLM-Young-Laplace-0}
\end{equation}
Remark that the first of these equations is nothing more than balance of linear momentum within the inclusions, while the second one is the Young-Laplace equation.
}
\end{remark}
\paragraph{Scaling of the interface Lam\'{e} constants $\widehat{\mu}^{\,\epsilon}$, $\widehat{\Lambda}^{\,\epsilon}$ and initial surface tension $\widehat{\gamma}^\epsilon$} The governing equations (\ref{BVP_epsilon}), with (\ref{Lbulk}), (\ref{Linterface}), and (\ref{rj}), apply to elastomers filled with a periodic distribution of spherical liquid inclusions of arbitrary length scale $\epsilon$. In this work, we are interested in the limit as $\epsilon\searrow 0$ when the inclusions are much smaller than the length scale of ${\rm \Omega}$, which is considered to be a fixed domain. To this end, remark that equations (\ref{BVP_epsilon}) depend directly on the size of the inclusions through the residual hydrostatic stresses (\ref{rj}) in (\ref{BVP_epsilon})$_{1,2}$ and through the interface divergence $\widehat{{\rm Div}}$ operator in (\ref{BVP_epsilon})$_{2}$. Accordingly, in order to preserve the correct physics in the limit as $\epsilon\searrow 0$, the interface Lam\'{e} constants $\widehat{\mu}^{\,\epsilon}$, $\widehat{\Lambda}^{\,\epsilon}$ and initial surface tension $\widehat{\gamma}^\epsilon$ must scale appropriately with $\epsilon$; in particular, they must scale linearly in $\epsilon$. We write
\begin{align}
\widehat{\mu}^{\epsilon}=\epsilon\, \widehat{\mu},\quad \widehat{\Lambda}^{\epsilon}=\epsilon\, \widehat{\Lambda},\quad \widehat{\gamma}^\epsilon=\epsilon\, \widehat{\gamma},\label{gamma-eps}
\end{align}
where $\widehat{\mu}\geq0$, $\widehat{\Lambda}\geq0$, and $\widehat{\gamma}\geq0$.
Granted the scaling (\ref{gamma-eps}), the modulus of elasticity (\ref{Lbulk}) for the bulk depends on $\epsilon$ only through the combination $\epsilon^{-1}{\bf X}$, specifically,
\begin{align}
{\bf L}^\epsilon({\bf X})=&\left(1-\theta(\epsilon^{-1}{\bf X})\right){\bf L}^{(\texttt{m})}+\nonumber\\
&\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I(\epsilon^{-1}{\bf X})\left[n\Lambda^{(\texttt{i})}\mbox{\boldmath$\mathcal{J}$}-\dfrac{(n-1)\widehat{\gamma}}{A_I}(\mbox{\boldmath$\mathcal{A}$}-\mbox{\boldmath$\mathcal{K}$}+(n-1)\mbox{\boldmath$\mathcal{J}$})\right]=: \,{\bf L}(\epsilon^{-1}{\bf X}),\label{Lbulk-eps}
\end{align}
while the modulus of elasticity (\ref{Linterface}) for the interfaces specializes to
\begin{align}
\widehat{{\bf L}}^\epsilon= \epsilon \left( 2\,\widehat{\mu}\,\widehat{\mbox{\boldmath$\mathcal{K}$}}+2 (\widehat{\mu}+\widehat{\Lambda})\widehat{\mbox{\boldmath$\mathcal{J}$}}+
\widehat{\gamma} \left(\widehat{\mbox{\boldmath$\mathcal{A}$}}-\widehat{\mbox{\boldmath$\mathcal{K}$}}+\widehat{\mbox{\boldmath$\mathcal{J}$}}\right)\right)=:\epsilon\,\widehat{{\bf L}}.\label{Linterface-eps}
\end{align}
It follows that the boundary-value problem (\ref{BVP_epsilon}) specializes to
\begin{align}
\left\{\begin{array}{l}
{\rm Div}\left[{\bf L}(\epsilon^{-1}{\bf X})\nabla{\bf u}^\epsilon\right]=\textbf{0},\quad {\bf X}\in {\rm \Omega}\setminus{\rm \Gamma}\vspace{0.2cm}\\
\widehat{{\rm Div}}\left[\epsilon\,\widehat{{\bf L}}\widehat{\nabla}{\bf u}^\epsilon\right]-\jump{{\bf L}(\epsilon^{-1}{\bf X})\nabla{\bf u}^\epsilon}\widehat{{\bf N}}=\textbf{0},\quad {\bf X}\in {\rm \Gamma}\vspace{0.2cm}\\
{\bf u}^{\epsilon}({\bf X})=\overline{{\bf u}}({\bf X}),\quad {\bf X}\in\partial{\rm \Omega}\end{array}\right. . \label{BVP-1}
\end{align}
For fixed $\epsilon$, equations (\ref{BVP-1}) generalize the classical linear elastostatics equations for heterogeneous materials on two counts. Specifically, these equations feature: ($i$) residual stresses (in the inclusions) and ($ii$) a non-standard jump condition across material (matrix/inclusions) interfaces due to the presence of interfacial forces. These two traits have profound implications not only on the resulting mechanical response of the body, but also on the mathematical analysis of the problem. Indeed, remark that the \emph{non-symmetric} term
\begin{equation*}
-\theta_I(\epsilon^{-1}{\bf X})\dfrac{(n-1)\widehat{\gamma}}{A_I}\mbox{\boldmath$\mathcal{A}$}
\end{equation*}
in (\ref{Lbulk-eps}) makes the bulk modulus of elasticity ${\bf L}(\epsilon^{-1}{\bf X})$ \emph{not} positive definite. Similarly, for the physically prominent case when $\widehat{\gamma}>\widehat{\mu}$, the \emph{negative} term
\begin{equation*}
-\widehat{\gamma}\,\widehat{\mbox{\boldmath$\mathcal{K}$}}
\end{equation*}
in (\ref{Linterface-eps}) makes the interface modulus of elasticity $\widehat{{\bf L}}$ \emph{not} positive definite. Accordingly, the standard coercivity based on local positive definiteness cannot be invoked here to prove existence of a solution for (\ref{BVP-1}) via the Lax-Milgram theorem. Nevertheless, the expectation\footnote{In point of fact, explicit solutions can be readily worked out in terms of plane/spherical harmonics for some special cases, see, e.g., \cite{Sharmaetal03,Syleetal15b,GLP22}.} is that one can identify an appropriate weaker notion of coercivity that allows one to prove existence. We shall address this issue in a separate contribution. From now on, we simply assume that solutions ${\bf u}^\epsilon({\bf X})\in H^1 ({\rm \Omega};\mathbb{R}^n)$ exist for (\ref{BVP-1}).
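The loss of positive definiteness pointed out above is also easy to check numerically. Reusing the tensors \texttt{A}, \texttt{K}, \texttt{J} from the sketch above, the following lines evaluate the quadratic form ${\bf B}\cdot{\bf L}{\bf B}$ inside an inclusion for a skew-symmetric ${\bf B}$; the material values are arbitrary illustrative choices:
\begin{verbatim}
Lam_i, gamma_hat, A_I = 1.0, 1.0, 0.5      # illustrative values only
r = -(n - 1)*gamma_hat/A_I                 # residual hydrostatic stress
L_inc = n*Lam_i*J + r*(A - K + (n - 1)*J)  # bulk modulus in an inclusion
B = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])            # skew-symmetric trial field
print(np.einsum('ij,ijkl,kl->', B, L_inc, B))   # = r*|B|^2 < 0
\end{verbatim}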
\section{The limit as $\epsilon\searrow 0$ by the method of two-scale asymptotic expansions}\label{Sec: Expansion}
In this section, we present the derivation of the homogenized equations that emerge from the boundary-value problem (\ref{BVP-1}) in the limit as $\epsilon\searrow 0$ by means of the method of two-scale asymptotic expansions \cite{SP80,BLP11}.
We begin by looking for solutions of the asymptotic form
\begin{align}
u_i^\epsilon({\bf X})=& u_i^{(0)}({\bf X},\epsilon^{-1}{\bf X})+\epsilon\,u_i^{(1)}({\bf X},\epsilon^{-1}{\bf X})+\epsilon^2 u_i^{(2)}({\bf X},\epsilon^{-1}{\bf X})+...\nonumber\\
=&\sum_{s=0}^{\infty}\epsilon^s u_i^{(s)}({\bf X},\epsilon^{-1}{\bf X}), \label{ansatz}
\end{align}
where the functions ${\bf u}^{(s)}({\bf X},\epsilon^{-1}{\bf X})$ are $ Y$-periodic in their second argument and, according to the boundary condition (\ref{BVP-1})$_3$, such that ${\bf u}^{(0)}({\bf X},\epsilon^{-1}{\bf X})=\overline{{\bf u}}({\bf X})$ and ${\bf u}^{(s)}({\bf X},\epsilon^{-1}{\bf X})=\bf0$ for $s\neq 0$ on $\partial {\rm \Omega}$.
Next, we introduce the variables
\begin{equation*}
{\bf x}={\bf X}\quad {\rm and}\quad {\bf y}=\epsilon^{-1}{\bf X}
\end{equation*}
and operators
\begin{align*}
\texttt{A}_{ik}^\epsilon=\epsilon^{-2}\texttt{A}_{ik}^{(1)}+\epsilon^{-1}\texttt{A}_{ik}^{(2)}+\texttt{A}_{ik}^{(3)}&\quad {\rm with}\nonumber\\
&\texttt{A}_{ik}^{(1)}=\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial }{\partial y_l}\right],\nonumber\\
&\texttt{A}_{ik}^{(2)}=\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial }{\partial x_l}\right]+\dfrac{\partial}{\partial x_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial }{\partial y_l}\right],\nonumber\\
&\texttt{A}_{ik}^{(3)}=\dfrac{\partial}{\partial x_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial }{\partial x_l}\right],
\end{align*}
and
\begin{align*}
\widehat{\texttt{A}}_{ik}^{\,\epsilon}=&\epsilon^{-1}\widehat{\texttt{A}}_{ik}^{(1)}+\widehat{\texttt{A}}_{ik}^{(2)}+\epsilon\,\widehat{\texttt{A}}_{ik}^{(3)}\quad {\rm with}\nonumber\\
&\widehat{\texttt{A}}_{ik}^{(1)}=\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl}\dfrac{\partial }{\partial y_p} \widehat{I}_{pl}\right]\widehat{I}_{qj}-\jump{L_{ijkl}({\bf y})\dfrac{\partial }{\partial y_l}}\widehat{N}_j,\nonumber\\
&\widehat{\texttt{A}}_{ik}^{(2)}=\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl} \dfrac{\partial }{\partial x_p} \widehat{I}_{pl}\right]\widehat{I}_{qj}+\dfrac{\partial}{\partial x_q}\left[\widehat{L}_{ijkl} \dfrac{\partial }{\partial y_p} \widehat{I}_{pl}\right] \widehat{I}_{qj}-\jump{L_{ijkl}({\bf y})\dfrac{\partial }{\partial x_l} }\widehat{N}_j,\nonumber\\
&\widehat{\texttt{A}}_{ik}^{(3)}=\dfrac{\partial}{\partial x_q}\left[\widehat{L}_{ijkl} \dfrac{\partial }{\partial x_p} \widehat{I}_{pl}\right]\widehat{I}_{qj},
\end{align*}
in terms of which equations (\ref{BVP-1})$_{1,2}$ can be compactly rewritten as
\begin{align}
\left\{\begin{array}{l}
\texttt{A}_{ik}^{\epsilon} u_k^\epsilon=0\vspace{0.2cm}\\
\widehat{\texttt{A}}_{ik}^{\,\epsilon} u_k^\epsilon=0\end{array}\right. . \label{PDEs-1}
\end{align}
Substituting the ansatz (\ref{ansatz}) in the PDEs (\ref{PDEs-1}) and expanding in powers of $\epsilon$ leads to a hierarchy of equations for the functions ${\bf u}^{(s)}({\bf x},{\bf y})$. Only the first four of these, of $O(\epsilon^{-2})$, $O(\epsilon^{-1})$, $O(\epsilon^{0})$, and $O(\epsilon)$, turn out to be needed for our purposes here. In terms of the above-introduced operators, they read
\begin{align}
&\texttt{A}_{ik}^{(1)}u^{(0)}_k=0, \label{Asymptotic Eq1}\\
&\left\{\begin{array}{l}
\texttt{A}_{ik}^{(1)}u^{(1)}_k+\texttt{A}_{ik}^{(2)}u^{(0)}_k=0\vspace{0.2cm}\\
\widehat{\texttt{A}}_{ik}^{(1)}u^{(0)}_k=0
\end{array}\right. ,\label{Asymptotic Eq2}\\
&\left\{\begin{array}{l}
\texttt{A}_{ik}^{(1)}u^{(2)}_k+\texttt{A}_{ik}^{(2)}u^{(1)}_k+\texttt{A}_{ik}^{(3)}u^{(0)}_k=0\vspace{0.2cm}\\
\widehat{\texttt{A}}_{ik}^{(1)}u^{(1)}_k+\widehat{\texttt{A}}_{ik}^{(2)}u^{(0)}_k=0
\end{array}\right. ,\label{Asymptotic Eq3}\\
&\left\{\begin{array}{l}
\texttt{A}_{ik}^{(1)}u^{(3)}_k+\texttt{A}_{ik}^{(2)}u^{(2)}_k+\texttt{A}_{ik}^{(3)}u^{(1)}_k=0\vspace{0.2cm}\\
\widehat{\texttt{A}}_{ik}^{(1)}u^{(2)}_k+\widehat{\texttt{A}}_{ik}^{(2)}u^{(1)}_k+\widehat{\texttt{A}}_{ik}^{(3)}u^{(0)}_k=0
\end{array}\right. . \label{Asymptotic Eq4}
\end{align}
\paragraph{\textbf{The equations of $O(\epsilon^{-2})$ in the bulk and $O(\epsilon^{-1})$ on the interfaces}} The PDEs (\ref{Asymptotic Eq1}) and (\ref{Asymptotic Eq2})$_2$ can be combined to render the set of equations
\begin{align}
\left\{\begin{array}{l}
\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial u^{(0)}_k}{\partial y_l}({\bf x},{\bf y})\right]=0,\quad{\bf y}\in Y\setminus G\vspace{0.4cm}\\
\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl}\dfrac{\partial u^{(0)}_k}{\partial y_p}({\bf x},{\bf y}) \widehat{I}_{pl}\right]\widehat{I}_{qj}-\jump{L_{ijkl}({\bf y})\dfrac{\partial u^{(0)}_k}{\partial y_l}({\bf x},{\bf y})}\widehat{N}_j=0,\quad{\bf y}\in G
\end{array}\right.\label{Eq-u0}
\end{align}
for the function ${\bf u}^{(0)}({\bf x},{\bf y})$ in the unit cell $Y$, where $G$ has been introduced to denote the interfaces separating the elastomer from the inclusions contained in $Y$. In (\ref{Eq-u0}), ${\bf y}$ plays the role of the independent variable, whereas ${\bf x}$ is just a parameter. Accordingly, the solution of (\ref{Eq-u0}) with respect to ${\bf y}$ is simply a function of ${\bf x}$ that does not depend on ${\bf y}$. We write
\begin{equation}
{\bf u}^{(0)}({\bf x},{\bf y})={\bf u}({\bf x}).\label{Sol-u0}
\end{equation}
\paragraph{\textbf{The equations of $O(\epsilon^{-1})$ in the bulk and $O(\epsilon^{0})$ on the interfaces}} Making direct use of the result (\ref{Sol-u0}), the PDEs
(\ref{Asymptotic Eq2})$_1$ and (\ref{Asymptotic Eq3})$_2$ can be combined to yield
\begin{align}
\left\{\begin{array}{l}
\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial u^{(1)}_k}{\partial y_l}({\bf x},{\bf y})\right]=-\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial u_k}{\partial x_l}({\bf x})\right],\quad{\bf y}\in Y\setminus G\vspace{0.4cm}\\
\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl}\dfrac{\partial u^{(1)}_k}{\partial y_p}({\bf x},{\bf y}) \widehat{I}_{pl}\right]\widehat{I}_{qj}-\jump{L_{ijkl}({\bf y})\dfrac{\partial u^{(1)}_k}{\partial y_l}({\bf x},{\bf y})}\widehat{N}_j=\vspace{0.2cm}\\
-\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl} \dfrac{\partial u_k}{\partial x_p}({\bf x}) \widehat{I}_{pl}\right]\widehat{I}_{qj}+\jump{L_{ijkl}({\bf y})\dfrac{\partial u_k}{\partial x_l}({\bf x})}\widehat{N}_j,\quad{\bf y}\in G
\end{array}\right. ,\label{Eq-u1}
\end{align}
which, for a given function ${\bf u}({\bf x})$, can be thought of as equations for the function ${\bf u}^{(1)}({\bf x},{\bf y})$ in the unit cell $Y$ with ${\bf x}$ playing the role of a parameter.
By introducing the $Y$-periodic function $\omega_{kmn}({\bf y})\in H^1(Y;\mathbb{R}^{n^{3}})$ defined implicitly as the solution of the unit-cell problem
\begin{align}
\left\{\begin{array}{l}
\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial \omega_{kmn}}{\partial y_l}({\bf y})\right]=-\dfrac{\partial L_{ijmn}}{\partial y_j}\left({\bf y}\right),\quad{\bf y}\in Y\setminus G\vspace{0.4cm}\\
\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl}\dfrac{\partial \omega_{kmn}}{\partial y_p}({\bf y}) \widehat{I}_{pl}\right]\widehat{I}_{qj}-\jump{L_{ijkl}({\bf y})\dfrac{\partial \omega_{kmn}}{\partial y_l}({\bf y})}\widehat{N}_j=\vspace{0.2cm}\\
-\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl}\delta_{km} \widehat{I}_{nl}\right]\widehat{I}_{qj}+\jump{L_{ijkl}({\bf y})\delta_{km}\delta_{ln}}\widehat{N}_j,\quad{\bf y}\in G\vspace{0.4cm}\\
\int_{Y}\omega_{kmn}({\bf y}){\rm d}{\bf y}=0
\end{array}\right. ,\label{Eq-omega}
\end{align}
the solution (with respect to ${\bf y}$) of (\ref{Eq-u1}) can be written in the separable form
\begin{align}
u_k^{(1)}({\bf x},{\bf y})=\omega_{kmn}({\bf y})\dfrac{\partial u_{m}}{\partial x_n}({\bf x})+v^{(1)}_k({\bf x}),\label{Sol-u1}
\end{align}
where ${\bf v}^{(1)}({\bf x})$ is an arbitrary function of ${\bf x}$.
\paragraph{\textbf{The equations of $O(\epsilon^{0})$ in the bulk and $O(\epsilon)$ on the interfaces}} In turn, making again use of the result (\ref{Sol-u0}), the combination of PDEs (\ref{Asymptotic Eq3})$_1$ and (\ref{Asymptotic Eq4})$_2$ renders the set of equations
\begin{align}
\left\{\begin{array}{l}
\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial u^{(2)}_k}{\partial y_l}({\bf x},{\bf y})\right]=-\dfrac{\partial}{\partial y_j}\left[L_{ijkl}\left({\bf y}\right)\dfrac{\partial u^{(1)}_k}{\partial x_l}({\bf x},{\bf y})\right]-\vspace{0.2cm}\\
\hspace{3.2cm}\dfrac{\partial}{\partial x_j}\left[L_{ijkl}({\bf y})\left(\dfrac{\partial u_k}{\partial x_l}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_l}({\bf x},{\bf y})\right)\right],\quad{\bf y}\in Y\setminus G\vspace{0.4cm}\\
\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl}\dfrac{\partial u^{(2)}_k}{\partial y_p}({\bf x},{\bf y}) \widehat{I}_{pl}\right]\widehat{I}_{qj}-\jump{L_{ijkl}({\bf y})\dfrac{\partial u^{(2)}_k}{\partial y_l}({\bf x},{\bf y})}\widehat{N}_j=\vspace{0.2cm}\\
\hspace{0.45cm}-\dfrac{\partial}{\partial y_q}\left[\widehat{L}_{ijkl} \dfrac{\partial u^{(1)}_k}{\partial x_p}({\bf x},{\bf y}) \widehat{I}_{pl}\right]\widehat{I}_{qj}+\jump{L_{ijkl}({\bf y})\dfrac{\partial u^{(1)}_k}{\partial x_l}({\bf x},{\bf y})}\widehat{N}_j-\vspace{0.2cm}\\
\hspace{2.75cm}\dfrac{\partial}{\partial x_q}\left[\widehat{L}_{ijkl}\left(\dfrac{\partial u_k}{\partial x_p}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_p}({\bf x},{\bf y})\right)\widehat{I}_{pl}\right]\widehat{I}_{qj},\quad{\bf y}\in G
\end{array}\right. .\label{Eq-u2}
\end{align}
For any function ${\bf u}({\bf x})$ of choice, noting that ${\bf u}^{(1)}({\bf x},{\bf y})$ is given by (\ref{Sol-u1}) in terms of ${\bf u}({\bf x})$, equations (\ref{Eq-u2}) are nothing more than a unit-cell problem for the function ${\bf u}^{(2)}({\bf x},{\bf y})$, where once more ${\bf x}$ plays the role of a parameter.
Analogously to the classical context of elastostatics without residual stresses and interfacial forces (\cite{BLP11}, Chapter 2), equation (\ref{Eq-u2}) can be manipulated to yield the governing equation for the leading-order function (\ref{Sol-u0}) in the ansatz (\ref{ansatz}). Indeed, upon integrating equation (\ref{Eq-u2})$_1$ over $Y$, equation (\ref{Eq-u2})$_2$ over $G$, summing the two results together, then using the bulk divergence theorem
\begin{equation}
\displaystyle\int_{Y}\dfrac{\partial (\cdot)}{\partial y_j}\,{\rm d}{\bf y}=\displaystyle\int_{\partial Y}(\cdot) N_j\,{\rm d}{\bf y}+\displaystyle\int_{G} \jump{\cdot}\widehat{N}_{j}\,{\rm d}{\bf y}\label{Div-Bulk}
\end{equation}
and the interface divergence theorem
\begin{equation}
\displaystyle\int_{G}\dfrac{\partial (\cdot)}{\partial y_q}\widehat{I}_{qj}\,{\rm d}{\bf y}=\displaystyle\int_{G}\dfrac{\partial \widehat{N}_m}{\partial y_n}\widehat{I}_{mn}(\cdot) \widehat{N}_j\,{\rm d}{\bf y}, \label{Div-Interfaces}
\end{equation}
noting that $\widehat{L}_{ijkl}\widehat{N}_{j}=0$, and recognizing the identity $\widehat{L}_{ijkl}\widehat{I}_{qj}=\widehat{L}_{iqkl}$, it follows that
\begin{align*}
&\dfrac{\partial}{\partial x_j}\displaystyle\int_{Y} L_{ijkl}({\bf y})\left(\dfrac{\partial u_k}{\partial x_l}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_l}({\bf x},{\bf y})\right){\rm d}{\bf y}+\nonumber\\
&\dfrac{\partial}{\partial x_j}\displaystyle\int_{G}\widehat{L}_{ijkl}\left(\dfrac{\partial u_k}{\partial x_p}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_p}({\bf x},{\bf y})\right)\widehat{I}_{pl}{\rm d}{\bf y}=0 .
\end{align*}
Finally, making use of the representation (\ref{Sol-u1}) for ${\bf u}^{(1)}({\bf x},{\bf y})$ in terms of the $Y$-periodic function $\omega_{kmn}({\bf y})$, it is a simple matter to deduce that this last relation can be rewritten in the form
\begin{align}
\dfrac{\partial}{\partial x_j}\left[\overline{L}_{ijkl}\dfrac{\partial u_{k}}{\partial x_l}({\bf x})\right]=0,\label{Hom-Eq-0}
\end{align}
where
\begin{align}
\overline{L}_{ijkl}=&\displaystyle\int_{Y} L_{ijmn}({\bf y})\left(\delta_{mk}\delta_{nl}+\dfrac{\partial \omega_{mkl}}{\partial y_n}({\bf y})\right){\rm d}{\bf y}+\nonumber\\
&\displaystyle\int_{G}\widehat{L}_{ijmn}\left(\delta_{mk}\widehat{I}_{nl}+\dfrac{\partial \omega_{mkl}}{\partial y_p}({\bf y})\widehat{I}_{pn}\right){\rm d}{\bf y}. \label{Leff-0}
\end{align}
Equation (\ref{Hom-Eq-0}) is the homogenized equation in ${\rm \Omega}$ that, together with the boundary condition ${\bf u}({\bf x})=\overline{{\bf u}}({\bf x})$ on $\partial{\rm \Omega}$, completely determines the macroscopic displacement field ${\bf u}({\bf x})$. The following remarks are in order:
\paragraph{i. Physical interpretation of the homogenized equation (\ref{Hom-Eq-0})} Equation (\ref{Hom-Eq-0}), together with the boundary condition ${\bf u}({\bf x})=\overline{{\bf u}}({\bf x})$ on $\partial{\rm \Omega}$, corresponds to the governing equation for the displacement field within a \emph{homogeneous} linear elastic solid, with constant effective modulus of elasticity $\overline{{\bf L}}$, undergoing small quasistatic deformations.
\paragraph{ii. Absence of a macroscopic residual stress} In spite of the fact that there is a local residual stress within the inclusions and an initial surface tension on the elastomer/inclusions interfaces, the homogenized equation (\ref{Hom-Eq-0}) is free of residual stresses. The reason behind this result is that the averages of the local residual stress and of the initial surface tension cancel each other out. Precisely,
\begin{align}
-\displaystyle\int_{Y}\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I({\bf y})\dfrac{(n-1)\widehat{\gamma}}{A_I}{\bf I}\,{\rm d}{\bf y}+\displaystyle\int_{G}\widehat{\gamma}\,\widehat{{\bf I}}\,{\rm d}{\bf y}=\textbf{0}.\label{zero-res-eff}
\end{align}
\paragraph{iii. The effective modulus of elasticity $\overline{{\bf L}}$} The effective modulus of elasticity (\ref{Leff-0}) that emerges in the homogenized equation (\ref{Hom-Eq-0}) is independent of the choice of the domain ${\rm \Omega}$ occupied by the filled elastomer and of the boundary conditions on $\partial {\rm \Omega}$. It does depend, however, on the size of the inclusions and the residual hydrostatic stress that they are subjected to in the initial configuration, as well as on the elasticity of the interfaces and the initial surface tension on them.
\paragraph{iv. Symmetries of $\overline{{\bf L}}$} The effective modulus of elasticity (\ref{Leff-0}) satisfies the major and minor symmetries
\begin{align}
\overline{L}_{ijkl}=\overline{L}_{klij}\quad {\rm and}\quad \overline{L}_{ijkl}=\overline{L}_{jikl}=\overline{L}_{ijlk}\label{Symm_Leff}
\end{align}
of a conventional homogeneous elastic solid, this in spite of the fact that the local moduli of elasticity ${\bf L}({\bf y})$ and $\widehat{{\bf L}}$ for the bulk and the interfaces do \emph{not} possess minor symmetries.
The major symmetry $\overline{L}_{ijkl}=\overline{L}_{klij}$ is a direct consequence of the fact that the local moduli ${\bf L}({\bf y})$ and $\widehat{{\bf L}}$ themselves possess major symmetry. To see this, making use of the bulk (\ref{Div-Bulk}) and interface (\ref{Div-Interfaces}) divergence theorems, as well as of the definition (\ref{Eq-omega}) for the $Y$-periodic corrector function $\boldsymbol{\omega}({\bf y})$, first note that
\begin{align*}
&\displaystyle\int_{Y}\dfrac{\partial \omega_{mij}}{\partial y_n}({\bf y})L_{mnpq}({\bf y})\left(\delta_{pk}\delta_{ql}+\dfrac{\partial \omega_{pkl}}{\partial y_q}({\bf y})\right){\rm d}{\bf y}+\displaystyle\int_{G}\dfrac{\partial \omega_{mij}}{\partial y_r}({\bf y})\widehat{I}_{rn}\widehat{L}_{mnpq}\times\\
&\left(\delta_{pk}\widehat{I}_{ql}+
\dfrac{\partial \omega_{pkl}}{\partial y_s}({\bf y})\widehat{I}_{sq}\right){\rm d}{\bf y}=\displaystyle\int_{Y}\dfrac{\partial}{\partial y_n}\left[\omega_{mij}({\bf y})L_{mnpq}({\bf y})\left(\delta_{pk}\delta_{ql}+\dfrac{\partial \omega_{pkl}}{\partial y_q}({\bf y})\right)\right]{\rm d}{\bf y}-\\
&\displaystyle\int_{Y}\omega_{mij}({\bf y})\dfrac{\partial}{\partial y_n}\left[L_{mnpq}({\bf y})\left(\delta_{pk}\delta_{ql}+\dfrac{\partial \omega_{pkl}}{\partial y_q}({\bf y})\right)\right]{\rm d}{\bf y}+\displaystyle\int_{G}\dfrac{\partial}{\partial y_r}\left[ \omega_{mij}({\bf y})\widehat{L}_{mnpq}\times\right.\\
&\left.\left(\delta_{pk}\widehat{I}_{ql}+
\dfrac{\partial \omega_{pkl}}{\partial y_s}({\bf y})\widehat{I}_{sq}\right)\right]\widehat{I}_{rn}{\rm d}{\bf y}-\displaystyle\int_{G}\omega_{mij}({\bf y})\dfrac{\partial }{\partial y_r}\left[\widehat{L}_{mnpq}\left(\delta_{pk}\widehat{I}_{ql}+
\dfrac{\partial \omega_{pkl}}{\partial y_s}({\bf y})\widehat{I}_{sq}\right)\right]\times\\
&\widehat{I}_{rn}{\rm d}{\bf y}=\displaystyle\int_{G}\jump{\omega_{mij}({\bf y})L_{mnpq}({\bf y})\left(\delta_{pk}\delta_{ql}+\dfrac{\partial \omega_{pkl}}{\partial y_q}({\bf y})\right)}\widehat{N}_n{\rm d}{\bf y}-\\
&\displaystyle\int_{G}\omega_{mij}({\bf y})\jump{L_{mnpq}({\bf y})\left(\delta_{pk}\delta_{ql}+\dfrac{\partial \omega_{pkl}}{\partial y_q}({\bf y})\right)}\widehat{N}_{n}{\rm d}{\bf y}=0.
\end{align*}
With this result at hand, it is a simple matter to verify that the formula (\ref{Leff-0}) can be rewritten in the equivalent form
\begin{align*}
\overline{L}_{ijkl}=&\displaystyle\int_{Y}\left(\delta_{mi}\delta_{nj}+\dfrac{\partial \omega_{mij}}{\partial y_n}({\bf y})\right)L_{mnpq}({\bf y})\left(\delta_{pk}\delta_{ql}+\dfrac{\partial \omega_{pkl}}{\partial y_q}({\bf y})\right){\rm d}{\bf y}+\\
&\displaystyle\int_{G}\left(\delta_{mi}\widehat{I}_{nj}+\dfrac{\partial \omega_{mij}}{\partial y_r}({\bf y})\widehat{I}_{rn}\right)\widehat{L}_{mnpq}\left(\delta_{pk}\widehat{I}_{ql}+\dfrac{\partial \omega_{pkl}}{\partial y_s}({\bf y})\widehat{I}_{sq}\right){\rm d}{\bf y},
\end{align*}
from which it is trivial to establish that $\overline{L}_{ijkl}=\overline{L}_{klij}$ since $L_{mnpq}({\bf y})=L_{pqmn}({\bf y})$ and $\widehat{L}_{mnpq}=\widehat{L}_{pqmn}$.
On the other hand, the minor symmetries $\overline{L}_{ijkl}=\overline{L}_{jikl}$ and $\overline{L}_{ijkl}=\overline{L}_{ijlk}$ are a direct consequence of the absence of a macroscopic residual stress (\ref{zero-res-eff}) and the macroscopic major symmetry (\ref{Symm_Leff})$_1$ of $\overline{{\bf L}}$. To see this, first note that
\begin{align*}
&-\displaystyle\int_{Y}\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I({\bf y})\dfrac{(n-1)\widehat{\gamma}}{A_I}\left(\delta_{il}\delta_{jk}+\dfrac{\partial \omega_{jkl}}{\partial y_i}({\bf y})\right)\,{\rm d}{\bf y}+\displaystyle\int_{G}\widehat{\gamma}\left(\delta_{jk}\widehat{I}_{il}+\dfrac{\partial \omega_{jkl}}{\partial y_p}({\bf y})\widehat{I}_{ip}\right)\,{\rm d}{\bf y}=\\
&-\displaystyle\int_{Y}\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I({\bf y})\dfrac{(n-1)\widehat{\gamma}}{A_I}\dfrac{\partial \omega_{jkl}}{\partial y_i}({\bf y})\,{\rm d}{\bf y}+\displaystyle\int_{G}\widehat{\gamma}\dfrac{\partial \omega_{jkl}}{\partial y_p}({\bf y})\widehat{I}_{ip}\,{\rm d}{\bf y}=-\displaystyle\int_{Y}\dfrac{\partial}{\partial y_i}\left[\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I({\bf y})\times\right.\\
&\left.\dfrac{(n-1)\widehat{\gamma}}{A_I}\omega_{jkl}({\bf y})\right]{\rm d}{\bf y}+\displaystyle\int_{Y}\dfrac{\partial}{\partial y_i}\left[\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I({\bf y})\dfrac{(n-1)\widehat{\gamma}}{A_I}\right]\omega_{jkl}({\bf y}){\rm d}{\bf y}+\displaystyle\int_{G}\widehat{\gamma}\dfrac{\partial\widehat{N}_m}{\partial y_n}\widehat{I}_{mn}\widehat{N}_i\times\\
&\omega_{jkl}({\bf y})\,{\rm d}{\bf y}=\displaystyle\sum_{I=1}^{\texttt{N}}\left(-\displaystyle\int_{G_I}\dfrac{(n-1)\widehat{\gamma}}{A_I}\omega_{jkl}({\bf y})\widehat{N}_i{\rm d}{\bf y}+\displaystyle\int_{G_I}\dfrac{(n-1)\widehat{\gamma}}{A_I}\omega_{jkl}({\bf y})\widehat{N}_i{\rm d}{\bf y}\right)=0
\end{align*}
where $G_I$ denotes the interface of the $I$th inclusion and where use has been made of relation (\ref{zero-res-eff}), the bulk (\ref{Div-Bulk}) and interface (\ref{Div-Interfaces}) divergence theorems, as well as of the $Y$-periodicity of the corrector function $\boldsymbol{\omega}({\bf y})$. In view of this last result, it is straightforward to show that the formula (\ref{Leff-0}) can also be rewritten as
\begin{align*}
\overline{L}_{ijkl}=&\displaystyle\int_{Y}\left(L_{ijmn}({\bf y})-\displaystyle\sum_{I=1}^{\texttt{N}}\theta_I({\bf y})\dfrac{(n-1)\widehat{\gamma}}{A_I}\delta_{in}\delta_{jm}\right)\left(\delta_{mk}\delta_{nl}+\dfrac{\partial \omega_{mkl}}{\partial y_n}({\bf y})\right)\,{\rm d}{\bf y}+\nonumber\\
&\displaystyle\int_{G}\left(\widehat{L}_{ijmn}+\widehat{\gamma}\delta_{jm}\widehat{I}_{in}\right)\left(\delta_{mk}\widehat{I}_{nl}+\dfrac{\partial \omega_{mkl}}{\partial y_p}({\bf y})\widehat{I}_{pn}\right)\,{\rm d}{\bf y},
\end{align*}
from which it is trivial to establish that $\overline{L}_{ijkl}=\overline{L}_{jikl}$ since the combinations $L_{ijmn}({\bf y})$ $-(\sum_{I=1}^{\texttt{N}}\theta_I({\bf y})(n-1)\widehat{\gamma}/A_I) \delta_{in}\delta_{jm}$ and $\widehat{L}_{ijmn}+\widehat{\gamma}\delta_{jm}\widehat{I}_{in}$ possess minor symmetries. Minor symmetries in the last two indices $\overline{L}_{ijkl}=\overline{L}_{ijlk}$ can be established by exploiting the major symmetry $\overline{L}_{ijkl}=\overline{L}_{klij}$ and then following the same steps as above.
\paragraph{v. Positive definiteness of $\overline{{\bf L}}$} Physically, the expectation is that the effective modulus of elasticity (\ref{Leff-0}) be positive definite. However, given that the local moduli of elasticity ${\bf L}({\bf y})$ and $\widehat{{\bf L}}$ for the bulk and the interfaces are \emph{not} positive definite in general, the standard argument (\cite{BLP11}, Section 2.3 of Chapter 1) to prove so does \emph{not} apply here. This difficulty is intimately related to the difficulty of proving existence of solution for the boundary-value problem (\ref{BVP-1}) noted at the end of the preceding section. We shall address both of these issues in a separate contribution.
\paragraph{vi. Computation of $\overline{{\bf L}}$} The computation of the effective modulus of elasticity (\ref{Leff-0}) amounts to solving the unit-cell problem (\ref{Eq-omega}) for the corrector $\boldsymbol{\omega}({\bf y})$. In general, this can only be accomplished numerically. Ghosh and Lopez-Pamies \cite{GLP22} have recently put forth a finite-element (FE) scheme to generate numerical solutions for such classes of boundary-value problems. In the next section, by way of an example, we make use of that scheme to generate solutions for the effective modulus of elasticity of isotropic suspensions of incompressible liquid $2$-spherical inclusions of monodisperse size embedded in an isotropic incompressible elastomer.
\paragraph{vii. Strain and stress macro-variables} A quick glance at the homogenized equation (\ref{Hom-Eq-0}) suffices to identify
\begin{equation}
H_{ij}({\bf x})= \dfrac{\partial u_i}{\partial x_j}({\bf x})\label{H-macro}
\end{equation}
as the macroscopic displacement gradient field and
\begin{equation}
S_{ij}({\bf x})= \overline{L}_{ijkl}\dfrac{\partial u_k}{\partial x_l}({\bf x})\label{S-macro}
\end{equation}
as the macroscopic stress measure that describe the constitutive response of the resulting effective elastic solid in the homogenization limit.
By virtue of the minor symmetries (\ref{Symm_Leff})$_2$ of the effective modulus of elasticity $\overline{{\bf L}}$, remark that the constitutive relation between (\ref{H-macro}) and (\ref{S-macro}) can be written in the classical stress-strain form
\begin{equation}
S_{ij}({\bf x})= \overline{L}_{ijkl}E_{kl}({\bf x}),\quad E_{ij}({\bf x}):=\dfrac{1}{2}\left(H_{ij}({\bf x})+H_{ji}({\bf x})\right).\label{S-E-macro}
\end{equation}
The macro-variable (\ref{H-macro}) happens to be identical to the one that arises in the classical context of elastostatics without residual stresses and interfacial forces (\cite{BLP11}, Chapter 2). Precisely,
\begin{align*}
H_{ij}({\bf x})=& \displaystyle\int_{Y}\left(\dfrac{\partial u_i}{\partial x_j}({\bf x})+\dfrac{\partial u^{(1)}_i}{\partial y_j}({\bf x},{\bf y})\right){\rm d}{\bf y}\nonumber\\
=&\dfrac{\partial u_i}{\partial x_j}({\bf x})+\displaystyle\int_{\partial Y} u^{(1)}_i({\bf x},{\bf y}) N_j {\rm d}{\bf y}+\displaystyle\int_{G} \jump{u^{(1)}_i({\bf x},{\bf y})} \widehat{N}_j {\rm d}{\bf y}\nonumber\\
=&\dfrac{\partial u_i}{\partial x_j}({\bf x}).
\end{align*}
By contrast, the macro-variable (\ref{S-macro}) is \emph{not} in accord with the classical result. Instead, relation (\ref{S-macro}) corresponds to the average over the unit cell $Y$ of the local stress in the bulk \emph{plus} the average over the interfaces $G$ of the local interface stress. Precisely,
\begin{align*}
S_{ij}({\bf x})=& \displaystyle\int_{Y}L_{ijkl}({\bf y})\left(\dfrac{\partial u_k}{\partial x_l}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_l}({\bf x},{\bf y})\right){\rm d}{\bf y}+\nonumber\\
&\displaystyle\int_{G}\widehat{L}_{ijkl}\left(\dfrac{\partial u_k}{\partial x_p}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_p}({\bf x},{\bf y})\right)\widehat{I}_{pl}{\rm d}{\bf y}\nonumber\\
=&\overline{L}_{ijkl}\dfrac{\partial u_k}{\partial x_l}({\bf x}).
\end{align*}
A similar result emerges in the homogenization of elastic dielectric composites containing space charges \cite{LLP17,FGLP21}.
\paragraph{viii. Effective stored-energy function} By virtue of the major symmetry (\ref{Symm_Leff})$_1$ of the effective modulus of elasticity $\overline{{\bf L}}$, the macroscopic constitutive relation (\ref{S-E-macro}) is a hyperelastic one. That is, there is an effective stored-energy function, $\overline{W}(\textbf{E})$ say, whose derivative with respect to the macroscopic strain $\textbf{E}$ yields the macroscopic stress ${\bf S}$.
Precisely, making use of the bulk (\ref{Div-Bulk}) and interface (\ref{Div-Interfaces}) divergence theorems, together with the representation (\ref{Sol-u1}) for ${\bf u}^{(1)}({\bf x},{\bf y})$ and the definition (\ref{Eq-omega}) for the $Y$-periodic corrector function $\boldsymbol{\omega}({\bf y})$, it is not difficult to deduce that
\begin{equation*}
S_{ij}=\dfrac{\partial \overline{W}}{\partial E_{ij}}(\textbf{E}),
\end{equation*}
where
\begin{align*}
\overline{W}(\textbf{E})=&\dfrac{1}{2}\displaystyle\int_{Y}\left(\dfrac{\partial u_i}{\partial x_j}({\bf x})+\dfrac{\partial u^{(1)}_i}{\partial y_j}({\bf x},{\bf y})\right)L_{ijkl}({\bf y})\left(\dfrac{\partial u_k}{\partial x_l}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_l}({\bf x},{\bf y})\right)\,{\rm d}{\bf y}+\nonumber\\
&\dfrac{1}{2}\displaystyle\int_{G}\left(\dfrac{\partial u_i}{\partial x_p}({\bf x})+\dfrac{\partial u^{(1)}_i}{\partial y_p}({\bf x},{\bf y})\right)\widehat{I}_{pj}\widehat{L}_{ijkl}\left(\dfrac{\partial u_k}{\partial x_q}({\bf x})+\dfrac{\partial u^{(1)}_k}{\partial y_q}({\bf x},{\bf y})\right)\widehat{I}_{ql}\,{\rm d}{\bf y} \nonumber\\
=&\dfrac{1}{2} E_{ij} \overline{L}_{ijkl} E_{kl}.
\end{align*}
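The hyperelastic structure asserted in this remark can be checked numerically: since $\overline{W}$ is quadratic, central finite differences of $\overline{W}$ reproduce ${\bf S}=\overline{{\bf L}}\textbf{E}$ exactly (up to roundoff) whenever the modulus possesses major symmetry. A minimal sketch, with a random major-symmetric tensor standing in for $\overline{{\bf L}}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 2
M = rng.normal(size=(n*n, n*n))
Lbar = (0.5*(M + M.T)).reshape(n, n, n, n)   # major-symmetric stand-in
W = lambda E: 0.5*np.einsum('ij,ijkl,kl->', E, Lbar, E)
E = rng.normal(size=(n, n)); E = 0.5*(E + E.T)
S = np.einsum('ijkl,kl->ij', Lbar, E)
h, S_fd = 1e-6, np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dE = np.zeros((n, n)); dE[i, j] = h
        S_fd[i, j] = (W(E + dE) - W(E - dE))/(2*h)
assert np.allclose(S, S_fd, atol=1e-8)       # S = dW/dE
\end{verbatim}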
\section{The homogenized behavior of isotropic suspensions of monodisperse $2$-spherical inclusions}\label{Sec: Application}
In this final section, for demonstration purposes, we present numerical results for the effective modulus of elasticity $\overline{{\bf L}}$ of a basic class of elastomers filled with liquid inclusions, that of isotropic suspensions of $2$-spherical inclusions of monodisperse size,
\begin{equation*}
A_I=A\quad I=1,...,\texttt{N}
\end{equation*}
made of an incompressible liquid,
\begin{equation*}
\Lambda^{(\texttt{i})}=+\infty,
\end{equation*}
embedded in an isotropic incompressible elastomer,
\begin{equation*}
{\bf L}^{(\texttt{m})}=2\mu^{(\texttt{m})}\mbox{\boldmath$\mathcal{K}$}+\infty \mbox{\boldmath$\mathcal{J}$},
\end{equation*}
wherein the interfaces only feature a constant surface tension $\widehat{\gamma}$ and hence the interface Lam\'e constants
\begin{equation*}
\widehat{\mu}=\widehat{\Lambda}=0.
\end{equation*}
For this fundamental class of filled elastomers, remark that there is a sole dimensionless material constant that describes the constitutive behavior, the so-called elasto-capillary number
\begin{equation*}
eCa:=\dfrac{\widehat{\gamma}}{2\mu^{(\texttt{m})}A}.
\end{equation*}
Physically, $eCa$ is a measure of interface stiffness $\widehat{\gamma}/2A$ relative to bulk stiffness $\mu^{(\texttt{m})}$ \cite{Andreottietal16,Bicoetal18}.
\subsection{Construction of the unit cells $Y$}\label{Sec:Microstructures}
Prior to the presentation of the results for $\overline{{\bf L}}$ \emph{per se} in Subsection \ref{Sec:Results}, we begin by outlining the process by which we constructed the unit cells $Y$.
We follow in the footsteps of a well-settled approach \cite{Gusev97,LPGD13} and approximate the aforementioned class of \emph{isotropic} filled elastomers as infinite media made of the periodic repetition of unit cells $Y$ that contain random distributions of a sufficiently large number $\texttt{N}$ of inclusions. A critical point in this approach is to determine what that sufficiently large number $\texttt{N}$ is so that the resulting homogenized constitutive behaviors are indeed isotropic to a high enough degree of accuracy.
In order to cover a large range of inclusion concentrations (that is, in the present context of $n=2$ space dimensions, area fractions of inclusions)
\begin{equation*}
c:=\displaystyle\int_{Y}\theta({\bf y}){\rm d}{\bf y},
\end{equation*}
we make use of the algorithm introduced by Lubachevsky and Stillinger \cite{LS90}. Roughly speaking, the idea behind this algorithm is to randomly seed at once in the unit cell $Y$ the desired total number $\texttt{N}$ of inclusions as points endowed with random velocities and a uniform radial growth rate. As the points move and grow into $2$-spheres, their collisions with one another are described by conservation of momentum, while their crossings through the boundaries of the unit cell are described by periodicity. When the desired concentration $c$ is reached, the algorithm is stopped.
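A minimal, fixed-time-step sketch of this idea is given below. It replaces the event-driven collision handling of the actual Lubachevsky--Stillinger algorithm by a coarse approximation (equal-mass exchange of normal velocity components between overlapping disks) and is meant only to convey the construction, not to reproduce the packings used in this work:
\begin{verbatim}
import numpy as np

def grow_disks(N, c_target, dt=1e-3, rate=0.05, seed=0):
    # coarse, fixed-step variant of the Lubachevsky-Stillinger idea
    rng = np.random.default_rng(seed)
    x = rng.random((N, 2))                 # points in the unit cell Y
    v = rng.normal(size=(N, 2))            # random velocities
    a = 0.0                                # common (monodisperse) radius
    while np.pi*N*a**2 < c_target:         # target area fraction c
        x = (x + dt*v) % 1.0               # periodic boundary conditions
        a += dt*rate                       # uniform radial growth
        for i in range(N):                 # O(N^2) pair sweep
            d = x - x[i]
            d -= np.round(d)               # minimum-image convention
            s = np.linalg.norm(d, axis=1)
            for j in np.flatnonzero((s < 2*a) & (s > 0)):
                nrm = d[j]/s[j]
                dv = np.dot(v[j] - v[i], nrm)
                if dv < 0:                 # approaching pair: elastic swap
                    v[i] += dv*nrm
                    v[j] -= dv*nrm
    return x, a
\end{verbatim}
For instance, \texttt{grow\_disks(60, 0.30)} returns the centers and the common radius of a $60$-disk configuration at $c=0.30$.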
Although the algorithm allows one to generate microstructures spanning the full range of concentrations --- from the dilute limit $c\searrow 0$ to the percolation threshold $c\nearrow c_p\approx 0.90$ \cite{LS90} --- we do not wish to deal with the computational challenges of extremely packed microstructures and restrict our attention here to the range $c\in[0,0.50]$. Specifically, the construction process that we carried out is as follows.
In the footsteps of \cite{LFLP22,LLP22}, we started by generating a total of 10,800 realizations of unit cells $Y=(0,1)^2$ containing 30, 60, 120, 240, 480, or 960 randomly distributed inclusions with six different concentrations $c=0.05, 0.10, 0.20, 0.30, 0.40, 0.50$ and three different minimum inter-inclusion distances $d=0.01 A, 0.02 A, 0.05A$. For each realization, we computed the two-point correlation function $P_2({\bf y})=\int_{Y}\theta({\bf y}^\prime)\theta({\bf y}+{\bf y}^\prime)\,{\rm d}{\bf y}^\prime$. As a first assessment of the deviation from exact geometric isotropy (which is only achieved in the limit of infinitely many inclusions), we then computed the deviation of $P_2({\bf y})$ from its isotropic projection $I_2(|{\bf y}|)=1/(2\pi)\int_0^{2\pi}P_2(|{\bf y}|\cos\phi \textbf{e}_1+|{\bf y}|\sin\phi \textbf{e}_2)\,{\rm d}\phi$ onto the space of functions that depend on ${\bf y}$ only through its magnitude $|{\bf y}|$; recall that $\{\textbf{e}_1,\textbf{e}_2\}$ stand for the principal axes of the unit cell $Y$. Realizations that did not satisfy the condition
\begin{equation}
\dfrac{||P_2({\bf y})-I_2(|{\bf y}|)||_1}{||I_2(|{\bf y}|)||_1}\le 10^{-2}\label{P2}
\end{equation}
were discarded as not sufficiently isotropic. This filtering process reduced the initial set of 10,800 realizations to just a set of 90 potentially acceptable realizations, five for each of the six concentrations $c=0.05, 0.10, 0.20, 0.30, 0.40, 0.50$ and the three minimum inter-inclusion distances $d=0.01 A, 0.02 A, 0.05A$.
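For completeness, we note that the geometric criterion (\ref{P2}) is inexpensive to evaluate on a pixelated characteristic function by means of the convolution theorem. A minimal sketch follows, in which the angular average $I_2$ is approximated by binning integer radii; this discretization of the continuous projection is an assumption of the sketch:
\begin{verbatim}
import numpy as np

def isotropy_deviation(theta):
    # theta: (n x n) pixelated characteristic function on the periodic cell
    n = theta.shape[0]
    # periodic two-point correlation P2 via the convolution theorem
    P2 = np.fft.ifft2(np.abs(np.fft.fft2(theta))**2).real/n**2
    # radial projection I2: average of P2 over bins of |y|
    k = np.fft.fftfreq(n)*n
    R = np.hypot(*np.meshgrid(k, k, indexing='ij')).astype(int)
    counts = np.maximum(np.bincount(R.ravel()), 1)
    I2 = (np.bincount(R.ravel(), weights=P2.ravel())/counts)[R]
    return np.abs(P2 - I2).sum()/np.abs(I2).sum()
\end{verbatim}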
Thanks to its pure geometric nature, the criterion (\ref{P2}) provides a computationally inexpensive tool to weed out microstructures that are unlikely to lead to isotropic constitutive behaviors. However, microstructures that do satisfy (\ref{P2}) need not exhibit isotropic constitutive behaviors. To conclusively establish whether a given realization with a \emph{finite} number $\texttt{N}$ of inclusions does indeed exhibit isotropic constitutive behavior to within the desired accuracy, one needs to compute its effective modulus of elasticity $\overline{{\bf L}}$ in its entirety and then quantify its deviation from exact constitutive isotropy. Accordingly, for each of the 90 potentially acceptable realizations and each of the three elasto-capillary numbers $eCa=0.20,1,5$ that we considered in this study, we generated numerical solutions for the entire $\overline{{\bf L}}$ via the ($n=2$ version of the) FE scheme put forth in \cite{GLP22} and then computed its isotropic deviatoric projection
\begin{equation}
\overline{{\bf L}}_{iso}=2\,\overline{\mu}\,\mbox{\boldmath$\mathcal{K}$},\qquad \overline{\mu}:=\dfrac{1}{4}\,\mbox{\boldmath$\mathcal{K}$}\cdot\overline{{\bf L}}=\dfrac{1}{4}\,\mathcal{K}_{ijkl}\overline{L}_{ijkl} ,\label{mueff}
\end{equation}
which serves to define the effective shear modulus $\overline{\mu}$ of the filled elastomer at hand. Realizations that did not satisfy the stringent threshold
\begin{equation}
\dfrac{||\mbox{\boldmath$\mathcal{K}$}\,\overline{{\bf L}}\,\mbox{\boldmath$\mathcal{K}$}-\overline{{\bf L}}_{iso}||_{\infty}}{||\mbox{\boldmath$\mathcal{K}$}\,\overline{{\bf L}}\,\mbox{\boldmath$\mathcal{K}$}||_{\infty}}\leq 0.02\label{L-iso}
\end{equation}
were discarded as not sufficiently isotropic. Those that did satisfy (\ref{L-iso}) are the ones for which we present results below. Importantly, the maximum difference between any two such realizations with the same inclusion concentration $c$ and the same minimum inter-inclusion distance $d$ was less than $2\%$, and hence, as expected \cite{Papanicolaou81}, they exhibited practically the same homogenized behavior. By way of an example, Fig. \ref{Fig2} shows three representative unit cells $Y$ containing a total of $\texttt{N}=960$ inclusions at concentration $c=0.50$ and minimum inter-inclusion distances $d=0.01 A, 0.02A,$ and $0.05 A$ that satisfy conditions (\ref{P2}) and (\ref{L-iso}).
\begin{figure}[t!]
\centering \includegraphics[width=4.6in]{Fig2.eps}
\caption{{\small Representative unit cells $Y$ containing random distributions of $\texttt{N}=960$ $2$-spherical inclusions of monodisperse radius $A$ at concentration $c=0.50$ and minimum distances $d=0.01 A, 0.02A,$ and $0.05 A$ between the inclusions.}}\label{Fig2}
\end{figure}
\subsection{Results}\label{Sec:Results}
Figure \ref{Fig3} presents the FE solutions obtained for the effective shear modulus $\overline{\mu}$, as defined in (\ref{mueff}), of the isotropic suspensions described above. While Fig. \ref{Fig3}(a) shows the effective shear modulus $\overline{\mu}$, normalized by the shear modulus of the elastomeric matrix $\mu^{(\texttt{m})}$, for minimum inter-inclusion distance $d=0.01A$ and elasto-capillary numbers $eCa=0.20,1,5$ as a function of the concentration $c$ of inclusions, Fig. \ref{Fig3}(b) shows $\overline{\mu}/\mu^{(\texttt{m})}$ as a function of $c$ for $d=0.01A, 0.05 A$ and elasto-capillary number $eCa=5$. For completeness, all plots include the asymptotic result
\begin{align}
\overline{\mu}=\overline{\mu}^{\,{\rm dil}}+O(c^2),\qquad\overline{\mu}^{\,{\rm dil}}=&\mu^{(\texttt{m})}+\dfrac{(2+n)(eCa-1)}{n+(2+n)\, eCa}\,\mu^{(\texttt{m})}\, c \nonumber\\
=&\mu^{(\texttt{m})}+\dfrac{2(eCa-1)}{1+2\, eCa}\,\mu^{(\texttt{m})}\, c\label{mueff-dilute}
\end{align}
for the effective shear modulus of a dilute suspension. To be precise, the result (\ref{mueff-dilute}) corresponds to the response of an infinitely large elastomer domain that contains a \emph{single} liquid inclusion. In other words, the result (\ref{mueff-dilute}) is an extension of the classical result of Eshelby \cite{Eshelby57} to account for the presence of surface tension at the matrix/inclusion interface.
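The dilute estimate (\ref{mueff-dilute}) is trivial to evaluate numerically; the short sketch below reproduces, in particular, the neutral behavior $\overline{\mu}=\mu^{(\texttt{m})}$ at $eCa=1$ discussed in connection with Fig. \ref{Fig3}:
\begin{verbatim}
def mu_dilute(c, eCa, mu_m=1.0, n=2):
    # first-order (dilute) estimate of the effective shear modulus
    return mu_m*(1.0 + (2 + n)*(eCa - 1.0)/(n + (2 + n)*eCa)*c)

for eCa in (0.20, 1.0, 5.0):
    print(eCa, mu_dilute(0.10, eCa))   # softening, neutral, stiffening
\end{verbatim}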
\begin{figure}[t!]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.97\linewidth]{Fig3a.eps}
\caption*{(a)}
\end{subfigure}\hspace{2mm}%
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.97\linewidth]{Fig3b.eps}
\caption*{(b)}
\end{subfigure}
\caption{{\small The effective shear modulus $\overline{\mu}$, normalized by the shear modulus of the underlying elastomeric matrix $\mu^{(\texttt{m})}$, for isotropic suspensions of monodisperse $2$-spherical liquid inclusions spanning a range of concentrations $c$ of inclusions and minimum inter-inclusion distances $d$. (a) $\overline{\mu}/\mu^{(\texttt{m})}$ as a function of $c$ for $d=0.01 A$ and three values of the elasto-capillary number $eCa$. (b) $\overline{\mu}/\mu^{(\texttt{m})}$ as a function of $c$ for $d=0.01 A, 0.05 A$ and elasto-capillary number $eCa=5$. For direct comparison, the plots include the asymptotic result (\ref{mueff-dilute}) for the corresponding dilute suspension.}}\label{Fig3}
\end{figure}
Three observations are immediate from Fig. \ref{Fig3}. First, irrespective of the concentration $c$ of inclusions, $\overline{\mu}<\mu^{(\texttt{m})}$ for $eCa=0.20<1$, $\overline{\mu}=\mu^{(\texttt{m})}$ for $eCa=1$, and $\overline{\mu}>\mu^{(\texttt{m})}$ for $eCa=5>1$. That is, while the presence of liquid inclusions leads to the \emph{softening} of the material when $eCa<1$, it leads to \emph{stiffening} when $eCa>1$. The transition from softening to stiffening occurs precisely at $eCa=1$, when, rather interestingly, the presence of liquid inclusions goes unnoticed in the homogenized response. This behavior can be readily understood by recognizing that liquid inclusions with ``small'' interface stiffness $\widehat{\gamma}/2A$ pose little resistance to deformation and hence lead to the softening of the homogenized response. By contrast, inclusions with ``large'' interface stiffness $\widehat{\gamma}/2A$ pose significant resistance to deformation, behave effectively as stiff inclusions, and hence lead to the stiffening of the homogenized response. Second, both the softening and the stiffening can be very significant even at moderate values of $c$ and $eCa$. At $c=0.5$, for instance, we see from Fig. \ref{Fig3}(a) that $\overline{\mu}=0.54\mu^{(\texttt{m})}$ for $eCa=0.20$ and $\overline{\mu}=1.56\mu^{(\texttt{m})}$ for $eCa=5$. Finally, the minimum inter-inclusion distance $d$ remains inconsequential from the dilute limit $c\searrow 0$ up to approximately $c\approx 0.40$. For larger concentrations of inclusions, as expected on physical grounds \cite{LLP22}, suspensions with different minimum inter-inclusion distances $d$ can exhibit sizably different responses, the more so the larger the concentration.
\section*{Acknowledgements}
\noindent Support for this work by the National Science Foundation through the Grant DMREF--1922371 is gratefully acknowledged. V.L. would also like to acknowledge support through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
\bibliographystyle{unsrtnat}
\section{Introduction}
Simulations of collections of classical objects in the Universe have been performed for many decades
\cite{Klypin1983,Centrella1983,Wisdom1991,Duncan1998,Chambers1998,Robutel2001,Reintamayo2015,Rein2019}.
The coupled second-order differential equations have either been solved numerically by various
higher-order symplectic algorithms \cite{Wisdom1991,Forest1990,Yoshida1990,Hernandez2015,petit2019} or
by the Particle-Particle/Particle-Mesh (PPPM) method \cite{Hockney1974}.
In the PPPM method each mass unit, e.g. a planet, is treated as moving in the collective field of all the others, and the Poisson equation
for the PPPM grid is solved numerically.
Later, simulations with large-scale computer packages \cite{Diemand2008,Alimi2012} with many billions of mass units show strong evidence
for dark matter in the Universe. A comprehensive review of the simulations, the mass distribution in the Universe and
the evidence for dark matter is given in \cite{Martino2020}.
The present algorithm and simulations deviate in several ways from the previous algorithms, the PPPM model and the complex large-scale simulations.
The basic algorithm is the central difference algorithm.
It appears in the literature under different names, most commonly the ''Verlet" or ''leap-frog" algorithm,
but it was actually first formulated by Isaac Newton in PHILOSOPHI\AE \ NATURALIS PRINCIPIA MATHEMATICA in 1687 \cite{Newton1687,Newton1}.
In celestial mechanics it has been rediscovered as the second-order leap-frog discrete mapping
\cite{Wisdom1991,Forest1990,Yoshida1990} and extended to higher order \cite{Hernandez2015,petit2019}.
Newton's discrete expression for the
relation between the positions of the objects, their forces and the discrete time propagation is time reversible and symplectic, and it has the same dynamical invariances for
a conservative system as his analytic formulation \cite{Tox1,Toxa}. The algorithm allows for obtaining the
discrete dynamics of classical objects without any approximations, and here his algorithm is extended to cover
the fusion of objects and the self-assembly at the emergence of planetary systems.
\section{The discrete algorithm for fusion of classical objects}
According to Newton's classical discrete dynamics a new position $\textbf{r}_k(t+\delta t)$ at time $t+\delta t$ of an object
$k$ with the mass $m_k$ is determined by
the force $\textbf{f}_k(t)$ acting on the object at the discrete position $\textbf{r}_k(t)$ at time $t$ and the position
$\textbf{r}_k(t-\delta t)$ at $t - \delta t$ as
\begin{equation}
m_k\frac{\textbf{r}_k(t+\delta t)-\textbf{r}_k(t)}{\delta t}
=m_k\frac{\textbf{r}_k(t)-\textbf{r}_k(t-\delta t)}{\delta t} +\delta t \textbf{f}_k(t),
\end{equation}
where the momenta $ \textbf{p}_k(t+\delta t/2) = m_k (\textbf{r}_k(t+\delta t)-\textbf{r}_k(t))/\delta t$ and
$ \textbf{p}_k(t-\delta t/2)= m_k(\textbf{r}_k(t)-\textbf{r}_k(t-\delta t))/\delta t$ are constant in
the time intervals in between the discrete positions.
Newton $postulated$ Eq. (1) and obtained his second law from Eq. (1) as the limit $\lim_{\delta t \rightarrow 0}$ \cite{Newton1}.
Newton and Leibniz are together the fathers of analytic mathematics, and Newton's discrete algorithm, or equivalent expressions, is usually presented as a third-order predictor algorithm, which can be derived
by a Taylor-Maclaurin expansion from the objects' analytic trajectories. Brook Taylor (1685-1731) lived at the same time as Newton (1643-1727), and Newton had full knowledge of Taylor expansions,
but Newton never presented his
expression for his second law, even in the later two editions of $Principia$, as the first and leading term in an analytic expansion. And with good reason because, unlike algorithms obtained by higher-order
expansions, his discrete algorithm has all the qualities of the analytic analog.
Isaac Newton obtained his second law as the limit expression $\lim_{\delta t \rightarrow 0}$ of the central difference
in momentum for a planet at a discrete change $\delta t$ in time. But he noticed in $Principia$, at the derivation of the law,
that the areas of three triangles in his geometrical construction
of the discrete trajectory of a planet are equal, \textit{an irrelevant observation for the derivation},
but he did not mention the consequence of the equal areas.
It is Kepler's second law, and the young Newton must immediately, when he postulated the law, have realised
that his discrete relation Eq. (1) at least explains Kepler's second law. But he
did not mention it at the derivation of the second law, nor much later when he wrote
$Principia$, nor in the second or third editions of $Principia$ \cite{Newton1687}.
The fulfillment of Kepler's second law is a consequence of the conserved angular momentum in his discrete dynamics \cite{Tox3}. A possible explanation for why Newton on the one hand
noticed the equality of the areas of the triangles, and on the other hand did not note that this explains Kepler's second law, could be that
he believed that the exact classical dynamics is first achieved in the analytic limit with continuous time and space. But this is in fact not the
case: his discrete dynamics has the same invariances as his analytic dynamics.
Due to the time symmetry it is time reversible, and the algorithm is also symplectic \cite{Wisdom1991,Forest1990,Yoshida1990,Tox4}. The conservation of momentum and angular momentum
is ensured by Newton's second and third law, because the sum of the forces between the objects
in a conservative system is zero. The algorithm also conserves the energy; this is, however, not obvious because of the asynchronous
appearance of positions and momenta, and thereby the asynchronous determination of the
potential and the kinetic energy. But one can prove that the discrete algorithm conserves the energy \cite{Toxa} and also show that there (most likely) exists a ''shadow Hamiltonian" nearby
the Hamiltonian for the analytic dynamics, such that the discrete positions
are located on the shadow Hamiltonian's analytic trajectories \cite{Tox1,Tox2,Tox3}. So the discrete dynamics has a constant energy given by the energy of the shadow Hamiltonian.
Newton's discrete algorithm has been rediscovered several times, most notably by L. Verlet \cite{Verlet}, and it appears under a variety of names:
Verlet, leap-frog, ... \cite{Tox3}. Almost all Molecular Dynamics (MD) simulations of complex physical and chemical systems and many celestial mechanics simulations are performed with Newton's discrete algorithm.
It is convenient to reformulate Newton's algorithm as the ''Leap frog" algorithm
\begin{equation}
\textbf{v}_k(t+\delta t/2)= \textbf{v}_k(t-\delta t/2)+ \delta t/m_k \textbf{f}_k(t),
\end{equation}
with the velocities $\textbf{v}_k(t+\delta t/2)$ and $\textbf{v}_k(t-\delta t/2)$, and the positions
\begin{equation}
\textbf{r}_k(t+\delta t)= \textbf{r}_k(t)+ \delta t \textbf{v}_k(t+\delta t/2),
\end{equation}
so the new positions at $t+\delta t$ are obtained in two steps, first by calculating the new
(mean) velocities $ \textbf{v}_k(t+\delta t/2)$ in the time interval $[t,t+\delta t] $ from the old velocities $ \textbf{v}_k(t-\delta t/2)$
in the previous time interval and the
forces $\textbf{f}_k(\textbf{r}_k(t))$ at the positions $\textbf{r}_k(t)$, and then the new
positions $\textbf{r}_k(t+\delta t)$ are obtained from the velocities $\textbf{v}_k(t+\delta t/2)$.
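For concreteness, a minimal and deliberately un-optimized ($O(N^2)$) Python sketch of one such discrete step, Eqs. (2)-(4), reads as follows. It is an illustration, not the production code used for the simulations below, and the reduced units (here $G=1$, an assumption) are specified in the Appendix:
\begin{verbatim}
import numpy as np

G = 1.0   # gravitational constant in reduced units (an assumption)

def leapfrog_step(r, v_half, m, dt):
    """One step of Eqs. (2)-(3); r and v_half are (N,3) arrays and
    v_half holds the mean velocities v(t - dt/2)."""
    f = np.zeros_like(r)               # forces f_k(t) from Eq. (4)
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            d = r[j] - r[i]
            fij = G * m[i] * m[j] * d / np.linalg.norm(d)**3
            f[i] += fij                # Newton's third law:
            f[j] -= fij                # f_ij = -f_ji
    v_new = v_half + dt * f / m[:, None]    # Eq. (2)
    r_new = r + dt * v_new                  # Eq. (3)
    return r_new, v_new
\end{verbatim}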
Newton's discrete algorithm is here used as a starting point for the formulation of a discrete algorithm
for the irreversible fusion of spherically symmetrical objects
by classical dynamics with inelastic collisions. The derivation of the algorithm is governed by a desire to preserve
as much as possible of the invariances of Newton's dynamics.
\subsection{An algorithm for coalescence of classical objects and formation of planetary systems}
The classical discrete dynamics between $N$ spherically symmetrical objects
with masses $ m^N=m_1, m_2,..,m_N$ and positions $\textbf{r}^N(t)=\textbf{r}_1, \textbf{r}_2,..,\textbf{r}_N$ is obtained
by Eq. (2) and Eq.(3) with extensions.
According to Newton's shell theorem \cite{Newtonshell} the force, $\textbf{F}_i$,
on a spherically symmetrical object $i$ with mass $m_i$ is a sum over the forces, $\textbf{f}(r_{ij})$, caused by the other
spherically symmetrical objects $j$ with mass $m_j$, and it
is solely given by their center of mass distance $r_{ij}$ to $i$
\begin{equation}
\textbf{F}_i= \sum_{j \neq i}^N \textbf{f}(r_{ij}), \qquad \textbf{f}(r_{ij})= -\frac{G m_i m_j}{r_{ij}^2}\hat{\textbf{r}}_{ij}.
\end{equation}
Let all the spherically symmetrical objects
have the same (reduced) number density $\rho= (\pi/6)^{-1} $ by which
the diameter $\sigma_i$ of the spherical object $i$ is
\begin{equation}
\sigma_i= m_i^{1/3}
\end{equation}
and the collision diameter
\begin{equation}
\sigma_{ij}= \frac{\sigma_{i}+\sigma_{j}}{2}.
\end{equation}
If the distance $r_{ij}(t)$ at time $t$ between two objects is less than $\sigma_{ij}$
the two objects merge to one spherically symmetrical object with mass
\begin{equation}
m_{\alpha}= m_i + m_j,
\end{equation}
and diameter
\begin{equation}
\sigma_{\alpha}= (m_{\alpha})^{1/3},
\end{equation}
and with the new object $\alpha$ at the position
\begin{equation}
\textbf{r}_{\alpha}= \frac{m_i}{m_{\alpha}}\textbf{r}_i+\frac{m_j}{m_{\alpha}}\textbf{r}_j,
\end{equation}
at the center of mass of the two objects before the fusion.
(The object $\alpha$ at the center of mass of the two merged objects $i$ and $j$ might occasionally be near another object $k$,
by which more objects merge, but according to the same laws.)
Let the center of mass of the system of the $N$ objects be at the origin, i.e.
\begin{equation}
\Sigma_k m_k \textbf{r}_k(t)=\textbf{0}.
\end{equation}
The momenta of the objects in the discrete dynamics just before the fusion are $\textbf{p}^N(t-\delta t/2)$ and the
total momentum of the system is conserved at the fusion if
\begin{equation}
\textbf{v}_{\alpha}(t-\delta t/2)= \frac{m_i}{m_{\alpha}}\textbf{v}_i(t-\delta t/2)+ \frac{m_j}{m_{\alpha}}\textbf{v}_j(t-\delta t/2),
\end{equation}
which determines the velocity $\textbf{v}_{\alpha}(t-\delta t/2)$ of the merged object.
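A sketch of the corresponding merge operation (again illustrative Python, with the same array conventions as above, not the simulation code itself) makes explicit that Eqs. (7), (9) and (11) conserve the total mass, the center of mass and the total momentum:
\begin{verbatim}
import numpy as np

def fuse(r, v_half, m, i, j):
    """Merge objects i and j according to Eqs. (7), (9) and (11);
    the collision itself is detected by r_ij < sigma_ij with
    sigma_k = m_k**(1/3), Eqs. (5)-(6)."""
    m_a = m[i] + m[j]                                   # Eq. (7)
    r_a = (m[i] * r[i] + m[j] * r[j]) / m_a             # Eq. (9)
    v_a = (m[i] * v_half[i] + m[j] * v_half[j]) / m_a   # Eq. (11)
    keep = [k for k in range(len(m)) if k != i and k != j]
    return (np.vstack([r[keep], r_a]),
            np.vstack([v_half[keep], v_a]),
            np.append(m[keep], m_a))
\end{verbatim}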
The invariances in the classical
Newtonian dynamics hold for a conservative system with Newton's third law, i.e. with
\begin{equation}
\textbf{f}_{kl}(t)=-\textbf{f}_{lk}(t)
\end{equation}
for the forces between two objects $k$ and $l$, and with no external forces.
An object $k$'s forces with $i$ and $j$ before the fusion
are $\textbf{f}_{ik}(t)$ and $\textbf{f}_{jk}(t)$,
and these forces
must be replaced by calculating the force $\textbf{f}_{\alpha k}(\textbf{r}_{\alpha k}(t))$.
The total force after the fusion is zero
due to Newton's third law for a conservative system with the forces $\textbf{f}_{\alpha k}=-\textbf{f}_{k \alpha}$ between pairs of objects,
and the total momentum
\begin{eqnarray}
\Sigma_k \textbf{p}_k(t_n+\delta t/2)= \Sigma_k \textbf{p}_k(t_n-\delta t/2)+ \delta t\Sigma_k \textbf{f}_k(t_{n}) \nonumber \\
= \Sigma_k \textbf{p}_k(t_n-\delta t/2),
\end{eqnarray}
and the position of the center of mass are conserved for the discrete dynamics with fusion.
The determination of the position, $\textbf{r}_\alpha(t)$, and
the velocity, $\textbf{v}_\alpha(t-\delta t/2)$, of the new object from the requirement of
conserved center of mass and conserved momentum determines the discrete dynamics of the $N-1$ objects.
The angular momentum is affected by the fusion.
The angular momentum of the system of spherically symmetrical objects consists of two terms
\begin{equation}
\textbf{L}(t)= \textbf{L}_{G}(t)+ \textbf{L}_{I}(t)
\end{equation}
where $ \textbf{L}_{G}(t)$ is the angular momentum of the objects due to the dynamics obtained from the gravitational forces between their
center of masses, and $\textbf{L}_{I}(t)$ is the angular momentum due to the spin of the objects.
Without fusion $\textbf{L}_{G}(t)$ is conserved for Newton's discrete
dynamics \cite{Tox3}. $\textbf{L}_{I}(t)$ is, however, also conserved according to the shell theorem \cite{Newtonshell}, where Newton
proves that no net gravitational force is exerted by
a shell on any object inside, regardless of the object's location within the uniform shell, by which the
spin of an object is not affected by any force and is therefore constant.
But at a fusion $ \textbf{L}_{G}$ changes by
\begin{equation}
\delta \textbf{L}_{G}(t)= \textbf{r}_{\alpha}(t) \times m_{\alpha}\textbf{v}_{\alpha}(t-\delta t/2)-
\textbf{r}_i(t) \times m_i\textbf{v}_i(t-\delta t/2)- \textbf{r}_j(t) \times m_j\textbf{v}_j(t-\delta t/2),
\end{equation}
and $ \textbf{L}_{I}$ changes by
\begin{eqnarray}
\delta \textbf{L}_{I}(t)=
(\textbf{r}_i(t)- \textbf{r}_{\alpha}(t))\times m_i\textbf{v}_i(t-\delta t/2)+
(\textbf{r}_j(t)- \textbf{r}_{\alpha}(t)) \times m_j\textbf{v}_j(t-\delta t/2) \nonumber \\ \nonumber
= \textbf{r}_i(t) \times m_i\textbf{v}_i(t-\delta t/2)+ \textbf{r}_j(t) \times m_j\textbf{v}_j(t-\delta t/2)
- \textbf{r}_{\alpha}(t) \times m_{\alpha}\textbf{v}_{\alpha}(t-\delta t/2) \\
= -\delta \textbf{L}_{G}(t).
\end{eqnarray}
So without fusion the angular momenta $ \textbf{L}_{I}(t)$ and $ \textbf{L}_{G}(t)$ with Newton's discrete dynamics are
conserved separately, and at a fusion the total angular momentum is still conserved but with an exchange of angular momentum with
$\delta \textbf{L}_{I}(t)= -\delta \textbf{L}_{G}(t)$.
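As a simple numerical check of Eqs. (15)-(16) (again an illustration only), the change of the orbital angular momentum at a fusion can be evaluated directly; by Eq. (16) it is compensated exactly by the change of the spin angular momentum, so that $\textbf{L}=\textbf{L}_{G}+\textbf{L}_{I}$ is conserved:
\begin{verbatim}
import numpy as np

def delta_L_G(r_i, v_i, m_i, r_j, v_j, m_j):
    """Change of L_G at a fusion, Eq. (15); Eq. (16) gives
    delta_L_I = -delta_L_G, so L_G + L_I is conserved."""
    m_a = m_i + m_j
    r_a = (m_i * r_i + m_j * r_j) / m_a     # Eq. (9)
    v_a = (m_i * v_i + m_j * v_j) / m_a     # Eq. (11)
    return (np.cross(r_a, m_a * v_a)
            - np.cross(r_i, m_i * v_i)
            - np.cross(r_j, m_j * v_j))
\end{verbatim}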
The exact classical discrete dynamics with fusion of colliding objects can be used to explore the self-assembly at the emergence of planetary systems and to investigate
the stability and chaotic behaviour of solar systems \cite{Hernandez2020}.
\begin{figure}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\includegraphics[angle=-90]{Figure1.eps}}
\caption{ The number $N(t)$ of objects (sun, planets and free objects) as a function of time $t$ with fusion for one (No. 1) of the twelve systems.
This system contained 852 objects at $t=250$. The
positions of the $N$=852 objects are shown in Figure 2 (red dots) together with the
start positions with $N=1000$ (small blue dots). At $t= 4.5\times 10^6$ the planetary system contained one sun
and 165 planets and free objects, and the system aged with only one fusion during
the succeeding time interval $\Delta t=5.5 \times 10^6$. The orbits of four inner planets
at $t=4.5 \times 10^6$ are shown in Figure 3.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.65\textwidth}{!}{
\includegraphics[angle=-90]{Figure2.eps}}
\caption{ The positions of the $N$ objects at the start
of fusion with small blue dots, and with red dots at $t=250$ where the fusion accelerated (Figure 1) and ended with
one sun, 23 planets and 142 free objects.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.65\textwidth}{!}{
\includegraphics[angle=-90]{Figure3.eps}}
\caption{ The sun (red, enlarged) with four planets close to the sun.
The orbits are obtained at $t=4.50 \times 10^6$ after
creation of the solar system. Light blue: Orbit time
$T_{\textrm{orbit}}=630,$ eccentricity $ \epsilon=0.941$; green: $T_{\textrm{orbit}}=1308, \epsilon=0.867$;
blue $T_{\textrm{orbit}}=1740, \epsilon=0.377$;
magenta $T_{\textrm{orbit}}=1529, \epsilon=0.815$. The light blue planet has circulated $\approx$ seven thousand times around
the sun.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.5\textwidth}{!}{\includegraphics[angle=-90]{Figure4.eps}}
\caption{ Mean $\log$-distances ($\log <r_{i,\textrm{sun}}(t)>$)
to the sun of the objects $i$ as a function of their relative mean ($\log$) velocities
$\log <v_{i,\textrm{sun}}(t)>$ for planetary system No. 1. The means are for the time interval
$\Delta t \in [4.0 \times 10^6, 4.5 \times 10^6]$. The locations of the four inner planets in Figure 3 are marked with their colors from Figure 3.
The planets (colored spheres) are located on the lower branch of the distribution, and the upper
branch shows (black spheres) the mean locations of the free objects. The ''Kuiper belt" with its objects (grey spheres) is estimated to
be at mean locations
$<r_{i,\textrm{sun}}(t)> \in [30000,200000]$. Orbits of planets in the Kuiper belt are shown in the next figure.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.65\textwidth}{!}{\includegraphics[angle=-90]{Figure5.eps}}
\caption{ Two planets in the ''Kuiper belt". The planet shown in green changed its course, but remained in the Kuiper belt,
whereas the planet shown in blue escaped
the planetary system.}
\end{center}
\end{figure}
\section{Simulation of formation of planetary systems }
The algorithm given by Eqs. (2) and (3), together with the fusion rules of Section II A, is used to simulate the emergence of planetary systems. The set-up of the actual MD systems and the general conditions
for MD of gravitational systems are given in the Appendix.
Our planetary system is presumably created from flattened, rotationally supported disc structures of cosmic dust grains, and the composition of the building blocks -
planetesimals - is grossly different from that of the sun \cite{Blum2008}.
Here the
results for twelve MD simulations of the emergence of planetary systems are presented.
The planetary systems are obtained from different diluted ''gas" states of $N=1000$ objects with equal masses and at different low
temperatures (mean velocity of the objects), and the systems can be considered as
embryos of planetary systems by self-assembly of simple small grains.
The creation of a system with one heavy central object and with some of the other objects in orbits around the central ''sun" is
established within a relatively short period of time, as illustrated in Figure 1. The
start configurations are diluted spherical (gas) distributions of objects (see Appendix).
The objects are accelerated toward the center of mass by the gravitational forces,
and the fusion of objects results in a creation of a system with one heavy object (the sun) and
with some of the other objects in elliptical orbits around the sun. The solar systems are created rather quickly.
The system No. 1 (Figures 1-5 and Table I) is established already after a fusion time $t \approx 1000$ (see Figure 1; for MD details and the unit of time see the Appendix)
with 386 objects consisting of one sun with the mass $m_{\textrm{sun}}=557$ and many planets and free objects.
Nine of the planets have a mass $m=3$, but most of the other planets and free objects (338)
are not fused with others and have a mass $m=1$. The solar system is in a rather stable state but ages slowly (Figure 1). Only after $t=4.5\times 10^6$ are all
twelve planetary systems stable, with very rare mergers (Table I).
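The precise start configurations mentioned above are specified in the Appendix; purely for illustration, a start state of the kind described (a diluted spherical ``gas'' of $N=1000$ equal masses with zero total momentum and the center of mass at the origin, Eq. (10)) could be generated along the following lines, where the radius $R_0$ and velocity scale $v_0$ are hypothetical placeholders:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

N, R0, v0 = 1000, 1.0e3, 0.1    # R0, v0: illustrative values only
u = rng.normal(size=(N, 3))
u /= np.linalg.norm(u, axis=1)[:, None]
r = u * R0 * rng.random(N)[:, None]**(1.0/3.0)  # uniform in a sphere
v = v0 * rng.normal(size=(N, 3))
m = np.ones(N)                 # equal masses
v -= v.mean(axis=0)            # total momentum zero
r -= r.mean(axis=0)            # center of mass at origin, Eq. (10)
\end{verbatim}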
The four planets in planetary system No. 1 closest to the sun are shown in Figure 3. The angular momentum, $\textbf{L}_G$,
of the planetary system is constant if there
is no fusion, and the angular momenta of the individual planets are also almost constant.
The four planets have almost constant angular momenta with closed elliptical orbits.
The solar system shown in Figures 1-4 is system No. 1 (see Table I). It consists of many planets, including very
tiny bound planets in a ''Kuiper belt'' with orbits which have an orbit time of more than
$t_{\textrm{orbit}}=1 \times 10^5$. The existence of such a
Kuiper belt makes it difficult to determine precisely how many planets a planetary system consists of, since the planets change their
orbits over time. The distribution of planets and free objects is determined from the distances and velocities
relative to the sun. A planet will have a relatively short mean distance to the sun, averaged
over a long time interval, whereas a free object has a long mean distance and a constant velocity. Figure
4 shows the relative mean distances of the objects to the sun as a function of their relative mean velocities, where the means are obtained
for $\Delta t \in [4.0\times 10^6,4.5\times 10^6]$.
The distribution has two branches, a lower
branch for the planets and an upper branch for the free objects.
The Kuiper belt is located in between the two branches in Figure 4.
The objects in this zone are almost free from the gravitational attractions of the sun and the other planets, and sometimes an object
in this zone escapes from the
planetary system. Figure 5 shows two planets in the Kuiper belt: one (green) remained in the planetary system,
whereas the other (blue) escaped. The planet shown in green remained in
the planetary system with elliptical-like orbits.
It was in an elliptical orbit with an eccentricity
\begin{equation}
\epsilon =\frac{r_{\textrm{max}} -r_{\textrm{min}}}{r_{\textrm{max}} +r_{\textrm{min}}}=\frac{156090-115}{156090+115}=0.9985,
\end{equation}
with the longest distance $r_{\textrm{max}}=156090$ at aphelion and the shortest distance $r_{\textrm{min}}=115$ at perihelion. The orbit time is $5.06 \times 10^5$. After passing the aphelion at $r_{i,\textrm{sun}}$=156090 the planet ended in
a new elliptical orbit closer to the sun with a new aphelion distance $r_{i,\textrm{sun}}$=38635. The other object, shown in blue in Figure 5, escaped the planetary system.
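The eccentricities quoted throughout are obtained directly from the apsis distances via Eq. (17); e.g., in Python:
\begin{verbatim}
def eccentricity(r_max, r_min):
    """Eq. (17): eccentricity from aphelion/perihelion distances."""
    return (r_max - r_min) / (r_max + r_min)

print(eccentricity(156090.0, 115.0))   # -> 0.9985..., as quoted above
\end{verbatim}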
Our Solar system has a Kuiper belt located at $\approx$ 30-100 AU (astronomical unit = mean distance between the Earth and the Sun).
These distances are translated to the present solar systems by setting 1 AU=500, i.e. $\approx$ equal to the mean distance of one of the
inner planets in Figure 4 (light blue).
The lower border of the present Kuiper belt should then be $\approx 30 \times 500=15000$ with this unit and the upper border
should be $\approx 50000$. The
planet orbit shown in green in Figure 5 has a maximum distance of 156090.
The present Kuiper belt in planetary system No. 1
is estimated to be in the interval
$\mid \textbf{r}(t)-\textbf{r}_{\textrm{sun}}(t) \mid \in [30000,200000]$.
The data for the twelve simulations of planetary systems are collected in Table I.
The data are obtained in the
time interval $t \in [4.0\times 10^6, 4.50\times 10^6 ]$ after the start
of the fusions. The mean distance $ <r_{\textrm{cl}}>$ is the mean distance to the sun
for the planet in a system closest to the sun. The temperatures, $T$, of the objects are obtained from the
mean kinetic energy $<E_{\small{\textrm{Kin}}}>/N=3/2\,T$, and the different start distributions and kinetic energies of the objects result in temperatures
which vary by a factor of $\approx 4$. There is, however, no clear connection between the number of planets and the mean kinetic energies
of the planetary systems.
The eccentricities and mean positions of the planets in the twelve systems are shown in Figure 6.
The distributions show that the inner planets in general have an eccentricity significantly
below $\epsilon=1$, which is the limit of stability for an elliptical orbit, whereas the planets close to the border of the Kuiper belt ($\approx 30000$)
all have eccentricities only slightly less than the limit of stability.
\textbf{Table 1}. Collected data for the planetary systems
\begin{tabbing}
\hspace{1.6cm}\=\hspace{1.5cm}\=\hspace{1.6cm}\=\hspace{1.6cm}\=\hspace{1.6cm}\=\hspace{1.6cm}\=\hspace{2.0cm} \\
\> Inner \>''Kuiper \> Free \\
No. \>Planets \> planets \> objects \> $m_{\textrm{Sun}}$\>$<r_{\textrm{cl}}>$\> Temp. \\
--------------------------------------------------------------------------------------------\\
1 \> 21 \> 2 \> 142\> 830 \> 484\> 4.24 \\
2 \> 25 \> 10 \> 317 \> 628 \>119 \> 8.88 \\
3 \> 24 \> 10 \> 229\> 720 \> 150\> 11.60 \\
4 \> 13 \>5 \> 224 \> 739 \> 455\> 12.53 \\
5 \> 23 \> 7 \> 253\> 700 \> 191\> 10.76 \\
6 \> 8 \>3 \> 116 \> 865\> 131\> 5.00 \\
7 \> 7 \> 0\> 147 \> 839\>262 \>19.08 \\
8 \> 13 \>4 \> 182 \> 765 \> 452 \> 3.45 \\
9 \> 15 \>6 \> 159 \> 779 \>402 \> 4.62 \\
10 \> 12 \>2 \> 141 \> 826 \>44 \> 3.95 \\
11 \> 8 \>2 \> 107 \> 862 \>43 \> 4.76 \\
12 \> 6 \>2 \> 90 \> 897 \>145 \> 4.65 \\
\end{tabbing}
--------------------------------------------------------------------------------------------\\
\begin{figure}
\begin{center}
\resizebox{0.5\textwidth}{!}{\includegraphics[angle=-90]{Figure6.eps}}
\caption{ The eccentricity of the planets in the twelve planetary systems as
a function of their mean distances from their suns.}
\end{center}
\end{figure}
\section{Conclusion}
In $Principia$ Newton was the first to solve Kepler's equation and to determine the analytic expression for the orbit of a planet or a comet,
but the analytic dynamics of a solar system with many
planets can only be obtained numerically, traditionally by the use of higher-order symplectic algorithms.
But the discrete dynamics with Newton's central difference algorithm is also time reversible and symplectic and has the same invariances as his analytic classical dynamics.
Here his discrete algorithm is extended to handle
fusion of objects at a collision and to create small planetary systems.
The formation of a planetary system depends on the distribution and the kinetic energies of the collection of objects that start to fuse together.
Here twelve planetary systems are created by fusions of a small number, $N=1000$, of objects (planetesimals)
with equal masses and from a spherical starting distribution of the merging objects, in order to test the algorithm and to obtain an ''embryo" of a planetary system.
The time evolutions of the systems reveal that the objects spontaneously form ''mini" planetary systems with a heavy ''sun" and with many of the planetesimals in orbits
around the sun.
The planetary systems have some qualities which agree with our own Solar system, with stable elliptical orbits and with
many bound objects at great distances from the sun in a ''Kuiper belt". But the planetary systems deviate from the Solar system in
that the orbits are not in a common ecliptic plane, due to the spherically distributed starting positions of the merging objects. Furthermore, there are no planetary systems with moons. These deviations can, however, very well be
a consequence of the small size of the systems, with only $N=1000$ spherically distributed objects at the start of the fusion, and of the monodispersity of the
systems with equal masses of the objects at the start. The small systems were selected in order to be able to follow
the created solar systems over very long times without any approximations, in order to test the exact algorithm and the aging and stability of the planetary systems.
The planetary systems are established with fusions over a short period of time.
The systems are stable and age slowly, in that a planet occasionally collides with another planet or with the sun and merges. Some of the
planets were also accelerated out of the planetary system (Figure 5). The planetary systems show chaotic sensitivity, and the actual numbers of inner planets and their
positions and eccentricities depend on the forces from all the other objects in the system, including the free objects far from the sun. Almost all of the
twelve planetary systems
contain many planets in Kuiper belts far from their suns.
The extension of Newton's discrete dynamics with the algorithm for fusion of colliding objects is the simplest possible. The fusion of two
spherically symmetrical objects to one uniform and spherically symmetrical object is far from what actually happens when two macroscopic celestial bodies
merge \cite{Canup2001}. But, although it is straightforward to extend the algorithm to a more complex fusion at the collision, this has not been the goal
of the present investigation. The algorithm is suitable for analysis of the self-assembly of planetesimals and, due to the exact dynamics, the algorithm can be
useful in investigations of the impact of the chaotic behaviour on the stability of planetary systems.
\\
$ $\\
$\textbf{Acknowledgement}$
This work was supported by the VILLUM Foundation Matter project, grant No. 16515.
\\
$\textbf{Data Availability Statement}$ Data will be available on request.
\section{Introduction}
The almost complex structure has been employed in the study of superstring theory, gravity and sigma models in different but related lines of research. In the topological sigma model developed by Witten \cite{witten}, the almost complex structure arises in a coupling term with the sigma model. Following this and the idea of complexifying space-time, an invariant action containing the exterior derivative of the almost complex structure was constructed by Chamseddine \cite{ch}, which gives the correct equation of motion of a complex metric in the linearized limit\footnote{The complex metric is based on Einstein's attempts to generalize the relativistic theory of gravitation to establish a unified
field theory \cite {eins1, eins2}.}. Other applications of the almost complex structure are in string theory (see for example \cite{Strominger}), where a set of 10-dimensional solutions of the string equations is based on complex non-K\"{a}hler manifolds.
An interesting class of non-K\"{a}hler manifolds is the nearly K\"{a}hler manifolds. These manifolds were first studied by A. Gray \cite{gary1,gary2,gary3}, and recently were investigated and classified by Nagy {\it et al} \cite{nagy1, nagy4}, who have shown that the complete and strict nearly K\"{a}hler manifolds, i.e. non-K\"{a}hler manifolds, are locally Riemannian products of 6-dimensional nearly K\"{a}hler manifolds, twistor spaces over quaternionic K\"{a}hler manifolds and homogeneous nearly K\"{a}hler spaces. The only known compact strict nearly K\"{a}hler manifolds in dimension 6 are the three coset spaces
$S^{6}\simeq G_{2}/SU(3), CP^{3}\simeq Sp(2)/SU(2)\times U(1), F(1,2)\simeq SU(3)/ U(1)\times U(1)$ and a group manifold $S^{3}\times S^{3}\simeq SU(2)\times SU(2)$ \cite{Butruille}. Nearly K\"{a}hler manifolds have been of recent interest in massive type IIA supergravity and the related Yang-Mills theory, M-theory and heterotic string compactifications \cite{app}.
Besides, as an interesting appearance of the almost complex structure in general relativity, in Ref. \cite{cfkaluzaklein} it has been shown that a special class of solutions of the Kaluza-Klein conformally flat reduction equations relates the Kaluza-Klein gauge field $F_{\mu\nu}$ to the pseudo-K\"{a}hler and para-K\"{a}hler structure on the manifold.
Here, we are interested in a somewhat different approach to develop a gravitational model which depends on the almost complex
structure. Following Einstein's realization that gravity should be regarded as a property of Riemannian geometry and
space-time, it is intriguing to ask whether another geometrical structure on the manifold could play a physical role, for example
as a matter field. The main purpose of this paper is to construct an action of the type $S(g_{MN},J_{M}^{~N})$ on a non-K\"{a}hler
manifold which includes no higher than second derivatives of the metric. The model uses a curvature-like tensor including the almost complex structure beside the metric structure, and the idea behind this is to give a matter interpretation to the almost complex structure. Explicitly, the four-dimensional matter will be induced from the
almost complex structure, in accordance with Einstein's dream that the
origin of matter is geometry.
{In general, the manifold here is considered to be non-K\"{a}hlerian; however, it turns out that if one is interested in exploiting an interpretation of a matter source from the almost complex structure, the only choice is the nearly K\"{a}hler manifold, which is consistent with the conservation law for such a matter source}. In other words, conservation of the energy-momentum tensor of the model requires the manifold to be nearly
K\"{a}hler. This type of manifold appears
in string compactification as an internal space, as a result of the supersymmetry condition \cite{app}. Of course, in some previous works the authors have tried to include the complex
structure by adding terms to the standard Einstein-Hilbert action (see for example Ref. \cite{ch}). In the present model, we will show that such
terms in the action are recovered in a straightforward way by using an Einstein-Hilbert action whose scalar curvature is constructed from the
curvature-like tensor. We will show that such a new geometric structure is also capable of alleviating the well-known fine-tuning problem of the cosmological
constant, in
a typical example. Moreover, it may shed light on other problems of cosmology,
like dark energy.
\section{Nearly K\"{a}hler manifolds}
Here, to make the paper self-contained, we review the definitions and some mathematical concepts of nearly K\"{a}hler manifolds. Let $M$ be an almost Hermitian manifold with real dimension $d$ $(d>2)$, a Hermitian structure $(J^{~M}_{N},g_{MN})$, i.e. an almost complex structure, and a positive definite Riemannian metric tensor $g_{MN}$ satisfying the following conditions \cite{nakahara}
\begin{eqnarray}\label{1}
J^{~N}_{R}J^{~M}_{N}=-\delta^{M}_{R},
\end{eqnarray}
\begin{eqnarray}\label{2}
g_{MN}J^{~M}_{R}J^{~N}_{S}=g_{RS}.
\end{eqnarray}
Then, from the above equations, we have the K\"{a}hler two-form
\begin{eqnarray}\label{3}
\Omega_{MN}=g_{NR}J^{~R}_{M}=-\Omega_{NM}.
\end{eqnarray}
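At a fixed point of $M$, conditions \eqref{1}-\eqref{3} are purely algebraic and can be verified numerically. The following Python sketch (an illustration only, with the matrix conventions $(J)_{MN}=J_{M}^{~N}$ and $(g)_{MN}=g_{MN}$) checks them for given matrices:
\begin{verbatim}
import numpy as np

def check_almost_hermitian(J, g, tol=1e-12):
    """Check Eqs. (1)-(3) pointwise for matrices J and g."""
    d = len(g)
    ok_J = np.allclose(J @ J, -np.eye(d), atol=tol)   # Eq. (1)
    ok_g = np.allclose(J @ g @ J.T, g, atol=tol)      # Eq. (2)
    Omega = J @ g              # Eq. (3): Omega_MN = J_M^R g_RN
    return ok_J and ok_g and np.allclose(Omega, -Omega.T, atol=tol)

# flat toy case: J = [[0, 1], [-1, 0]], g = identity
print(check_almost_hermitian(np.array([[0., 1.], [-1., 0.]]),
                             np.eye(2)))   # -> True
\end{verbatim}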
The Nijenhuis tensor of an almost Hermitian manifold $(M; J; g)$ is defined as follows
\begin{eqnarray}\label{6}
N_{J}(X,Y)=(\nabla_{JX}J)Y-(\nabla_{JY}J)X+J((\nabla_{Y}J)X-(\nabla_{X}J)Y),~~~~~ \forall X,Y \epsilon ~\cal{X}(M)
\end{eqnarray}
where $\chi(M)$ is the space of vector fields on $M$. Now, if an almost Hermitian structure satisfies the following conditions
\begin{subequations}\label{4}
\begin{eqnarray}
\nabla_{M}J^{~N}_{R}+\nabla_{R}J^{~N}_{M}=0, \\
\nabla_{M}J^{~M}_{R}=0,
\end{eqnarray}
\end{subequations}
where $(5b)$ is the weak form of $(5a)$ and $\nabla$ denotes the operator of covariant differentiation with respect to the Riemannian connection; then the manifold is called a \textit{nearly K\"{a}hler manifold} (or Tachibana space, or K-space) \cite{gary2}. For a nearly K\"{a}hler manifold $M$ the Nijenhuis tensor does not vanish, and we have \cite{yano}
\begin{eqnarray}\label{7}
N_{J}(X,Y)=4J(\nabla_{X}J)Y,~~~~~ \forall X,Y \epsilon ~\cal{X}(M).
\end{eqnarray}
Furthermore, the lowest dimension of the strict nearly K\"{a}hler manifolds is 6 \cite{yano}. Let $R^{R}_{~SMN}$, $R_{MN}=R^{R}_{~MRN}$ and $R$ be the \textit{Riemannian curvature tensor}, the Ricci tensor and the scalar curvature, respectively, and let $R^{*}_{MN}$ be the Hermitian Ricci tensor, which is defined as follows \cite{yano}
\begin{eqnarray}\label{9}
R^{*}_{MN}\equiv-\frac{1}{2}R_{MK RS}J^{~K}_{N}J^{RS}.
\end{eqnarray}
Then, for a nearly K\"{a}hler manifold two versions of Ricci tenors are related by \cite{yano}
\begin{eqnarray}\label{10}
S_{MN}\equiv R_{MN}-R^{*}_{MN}=-(\nabla_{M}J^{~R}_{S})(\nabla_{N}J^{~S}_{R}),
\end{eqnarray}
and
\begin{eqnarray}\label{11}
S=R-R^{*}=-(\nabla_{M}J^{~R}_{S})(\nabla^{M}J^{~S}_{R})=Constant>0.
\end{eqnarray}
In a K-space we have the Bianchi identity for the Hermitian Ricci tensor, $\frac{1}{2}\nabla_{M}R^{*}=\nabla^{N}R^{*}_{NM}$, and by using \eqref{4} we have \cite{yano}
\begin{eqnarray}\label{12}
\nabla^{N}(R_{NM}-R^{*}_{NM})=\frac{1}{2}\nabla_{M}(R-R^{*})=0.
\end{eqnarray}
\section{Curvature-like tensor}
In this section, we investigate properties of a curvature-like tensor on a nearly K\"{a}hler manifold.
For arbitrary constants $a$ and $b$, a tensor which includes Riemannian curvature tensor and almost complex structure could be defined as follows \cite{yano}
\begin{eqnarray}\label{13}
W_{MNRS}\equiv R_{MNRS}-a(g_{MS}S_{NR}-g_{NS}S_{MR}+g_{NR}S_{MS}-g_{MR}S_{NS})+b(R-R^{*})(g_{MS}g_{NR}-g_{NS}g_{MR}).
\end{eqnarray}
There exist real numbers $a$ and $b$ if and only if $d\geq 6$ \cite{yano}.
The tensor $W_{MNRS}$ satisfies the following symmetry properties
\begin{eqnarray}\label{14}
W_{KLMN}+W_{KMNL}+W_{KNLM}=0,
\end{eqnarray}
\begin{eqnarray}\label{15}
W_{MNRS}=-W_{NMRS}=-W_{MNSR}=W_{RSMN},
\end{eqnarray}
which is then called a curvature-like tensor \cite{c-L}.
Because the tensor $W_{MNRS}$ has similar terms to those of the Weyl tensor, it is appealing
to check the Weyl tensor properties for $W_{MNRS}$. The Weyl tensor is
always invariant under a conformal transformation of the metric, i.e. $\bar{g}=e^{2\sigma}g$; some useful transformation properties of this tensor are listed in Appendix A. The tensor $W_{MNRS}$ is conformally invariant if the arbitrary constants $a$ and $b$ are fixed as $a=-\frac{1}{d-4}$ and $b=-\frac{1}{(d-2)(d-4)}$, and only with these fixed constants $a$ and $b$ does ${\rm tr}\,W$ vanish in conformally flat K-spaces.
However, unlike the Weyl tensor, whose trace is always zero, the tensor $W_{MNRS}$ has a non-vanishing trace; in other words, for a contracted pair of indices in $W_{RMLN}$ we obtain
\begin{eqnarray}\label{22}
W_{MN}=W^{L}_{~MLN}=R_{MN}-a(2-d)S_{MN}+((1-d)b+a)(R-R^{*})g_{MN}.
\end{eqnarray}
Now, contracting \eqref{22} with $g^{MN}$ and using $g^{MN}S_{MN}=R-R^{*}$ together with \eqref{11} gives the scalar $W$ as follows
\begin{eqnarray}\label{23}
W=g^{MN}W_{MN}=R-(d-1)(2a-db)(\nabla_{R}J^{~M}_{N})(\nabla^{R}J^{~N}_{M}).
\end{eqnarray}
Therefore, the tensor $W_{RMLN}$ cannot be regarded as a Weyl-like tensor, and it suffices to consider it as a curvature-like tensor. On the other hand, being only a curvature-like tensor, $W_{MNRS}$ is not necessarily invariant under a conformal transformation of the metric; hence, there is no obvious condition for fixing the arbitrary constants $a$ and $b$, and they will remain arbitrary from the curvature-like tensor point of view.
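As a small consistency aid (not part of the original text), the scalar \eqref{23} and its matter coupling can be evaluated numerically; for $d=10$, used in section five, the coupling $(d-1)(2a-db)$ becomes $18a-90b$, the combination appearing in the decomposed field equations below:
\begin{verbatim}
def W_scalar(R, dJ2, a, b, d):
    """Scalar of the curvature-like tensor:
    W = R - (d-1)(2a - d*b) * dJ2, with dJ2 = (nabla J)(nabla J)."""
    return R - (d - 1) * (2 * a - d * b) * dJ2

d = 10
print((d - 1) * 2, (d - 1) * d)   # coupling 18*a - 90*b for d = 10
\end{verbatim}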
\section{Gravity and induced matter}
The main motivation for the present work is to realize the possible physical significance of the almost complex structure as a geometrical structure carried by some manifolds. The almost complex structure has already been
interpreted as an electromagnetic matter field \cite{cfkaluzaklein}. Here, similar
to the gravitational Einstein-Hilbert action, where the Ricci scalar $R$ is constructed from the Riemann curvature tensor, we will try to construct a gravitational action by using the scalar curvature \eqref{23} constructed from a curvature-like tensor \eqref{13} which includes the almost complex structure and possesses the symmetry properties of the curvature tensor. One such familiar candidate is the curvature-like tensor $W_{RMLN}$ of \eqref{13}, which looks like
the Weyl
tensor written in terms of the tensor $S_{MN}$ \eqref{10} instead of the Ricci tensor $R_{MN}$. Beside this tensor, there are some other known curvature-like tensors containing $J$-dependent terms, for example the holomorphic
curvature tensor introduced in \cite{hct}. However, the motivation for choosing the curvature-like
tensor $W_{RMLN}$, in comparison with the other tensors mentioned above, is that the tensor $W_{RMLN}$ is potentially decomposable into the Einstein-Hilbert action plus a
matter-type action, as is shown in the following. Furthermore, as will
be seen in the following and in Appendix $B$, the trace of this tensor has
a term which could be recognized as the strength tensor of the almost complex
structure (or equivalently of the K\"{a}hler two-form $\Omega_{MN}$) \footnote{The holomorphic
curvature tensor has been used for constructing a tensor invariant under conformal
transformations of the metric, named the generalized Bochner curvature tensor, which is related to the Weyl tensor \cite{WB}. Similarly, one could consider a tensor based on the curvature-like tensor $W_{RMLN}$,
$U_{RMLN}\equiv~^{*}O_{RM}^{~~KP}W_{KPLN}=\frac{1}{2}(\delta_{R}^{~K}\delta_{M}^{~P}+J_{R}^{~K}J_{M}^{~P})W_{KPLN}$, which besides
being a curvature-like tensor could be conformally invariant with fixed constants $a$ and $b$, and in particular has
vanishing trace (i.e. $U=g^{RL}g^{MN}U_{RMLN}=0$), so it could be recognized as a type of generalization of the Weyl tensor. From this point of view, $W_{RMLN}$ has been written in such a way that it leads to a Weyl-like tensor.}.
The scalar \eqref{23} contains the Ricci scalar $R$ and a term carrying covariant derivatives of the almost complex structure. We start with a general situation where there is no emphasis on $J^{~M}_{N}$ being a nearly K\"{a}hler structure, and \textit{the only requirement is non-K\"{a}hlerity}, $\nabla J\neq 0$. Then, the action with this scalar in {\it even} $d$ dimensions ($d\geq 6$) is given by\footnote{We have used the natural system of units with $G=c=1$, and the dimension-dependent factor of the cosmological constant term has been chosen so as to have $R_{MN}=\Lambda g_{MN}$ in vacuum.}
\begin{eqnarray}\label{24}
S&=&\frac{1}{16\pi}\int d^{d}x\sqrt{-g} (W-(d-2)\Lambda)\nonumber\\
&=&\frac{1}{16\pi}\int d^{d}x\sqrt{-g}(R-(d-2)\Lambda)-\dfrac{(d-1)(2a-db)}{16\pi}\int d^{d}x\sqrt{-g}(\nabla^{K}J^{~M}_{N})(\nabla_{K}J^{~N}_{M})\nonumber\\
&\equiv&\frac{1}{16\pi} S_{EH}+S_{M}(J).
\end{eqnarray}
In this way, the almost complex structure $J$, as a part of the geometrical data, enters the action via the scalar of the curvature-like
tensor. Briefly, the almost complex manifold is equipped with the metric $g$ and the almost complex structure $J$, which are two geometrical structures on the
manifold, and the above action encompasses both of them, where the $d$-dimensional Einstein-Hilbert term is decomposed totally from the $d$-dimensional $J$ terms \footnote{The coupling of the $J$ terms with the metric is introduced from the gravitational point of view.}. Now,
one may consider the second term in \eqref{24} as a type of matter action and obtain a gravitational model which contains the Einstein-Hilbert action and a matter term which is induced by the almost complex
structure in a geometrical way. \textit{The key point
here is that the obtained matter action is not considered as an additional action to the Einstein-Hilbert action; rather, it is obtained in a straightforward way from a purely geometric action including the curvature-like
tensor $W_{RMLN}$}.
Conservation of the energy-momentum tensor derived from the above action requires the almost complex structure to be of nearly K\"{a}hlerian type. However, the equations of motion of $J$ would kill the energy-momentum tensor in the nearly K\"{a}hler case. The strategy for solving this inconsistency is to consider the geometry as nearly K\"{a}hler type and add the nearly K\"{a}hler condition $(5)$ with a Lagrange multiplier to the action \eqref{24}. Since this energy-momentum tensor will be derived from the structure of the nearly K\"{a}hler manifold, we shall refer to it as {\it geometrically induced matter}. Hence, we reconsider the action \eqref{24} within the nearly K\"{a}hler structure, which guarantees the conservation of the energy-momentum tensor, as follows
\begin{eqnarray}\label{model}
S=\frac{1}{16\pi}S_{EH}+S_{M}(J)+\int d^{d}x\sqrt{-g}\lambda^{M}\nabla_{N}J_{M}^{~~N}.
\end{eqnarray}
Variations with respect to $\lambda^{M}$ and $J_{M}^{~N}$ give the equations of motion as follows
\begin{eqnarray}\label{l}
\nabla_{N}J_{M}^{~~N}=0,
\end{eqnarray}
\begin{eqnarray}\label{25}
-2(d-1)(2a-db)\nabla^{R}\nabla_{R}J_{M}^{~N}+\nabla_{M}\lambda^{N}=0.
\end{eqnarray}
Employing the equation of motion of $J_{M}^{~N}$, \eqref{25}, for eliminating the Lagrange multipliers $\lambda^{M}$ gives the following extremized action after integrating by parts
\begin{eqnarray}\label{250}
S=\frac{1}{16\pi}\int d^{d}x\sqrt{-g}(R-(d-2)\Lambda)-\dfrac{(d-1)(2a-db)}{16\pi}\int d^{d}x\sqrt{-g}(\nabla^{K}J^{~M}_{N})(\nabla_{K}J^{~N}_{M}).
\end{eqnarray}
As is mentioned in detail in Appendix $B$, the second term in the action has the form $H_{MNK}^{(nk)}H^{(nk)~MNK}$, where $H_{MNK}^{(nk)}$ is the field strength associated with the nearly K\"{a}hler
two-form $\Omega_{MN}=g_{NK}J_{M}^{~K}$ given in \eqref{90}.
Using the general formula for the symmetric energy-momentum tensor
\begin{eqnarray}\label{26}
T_{MN}=\frac{\delta(\sqrt{-g} \cal L_{M})}{\delta g^{MN}}=g_{MN}{\cal L_{M}}-2\frac{\delta{\cal L_{M}}}{\delta g^{MN}},
\end{eqnarray}
and varying the matter Lagrangian with respect to the metric leads to the symmetric energy-momentum tensor as follows
\begin{eqnarray}\label{27}
T_{MN}=\dfrac{(d-1)(2a-db)}{16\pi}[-2(\nabla_{M}J^{~K}_{L})(\nabla_{N}J^{~L}_{K})-4(\nabla_{L}J_{KN})(\nabla^{L}J^{~K}_{M})\nonumber\\
+g_{MN}(\nabla_{P}J^{~K}_{L})(\nabla^{P}J^{~L}_{K})],
\end{eqnarray}
where we have used $J^{2}=-1$. The non-zero contribution of the Lagrange multiplier term in \eqref{25} prevents the energy-momentum tensor from vanishing due to the equation of motion of $J$. In general, varying the action \eqref{250} with respect to the metric results in the Einstein equations as follows
\begin{eqnarray}\label{28}
R_{MN}-\frac{1}{2}R~g_{MN}+\frac{d-2}{2}\Lambda ~g_{MN}=8\pi T_{MN},
\end{eqnarray}
so that the almost complex structure $J$, as a geometric structure on the manifold, appears on the right-hand side of the Einstein equations as an induced energy-momentum tensor.
Now, by considering the covariant derivative of \eqref{27} to investigate the conservation of the energy-momentum tensor and using \eqref{10} and \eqref{11}, we have
\begin{eqnarray}\label{29}
\nabla^{M}T_{MN}=\dfrac{(d-1)(2a-db)}{16\pi}[-2(\nabla^{M}(R_{MN}-R^{*}_{MN})-\dfrac{1}{2}\nabla_{N}(R-R^{*}))-4\nabla^{M}(\nabla_{L}J_{KN})(\nabla^{L}J^{~K}_{M})],
\end{eqnarray}
The above divergence vanishes if both nearly K\"{a}hler properties in \eqref{4} are taken into account, together with the Bianchi-like identity \eqref{12}. In fact, we have the weak nearly K\"{a}hler condition $(5b)$ as the equation of motion \eqref{l}, while the general and stronger condition of the nearly K\"{a}hler structure, $(5a)$, is required by the conservation of the energy-momentum tensor.
Note that the second term could be rewritten in terms of $\nabla^{M}(R_{MN}-R^{*}_{MN})$ by using $(5a)$.
In the last calculations we have used the property of the curvature tensor on nearly K\"{a}hler manifolds
$R_{MNRS}=J^{~K}_{M}J^{~L}_{N}J^{~P}_{R}J^{~Q}_{S}R_{KLPQ}=J^{~K}_{R}J^{~L}_{S}R_{MNKL}-(\nabla_{M}J^{~L}_{N})(\nabla_{R}J_{SL})$
and the relation $S^{NM}J_{N}^{~~K}=-S^{NK}J_{N}^{~~M}$ \cite{yano}.
So, the matter in the action is minimally coupled to gravity if the matter field $J$ is of nearly K\"{a}hler type; then
\begin{eqnarray}
\nabla^{M}T_{MN}=0.
\end{eqnarray}
Now, employing the nearly K\"{a}hlerian properties \eqref{4} results in
the energy-momentum tensor as follows
\begin{eqnarray}\label{299}
T_{MN}=\dfrac{(d-1)(2a-db)}{16\pi}[-6(\nabla_{M}J^{~K}_{L})(\nabla_{N}J^{~L}_{K})
+g_{MN}(\nabla_{P}J^{~K}_{L})(\nabla^{P}J^{~L}_{K})],
\end{eqnarray}
where its trace is given by
\begin{eqnarray}\label{30}
T&=&g^{MN}T_{MN}\nonumber\\
&=&\dfrac{(d-1)(2a-db)(d-6)}{16\pi}(\nabla_{P}J^{~K}_{L})(\nabla^{P}J^{~L}_{K}).
\end{eqnarray}
Obviously, the trace of the energy-momentum tensor does not vanish. In fact, $T_{MN}$ is traceless for dilation-invariant scalar field theories, so conformal invariance has been broken here.
Moreover, under the general coordinate transformation
\begin{eqnarray}
x'^{M}=x^{M}+\epsilon~ \xi^{M}(x),~~~~~~\epsilon\rightarrow 0,
\end{eqnarray}
where the metric and K\"{a}hler two-form transform as
\begin{eqnarray}
g'_{MN}-g_{MN}&=&\epsilon\partial _{M}\xi ^{K}g_{KN}+\epsilon\partial _{N}\xi ^{K}g_{MK}+\epsilon\xi ^{K}\partial _{K}g_{MN},\nonumber\\
\Omega'_{MN}-\Omega_{MN}&=&\epsilon\partial _{M}\xi ^{K}\Omega_{KN}+\epsilon\partial _{N}\xi ^{K}\Omega_{MK}+\epsilon\xi ^{K}\partial _{K}\Omega_{MN},
\end{eqnarray}
a direct calculation, by using the following relation between the covariant derivative of $J$ and the Ricci and Hermitian Ricci tensors on nearly K\"{a}hler manifolds \cite{yano},
\begin{eqnarray}\label{330}
\nabla_{M}\nabla^{M}J_{K}^{~~N}=J^{NM}(R_{KM}-R^{*}_{KM}),
\end{eqnarray}
reveals that the action \eqref{model} is invariant under general coordinate transformations if $\partial_{M}\xi^{M}=0$.
\section{Example}
In this section, we introduce a particular 10-dimensional nearly K\"{a}hler manifold on which we would like to describe our gravitational model \eqref{model}. It is a theorem that a nearly K\"{a}hler manifold $M$ can be decomposed as the direct product $M=M^{k}\times M^{s}$, where $M^{k}$ is a K\"{a}hler and $M^{s}$ is a strictly nearly K\"{a}hler manifold \cite{gary2}. The known 6-dimensional examples of nearly K\"{a}hler manifolds are $SU(3)/U(1)\times U(1)$, $G_{2}/SU(3)$, $Sp(2)/SU(2)\times U(1)$ and $S^3\times S^3$ \cite{Butruille}.
Particularly, we are interested in group manifolds, so we will focus on $S^3\times S^3$, which is, up to now, the only group manifold example among the known nearly K\"{a}hler manifolds. Then, with a 4-dimensional K\"{a}hler manifold $M_{4}$ (i.e. ${\nabla}J=0$), $M_{4}\times S^3\times S^3$ will be a 10-dimensional nearly K\"{a}hler manifold on which we can consider a metric ansatz of the form
\begin{eqnarray}\label{31}
dS^{2}_{10}=g_{MN}dx^{M}dx^{N}
=g_{\mu\nu}(x^{\rho})dx^{\mu}dx^{\nu}+g_{\hat{\mu}\hat{\nu}}(x^{\hat{\rho}})dx^{\hat{\mu}}dx^{\hat{\nu}},
\end{eqnarray}
where $g_{\mu\nu}(x^{\rho})$ is a 4-dimensional space-time metric and $g_{\hat{\mu}\hat{\nu}}(x^{\hat{\rho}})$ denotes the 6-dimensional metric. The indices $M, N, ...$ run over the whole 10 dimensions, the indices $\mu, \nu, \rho, ...$ run over $0, 1, 2, 3$, labeling the 4-dimensional space-time, and the indices $\hat{\mu}, \hat{\nu}, \hat{\rho}$
run over $4,5,...,9$, labeling the 6-dimensional compact nearly K\"{a}hler manifold.
It is more convenient to apply our formalism in a non-coordinate basis \cite{nakahara},
and for this reason we prefer a group manifold to work out as an example\footnote{In the coordinate basis $T_{p}M$ is spanned by $\{e_{M}\}=\{\frac{\partial}{\partial x^{M}}\}$, whereas in a non-coordinate basis there is an alternative choice for the basis, $\{e_{A}\}$. The two bases are related to each other by the vielbeins $e_{M}^{~~A}$, and we have $[e_{A},e_{B}]=f_{AB}^{~~C}e_{C}$.}.
Consider a 10-dimensional group manifold, $R\times B\times SU(2)\times SU(2)$, where $R$ is a 1-dimensional Abelian Lie group whose coordinate will be regarded as the time variable, and $B$ is a 3-dimensional real Lie group (Bianchi Lie group) \cite{Landau}. The basis of $B$ is labeled by $i, j, k, ...$, the indices $\hat{a}, \hat{b}, \hat{c},..$ run over the whole Lie algebra of $SU(2)\times SU(2)$, and the indices $A, B, C, ...$ label the 10-dimensional manifold.
We consider a non-coordinate basis for the 4-dimensional part of the manifold, where the vielbeins $e_{\mu}^{~~a}(x)$ depend only on the space coordinates, and the non-coordinate metric $g_{ab}(t)$, as the variable of the $R$ Lie group, depends only on the time $t$. In this way, factorizing the 4-dimensional space-time metric in a synchronous frame gives \cite{Landau,mr}
\begin{eqnarray}\label{34}
g_{\mu\nu}dx^{\mu}dx^{\nu}&=&e_{\mu}^{~~a}(x) g_{ab}(t)e_{\nu}^{~~b}(x)dx^{\mu}dx^{\nu}\nonumber\\
&=&-g_{00}(t)dt^{2}+e_{\alpha}^{~~i}(x) g_{ij}(t)e_{\beta}^{~~j}(x)dx^{\alpha}dx^{\beta},
\end{eqnarray}
where $ {X_{i}} $ and $ {x_{i}} $ indicate the generators and coordinates of the $B$ Lie group, respectively. Then, for the 6-dimensional space we set
\begin{eqnarray}\label{35}
g_{\hat{\mu}\hat{\nu}}=e_{\hat{\mu}}^{~~\hat{a}}(x_{\hat{a}}) e_{\hat{\nu}}^{~~\hat{b}}(x_{\hat{a}}) g_{\hat{a}\hat{b}},
\end{eqnarray}
where $ {{X_{\hat{a}}}} $ and $ {x_{\hat{a}}} $ are the generators and coordinates of the $SU(2)\times SU(2)$ Lie group, respectively.
In this non-coordinate basis for the 10-dimensional manifold we have the following relation for the Ricci tensor
\begin{eqnarray}\label{36}
R_{MN}=e_{M}^{~~a} e_{N}^{~~b} R^{(4)}_{ab}(t)+e_{M}^{~~\hat{a}} e_{N}^{~~\hat{b}} R^{(6)}_{\hat{a}\hat{b}}.
\end{eqnarray}
Note that, as in 6-dimensional Einstein-Yang-Mills theory \cite{ranjbar}, Poincar\'{e} invariance implies
\begin{eqnarray}\label{360}
R_{a\hat{b}}=0,~~~~~ \Omega_{a\hat{b}}=0,
\end{eqnarray}
hence $J_{a}^{~\hat{b}}=0$. The expression of the Ricci tensor in the non-coordinate basis, in terms of structure constants and time derivatives, is given in Appendix $C$ in \eqref{38}.
Returning to the Einstein equations of motion \eqref{28}, the metric and $J$ are not completely arbitrary and should be of nearly K\"{a}hler type. Hence, the first essential step toward solving the equations of motion is to identify a nearly K\"{a}hler metric and almost complex structure by solving the equations $(5a)$ and $(5b)$ along with \eqref{1} and \eqref{2}. As mentioned in the previous section, the nearly K\"{a}hler condition imposes an extra identity on $J$ which is consistent with the metric and the conservation of the energy-momentum tensor. In Appendix $C$, our new method of calculating the nearly K\"{a}hler structure in the non-coordinate basis is explained in detail, where for the particular example of $R\times II\times SU(2)\times SU(2)$ \footnote{
Note that $II$ denotes the Bianchi type Lie group which, along with the $R$ Lie group, is capable of carrying the class of K\"{a}hler
structures given in \eqref{55} and \eqref{555}.} the metric and the complex structure have been obtained in \eqref{44}, \eqref{444} and \eqref{55}, \eqref{555}. As
mentioned above, the $SU(2)\times SU(2)$ part of the 10-dimensional group manifold is strictly nearly K\"{a}hler and the $R\times
II$ part of it is a K\"{a}hler manifold, so altogether they construct a strictly nearly K\"{a}hler manifold. The final nearly K\"{a}hler
structure (metric and complex structure), which contains an arbitrary function of time $F(t)$ together with four arbitrary constants $c_{1}, c_{2}, c_{3}$ and $\xi$, is given by
\begin{eqnarray}\label{111}
dS^{2}_{10}&=& g_{ab}e^{a}\otimes e^{b}+g_{\hat{a}\hat{b}}e^{\hat{a}}\otimes e^{\hat{b}}\nonumber\\
&=&(-{\frac {\rm d}{{\rm d}t}}F \left( t \right)e^{1}\otimes e^{1}-{{\it c_{3}}}^{2}{\frac {\rm d}{{\rm d}t}}F \left( t \right) e^{2}\otimes e^{2}+\left({\it c_{1}}+{\it c_{2}}F \left( t \right)\right) e^{3}\otimes e^{3}+{\frac { \left( {\it c_{1}}+F \left( t \right) {\it c_{2}} \right) {{\it c_{3}}
}^{2}}{{{\it c_{2}}}^{2}}}e^{4}\otimes e^{4})\nonumber\\
&-&( 2\xi(e^{5}\otimes e^{5}+e^{6}\otimes e^{6}+e^{7}\otimes e^{7}+e^{8}\otimes e^{8}+e^{9}\otimes e^{9}+ e^{10}\otimes e^{10}\nonumber\\
&-&\frac{1}{2}(e^{5}\otimes e^{8}+e^{6}\otimes e^{9}+e^{7}\otimes e^{10}))),
\end{eqnarray}
\begin{eqnarray}\label{112}
\Omega&=&\frac{1}{2}(\Omega_{ab}e^{a}\wedge e^{b}+\Omega_{\hat{a}\hat{b}}e^{\hat{a}}\wedge e^{\hat{b}})\nonumber\\
&=&\frac{1}{2}(-{\it c_{3}}\,{\frac {\rm d}{{\rm d}t}}F \left( t \right)~~ e^{1}\wedge e^{2}+{\frac {{\it c_{3}}\, \left( {\it c_{1}}+F \left( t \right) {\it c_{2}}
\right) }{{\it c_{2}}}}~~e^{3}\wedge e^{4}+\sqrt {3}\xi\ (e^{5}\wedge e^{8}+e^{6}\wedge e^{9}+e^{7}\wedge e^{10})
).
\end{eqnarray}
The metric and almost complex structure present nearly K\"{a}hler structure with some arbitrariness which could be determined by solving the Einstein equations. The metric and K\"{a}hler two form in the equations are in the non-coordinate basis and explicit forms of $g_{MN}$ and $\Omega_{MN}$ may be obtained by multiplication of the vielbien given in \eqref{vs} and \eqref{vII}.
Now, with the above nearly K\"{a}hler metric and complex structure we may solve the Einstein equations to fix the function $F(t)$, and take advantage of the arbitrary constants to construct the correct signature of the metric. Decomposing the Einstein equations \eqref{28} into the 6 extra dimensions and the 4-dimensional space-time gives
\begin{eqnarray}\label{32}
R_{\hat{\mu}\hat{\nu}}-\frac{1}{2}R^{(10)} g_{\hat{\mu}\hat{\nu}}+4\Lambda g_{\hat{\mu}\hat{\nu}}=\dfrac{(18 a-90 b)}{2}(-6(\nabla_{\hat{\mu}}J^{~\hat{\rho}}_{\hat{\sigma}})(\nabla_{\hat{\nu}}J^{~\hat{\sigma}}_{\hat{\rho}})+g_{\hat{\mu}\hat{\nu}}(\nabla_{\hat{\lambda}}J^{~\hat{\rho}}_{\hat{\sigma}})(\nabla^{\hat{\lambda}}J^{~\hat{\sigma}}_{\hat{\rho}})),
\end{eqnarray}
\begin{eqnarray}\label{33}
R_{\mu\nu}-\frac{1}{2}R^{(10)} g_{\mu\nu}+4\Lambda g_{\mu\nu}=\frac{(18a-90b)}{2
}g_{\mu\nu}(\nabla_{\hat{\lambda}}J^{~\hat{\rho}}_{\hat{\sigma}})(\nabla^{\hat{\lambda}}J^{~\hat{\sigma}}_{\hat{\rho}}),
\end{eqnarray}
where, noting \eqref{360}, the Ricci scalar of the 10-dimensional manifold is the sum of the Ricci scalars of the 4$d$ and 6$d$ parts, i.e. $R^{(10)}=R^{(4)}+R^{(6)}$. On the K\"{a}hler part of the manifold, $M_{4}$, the first term of the energy-momentum tensor \eqref{27} vanishes, but a non-zero contribution in the second term is inherited from the 6-dimensional nearly K\"{a}hler manifold $S^3\times S^3$.
Solving the above Einstein equations for the metric \eqref{111} and the
complex structure \eqref{112} by using \eqref{3}, and choosing the arbitrary constants as $c_{1}=1$, $c_{2}=-1$, and
$c_{3}=1$, gives the function $F(t)$ and the cosmological constant, respectively,
as follows
\begin{eqnarray}\label{F(t)}
F(t)={\frac { \left( -24\,\Lambda\,\xi+280\,a-1400\,b-5 \right) t+ \left( -
24\,\Lambda+9 \right) \xi+280\,a-1400\,b-5}{ \left( t+1 \right)
\left( -24\,\Lambda\,\xi+280\,a-1400\,b-5 \right) }}
,
\end{eqnarray}
\begin{eqnarray}
\Lambda=\dfrac{1}{18}\,{\frac {264~a-1320~b-5}{\xi}}.
\end{eqnarray}
Consequently, by a redefinition of time as $t={{\rm e}^{\tau}}-1$, we obtain the 4-dimensional part of the metric \eqref{34} and the almost complex structure of the 4-dimensional space-time, respectively, as
\begin{eqnarray}
g_{\mu\nu}={\frac {-27\xi}{216 a-1080 b+5}}(-d\tau^{2}+{{\rm e}^{-\tau}}dx_{1}^{2}-x_{3}{{\rm e}^{-2\tau}}dx_{1}dx_{2}+ (-x_{3}^{2}{{\rm e}^{-2\tau}}+{{\rm e}^{-\tau}})dx_{2}^{2}+ {{\rm e}^{-\tau}} dx_{3}^{2}),
\end{eqnarray}
\begin{eqnarray}
J_{\mu}^{~\nu}=(\dfrac{27\xi}{216 a +1080 b +5})^{2}\left[ \begin {array}{cccc} 0&{{\rm e}^{-3\,\tau}} \left( {x_{3}}^{2}
+1 \right) &-{{\rm e}^{-3\,\tau}}x_{3}\, \left( -{x_{3}}^{2}+{{\rm e}^
{\tau}}-1 \right) &0\\ \noalign{\medskip}-{{\rm e}^{-\tau}}&0&0&0
\\ \noalign{\medskip}-{{\rm e}^{-\tau}}x_{3}&0&0&-{{\rm e}^{-2\,\tau}}
\\ \noalign{\medskip}0&-{{\rm e}^{-3\,\tau}}x_{3}&{{\rm e}^{-3\,\tau}}
\left( -{x_{3}}^{2}+{{\rm e}^{\tau}} \right) &0\end {array} \right]_.
\end{eqnarray}
Also, it turns out that the 4-dimensional part of the energy-momentum tensor is directly proportional to the space-time metric:
\begin{eqnarray}\label{CC}
T_{\mu\nu}={\frac {1260\,a-6300\,b}{27\,\xi}}g_{\mu\nu},
\end{eqnarray}
i.e. the 4-dimensional induced matter obtained in this example takes the form of a {\it cosmological constant} which depends explicitly on the parameters $a, b$ and $\xi$, among which $\xi$ comes from the $6d$ part of the manifold. In this regard, it is appealing to discuss the cosmological constant and its well-known problems in the context of the present model.
\subsection{Cosmological constant and the fine-tuning problem}
According to present observations, the experimental upper bound on the current value of the cosmological constant is extremely small. Moreover, it is usually assumed that an effective cosmological constant describes the energy density of the vacuum $<\rho_{vac}>$; indeed, it is commonly
believed that $<\rho_{vac}>$ collects the quantum field theory contributions to the effective cosmological constant
\begin{equation}\label{1'}
\Lambda_{eff}=\Lambda+ \kappa <\rho_{vac}>,
\end{equation}
where $\Lambda$ is a bare cosmological constant.
On the other hand, calculations show that quantum field theory contributions enormously affect the value of the effective cosmological constant \cite{carroll}:
\begin{equation}\label{3'}
<\rho_{vac}> \sim
M_{EW}^4 \sim 10^{47} {\rm ~erg/cm}^3\ ,
\end{equation}
for the electroweak cutoff,
\begin{equation}\label{4'}
<\rho_{vac}> \sim
M_{QCD}^4 \sim 10^{36} {\rm ~erg/cm}^3\ ,
\end{equation}
for the QCD cutoff,
\begin{equation}\label{5'}
<\rho_{vac}> \sim
M_{GUT}^4 \sim 10^{102} {\rm ~erg/cm}^3\ ,
\end{equation}
for the GUT cutoff, and
\begin{equation}\label{6'}
<\rho_{vac}> \sim
M_P^4 \sim 10^{110} {\rm ~erg/cm}^3\ ,
\end{equation}
for the Planck cutoff. Since general relativity is a classical theory applicable on scales larger than the Planck scale, one may reasonably expect the Einstein equation in 4-dimensional space-time to be valid at the electroweak, QCD and GUT scales, and approximately valid at the Planck scale (in $G=1$ units):
\begin{equation}\label{7'}
R_{\mu\nu} - {1\over 2}R^{(4)}g_{\mu\nu}+ \Lambda_{eff}\, g_{\mu\nu}
= 8\pi T_{\mu\nu}\,,
\end{equation}
where $\Lambda_{eff}$ is the effective cosmological constant with contributions
coming from the electroweak, QCD, GUT and even Planck scales. However, current
observations require the following Einstein equation
\begin{equation}\label{8}
R_{\mu\nu} - {1\over 2}R^{(4)}g_{\mu\nu}
+ \Lambda_{obs} \,g_{\mu\nu}
= 8\pi T_{\mu\nu}\,,
\end{equation}
where $\Lambda_{obs}$ is the observed cosmological constant, corresponding
to an energy density of order of magnitude $10^{-10} {\rm ~erg/cm}^3$. The fact that $\Lambda_{obs} \sim 10^{-120} \Lambda_{eff}$ is the
well-known cosmological constant problem \cite{Weinberg}.
Many approaches have been introduced to solve this challenging problem,
none with full success. Hence, there has been interest in alleviating
the problem through a fine-tuning mechanism
which can at least provide a small observed value for the cosmological
constant. In the above example, we found the induced 4-dimensional energy-momentum tensor \eqref{CC} in the form of a cosmological constant. Hence, the 4-dimensional Einstein equation \eqref{33} can be written as
\begin{eqnarray}\label{33'}
R_{\mu\nu}-\frac{1}{2}R^{(4)} g_{\mu\nu}+4\Lambda g_{\mu\nu}=(
{\frac {1260\,a-6300\,b}{27\,\xi}}+\frac{1}{2}R^{(6)})g_{\mu\nu},
\end{eqnarray}
or effectively as
\begin{eqnarray}\label{33''}
R_{\mu\nu}-\frac{1}{2}R^{(4)} g_{\mu\nu}+\Lambda_{obs}\, g_{\mu\nu}=0,
\end{eqnarray}
where
\begin{equation}\label{obs}
\Lambda_{obs}=\dfrac{2}{9}{\frac {264\,a-1320\,b-5}{\xi}}-{\frac {1260\,a-6300\,b}{27\,\xi}}-\frac{1}{2}R^{(6)}.
\end{equation}
Eq.~\eqref{33''} is an Einstein equation in which the induced matter \eqref{CC}
has been absorbed into an effective cosmological constant $\Lambda_{obs}$, which can be fine-tuned to a very small or vanishing value, in agreement with observations, by the free parameters $a, b$ and $\xi$. The parameter $\xi$ comes from the $6d$ part of the manifold through \eqref{44} and \eqref{38} and is related to the Ricci scalar of the $6d$ part by $R^{(6)}=\frac{-5}{3 \xi}$.
Setting $\Lambda_{obs}\simeq0$ gives
$$
a\simeq{5\,b+{\frac {5}{1896}}},
$$
or equivalently, the factor $(18a - 90b)$ in the Lagrangian is fixed to $\simeq{\dfrac{15}{316}}$.
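Indeed, substituting $a\simeq 5b+\frac{5}{1896}$ gives
$$
18a-90b\simeq 18\left(5b+\frac{5}{1896}\right)-90b=\frac{90}{1896}=\frac{15}{316}.
$$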
\section{Candidate for dark energy }
Recent cosmological observations \cite{c1}, WMAP \cite{c2}, SDSS \cite{c3} and X-ray \cite{c4} data indicate that our universe is undergoing
an accelerated expansion. These observations also confirm that the
universe is spatially flat and consists of about $70 \%$ dark
energy with negative pressure, $30\%$ dust matter (cold dark
matter plus baryons), and negligible radiation.
To explain the nature of dark energy and the origin of cosmic acceleration, many theories and models have been proposed. The simplest candidate for dark energy is a tiny positive cosmological constant. An alternative proposal is the dynamical dark energy scenario, where the effective dynamical nature of dark energy can originate from various fields, such as a canonical scalar field (quintessence) \cite{quint}, a phantom field \cite{phant}, or the combination of quintessence and phantom in a unified model named quintom
\cite{quintom}. Another theory has recently been constructed in the light of the holographic principle of quantum gravity, which may simultaneously provide a solution to the coincidence problem \cite{holoprin}.
Let us now investigate whether the almost complex structure can play the role
of dark energy. In this regard, we add a $4d$ baryonic matter term
to the action \eqref{model} as \footnote{Note that, as mentioned in the introduction,
the Einstein-Hilbert term, the $\nabla J \nabla J$ term and the matter term do not contain terms with higher than \textit{second} derivatives of the metric.
}
\begin{eqnarray}\label{24'}
S=\frac{1}{16\pi}\int d^{d}x\sqrt{-g}(R-(d-2)\Lambda)+\dfrac{(d-1)(2a-db)}{16\pi}\int d^{d}x\sqrt{-g}(\nabla^{K}J^{~M}_{N})(\nabla_{K}J^{~N}_{M})+\int d^{4}x\sqrt{-g}\, (\textit{L}_M).
\end{eqnarray}
In $d=10$, it is easy to show that the $4d$ field equations take the following form
\begin{eqnarray}\label{33'''}
R_{\mu\nu}-\frac{1}{2}R^{(4)} g_{\mu\nu}+4\Lambda g_{\mu\nu}=8 \pi T_{\mu\nu}^{(M)}+\frac{(18a-90b)}{2
}g_{\mu\nu}(\nabla_{\hat{\lambda}}J^{~\hat{\rho}}_{\hat{\sigma}})(\nabla^{\hat{\lambda}}J^{~\hat{\sigma}}_{\hat{\rho}})+\frac{1}{2}R^{(6)} g_{\mu\nu},
\end{eqnarray}
where $T_{\mu\nu}^{(M)}$ is the baryonic matter energy-momentum tensor. Now,
we may rewrite \eqref{33'''}
in the following form
\begin{eqnarray}\label{33''''}
R_{\mu\nu}-\frac{1}{2}R^{(4)} g_{\mu\nu}=8\pi (T_{\mu\nu}^{(M)}+T_{\mu\nu}^{(DE)}),
\end{eqnarray}
where $T_{\mu\nu}^{(DE)}$ is considered as the dark energy constructed entirely by the
almost complex structure as
\begin{eqnarray}\label{33'''''}
T_{\mu\nu}^{(DE)}=\frac{(18a-90b)}{2}g_{\mu\nu}(\nabla_{\hat{\lambda}}J^{~\hat{\rho}}_{\hat{\sigma}})
(\nabla^{\hat{\lambda}}J^{~\hat{\sigma}}_{\hat{\rho}})-4\Lambda g_{\mu\nu}+\frac{1}{2}R^{(6)} g_{\mu\nu}.
\end{eqnarray}
For example, in the case of $R\times II \times S^{3}\times S^{3}$ the dark
energy $T_{\mu\nu}^{(DE)}$ reduces to the cosmological-constant form with
$\Lambda_{obs}$ given by \eqref{obs}. Considering $T_{\mu\nu}^{(M)}$
as a perfect fluid and $g_{\mu\nu}$ as the Friedmann-Robertson-Walker
metric, the Einstein equations lead to the following acceleration
equation
\begin{eqnarray}
\frac{\ddot{a}}{a}=-\frac{1}{6}[\rho+3(p-\Lambda_{obs})].
\end{eqnarray}
In the present era, where the universe is pressureless ($p=0$), the above equation
indicates that the universe is accelerating provided that the current matter
density satisfies $\rho<3\Lambda_{obs}$.
{\it In other words, we have a $\Lambda$CDM model where the dark energy is provided by the almost complex structure}.
At the end of this section it is worth mentioning that, according to \eqref{30}, the trace of the energy-momentum tensor is proportional to the second term in the action, so the Lagrangian of \eqref{24'} may be regarded as a type of $f(R,T)$ modified theory of gravity, currently considered as
an alternative to dark energy models \cite{FRT}
\begin{eqnarray}
S=\int d^{d}x\sqrt{-g}(~\frac{1}{16\pi}~f(R,T)+L_{M}).
\end{eqnarray}
In the special case $f(R,T)=R+2f(T)$, our model is equivalent to $$
f(T)=\dfrac{8\pi}{(d-6)}T-\frac{(d-2)}{2}\Lambda.
$$
\section{Conclusion}
In this work, we have investigated a curvature-like tensor on nearly K\"{a}hler manifolds which, besides possessing the symmetry properties of the curvature tensor, carries the almost complex structure and may be invariant under conformal transformations of the metric. Then, following the idea of including
the almost complex structure in the action integral, we constructed a gravitational model with the scalar curvature
of this curvature-like tensor replacing the Ricci scalar in the Einstein-Hilbert action.
In this way, the almost complex structure enters the action integral in a purely geometrical way. Moreover, the corresponding energy-momentum tensor is interpreted as a dark energy term. It is remarkable that the conservation of the energy-momentum tensor
and diffeomorphism invariance coincide with the nearly K\"{a}hlerian properties of the manifold.
Furthermore, we have identified a nearly K\"{a}hler complex structure with an associated Hermitian metric on the
particular example of the group manifold $R\times II\times S^{3}\times S^{3}$. The formalism has been developed
in the non-coordinate basis, which greatly simplified the analysis. It turned
out that the structure obtained on the $S^{3}\times S^{3}$ part
of the manifold is in accordance with the structure found in Ref. \cite{halfflat}.
Then, we have solved the Einstein field equations exactly and obtained
the corresponding $4d$ metric and energy-momentum tensor. The
energy-momentum tensor, as the induced matter, appeared in the form of a cosmological
constant. We studied the cosmological constant and found a resolution of
the fine-tuning problem. Moreover, it turned out that the almost complex structure may be considered
a potential candidate for dark energy.
There are some interesting open problems which deserve to be investigated,
such as: {\it what is the two-dimensional sigma model whose 4-dimensional effective field theory is identified with our model?} These and some other interesting problems are under our current investigation.
\section*{Acknowledgment}
This research has been supported by Azarbaijan Shahid Madani University under research fund No. 401.231.
\section*{Appendix A}
In this appendix, we list the behaviors of tensors under conformal transformation of the metric, $\bar{g}=e^{2\sigma}g$. Riemann curvature tensor transforms as \cite{nakahara}
\begin{eqnarray}\label{16}
\bar{R}^{S}_{~LMN}=R^{S}_{~LMN}-g_{NL}B_{M}^{~S}+g_{KL}B_{M}^{~K}\delta^{S}_{N}-g_{KL}B_{N}^{~K}\delta^{S}_{M}+g_{ML}B_{N}^{~S},
\end{eqnarray}
where
\begin{eqnarray}\label{17}
B_{M}^{~K}=-\partial_{M}\sigma g^{KL}\partial_{L}\sigma+g^{KL}(\partial_{M}\partial_{L}\sigma-\Gamma^{P}{}_{ML}\partial_{P}\sigma)+\frac{1}{2}g^{LP}\partial_{L}\sigma\partial_{P}\sigma\delta_{M}^{K},
\end{eqnarray}
and $B_{MN}=g_{NL}B_{M}^{~L}=B_{NM}$. Also we have
\begin{eqnarray}\label{18}
\bar{R}_{MN}=R_{MN}-g_{MN}B_{S}^{~S}-(d-2)B_{MN},
\end{eqnarray}
and
\begin{eqnarray}\label{19}
\bar{g}_{MN}\bar{R}=(R-2(d-1)B_{S}^{~S})g_{MN},
\end{eqnarray}
where $d=dim M$. Under the conformal transformation of metric, one can obtain the following results for the transformed Hermitian Ricci tensor and Ricci scalar
\begin{eqnarray}\label{20}
\bar{R}^{*}_{MN}=R^{*}_{MN}-B_{MN}-g_{KS}B_{R}^{K}J^{~R}_{M}J^{~S}_{N},
\end{eqnarray}
and
\begin{eqnarray}\label{21}
\bar{g}_{MN}\bar{R}^{*}=g_{MN}(R^{*}-2B_{R}^{R}).
\end{eqnarray}
\section*{Appendix B}
Considering the $\Omega_{MN}=g_{NK}J_{M}^{~K}$ field as a natural generalization of the Maxwell vector field $A_{M}$, the field strength $H_{MNK}$ associated with $\Omega_{MN}$ is defined by
\begin{eqnarray}
H_{MNK}=\partial_{M}\Omega_{NK}+\partial_{N}\Omega_{KM}+\partial_{K}\Omega_{MN}=\nabla_{M}\Omega_{NK}+\nabla_{N}\Omega_{KM}+\nabla_{K}\Omega_{MN},
\end{eqnarray}
with respect to the Levi-Civita connection. Hence, a non-vanishing $H_{MNK}$ requires a non-K\"{a}hler geometry. In particular, for a strictly nearly K\"{a}hler structure $H_{MNK}$ takes the form
\begin{eqnarray}\label{90}
H_{MNK}^{(nk)}=3~\nabla_{M}\Omega_{NK}=3~g_{KR}\nabla_{M}J_{N}^{~R}.
\end{eqnarray}
Then, in nearly K\"{a}hler geometry the term $H_{MNK}H^{MNK}$, which gives dynamics to the $\Omega_{MN}$ field, takes its \textit{simplest form}
\begin{eqnarray}
H_{MNK}^{(nk)}H^{(nk)~MNK}=9(\nabla^{K}J^{~M}_{N})(\nabla_{K}J^{~N}_{M}).
\end{eqnarray}
Obviously, the right-hand side is the last term in the action \eqref{24}. The action \eqref{24} is analogous to the string effective action (in the Einstein
frame) in the nearly K\"{a}hler case, in which the dynamics
of the Kalb-Ramond field (here $\Omega_{MN}$) is given by an $H_{MNK}H^{MNK}$ term \cite{string}.
\section*{Appendix C}
In this appendix, we systematically identify a nearly K\"{a}hler complex structure with an associated Hermitian metric.
We write the formalism in the non-coordinate basis, which is related to the coordinate basis by the vielbein. The covariant derivative of the vielbein is given by
\begin{eqnarray}\label{49}
\nabla_{M}e^{N}_{~~A}=e_{M}^{~~D}e^{N}_{~~C}~\Gamma_{DA}^{C}.
\end{eqnarray}
Using the basic definitions of the Levi-Civita connection and the Riemann curvature tensor, we obtain explicit expressions for them in the non-coordinate basis in terms of the structure constants and time derivatives of the metric, as follows
\begin{eqnarray}\label{37}
\Gamma_{AB}^{C}=\frac{1}{2}(g^{CD}(e_{A}(g_{BD})+e_{B}(g_{DA})-e_{D}(g_{AB}))-g^{CD}(f_{AD}^{~~~E}g_{EB}+f_{BD}^{~~~E}g_{EA})+f_{AB}^{~~~C}),
\end{eqnarray}
\begin{eqnarray}\label{38}
R_{AB}=e_{D}(\Gamma_{BA}^{D})-e_{B}(\Gamma_{DA}^{D})+\Gamma_{BA}^{E}\Gamma_{DE}^{D}-\Gamma_{DA}^{E}\Gamma_{BE}^{D}-f_{DB}^{~~~E}\Gamma_{EA}^{D}.
\end{eqnarray}
Now, by employing \eqref{49} and \eqref{37}, one can obtain the nearly K\"{a}hler condition $(5a)$ in non-coordinate basis in the following form
\begin{eqnarray}\label{39}
\Gamma_{EA}^{C}J_{D}^{~A}-\Gamma_{ED}^{A}J_{A}^{~C}+\Gamma_{DA}^{C}J_{E}^{~A}-\Gamma_{DE}^{A}J_{A}^{~C}+e_{D}(J_{E}^{~C})+e_{E}(J_{D}^{~C})=0,
\end{eqnarray}
where $e_{D}$ acts on $J_{B}^{~A}$ and $g_{AB}$ as $\delta^{0}_{D}\frac{d}{dt}$. In this basis, the 4-dimensional part of the almost complex structure, $J_{d}^{~a}$, depends on time, while its 6-dimensional part has constant components only. Noting that the 6-dimensional Lie group $SU(2)\times SU(2)$ is nearly K\"{a}hler while the 4-dimensional Lie group $R\times B$ is K\"{a}hlerian\footnote{Note
that $B$ is a three-dimensional Bianchi type Lie group.}, the above equation decomposes into the following two equations
\begin{eqnarray}\label{40}
\Gamma_{\hat{m}\hat{a}}^{\hat{n}}J_{\hat{d}}^{~\hat{a}}-\Gamma_{\hat{m}\hat{d}}^{\hat{a}}J_{\hat{a}}^{~\hat{n}}+\Gamma_{\hat{d}\hat{a}}^{\hat{n}}J_{\hat{m}}^{~\hat{a}}-\Gamma_{\hat{d}\hat{m}}^{\hat{a}}J_{\hat{a}}^{~\hat{n}}=0,
\end{eqnarray}
\begin{eqnarray}\label{41}
\Gamma_{ma}^{n}J_{d}^{~a}-\Gamma_{md}^{a}J_{a}^{~n}+e_{d}(J_{m}^{~n})=0.
\end{eqnarray}
It is useful to introduce two kinds of matrices built from the connection coefficients, for instance for $\Gamma_{\hat{a}\hat{b}}^{\hat{c}}$ appearing in the equations, as follows
\begin{eqnarray}\label{42}
(\Gamma 1_{\hat{a}})_{\hat{b}}^{~~\hat{c}}=\frac{1}{2}(-\chi_{\hat{a}}+g.\chi_{\hat{a}}^{t}.g^{-1}+{\cal {Y}}^{\hat{d}}.g^{-1} g_{\hat{d}\hat{a}}),\nonumber\\
(\Gamma 2_{\hat{b}})_{\hat{a}}^{~~\hat{c}}=\frac{1}{2}(\chi_{\hat{b}}+g.\chi_{\hat{b}}^{t}.g^{-1}+{\cal {Y}}^{\hat{d}}.g^{-1} g_{\hat{d}\hat{b}}),
\end{eqnarray}
where $(\chi_{\hat{a}})_{\hat{b}}^{~~\hat{c}}=-f_{\hat{a}\hat{b}}^{~~\hat{c}}$ and $({\cal {Y}}^{\hat{c}})_{\hat{a}\hat{b}}=-f_{\hat{a}\hat{b}}^{~~\hat{c}}$ are given by the adjoint representation of the Lie algebra $su(2)\oplus su(2)$. Therefore, the matrix form of the nearly K\"{a}hler equation \eqref{40} becomes
\begin{eqnarray}\label{43}
J.\Gamma 1_{\hat{a}}-\Gamma 1_{\hat{a}}.J+\Gamma 2_{\hat{d}}* J_{\hat {a}}^{~~\hat{d}}-\Gamma 2_{\hat{a}}.J=0.
\end{eqnarray}
Now, solving equation \eqref{43} by using \eqref{1} and \eqref{2} for $SU(2)\times SU(2)$ gives an example of a 6-dimensional nearly K\"{a}hler structure, up to an arbitrary constant $\xi$, in the following form
\begin{eqnarray}\label{44}
g_{\hat{a}\hat{b}}=- 2\,\xi \left[ \begin {array}{cccccc} 1&0&0&1/2&0&0\\ \noalign{\medskip}0&1&0
&0&1/2&0\\ \noalign{\medskip}0&0&1&0&0&1/2\\ \noalign{\medskip}1/2&0&0
&1&0&0\\ \noalign{\medskip}0&1/2&0&0&1&0\\ \noalign{\medskip}0&0&1/2&0
&0&1\end {array} \right]_,
\end{eqnarray}
\begin{eqnarray}\label{444}
J_{\hat{a}}^{~\hat{b}}=\frac{\sqrt{3}}{3} \left[ \begin {array}{cccccc} -1&0&0&-2&0&0\\ \noalign{\medskip}0&-1&0
&0&-2&0\\ \noalign{\medskip}0&0&-1&0&0&-2\\ \noalign{\medskip}2&0&0&1&0
&0\\ \noalign{\medskip}0&2&0&0&1&0\\ \noalign{\medskip}0&0&2&0&0&1
\end {array} \right]_.
\end{eqnarray}
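As a consistency check, writing \eqref{444} in $3\times3$ blocks as
$$
J=\frac{1}{\sqrt{3}}\left[ \begin {array}{cc} -1&-2\\ \noalign{\medskip}2&1\end {array} \right]\otimes 1_{3},
$$
one verifies
$$
J^{2}=\frac{1}{3}\left[ \begin {array}{cc} 1-4&2-2\\ \noalign{\medskip}-2+2&-4+1\end {array} \right]\otimes 1_{3}=-1_{6},
$$
as required for an almost complex structure.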
The $J_{\hat{a}}^{~\hat{b}}$ and $g_{\hat{a}\hat{b}}$ obtained with our method are in accordance with the half-flat structure on $S^{3}\times S^{3}$ found in \cite{halfflat}, noting that nearly K\"{a}hler structures are a special class of half-flat structures.
The vielbein on $S^{3}\times S^{3}$ part is given by
\begin{eqnarray}\label{vs}
e_{\hat{\mu}}^{~~\hat{a}}= \left[ \begin {array}{cccccc} \cos \left( x_{{6}} \right) \cos
\left( x_{{7}} \right) &-\cos \left( x_{{6}} \right) \sin \left( x_{{
7}} \right) &\sin \left( x_{{6}} \right) &0&0&0\\ \noalign{\medskip}
\sin \left( x_{{6}} \right) &\cos \left( x_{{7}} \right) &0&0&0&0
\\ \noalign{\medskip}0&0&1&0&0&0\\ \noalign{\medskip}0&0&0&\cos
\left( x_{{9}} \right) \cos \left( x_{{10}} \right) &-\cos \left( x_{{9}} \right) \sin \left( x_{{10}} \right) &\sin \left( x_{{9}}
\right) \\ \noalign{\medskip}0&0&0&\sin \left( x_{{9}} \right) &\cos
\left( x_{{10}} \right) &0\\ \noalign{\medskip}0&0&0&0&0&1
\end {array} \right]_.
\end{eqnarray}
On the other hand, we are looking for a K\"{a}hler structure on the 4-dimensional manifold. As a specific example,
we consider the Bianchi type \textit{II} group as the 3-dimensional Lie group $B$, with non-zero commutation relation \cite{Landau}
\begin{eqnarray}
[X_{2},X_{3}]=X_{1},
\end{eqnarray}
and vielbein $e_{\alpha}^{~~i}(x)$ as \cite{mr}
\begin{eqnarray}\label{vII}
e_{\alpha}^{~~i}(x)=\left[ \begin {array}{ccc} 1&0&0\\ \noalign{\medskip}{\it x_{3}}&1&0
\\ \noalign{\medskip}0&0&1\end {array} \right]_.
\end{eqnarray}
Now, solving \eqref{41} by using \eqref{1} and \eqref{2} on $R\times II$ gives the K\"{a}hler almost complex structure and Hermitian metric as follows
\begin{eqnarray}\label{55}
g_{ab}= \left[ \begin {array}{cccc} -{\frac {d}{dt}}F \left( t \right) &0&0&0
\\ \noalign{\medskip}0&-{{\it c_{3}}}^{2}{\frac {d}{dt}}F \left( t
\right) &0&0\\ \noalign{\medskip}0&0&{\it c_{1}}+F \left( t \right) {
\it c_{2}}&0\\ \noalign{\medskip}0&0&0&{\frac { \left( {\it c_{1}}+F \left(
t \right) {\it c_{2}} \right) {{\it c_{3}}}^{2}}{{{\it c_{2}}}^{2}}}
\end {array} \right]_,
\end{eqnarray}
\begin{eqnarray}\label{555}
J_{a}^{~b}=\left[ \begin {array}{cccc} 0&-{\frac {{\it c_{3}}\,{\frac {d}{dt}}F
\left( t \right) }{{\it c_{1}}+F \left( t \right) {\it c_{2}}}}&0&0
\\ \noalign{\medskip}-{{\it c_{3}}}^{-1}&0&0&0\\ \noalign{\medskip}0&0&0&
-{\frac {{\it c_{3}}\, \left( {\it c_{1}}+F \left( t \right) {\it c_{2}}
\right) }{ \left( {\frac {d}{dt}}F \left( t \right) \right) {\it c_{2}}
}}\\ \noalign{\medskip}0&0&-{\frac {{\it c_{2}}}{{\it c_{3}}}}&0\end {array}
\right]_,
\end{eqnarray}
where $F(t)$ is an arbitrary well-defined function of $t$, and $c_{1}, c_{2}$ and $c_{3}$ are arbitrary constants.
\section{Introduction}
Federated Learning or Federated Machine Learning (FML) \cite{b1}
is introduced to solve privacy issues in machine learning using data
from multiple parties. Instead of transferring data directly into
a centralized data warehouse for building machine learning models,
Federated Learning allows each party to own the data in its own place
and still enables all parties to build a machine learning model together.
This is achieved either by building a meta-model from the sub-models
each party builds so that only model parameters are transferred, or
by using encryption techniques to allow safe communications in between
different parties \cite{b2}.
Federated Learning opens new opportunities for many industry applications.
Companies have been having big concerns on the protection of their
own data and are unwilling to share with other entities. With Federated
Learning, companies can build models together without disclosing their
data and share the benefit of machine learning. An example of Federated
Learning use case is in insurance industry. Primary insurers, reinsurers
and third-party companies like online retailers can all work together
to build machine learning models for insurance applications. Number
of training instances is increased by different insurers and reinsurers,
and feature space for insurance users is extended by third-party companies.
With the help of Federated Learning, machine learning can cover more
business cases and perform better.
For the ecosystem of Federated Learning to work, we need to encourage
different parties to contribute their data and participate in the
collaborative federation. A credit allocation and rewarding mechanism
is crucial for incentivizing current and potential participants of
Federated Learning. A fair measure of the contribution of each party
in Federated Learning enables fair credit allocation. Data quantity
alone is certainly not enough, as one party may contribute lots of
data that does not help much in building the model. We need a way to
fairly measure the overall data quality and hence determine the contribution.
In this paper we develop simple but powerful techniques to fairly
calculate the contributions of multiple parties in FML, in the context
of both horizontal FML and vertical FML. For horizontal FML, each
party contributes part of the training instances. We use a deletion
method to calculate the grouped instance influence: each time we delete
the instances provided by one party, retrain the model,
calculate the difference in prediction results between the
new model and the original one, and use this measure of difference
to determine the contribution of that party. For vertical FML,
each party owns part of the feature space. We use Shapley values
\cite{b21} to calculate the grouped feature importance, and use this
measure of importance to determine the contribution of each party. To
our knowledge, the method we propose is the first attempt at measuring
model contributions and allocating credit in the context of federated
machine learning.
In the remainder of this paper, we first briefly introduce Federated
Learning. We then present the federated deletion method and federated
Shapley method we propose for measuring the contributions of multiple parties
in horizontal and vertical FML models, followed by some experiments.
We conclude the paper with a discussion in the last section.
\section{Federated Learning}
Federated Learning originated from academic papers such as \cite{b1,b3}
and a follow-up blog post from Google in 2017. The motivation was that Google
wanted to train its input method for Android phones, called ``Gboard'',
without uploading sensitive keyboard data from its
users to Google's servers. Rather than uploading user data and
training models in the cloud, Google lets users train a separate model
on their own smartphones (thanks to the neural engines from several
chip manufacturers), uploads only the black-box model parameters from
each user to the cloud, merges the models, updates the
official centralized model, and pushes the model back to Google users.
This not only avoids the transmission and storage of users' sensitive
personal data, but also utilizes the computational power of smartphones
(the concept of Edge Computing) and reduces the computational pressure
on centralized servers.
When the concept of Federated Learning was published, Google's focus
was on the transmission of models, as the upload bandwidth of mobile
phones is usually very limited. One possible reason is that similar
engineering ideas had already been discussed intensively in distributed machine
learning. The focus of Federated Learning was thus more on the ``engineering
work'': no rigorous distributed computing environment, limited
upload bandwidth, and a massive number of users acting as worker nodes.
Data privacy is becoming an important issue, and many related
regulations and laws have been put into effect by authorities and
governments \cite{b4,b5}. The companies that have been accumulating
large amounts of data and have just started to extract value from it now find
their hands tied. On the other hand, all companies place great value on their
own data and are reluctant to share it with others. Information
islands kill the possibility of cooperation and mutual benefit. People
are looking for a way to break such a prisoner's dilemma while complying
with all the regulations. Federated Learning was soon recognized as
a great solution for encouraging collaboration while respecting data
privacy.
\cite{b2} describes Federated Learning in three categories: Horizontal
Federated Learning, Vertical Federated Learning and Federated Transfer
Learning. Such categorization extends the concept of Federated Learning
and clarifies the specific solutions under different use cases.
\emph{Horizontal Federated Learning} applies to circumstances where
parties have a large overlap in features but little overlap in instances.
This covers the Google Gboard use case, where models can be ensembled
directly from the edge models.
\emph{Vertical Federated Learning} refers to settings with many overlapping
instances but few overlapping features. An example is between insurers
and online retailers: they share many users, but
each owns its own feature space and labels. Vertical Federated Learning
merges the features together to create a larger feature space for
machine learning tasks and uses homomorphic encryption to protect
data privacy for the involved parties.
\emph{Federated Transfer Learning} uses Transfer Learning \cite{b6}
to improve model performance when there is little overlap in either
features or instances.
An example illustrating the idea in the insurance industry is the following.
Horizontal FML corresponds to primary insurers working with a reinsurer:
for the same product, primary insurers share similar features.
Vertical FML corresponds to a reinsurer working with a third-party
data provider such as an online retailer, which has additional
features for a given policyholder that can increase the predictive
power of models built for insurance.
For a detailed introduction to Federated Learning and the respective
technologies used, please refer to \cite{b2}.
\section{Deletion Method for Horizontal FML}
Most model interpretation methods can be applied, with minor
modifications, to contribution measurement for horizontal Federated Learning,
since all parties have data for the full feature space. There is no special
difficulty in interpreting prediction results on either training data or
new data, whether for single predictions as a granular check or
for batch predictions as a holistic check.
Approaches to identifying influential instances, such as deletion
diagnostics \cite{b8} and influence functions \cite{b7}, can be
used to measure the importance of individual instances to a machine learning
model. Here we propose a method based on deletion diagnostics to measure the
contributions of different parties in horizontal FML.
Deletion diagnostics is intuitive: we
retrain the model each time an instance is omitted from the training dataset
and measure how much the predictions of the retrained model change. Supposing
we are evaluating the effect of the $i$th instance on the model predictions,
the influence measure can be formulated as follows,
\begin{equation}
\text{Influence}^{-i}=\frac{1}{n}\sum_{j=1}^{n}|\hat{y}_{j}-\hat{y}_{j}^{-i}|,
\end{equation}
where $n$ is the size of the dataset, $\hat{y}_{j}$ is the prediction
for the $j$th instance made by the model trained on all data, and $\hat{y}_{j}^{-i}$
is the prediction for the $j$th instance made by the model trained with the
$i$th instance omitted.
For one party in horizontal FML with a subset of instances $D$, we
define the contribution as the total influence of all instances it
possesses, in the following form,
\begin{equation}
\text{Influence}^{-D}=\sum_{i\in D}\text{Influence}^{-i}.\label{eq:influenceD}
\end{equation}
We propose an approximation algorithm to implement the above influence
measure, considering each party's batch of instances as a whole during
deletion, as shown in Algorithm \ref{algo:influence}.
\begin{algorithm}
\caption{Approximating influence estimation for each party in horizontal FML}
\label{algo:influence} \begin{algorithmic} \Input \State number
of parties $K$, model $f$ \State instance subsets $D_{1},\dots,D_{K}$
\EndInput \Output \State Influence measure $\text{Influence}^{-D_{k}}$
for $k=1,\dots,K$ \EndOutput \ForAll{$k=1,\dots,K$} \State delete
$D_{k}$ from training dataset \State retrain model $f'$ \State
compute $\text{Influence}^{-D_{k}}=\frac{1}{n}\sum_{j}|\hat{y}_{j}-\hat{y}_{j}^{-D_{k}}|$
\EndFor \State return $\text{Influence}^{-D_{k}}$ for $k=1,\dots,K$
\end{algorithmic}
\end{algorithm}
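To make Algorithm \ref{algo:influence} concrete, a minimal single-machine sketch in Python with scikit-learn could look as follows; the \texttt{party\_idx} mapping, the use of class probabilities and the function name are our illustrative choices, not a fixed FML interface.
\begin{verbatim}
import numpy as np
from sklearn.base import clone

def party_influences(model, X, y, party_idx):
    """Approximate Influence^{-D_k} for each party by retraining
    the model with that party's instances deleted (Algorithm 1)."""
    full = clone(model).fit(X, y)
    y_full = full.predict_proba(X)[:, 1]      # baseline predictions
    influences = {}
    for k, idx in party_idx.items():
        keep = np.setdiff1d(np.arange(len(X)), idx)
        reduced = clone(model).fit(X[keep], y[keep])  # retrain without party k
        y_red = reduced.predict_proba(X)[:, 1]
        influences[k] = np.mean(np.abs(y_full - y_red))
    return influences
\end{verbatim}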
\section{Shapley Values for Vertical FML}
In this section we focus on the contribution measurement of different
parties in vertical Federated Machine Learning. In the vertical mode,
a party contributes to the FML model by sharing its features with other
parties, which means the contribution of the party can be represented
by the combined contributions of its shared features. Therefore, we
first introduce how to distribute the contributions among individual
features and then show the extension to measuring the contribution of
grouped features.
\subsection{Shapley Values for Individual Features}
Generally, we are interested in how a particular feature value influences
the model prediction. For an additive model like linear regression
\begin{equation}
f(x)=\beta_{0}+\sum_{i=1}^{n}\beta_{i}x_{i},
\end{equation}
where $\beta_{i}$ is the model coefficient and $x_{i}$ the feature
value, we can measure the influence of $X_{i}=x_{i}$ according to
the situational importance \cite{b10}
\begin{equation}
\varphi_{i}(x)=\beta_{i}x_{i}-\beta_{i}\mathbb{E}[X_{i}].
\end{equation}
The situational importance is the difference between what a feature
contributes when its value is $x_{i}$ and what it is expected to
contribute. For a more general model, which we treat as a black box,
the feature influence can be computed in a way similar
to the situational importance:
\begin{equation}
\varphi_{i}(x)=f(x_{1},\dots,x_{n})-\mathbb{E}[f(x_{1},\dots,x_{i},\dots,x_{n})],\label{eq:generalSituationImportance}
\end{equation}
which is the difference between a prediction for an instance and the
expected prediction for the same instance if the \textit{i}th feature
had not been known.
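As a toy illustration (our own example): for the linear model $f(x)=3+2x_{1}$ with $\mathbb{E}[X_{1}]=0.5$, the feature value $x_{1}=1$ has situational importance $\varphi_{1}(x)=2\cdot1-2\cdot0.5=1$, i.e. the prediction is one unit higher than expected because of this feature value.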
The Shapley value \cite{b11}, which originates from coalitional
game theory and has proven theoretical properties, provides an effective
approach to distributing contributions among features in a fair way
by assigning to each feature a number which denotes its influence
\cite{b12,b13,b14,b15}. In a coalitional game,
it is assumed that a grand coalition formed by \textit{n} players
has a certain worth and each smaller coalition has its own worth.
The goal is to ensure that each player receives his fair share, taking
into account all sub-coalitions. In our case, the Shapley value is
defined as
\begin{equation}
\phi_{i}(x)=\sum\limits _{Q\subseteq S\setminus\{i\}}\frac{|Q|!(|S|-|Q|-1)!}{|S|!}(\Delta_{Q\cup\{i\}}(x)-\Delta_{Q}(x)),\label{eq:shapleyValue}
\end{equation}
where $S=\{1,2,\dots,n\}$ is the feature index set, $Q\subseteq S\setminus\{i\}$
is a subset of features, $x$ is the vector of feature values of the
instance in consideration, and $|\cdot|$ is the size of a feature
set. $\Delta_{Q}(x)$ denotes the influence of a subset of feature
values, which generalizes \eqref{eq:generalSituationImportance},
in the following form
\begin{equation}
\Delta_{Q}(x)=\mathbb{E}[f|X_{i}=x_{i},\forall i\in Q]-\mathbb{E}[f].
\end{equation}
The Shapley value $\phi_{i}(x)$ gives a strong solution to the problem
of measuring individual feature contributions. However, computing \eqref{eq:shapleyValue}
has exponential time complexity, making the method infeasible for
practical scenarios. An approximation algorithm with Monte-Carlo sampling
was proposed in \cite{b12} to reduce the computational complexity:
\begin{equation}
\phi_{i}(x)=\frac{1}{M}\sum\limits _{m=1}^{M}\left(f(x_{+i}^{m})-f(x_{-i}^{m})\right),\label{eq:shapleyValueApprox}
\end{equation}
where $M$ is the number of iterations. $f(x_{+i}^{m})$ is the prediction
for instance $x$ with a random number of feature values replaced
by feature values from a randomly selected instance $z$, except for
the value of feature $i$ itself. The vector $x_{-i}^{m}$ is
almost identical to $x_{+i}^{m}$, except that the value $x_{i}^{m}$
is also taken from the sampled $z$. The approximation algorithm is summarized
in Algorithm \ref{algo:individualShapley}.
\begin{algorithm}
\caption{Approximating Shapley estimation for individual feature value}
\label{algo:individualShapley} \begin{algorithmic} \Input \State
number of iterations $M$, instance feature vector $x$, \State model
$f$, feature space $\mathcal{X}$, and feature index $i$ \EndInput
\Output \State Shapley value for the value of the $i$th feature
$\phi_{i}(x)$ \EndOutput \ForAll {$m=1,\dots,M$} \State select
a random instance $z\in\mathcal{X}$ \State select a random permutation
of the feature values \State construct two new instances: \State
~~ $x_{+i}=(x_{1},\dots,x_{i-1},x_{i},z_{i+1},\dots,z_{n})$ \State
~~ $x_{-i}=(x_{1},\dots,x_{i-1},z_{i},z_{i+1},\dots,z_{n})$ \State
compute marginal contribution $\phi_{i}^{m}=f(x_{+i})-f(x_{-i})$
\EndFor \State compute Shapley value $\phi_{i}(x)=\frac{1}{M}\sum_{m=1}^{M}\phi_{i}^{m}$
\end{algorithmic}
\end{algorithm}
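A minimal Python sketch of this Monte-Carlo estimate could be the following; here \texttt{f} is any black-box prediction function and \texttt{X} a background dataset, both our illustrative assumptions.
\begin{verbatim}
import numpy as np

def shapley_value(f, x, X, i, M=1000, rng=None):
    """Monte-Carlo Shapley estimate for feature i of instance x
    (Algorithm 2). f maps a 2-D array to predictions; X is the
    background dataset used for sampling."""
    rng = np.random.default_rng(rng)
    phi = 0.0
    for _ in range(M):
        z = X[rng.integers(len(X))]         # random background instance
        perm = rng.permutation(X.shape[1])  # random feature order
        pos = int(np.where(perm == i)[0][0])
        after = perm[pos + 1:]              # features after i take z's values
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[after] = z[after]
        x_minus[after] = z[after]
        x_minus[i] = z[i]                   # x_minus also replaces feature i
        phi += f(x_plus[None, :])[0] - f(x_minus[None, :])[0]
    return phi / M
\end{verbatim}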
\subsection{Shapley Values for Grouped Features}
Vertical Federated Learning raises new issues for measuring the contributions
of multiple parties, since the feature space is divided among different
parties. Directly applying methods like Shapley values to each prediction
would very likely reveal the protected feature values of the other
parties and cause privacy issues. Thus it is not trivial to develop
a safe mechanism for vertical Federated Learning and to find a balance
between contribution measurement and data privacy.
We propose a variant of the approach proposed in \cite{b13}
to use Shapley values for measuring the contributions of different parties
in vertical FML. Here we take dual-party vertical Federated Learning
as an example, while the idea can be extended to multiple parties.
For the $k$th instance, the label is $y_{k}$; one party owns
part of the features, $x^{h,k}$, and the other party owns the
remaining features, $x^{g,k}$, where $k=1,\dots,K$, as we suppose
both parties have $K$ overlapping instances with the same IDs. By
using vertical FML, the two parties collaborate to develop a machine
learning model for predicting the labels $Y$. We first give some definitions
and assumptions for this problem and then propose an approximation
algorithm to compute the Shapley group value measuring the contributions
of the different parties.
\begin{defn}
(United Federated Feature). For the vertical FML with a set of parties
$G$ and a set of features $S$, the united federated feature $x^{fed}$
is a combination feature of the features $x^{g}\in X^{g}\subset S$
for party $g\in G$.
\end{defn}
\noindent We treat a united federated feature as a single feature
since individual features of each party are private and not visible
to other parties.
\begin{defn}
(Shapley Group Value). The Shapley group value is the group value
that sums the individual Shapley values for all elements in the group.
Formally, the Shapley group value for a subset $P\subset S$ is given
by
\begin{equation}
\phi_{P}(x)=\sum\limits _{i\in P}\phi_{i}(x).\label{eq:groupShapley}
\end{equation}
\end{defn}
\noindent The Shapley group value denotes the contribution of a subset
of features.
\begin{defn}
(Shapley Group Interaction Index). The Shapley group interaction index
is the additional combined feature effect of group $P\subset G$ given
by
\begin{equation}
\varphi_{P}(x)=\sum\limits _{Q\subseteq S\setminus P}\frac{|Q|!(|S|-|Q|-1)!}{|S|!}\delta_{P}(x),
\end{equation}
where
\begin{equation}
\delta_{P}(x)=\Delta_{Q\cup P}(x)-\sum\limits _{i\in P}\Delta_{Q\cup\{i\}}(x)+(|P|-1)\Delta_{Q}(x).\label{eq:deltaInteraction}
\end{equation}
\end{defn}
\noindent The Shapley group interaction index is a variant of the
Shapley interaction index \cite{b16} which extends the definition
of the combined feature effect from two features to a group of features.
\begin{assu}\label{ass:interaction} The Shapley group interaction
index for feature set $X^{g}\subset S$ of any party in vertical FML
is zero, i.e., $\varphi_{X^{g}}(x)=0$, $\forall g\in G$. \end{assu}
\begin{assu}\label{ass:dummy} All features in the feature set $X^{g}$
of party $g$ are dummy features with $\Delta_{Q\cup\{j\}}(x)=\Delta_{Q}(x)+\Delta_{\{j\}}(x)$,
$\forall j\in X^{g}$, $\forall g\in G$ and $\forall Q\subset S$.
\end{assu}
\begin{prop}
\label{prop:groupShap} If either of Assumption \ref{ass:interaction}
and \ref{ass:dummy} holds, then the Shapley group value for a party
$g\in G$ with feature set $X^{g}$ is given by
\begin{equation}
\phi_{X^{g}}=\sum\limits _{Q\subseteq S\setminus\{j^{fed}\}}\frac{|Q|!(|S|-|Q|-1)!}{|S|!}(\Delta_{Q\cup\{j^{fed}\}}(x)-\Delta_{Q}(x)),\label{eq:guestGroupShapley}
\end{equation}
where $j^{fed}$ is the index of the united federated feature $x^{fed}$.
\end{prop}
\begin{proof} We consider the vertical FML scenario where the other
parties act collaboratively as a whole and reach an agreement on a
protocol of sharing and permutation among all their features when
computing the Shapley group value for one party. This effectively reduces
to the dual-party FML case. If Assumption \ref{ass:interaction} holds,
then
\[
\sum\limits _{i\in P}(\Delta_{Q\cup\{i\}}(x)-\Delta_{Q}(x))=\Delta_{Q\cup P}(x)-\Delta_{Q}(x).
\]
According to the definition of federated feature, we can treat $\{j^{fed}\}$
as $X^{g}$. Thus putting the above equation into \eqref{eq:shapleyValue}
and \eqref{eq:groupShapley} gives \eqref{eq:guestGroupShapley}.
If Assumption \ref{ass:dummy} holds, then \begin{IEEEeqnarray}{rCl} \Delta_{Q \cup \{j_1^g,\dots ,j_k^g\}} (x) & = & \Delta_{Q \cup \{j_1^g,\dots ,j_{k-1}^g\}} (x) + \Delta_{\{j_k^g\}} (x) \nonumber \\ & = & \dots \nonumber \\ & = & \Delta_{Q} (x) + \sum_{j \in X^g} \Delta_{\{j\}} (x) \nonumber \end{IEEEeqnarray}
The above equation together with the dummy property makes \eqref{eq:deltaInteraction}
equal to zero. Thus Assumption \ref{ass:interaction} holds. \end{proof}
Proposition \ref{prop:groupShap} indicates that we can measure the
importance of a feature subset without revealing the details of any
private feature of a party in vertical FML. Suppose we want to
measure the contribution of one party to the prediction for an instance
by looking at the Shapley group value of the feature set shared by
the party. Instead of giving out individual Shapley values for all
features in its feature space, we combine the private features into
one united federated feature and compute the Shapley value for this
federated feature together with the features of all the other parties.
Since this method requires turning certain features on and off when
computing the Shapley value, the federated feature needs
a special ID that informs the party in consideration to return its part
of the prediction with all its features turned off. For models that
accept NA values, this means the features are set to NA.
For models that cannot handle missing values, we follow the practice
of \cite{b13} and set the feature values to the median values over
all instances as reference values.
Although the two assumptions are quite strong, the experimental results
show that the approximation algorithm works well in real scenarios.
Also, as discussed above, the approximation algorithm for the dual-party
case can be extended to measuring the contributions of multiple parties
as long as an agreement on a protocol of feature sharing and permutation
is reached. In summary, the process of computing the Shapley group value
for one party is described in Algorithm \ref{algo:groupShapley}.
Repeating the estimation for all parties gives their corresponding
contribution measures.
\begin{algorithm}
\caption{Approximating Shapley estimation for federated feature for one party
in vertical FML}
\label{algo:groupShapley} \begin{algorithmic} \Input \State number
of iterations $M$, instance feature vector $x$, \State model $f$,
the federated feature index $j^{fed}$ , \State the index set of
other features $I^{h}$ , party $g$ \EndInput \Output \State Shapley
group value $\phi_{j^{fed}}$ for $j^{fed}$ \EndOutput \ForAll
{$m=1,\dots,M$} \State select a subset $Q\subseteq I^{h}\cup\{j^{fed}\}$
\State construct new instance $x'$: \State ~~ Set $x'_{k}=x_{k}$
for $k\in Q$ \State ~~ Set $x'_{k}$ to reference value for $k\notin Q$
\If{$j^{fed}\in Q$} \State Send encrypted ID of $x$ to party
$g$ \State Set $x'_{j^{fed}}=x_{j^{fed}}$ \Else \State Send special
ID to party $g$ \State Set $x'_{j^{fed}}$ to reference value \EndIf
\State Run federated model prediction for $x'$ \State Save prediction
result of $Q$ \EndFor \State compute $\phi_{j^{fed}}$ using Algorithm
\ref{algo:individualShapley} \State return Shapley group value $\phi_{j^{fed}}$
\end{algorithmic}
\end{algorithm}
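A local, single-machine sketch of Algorithm \ref{algo:groupShapley} is given below. It follows the permutation-sampling style of Algorithm \ref{algo:individualShapley}, using values from a sampled instance $z$ as reference values instead of the medians, and it simulates the encrypted ID exchange with the other party by direct array access; both simplifications are ours.
\begin{verbatim}
import numpy as np

def federated_shapley(f, x, X, fed_cols, M=1000, rng=None):
    """Shapley group value of the united federated feature formed by
    columns fed_cols (Algorithm 3, all parties simulated locally)."""
    rng = np.random.default_rng(rng)
    fed_cols = np.asarray(fed_cols)
    host_cols = np.setdiff1d(np.arange(X.shape[1]), fed_cols)
    units = np.append(host_cols, -1)        # -1 marks the federated feature
    phi = 0.0
    for _ in range(M):
        z = X[rng.integers(len(X))]
        order = rng.permutation(units)
        pos = int(np.where(order == -1)[0][0])
        after = order[pos + 1:]             # host features "after" the unit
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[after] = z[after]
        x_minus[after] = z[after]
        x_minus[fed_cols] = z[fed_cols]     # federated feature turned "off"
        phi += f(x_plus[None, :])[0] - f(x_minus[None, :])[0]
    return phi / M
\end{verbatim}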
\section{Experiment}
We implemented the algorithms for calculating multi-party contributions
derived in Sections 3 and 4. In this section, we test our algorithms
by training a machine learning model on the Cervical cancer (Risk
Factors) Data Set \cite{b9} and calculating
the participant contributions in both horizontal and vertical FML setups.
The Cervical cancer dataset is used to predict whether an individual
female will get cervical cancer, given indicators and risk factors
as the features of the dataset, shown in Table~\ref{tab1}.
We normalized the data and used Scikit-learn to train an SVM (Support
Vector Machine) model for the cervical cancer classification task.
\begin{table}[htbp]
\caption{Cervical cancer (Risk Factors) Data Set Attribute Information}
\centering{}%
\begin{tabular}{|c|c|}
\hline
\textbf{Attribute} & \multicolumn{1}{c|}{\textbf{Type}}\tabularnewline
\hline
Age & int\tabularnewline
\hline
Number of sexual partners & int\tabularnewline
\hline
First sexual intercourse (age) & int\tabularnewline
\hline
Num of pregnancies & int\tabularnewline
\hline
Smokes & bool\tabularnewline
\hline
Smokes (years) & int\tabularnewline
\hline
Hormonal Contraceptives & bool\tabularnewline
\hline
Hormonal Contraceptives (years) & int\tabularnewline
\hline
IUD & bool\tabularnewline
\hline
IUD (years) & int\tabularnewline
\hline
STDs & bool\tabularnewline
\hline
STDs (number) & int\tabularnewline
\hline
STDs: Number of diagnosis & int\tabularnewline
\hline
STDs: Time since first diagnosis & int\tabularnewline
\hline
STDs: Time since last diagnosis & int\tabularnewline
\hline
Biopsy$^{\mathrm{a}}$ & bool\tabularnewline
\hline
\multicolumn{2}{l}{$^{\mathrm{a}}$Target Variable}\tabularnewline
\end{tabular}\label{tab1}
\end{table}
The contribution of FML participants can be measured in two settings:
the horizontal FML setup, where we use the deletion method to indicate
grouped instance importance, and the vertical FML setup, where the Shapley
value is used to evaluate the importance of features from different
parties. For purposes of illustration, we consider the two setups
separately, in order to show that for both horizontal FML and
vertical FML our proposed methods can give a reasonable account
of the contributions of multiple participants.
\subsection{Deletion Method (Horizontal FML)}
As explained in Section 3, the deletion method can be used to evaluate
the importance of a single instance and then generalized to the scenario
where different groups of instances come from different parties. The
experiments are performed on the Cervical dataset with ``Biopsy''
as the target variable. For simplicity, we only consider the binary classification
problem, where Biopsy takes the values Health and Cancer. Since
the deletion method is defined for the training process, without loss
of generality we do not split the Cervical dataset in this experiment,
so the training set contains all instances of the entire
dataset. We use SVM as the classification algorithm, with the output
set to 'probability' and an RBF (Radial Basis Function) kernel
with coefficient 1/(number of features). In order to simulate
the participant contributions, we evenly split the dataset by
number of instances, so that in our experiment we conceptually build
up a horizontal Federated Learning ecosystem with five players. We
used the deletion method to calculate the contributions of these five participants,
and the results are shown in Fig.~\ref{fig}.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.4]{fig1}} \caption{The importance of instance groups in the Cervical data. We simulated five
parties, each with the same number of training instances. The
vertical axis shows the horizontal FML instance-group importance
value.}
\label{fig}
\end{figure}
\subsection{Shapley Value (Vertical FML)}
We also performed experiments to calculate the Shapley value for feature
importance, where we simulate the vertical FML ecosystem and each
participant shares a certain part of the feature space. The experiments
are performed on the same Cervical cancer dataset as in
the previous subsection. We randomly shuffled the data and used 70\%
of the instances for training and 30\% for testing. On the test data the
accuracy reaches 95.42\%. The algorithm setup is exactly the same
as in the previous subsection. In order to avoid inconsistencies
due to hyperparameter choices, the random state for
splitting and shuffling the dataset is set to the same random seed.
As a first demonstration, we pick one specific instance from the
training data, run the prediction and test our federated Shapley algorithm
for feature importance. The result can be seen in Fig.~\ref{fig2}.
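Reusing the \texttt{shapley\_value} sketch from Section 4, this demonstration can be reproduced schematically as follows (with \texttt{svc}, \texttt{X\_train} and \texttt{X\_test} assumed to come from the split above):
\begin{verbatim}
# Shapley values for a single test instance (illustrative sketch).
f = lambda A: svc.predict_proba(A)[:, 1]
x = X_test[0]
phis = [shapley_value(f, x, X_train, i, M=1000)
        for i in range(X_train.shape[1])]
\end{verbatim}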
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.5]{fig2}} \caption{The Shapley values for predicting one instance in the Cervical cancer dataset.
This demonstrates that Shapley values can give a reasonable
explanation of feature importance for each prediction.}
\label{fig2}
\end{figure}
Following this demonstration, we then consider the vertical FML ecosystem.
We first calculated the Shapley values for the whole feature
space, as shown in Fig.~\ref{fig3}, which directly reflect the
importance of the different features, as the standard Shapley value indicates.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.6]{fig3-a}}\centerline{ \includegraphics[scale=0.6]{fig4-a}}\caption{Feature importance (Shapley values) for 855 instances over the whole
feature space. The upper panel is a scatter plot over all predictions;
the lower panel is a bar plot of each feature's total contribution
over all predictions.}
\label{fig3}
\end{figure}
We then simulated vertical FML with multiple participants, where
we evenly separate the 15 features into 5 groups and each group represents
a single participant with 3 features. Each time we group the features
from one party together as the federated feature and run the Shapley
value algorithm to calculate the feature importance of this single
federated feature together with the individual features of
the other participants. The simulation results are shown in Fig.~\ref{fig5}
and Fig.~\ref{fig6}. Another option is to use federated
features for all participants simultaneously and calculate the Shapley
values in one go; we expect this to give less accurate results.
Our experiments indicate that in the multi-party vertical FML setup,
the federated Shapley value is a good quantity for indicating the contribution
of each participant.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.4]{fig5-a}} \centerline{\includegraphics[scale=0.4]{fig5-b}}
\centerline{\includegraphics[scale=0.4]{fig5-c}} \centerline{\includegraphics[scale=0.4]{fig5-d}}
\centerline{\includegraphics[scale=0.4]{fig5-e}} \caption{Scatter plots of feature importance (Shapley values) for 855 instances.
We considered different federated groups of features; each combined
feature has a different impact on the feature importance. We
evenly separated the 15 features into 5 groups, each with
3 features.}
\label{fig5}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.4]{fig6-a}} \centerline{\includegraphics[scale=0.4]{fig6-b}}
\centerline{\includegraphics[scale=0.4]{fig6-c}} \centerline{\includegraphics[scale=0.4]{fig6-d}}
\centerline{\includegraphics[scale=0.4]{fig6-e}} \caption{Bar plots of average feature importance (Shapley values) for 855 instances.
We considered different federated groups of features; each combined
feature has a different impact on the feature importance. We
evenly separated the 15 features into 5 groups, each with
3 features.}
\label{fig6}
\end{figure}
\section{Conclusion}
Fairly calculating the contribution of each participant in Federated
Machine Learning is crucial for credit and reward allocation. In
this paper, we proposed methods that calculate participant contributions
for both horizontal FML and vertical FML by using grouped instance deletion
and grouped Shapley values. Our experimental results indicate that the
methods are effective and can give fair and reliable contribution measurements
for FML participants without disclosing data or breaking the
initial intent of preserving data privacy.
Our contribution measurement for FML participants is model
agnostic, meaning that it should work for almost any kind of machine
learning algorithm and can become a general framework for this task.
We expect our work can be built into FML toolsets like FATE and TFF
and become the start of a standard model-contribution measurement
module for Federated Learning, which is critical for industrial applications.
For future work, we expect more advanced algorithms such as influence
functions \cite{b7} for horizontal FML and sampling versions
of Shapley value computation for vertical FML. These algorithms will
help obtain accurate and fair contribution measurements with
higher computational efficiency.
\section{Introduction}
Our understanding of the phase structure of QCD at high baryon density
and low temperature remains severely hampered by the sign problem. In
the absence of first-principles methods which have been proven to
circumvent this problem, we can study a related theory, QCD with
colour group SU(2) (QC$_2$D), which does not suffer from the sign
problem. This may firstly allow us to confront model studies with
lattice results, thereby constraining these models in their
application to real QCD, and secondly reveal generic features of the
phase structure of strongly interacting gauge theories, including the
nature of deconfinement at high density.
Here we present an update of our ongoing investigation of the phase
structure of QC$_2$D as a function of temperature and chemical
potential \cite{Cotter:2012mb,Hands:2012yy}.
\section{Simulation details}
We study two-colour QCD with a conventional Wilson action for the
gauge fields and two flavours of unimproved Wilson fermion. The
fermion action is augmented by a gauge- and iso-singlet diquark source
term which serves the dual purpose of lifting the low-lying
eigenvalues of the Dirac operator and allowing a controlled study of
diquark condensation. Further details about the action and the Hybrid
Monte Carlo algorithm used can be found in \cite{Hands:2006ve}.
We have performed simulations at $\beta=1.9, \kappa=0.168$,
corresponding to a lattice spacing $a=0.178$fm, determined from the
string tension, and a pion mass
$am_\pi=0.645$ or $m_\pi\approx710$MeV. The ratio of the ground state
pseudoscalar to vector masses is $m_\pi/m_\rho=0.80$ \cite{Hands:2010gd}.
Our lattice volumes and the corresponding values for temperatures $T$,
chemical potentials $\mu$ and diquark sources $j$ are given in
table~\ref{tab:params}.
All results shown are extrapolated to $j=0$ using a linear Ansatz
except where otherwise stated.
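For illustration, such an extrapolation amounts to a least-squares fit of a straight line in $j$ and reading off the intercept; the sketch below uses made-up numbers, not our data.
\begin{verbatim}
import numpy as np

# Linear Ansatz O(j) = O(0) + c*j; the intercept of a least-squares
# fit gives the j -> 0 value. Numbers are illustrative only.
j   = np.array([0.02, 0.03, 0.04])     # diquark sources, 12^3 x 24
obs = np.array([0.115, 0.131, 0.148])  # measured observable at each j
c, O0 = np.polyfit(j, obs, 1)          # slope, j=0 intercept
\end{verbatim}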
\begin{table}[thb]
\begin{center}
\begin{tabular}{|rr|rrr|}
\hline
$N_s$ & $N_\tau$ & $T$ (MeV) & $\mu a$ & $ja$ \\ \hline
16 & 24 & 47 & 0.3--0.9 & 0.04 \\
12 & 24 & 47 & 0.25--1.1 & 0.02, 0.03, 0.04 \\
12 & 16 & 70 & 0.3--0.9 & 0.04 \\
16 & 12 & 94 & 0.2--0.9 & 0.02, 0.04 \\
16 & 8 & 141 & 0.1--0.9 & 0.02, 0.04 \\ \hline
\end{tabular}
\caption{Lattice volumes and associated temperatures $T$, chemical
potentials $\mu$ and diquark sources $j$.}
\label{tab:params}
\end{center}
\end{table}
\section{Order parameters and phase structure}
\label{sec:phases}
\begin{figure}[tb]
\includegraphics*[width=\colw]{qq_j0.eps}
\includegraphics*[width=\colw]{polyakov.eps}
\caption{Left: The diquark condensate $\langle qq\rangle/\mu^2$
extrapolated to $j=0$ for $N_\tau=24, 12, 8$ ($T=47,94,141$ MeV).
Right: The renormalised Polyakov loop as a function of chemical
potential, for all temperatures. The shaded symbols are for
$ja=0.04$; the open symbols are $ja=0.02$. The filled black circles
are the results for the $16^3\times24$ lattice. The dashed line
indicates the inflection point of $L$ at $\mu=0$. The inset shows the
unrenormalised Polyakov loop.}
\label{fig:orderparams}
\end{figure}
The left panel of fig.~\ref{fig:orderparams} shows the diquark condensate
$\bra qq\ket = \langle\psi^{2tr}C\gamma_5\tau_2\psi^1
-\bar\psi^1C\gamma_5\tau_2\bar\psi^{2tr}\rangle$
as a function of chemical potential, for the $N_\tau=24, 12$ and 8
lattices. In the case of a weakly-coupled BCS condensate at the Fermi
surface, the diquark condensate, which is the number density of Cooper
pairs, should be proportional to the area of the Fermi surface, i.e.,
$\bra qq\ket\sim\mu^2$.
For the lowest temperature, $T=47$ MeV ($N_\tau=24$), we see that
$\bra qq\ket/\mu^2$ has a plateau in the region $0.35\lesssim\mu a\lesssim0.6$. The
increase for $\mu a\gtrsim0.6$ may be evidence of a transition to a
new state of matter at high density, although the impact of lattice
artefacts cannot be excluded. The
lower limit of the plateau roughly coincides with the onset chemical
potential $\mu_o\approx m_\pi/2\approx0.33a^{-1}$, below which both
the quark number density and diquark condensate are expected to be
zero. We find no substantial volume
dependence at any $\mu$. Our results at $T=70$ MeV (not shown here)
are almost identical to those at $T=47$ MeV.
At $T=94$ MeV ($N_\tau=12$), $\bra qq\ket$ is significantly suppressed, and
drops dramatically for $\mu a\gtrsim0.7$. At $T=141$ MeV ($N_\tau=8$)
the diquark condensate is zero at all $\mu$, confirming that the
system is in the normal phase at this temperature.
In the right panel of fig.~\ref{fig:orderparams} we show the order
parameter for deconfinement,
the Polyakov loop $L$, for our four temperatures. It
has been renormalised by requiring $L(Ta=\frac{1}{4},\mu=0)=1$, see
\cite{Cotter:2012mb}\ for details.
We see that for each temperature $T$, $L$ increases rapidly
from zero above a chemical potential $\mu_d(T)$ which we may identify
with the chemical potential for deconfinement. In the absence of a
more rigorous criterion, we have taken the point where $L$ crosses the
value it takes at $T_d(\mu=0)$, $L_d=0.6$ \cite{Cotter:2012mb}, to define $\mu_d(T)$. The
results are shown in fig.~\ref{fig:phases}, with error bars denoting
the range $L_d=$0.5--0.7. To more accurately locate the deconfinement
line, we will need to perform a temperature scan for fixed
$\mu$-values. This is underway.
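For concreteness, a minimal sketch of this crossing criterion is given below; it assumes hypothetical renormalised $L(\mu)$ data at fixed $T$ that start below $L_d$, and uses linear interpolation between adjacent $\mu$-points.

```python
import numpy as np

# Hypothetical renormalised Polyakov loop L(mu) at fixed T (illustrative)
mu_a = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
L    = np.array([0.10, 0.15, 0.30, 0.55, 0.90, 1.30, 1.80])

def crossing(mu, L, L_d):
    """Linearly interpolate the first point where L rises through L_d."""
    i = np.argmax(L >= L_d)                 # first index with L >= L_d
    t = (L_d - L[i - 1]) / (L[i] - L[i - 1])
    return mu[i - 1] + t * (mu[i] - mu[i - 1])

mu_d = crossing(mu_a, L, 0.6)
# Uncertainty band from the range L_d = 0.5--0.7 used in the text
lo, hi = crossing(mu_a, L, 0.5), crossing(mu_a, L, 0.7)
print(f"mu_d a = {mu_d:.3f}  (range {lo:.3f}--{hi:.3f})")
```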
The estimates of critical chemical potentials for deconfinement and
superfluidity can be translated into a tentative phase diagram, shown in
fig.~\ref{fig:phases}.
In summary, from the order parameters we find signatures of three
different regions (or phases): a normal (hadronic) phase with
$\bra qq\ket=0,\braket{L}\approx0$; a BCS (quarkyonic) region with
$\bra qq\ket\sim\mu^2$ at low $T$ and intermediate to large $\mu$; and a
deconfined, normal phase with $\bra qq\ket=0,\braket{L}\neq0$ at large $T$
and/or $\mu$.
After extrapolating our results to $j=0$ we see no
evidence of a BEC region described by $\chi$PT, with
$\bra qq\ket\sim\sqrt{1-\mu_o^4/\mu^4}$ \cite{Kogut:2000ek}, in contrast with
earlier work with staggered lattice fermions \cite{Hands:2000ei}.
This may be related to the large value of $m_\pi/m_\rho$ in this
study. Simulations with lighter quarks may yield further insight into this.
\begin{figure}[tb]
\includegraphics*[width=\colw]{phases_su2.eps}
\includegraphics*[width=\colw]{potential.eps}
\caption{Left: A tentative phase diagram, including the location of the
deconfinement transition in the $(\mu,T)$ plane, determined from the
renormalised Polyakov loop, and the transition to the diquark condensed $\langle
qq\rangle\neq0$ phase. Right: The static quark potential computed from the
Wilson loop, for the $12^3\times24$ lattice and different values of $\mu$.}
\label{fig:phases}
\end{figure}
In the right panel of fig.~\ref{fig:phases} we show the static quark
potential computed from the Wilson loop at $N_\tau=24$, for $\mu a=0.3, 0.5, 0.7,
0.9$. We find that as we enter the superfluid region, the string
tension is slightly reduced, but that this is reversed as $\mu$ is
increased further, leading to a strongly enhanced string tension at
$\mu a=0.9$, which according to our analysis of the Polyakov loops
should be in the deconfined region. This agrees with the pattern that was already observed
in \cite{Hands:2006ve}. We also find no significant $j$-dependence in our
results. At present we do not have a good understanding of why the
static quark potential should become antiscreened at large $\mu$.
Computing the static quark potential using Polyakov loop correlators
rather than Wilson loops may yield further insight into this issue.
\section{Equation of state}
\label{sec:eos}
\begin{figure}[tb]
\includegraphics*[width=\colw]{nq_lat.eps}
\includegraphics*[width=\colw]{nq_cont.eps}
\caption{The quark number density divided by the density for a
noninteracting gas of lattice quarks (left) and continuum quarks
(right).}
\label{fig:density}
\end{figure}
We now turn to the bulk thermodynamics of the system, and in
particular the quark number $n_q$ and the energy density $\varepsilon$.
Fig.~\ref{fig:density} shows the quark number density $n_q$ for
$N_\tau=24, 12$ and 8, extrapolated to zero diquark source, and
normalised by the noninteracting value for lattice fermions on the
left and for continuum fermions on the right. The difference between
the two gives an indication of the lattice artefacts. We see that the
density rises from zero at $\mu\approx\mu_o=0.32a^{-1}$, and for the
two lower temperatures is roughly constant and approximately equal to
the noninteracting fermion density in the region $0.4\lesssim\mu
a\lesssim0.7$. The peak at $\mu a\simeq0.4$ in the $N_\tau=24$ data
in the left panel is an artefact of the normalisation with $n_{SB}$
for a finite lattice volume: the raw numbers for the
$12^3\times24$ and $16^3\times24$ lattices are identical within
errors, but $n_{SB}$ differs by about 50\% around $\mu a=0.4$.
The density for $N_\tau=8$ does not show any plateau as a function of
$\mu$; instead, $n_q/n_{SB}$ shows a roughly linear increase in the
region $0.4\leq\mu a\leq0.7$. This is suggestive of the system being
in a different phase at this temperature. We also note that
$n_q/n_{SB}$ for $N_\tau=12$ rises above the corresponding $N_\tau=24$
data for $\mu a\gtrsim0.7$, where, according to the results of
Sec.~\ref{sec:phases}, the hotter system is entering the deconfined,
normal phase.
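For reference, the continuum normalisation used in the right panel of fig.~\ref{fig:density} can be sketched as follows; the free \emph{lattice} density requires a mode sum over the free Wilson propagator and is not reproduced here. We assume the standard free-gas expression for massless quarks, $n_{SB}=N_fN_c\,(\mu T^2/3+\mu^3/3\pi^2)$.

```python
import numpy as np

def n_SB_continuum(mu, T, Nc=2, Nf=2):
    """Number density of a free gas of massless continuum quarks:
    n = Nf*Nc*(mu*T**2/3 + mu**3/(3*pi**2))  (natural units)."""
    return Nf * Nc * (mu * T**2 / 3.0 + mu**3 / (3.0 * np.pi**2))

a_inv = 197.3 / 0.178          # inverse lattice spacing in MeV (a = 0.178 fm)
T = a_inv / 24                 # N_tau = 24 lattice: T = 1/(N_tau a) ~ 46 MeV
for mu_a in (0.4, 0.6, 0.8):
    mu = mu_a * a_inv
    print(f"mu a = {mu_a}: n_SB = {n_SB_continuum(mu, T):.3e} MeV^3")
```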
These results lend further support to our previous conjecture that in
the intermediate-density, low-temperature region the system is in a
``quarkyonic'' phase: a confined phase (all excitations are
colourless) that can be described by quark degrees of freedom.
The renormalised energy density can be derived by going to an
anisotropic lattice formulation with bare anisotropies
$\gamma_g=\sqrt{\beta_t/\beta_s}, \gamma_q=\kappa_t/\kappa_s$ and
physical anisotropy $\xi=a_s/a_\tau$.
In the isotropic limit $\gamma_q=\gamma_g=\xi=1$
the energy density is then
given by $\varepsilon=\varepsilon_g+\varepsilon_q$ with
\begin{align}
\varepsilon_g &= \frac{3}{2a^4}\biggl[
\langle\Re\operatorname{Tr} U_{ij}\rangle
\left(\del{\beta}{\xi}-\beta\del{\gamma_g}{\xi}\right)
+ \langle\Re\operatorname{Tr} U_{i0}\rangle
\left(\del{\beta}{\xi}+\beta\del{\gamma_g}{\xi}\right)\biggr]\,,
\label{eq:epsG} \\
\varepsilon_q &= \frac{1}{a^4}\bigg[
\kappa^{-1}\del{\kappa}{\xi}\big(16+\bra\psibar\psi\ket\big)
- \kappa\del{\gamma_q}{\xi}\langle\overline{\psi} D_0\psi\rangle\bigg]\,.
\label{eq:epsQ}
\end{align}
We have determined the Karsch coefficients $\dell{c_i}{\xi}$ with
$c_i=\gamma_g,\gamma_q,\beta,\kappa$ by performing simulations with
$\gamma_q, \gamma_g\neq1$. Our estimates for these coefficients are
\cite{Cotter:2012mb}\
\begin{equation}
\del{\gamma_g}{\xi} = 0.90\err{4}{14},\quad
\del{\gamma_q}{\xi} = 0.13\err{40}{5},\quad
\del{\beta}{\xi} = 0.59\err{0.24}{1.37},\quad
\del{\kappa}{\xi} = -0.052\err{69}{15}\,.
\end{equation}
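As an illustration of how eqs.~(\ref{eq:epsG})--(\ref{eq:epsQ}) are evaluated in the isotropic limit, the following sketch combines the Karsch coefficients above with \emph{hypothetical} expectation values; vacuum subtraction and renormalisation are omitted, so the numbers are not physical.

```python
# Evaluate a^4*eps_g and a^4*eps_q from eqs. (3.1)-(3.2); all expectation
# values below are illustrative placeholders, not measured data.
beta, kappa = 1.9, 0.168
dgg, dgq = 0.90, 0.13          # d(gamma_g)/d(xi), d(gamma_q)/d(xi)
dbeta, dkappa = 0.59, -0.052   # d(beta)/d(xi),    d(kappa)/d(xi)

plaq_s, plaq_t = 1.52, 1.55    # <Re Tr U_ij>, <Re Tr U_i0>   (illustrative)
pbp, psiD0psi = 7.9, -0.31     # <psibar psi>, <psibar D_0 psi> (illustrative)

a4_eps_g = 1.5 * (plaq_s * (dbeta - beta * dgg)
                  + plaq_t * (dbeta + beta * dgg))
a4_eps_q = dkappa / kappa * (16.0 + pbp) - kappa * dgq * psiD0psi
print(f"a^4 eps_g = {a4_eps_g:.3f}, a^4 eps_q = {a4_eps_q:.3f}, "
      f"total = {a4_eps_g + a4_eps_q:.3f}")
```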
\begin{figure}[tb]
\includegraphics*[width=\textwidth]{energy_tot.eps}
\caption{The quark and gluon contributions to the energy density
(left) and total energy density (right), divided by $\mu^4$, for
$ja=0.04$ (open symbols) and $j=0$ (filled symbols).}
\label{fig:energy}
\end{figure}
Our results for the energy density are shown in
fig.~\ref{fig:energy}. We see that the quark contribution is negative
for all values of $\mu$ and $T$, but this is balanced by the positive
gluon contribution, giving a positive or zero total energy.
The energy density is very sensitive to the values of the Karsch
coefficients \cite{Cotter:2012mb}; for example, if $\dell{\gamma_q}{\xi}$ is
changed from the surprisingly low value of 0.13 to a more `natural'
value of 0.8, we find that $\varepsilon_q>0$ for $\mu a\gtrsim 0.6$.
\section{Heavy quarkonium}
\label{sec:heavy}
\begin{figure}
\begin{center}
\includegraphics*[width=\colw]{1S0-allT_jk_m50.eps}
\includegraphics*[width=\colw]{1P0subratiom50.eps}
\end{center}
\caption{Left: Temperature dependence of the $^1S_0$ state energy
vs. $\mu$ for $Ma = 5.0$ with $j = 0.04$. Right: The ratio
$\sum_{\mathbf x} G({\mathbf x},\tau;\mu)/\sum_{\mathbf x} G({\mathbf x},\tau;0)$ for $^1P_0$
correlators on $12^3 \times 24$ with $Ma = 5.0$. Due to the
noisiness of the P-wave data, only a limited $\tau$ range is shown.}
\label{fig:quarkonium}
\end{figure}
We have investigated the heavy quarkonium spectrum by computing
non-relativistic QC$_2$D correlators on our $N_\tau=24$, 16 and 12
lattices. We use an ${\cal O}(v^4)$ lattice NRQCD lagrangian
\cite{Bodwin:1994jh} to compute the heavy quark Green function; see
\cite{Hands:2012yy} for further details. We find that the S-wave
correlators can be fitted with an exponential
decay $\propto e^{-\Delta E_n\tau}$
even when $\mu\neq0$; moreover, the fits are quite stable over large
ranges of $\tau$, indicating that $S$-wave bound states persist
throughout the region 47 MeV $\lesssim T\lesssim$ 90 MeV.
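A minimal sketch of such a single-exponential fit is shown below, on synthetic correlator data and using a generic least-squares routine rather than our actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic S-wave correlator G(tau) ~ A*exp(-dE*tau) with 1% noise
tau = np.arange(4, 20, dtype=float)
A_true, dE_true = 2.3, 0.45
G = A_true * np.exp(-dE_true * tau) * (1.0 + 0.01 * rng.standard_normal(tau.size))
sigma = 0.01 * G

def model(t, A, dE):
    return A * np.exp(-dE * t)

popt, pcov = curve_fit(model, tau, G, p0=(1.0, 0.5),
                       sigma=sigma, absolute_sigma=True)
print(f"Delta E a = {popt[1]:.4f} +/- {np.sqrt(pcov[1, 1]):.4f}")
```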
Fig.~\ref{fig:quarkonium} shows the $T$- and $\mu$-dependences of the
$^1S_0$ state energy $\Delta E$.
We see that as $\mu$ is varied, initially the $^1S_0$ state energy
decreases from that at $\mu = 0$, but once $\mu$ reaches the region
$\mu_1 (\simeq 0.5 )\le \mu a \le \mu_2 (\simeq 0.85)$, the $^1S_0$
state energy stays roughly constant. For $\mu > \mu_2$, the $^1S_0$
state energy starts increasing again.
In contrast to the observables studied in Secs~\ref{sec:phases} and
\ref{sec:eos}, we find no clear, systematic dependence on the diquark
source term for $\mu a\lesssim0.5$. For $\mu a\gtrsim0.5$ on the
other hand, $\Delta E(ja=0.02)<\Delta E(ja=0.04)$. This suggests that
the energy, extrapolated to $j=0$, may continue to decrease up to $\mu
a\approx0.7$ before increasing.
As the temperature increases from 47 MeV ($N_\tau=24$) to 70 MeV
($N_\tau=16$) we find that the point where the energy of the $^1S_0$
state starts increasing goes from $\mu a\approx0.7$ to 0.55. This is
consistent with the estimate of the deconfinement transition in
Sec.~\ref{sec:phases}. For $N_\tau=12$ we do not yet have any data in
the $\mu$-region which might confirm this. It is interesting to note
that $\Delta E$ increases with increasing $T$, in accordance with
what has been observed in hot QCD with
$\mu=0$~\cite{Aarts:2011sm}.
In contrast to the $S$-waves, it is difficult to find stable
exponential fits to the $P$-wave correlators with the current
Monte-Carlo data before statistical noise sets in, except for the case
$\mu a \le 0.25$. In the right panel of fig.~\ref{fig:quarkonium} we
instead show the ratios of the
$^1P_0$ correlators at different values $\mu\not=0$
to the correlator at $\mu=0$. Note that any effect we observe
is entirely due to the dense medium.
The S-wave correlator ratios show an increase with $\tau$ which
corresponds to the
negative $^1S_0$ energy difference $\Delta E(\mu) - \Delta E(\mu=0)$
that was previously observed. In the quarkyonic region, the
$P$-wave ratios behave similarly to the $S$-wave, but in the
deconfined region ($\mu \ge \mu_2$), the $P$-wave ratios
are non-monotonic, initially decreasing with $\tau$ before turning to
rise above unity for $\tau/a\sim4$. On the other hand, the
P-wave correlator ratios on the $12^3 \times 16$ and $16^3 \times 12$
lattices show monotonic behavior similar to that of the S-waves,
suggesting a subtle interplay of density and temperature effects
on the P-wave states.
\section{Summary and outlook}
From lattice simulations of dense QC$_2$D at a range of temperatures,
we have identified three distinct regions of the phase diagram: a
hadron gas at low $\mu$ and $T$, a quarkyonic region at intermediate
$\mu$ and low to intermediate $T$, and a deconfined quark--gluon
plasma at high $T$ and/or $\mu$. Taking the limit of zero diquark
source has served to make our identification of the quarkyonic region
more robust. Investigations into the exact nature and location of the
deconfinement and the superfluid to normal transitions are underway,
as are simulations at smaller lattice spacings and with smaller quark
masses.
\section*{Acknowledgments}
This work is carried out as part of the UKQCD collaboration and the
DiRAC Facility jointly funded by STFC, the Large Facilities Capital
Fund of BIS and Swansea University. We thank the DEISA Consortium
(www.deisa.eu), funded through the EU FP7 project RI-222919, for
support within the DEISA Extreme Computing Initiative, and the USQCD
for use of computing resources at Fermilab. JIS and SC
acknowledge the support of Science Foundation Ireland grants
08-RFP-PHY1462, 11-RFP.1-PHY3193 and 11-RFP.1-PHY3193-STTF-1.
SK is grateful to STFC
for a Visiting Researcher Grant and is supported by the National Research
Foundation of Korea grant funded by the Korea government (MEST) No.\
2011-0026688. DM acknowledges support from the U.S. Department of
Energy under contract no. DE-FG02-85ER40237.
JIS acknowledges the support and hospitality of the Institute for
Nuclear Theory at the University of Washington, where part of this
work was carried out.
\section{Introduction}
The atomic nucleus is a fascinating quantum-many body system which shows a rich variety of shapes and structures \cite{BM75}. Major advances in experimental techniques have facilitated these studies of atomic nuclei at extremes of isospin, angular-momentum and excitation energy. These investigations have revealed new structures and phenomena, hitherto, unknown in nuclear physics. In nuclear high-spin spectroscopy, band-structures have been observed up to high angular momentum in some of the nuclei and
investigations of these high-spin states probe the predicted modifications of the shell structure and pairing properties with increasing rotational frequency. In particular, nuclei in the mass range 60 $\leq$ A $\leq$ 70 display a wide range of phenomena, for instance, co-existence of oblate and prolate shapes, shape changes and dramatic variations in band crossing properties have been observed with particle number.
In most deformed nuclei, the ground-state band is crossed by a two-quasiparticle aligned structure, resulting in the well-established phenomenon of backbending \cite{EG73}. After the band crossing, the yrast band consists of the two-quasiparticle aligned state and the ground-state configuration becomes the excited band. In several nuclei this band, referred to as the yrare band, is observed up to high spins. Further, in some nuclei, the forking of the ground-state band into two two-quasiparticle structures has also been observed. For example, in even-even Xe-Ba-Ce nuclei with N=66-76, the ground-state band forks into two distinct band structures based on h$_{11/2}$ two-quasiparticle configurations \cite{wyss}. Most of these observed bands after forking have been interpreted as two-neutron and two-proton quasiparticle structures which align almost simultaneously. The forking in these axially-symmetric nuclei has been explained \cite{forking} as resulting from the repulsive nature of the neutron-proton interaction in the high-j intruder orbital $h_{11/2}$ for the particle-hole configuration. In the present work, we report a forking of the ground-state band in $^{70}$Ge, which is shown to arise from a $\gamma$-band built on a two-quasiparticle configuration.
Well-developed $\gamma$-bands are known to exist close to the ground state in many transitional nuclei and have been investigated using various phenomenological models \cite{VG81,JL88}. In the framework of the microscopic triaxial projected shell model (TPSM) approach \cite{JS99}, these $\gamma$-bands result from projection of the K=2 state of the triaxial self-conjugate vacuum configuration. This state is a superposition of K=0, 2, 4, ... configurations, with the K=0 projected state corresponding to the ground-state band. The projections from K=2 and 4 correspond to the $\gamma$- and $\gamma\gamma$-bands, respectively \cite{JS14,JS09}. It has been demonstrated in several studies that the TPSM approach provides an excellent description of the observed $\gamma$-bands in several mass regions \cite{JS14,JS09}. It also follows from this description that not only the ground-state band, but also quasiparticle excited configurations should have associated $\gamma$-bands built on them. The existence of a $\gamma$-band on each intrinsic state was predicted by Bohr and Mottelson quite some time back \cite{BM75}.
Low-spin states of $^{70}$Ge were previously investigated through the $(p,p^\prime$), $(n, n^\prime \gamma$), (p,t) and ($^{3}He, d)$ reactions \cite{previous,previous1,pre2,previous3}. These studies reported the level structure of $^{70}$Ge up to 5.1 MeV excitation energy. Later, high-spin states were studied by two groups \cite{budda, sugawara}, which identified the ground-state positive-parity band up to the J$^{\pi}$ = (12$^+$) state. In this article, we present the experimental observation of a $\gamma$-band structure built on a two-quasiparticle configuration in $^{70}$Ge. Experimental details and relevant results are described in Sec.~\ref{exp.details}. Deduced band structures are discussed in Sec.~\ref{discussion} using the cranked Hartree-Fock-Bogoliubov model (CHFBM) and triaxial projected shell model (TPSM) approaches. A brief summary is presented in Sec.~\ref{summary}.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.62, angle=0]{70Ge-2016.eps}
\caption{Partial level scheme of $^{70}$Ge obtained from the present work. Bands are labeled as B1, B2 and B3 for reference in the text.}\label{fig:level-scheme}
\end{center}
\end{figure}
\begin{figure}
\includegraphics[width=8.0cm, height=5.6cm, trim = 1cm 0cm 1cm 0cm ]{70ge_b1.eps}
\caption{\label{band1} A $\gamma$-$\gamma$ coincidence spectrum with a gate on 906-keV $\gamma$-ray illustrating transitions in band B1. Inset shows transitions in B1, which are in coincidence with both 1051- and 1474-keV $\gamma$-rays.}\label{b1}
\vspace{0.0cm}
\end{figure}
\section{Experimental details and results}\label{exp.details}
High-spin states of $^{70}$Ge were populated using the fusion-evaporation reaction $^{64}$Ni($^{12}$C, $\alpha$2n)$^{70}$Ge. A beam of $^{12}$C at 55 MeV energy was delivered by the 15 UD Pelletron accelerator \cite{pel} at the Inter University Accelerator Centre, New Delhi. The target used in this experiment was an isotopically enriched $^{64}$Ni foil with a thickness of $\sim$ 1.5 mg/cm$^2$ on a 7 mg/cm$^2$ thick Au backing. The de-excitation cascades of $\gamma$-rays from the residual nuclei were detected using the Gamma Detector Array (GDA)~\cite{gda}. The GDA consisted of 12 Compton-suppressed n-type Hyper Pure Germanium (HPGe) detectors, each having 23\% efficiency relative to a 3" x 3" NaI(Tl) crystal. These detectors were arranged in three groups, with each group consisting of four detectors, at angles of 50$^\circ$, 98$^\circ$ and 144$^\circ$ with respect to the beam direction. Anti-Compton shields (ACS) made of Bismuth Germanate (BGO) were used to suppress the background from Compton-scattered events.
The data were recorded using an online CAMAC-based data acquisition system called Freedom~\cite{candle1}, and the trigger was set when at least two detectors fired in coincidence. A total of more than 13 $\times$ 10$^7$ twofold or higher coincidence events were recorded in list mode. About 20\% of the recorded events correspond to the $\alpha$2n evaporation channel leading to the nucleus of interest, $^{70}$Ge. Offline data analysis was carried out using the programs RADWARE \cite{radware}, CANDLE \cite{candle2} and INGASORT \cite{ingasort}. List-mode data were sorted into a E$_\gamma$-E$_\gamma$ matrix from which coincidence spectra were generated with an energy dispersion of 0.5 keV/channel. In addition, separate 4k x 4k angle-dependent matrices were constructed by taking energies of $\gamma$-ray transitions from all detectors at 50$^\circ$ or 144$^\circ$ on one axis and coincidence $\gamma$ energies from the rest of the detectors at 98$^\circ$ on the other axis. These matrices were used to assign multipolarities of the $\gamma$ transitions using the directional correlation of oriented states (DCO) technique \cite{dco2}. The experimental DCO ratio in the present work is defined \cite{73As} as
\begin{equation}
R_{DCO} = \dfrac{I_{\gamma_1} \;\;at \;\; 50^{\circ}\;or\;144^{\circ} \;\;gated \;\;by\;\; \gamma_2 \;\;at\;\; 98^\circ}{I_{\gamma_1} \;\;at\;\; 98^\circ \;\;gated \;\;by\;\; \gamma_2 \;\;at\;\; 50^\circ\;or\; 144^{\circ}}
\end{equation}
If the gating transition is of stretched quadrupole nature, then this ratio is $\sim$ 1 for pure quadrupole transitions and 0.5 for pure dipole ones. If the gating transition is of pure dipole multipolarity, then this ratio lies between 0 and 2 depending on the mixing ratio, and is 1 for pure dipole transitions.
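A minimal sketch of this measurement, assuming hypothetical gated intensities and simple quadrature error propagation, is given below.

```python
import numpy as np

def dco_ratio(I_50, dI_50, I_98, dI_98):
    """R_DCO = I(gamma1 at 50/144 deg, gated at 98 deg)
             / I(gamma1 at 98 deg, gated at 50/144 deg),
    with quadrature propagation of the two intensity errors."""
    R = I_50 / I_98
    dR = R * np.sqrt((dI_50 / I_50)**2 + (dI_98 / I_98)**2)
    return R, dR

# Illustrative gated intensities for a candidate stretched quadrupole
R, dR = dco_ratio(412.0, 18.0, 398.0, 20.0)
print(f"R_DCO = {R:.2f} +/- {dR:.2f}  (~1 => quadrupole in a quadrupole gate)")
```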
\begin{figure}
\centerline{\includegraphics[scale = 0.37, angle=0 ]{70ge_b2.eps}}
\caption{ Representative $\gamma$-$\gamma$ coincidence spectra showing the transitions in band B2, common in gates on (a) both 1134- and 1109-keV, and (b) both 1134- and 1240-keV $\gamma$-rays.} \label{pp2015}
\vspace*{0.1cm}
\end{figure}
The level scheme of $^{70}$Ge has been extended up to the state with J$^\pi$ = (20$^+$) and an excitation energy of 10.2 MeV, based on $\gamma$-$\gamma$ coincidence relationships, intensity arguments and DCO ratio measurements. The partial level scheme of $^{70}$Ge established in the present work, relevant to the discussion in this article, is shown in Fig.~\ref{fig:level-scheme}. The ground-state positive-parity band determined from the present work is shown in Fig.~\ref{fig:level-scheme} as band B1. This band was known previously up to spin J$^\pi$ = (12$^+$) \cite{sugawara,pre2,pre3} and is extended to 14$\hbar$ in the present work with the inclusion of the 1051-keV $\gamma$-ray transition on top of the 12$^+$ level. An example of a $\gamma$-$\gamma$ coincidence spectrum gated on the 906-keV transition is shown in Fig.~\ref{b1}, illustrating the transitions in band B1. The inset of this figure shows the transitions in band B1 that are common in gates on the 1051- and 1474-keV $\gamma$-ray transitions. An important observation in the present work is the identification of a new band structure B2, which arises from the forking of the ground-state band at the 6$^+$ state. Such forking band structures have also been observed earlier in the neighboring nuclei $^{66-68}$Ge \cite{66Ge,68Ge-1, 68Ge-2,68Ge}. The band B2 is extended to 20$\hbar$ with the addition of five new $\gamma$-transitions of energies 1240, 840, 626, 1178 and 846 keV above the 5538-keV state. Representative $\gamma$-$\gamma$ coincidence spectra gated on the 1134- and 1109-keV, and 1134- and 1240-keV $\gamma$-rays (generated using the AND logic in RADWARE \cite{radware}) are shown in panels (a) and (b) of Fig.~\ref{pp2015}, displaying the newly identified transitions in band B2. The DCO ratios calculated from the two asymmetric matrices for all transitions in band B2 (except the 846-keV one, which is quite weak) are consistent with a stretched quadrupole nature, and they are therefore placed in the level scheme as a $\Delta$J=2 spin sequence.
The band B3 is extended up to spin 8$\hbar$ by placing a 1067-keV $\gamma$ transition above the 3752-keV state. A 1218-keV $\gamma$-transition decaying from the (5$^+$) to the 3$^+$ state in band B3 is also confirmed in the present work, consistent with the placement in Ref.~\cite{pre2}, whereas this transition was not reported in the recent work \cite{sugawara}. A representative sum $\gamma$-$\gamma$ coincidence spectrum gated on the 667- and 1098-keV $\gamma$-rays is shown in Fig.~\ref{bandb3}. Parities for the energy levels in bands B1, B2 and B3 are assigned based on earlier works and on systematics \cite{sugawara, 66Ge, 68Ge}. Details of the $\gamma$-ray energies, measured relative intensities, DCO ratios and multipolarities of the observed $\gamma$-ray transitions of $^{70}$Ge are summarized in Table~\ref{DCO}.
\begin{figure}
\vspace{0.4cm}
\includegraphics[width=8.0cm, height=4.5cm, trim = 1cm 0cm 1cm 0cm ]{70geb3.eps}
\caption{Sum $\gamma$-$\gamma$ coincidence spectrum displaying the transitions in band B3 with gates on the 667- and 1098-keV $\gamma$-rays. The 912-keV $\gamma$-ray marked with an asterisk is a contaminant from $^{73}$As.}\label{bandb3}
\end{figure}
\setlength{\tabcolsep}{4pt}
\begin{table
\centering
\caption{Transition energy (E$_\gamma$), relative intensity (I$_\gamma$), DCO ratio (R$_{DCO}$), multipolarity of the transition (Q: Quadrupole/D: Dipole), and spins of initial (J$_{i}^{\pi}$) and final states (J$_{f}^{\pi}$) for the $\gamma$-transitions shown in the level scheme of $^{70}$Ge, are listed. Relative intensities are calculated with respect to the 1143-keV transition by normalizing its intensity to a value of 100. $\Delta$J = 2 transitions are used as gating transitions for DCO ratio measurements. Errors are given in parentheses for I$_\gamma$ and R$_{DCO}$. Multipolarity mentioned in parenthesis is tentative. \label{DCO}}
\begin{center}
\begin{tabular}{cccccc}
\hline\hline
\emph{$E_{\gamma}$} & \emph{$I_{\gamma}$} & \emph{$R_{DCO}$} & \emph{Multipolarity of} &\emph{$J_i^{\pi}$} &\emph{$J_f^{\pi}$} \\
(keV) & (Rel.) & & \emph{transition} & \\
\hline\hline
450 & 1.5(3) & - & (Q) & 8$^+$ & 6$^+$\\
490 & 0.8(2) & - & (Q) & 6$^+$ & 4$^+$ \\
626 & 8.9(11) & 1.17(22) & Q & 16$^{+}$ & 14$^{+}$\\
653 & 1.2(4) & - & - & 4$^+$ & 4$^+$ \\
667 & 11.9(6) & 0.94(7) &$\Delta I=0$, Q & 2$^+$ & 2$^+$\\
677 & 1.0(3) & - & (Q) & 8$^+$ & 6$^+$\\
743 & 3.4(3) & 0.72(13) & D & 3$^+$ & 2$^+$\\
840 & 10.8(10) & 1.09(18) & Q & 14$^{+}$ & 12$^{+}$\\
846 & 2.2(8) & & (Q) & (20$^{+}$) & 18$^{+}$\\
906 & 51.1(12) & 0.99(7) & Q & 8$^{+}$ & 6$^{+}$\\
947 & 6.9(5) & 1.01(3) & Q & 6$^+$ & 4$^+$\\
1039 & 183.4(9) & 1.01(5) & Q & 2$^{+}$ & 0$^{+}$\\
1039 & 24.8(9) & 1.12(11) & Q & 10$^{+}$ & 8$^+$\\
1051 & 11.7(14) & 1.17(12) & Q & 14$^{+}$ & 12$^{+}$\\
1067 & 3.3(5) & - & (Q) & (8$^+$) & 6$^+$\\
1098 & 9.3(7) & 1.19(10) & Q & 4$^+$ & 2$^+$\\
1109 & 23.1(12) & 0.98(12) & Q & 10$^{+}$ & 8$^{+}$\\
1113 & 134.9(9) & 1.05(6) & Q & 4$^{+}$ & 2$^{+}$ \\
1134 & 29.5(9) & 1.01(9) & Q & 8$^{+}$ & 6$^{+}$ \\
1143 & 100 & & Q & 6$^+$ & 4$^+$\\
1178 & 4.3(9) & 0.94(25) & Q & 18$^{+}$ & 16$^{+}$\\
1218 & 1.5(4) & - & (Q) & (5$^+$) & 3$^+$\\
1240 & 14.1(11) & 1.08(15) & Q & 12$^+$ & 10$^+$\\
1411 & 3.4(4) & - & (D) & 3$^+$ & 2$^+$\\
1474 & 14.7(11) & 1.13(12) & Q & 12$^+$ & 10$^+$ \\
1707 & 4.7(5) & - & (Q) & 2$^+$ & 0$^+$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\includegraphics[scale=0.43, angle=0]{ge70-trs-final.eps}
\caption{ Total Routhian surface calculations for positive-parity, positive signature states ($\pi $, $\alpha $) = (+, +) \cite{cranking} for $^{70}$Ge at rotational frequencies of 0.50 MeV (top panel) and 0.70 MeV (bottom panel). The energy separation between adjacent contours is 0.2 MeV.}\label{TRS}
\vspace{0.0 cm}
\end{figure}
\begin{figure}
\includegraphics[scale=0.30,angle=270]{ge70-nqp-ener-new.eps}
\vspace{0.2cm}
\includegraphics[scale=0.30,angle=270]{ge70-pqp-ener-new.eps}
\caption{\label{qp}(Color online) Cranked shell model calculations using the universal Woods-Saxon potential for quasineutrons (top panel) and quasiprotons (bottom panel) for $^{70}$Ge.}
\end{figure}
\section{Discussion}\label{discussion}
Low-spin positive-parity states in $^{70}$Ge were interpreted by several authors using various theoretical models \cite{ibm,ibm2,hfb}. In the present work, the observed band structures and shape evolution are discussed using standard cranked shell model and triaxial projected shell model approaches.
\subsection{Cranked Hartree Fock Bogoliubov analysis}\label{trs}
\begin{figure}
\includegraphics[scale=0.34,angle=0]{alignments-70Ge.eps}
\caption{\label{fig:alignment}(Color online) Experimental alignments as a function of rotational frequency
for bands, B1 and B2 in $^{70}$Ge. The reference rotor which was subtracted is based on Harris parameters, J$_0$ = 6.0 $\hbar^2$/MeV and J$_1$ = 3.5 $\hbar^4$/MeV$^3$. The alignments using the TPSM approach, to be discussed later, are also included for a comparison.}
\vspace{0.3cm}
\end{figure}
\begin{figure}
\includegraphics[width=6.0cm, trim = 1cm 0cm 1cm 0cm]{70ge-E-I.eps}
\caption{(Color online)Band diagrams for $^{70}$Ge. Labels ($K$,n-qp) indicate the $K$-value and the quasiparticle character of the configuration, for instance, $(3,2n)$ corresponds to the
$\gamma$-band built on this 2n-aligned state. For clarity, only the lowest projected K-configurations are shown and in the numerical calculations projections have been performed from
more than forty intrinsic states. }\label{figtpsm1}
\vspace{0.0 cm}
\end{figure}
Hartree-Fock-Bogoliubov cranking calculations have been performed using the universal parametrization of the Woods-Saxon potential with short-range monopole pairing~\cite{HFB}. The BCS formalism was used to calculate the pairing gap $\Delta$ for both protons and neutrons. Total Routhian Surface (TRS) calculations were performed in the ($\beta_2$, $\gamma$) plane at different rotational frequencies and the total energy was minimized with respect to hexadecapole deformation ($\beta_4$). TRS plots for favored positive-parity states (+, +) are shown in Fig.~\ref{TRS} at rotational frequencies of $\hbar\omega$ = 0.5 and 0.7 MeV. These indicate that the nucleus has substantial quadrupole deformation. At a rotational frequency $\hbar\omega$ = 0.5 MeV, in the vicinity of the first band crossing, a minimum is seen at ($\beta_2$, $\gamma $) $\approx $ (0.27, -15$^ \circ$), indicating that the nuclear shape is triaxial, but approaching prolate ($\gamma$=0$^\circ$). At even higher rotational frequency ($\hbar\omega$ = 0.7 MeV), the TRS calculations predict a fairly well-defined minimum with $\gamma \approx$ +12$^\circ$ and approximately the same quadrupole deformation. The energy minimum moves towards increasingly positive values of $\gamma $ at higher rotational frequencies, indicating loss of collectivity. To investigate the nature of the observed bands and crossing frequencies, the quasiparticle routhians were calculated for $\beta_2 \approx$ 0.27, $\gamma \approx$ -15$^\circ$ as a function of rotational frequency \cite{Tandel} and are depicted in Fig.~\ref{qp}. The neutron crossing is predicted at a considerably lower rotational frequency ($\hbar\omega$ = 0.5 MeV), while the proton crossing is expected at a much higher frequency, $\hbar\omega$ = 0.75 MeV.
The cranking formalism \cite{cranking} has been applied to extract the experimental alignments (i$_x$) as a function of rotational frequency ($\hbar \omega$). Figure \ref{fig:alignment} shows the alignment plot for bands B1 and B2 in $^{70}$Ge. The observed alignment at $\hbar\omega \approx$ 0.50 MeV for band B1 is attributed to $g_{9/2}^2$ neutron alignment, consistent with predictions in previous work~\cite{sugawara,pre2,pre3}. In comparison to neighboring isotopes, the observed crossing in band B1 of $^{70}$Ge occurs slightly earlier (by $\approx$ 0.16 MeV) than the observed alignments in $^{66-68}$Ge \cite{66Ge,68Ge}. This might be attributed to the shape change in $^{66-70}$Ge due to the large shell gaps existing at N = 34, 36 and 38 in the Nilsson single-particle level diagram. The newly identified positive-parity band B2, with band-head spin $I=8\hbar$, also exhibits a band crossing around rotational frequency $\approx$ 0.53 MeV (Fig. \ref{fig:alignment}) with moderate band interaction above the 6$^+$ state, similar to the one observed in the yrast band B1. A proton band crossing is ruled out as it is expected at $\approx$ 0.75 MeV from the cranked shell model analysis. Thus the observed band crossings in both bands B1 and B2 are attributed to g$_{9/2}$ neutrons. A second alignment is also observed in band B2 above spin 14$^+$, which might correspond to a four-quasiparticle structure. As is evident from Fig. \ref{fig:alignment}, the observed alignments in both bands B1 and B2 are consistent with the TPSM results discussed in the following section.
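For illustration, the following sketch extracts $\hbar\omega$ and i$_x$ for band B1 from the transition energies in Table~\ref{DCO}, using the Harris parameters quoted in the caption of Fig.~\ref{fig:alignment} and the standard prescriptions $I_x=\sqrt{(I+1/2)^2-K^2}$ and $\hbar\omega=\Delta E/\Delta I_x$; the exact conventions of the analysis code may differ in detail.

```python
import numpy as np

def alignment(E, I, K=0.0, J0=6.0, J1=3.5):
    """Experimental alignment i_x for a dI=2 band (level energies E in MeV
    at spins I), relative to the Harris reference J0 + J1*omega**2."""
    Ix = np.sqrt((I + 0.5)**2 - K**2)
    omega = (E[1:] - E[:-1]) / (Ix[1:] - Ix[:-1])   # hbar*omega at midpoints
    Ix_mid = 0.5 * (Ix[1:] + Ix[:-1])
    ix = Ix_mid - omega * (J0 + J1 * omega**2)      # subtract reference rotor
    return omega, ix

# Band B1 transition energies (MeV) from Table 1: 2+ -> 0+ up to 14+ -> 12+
I = np.arange(2, 16, 2, dtype=float)
E = np.cumsum([1.039, 1.113, 1.143, 0.906, 1.039, 1.474, 1.051])
omega, ix = alignment(E, I)
for w, a in zip(omega, ix):
    print(f"hbar*omega = {w:.3f} MeV, i_x = {a:.2f} hbar")
```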
\subsection{Triaxial projected shell model calculations}
\begin{figure}
\includegraphics[width=7.8cm, trim = 1cm 0cm 1cm 0cm]{70ge_levels.eps}
\caption{(Color online) Comparison of calculated energies by TPSM with observed experimental data for $^{70}$Ge.}\label{figtpsm2}
\end{figure}
TPSM Hamiltonian consists of pairing plus
quadrupole-quadrupole interaction terms \cite{KY95} :
\begin{equation}
\hat H = \hat H_0 - {1 \over 2} \chi \sum_\mu \hat Q^\dagger_\mu
\hat Q^{}_\mu - G_M \hat P^\dagger \hat P - G_Q \sum_\mu \hat
P^\dagger_\mu\hat P^{}_\mu,
\label{hamham}
\end{equation}
with the last term in (\ref{hamham}) being the quadrupole-pairing
force. Interaction strengths of the model Hamiltonian are chosen as follows: $QQ$-force strength $\chi$ is adjusted such that the physical quadrupole deformation $\epsilon$ is obtained as a result of self-consistent mean-field HFB calculation \cite{KY95}. Monopole pairing strength $G_M$ is of
the standard form
\begin{equation}
G_{M} = (G_{1}\mp G_{2}\frac{N-Z}{A})\frac{1}{A} \,(\rm{MeV}),
\label{gmpairing}
\end{equation}
where $- (+)$ is for neutron (proton).
In the present calculation, we use $G_1=20.82$ and $G_2=13.58$,
which approximately reproduce the observed odd-even mass differences
in this region \cite{Chanli15, js01, rp01}. The oscillator model space considered in the present
work is $N=3, 4$ and 5 for both neutrons and protons. The quadrupole pairing strength $G_Q$ is
assumed to be proportional to $G_M$, with the proportionality constant fixed to 0.18. These interaction strengths are consistent with those used earlier for the same mass region \cite{JS14}. Intrinsic quasiparticle states have been constructed for $^{70}$Ge with the deformation parameters $\epsilon = 0.235$ and $\epsilon'=0.145$ \cite{JS14}.
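For concreteness, Eq.~(\ref{gmpairing}) with these parameters evaluates as follows for $^{70}$Ge ($Z=32$, $N=38$):

```python
def G_M(A, N, Z, G1=20.82, G2=13.58):
    """Monopole pairing strengths (MeV): '-' sign for neutrons,
    '+' sign for protons, as in Eq. (4)."""
    Gn = (G1 - G2 * (N - Z) / A) / A
    Gp = (G1 + G2 * (N - Z) / A) / A
    return Gn, Gp

Gn, Gp = G_M(A=70, N=38, Z=32)
print(f"70Ge: G_M(neutrons) = {Gn:.4f} MeV, G_M(protons) = {Gp:.4f} MeV")
```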
The projected states from various intrinsic states close to the Fermi surface are displayed in Fig.~\ref{figtpsm1}. The ground-state, $\gamma$- and $\gamma\gamma$-bands labeled by $(0,0), (2,0)$ and $(4,0)$ result from angular-momentum projection of the vacuum configuration by specifying K=0, 2 and 4 respectively in the rotational D-matrix \cite{RS80}. It is noted that $\gamma$- and $\gamma\gamma$-bands depict a substantial signature splitting and even-spin states of the $\gamma$-band are close in energy to the ground-state band.
\begin{figure}
\includegraphics[width=6.5cm, trim = 1cm 0cm 1cm 0cm]{70ge-kappa.eps}
\caption{(Color online) Comparison of calculated energies with observed experimental level energies subtracted from reference value displayed as a function of spin for the band B1, B2 and ground-state band (g-band) in $^{70}$Ge.}\label{EvsI-70ge}
\end{figure}
What is most interesting to observe from Fig.~\ref{figtpsm1} is the crossing of the K=1 two-neutron aligned configuration $(1,2n)$ with the ground-state band at spin I=6$\hbar$. Further, the $\gamma$-band built on this configuration with K=3 also crosses the ground-state band between I=6 and 8$\hbar$. These aligning states result from the projection of the same intrinsic state but with different K-values. Since the K=3 two-neutron aligned $\gamma$-band has a lower signature splitting than the parent band, the lowest odd-spin members along the yrast line originate from this configuration. The proton-aligned configurations $(1,2p)$ and $(3,2p)$ lie at higher excitation energies and do not cross the ground-state band. However, the two-neutron plus two-proton aligned configuration crosses the two-neutron aligned configuration above I=14$\hbar$, and the yrast band above this spin value is composed of four-quasiparticle states.
\begin{figure}
\includegraphics[width=6.5cm, trim = 1cm 0cm 1cm 0cm]{70ge-amplitude.eps}
\caption{(Color online) Probability of various projected K-configurations in the wavefunctions of the observed bands for $^{70}$Ge. See caption of Fig. \ref{figtpsm1} for meaning of various symbols.}\label{figtpsm3}
\vspace{0.2 cm}
\end{figure}
The projected states shown in Fig.~\ref{figtpsm1}, and many more states around the Fermi surface ($\sim 40$ in number), are then employed to diagonalize the shell model Hamiltonian, Eq.~(\ref{hamham}). Energies obtained after diagonalization are compared with the experimental data in Fig.~\ref{figtpsm2}. It is evident from the figure that the experimental data are reproduced reasonably well by the TPSM calculations. This can be seen more clearly in Fig.~\ref{EvsI-70ge}, where the experimental data are compared with the TPSM calculations for the ground-state, B1 and B2 bands after subtracting a reference value from the level energies. The experimental level energies are nearly degenerate with the TPSM results up to the highest observed spin in band B1. In the case of band B2, the level energies are almost degenerate up to spin 14$\hbar$ and then deviate at higher spin. This could be due to shape changes at higher spins, as predicted by the TRS study presented in Sec.~\ref{trs}.
Further, to probe the structure of the high-spin states shown in Fig.~\ref{figtpsm2} after band mixing, the dominant components of the projected wavefunctions of the states are displayed in Fig.~\ref{figtpsm3}. The ground-state band up to spin I=4$\hbar$ has predominantly a zero-quasiparticle configuration with K=0, as is evident from panel (a) of Fig.~\ref{figtpsm3}. The spin state with I=6$\hbar$ has a substantial contribution from the two-quasiparticle neutron configuration with K=1. The amplitudes of the wavefunctions of the two aligned bands observed above the ground-state band are shown in panels (b) and (c). These are noted to be dominated by the K=1 two-neutron aligned configuration $(1,2n)$ and the K=3 $\gamma$-band built on this aligned state for angular-momentum states I=8, 10, 12 and 14$\hbar$. For high-spin states of I=16$\hbar$ and above, the wavefunctions are mostly composed of four-quasiparticle states.
\subsection{Comparison with band structures in $^{68}$Ge}
The nature of the observed quasiparticle alignments and band structures in $^{70}$Ge is quite similar to that of its neighboring isotope $^{68}$Ge~\cite{68Ge-1,68Ge-2,68Ge}, in which the ground-state band forks beyond $I=8^+$ into multiple band structures.
\begin{figure}
\includegraphics[width=7.5cm, trim = 1cm 0cm 1cm 0cm]{68ge-levels.eps}
\caption{(Color online) Comparison of the calculated TPSM energies with available experimental data for $^{68}$Ge \cite{68Ge}.}\label{figtpsm4}
\end{figure}
To gain insight into the nature of the observed alignments and forking band structures in $^{68}$Ge, we have performed TPSM calculations for $^{68}$Ge with deformation parameters $\epsilon = 0.22$ and $\epsilon'=0.16$. The predicted TPSM band structures after band mixing are compared with the experimental data in Fig.~\ref{figtpsm4}. It is evident from the figure that the four observed bands B1 to B4 above the ground-state band are reproduced quite well by the TPSM approach. Figure \ref{fig:alignment68ge} compares the observed alignments with the TPSM calculations as a function of rotational frequency for bands B1, B2 and B3, indicating that all three bands have sharp band crossings and are composed of two-quasiparticle structures after the band crossing. Further, from the analysis of the TPSM wavefunctions, it is seen that the three bands B1, B2 and B3 are dominated by the K=1 two-neutron aligned configuration, the K=3 $\gamma$-band built on this aligned configuration, and the K=1 two-proton aligned configuration, respectively. The odd-spin band has a dominant contribution from the K=3 $\gamma$-band built on the neutron-aligned band. Therefore, the two even-spin aligned band structures are predicted to have the same neutron configuration, while the third one has a proton structure. In previous work, these three even-spin bands were interpreted \cite{68Ge-2, 68Ge} as two two-neutron aligned bands, with the configuration of the third band remaining unresolved. g-factor measurements of the states in $^{70}$Ge and $^{68}$Ge are highly desirable to further probe the predicted intrinsic structures of the observed bands.
\begin{figure}
\includegraphics[scale=0.38,angle=0]{alignments-68Ge.eps}
\caption{\label{fig:alignment68ge}(Color online) Comparison of experimental and TPSM alignments as a function of rotational frequency for the bands B1, B2 and B3 in $^{68}$Ge.}
\vspace{0.3 cm}
\end{figure}
\section{Summary and conclusions}\label{summary}
In summary, a new positive-parity band has been identified in $^{70}$Ge through a $\gamma$-ray spectroscopic study, which extended the level scheme up to spin (20$\hbar$) and an excitation energy of 10.2 MeV. The ground-state band forks into two branches above the 6$^+$ state, resulting in two positive-parity band structures. It has been demonstrated using the CSM and TPSM approaches that both observed band structures have two-neutron aligned configurations. The possibility of a proton structure is ruled out since in the CSM study the proton crossing occurs at $\hbar \omega=0.75$ MeV, whereas in both bands the crossing is observed at $\hbar \omega \approx 0.5$ MeV. From the TPSM wavefunctions, it is noted that band B1 is based on a K=1 two-neutron quasiparticle configuration and band B2 is predicted to be a K=3 $\gamma$-band built on this aligned two-quasiparticle band. The forking of the ground-state band into two bands in $^{70}$Ge has, therefore, a different origin compared to the forking observed earlier in other nuclei. Further, it has been shown that one of the observed bands in $^{68}$Ge also has the structure of a $\gamma$-band built on a two-quasiparticle configuration, indicating that this kind of two-quasiparticle band structure may be more widespread and needs to be explored in other nuclei and mass regions of the periodic table.
\section{Acknowledgments}
We thank the Pelletron crew of the IUAC, New Delhi for their support during the experiment and the target laboratory group of IUAC for their help in making $^{64}$Ni target. We also thank Professor S. C. Pancholi for his valuable suggestions. The author (M.K.R) would like to acknowledge financial support provided by UFR project fellowship (No. 42328), IUAC, New Delhi and Senior Research Fellowship (No.09/002(0494)/2011-EMR-1) from Council of Scientific and Industrial Research (CSIR), India.
\section{Introduction}\label{sec:intro}
Network embedding is a fundamental task for graph analytics, which has attracted much attention from both academia (\textit{e.g.}, \cite{perozzi2014deepwalk,node2vec2016,tang2015line}) and industry (\textit{e.g.}, \cite{pbg2019,zhu2019aligraph}). Given an input graph $G$, network embedding converts each node $v \in G$ to a compact, fixed-length vector $X_v$, which captures the topological features of the graph around node $v$. In practice, however, graph data often comes with \textit{attributes} associated to nodes. While we could treat graph topology and attributes as separate features, doing so loses the important information of \textit{node-attribute affinity} \cite{meng2019co}, \textit{i.e.}, attributes that can be reached by a node through one or more hops along the edges in $G$. For instance, consider a graph containing companies and board members. An important type of insights that can be gained from such a network is that one company (\textit{e.g.}, Tesla Motors) can reach attributes of another related company (\textit{e.g.}, SpaceX) connected via a common board member (Elon Musk). To incorporate such information, \textit{attributed network embedding} maps both topological and attribute information surrounding a node to an embedding vector, which facilitates accurate predictions, either through the embeddings themselves or in downstream machine learning tasks.
Effective ANE computation is a highly challenging task, especially for massive graphs, \textit{e.g.}, with millions of nodes and billions of edges. In particular, each node $v \in G$ could be associated with a large number of attributes, which corresponds to a high dimensional space; further, each attribute of $v$ could influence not only $v$'s own embedding, but also those of $v$'s neighbors, neighbors' neighbors, and far-reaching connections via multiple hops along the edges in $G$. Existing ANE solutions are immensely expensive and largely fail on massive graphs. Specifically, as reviewed in Section \ref{sec:rw}, one class of previous methods \textit{e.g.}, \cite{yang2015network,zhang2016homophily,yang2018binarized,huang2017accelerated}, explicitly construct and factorize an $n\times n$ matrix, where $n$ is the number of nodes in $G$. For a graph with 50 million nodes, storing such a matrix of double-precision values would take over 20 petabytes of memory, which is clearly infeasible.
Another category of methods, \textit{e.g.}, \cite{zhang2018anrl,gao2018deep,pan2018adversarially,liu2018content}, employ deep neural networks to extract higher-level features from nodes' connections and attributes. For a large dataset, training such a neural network incurs vast computational costs; further, the training process is usually done on GPUs with limited graphics memory, \textit{e.g.}, 32GB on Nvidia's flagship Tesla V100 cards. Thus, for massive graphs, currently the only option is to compute ANE with a large cluster, \textit{e.g.}, \cite{zhu2019aligraph}, which is financially costly and out of the reach of most researchers.
In addition, to our knowledge, all existing ANE solutions are designed for undirected graphs, and it is unclear how to incorporate edge direction information (\textit{e.g.}, asymmetric transitivity \cite{zhou2017scalable}) into their resulting embeddings. In practice, many graphs are directed (\textit{e.g.}, one paper citing another), and existing methods yield suboptimal result quality on such graphs, as shown in our experiments. Can we compute effective ANE embeddings on a massive, attributed, directed graph on a single server?
This paper provides a positive answer to the above question with $\mathsf{PANE}$\xspace, a novel solution that significantly advances the state of the art in ANE computation. Specifically, as demonstrated in our experiments, the embeddings obtained by $\mathsf{PANE}$\xspace simultaneously achieve the highest prediction accuracy compared to previous methods for 3 common graph analytics tasks: attribute inference, link prediction, and node classification, on common benchmark graph datasets. On the largest Microsoft Academic Knowledge Graph ({\em MAG}) dataset, $\mathsf{PANE}$\xspace is the only viable solution on a single server, whose resulting embeddings lead to 0.88 average precision (AP) for attribute inference, 0.965 AP for link prediction, and 0.57 micro-F1 for node classification. $\mathsf{PANE}$\xspace obtains such results using 10 CPU cores, 1TB memory, and 12 hours running time.
$\mathsf{PANE}$\xspace achieves effective and scalable ANE computation through three main contributions: a well-thought-out problem formulation based on a novel random walk model, a highly efficient solver, and non-trivial parallelization of the algorithm.
Specifically, as presented in Section \ref{sec:objective}, $\mathsf{PANE}$\xspace formulates the ANE task as an optimization problem with the objective of approximating normalized multi-hop node-attribute affinity using node-attribute co-projections \cite{meng2019co,meng2020jointly}, guided by a shifted pairwise mutual information (SPMI) metric that is inspired by natural language processing techniques. The affinity between a given node-attribute pair is defined via a random walk model specifically adapted to attributed networks. Further, we incorporate edge direction information by defining separate forward and backward affinity, embeddings, and SPMI metrics. Solving this optimization problem is still immensely expensive with off-the-shelf algorithms, as it involves the joint factorization of two $O(n \cdot d)$-sized matrices, where $n$ and $d$ are the numbers of nodes and attributes in the input data, respectively. Thus, $\mathsf{PANE}$\xspace includes a novel solver with a key module that seeds the optimizer through a highly effective greedy algorithm, which drastically reduces the number of iterations till convergence.
Finally, we devise non-trivial parallelization of the $\mathsf{PANE}$\xspace algorithm, to utilize modern multi-core CPUs without significantly compromising result utility.
Extensive experiments, using 8 real datasets and comparing against 10 existing solutions, demonstrate that $\mathsf{PANE}$\xspace consistently obtains high-utility embeddings with superior prediction accuracy for attribute inference, link prediction and node classification, at a fraction of the cost compared to existing methods.
Summing up, our contributions in this paper are as follows:
\vspace{-\topsep}
\begin{itemize}[leftmargin=*]
\item We formulate the ANE task as an optimization problem with the objective of approximating multi-hop node-attribute affinity.
\item We further consider edge direction in our objective by defining forward and backward affinity matrices using the SPMI metric.
\item We propose several techniques to efficiently solve the optimization problem, including efficient approximation of the affinity matrices, fast joint factorization of the affinity matrices, and a key module to greedily seed the optimizer, which drastically reduces the number of iterations till convergence.
\item We develop non-trivial parallelization techniques of $\mathsf{PANE}$\xspace to further boost efficiency.
\item The superior performance of $\mathsf{PANE}$\xspace, in terms of efficiency and effectiveness, is evaluated against 10 competitors on 8 real datasets.
\end{itemize}
\vspace{-\topsep}
\vspace{1mm} \noindent
The rest of the paper is organized as follows. In Section \ref{sec:back}, we formally formulate our ANE objective by defining node-attribute affinity. We present single-thread $\mathsf{PANE}$\xspace with several speedup techniques in Section \ref{sec:opt}, and further develop non-trivial parallel $\mathsf{PANE}$\xspace in Section \ref{sec:parallel}. The effectiveness and efficiency of our solutions are evaluated in Section \ref{sec:exp}. Related work is reviewed in Section \ref{sec:rw}. Finally, Section \ref{sec:ccl} concludes the paper. Note that proofs of lemmas are given in Appendix \ref{sec:proof}.
\section{Problem Formulation}\label{sec:back}
\subsection{Preliminaries}
\begin{table}[!t]
\centering
\renewcommand{\arraystretch}{1.1}
\begin{small}
\caption{Frequently used notations.}\vspace{-3mm} \label{tbl:notations}
\begin{tabular}{|p{0.9in}|p{2.15in}|}
\hline
{\bf Notation} & {\bf Description}\\
\hline
$G$=$(V,E_{V},R,E_{R})$ & A graph $G$ with node set $V$, edge set $E_{V}$, attribute set $R$, and node-attribute association set $E_{R}$.\\
\hline
$n, m, d$ & The numbers of nodes, edges, and attributes in $G$, respectively.\\
\hline
$k$ & The space budget of embedding vectors. \\
\hline
$\mathbf{A}\xspace, \mathbf{D}\xspace, \mathbf{P}\xspace, \mathbf{R}\xspace$ & The adjacency, out-degree, random walk and attribute matrices of $G$. \\
\hline
$\mathbf{R}\xspace_r, \mathbf{R}\xspace_c$ & The row-normalized and column-normalized attribute matrices. See \equref{eq:norm-r}. \\
\hline
$\mathbf{F}\xspace,\mathbf{B}\xspace$ & The forward and backward affinity matrices. See Equations \eqref{eq:fwd-prob} and \eqref{eq:bwd-prob}.\\
\hline
$\mathbf{F}\xspace',\mathbf{B}\xspace'$ & The approximate forward and backward affinity matrices. See Equation \eqref{equ:approxFB}.\\
\hline
$\mathbf{X}_f\xspace, \mathbf{X}_b\xspace, \mathbf{Y}\xspace$ & The forward and backward embedding vectors, and attribute embedding vectors.\\
\hline
$\alpha$ & The random walk stopping probability. \\
\hline
$n_b$ & The number of threads.\\
\hline
\end{tabular}
\end{small}
\vspace{2mm}
\end{table}
Let $G=(V, E_V, R, E_R)$ be an {\it attributed network}, consisting of (i) a node set $V$ with cardinality $n$, (ii) a set of edges $E_V$ of size $m$, each connecting two nodes in $V$, (iii) a set of attributes $R$ with cardinality $d$, and (iv) a set of node-attribute associations $E_R$, where each element is a tuple $(v_i ,r_j, w_{i,j})$ signifying that node $v_i \in V$ is directly associated with attribute $r_j \in R$ with a weight $w_{i,j}$ (\textit{i.e.}, the attribute value). Note that for a categorical attribute such as marital status, we first apply a pre-processing step that transforms the attribute into a set of binary ones through one-hot encoding.
Without loss of generality, we assume that $G$ is a directed graph; if $G$ is undirected, then we treat each edge $(v_i, v_j)$ in $G$ as a pair of directed edges with opposing directions: $(v_i, v_j)$ and $(v_j, v_i)$.
Given a space budget $k \ll n$, a \textit{node embedding} maps a node $v \in V$ to a length-$k$ vector. The general, informal goal of attributed network embedding (ANE) is to compute such an embedding $X_v$ for each node $v$ in the input graph, such that $X_v$ captures the graph structure and attribute information surrounding node $v$.
In addition, following previous work \cite{meng2019co}, we
also allocate a space budget $\frac{k}{2}$ (explained later in Section \ref{sec:obj2}) for each attribute $r \in R$, and aim to compute an {\em attribute embedding} vector for $r$ of length $\frac{k}{2}$.
\vspace{1mm} \noindent
\textbf{Notations.} We denote matrices in bold uppercase, {\it e.g.},\xspace $\mathbf{M}\xspace$. We use $\mathbf{M}\xspace[v_i]$ to denote the $v_i$-th row vector of $\mathbf{M}\xspace$, and $\mathbf{M}\xspace[:,r_j]$ to denote the $r_j$-th column vector of $\mathbf{M}\xspace$. In addition, we use $\mathbf{M}\xspace[v_i,r_j]$ to denote the element at the $v_i$-th row and $r_j$-th column of $\mathbf{M}\xspace$. Given an index set $S$, we let $\mathbf{M}\xspace[S]$ (resp.\ $\mathbf{M}\xspace[:,S]$) be the matrix block of $\mathbf{M}\xspace$ that contains the row (resp.\ column) vectors of the indices in $S$.
Let $\mathbf{A}\xspace$ be the adjacency matrix of the input graph $G$, {\it i.e.},\xspace $\mathbf{A}\xspace[v_i, v_j] = 1$ if $(v_i,v_j)\in E_V$, otherwise $\mathbf{A}\xspace[v_i, v_j] = 0$. Let $\mathbf{D}\xspace$ be the diagonal out-degree matrix of $G$, {\it i.e.},\xspace $\mathbf{D}\xspace[v_i,v_i] = \sum_{v_j\in V}{\mathbf{A}\xspace[v_i,v_j]}$. The random walk matrix of $G$ is defined as $\mathbf{P}\xspace = \mathbf{D}\xspace^{-1}\mathbf{A}\xspace$.
Furthermore, we define an attribute matrix $\mathbf{R}\xspace \in \mathbb{R}^{n\times d}$, such that $\mathbf{R}\xspace[v_i,r_j] = w_{i,j}$ is the weight associated with the entry ($v_i$, $r_j$, $w_{i,j}$) $ \in E_R$. We refer to $\mathbf{R}\xspace[v_i]$ as node $v_i$'s \textit{attribute vector}. Based on $\mathbf{R}\xspace$, we derive the row-normalized (resp.\ column-normalized) attribute matrix $\mathbf{R}\xspace_r$ (resp.\ $\mathbf{R}\xspace_c$) as follows:
\begin{align}
\mathbf{R}\xspace_r[v_i,r_j]=\frac{\mathbf{R}\xspace[v_i,r_j]}{\sum_{r_l\in R}{\mathbf{R}\xspace[v_i,r_l]}},\quad \mathbf{R}\xspace_c[v_i,r_j]=\frac{\mathbf{R}\xspace[v_i,r_j]}{\sum_{v_l\in V}{\mathbf{R}\xspace[v_l,r_j]}}\label{eq:norm-r}.
\end{align}
\tblref{tbl:notations} lists the frequently used notations in our paper.
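As a minimal illustration, the normalizations in \equref{eq:norm-r} and the random walk matrix $\mathbf{P}\xspace = \mathbf{D}\xspace^{-1}\mathbf{A}\xspace$ can be computed on toy dense inputs as follows; a real implementation would use sparse matrices.

```python
import numpy as np

# Toy attribute matrix R (n=4 nodes, d=3 attributes); rows/columns nonzero
R = np.array([[1., 0., 2.],
              [0., 3., 1.],
              [2., 1., 0.],
              [1., 1., 1.]])

R_r = R / R.sum(axis=1, keepdims=True)   # row-normalized: each row sums to 1
R_c = R / R.sum(axis=0, keepdims=True)   # column-normalized: columns sum to 1

# Random walk matrix P = D^{-1} A for a toy directed adjacency matrix
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a distribution
```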
\vspace{1mm} \noindent
\textbf{Extended graph.}
Our solution utilizes an \textit{extended graph} $\mathcal{G}$ that incorporates additional nodes and edges into $G$. To illustrate, \figref{fig:toy} shows an example extended graph $\mathcal{G}$ constructed based on an input attributed network $G$ with 6 nodes $v_1$-$v_6$ and 3 attributes $r_1$-$r_3$. The left part of the figure (in black) shows the topology of $G$, \textit{i.e.}, the edge set $E_V$. The right part of the figure (in blue) shows the attribute associations $E_R$ in $G$. Specifically, for each attribute $r_j \in R$, we create an additional node in $\mathcal{G}$; then, for each entry in $E_R$, \textit{e.g.}, ($v_3$, $r_1$, $w_{3, 1}$), we include in $\mathcal{G}$ a pair of edges with opposing directions connecting the node (\textit{e.g.}, $v_3$) with the corresponding attribute node (\textit{e.g.}, $r_1$), with an edge weight (\textit{e.g.}, $w_{3, 1}$). Note that in this example, nodes $v_1$ and $v_2$ are not associated with any attribute.
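A minimal sketch of constructing the weighted adjacency structure of $\mathcal{G}$ from $\mathbf{A}\xspace$ and $\mathbf{R}\xspace$ is shown below (a dense toy version; the $d$ attribute nodes are appended after the $n$ graph nodes):

```python
import numpy as np

def extended_adjacency(A, R):
    """Weighted adjacency of the extended graph: the n graph nodes come
    first, followed by the d attribute nodes; every nonzero R[i, j]
    becomes a pair of opposing weighted edges v_i <-> r_j."""
    n, d = R.shape
    top = np.hstack([A, R])                        # node->node, node->attr
    bottom = np.hstack([R.T, np.zeros((d, d))])    # attr->node, no attr->attr
    return np.vstack([top, bottom])                # (n+d) x (n+d) matrix
```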
\begin{figure}[!t]
\centering
\hspace{-18mm}
\begin{minipage}{0.39\textwidth}
\centering
\begin{small}
\includegraphics[width=0.5\columnwidth]{./figure/toy3.pdf}
\vspace{0mm}
\caption{Extended graph $\mathcal{G}$}\label{fig:toy}
\end{small}
\end{minipage}
\hspace{-17mm}
\begin{minipage}{0.27\textwidth}
\vspace{-2mm}
\centering
\captionsetup{type=table}
\renewcommand{\arraystretch}{1.0}
\begin{small}
\caption{Targets for $\mathbf{X}\xspace[v_i]\cdot\mathbf{Y}\xspace[r_j]^{\top}$.}\label{tbl:toy}
\vspace{-2mm}
\begin{tabular}{|c|c|c|c|}\hline
& $\mathbf{Y}\xspace[r_1]$ & $\mathbf{Y}\xspace[r_2]$ & $\mathbf{Y}\xspace[r_3]$ \\ \hline
$\mathbf{X}_f\xspace[v_1]$ & 1.0 & 0.92 & 0.47 \\
$\mathbf{X}_b\xspace[v_1]$ & 0.93 & 0.88 & 1.17 \\ \hline
$\mathbf{X}_f\xspace[v_2]$ & 1.0 & 0.92 & 0.47 \\
$\mathbf{X}_b\xspace[v_2]$ & 1.11 & 1.08 & 0.8 \\ \hline
$\mathbf{X}_f\xspace[v_3]$ & 1.12 & 1.04 & 0.54 \\
$\mathbf{X}_b\xspace[v_3]$ & 1.06 & 0.95 & 0.99 \\ \hline
$\mathbf{X}_f\xspace[v_5]$ & 0.98 & 1.1 & 1.08 \\
$\mathbf{X}_b\xspace[v_5]$ & 1.09 & 1.22 & 0.61 \\ \hline
$\mathbf{X}_f\xspace[v_6]$ & 0.89 & 0.82 & 2.05 \\
$\mathbf{X}_b\xspace[v_6]$ & 0.53 & 0.61 & 1.6 \\ \hline
\end{tabular}
\end{small}
\end{minipage}
\vspace{2mm}
\end{figure}
\subsection{Homogeneous Network Embedding}
One pioneering work for homogeneous network embedding (HNE) is $\mathsf{DeepWalk}$ \cite{perozzi2014deepwalk}, which adopts the SkipGram model \cite{mikolov2013distributed} and random walks to capture the graph structure surrounding a node and map it into a low-dimensional embedding vector. Several studies \cite{tang2015line,node2vec2016,zhou2017scalable,tsitsulin2018verse} aim to improve over $\mathsf{DeepWalk}$ by exploiting different random walk schemes. These random-walk-based solutions suffer from severe efficiency issues, since they need to sample a large number of random walks and conduct expensive training processes.
To alleviate the efficiency issue, massively parallel network embedding systems, including $\mathsf{PBG}$ \cite{pbg2019} and $\mathsf{GraphVite}$ \cite{zhu2019graphvite}, have been developed to utilize large systems with multiple processing units, including CPUs and GPUs. However, these systems consume immense computational resources that are financially expensive.
Qiu {\it et al.} proved that the aforementioned random-walk-based methods have equivalent matrix factorization forms, and proposed an efficient factorization-based HNE solution \cite{qiu2018network}.
In the literature, there are many factorization-based HNE solutions exhibiting superior efficiency and effectiveness, such as $\mathsf{RandNE}$ \cite{zhang2018billion}, $\mathsf{AROPE}$ \cite{zhang2018arbitrary}, $\mathsf{STRAP}$ \cite{yin2019scalable} and $\mathsf{NRP}$\xspace \cite{yang13homogeneous}.
However, all HNE solutions ignore attributes associated with nodes, limiting their utility in real-world attributed networks.
\section{Related Work}\label{sec:rw}
\subsection{Attributed Network Embedding}
\vspace{1mm} \noindent
{\bf Factorization-based methods.} Given an attributed network $G$ with $n$ nodes, existing factorization-based methods mainly involve two stages: (i) build a proximity matrix $\mathbf{M}\xspace\in \mathbb{R}^{n\times n}$ that models the proximity between nodes based on graph topology or attribute information; (ii) factorize $\mathbf{M}\xspace$ via techniques such as SGD \cite{bottou2010large}, ALS \cite{comon2009tensor}, and coordinate descent \cite{wright2015coordinate}.
Specifically, $\mathsf{TADW}$\xspace \cite{yang2015network} constructs a second-order proximity matrix $\mathbf{M}\xspace$ based on the adjacency matrix of $G$, and aims to reconstruct $\mathbf{M}\xspace$ by the product of the learned embedding matrix and the attribute matrix. $\mathsf{HSCA}$\xspace \cite{zhang2016homophily} ensures that the learned embeddings of connected nodes are close in the embedding space. $\mathsf{AANE}$\xspace \cite{huang2017accelerated} constructs a proximity matrix $\mathbf{M}\xspace$ using the cosine similarities between the attribute vectors of nodes. $\mathsf{BANE}$\xspace \cite{yang2018binarized} learns a binary embedding vector per node, {\it i.e.},\xspace $\{-1,1\}^{k}$, by minimizing the reconstruction loss of a unified matrix that incorporates both graph topology and attribute information. $\mathsf{BANE}$\xspace reduces space overheads at the cost of accuracy. To further balance the trade-off between space cost and representation accuracy, $\mathsf{LQANR}$\xspace \cite{ijcai2019-low} learns embeddings $\in \{-2^{b},\cdots,-1,0,1,\cdots,2^b\}^k$, where $b$ is the bit-width.
All these factorization-based methods incur immense overheads in building and factorizing the $n\times n$ proximity matrix. Further, these methods are designed for undirected graphs only.
\vspace{1mm} \noindent
{\bf Auto-encoder-based methods.}
An auto-encoder \cite{goodfellow2016deep} is a neural network model consisting of an encoder that compresses the input data to obtain embeddings and a decoder that reconstructs the input data from the embeddings, with the goal to minimize the reconstruction loss.
Existing methods either use different proximity matrices as inputs or design various neural network structures for the auto-encoder. Specifically, $\mathsf{ANRL}$\xspace \cite{zhang2018anrl} combines an auto-encoder with the SkipGram model to learn embeddings. $\mathsf{DANE}$\xspace \cite{gao2018deep} designs two auto-encoders to reconstruct the high-order proximity matrix and the attribute matrix, respectively. $\mathsf{ARGA}$\xspace \cite{pan2018adversarially} integrates auto-encoders with graph convolutional networks \cite{kipf2016semi} and generative adversarial networks \cite{goodfellow2014generative}. $\mathsf{STNE}$\xspace \cite{liu2018content} samples nodes via random walks and feeds the attribute vectors of the sampled nodes into an LSTM-based auto-encoder \cite{hochreiter1997long}.
$\mathsf{NetVAE}$\xspace \cite{ijcai2019-370} compresses the graph structures and node attributes with a shared encoder for transfer learning and information integration.
$\mathsf{CAN}$\xspace \cite{meng2019co} embeds both nodes and attributes into two Gaussian distributions using a graph convolutional network and a dense encoder. $\mathsf{SAGE2VEC}$ \cite{sheikh2019simple} proposes an enhanced auto-encoder model that preserves the global graph structure while handling the non-linearity and sparsity of both graph structures and attributes. $\mathsf{AdONE}$ \cite{bandyopadhyay2020outlier} designs an auto-encoder model for detecting and minimizing the effect of community outliers while generating embeddings. None of these auto-encoder-based methods considers edge directions; further, they suffer from severe efficiency issues due to the expensive training process of auto-encoders.
\vspace{1mm} \noindent
{\bf Other methods.}
$\mathsf{PRRE}$\xspace \cite{zhou2018prre} categorizes node relationships into positive, ambiguous and negative types, according to the graph and attribute proximities between nodes, and then employs Expectation Maximization (EM) \cite{dempster1977maximum} to learn embeddings.
$\mathsf{SAGE}$\xspace \cite{hamilton2017inductive} samples and aggregates features from a node’s local neighborhood and learns embeddings by LSTM and pooling.
$\mathsf{NetHash}$\xspace \cite{wu2018efficient} builds a rooted tree for each node by expanding along the neighborhood of the node, and then recursively sketches the rooted tree to get a summarized attribute list as the embedding vector of the node.
$\mathsf{PGE}$\xspace \cite{hou2019representation} groups nodes into clusters based on their attributes, and then trains neural networks with biased neighborhood samples in clusters to generate embeddings. $\mathsf{ProGAN}$\xspace \cite{gao2019progan} adopts generative adversarial networks to generate node proximities, followed by neural networks to learn node embeddings from the generated node proximities. $\mathsf{DGI}$\xspace \cite{velickovic2018deep} derives embeddings via graph convolutional networks, such that the mutual information between the embeddings for nodes and the embedding vector for the whole graph is maximized.
$\mathsf{MARINE}$ \cite{wu2019scalable} preserves long-range spatial dependencies between nodes in the embeddings by minimizing the information discrepancy in a Reproducing Kernel Hilbert Space.
Recently, there have been studies on embedding attributed heterogeneous networks, which consist of not only graph topology and node attributes but also node types and edge types. When there is only one type of node and one type of edge, these methods effectively work on attributed networks. For instance, Alibaba proposed $\mathsf{GATNE}$\xspace \cite{cen2019representation} for attributed heterogeneous network embedding.
For each node and each edge type, it learns an embedding vector, using the SkipGram model and random walks over the attributed heterogeneous network. It then obtains the overall embedding vector for each node by concatenating the node's embeddings over all edge types. $\mathsf{GATNE}$\xspace incurs expensive training overheads and relies heavily on the power of distributed systems.
\section{Problem Formulation}\label{sec:overview}
\subsection{Node-Attribute Affinity via Random Walks}\label{sec:objective}
As explained in Section \ref{sec:intro}, the resulting embedding of a node $v \in V$ should capture its \textit{affinity} with attributes in $R$, where the affinity definition should take into account both the attributes directly associated with $v$ in $E_R$, and the attributes of the nodes that $v$ can reach via edges in $E_V$. To effectively model node-attribute affinity via multiple hops in $\mathcal{G}$, we employ an adaptation of the \textit{random walks with restarts} (\textit{RWR}) \cite{jeh2003scaling,tong2006fast} technique to our setting with an extended graph $\mathcal{G}$. In the following, we refer to an RWR simply as a \textit{random walk}. Specifically, since $\mathcal{G}$ is directed, we distinguish two types of node-attribute affinity: \textit{forward affinity}, denoted as $\mathbf{F}\xspace$, and \textit{backward affinity}, denoted as $\mathbf{B}\xspace$.
\vspace{1mm} \noindent
\textbf{Forward affinity.} We first focus on forward affinity. Given an attributed graph $G$, a node $v_i$, and a random walk stopping probability $\alpha$ ($0<\alpha<1$), a \textit{forward random walk} on $\mathcal{G}$ starts from node $v_i$. At each step, assume that the walk is currently at node $v_l$. Then, the walk either (i) terminates at $v_l$ with probability $\alpha$, or (ii) follows an edge in $E_V$ to a random out-neighbor of $v_l$ with probability $1-\alpha$. After a random walk terminates at a node $v_l$,
we randomly follow an edge in $E_R$ to an attribute $r_j$, with probability $\mathbf{R}\xspace_r[v_l,r_j]$, \textit{i.e.}, a normalized edge weight defined in Equation \eqref{eq:norm-r}\footnote{In the degenerate case that $v_l$ is not associated with any attribute, \textit{e.g.}, $v_1$ in Figure \ref{fig:toy}, we simply restart the random walk from the source node $v_i$, and repeat the process.}. The forward random walk yields a \textit{node-to-attribute pair} $(v_i,r_j)$, and we add this pair to a collection $\mathcal{S}_f$.
Suppose that we sample $n_r$ node-to-attribute pairs for each node $v_i$; the size of $\mathcal{S}_f$ is then $n_r\cdot n$, where $n$ is the number of nodes in $G$.
Denote $p_f(v_i,r_j)$ as the probability that a forward random walk starting from $v_i$ yields a node-to-attribute pair $(v_i, r_j)$.
Then, the \textit{forward affinity} $\mathbf{F}\xspace[v_i,r_j]$ between node $v_i$ and attribute $r_j$ is defined as follows.
\begin{equation}\label{eq:fwd-prob}
\mathbf{F}\xspace[v_i,r_j] = \log\left(\frac{n\cdot p_f(v_i,r_j)}{\sum_{v_h\in V}{p_f(v_h,r_j)}}+1\right).
\end{equation}
To explain the intuition behind the above definition, note that in collection $\mathcal{S}_f$, the probabilities of observing node $v_i$, attribute $r_j$, and pair $(v_i, r_j)$ are $\mathbb{P}(v_i)=\frac{1}{n}$, $\mathbb{P}(r_j)=\frac{\sum_{v_h\in V}{p_f(v_h,r_j)}}{n}$, and $\mathbb{P}(v_i,r_j)=\frac{p_f(v_i,r_j)}{n}$, respectively. Thus, the above definition of forward affinity is a variant of the \textit{pointwise mutual information} (PMI) ~\cite{church1990word} between node $v_i$ and attribute $r_j$. In particular, given a collection of element pairs $\mathcal{S}$, the PMI of element pair $(x,y)\in \mathcal{S}$, denoted as $\textrm{PMI}(x,y)$, is defined as $\textrm{PMI}(x,y)=\log\left(\frac{\mathbb{P}(x,y)}{\mathbb{P}(x)\cdot\mathbb{P}(y)}\right)$, where $\mathbb{P}(x)$ (resp.\ $\mathbb{P}(y)$) is the probability of observing $x$ (resp.\ $y$) in $\mathcal{S}$ and $\mathbb{P}(x,y)$ is the probability of observing pair $(x,y)$ in $\mathcal{S}$. The larger $\textrm{PMI}(x,y)$ is, the more likely $x$ and $y$ co-occur in $\mathcal{S}$. Note that $\textrm{PMI}(x,y)$ can be negative. To avoid this, we use an alternative, the shifted PMI, defined as $\textrm{SPMI}(x,y)=\log\left(\frac{\mathbb{P}(x,y)}{\mathbb{P}(x)\cdot\mathbb{P}(y)}+1\right)$, which is guaranteed to be nonnegative while retaining the original order of the PMI values. $\mathbf{F}\xspace[v_i, r_j]$ in Equation \eqref{eq:fwd-prob} is then $\textrm{SPMI}(v_i,r_j)$.
Another way to understand Equation \eqref{eq:fwd-prob} is through an analogy to TF/IDF \cite{salton1986introduction} in natural language processing. Specifically, if we view all forward random walks
as a ``document'', then $n \cdot p_f(v_i, r_j)$ is akin to the term frequency of $r_j$, whereas the denominator in Equation \eqref{eq:fwd-prob} is similar to the inverse document frequency of $r_j$. Thus, the normalization penalizes common attributes, and compensates for rare attributes.
\vspace{1mm} \noindent
\textbf{Backward affinity.} Next we define backward affinity in a similar fashion. Given an attributed network $G$, an attribute $r_j$ and stopping probability $\alpha$, a {\em backward random walk} starting from $r_j$ first randomly samples a node $v_l$ according to probability $\mathbf{R}\xspace_c[v_l,r_j]$, defined in Equation \eqref{eq:norm-r}. Then, the walk starts from node $v_l$; at each step, the walk either terminates at the current node with probability $\alpha$, or randomly jumps to an out-neighbor of the current node with probability $1-\alpha$. Suppose that the walk terminates at node $v_i$; then, it returns an \textit{attribute-to-node pair} $(r_j,v_i)$, which is added to a collection $\mathcal{S}_b$. After sampling $n_r$ attribute-to-node pairs for each attribute, the size of $\mathcal{S}_b$ becomes $n_r\cdot d$.
Let $p_b(v_i, r_j)$ be the probability that a backward random walk starting from attribute $r_j$ stops at node $v_i$. In collection $\mathcal{S}_b$, the probabilities of observing attribute $r_j$, node $v_i$ and pair $(r_j, v_i)$ are $\mathbb{P}(r_j)=\frac{1}{d}$, $\mathbb{P}(v_i)=\frac{\sum_{r_h\in R}{p_b(v_i,r_h)}}{d}$ and $\mathbb{P}(v_i,r_j)=\frac{p_b(v_i, r_j)}{d}$, respectively. By the definition of SPMI, we define backward affinity $\mathbf{B}\xspace[v_i,r_j]$ as follows.
\begin{equation}\label{eq:bwd-prob}
\mathbf{B}\xspace[v_i,r_j] = \log\left(\frac{d\cdot p_b(v_i,r_j)}{\sum_{r_h\in R}{p_b(v_i,r_h)}}+1\right).
\end{equation}
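As a sanity check of these definitions, the following self-contained Python sketch (ours; the toy transition and attribute matrices are made up for illustration) estimates $p_f$ by simulating forward random walks and then evaluates the SPMI of Equation \eqref{eq:fwd-prob}; backward affinity can be checked analogously by first sampling the start node from $\mathbf{R}\xspace_c[:,r_j]$:
\begin{verbatim}
# Illustrative Monte-Carlo estimate of forward affinity; toy data only.
import numpy as np

rng = np.random.default_rng(0)
alpha, n_walks = 0.15, 20000

P = np.array([[0.0, 1.0, 0.0],      # row-stochastic random walk matrix
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
R_r = np.array([[1.0, 0.0],         # row-normalized attribute matrix
                [0.5, 0.5],
                [0.0, 1.0]])
n, d = R_r.shape

p_f = np.zeros((n, d))
for v in range(n):
    for _ in range(n_walks):
        u = v
        while rng.random() >= alpha:        # continue w.p. 1 - alpha
            u = rng.choice(n, p=P[u])       # move to a random out-neighbor
        r = rng.choice(d, p=R_r[u])         # pick an attribute at the stop
        p_f[v, r] += 1.0
p_f /= n_walks

# SPMI: column-normalize, scale by n, then apply log(x + 1).
F = np.log(n * p_f / p_f.sum(axis=0, keepdims=True) + 1.0)
print(np.round(F, 2))
\end{verbatim}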
\subsection{Objective Function} \label{sec:obj2}
Next we define our objective function for ANE, based on the notions of forward and backward node-attribute affinity defined in Equation \eqref{eq:fwd-prob} and Equation \eqref{eq:bwd-prob}, respectively.
Let $\mathbf{F}\xspace[v_i,r_j]$ (resp.\ $\mathbf{B}\xspace[v_i,r_j]$) be the forward affinity (resp.\ backward affinity) between node $v_i$ and attribute $r_j$. Given a space budget $k$, our objective is to learn (i) two embedding vectors for each node $v_i$, namely a \textit{forward embedding vector}, denoted as $\mathbf{X}_f\xspace[v_i]\in \mathbb{R}^{\frac{k}{2}}$ and a \textit{backward embedding vector}, denoted as $\mathbf{X}_b\xspace[v_i]\in \mathbb{R}^{\frac{k}{2}}$, as well as (ii) an \textit{attribute embedding vector} $\mathbf{Y}\xspace[r_j]\in \mathbb{R}^{\frac{k}{2}}$ for each attribute $r_j$, such that the following objective is minimized:
\begin{align}
\mathcal{O}=\min_{\mathbf{X}_f\xspace,\mathbf{Y}\xspace,\mathbf{X}_b\xspace}& \sum_{v_i\in V}\sum_{r_j\in R}\left(\mathbf{F}\xspace[v_i,r_j]-\mathbf{X}_f\xspace[v_i]\cdot \mathbf{Y}\xspace[r_j]^{\top}\right)^2\nonumber\\
& \quad\quad\quad +\left(\mathbf{B}\xspace[v_i,r_j]-\mathbf{X}_b\xspace[v_i]\cdot \mathbf{Y}\xspace[r_j]^{\top}\right)^2\label{eq:obj1}
\end{align}
Intuitively, in the above objective function, we approximate the forward node-attribute affinity $\mathbf{F}\xspace[v_i,r_j]$ between node $v_i$ and attribute $r_j$ using the dot product of their respective embedding vectors, \textit{i.e.}, $\mathbf{X}_f\xspace[v_i]\cdot \mathbf{Y}\xspace[r_j]^{\top}$. Similarly, we also approximate the backward node-attribute affinity using $\mathbf{X}_b\xspace[v_i]\cdot \mathbf{Y}\xspace[r_j]^{\top}$. The objective is then to minimize the total squared error of such approximations, over all nodes and all attributes in the input data.
\vspace{1mm} \noindent
{\bf Running Example.}
Assume that in the extended graph shown in Figure \ref{fig:toy}, all attribute weights in $E_R$ are $1$, and the random walk stopping probability $\alpha$ is set to $0.15$ \cite{jeh2003scaling,tong2006fast}. \tblref{tbl:toy} lists the target values, \textit{i.e.}, the exact forward and backward affinity values that, according to Equation \eqref{eq:obj1}, the inner products of the attribute embedding vectors of $r_1$-$r_3$ and the embedding vectors of $v_1$-$v_6$ should approximate. These values are calculated based on Equations \eqref{eq:fwd-prob} and \eqref{eq:bwd-prob}, using simulated random walks on the extended graph in \figref{fig:toy}. Observe, for example, that node $v_1$ has high affinity values (both forward and backward) with attribute $r_1$, which agrees with the intuition that $v_1$ is connected to $r_1$ via many different intermediate nodes, \textit{i.e.}, $v_3, v_4, v_5$. For node $v_5$, if only forward affinity is considered, observe that $v_5$ has a higher forward affinity value with $r_3$ than with $r_1$, which fails to reflect the fact that $v_5$ owns $r_1$ but not $r_3$, leading to wrong attribute inference. If both forward and backward affinity are considered, this issue is resolved.
\section{The PANE\xspace Algorithm}\label{sec:opt}
\begin{algorithm}[t]
\begin{small}
\caption{$\mathsf{PANE}$\xspace (single thread)}
\label{alg:mainopt}
\BlankLine
\KwIn{Attributed network $G$, space budget $k$, random walk stopping probability $\alpha$, error threshold $\epsilon$.}
\KwOut{Forward and backward embedding vectors $\mathbf{X}_f\xspace$, $\mathbf{X}_b\xspace$ and attribute embedding vectors $\mathbf{Y}\xspace$.}
$t\gets \frac{\log(\epsilon)}{\log(1-\alpha)}-1$\;
$\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime} \gets \mathsf{APMI}(\mathbf{P}\xspace, \mathbf{R}\xspace, \alpha, t)$\;
$\mathbf{X}_f\xspace,\mathbf{Y}\xspace, \mathbf{X}_b\xspace \gets \mathsf{SVDCCD}(\mathbf{F}\xspace^{\prime},\mathbf{B}\xspace^{\prime},k,t)$\;
\Return $\mathbf{X}_f\xspace,\mathbf{Y}\xspace,\mathbf{X}_b\xspace$\;
\end{small}
\end{algorithm}
\begin{algorithm}[!t]
\begin{small}
\caption{$\mathsf{APMI}$\xspace}
\label{alg:appr}
\KwIn{$\mathbf{P}\xspace$, $\mathbf{R}\xspace, \alpha, t$.}
\KwOut{$\mathbf{F}\xspace^{\prime},\mathbf{B}\xspace^{\prime}$.}
Compute $\mathbf{R}\xspace_r$ and $\mathbf{R}\xspace_c$ by \equref{eq:norm-r}\;
$\mathbf{P}\xspace_f^{(0)} \gets \mathbf{R}\xspace_r, \ \mathbf{P}\xspace_b^{(0)} \gets \mathbf{R}\xspace_c$\;
\For{$\ell \gets 1$ to $t$}{
$\mathbf{P}\xspace_f^{ (\ell)} \gets (1-\alpha)\cdot\mathbf{P}\xspace \mathbf{P}\xspace_f^{(\ell-1)} + \alpha\cdot\mathbf{P}\xspace_f^{(0)}$\;
$\mathbf{P}\xspace_b^{(\ell)} \gets (1-\alpha)\cdot\mathbf{P}\xspace^{\top} \mathbf{P}\xspace_b^{(\ell-1)} + \alpha\cdot\mathbf{P}\xspace_b^{(0)}$\;
}
Normalize $\mathbf{P}\xspace_f^{ (t)}$ by columns to get $\widehat{\mathbf{P}\xspace}_f^{ (t)}$\;
Normalize $\mathbf{P}\xspace_b^{ (t)}$ by rows to get $\widehat{\mathbf{P}\xspace}_b^{ (t)}$\;
$\mathbf{F}\xspace^{\prime} \gets \log(n\cdot \widehat{\mathbf{P}\xspace}_f^{ (t)}+1),\quad \mathbf{B}\xspace^{\prime} \gets \log(d\cdot \widehat{\mathbf{P}\xspace}_b^{ (t)}+1)$\;
\Return $\mathbf{F}\xspace^{\prime},\mathbf{B}\xspace^{\prime}$\;
\end{small}
\end{algorithm}
It is technically challenging to train embeddings of nodes and attributes that preserve our objective function in Equation \eqref{eq:obj1}, especially on massive attributed networks.
First, node-attribute affinity values are defined via random walks, which are expensive to simulate in huge numbers from every node and attribute of a massive graph in order to accurately estimate the affinity values of all possible node-attribute pairs.
Second, our objective function preserves both forward and backward affinity ({\it i.e.},\xspace considering edge directions), which makes the training process hard to converge.
Further, jointly preserving both forward and backward affinity involves intensive computations, severely dragging down the performance.
To address these technical challenges, we propose $\mathsf{PANE}$\xspace, which efficiently handles large-scale data and produces high-quality ANE results.
At a high level, $\mathsf{PANE}$\xspace consists of two phases: (i) iteratively computing approximated versions $\mathbf{F}\xspace'$ and $\mathbf{B}\xspace'$ of the forward and backward affinity matrices with rigorous approximation error guarantees, without actually sampling random walks
(\secref{sec:apmi}), and (ii) initializing the embedding vectors with a greedy algorithm for fast convergence, and
then jointly factorizing $\mathbf{F}\xspace'$ and $\mathbf{B}\xspace'$ using {\em cyclic coordinate descent} \cite{wright2015coordinate} to efficiently obtain the output embedding vectors $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$, and $\mathbf{Y}\xspace$ (\secref{sec:svdccd}).
Given an input attributed network $G$, space budget $k$, random walk stopping probability $\alpha$ and an error threshold $\epsilon$ as inputs, \algref{alg:mainopt} outlines the proposed $\mathsf{PANE}$\xspace algorithm in the single-threaded setting.
For ease of presentation, this section describes the single-threaded version of the proposed solution $\mathsf{PANE}$\xspace for ANE. The full version of $\mathsf{PANE}$\xspace that runs in multiple threads is explained later in Section \ref{sec:parallel}.
\vspace{-2mm}
\subsection{Forward and Backward Affinity Approximation}\label{sec:apmi}
In Section \ref{sec:objective}, node-attribute affinity values are defined using a large number of random walks, which are expensive to simulate on a massive graph.
For efficiency, in this section we transform the forward and backward affinity in Equations \eqref{eq:fwd-prob} and \eqref{eq:bwd-prob} into their matrix forms and propose $\mathsf{APMI}$\xspace in Algorithm \ref{alg:appr}, which efficiently approximates the forward and backward affinity matrices with an error guarantee in linear time, without actually sampling random walks.
Observe that in Equations \eqref{eq:fwd-prob} and \eqref{eq:bwd-prob}, the key for forward and backward affinity computation is to obtain $p_f(v_i,r_j)$ and $p_b(v_i,r_j)$ for every pair $(v_i,r_j)\in V\times R$. Recall that $p_f(v_i,r_j)$ is the probability that a forward random walk starting from node $v_i$ picks attribute $r_j$, while $p_b(v_i,r_j)$ is the probability of a backward random walk from attribute $r_j$ stopping at node $v_i$. Given nodes $v_i$ and $v_l$, denote $\pi(v_i,v_l)$ as the probability that a random walk starting from $v_i$ stops at $v_l$, {\it i.e.},\xspace the random walk score of $v_l$ with respect to $v_i$. By definition, $\textstyle p_f(v_i,r_j) =\sum_{v_l\in V}{\pi(v_i,v_l)\cdot{\mathbf{R}\xspace_r}[v_l,r_j]}$, where ${\mathbf{R}\xspace_r}[v_l,r_j]$ is the probability that node $v_l$ picks attribute $r_j$, according to Equation \eqref{eq:norm-r}.
Similarly, $p_b(v_i,r_j)$ is formulated as $\textstyle p_b(v_i,r_j) =\sum_{v_l\in V}{\mathbf{R}\xspace_c[v_l,r_j]\cdot \pi(v_l,v_i)}$, where $\mathbf{R}\xspace_c[v_l,r_j]$ is the probability that attribute $r_j$ picks node $v_l$ from all nodes having $r_j$ based on their attribute weights.
By the definition of random walk scores in \cite{jeh2003scaling,tong2006fast}, we can derive the matrix form of $p_f$ and $p_b$ as follows.
\begin{equation}\label{eq:fwd-bwd-prob-m}
\begin{split}
\mathbf{P}\xspace_f &= \alpha\sum_{\ell=0}^{\infty}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\ell}\cdot\mathbf{R}\xspace_r},\\
\mathbf{P}\xspace_b &= \alpha\sum_{\ell=0}^{\infty}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\top \ell}\cdot\mathbf{R}\xspace_c}.
\end{split}
\end{equation}
To avoid the infinite summations, we truncate them to $t$ iterations to approximate $\mathbf{P}\xspace_f$ and $\mathbf{P}\xspace_b$, as shown in \equref{eq:fwd-bwd-prob-m-t}, where $t$ is set to $\frac{\log(\epsilon)}{\log(1-\alpha)}-1$.
\begin{equation}\label{eq:fwd-bwd-prob-m-t}
\mathbf{P}\xspace^{(t)}_f = \alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\ell}\cdot\mathbf{R}\xspace_r},\quad \mathbf{P}\xspace^{(t)}_b = \alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\top \ell}\cdot\mathbf{R}\xspace_c}.
\end{equation}
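For instance, with $\alpha=0.15$ and $\epsilon=10^{-5}$, this gives $t=\frac{\log(10^{-5})}{\log(0.85)}-1\approx 69.8$, {\it i.e.},\xspace about $70$ iterations, after which the truncated tail mass $(1-\alpha)^{t+1}$ of the walk-length distribution is at most $\epsilon$ (see the proof of Lemma \ref{lem:apa}).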
Then, we normalize $\mathbf{P}\xspace_f^{ (t)}$ by columns and $\mathbf{P}\xspace_b^{ (t)}$ by rows as follows.
\begin{equation*}
\widehat{\mathbf{P}\xspace}_f^{ (t)}[v_i,r_j]=\frac{\mathbf{P}\xspace_f^{ (t)}[v_i,r_j]}{\sum_{v_l\in V}{\mathbf{P}\xspace_f^{ (t)}[v_l,r_j]}},\quad \widehat{\mathbf{P}\xspace}_b^{ (t)}[v_i,r_j]=\frac{\mathbf{P}\xspace_b^{ (t)}[v_i,r_j]}{\sum_{r_l\in R}{\mathbf{P}\xspace_b^{ (t)}[v_i,r_l]}}
\end{equation*}
After normalization, we compute $\mathbf{F}\xspace'$ and $\mathbf{B}\xspace'$ according to the definitions of forward and backward affinity as follows.
\begin{equation}\label{equ:approxFB}
\mathbf{F}\xspace^{\prime} = \log(n\cdot\widehat{\mathbf{P}\xspace}_f^{ (t)}+1),\quad \mathbf{B}\xspace^{\prime} = \log(d\cdot\widehat{\mathbf{P}\xspace}_b^{ (t)}+1)
\end{equation}
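Putting Equations \eqref{eq:fwd-bwd-prob-m-t}--\eqref{equ:approxFB} together, the whole approximation fits in a few lines of Python (a hedged sketch with our own names; it assumes every attribute has at least one associated node and every node at least one attribute, so that both normalizations are well defined):
\begin{verbatim}
# Hedged sketch of the APMI computation. P: n x n (sparse) random walk
# matrix; R_r, R_c: dense n x d normalized attribute matrices.
import numpy as np

def apmi(P, R_r, R_c, alpha, t):
    n, d = R_r.shape
    Pf, Pb = R_r.copy(), R_c.copy()
    for _ in range(t):                 # t sparse-dense products each
        Pf = (1 - alpha) * (P @ Pf) + alpha * R_r
        Pb = (1 - alpha) * (P.T @ Pb) + alpha * R_c
    Pf = Pf / Pf.sum(axis=0, keepdims=True)   # column-normalize
    Pb = Pb / Pb.sum(axis=1, keepdims=True)   # row-normalize
    return np.log(n * Pf + 1), np.log(d * Pb + 1)
\end{verbatim}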
\algref{alg:appr} shows the pseudo-code of $\mathsf{APMI}$\xspace to compute $\mathbf{F}\xspace'$ and $\mathbf{B}\xspace'$. Specifically, $\mathsf{APMI}$\xspace takes as inputs random walk matrix $\mathbf{P}\xspace$, attribute matrix $\mathbf{R}\xspace$, random walk stopping probability $\alpha$ and the number of iterations $t$. At Line 1, $\mathsf{APMI}$\xspace begins by computing row-normalized attribute matrix $\mathbf{R}\xspace_r$ and column-normalized attribute matrix $\mathbf{R}\xspace_c$ according to \equref{eq:norm-r}. Then, $\mathsf{APMI}$\xspace computes $\mathbf{P}\xspace^{(t)}_f$ and $\mathbf{P}\xspace^{(t)}_b$ based on \equref{eq:fwd-bwd-prob-m-t}. Note that $\mathbf{P}\xspace$ is sparse and has $m$ non-zero entries. Thus, the computations of $\alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\ell}}$ and $\alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\top \ell}}$ in \equref{eq:fwd-bwd-prob-m-t} need $O(mnt)$ time, which is prohibitively expensive on large graphs. We avoid such expensive overheads and achieve a time cost of $O(mdt)$ for computing $\mathbf{P}\xspace^{(t)}_f$ and $\mathbf{P}\xspace^{(t)}_b$ by an iterative process as follows.
Initially, we set $\mathbf{P}\xspace_f^{ (0)}=\mathbf{R}\xspace_r$ and $\mathbf{P}\xspace_b^{ (0)}=\mathbf{R}\xspace_c$ (Line 2). Then, we start an iterative process from Line 3 to 5 with $t$ iterations; at the $\ell$-th iteration, we compute $\mathbf{P}\xspace_f^{ (\ell)}=(1-\alpha)\cdot\mathbf{P}\xspace \mathbf{P}\xspace_f^{ (\ell-1)} + \alpha\cdot\mathbf{P}\xspace_f^{(0)}$ and $\mathbf{P}\xspace_b^{ (\ell)}=(1-\alpha)\cdot\mathbf{P}\xspace^{\top}\mathbf{P}\xspace_b^{ (\ell-1)} + \alpha\cdot\mathbf{P}\xspace_b^{(0)}$. After $t$ iterations, $\mathsf{APMI}$\xspace normalizes $\mathbf{P}\xspace_f^{ (t)}$ by column and $\mathbf{P}\xspace_b^{ (t)}$ by row (Lines 6-7). At Line 8, $\mathsf{APMI}$\xspace obtains $\mathbf{F}\xspace^{\prime}$ and $\mathbf{B}\xspace^{\prime}$ as the approximate forward and backward affinity matrices. The following lemma establishes the accuracy guarantee of $\mathsf{APMI}$\xspace.
\begin{lemma}\label{lem:apa}
Given $\mathbf{P}\xspace,\mathbf{R}\xspace,\alpha$, and $t=\frac{\log(\epsilon)}{\log(1-\alpha)}-1$ as inputs to \algref{alg:appr}, the returned approximate forward and backward affinity matrices $\mathbf{F}\xspace^{\prime}$, $\mathbf{B}\xspace^{\prime}$ satisfy, for every pair $(v_i,r_j)\in V\times R$,
\begin{align*}
&\textstyle \frac{2^{\mathbf{F}\xspace^{\prime}[v_i,r_j]}-1}{2^{\mathbf{F}\xspace[v_i,r_j]}-1}\in \left[\max\Big\{0,1-\frac{\epsilon}{\mathbf{P}\xspace_f[v_i,r_j]}\Big\}, 1+\frac{\epsilon}{\sum_{v_l\in V}{\max\{0,\mathbf{P}\xspace_f[v_l,r_j]-\epsilon\}}}\right],\\
&\textstyle \frac{2^{\mathbf{B}\xspace^{\prime}[v_i,r_j]}-1}{2^{\mathbf{B}\xspace[v_i,r_j]}-1}\in \left[\max\Big\{0,1-\frac{\epsilon}{\mathbf{P}\xspace_b[v_i,r_j]}\Big\}, 1+\frac{\epsilon}{\sum_{r_l\in R}{\max\{0,\mathbf{P}\xspace_b[v_i,r_l]-\epsilon\}}}\right].
\end{align*}
\begin{proof}
First, with $t=\frac{\log(\epsilon)}{\log(1-\alpha)}-1$, we have
\begin{equation}\label{eq:alpha-eps}
\textstyle\sum_{\ell=t+1}^{\infty}{\alpha(1-\alpha)^{\ell}}=1-\sum_{\ell=0}^{t}{\alpha(1-\alpha)^{\ell}}=(1-\alpha)^{t+1}=\epsilon.
\end{equation}
By the definitions of $\mathbf{P}\xspace_f,\mathbf{P}\xspace^{(t)}_f$ and $\mathbf{P}\xspace_b,\mathbf{P}\xspace^{(t)}_b$ ({\it i.e.},\xspace Equation \eqref{eq:fwd-bwd-prob-m} and Equation \eqref{eq:fwd-bwd-prob-m-t}), for every pair $(v_i,r_j)\in V\times R$,
\begin{align}
&\textstyle \mathbf{P}\xspace_f[v_i,r_j]-\mathbf{P}\xspace_f^{(t)}[v_i,r_j]=\sum_{\ell=t+1}^{\infty}{\alpha(1-\alpha)^{\ell}\mathbf{P}\xspace^{\ell}}[v_i]\cdot{\mathbf{R}\xspace_r}^{\top}[r_j]\nonumber\\
=&\textstyle \left(\sum_{\ell=t+1}^{\infty}{\alpha(1-\alpha)^{\ell}\mathbf{P}\xspace^{\ell}}\right)[v_i]\cdot{\mathbf{R}\xspace_r}^{\top}[r_j]\le \sum_{\ell=t+1}^{\infty}{\alpha(1-\alpha)^{\ell}}=\epsilon\nonumber,\\
&\textstyle \mathbf{P}\xspace_b[v_i,r_j]-\mathbf{P}\xspace_b^{(t)}[v_i,r_j] =\sum_{\ell=t+1}^{\infty}{\alpha(1-\alpha)^{\ell}\mathbf{P}\xspace^{\top\ell}[v_i]\cdot\mathbf{R}\xspace^{\top}_c[r_j]}\nonumber\\
\le&\textstyle\sum_{v_l\in V}{\left(\sum_{\ell=t+1}^{\infty}{\alpha(1-\alpha)^{\ell}}\right)\cdot\mathbf{R}\xspace_c[v_l,r_j]}\le \sum_{v_l\in V}{\epsilon\cdot\mathbf{R}\xspace_c[v_l,r_j]}=\epsilon\nonumber.
\end{align}
Based on the above inequalities, for every pair $(v_i,r_j)\in V\times R$,
\begin{align}
\max\{0, \mathbf{P}\xspace_f[v_i,r_j]-\epsilon\}\le \mathbf{P}\xspace^{(t)}_f[v_i,r_j] \le \mathbf{P}\xspace_f[v_i,r_j],\label{eq:fwd-v-r}\\
\max\{0, \mathbf{P}\xspace_b[v_i,r_j]-\epsilon\}\le \mathbf{P}\xspace^{(t)}_b[v_i,r_j] \le \mathbf{P}\xspace_b[v_i,r_j]\label{eq:bwd-v-r}.
\end{align}
According to Lines 6-8 in Algorithm \ref{alg:appr}, for every pair $(v_i,r_j)\in V\times R$,
\begin{align}
\textstyle\frac{2^{\mathbf{F}\xspace^{\prime}[v_i,r_j]}-1}{2^{\mathbf{F}\xspace[v_i,r_j]}-1}&=\textstyle\frac{\widehat{\mathbf{P}\xspace}^{(t)}_f[v_i,r_j]}{\widehat{\mathbf{P}\xspace}_f[v_i,r_j]}=\frac{\mathbf{P}\xspace^{(t)}_f[v_i,r_j]}{\sum_{v_l\in V}{\mathbf{P}\xspace^{(t)}_f[v_l,r_j]}}\times \frac{\sum_{v_l\in V}{\mathbf{P}\xspace_f[v_l,r_j]}}{\mathbf{P}\xspace_f[v_i,r_j]}\label{eq:flog},\\
\textstyle\frac{2^{\mathbf{B}\xspace^{\prime}[v_i,r_j]}-1}{2^{\mathbf{B}\xspace[v_i,r_j]}-1}
&=\textstyle\frac{\widehat{\mathbf{P}\xspace}^{(t)}_b[v_i,r_j]}{\widehat{\mathbf{P}\xspace}_b[v_i,r_j]}=\frac{\mathbf{P}\xspace^{(t)}_b[v_i,r_j]}{\sum_{r_l\in R}{\mathbf{P}\xspace^{(t)}_b[v_i,r_l]}}\times \frac{\sum_{r_l\in R}{\mathbf{P}\xspace_b[v_i,r_l]}}{\mathbf{P}\xspace_b[v_i,r_j]}.\label{eq:blog}
\end{align}
Plugging Inequalities \eqref{eq:fwd-v-r} and \eqref{eq:bwd-v-r} into Equations \eqref{eq:flog} and \eqref{eq:blog} leads to the desired results, which completes our proof.
\end{proof}
\end{lemma}
\vspace{-1mm}
\subsection{Joint Factorization of Affinity Matrices}\label{sec:svdccd}
This section presents the proposed algorithm $\mathsf{SVDCCD}$\xspace, outlined in \algref{alg:svdccd}, which jointly factorizes the approximate forward and backward affinity matrices $\mathbf{F}\xspace^{\prime}$ and $\mathbf{B}\xspace^{\prime}$, in order to obtain the embedding vectors of all nodes and attributes, {\it i.e.},\xspace $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$, and $\mathbf{Y}\xspace$. As the name suggests, the proposed $\mathsf{SVDCCD}$\xspace solver is based on the \textit{cyclic coordinate descent} (\textit{CCD}) framework, which iteratively updates each embedding value towards optimizing the objective function in Equation \eqref{eq:obj1}. The problem, however, is that a direct application of CCD, starting from random initial values of the embeddings, requires numerous iterations to converge, leading to prohibitive overheads.
Furthermore, CCD computation itself is expensive, especially on large-scale graphs.
To overcome these challenges, we first propose a greedy initialization method to facilitate fast convergence, and then design techniques for efficient refinement of the initial embeddings, including dynamic maintenance and partial updates of intermediate results to avoid redundant computations in CCD.
\vspace{1mm} \noindent
\textbf{Greedy initialization.} In many optimization problems, a good initialization is the key to efficiency. Thus, a key component of the proposed $\mathsf{SVDCCD}$\xspace algorithm is such an initialization of embedding values, based on {\em singular value decomposition} (\textit{SVD}) \cite{golub1971singular}. Note that unlike other matrix factorization problems, here SVD by itself cannot solve our problem, because the objective function in Equation \eqref{eq:obj1} requires the joint factorization of the forward and backward affinity matrices, which cannot be directly addressed with SVD.
\algref{alg:isvd} describes the proposed $\mathsf{GreedyInit}$\xspace module of $\mathsf{SVDCCD}$\xspace, which initializes embeddings $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$, and $\mathbf{Y}\xspace$.
Specifically, the algorithm first employs an efficient randomized SVD algorithm \cite{musco2015randomized} at Line 1 to decompose $\mathbf{F}\xspace'$ into $\mathbf{U}\xspace\in \mathbb{R}^{n\times \frac{k}{2}},\boldsymbol{\Sigma}\in \mathbb{R}^{\frac{k}{2}\times \frac{k}{2}}$, $\mathbf{V}\xspace\in \mathbb{R}^{d\times \frac{k}{2}}$, and then initializes $\mathbf{X}_f\xspace=\mathbf{U}\xspace\mathbf{\Sigma}$ and $\mathbf{Y}\xspace=\mathbf{V}\xspace$ at Line 2, which satisfies $\mathbf{X}_f\xspace\cdot\mathbf{Y}\xspace^{\top}\approx \mathbf{F}\xspace^{\prime}$. In other words, this initialization immediately gains a good approximation of the forward affinity matrix.
Recall that our objective function in \equref{eq:obj1} also aims to find $\mathbf{X}_b\xspace$ such that $\mathbf{X}_b\xspace\mathbf{Y}\xspace^{\top}\approx \mathbf{B}\xspace'$, {\it i.e.},\xspace to approximate the backward affinity matrix well. The key observation of the algorithm is that the matrix $\mathbf{V}\xspace$ ({\it i.e.},\xspace $\mathbf{Y}\xspace$) returned by exact SVD has orthonormal columns, {\it i.e.},\xspace $\mathbf{Y}\xspace^{\top}\mathbf{Y}\xspace=\mathbf{I}$, which implies that $\mathbf{X}_b\xspace\approx\mathbf{X}_b\xspace\mathbf{Y}\xspace^{\top}\mathbf{Y}\xspace\approx\mathbf{B}\xspace^{\prime}\mathbf{Y}\xspace$. Accordingly, we seed $\mathbf{X}_b\xspace$ with $\mathbf{B}\xspace'\mathbf{Y}\xspace$ at Line 2 of \algref{alg:isvd}. This initialization of $\mathbf{X}_b\xspace$ also leads to a relatively good approximation of the backward affinity matrix. Consequently, the number of iterations required by $\mathsf{SVDCCD}$\xspace is drastically reduced, as confirmed by our experiments in Section \ref{sec:exp}.
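This initialization can be sketched as follows (a hedged illustration in which scikit-learn's \texttt{randomized\_svd} stands in for the randomized SVD of \cite{musco2015randomized}; all variable names are ours):
\begin{verbatim}
# Hedged sketch of GreedyInit; F, B are the n x d affinity matrices.
import numpy as np
from sklearn.utils.extmath import randomized_svd

def greedy_init(F, B, k):
    U, sigma, Vt = randomized_svd(F, n_components=k // 2, random_state=0)
    Y = Vt.T               # attribute embeddings (d x k/2), Y.T @ Y ~ I
    Xf = U * sigma         # forward embeddings, so Xf @ Y.T ~ F
    Xb = B @ Y             # backward seed: Xb @ Y.T ~ B since Y.T @ Y ~ I
    Sf = Xf @ Y.T - F      # residuals maintained for the CCD phase
    Sb = Xb @ Y.T - B
    return Xf, Xb, Y, Sf, Sb
\end{verbatim}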
\vspace{1mm} \noindent
\textbf{Efficient refinement of the initial embeddings.} In \algref{alg:svdccd}, after initializing $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$ and $\mathbf{Y}\xspace$ at Line 1, we apply cyclic coordinate descent to refine the embedding vectors according to our objective function in \equref{eq:obj1} from Lines 2 to 14. The basic idea of CCD is to cyclically iterate through all entries in $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$ and $\mathbf{Y}\xspace$, one at a time, minimizing the objective function with respect to each entry ({\it i.e.},\xspace coordinate direction). Specifically, in each iteration, CCD updates each entry of $\mathbf{X}_f\xspace, \mathbf{X}_b\xspace$ and $\mathbf{Y}\xspace$ according to the following rules:
\begin{align}
\mathbf{X}_f\xspace[v_i,l]\gets& \mathbf{X}_f\xspace[v_i,l]-\mu_f(v_i,l),\label{eq:xf-update}\\
\mathbf{X}_b\xspace[v_i,l]\gets& \mathbf{X}_b\xspace[v_i,l]-\mu_b(v_i,l),\label{eq:xb-update}\\
\mathbf{Y}\xspace[r_j,l]\gets& \mathbf{Y}\xspace[r_j,l]-\mu_y(r_j,l)\label{eq:y-update},
\end{align}
with $\mu_f(v_i,l),\mu_b(v_i,l)$ and $\mu_y(r_j,l)$ computed by:
\begin{align}
\mu_f(v_i,l)= \frac{\mathbf{S}\xspace_f[v_i]\cdot\mathbf{Y}\xspace[:,l]}{\mathbf{Y}\xspace^{\top}[l]\cdot\mathbf{Y}\xspace[:,l]},\quad \mu_b(v_i,l)= \frac{\mathbf{S}\xspace_b[v_i]\cdot\mathbf{Y}\xspace[:,l]}{\mathbf{Y}\xspace^{\top}[l]\cdot\mathbf{Y}\xspace[:,l]},\label{eq:update-x-mu}\\
\mu_y(r_j,l)= \frac{\mathbf{X}_f\xspace^{\top}[l]\cdot\mathbf{S}\xspace_f[:,r_j]+\mathbf{X}_b\xspace^{\top}[l]\cdot\mathbf{S}\xspace_b[:,r_j]}{\mathbf{X}_f\xspace^{\top}[l]\cdot\mathbf{X}_f\xspace[:,l]+\mathbf{X}_b\xspace^{\top}[l]\cdot\mathbf{X}_b\xspace[:,l]},\label{eq:update-y-mu}\quad
\end{align}
where $\mathbf{S}\xspace_f=\mathbf{X}_f\xspace\mathbf{Y}\xspace^{\top}-\mathbf{F}\xspace^{\prime}$ and $\mathbf{S}\xspace_b=\mathbf{X}_b\xspace\mathbf{Y}\xspace^{\top}-\mathbf{B}\xspace^{\prime}$ are obtained at Line 3 in \algref{alg:isvd}.
However, directly applying the above updating rules to learn $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$, and $\mathbf{Y}\xspace$ is inefficient, leading to many redundant matrix operations.
Lines 2-14 in \algref{alg:svdccd} show how to efficiently apply the above updating rules by dynamically maintaining and partially updating intermediate results.
Specifically, each iteration in Lines 3-14 first fixes $\mathbf{Y}\xspace$ and updates each row of $\mathbf{X}_f\xspace$ and $\mathbf{X}_b\xspace$ (Lines 3-9), and then updates each column of $\mathbf{Y}\xspace$ with $\mathbf{X}_f\xspace$ and $\mathbf{X}_b\xspace$ fixed (Lines 10-14).
According to Equations \eqref{eq:update-x-mu} and \eqref{eq:update-y-mu},
$\mu_f(v_i,l)$, $\mu_b(v_i,l)$, and $\mu_y(r_j,l)$ are pertinent to $\mathbf{S}\xspace_f[v_i]$, $\mathbf{S}\xspace_b[v_i]$, and $\mathbf{S}\xspace_f[:,r_j], \mathbf{S}\xspace_b[:,r_j]$ respectively, where $\mathbf{S}\xspace_f$ and $\mathbf{S}\xspace_b$ further depend on embedding vectors $\mathbf{X}_f\xspace$, $\mathbf{X}_b\xspace$ and $\mathbf{Y}\xspace$. Therefore, whenever $\mathbf{X}_f\xspace[v_i,l], \mathbf{X}_b\xspace[v_i,l]$, and $\mathbf{Y}\xspace[r_j,l]$ are updated in the iteration (Lines 6-7 and Line 13), $\mathbf{S}\xspace_f$ and $\mathbf{S}\xspace_b$ need to be updated accordingly. Directly recomputing $\mathbf{S}\xspace_f$ and $\mathbf{S}\xspace_b$ by $\mathbf{S}\xspace_f=\mathbf{X}_f\xspace\mathbf{Y}\xspace^{\top}-\mathbf{F}\xspace^{\prime}$ and $\mathbf{S}\xspace_b=\mathbf{X}_b\xspace\mathbf{Y}\xspace^{\top}-\mathbf{B}\xspace^{\prime}$ whenever an entry in $\mathbf{X}_f\xspace$, $\mathbf{X}_b\xspace$, and $\mathbf{Y}\xspace$ is updated is expensive.
Instead, we dynamically maintain and partially update $\mathbf{S}\xspace_f$ and $\mathbf{S}\xspace_b$ according to Equations \eqref{eq:update-x-sf}, \eqref{eq:update-x-sb} and \eqref{eq:update-y-sf-sb}.
Specifically, when $\mathbf{X}_f\xspace[v_i,l]$ and $\mathbf{X}_b\xspace[v_i,l]$ are updated (Lines 6-7), we update $\mathbf{S}\xspace_f[v_i]$ and $\mathbf{S}\xspace_b[v_i]$ respectively with $O(d)$ time at Lines 8-9 by
\begin{align}
\mathbf{S}\xspace_f[v_i]&\gets\mathbf{S}\xspace_f[v_i]-\mu_f(v_i,l)\cdot \mathbf{Y}\xspace[:,l]^{\top},\label{eq:update-x-sf}\\
\mathbf{S}\xspace_b[v_i]&\gets\mathbf{S}\xspace_b[v_i]-\mu_b(v_i,l)\cdot \mathbf{Y}\xspace[:,l]^{\top}.\label{eq:update-x-sb}
\end{align}
Whenever $\mathbf{Y}\xspace[r_j,l]$ is updated at Line 13, both $\mathbf{S}\xspace_f[:,r_j]$ and $\mathbf{S}\xspace_b[:,r_j]$ are updated in $O(n)$ time at Line 14 by
\begin{equation}\label{eq:update-y-sf-sb}
\begin{split}
\mathbf{S}\xspace_f[:,r_j]\gets\mathbf{S}\xspace_f[:,r_j]-\mu_y(r_j,l)\cdot \mathbf{X}_f\xspace[:,l],\\ \mathbf{S}\xspace_b[:,r_j]\gets\mathbf{S}\xspace_b[:,r_j]-\mu_y(r_j,l)\cdot \mathbf{X}_b\xspace[:,l].
\end{split}
\end{equation}
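For concreteness, one CCD sweep with these partial updates can be sketched as follows (a hedged NumPy illustration of Lines 3-14 of \algref{alg:svdccd}; names are ours and all arrays are modified in place):
\begin{verbatim}
# Hedged sketch of one CCD sweep with O(d)/O(n) partial residual updates.
import numpy as np

def ccd_sweep(Xf, Xb, Y, Sf, Sb):
    n, half_k = Xf.shape
    d = Y.shape[0]
    for i in range(n):                  # update rows of Xf and Xb
        for l in range(half_k):
            denom = Y[:, l] @ Y[:, l]
            mu_f = (Sf[i] @ Y[:, l]) / denom
            mu_b = (Sb[i] @ Y[:, l]) / denom
            Xf[i, l] -= mu_f
            Xb[i, l] -= mu_b
            Sf[i] -= mu_f * Y[:, l]     # O(d) partial update of S_f[v_i]
            Sb[i] -= mu_b * Y[:, l]     # O(d) partial update of S_b[v_i]
    for j in range(d):                  # update rows of Y
        for l in range(half_k):
            denom = Xf[:, l] @ Xf[:, l] + Xb[:, l] @ Xb[:, l]
            mu_y = (Xf[:, l] @ Sf[:, j] + Xb[:, l] @ Sb[:, j]) / denom
            Y[j, l] -= mu_y
            Sf[:, j] -= mu_y * Xf[:, l]  # O(n) partial update
            Sb[:, j] -= mu_y * Xb[:, l]  # O(n) partial update
\end{verbatim}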
\begin{algorithm}[!t]
\begin{small}
\caption{$\mathsf{GreedyInit}$\xspace}
\label{alg:isvd}
\KwIn{$\mathbf{F}\xspace^{\prime},\mathbf{B}\xspace^{\prime}, k, t$.}
\KwOut{$\mathbf{X}_f\xspace,\mathbf{X}_b\xspace,\mathbf{Y}\xspace,\mathbf{S}\xspace_f,\mathbf{S}\xspace_b$.}
$\mathbf{U}\xspace, \boldsymbol{\Sigma}, \mathbf{V}\xspace \gets \mathsf{RandSVD}(\mathbf{F}\xspace^{\prime},\frac{k}{2},t)$\;
$\mathbf{Y}\xspace\gets \mathbf{V}\xspace,\ \mathbf{X}_f\xspace \gets \mathbf{U}\xspace\boldsymbol{\Sigma},\ \mathbf{X}_b\xspace \gets \mathbf{B}\xspace^{\prime}\cdot\mathbf{Y}\xspace$\;
$\mathbf{S}\xspace_f \gets \mathbf{X}_f\xspace\mathbf{Y}\xspace^{\top}-\mathbf{F}\xspace^{\prime},\ \mathbf{S}\xspace_b \gets \mathbf{X}_b\xspace\mathbf{Y}\xspace^{\top}-\mathbf{B}\xspace^{\prime}$\;
\Return $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace,\mathbf{Y}\xspace,\mathbf{S}\xspace_f,\mathbf{S}\xspace_b$\;
\end{small}
\end{algorithm}
\begin{algorithm}[t]
\begin{small}
\caption{$\mathsf{SVDCCD}$\xspace}
\label{alg:svdccd}
\BlankLine
\KwIn{$\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime}$, $k$, $t$.}
\KwOut{$\mathbf{X}_f\xspace,\mathbf{Y}\xspace,\mathbf{X}_b\xspace$.}
$\mathbf{X}_f\xspace,\mathbf{X}_b\xspace,\mathbf{Y}\xspace,\mathbf{S}\xspace_f,\mathbf{S}\xspace_b \gets \mathsf{GreedyInit}(\mathbf{F}\xspace^{\prime},\mathbf{B}\xspace^{\prime}, k, t)$\;
\For{$\ell\gets 1$ to $t$}{
\For{$v_i\in V$}{
\For{$l\gets 1$ to $\frac{k}{2}$}{
Compute $\mu_f(v_i,l),\mu_b(v_i,l)$ by \equref{eq:update-x-mu}\;
$\mathbf{X}_f\xspace[v_i,l]\gets \mathbf{X}_f\xspace[v_i,l]-\mu_f(v_i,l)$\;
$\mathbf{X}_b\xspace[v_i,l]\gets \mathbf{X}_b\xspace[v_i,l]-\mu_b(v_i,l)$\;
Update $\mathbf{S}\xspace_f[v_i]$ by \equref{eq:update-x-sf}\;
Update $\mathbf{S}\xspace_b[v_i]$ by \equref{eq:update-x-sb}\;
}
}
\For{$r_j\in R$}{
\For{$l\gets 1$ to $\frac{k}{2}$}{
Compute $\mu_y(r_j,l)$ by \equref{eq:update-y-mu}\;
$\mathbf{Y}\xspace[r_j,l]\gets \mathbf{Y}\xspace[r_j,l]-\mu_y(r_j,l)$\;
Update $\mathbf{S}\xspace_f[:,r_j],\mathbf{S}\xspace_b[:,r_j]$ by \equref{eq:update-y-sf-sb}\;
}
}
}
\Return $\mathbf{X}_f\xspace,\mathbf{Y}\xspace,\mathbf{X}_b\xspace$\;
\end{small}
\end{algorithm}
\vspace{-1mm}
\subsection{Complexity Analysis}\label{sec:algo-als}
In the proposed algorithm $\mathsf{PANE}$\xspace (\algref{alg:mainopt}), the maximum random walk length is $t=\frac{\log(\epsilon)}{\log(1-\alpha)}-1=\frac{\log(\frac{1}{\epsilon})}{\log(\frac{1}{1-\alpha})}-1$. According to \secref{sec:apmi}, \algref{alg:appr} runs in time $\textstyle O\left(md\cdot t\right)=O\left(md\cdot\log\frac{1}{\epsilon}\right)$. Meanwhile, according to \cite{musco2015randomized}, given $\mathbf{F}\xspace^{\prime}\in \mathbb{R}^{n\times d}$ as input, $\mathsf{RandSVD}$ in \algref{alg:isvd} requires $O\left(ndkt\right)$ time, where $n$, $d$, $k$ are the number of nodes, the number of attributes, and the embedding space budget, respectively. The computation of $\mathbf{S}\xspace_f,\mathbf{S}\xspace_b$ costs $O(ndk)$ time. In addition, the $t$ iterations of CCD for updating $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$ and $\mathbf{Y}\xspace$ take $O(ndkt) = O(ndk\log\frac{1}{\epsilon})$ time. Therefore, the overall time complexity of \algref{alg:mainopt} is $O\left((md+ndk)\cdot\log\left(\frac{1}{\epsilon}\right)\right).$ The memory consumption of the intermediate results in \algref{alg:mainopt}, {\it i.e.},\xspace $\mathbf{F}\xspace^{\prime}$, $\mathbf{B}\xspace^{\prime}$, $\mathbf{U}\xspace,\boldsymbol{\Sigma},\mathbf{V}\xspace$, $\mathbf{S}\xspace_f$, $\mathbf{S}\xspace_b$, is at most $O(nd)$. Hence, the space complexity of \algref{alg:mainopt} is bounded by $O(nd+m)$.
\section{Parallelization of PANE\xspace}\label{sec:parallel}
\begin{algorithm}[t]
\begin{small}
\caption{$\mathsf{PANE}$\xspace}
\label{alg:mainoptp}
\BlankLine
\KwIn{Attributed network $G$, space budget $k$, random walk stopping probability $\alpha$, error threshold $\epsilon$, the number of threads $n_b$.}
\KwOut{Forward and backward embedding vectors $\mathbf{X}_f\xspace$, $\mathbf{X}_b\xspace$ and attribute embedding vectors $\mathbf{Y}\xspace$.}
Partition $V$ into $n_b$ subsets $\mathcal{V}\gets\{V_1,\cdots,V_{n_b}\}$ equally\;
Partition $R$ into $n_b$ subsets $\mathcal{R}\gets\{R_1,\cdots,R_{n_b}\}$ equally\;
$t\gets \frac{\log(\epsilon)}{\log(1-\alpha)}-1$\;
$\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime} \gets \mathsf{PAPMI}(\mathbf{P}\xspace,\mathbf{R}\xspace,\alpha,t,\mathcal{V},\mathcal{R})$\;
$\mathbf{X}_f\xspace, \mathbf{Y}\xspace, \mathbf{X}_b\xspace \gets \mathsf{PSVDCCD}(\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime},\mathcal{V},\mathcal{R},k,t)$\;
\Return $\mathbf{X}_f\xspace,\mathbf{Y}\xspace,\mathbf{X}_b\xspace$\;
\end{small}
\end{algorithm}
Although single-thread $\mathsf{PANE}$\xspace ({\it i.e.},\xspace \algref{alg:mainopt}) runs in time linear in the size of the input attributed network, it still requires substantial time to handle large-scale attributed networks in practice.
For instance, on the {\em MAG} dataset, which has $59.3$ million nodes, $\mathsf{PANE}$\xspace (single thread) takes about five days.
To further boost efficiency, in this section we develop a parallel version of $\mathsf{PANE}$\xspace (Algorithm \ref{alg:mainoptp}), which takes only $11.9$ hours on {\em MAG} when using $10$ threads ({\it i.e.},\xspace up to 10 times speedup).
Note that it is challenging to develop a parallel algorithm achieving such near-linear scalability with the number of threads on a multi-core CPU.
Specifically, $\mathsf{PANE}$\xspace involves various computational patterns, including intensive matrix computation, factorization, and CCD updates.
Therefore, it is non-trivial to assign the computing tasks of both nodes and attributes to threads so as to fully utilize the parallel computing power. Moreover, it is also challenging to maintain the intermediate results of each thread and combine them into the final result.
To this end, we propose several parallelization techniques for $\mathsf{PANE}$\xspace.
In the first phase, we adopt block matrix multiplication \cite{golub1996matrix} and propose $\mathsf{PAPMI}$\xspace to compute forward and backward affinity matrices in a parallel manner (Section \ref{sec:papa}). In the second phase, we develop $\mathsf{PSVDCCD}$\xspace with a split-merge-based parallel SVD technique to efficiently decompose affinity matrices, and further propose a parallel CCD technique to refine the embeddings efficiently (Section \ref{sec:split-merge}).
\algref{alg:mainoptp} illustrates the pseudo-code of parallel $\mathsf{PANE}$\xspace. Compared to the single-thread version, parallel $\mathsf{PANE}$\xspace takes as input an additional parameter, the number of threads $n_b$, and randomly partitions the node set $V$, as well as the attribute set $R$, into $n_b$ equal-sized subsets, denoted as $\mathcal{V}$ and $\mathcal{R}$, respectively (Lines 1-2). $\mathsf{PANE}$\xspace invokes $\mathsf{PAPMI}$\xspace (\algref{alg:rpapr}) at Line 4 to get $\mathbf{F}\xspace'$ and $\mathbf{B}\xspace'$, and then invokes $\mathsf{PSVDCCD}$\xspace (\algref{alg:psvdccd}) to refine the embeddings.
Note that the parallel version of $\mathsf{PANE}$\xspace does not return exactly the same outputs as the single-thread version, as some modules (\textit{e.g.}, the parallel version of SVD) introduce additional error. Nevertheless, as the experiments in Section \ref{sec:exp} demonstrate, the degradation of result utility in parallel $\mathsf{PANE}$\xspace is small, while the speedup is significant.
\vspace{-1mm}
\subsection{Parallel Forward and Backward Affinity Approximation}\label{sec:papa}
\begin{algorithm}[!t]
\begin{small}
\caption{$\mathsf{PAPMI}$\xspace}
\label{alg:rpapr}
\BlankLine
\KwIn{$\mathbf{P}\xspace,\mathbf{R}\xspace,\alpha,t,\mathcal{V},\mathcal{R}$}
\KwOut{$\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime}$}
Compute $\mathbf{R}\xspace_r$ and $\mathbf{R}\xspace_c$ by \equref{eq:norm-r}\;
\textbf{parallel} \For{$R_i\in \mathcal{R}$}{
{${\mathbf{P}\xspace_f}^{ (0)}_i \gets \mathbf{R}\xspace_r[:,R_i], {\mathbf{P}\xspace_b}^{ (0)}_i \gets \mathbf{R}\xspace_c[:,R_i]$}\;
\For{$\ell \gets 1$ to $t$}{
${\mathbf{P}\xspace_f}^{ (\ell)}_i \gets (1-\alpha)\cdot\mathbf{P}\xspace {\mathbf{P}\xspace_f}^{ (\ell-1)}_i + \alpha\cdot{\mathbf{P}\xspace_f}^{ (0)}_i$\;
${\mathbf{P}\xspace_b}^{ (\ell)}_i \gets (1-\alpha)\cdot\mathbf{P}\xspace^{\top} {\mathbf{P}\xspace_b}^{ (\ell-1)}_i + \alpha\cdot{\mathbf{P}\xspace_b}^{ (0)}_i$\;
}
}
\setcounter{AlgoLine}{6}
${\mathbf{P}\xspace_f}^{ (t)}\gets [{\mathbf{P}\xspace_f}^{ (t)}_1\cdots {\mathbf{P}\xspace_f}^{ (t)}_{n_b}]$\;
${\mathbf{P}\xspace_b}^{ (t)}\gets [{\mathbf{P}\xspace_b}^{ (t)}_1\cdots {\mathbf{P}\xspace_b}^{ (t)}_{n_b}]$\;
{\nonl{Lines 9-10 are the same as Lines 6-7 in Algorithm \ref{alg:appr}}\;
\setcounter{AlgoLine}{10}}
\textbf{parallel} \For{$V_i\in \mathcal{V}$}{
$\mathbf{F}\xspace^{\prime}[V_i] \gets \log(n\cdot {\widehat{\mathbf{P}\xspace}_f}^{ (t)}[V_i] +1)$\;
$\mathbf{B}\xspace^{\prime}[V_i] \gets \log(d\cdot {\widehat{\mathbf{P}\xspace}_b}^{ (t)}[V_i] +1)$\;
}
\setcounter{AlgoLine}{13}
\Return $\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime}$
\end{small}
\end{algorithm}
We propose $\mathsf{PAPMI}$\xspace in Algorithm \ref{alg:rpapr} to estimate $\mathbf{F}\xspace'$ and $\mathbf{B}\xspace'$ in parallel. After obtaining $\mathbf{R}\xspace_r$ and $\mathbf{R}\xspace_c$ based on Equation \eqref{eq:norm-r} at Line 1, $\mathsf{PAPMI}$\xspace divides $\mathbf{R}\xspace_r$ and $\mathbf{R}\xspace_c$ into matrix blocks according to two input parameters, the node subsets $\mathcal{V}=\{V_1,V_2,\cdots,V_{n_b}\}$ and attribute subsets $\mathcal{R}=\{R_1,R_2,\cdots,R_{n_b}\}$.
Then, $\mathsf{PAPMI}$\xspace parallelizes the matrix multiplications for computing $\mathbf{P}\xspace^{(t)}_f$ and $\mathbf{P}\xspace^{(t)}_b$ from Line 2 to 6, using $n_b$ threads in $t$ iterations. Specifically, the $i$-th thread initializes ${\mathbf{P}\xspace_f}^{ (0)}_i$ by $\mathbf{R}\xspace_r[:,R_i]$ and ${\mathbf{P}\xspace_b}^{ (0)}_i$ by $\mathbf{R}\xspace_c[:,R_i]$ (Line 3), and then computes ${\mathbf{P}\xspace_f}^{ (\ell)}_i = (1-\alpha)\cdot\mathbf{P}\xspace {\mathbf{P}\xspace_f}^{ (\ell-1)}_i + \alpha\cdot{\mathbf{P}\xspace_f}^{ (0)}_i$ and ${\mathbf{P}\xspace_b}^{ (\ell)}_i = (1-\alpha)\cdot\mathbf{P}\xspace^{\top} {\mathbf{P}\xspace_b}^{ (\ell-1)}_i + \alpha\cdot{\mathbf{P}\xspace_b}^{ (0)}_i$ (Lines 4-6).
Then, we use a main thread to aggregate the partial results of all threads at Lines 7-8. Specifically, the $n_b$ matrix blocks ${\mathbf{P}\xspace_f}^{ (t)}_i$ (resp.\ ${\mathbf{P}\xspace_b}^{ (t)}_i$) are concatenated horizontally as ${\mathbf{P}\xspace_f}^{ (t)}$ (resp.\ ${\mathbf{P}\xspace_b}^{ (t)}$) at Line 7 (resp.\ Line 8). At Lines 9-10, we normalize ${\mathbf{P}\xspace_f}^{ (t)}$ and ${\mathbf{P}\xspace_b}^{ (t)}$ in the same way as Lines 6-7 in Algorithm \ref{alg:appr} to obtain ${\widehat{\mathbf{P}\xspace}_f}^{ (t)}$ and ${\widehat{\mathbf{P}\xspace}_b}^{ (t)}$.
From Lines 11 to 13, $\mathsf{PAPMI}$\xspace starts $n_b$ threads to compute $\mathbf{F}\xspace'$ and $\mathbf{B}\xspace'$ block by block in parallel, based on the definitions of forward and backward affinity. Specifically, the $i$-th thread computes $\mathbf{F}\xspace^{\prime}[V_i]=\log(n\cdot {\widehat{\mathbf{P}\xspace}_f}^{ (t)}[V_i] +1)$ and $\mathbf{B}\xspace^{\prime}[V_i]=\log(d\cdot {\widehat{\mathbf{P}\xspace}_b}^{ (t)}[V_i] +1)$.
Finally, $\mathsf{PAPMI}$\xspace returns $\mathbf{F}\xspace^{\prime}$ and $\mathbf{B}\xspace^{\prime}$ as the approximate forward and backward affinity matrices (Line 14). \lemref{lem:papa} indicates the accuracy guarantee of $\mathsf{PAPMI}$\xspace.
\begin{lemma}\label{lem:papa}
Given the same parameters $\mathbf{P}\xspace,\mathbf{R}\xspace,\alpha$ and $t$ as inputs to \algref{alg:appr} and \algref{alg:rpapr}, the two algorithms return the same approximate forward and backward affinity matrices $\mathbf{F}\xspace^{\prime}$, $\mathbf{B}\xspace^{\prime}$.
\end{lemma}
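The column-block parallelism of $\mathsf{PAPMI}$\xspace can be sketched as follows (a hedged illustration using a Python thread pool; NumPy/SciPy matrix products largely release the interpreter lock, so the blocks can proceed concurrently; the block split and all names are ours):
\begin{verbatim}
# Hedged sketch of PAPMI's per-block random walk iterations.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def papmi(P, R_r, R_c, alpha, t, n_b):
    n, d = R_r.shape
    blocks = np.array_split(np.arange(d), n_b)  # attribute subsets R_1..R_nb

    def walk_block(cols):                       # Lines 3-6, one block
        Rr_b, Rc_b = R_r[:, cols], R_c[:, cols]
        Pf, Pb = Rr_b, Rc_b
        for _ in range(t):
            Pf = (1 - alpha) * (P @ Pf) + alpha * Rr_b
            Pb = (1 - alpha) * (P.T @ Pb) + alpha * Rc_b
        return Pf, Pb

    with ThreadPoolExecutor(max_workers=n_b) as pool:
        parts = list(pool.map(walk_block, blocks))
    Pf = np.hstack([p[0] for p in parts])       # Lines 7-8: concatenate
    Pb = np.hstack([p[1] for p in parts])
    Pf = Pf / Pf.sum(axis=0, keepdims=True)     # same normalization as APMI
    Pb = Pb / Pb.sum(axis=1, keepdims=True)
    return np.log(n * Pf + 1), np.log(d * Pb + 1)
\end{verbatim}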
\begin{algorithm}[!t]
\begin{small}
\caption{$\mathsf{SMGreedyInit}$\xspace}
\label{alg:split-merge}
\KwIn{$\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime}, \mathcal{V}, k, t$.}
\KwOut{$\mathbf{X}_f\xspace, \mathbf{X}_b\xspace, \mathbf{Y}\xspace, \mathbf{S}\xspace_f, \mathbf{S}\xspace_b$.}
\textbf{parallel} \For{$V_i\in \mathcal{V}$}{
$\boldsymbol{\Phi}, \boldsymbol{\Sigma}, \mathbf{V}\xspace_i \gets \mathsf{RandSVD}(\mathbf{F}\xspace^{\prime}[V_i],\frac{k}{2},t)$\;
$\mathbf{U}\xspace_i\gets \boldsymbol{\Phi}\boldsymbol{\Sigma}$\;
}
$\mathbf{V}\xspace\gets\left[\mathbf{V}\xspace_1\ \cdots\ \mathbf{V}\xspace_{n_b}\right]^{\top}$\;
$\boldsymbol{\Phi}, \boldsymbol{\Sigma}, \mathbf{Y}\xspace \gets \mathsf{RandSVD}(\mathbf{V}\xspace,\frac{k}{2},t)$\;
$\mathbf{W}\xspace \gets \boldsymbol{\Phi}\boldsymbol{\Sigma}$\;
\textbf{parallel} \For{$V_i\in \mathcal{V}$}{
$\mathbf{X}_f\xspace[V_i] \gets \mathbf{U}\xspace_{i}\cdot \mathbf{W}\xspace[(i-1)\cdot \frac{k}{2}:i\cdot \frac{k}{2}]$\;
$\mathbf{X}_b\xspace[V_i] \gets \mathbf{B}\xspace^{\prime}[V_i]\cdot\mathbf{Y}\xspace$\;
$\mathbf{S}\xspace_f[V_i] \gets \mathbf{X}_f\xspace[V_i]\cdot\mathbf{Y}\xspace^{\top}-\mathbf{F}\xspace^{\prime}[V_i]$\;
$\mathbf{S}\xspace_b[V_i] \gets \mathbf{X}_b\xspace[V_i]\cdot\mathbf{Y}\xspace^{\top}-\mathbf{B}\xspace^{\prime}[V_i]$\;
}
\Return $\mathbf{X}_f\xspace, \mathbf{X}_b\xspace, \mathbf{Y}\xspace, \mathbf{S}\xspace_f, \mathbf{S}\xspace_b$\;
\end{small}
\end{algorithm}
\begin{algorithm}[!t]
\begin{small}
\caption{$\mathsf{PSVDCCD}$\xspace}
\label{alg:psvdccd}
\KwIn{$\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime}, \mathcal{V},\mathcal{R},k,t$.}
\KwOut{$\mathbf{X}_f\xspace, \mathbf{Y}\xspace, \mathbf{X}_b\xspace$.}
$\mathbf{X}_f\xspace, \mathbf{X}_b\xspace, \mathbf{Y}\xspace, \mathbf{S}\xspace_f, \mathbf{S}\xspace_b \gets \mathsf{SMGreedyInit}(\mathbf{F}\xspace^{\prime}, \mathbf{B}\xspace^{\prime}, \mathcal{V}, k, t)$\;
\For{$\ell\gets 1$ to $t$}{
\textbf{parallel} \For{$V_h\in \mathcal{V}$}{
\For{$v_i \in V_h$}{
\nonl {Lines 5-10 are the same as Lines 4-9 in Algorithm \ref{alg:svdccd}}\;
}
}
\setcounter{AlgoLine}{10}
\textbf{parallel} \For{$R_h\in \mathcal{R}$}{
\For{$r_j\in R_h$}{
\nonl {Lines 13-16 are the same as Lines 11-14 in Algorithm \ref{alg:svdccd}}\;
}
}
}
\setcounter{AlgoLine}{16}
\Return $\mathbf{X}_f\xspace,\mathbf{Y}\xspace, \mathbf{X}_b\xspace$\;
\end{small}
\end{algorithm}
\subsection{Parallel Joint Factorization of Affinity Matrices}\label{sec:split-merge}
This section presents the parallel algorithm $\mathsf{PSVDCCD}$\xspace in Algorithm \ref{alg:psvdccd} to further improve the efficiency of the joint affinity matrix factorization process. At Line 1 of the algorithm, we design a parallel initialization algorithm $\mathsf{SMGreedyInit}$\xspace with a split-and-merge-based parallel SVD technique for embedding vector initialization.
Algorithm \ref{alg:split-merge} shows the pseudo-code of $\mathsf{SMGreedyInit}$\xspace, which takes as input $\mathbf{F}\xspace^{\prime}$, $\mathbf{B}\xspace^{\prime}$, $\mathcal{V}$, and $k$.
Based on $\mathcal{V}$, $\mathsf{SMGreedyInit}$\xspace splits matrix $\mathbf{F}\xspace'$ into $n_b$ row blocks and launches $n_b$ threads; the $i$-th thread applies $\mathsf{RandSVD}$ to the block $\mathbf{F}\xspace^{\prime}[V_i]$ formed by the rows of $\mathbf{F}\xspace'$ corresponding to node subset $V_i\in \mathcal{V}$ (Lines 1-3).
After obtaining $\mathbf{V}\xspace_1,\cdots,\mathbf{V}\xspace_{n_b}$, $\mathsf{SMGreedyInit}$\xspace merges these matrices by concatenating $\mathbf{V}\xspace_1,\cdots,\mathbf{V}\xspace_{n_b}$ into $\mathbf{V}\xspace = [\mathbf{V}\xspace_1\ \cdots\ \mathbf{V}\xspace_{n_b}]^{\top}\in \mathbb{R}^{\frac{kn_b}{2}\times d}$, and then applies $\mathsf{RandSVD}$ over it to obtain $\mathbf{W}\xspace\in \mathbb{R}^{\frac{kn_b}{2}\times \frac{k}{2}}$ and $\mathbf{Y}\xspace\in\mathbb{R}^{d\times \frac{k}{2}}$ (Lines 4-6).
At Line 7, $\mathsf{SMGreedyInit}$\xspace creates $n_b$ threads, and uses the $i$-th thread to handle node subset $V_i$ for initializing embedding vectors $\mathbf{X}_f\xspace[V_i]$ and $\mathbf{X}_b\xspace[V_i]$ at Lines 8-9, as well as computing $\mathbf{S}\xspace_f$ and $\mathbf{S}\xspace_b$ at Lines 10-11. Specifically, the forward embedding vectors of node subset $V_i$ are initialized as $\mathbf{X}_f\xspace[V_i]=\mathbf{U}\xspace_{i}\cdot \mathbf{W}\xspace[(i-1)\cdot \frac{k}{2}:i\cdot \frac{k}{2}]$ at Line 8; the backward embedding vectors of $V_i$ are initialized as $\mathbf{X}_b\xspace[V_i] = \mathbf{B}\xspace^{\prime}[V_i]\cdot\mathbf{Y}\xspace$ at Line 9; $\mathbf{S}\xspace_f[V_i]$ and $\mathbf{S}\xspace_b[V_i]$ for node subset $V_i$ are computed as $\mathbf{S}\xspace_f[V_i]=\mathbf{X}\xspace_f[V_i]\cdot\mathbf{Y}\xspace^{\top}-\mathbf{F}\xspace^{\prime}[V_i]$ at Line 10 and $\mathbf{S}\xspace_b[V_i]=\mathbf{X}_b\xspace[V_i]\cdot\mathbf{Y}\xspace^{\top}-\mathbf{B}\xspace^{\prime}[V_i]$ at Line 11. Finally, $\mathsf{SMGreedyInit}$\xspace returns initialized embedding vectors $\mathbf{Y}\xspace$, $\mathbf{X}_f\xspace$, and $\mathbf{X}_b\xspace$ as well as intermediate results $\mathbf{S}\xspace_f,\mathbf{S}\xspace_b$ at Line 12. \lemref{lem:smsvd} indicates that the initial embedding vectors produced by $\mathsf{SMGreedyInit}$\xspace and $\mathsf{GreedyInit}$\xspace are close.
After obtaining $\mathbf{X}_f\xspace, \mathbf{X}_b\xspace$, and $\mathbf{Y}\xspace$ by $\mathsf{SMGreedyInit}$\xspace, Lines 2-16 in Algorithm \ref{alg:psvdccd} train embedding vectors by cyclic coordinate descent in parallel based on subsets $\mathcal{V}$ and $\mathcal{R}$, in $t$ iterations. In each iteration, $\mathsf{PSVDCCD}$\xspace first fixes $\mathbf{Y}\xspace$ and launches $n_b$ threads to update $\mathbf{X}_f\xspace$ and $\mathbf{X}_b\xspace$ in parallel by blocks according to $\mathcal{V}$, and then updates $\mathbf{Y}\xspace$ using the $n_b$ threads in parallel by blocks according to $\mathcal{R}$, with $\mathbf{X}_f\xspace$ and $\mathbf{X}_b\xspace$ fixed. Specifically,
Lines 5-10 are the same as Lines 4-9 of Algorithm \ref{alg:svdccd}, and Lines 13-16 are the same as Lines 11-14 of Algorithm \ref{alg:svdccd}.
Finally, \algref{alg:psvdccd} returns the embedding results at Line 17.
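The alternating structure of the $t$ training iterations (Lines 2-16) can be sketched in Python as follows. This is only a schematic: \texttt{update\_node\_block} and \texttt{update\_attr\_block} are hypothetical placeholders for the per-coordinate updates of Lines 5-10 and 13-16 (inherited from Algorithm \ref{alg:svdccd}), and thread-based parallelism stands in for the actual implementation.
\begin{verbatim}
from concurrent.futures import ThreadPoolExecutor

def update_node_block(Xf, Xb, Y, Sf, Sb, Vh):
    # placeholder: coordinate updates of Xf[Vh], Xb[Vh] (Y fixed),
    # maintaining the residuals Sf[Vh], Sb[Vh]
    pass

def update_attr_block(Xf, Xb, Y, Sf, Sb, Rh):
    # placeholder: coordinate updates of Y[Rh] (Xf, Xb fixed)
    pass

def psvdccd_outer(Xf, Xb, Y, Sf, Sb, node_blocks, attr_blocks, t, n_b):
    with ThreadPoolExecutor(max_workers=n_b) as pool:
        for _ in range(t):
            # phase 1: Y fixed; update X_f, X_b block-wise in parallel
            list(pool.map(
                lambda Vh: update_node_block(Xf, Xb, Y, Sf, Sb, Vh),
                node_blocks))
            # phase 2: X_f, X_b fixed; update Y block-wise in parallel
            list(pool.map(
                lambda Rh: update_attr_block(Xf, Xb, Y, Sf, Sb, Rh),
                attr_blocks))
    return Xf, Y, Xb
\end{verbatim}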
\begin{lemma}\label{lem:smsvd}
Given the same $\mathbf{F}\xspace^{\prime},\mathbf{B}\xspace^{\prime},k$ and $t$ as inputs to \algref{alg:isvd} and \algref{alg:split-merge}, the outputs $\mathbf{X}_f\xspace,\mathbf{Y}\xspace,\mathbf{S}\xspace_f,\mathbf{S}\xspace_b$ returned by both algorithms satisfy $\mathbf{X}_f\xspace\cdot\mathbf{Y}\xspace^{\top}=\mathbf{F}\xspace^{\prime}$, $\mathbf{Y}\xspace^{\top}\mathbf{Y}\xspace=\mathbf{I}$, and $\mathbf{S}\xspace_f=\mathbf{S}\xspace_b\mathbf{Y}\xspace=\mathbf{0}$ when $t=\infty$.
\end{lemma}
\subsection{Complexity Analysis}\label{sec:algo-opt-als} Observe that the non-parallel parts of Algorithms \ref{alg:rpapr} and \ref{alg:split-merge} take $O(nd)$ time, as each of them performs a constant number of operations on $O(nd)$ matrix entries. Meanwhile, for the parallel parts of Algorithms~\ref{alg:rpapr} and \ref{alg:psvdccd}, each thread runs in $\textstyle O\left(\frac{md}{n_b}\cdot\log\left(\frac{1}{\epsilon}\right)\right)$ and $O(\frac{ndkt}{n_b})$ time, respectively, since we divide the workload evenly across the $n_b$ threads.
Specifically, each thread in \algref{alg:rpapr} runs in $\textstyle O\left(\frac{md}{n_b}\cdot\log\left(\frac{1}{\epsilon}\right)\right)$ time. \algref{alg:psvdccd} first takes $O(\frac{n}{n_b}dkt)$ time for each thread to factorize an $\frac{n}{n_b}\times d$ matrix block of $\mathbf{F}\xspace^{\prime}$ (Lines 1-3 in \algref{alg:split-merge}). In addition, Lines 4-6 in \algref{alg:split-merge} require $O(n_bdk)$ time. In the merge phase ({\it i.e.},\xspace Lines 7-11 in \algref{alg:split-merge}), the matrix multiplications take $O(\frac{n}{n_b}k^2)$ time. In the $t$ iterations of CCD ({\it i.e.},\xspace Lines 2-16 in \algref{alg:psvdccd}), each thread spends $O(\frac{ndkt}{n_b})$ time on updates. Thus, the computational time complexity per thread in \algref{alg:mainoptp} is $O\left(\frac{md+ndk}{n_b}\cdot\log\left(\frac{1}{\epsilon}\right)\right).$ \algref{alg:rpapr} and \algref{alg:psvdccd} require $O(m+nd)$ and $O(nd)$ space, respectively. Therefore, the space complexity of $\mathsf{PANE}$\xspace is $O(m+nd)$.
\subsection{Parameter Analysis}\label{sec:exp-param}
$\mathsf{PANE}$\xspace involves several parameters: embedding dimensionality $k$, the number of threads $n_b$, error threshold $\epsilon$, and random walk stopping probability $\alpha$. We study the effects of varying these parameters on attribute inference and link prediction over {\em Cora}, {\em Citeseer}, {\em Facebook}, {\em Pubmed} and {\em Flickr}. We report AUC results in \figref{fig:attr-param} and \figref{fig:link-param}; when one parameter is varied, the others are kept at their default values from \secref{sec:exp-set}.
\figref{fig:acc-attr-k} and \figref{fig:acc-link-k} display the AUC scores of $\mathsf{PANE}$\xspace for attribute inference and link prediction on five graphs, respectively, when varying space budget $k$ in $\{16,32,64,128,256\}$. It can be observed that the AUC grows notably when $k$ increases from $16$ to $256$, indicating that a larger space budget helps produce more accurate embedding vectors. \figref{fig:acc-attr-nb} and \figref{fig:acc-link-nb} depict the AUC scores when varying the number of threads $n_b$ from $1$ to $20$. We observe that the attribute inference and link prediction performance decrease slowly as $n_b$ increases. This is because $\mathsf{PANE}$\xspace performs SVD over each matrix block and each factorization introduces an error; thus, the larger $n_b$ is, the less accurate the embedding vectors are. Note that when $\alpha=0.5$, varying $\epsilon$ from $0.001$ to $0.25$ corresponds to reducing the number of iterations $t$ from $9$ to $1$. From \figref{fig:acc-attr-eps} and \figref{fig:acc-link-eps}, we can see that the attribute inference and link prediction performance are nearly stationary when increasing $\epsilon$ from $0.001$ to $0.05$, while the performance declines rapidly when $\epsilon$ is beyond $0.05$. From Figures \ref{fig:acc-attr-alpha} and \ref{fig:acc-link-alpha}, where random walk stopping probability $\alpha$ is varied from $0.1$ to $0.9$, we note that $\mathsf{PANE}$\xspace achieves the best link prediction and attribute inference performance on {\em Cora}, {\em Facebook} and {\em Pubmed} when $\alpha=0.5$, and on {\em Citeseer} and {\em Flickr} when $\alpha=0.7$. The performance first increases and then degrades as $\alpha$ grows, because if $\alpha$ is too small, $\mathsf{PANE}$\xspace tends to favor distant nodes, while if $\alpha$ is too large, only limited local neighborhoods of nodes are preserved in the embedding vectors. As a result, picking $\alpha=0.5$ yields favorable performance.
\subsection{Effectiveness Evaluation of $\mathsf{GreedyInit}$\xspace}\label{sec:effk-init}
In this set of experiments, we evaluate the effectiveness of $\mathsf{GreedyInit}$\xspace by comparing it with random initialization, as shown in Figures \ref{fig:time-gi-link} and \ref{fig:time-gi-attr}. First, let $\mathsf{PANE}$\xspace-$\mathsf{R}$ be the algorithm that uses random initialization in place of $\mathsf{GreedyInit}$\xspace in $\mathsf{PANE}$\xspace (Line 1 in Algorithm \ref{alg:svdccd}). Figures \ref{fig:time-gi-link} and \ref{fig:time-gi-attr} plot the running time ($x$-axis) vs.\ AUC ($y$-axis) for the link prediction and attribute inference tasks, respectively, on the {\em Facebook}, {\em Pubmed} and {\em Flickr} datasets, when varying the number of iterations $t$ of the cyclic coordinate descent (Line 2 in Algorithm \ref{alg:svdccd}) in $\{1,2,5,10,20\}$. From Figures \ref{fig:time-gi-link} and \ref{fig:time-gi-attr}, we observe that, as the number of iterations $t$ increases, both $\mathsf{PANE}$\xspace and $\mathsf{PANE}$\xspace-$\mathsf{R}$ require more running time, and also achieve higher link prediction and attribute inference performance. This is because more iterations yield high-quality embeddings that are closer to the optimal solution of the objective function in Equation \eqref{eq:obj1}. In particular, our main observation from Figures \ref{fig:time-gi-link} and \ref{fig:time-gi-attr} is that $\mathsf{PANE}$\xspace consistently outperforms $\mathsf{PANE}$\xspace-$\mathsf{R}$ on {\em Facebook}, {\em Pubmed} and {\em Flickr}. Given the same amount of time, $\mathsf{PANE}$\xspace-$\mathsf{R}$ always has lower AUC than $\mathsf{PANE}$\xspace.
In other words, to achieve the same AUC as $\mathsf{PANE}$\xspace, $\mathsf{PANE}$\xspace-$\mathsf{R}$ takes more time (more iterations). For instance, in Figure \ref{fig:time-pd-gi-attr}, $\mathsf{PANE}$\xspace achieves an AUC score of $0.87$ using only $5$ seconds, while $\mathsf{PANE}$\xspace-$\mathsf{R}$ requires $12$ seconds. We therefore conclude that $\mathsf{GreedyInit}$\xspace is effective: it converges quickly and produces high-quality results, which is consistent with our claim in Section \ref{sec:svdccd}.
\section{Experiments}\label{sec:exp}
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{footnotesize}
\caption{Datasets. {\small (K=$10^3$, M=$10^6$)}}\label{tbl:exp-data}\vspace{-3mm}
\begin{tabular}{|l|r|r|r|r|c|c|}
\hline
{\bf Name} & \multicolumn{1}{c|}{$|V|$} & \multicolumn{1}{c|}{$|E_V|$} & \multicolumn{1}{c|}{$|R|$}& \multicolumn{1}{c|}{$|E_R|$} & \multicolumn{1}{c|}{\bf $|L|$} & \multicolumn{1}{c|}{\bf Refs}\\
\hline
{\bf\em Cora} & 2.7K & 5.4K & 1.4K & 49.2K & 7 & \cite{pan2018adversarially,zhou2018prre,liu2018content,yang2015network,meng2019co,yang2018binarized} \\
\hline
{\bf\em Citeseer} & 3.3K & 4.7K & 3.7K & 105.2K & 6 & \cite{pan2018adversarially,zhou2018prre,liu2018content,yang2015network,meng2019co,yang2018binarized} \\
\hline
{\bf\em Facebook} & 4K & 88.2K & 1.3K & 33.3K & 193 & \cite{leskovec2012learning,yang2013community,meng2019co,zhang2018anrl} \\
\hline
{\bf\em Pubmed} & 19.7K & 44.3K & 0.5K & 988K & 3 & \cite{pan2018adversarially,zhou2018prre,meng2019co,zhang2018anrl} \\
\hline
{\bf\em Flickr} & 7.6K & 479.5K & 12.1K & 182.5K & 9 & \cite{meng2019co} \\
\hline
{\bf\em Google+} & 107.6K & 13.7M & 15.9K & 300.6M & 468 & \cite{leskovec2012learning,yang2013community} \\
\hline
{\bf\em TWeibo} & 2.3M & 50.7M & 1.7K & 16.8M & 8 & - \\
\hline
{\bf\em MAG} & 59.3M & 978.2M & 2K & 434.4M & 100 & - \\
\hline
\end{tabular}
\end{footnotesize}
\vspace{0mm}
\end{table}
\begin{table*}[ht]
\centering
\renewcommand{\arraystretch}{1.2}
\caption{Attribute inference performance.}\vspace{-2mm}
\begin{small}
\begin{tabular}{|c|c c |c c| c c| c c| c c|c c|c c| c c|}
\hline
\multirow{2}{*}{\bf Method} & \multicolumn{2}{c|}{\bf {\em Cora}} & \multicolumn{2}{c|}{\bf {\em Citeseer}} & \multicolumn{2}{c|}{\bf {\em Facebook}} & \multicolumn{2}{c|}{\bf {\em Pubmed}} & \multicolumn{2}{c|}{\bf {\em Flickr}} & \multicolumn{2}{c|}{\bf {\em Google+}} & \multicolumn{2}{c|}{\bf {\em TWeibo}} & \multicolumn{2}{c|}{\bf {\em MAG}}\\ \cline{2-17}
& AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP\\
\hline
$\mathsf{BLA}$\xspace & 0.559 & 0.563 & 0.540 & 0.541 & 0.653 & 0.648 & 0.520 & 0.524 & 0.660 & 0.653 & -& -& -& - & -& -\\
$\mathsf{CAN}$\xspace & 0.865 & 0.855 & 0.875 & 0.859 & 0.765 & 0.745 & 0.734 & 0.72 & 0.772 & 0.774 & -& -& -& -& -& - \\
\hline
$\mathsf{PANE}$\xspace (single thread) & \cellcolor{blue!25}0.913 & \cellcolor{blue!25}0.925 & \cellcolor{blue!25}0.903 & \cellcolor{blue!25}0.916 & \cellcolor{blue!25}0.828 & \cellcolor{blue!25}0.84 & \cellcolor{blue!25}0.871 & \cellcolor{blue!25}0.874 & \cellcolor{blue!25}0.825 & \cellcolor{blue!25}0.832 & \cellcolor{blue!25}0.972 & \cellcolor{blue!25}0.973 & \cellcolor{blue!25}0.774 & \cellcolor{blue!25}0.837 & \cellcolor{blue!25}0.876 & \cellcolor{blue!25}0.888 \\
$\mathsf{PANE}$\xspace (parallel) & 0.909 & 0.92 & 0.899 & 0.913 & 0.825 & 0.837 & 0.867 & 0.869 & 0.822 & 0.831 & 0.969 & 0.97 & 0.773 & 0.836 & 0.874 & 0.887 \\
\hline
\end{tabular}
\end{small}\label{tab:attr}
\vspace{2mm}
\end{table*}
We experimentally evaluate our proposed method $\mathsf{PANE}$\xspace (both single-thread and parallel versions) against 10 competitors on three tasks: link prediction, attribute inference and node classification, using 8 real datasets. All experiments are conducted on a Linux machine powered by Intel Xeon(R) E7-8880 CPUs @2.3GHz and 1TB of RAM. The code of each algorithm is collected from its respective authors, and all are implemented in Python, except $\mathsf{NRP}$\xspace, $\mathsf{TADW}$\xspace and $\mathsf{LQANR}$\xspace. For a fair comparison of efficiency, we re-implement $\mathsf{TADW}$\xspace and $\mathsf{LQANR}$\xspace in Python.
\subsection{Experiments Setup}\label{sec:exp-set}
\begin{table*}[ht]
\centering
\renewcommand{\arraystretch}{1.2}
\caption{Link prediction performance.}\vspace{-2mm}
\begin{small}
\begin{tabular}{|c|c c|c c|c c|c c| c c| c c|c c|c c|}
\hline
\multirow{2}{*}{\bf Method} & \multicolumn{2}{c|}{\bf {\em Cora}} & \multicolumn{2}{c|}{\bf {\em Citeseer}} & \multicolumn{2}{c|}{\bf {\em Pubmed}} & \multicolumn{2}{c|}{\bf {\em Facebook}} & \multicolumn{2}{c|}{\bf {\em Flickr}} & \multicolumn{2}{c|}{\bf {\em Google+}} & \multicolumn{2}{c|}{\bf {\em TWeibo}} & \multicolumn{2}{c|}{\bf {\em MAG}}\\ \cline{2-17}
& AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP\\
\hline
$\mathsf{NRP}$\xspace & 0.796 & 0.777 & 0.86 & 0.808 & 0.87 & 0.861 & 0.969 & 0.973 & 0.909 & 0.902 & \cellcolor{blue!25}0.989 & \cellcolor{blue!25}0.992 & 0.967 & 0.979 & 0.915 & 0.92 \\
\hline
$\mathsf{GATNE}$\xspace & 0.791 & 0.822 & 0.687 & 0.767 & 0.745 & 0.796 & 0.961 & 0.954 & 0.805 & 0.785 &- &- & -&- &- & -\\
\hline
$\mathsf{TADW}$\xspace & 0.829 & 0.805 & 0.895 & 0.868 & 0.904 & 0.863 & 0.752 & 0.793 & 0.573 & 0.58 & - & -& -& -& -&-\\
$\mathsf{ARGA}$\xspace & 0.64 & 0.485 & 0.637 & 0.484 & 0.623 & 0.474 & 0.71 & 0.636 & 0.676 & 0.656 & -& -&- & -& -&-\\
$\mathsf{BANE}$\xspace & 0.875 & 0.823 & 0.899 & 0.873 & 0.919 & 0.847 & 0.796 & 0.795 & 0.64 & 0.605 & 0.56 & 0.533 &- & -& -&-\\
$\mathsf{PRRE}$\xspace & 0.879 & 0.836 & 0.895 & 0.855 & 0.887 & 0.813 & 0.899 & 0.884 & 0.789 & 0.806 & -&- &- & -&- &-\\
$\mathsf{STNE}$\xspace & 0.808 & 0.829 & 0.71 & 0.781 & 0.789 & 0.774 & 0.962 & 0.957 & 0.638 & 0.659 & -& -& -& -&- &-\\
$\mathsf{CAN}$\xspace & 0.663 & 0.559 & 0.734 & 0.652 & 0.734 & 0.559 & 0.714 & 0.639 & 0.5 & 0.5 &- &- & -& -& -&-\\
$\mathsf{DGI}$\xspace & 0.51 & 0.4 & 0.5 & 0.4 & 0.73 & 0.554 & 0.711 & 0.637 & 0.769 & 0.824 & 0.792 & 0.795 & 0.721 & 0.64 &- &-\\
$\mathsf{LQANR}$\xspace & 0.886 & 0.863 & 0.916 & 0.916 & 0.904 & 0.8 & 0.951 & 0.917 & 0.824 & 0.805 & -&- &- & -&- &-\\
\hline
$\mathsf{PANE}$\xspace (single thread) & \cellcolor{blue!25}0.933 & \cellcolor{blue!25}0.918 & \cellcolor{blue!25}0.932 & \cellcolor{blue!25}0.919 & \cellcolor{blue!25}0.985 & \cellcolor{blue!25}0.977 & \cellcolor{blue!25}0.982 & \cellcolor{blue!25}0.982 & \cellcolor{blue!25}0.929 & \cellcolor{blue!25}0.927 & 0.987 & 0.982 & \cellcolor{blue!25}0.976 & \cellcolor{blue!25}0.986 & \cellcolor{blue!25}0.96 & \cellcolor{blue!25}0.965 \\
$\mathsf{PANE}$\xspace (parallel) & 0.929 & 0.914 & 0.929 & 0.916 & 0.985 & 0.976 & 0.98 & 0.979 & 0.927 & 0.924 & 0.984 & 0.98 & 0.975 & 0.985 & 0.958 & 0.962 \\
\hline
\end{tabular}
\end{small}
\label{tab:link}
\vspace{2mm}
\end{table*}
\vspace{1mm} \noindent
{\bf Datasets.} \tblref{tbl:exp-data} lists the statistics of the datasets used in our experiments. {All graphs are directed except {\em Facebook} and {\em Flickr}}. $|V|$ and $|E_V|$ denote the number of nodes and edges in the graph, whereas $|R|$ and $|E_R|$ represent the number of attributes and the number of node-attribute associations ({\it i.e.},\xspace the number of nonzero entries in attribute matrix $\mathbf{R}\xspace$). In addition, $L$ is the set of \textit{node labels}, which are used in the node classification task. {\em Cora}\footnote{\label{fn:linqs}{http://linqs.soe.ucsc.edu/data}}, {\em Citeseer}\footnoteref{fn:linqs}, {\em Pubmed}\footnoteref{fn:linqs} and {\em Flickr}\footnote{https://github.com/mengzaiqiao/CAN} are benchmark datasets used in prior work \cite{yang2015network,hamilton2017inductive,meng2019co,pan2018adversarially,zhou2018prre,liu2018content}. {\em Facebook}\footnote{\label{fn:snap}{http://snap.stanford.edu/data}} and {\em Google+}\footnoteref{fn:snap} are social networks used in \cite{leskovec2012learning}. For {\em Facebook} and {\em Google+}, we treat each ego-network as a label and extract attributes from their user profiles, which is consistent with the experiments in prior work \cite{meng2019co,yang2013community}.
To evaluate the scalability of the proposed solution, we also introduce two new datasets \textit{TWeibo}\footnote{https://www.kaggle.com/c/kddcup2012-track1} and \textit{MAG}\footnote{http://ma-graph.org/rdf-dumps/} that have not been used in previous ANE papers due to their massive sizes. {\em TWeibo} \cite{kddcup2012tweibo} is a social network, in which each node represents a user, and each directed edge represents a following relationship. We extract the $1657$ most popular tags and keywords from its user profile data as the node attributes. The labels are generated and categorized into eight types according to the ages of users. The {\em MAG} dataset is extracted from the well-known {\em Microsoft Academic Knowledge Graph} \cite{sinha2015overview}, where each node represents a paper and each directed edge represents a citation. We extract the $2000$ most frequently used distinct words from the abstracts of all papers as the attribute set and regard the fields of study of each paper as its labels.
We will make {\em TWeibo} and {\em MAG} datasets publicly available upon acceptance.
\vspace{1mm} \noindent
{\bf Baselines and Parameter Settings.} We compare our methods $\mathsf{PANE}$\xspace (single thread) and $\mathsf{PANE}$\xspace (parallel) against 10 state-of-the-art competitors: eight recent ANE methods including $\mathsf{BANE}$\xspace \cite{yang2018binarized}, $\mathsf{CAN}$\xspace \cite{meng2019co}, $\mathsf{STNE}$\xspace \cite{liu2018content}, $\mathsf{PRRE}$\xspace \cite{zhou2018prre}, $\mathsf{TADW}$\xspace \cite{yang2015network}, $\mathsf{ARGA}$\xspace \cite{pan2018adversarially}, $\mathsf{DGI}$\xspace \cite{velickovic2018deep} and $\mathsf{LQANR}$\xspace \cite{ijcai2019-low}, one state-of-the-art homogeneous network embedding method $\mathsf{NRP}$\xspace \cite{yang13homogeneous}, and one recent attributed heterogeneous network embedding algorithm $\mathsf{GATNE}$\xspace \cite{cen2019representation}. All methods except $\mathsf{PANE}$\xspace (parallel) run on a single CPU core. Note that although $\mathsf{GATNE}$\xspace itself is a parallel algorithm, its parallel version requires the proprietary AliGraph platform, which is not available to us.
The parameters of all competitors are set as suggested in their respective papers.
For $\mathsf{PANE}$\xspace (single thread) and $\mathsf{PANE}$\xspace (parallel), by default we set error threshold $\epsilon=0.015$ and random walk stopping probability $\alpha=0.5$, and we use $n_b=10$ threads for $\mathsf{PANE}$\xspace (parallel). Unless otherwise specified, we set space budget $k=128$.
The evaluation results of our proposed methods against the competitors for attribute inference, link prediction and node classification are reported in Sections \ref{sec:attr-infer}, \ref{sec:link-pred} and \ref{sec:node-class}, respectively. The efficiency and scalability evaluation is reported in \secref{sec:exp-effi}. A method is excluded if it cannot finish training within one week. Section \ref{sec:exp-param} analyzes the parameter sensitivities of the proposed methods, and Section \ref{sec:effk-init} evaluates the effectiveness of $\mathsf{GreedyInit}$\xspace in \algref{alg:svdccd}.
\begin{figure*}[!t]
\centering
\captionsetup[subfloat]{captionskip=-0.5mm}
\begin{small}
\begin{tabular}{cccc}
\multicolumn{4}{c}{\hspace{-4mm} \includegraphics[height=5mm]{./figure/algo-legend-eps-converted-to.pdf}}\vspace{-3mm} \\
\hspace{-4mm}\subfloat[{\em Cora}]{\includegraphics[width=0.26\linewidth]{./figure/class/cora-eps-converted-to.pdf}\label{fig:acc-class-cora}} &
\hspace{-4mm}\subfloat[{\em Citeseer}]{\includegraphics[width=0.26\linewidth]{./figure/class/citeseer-eps-converted-to.pdf}\label{fig:acc-class-citeseer}} &
\hspace{-4mm}\subfloat[{\em Facebook}]{\includegraphics[width=0.26\linewidth]{./figure/class/facebook-eps-converted-to.pdf}\label{fig:acc-class-facebook}} &
\hspace{-4mm}\subfloat[{\em Flickr}]{\includegraphics[width=0.26\linewidth]{./figure/class/flickr-eps-converted-to.pdf}\label{fig:acc-class-flickr}}
\vspace{-2mm}\\
\hspace{-4mm}\subfloat[{\em Pubmed}]{\includegraphics[width=0.26\linewidth]{./figure/class/pubmed-eps-converted-to.pdf}\label{fig:acc-class-pubmed}} &
\hspace{-4mm}\subfloat[{\em Google+}]{\includegraphics[width=0.26\linewidth]{./figure/class/google-eps-converted-to.pdf}\label{fig:acc-class-google}} &
\hspace{-4mm}\subfloat[{\em TWeibo}]{\includegraphics[width=0.26\linewidth]{./figure/class/tweibo-eps-converted-to.pdf}\label{fig:acc-class-tweibo}} &
\hspace{-4mm}\subfloat[{\em MAG}]{\includegraphics[width=0.26\linewidth]{./figure/class/mag-eps-converted-to.pdf}\label{fig:acc-class-mag}}
\end{tabular}
\vspace{-2mm}
\caption{Node classification results (best viewed in color).} \label{fig:acc-class}
\end{small}
\vspace{2mm}
\end{figure*}
\subsection{Attribute Inference}\label{sec:attr-infer}
Attribute inference aims to predict the values of node attributes. Note that, except for $\mathsf{CAN}$\xspace \cite{meng2019co}, none of the other competitors is capable of performing attribute inference, since they only generate embedding vectors for nodes, not attributes. Hence, we compare our solutions against $\mathsf{CAN}$\xspace for attribute inference. Further, we compare against $\mathsf{BLA}$\xspace, the state-of-the-art attribute inference algorithm \cite{yang2017bi}. Note that $\mathsf{BLA}$\xspace is not an ANE solution.
We split the nonzero entries in the attribute matrix $\mathbf{R}\xspace$, regarding $20\%$ as the test set $\mathbf{R}\xspace_{test}$ and the remaining $80\%$ as the training set $\mathbf{R}\xspace_{train}$. $\mathsf{CAN}$\xspace runs over $\mathbf{R}\xspace_{train}$ to generate a node embedding vector $\mathbf{X}\xspace[v_i]$ for each node $v_i\in V$ and an attribute embedding vector $\mathbf{Y}\xspace[r_j]$ for each attribute $r_j\in R$. Following \cite{meng2019co}, we use the inner product of $\mathbf{X}\xspace[v_i]$ and $\mathbf{Y}\xspace[r_j]$ as the predicted score of attribute $r_j$ with respect to node $v_i$.
Note that $\mathsf{PANE}$\xspace generates a forward embedding vector $\mathbf{X}_f\xspace[v_i]$ and a backward embedding vector $\mathbf{X}_b\xspace[v_i]$ for each node $v_i\in V$, and also an attribute embedding vector $\mathbf{Y}\xspace[r_j]$ for each attribute $r_j\in R$.
Based on the objective function in \equref{eq:obj1}, $\mathbf{X}_f\xspace[v_i]\cdot \mathbf{Y}\xspace[r_j]^{\top}$ is expected to preserve forward affinity value $\mathbf{F}\xspace[v_i,r_j]$, and $\mathbf{X}_b\xspace[v_i]\cdot \mathbf{Y}\xspace[r_j]^{\top}$ is expected to preserve backward affinity value $\mathbf{B}\xspace[v_i,r_j]$. Thus, we predict the score between $v_i$ and $r_j$ through the affinity (both forward and backward) between node $v_i$ and attribute $r_j$, denoted as $p(v_i,r_j)$, using their embedding vectors as follows.
\begin{align}
p(v_i,r_j)&= \mathbf{X}_f\xspace[v_i]\cdot\mathbf{Y}\xspace[r_j]^{\top}+\mathbf{X}_b\xspace[v_i]\cdot\mathbf{Y}\xspace[r_j]^{\top}\label{eq:attr-infer} \\
&\approx \mathbf{F}\xspace[v_i,r_j]+\mathbf{B}\xspace[v_i,r_j]\nonumber.
\end{align}
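In code, \equref{eq:attr-infer} reduces to one inner product per node-attribute pair. The following Python sketch is illustrative only: it assumes \texttt{numpy} arrays \texttt{Xf}, \texttt{Xb} of shape $(n, \frac{k}{2})$ and \texttt{Y} of shape $(d, \frac{k}{2})$, and the function names are ours rather than part of $\mathsf{PANE}$\xspace.
\begin{verbatim}
import numpy as np

def attr_score(Xf, Xb, Y, i, j):
    # p(v_i, r_j) = Xf[i] . Y[j] + Xb[i] . Y[j] = (Xf[i] + Xb[i]) . Y[j]
    return float((Xf[i] + Xb[i]) @ Y[j])

def attr_scores(Xf, Xb, Y, nodes, attrs):
    # batched scores for all pairs in nodes x attrs
    return (Xf[nodes] + Xb[nodes]) @ Y[attrs].T
\end{verbatim}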
Following prior work \cite{meng2019co}, we adopt the {\em Area Under Curve} (AUC) and {\em Average Precision} (AP) metrics to measure the performance.
\tblref{tab:attr} presents the attribute inference performance of $\mathsf{PANE}$\xspace (single thread), $\mathsf{PANE}$\xspace (parallel), $\mathsf{CAN}$\xspace and $\mathsf{BLA}$\xspace. Observe that $\mathsf{PANE}$\xspace (single thread) consistently achieves the best performance on all datasets and significantly outperforms existing solutions by a large margin, demonstrating the power of forward affinity and backward affinity that are preserved in embedding vectors $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace$ and $\mathbf{Y}\xspace$, to capture the affinity between nodes and attributes in attributed networks.
For instance, on {\em Pubmed}, $\mathsf{PANE}$\xspace (single thread) has high accuracy $0.871$ AUC and $0.874$ AP, while that of $\mathsf{CAN}$\xspace are only $0.734$ and $0.72$ respectively.
Further, $\mathsf{CAN}$\xspace and $\mathsf{BLA}$\xspace fail to process the large attributed networks {\em Google+}, {\em TWeibo} and {\em MAG} within one week, and thus are not reported.
Observe that parallel $\mathsf{PANE}$\xspace has performance ({\it i.e.},\xspace AUC and AP) close to that of $\mathsf{PANE}$\xspace (single thread). For instance, on {\em Pubmed}, the difference in AUC between $\mathsf{PANE}$\xspace (single thread) and $\mathsf{PANE}$\xspace (parallel) is just $0.004$. This negligible difference is introduced by the split-merge-based parallel SVD technique $\mathsf{SMGreedyInit}$\xspace for matrix decomposition.
As shown in \secref{sec:exp-effi}, parallel $\mathsf{PANE}$\xspace is considerably faster than $\mathsf{PANE}$\xspace (single thread) by up to 9 times, while obtaining almost the same accuracy performance.
\subsection{Link Prediction}\label{sec:link-pred}
Link prediction aims to predict the edges that are most likely to form between nodes. We first randomly remove $30\%$ of the edges in input graph $G$, obtaining a residual graph $G'$ and a set of removed edges. We then randomly sample the same number of non-existing edges as negative edges. The test set $E^{\prime}$ contains both the removed edges and the negative edges. We run $\mathsf{PANE}$\xspace and all competitors on the residual graph $G'$ to produce embedding vectors, and then evaluate the link prediction performance on $E^{\prime}$ as follows. $\mathsf{PANE}$\xspace produces a forward embedding $\mathbf{X}_f\xspace[v_i]$ and a backward embedding $\mathbf{X}_b\xspace[v_i]$ for each node $v_i\in V$, as well as an attribute embedding $\mathbf{Y}\xspace[r_l]$ for each attribute $r_l\in R$. As explained, $\mathbf{X}_f\xspace[v_i]\cdot \mathbf{Y}\xspace[r_l]^{\top}$ preserves $\mathbf{F}\xspace[v_i,r_l]$, and $\mathbf{X}_b\xspace[v_j]\cdot \mathbf{Y}\xspace[r_l]^{\top}$ preserves $\mathbf{B}\xspace[v_j,r_l]$. Recall that $\mathbf{F}\xspace[v_i,r_l]$ measures the affinity from $v_i$ to $r_l$ over the attributed network; similarly, given node $v_j$ and attribute $r_l$, $\mathbf{B}\xspace[v_j,r_l]$ measures the affinity from $r_l$ to $v_j$ over the network.
Intuitively, $\mathbf{F}\xspace[v_i,r_l]\cdot\mathbf{B}\xspace[v_j,r_l]$ represents the affinity from node $v_i$ to node $v_j$ based on attribute $r_l$. The affinity between nodes $v_i$ and $v_j$, denoted as $p(v_i,v_j)$, can be evaluated by summing up the affinity between the two nodes over all attributes in $R$, which can be computed as follows and indicates the possibility of forming an edge from $v_i$ to $v_j$.
\begin{align}
p(v_i,v_j)&= \sum_{r_l\in R}{(\mathbf{X}_f\xspace[v_i]\cdot\mathbf{Y}\xspace[r_l]^{\top})\cdot (\mathbf{X}_b\xspace[v_j]\cdot\mathbf{Y}\xspace[r_l]^{\top})} \label{eq:link-pred} \\
& \approx \sum_{r_l\in R}{{\mathbf{F}\xspace[v_i,r_l]\cdot\mathbf{B}\xspace[v_j,r_l]}} \nonumber.
\end{align}
Therefore, for $\mathsf{PANE}$\xspace, we can calculate $p(v_i,v_j)$ as the prediction score of the directed edge $(v_i,v_j)$.
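A direct implementation of \equref{eq:link-pred}, under the same (illustrative) array conventions as the attribute-inference snippet above, is:
\begin{verbatim}
def link_score(Xf, Xb, Y, i, j):
    # p(v_i, v_j) = sum over r_l of (Xf[i] . Y[l]) * (Xb[j] . Y[l])
    return float((Xf[i] @ Y.T) @ (Y @ Xb[j]))

def undirected_link_score(Xf, Xb, Y, i, j):
    # for undirected graphs: p(v_i, v_j) + p(v_j, v_i)
    return link_score(Xf, Xb, Y, i, j) + link_score(Xf, Xb, Y, j, i)
\end{verbatim}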
$\mathsf{NRP}$\xspace generates a forward embedding $\mathbf{X}_f\xspace[v_i]$ and a backward embedding $\mathbf{X}_b\xspace[v_i]$ for each node $v_i$ and uses $p(v_i,v_j)=\mathbf{X}_f\xspace[v_i]\cdot\mathbf{X}_b\xspace[v_j]^{\top}$ as the prediction score for the directed edge $(v_i,v_j)$ \cite{yang13homogeneous}. For undirected graphs, $\mathsf{PANE}$\xspace (single thread), $\mathsf{PANE}$\xspace (parallel) and $\mathsf{NRP}$\xspace utilize $p(v_i,v_j)+p(v_j,v_i)$ as the prediction score for the undirected edge between $v_i$ and $v_j$.
The remaining competitors only work for undirected graphs, and each learns one embedding $\mathbf{X}\xspace[v_i]$ per node $v_i$.
In the literature, there are four ways to calculate the link prediction score $p(v_i,v_j)$: the {\em inner product} method used in $\mathsf{CAN}$\xspace and $\mathsf{ARGA}$\xspace, the {\em cosine similarity} method used in $\mathsf{PRRE}$\xspace and $\mathsf{ANRL}$\xspace, the {\em Hamming distance} method used in $\mathsf{BANE}$\xspace, and the {\em edge feature} method used in \cite{node2vec2016,ma2018hierarchical}.
We apply all four prediction methods to each competitor and report the competitor's best performance on each dataset. Following previous work \cite{meng2019co,pan2018adversarially}, we use {\em Area Under Curve} (AUC) and {\em Average Precision} (AP) to evaluate link prediction accuracy.
\tblref{tab:link} reports the AUC and AP scores of each method on each dataset. $\mathsf{PANE}$\xspace (single thread) consistently outperforms all competitors over all datasets except $\mathsf{NRP}$\xspace on {\em Google+},
by a substantial margin of up to $6.6\%$ for AUC and up to $13\%$ for AP.
For large attributed networks including {\em Google+, TWeibo} and {\em MAG}, most existing solutions fail to finish processing within a week and thus are not reported. The superiority of $\mathsf{PANE}$\xspace (single thread) over competitors is achieved by (i) learning a forward embedding vector and a backward embedding vector for each node to capture the asymmetric transitivity ({\it i.e.},\xspace edge direction) in directed graphs, and (ii) combining both node embedding vectors and attribute embedding vectors together for link prediction in \equref{eq:link-pred}, with the consideration of both topological and attribute features. On {\em Google+}, $\mathsf{NRP}$\xspace is slightly better than $\mathsf{PANE}$\xspace (single thread), since {\em Google+} has more than $15$ thousand attributes (see \tblref{tbl:exp-data}), leading to some accuracy loss when factorizing forward and backward affinity matrices into low dimensionality $k=128$ by $\mathsf{PANE}$\xspace (single thread). As shown in \tblref{tab:link}, our parallel $\mathsf{PANE}$\xspace also outperforms all competitors significantly except $\mathsf{NRP}$\xspace on {\em Google+}, and parallel $\mathsf{PANE}$\xspace has comparable performance with $\mathsf{PANE}$\xspace (single thread) over all datasets.
As reported later in \secref{sec:exp-effi}, parallel $\mathsf{PANE}$\xspace is significantly faster than $\mathsf{PANE}$\xspace (single thread) by up to 9 times, with almost the same accuracy performance for link prediction.
\begin{figure}[!t]
\centering
\begin{small}
\begin{tabular}{cc}
\multicolumn{2}{c}{
\hspace{-5mm}\includegraphics[height=5.5mm]{./figure/algo-legend2-eps-converted-to.pdf}\vspace{-2mm}}\\
\hspace{-5mm}\subfloat[{small graphs}.]{\includegraphics[width=0.55\linewidth]{./figure/time1-eps-converted-to.pdf}\label{fig:time-small}} &
\hspace{-5mm}\subfloat[{large graphs}.]{\includegraphics[width=0.55\linewidth]{./figure/time2-eps-converted-to.pdf}\label{fig:time:large}}
\vspace{-2mm}
\end{tabular}
\end{small}
\caption{Running time (best viewed in color).} \label{fig:time-all}
\vspace{2mm}
\end{figure}
\begin{figure}[!t]
\vspace{-4mm}
\centering
\begin{tabular}{ccc}
\hspace{-5mm}\subfloat[{\scriptsize {speedups vs. $n_b$}}.]{\includegraphics[width=0.38\linewidth]{./figure/speed-block-eps-converted-to.pdf}\label{fig:time-nb}} &
\hspace{-6mm}\subfloat[{\scriptsize time vs. $k$}.]{\includegraphics[width=0.38\linewidth]{./figure/time-dim-eps-converted-to.pdf}\label{fig:time-k}} &
\hspace{-6mm}\subfloat[{\scriptsize time vs. $\epsilon$}.]{\includegraphics[width=0.38\linewidth]{./figure/time-eps-eps-converted-to.pdf}\label{fig:time-eps}}
\vspace{-2mm}
\end{tabular}
\caption{Efficiency with varying parameters.} \label{fig:time-param}
\vspace{2mm}
\end{figure}
\begin{figure*}[!t]
\centering
\captionsetup[subfloat]{captionskip=-0.5mm}
\begin{small}
\begin{tabular}{cccc}
\multicolumn{4}{c}{\hspace{-4mm} \includegraphics[height=2.5mm]{./figure/algo-legend3-eps-converted-to.pdf}}\vspace{-2mm} \\
\hspace{-4mm}\subfloat[{\em Varying $k$}]{\includegraphics[width=0.26\linewidth]{./figure/attr-k-eps-converted-to.pdf}\label{fig:acc-attr-k}} &
\hspace{-4mm}\subfloat[{\em Varying $n_b$}]{\includegraphics[width=0.26\linewidth]{./figure/attr-nb-eps-converted-to.pdf}\label{fig:acc-attr-nb}} &
\hspace{-4mm}\subfloat[{\em Varying $\epsilon$}]{\includegraphics[width=0.26\linewidth]{./figure/attr-eps-eps-converted-to.pdf}\label{fig:acc-attr-eps}} &
\hspace{-4mm}\subfloat[{\em Varying $\alpha$}]{\includegraphics[width=0.26\linewidth]{./figure/attr-alpha-eps-converted-to.pdf}\label{fig:acc-attr-alpha}}
\end{tabular}
\caption{Attribute inference results with varying parameters (best viewed in color).} \label{fig:attr-param}
\end{small}
\vspace{-6mm}
\end{figure*}
\begin{figure*}[!t]
\centering
\captionsetup[subfloat]{captionskip=-0.5mm}
\begin{small}
\begin{tabular}{cccc}
\multicolumn{4}{c}{
}\vspace{0mm}\\
\hspace{-4mm}\subfloat[{\em Varying $k$}]{\includegraphics[width=0.26\linewidth]{./figure/link-k-eps-converted-to.pdf}\label{fig:acc-link-k}} &
\hspace{-4mm}\subfloat[{\em Varying $n_b$}]{\includegraphics[width=0.26\linewidth]{./figure/link-nb-eps-converted-to.pdf}\label{fig:acc-link-nb}} &
\hspace{-4mm}\subfloat[{\em Varying $\epsilon$}]{\includegraphics[width=0.26\linewidth]{./figure/link-eps-eps-converted-to.pdf}\label{fig:acc-link-eps}} &
\hspace{-4mm}\subfloat[{\em Varying $\alpha$}]{\includegraphics[width=0.26\linewidth]{./figure/link-alpha-eps-converted-to.pdf}\label{fig:acc-link-alpha}}
\end{tabular}
\caption{Link prediction results with varying parameters (best viewed in color).} \label{fig:link-param}
\end{small}
\vspace{-2mm}
\end{figure*}
\begin{figure}[!t]
\vspace{-4mm}
\centering
\begin{tabular}{ccc}
\hspace{-5mm}\subfloat[{\scriptsize {\em Facebook}}.]{\includegraphics[width=0.38\linewidth]{./figure/facebook-gi-link-eps-converted-to.pdf}\label{fig:time-fb-gi-link}} &
\hspace{-6mm}\subfloat[{\scriptsize{\em Pubmed}}.]{\includegraphics[width=0.38\linewidth]{./figure/pubmed-gi-link-eps-converted-to.pdf}\label{fig:time-pd-gi-link}} &
\hspace{-6mm}\subfloat[{\scriptsize{\em Flickr}}.]{\includegraphics[width=0.38\linewidth]{./figure/flickr-gi-link-eps-converted-to.pdf}\label{fig:time-fr-gi-link}}
\vspace{-2mm}
\end{tabular}
\caption{Effectiveness of $\mathsf{GreedyInit}$\xspace in Link Prediction.} \label{fig:time-gi-link}
\vspace{0mm}
\end{figure}
\begin{figure}[!t]
\vspace{-4mm}
\centering
\begin{tabular}{ccc}
\hspace{-5mm}\subfloat[{\scriptsize {\em Facebook}}.]{\includegraphics[width=0.38\linewidth]{./figure/facebook-gi-attr-eps-converted-to.pdf}\label{fig:time-fb-gi-attr}} &
\hspace{-6mm}\subfloat[{\scriptsize{\em Pubmed}}.]{\includegraphics[width=0.38\linewidth]{./figure/pubmed-gi-attr-eps-converted-to.pdf}\label{fig:time-pd-gi-attr}} &
\hspace{-6mm}\subfloat[{\scriptsize{\em Flickr}}.]{\includegraphics[width=0.38\linewidth]{./figure/flickr-gi-attr-eps-converted-to.pdf}\label{fig:time-fr-gi-attr}}
\vspace{-2mm}
\end{tabular}
\caption{Effectiveness of $\mathsf{GreedyInit}$\xspace in Attribute Inference.} \label{fig:time-gi-attr}
\vspace{0mm}
\end{figure}
\subsection{Node Classification}\label{sec:node-class}
Node classification predicts the node labels. Note that {\em Facebook}, {\em Google+} and {\em MAG} are multi-labelled, meaning that each node can have multiple labels. We first run $\mathsf{PANE}$\xspace (single thread), $\mathsf{PANE}$\xspace (parallel) and the competitors on the input attributed network $G$ to obtain their embeddings. Then we randomly sample a certain number of nodes (ranging from 10\% to 90\%) to train a linear support-vector machine (SVM) classifier \cite{cortes1995support} and use the rest for testing. $\mathsf{NRP}$\xspace, $\mathsf{PANE}$\xspace (single thread), and $\mathsf{PANE}$\xspace (parallel) generate a forward embedding vector $\mathbf{X}_f\xspace[v_i]$ and a backward embedding vector $\mathbf{X}_b\xspace[v_i]$ for each node $v_i\in V$. We therefore normalize the forward and backward embeddings of each node $v_i$ and concatenate them as the feature representation of $v_i$ to be fed into the classifier. Akin to prior work \cite{meng2019co,hou2019representation,ijcai2019-low}, we use Micro-F1 and Macro-F1 to measure node classification performance. We repeat each experiment 5 times and report the average performance.
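For illustration, this protocol can be sketched with \texttt{scikit-learn} as follows (a minimal single-label variant; for the multi-labelled datasets one would wrap the classifier in a one-vs-rest scheme, and Micro-F1/Macro-F1 would replace the plain accuracy below):
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def node_classification(Xf, Xb, labels, train_frac, seed=0):
    # normalize forward/backward embeddings, then concatenate per node
    nf = Xf / np.linalg.norm(Xf, axis=1, keepdims=True)
    nb = Xb / np.linalg.norm(Xb, axis=1, keepdims=True)
    feats = np.hstack([nf, nb])
    Xtr, Xte, ytr, yte = train_test_split(
        feats, labels, train_size=train_frac, random_state=seed)
    return LinearSVC().fit(Xtr, ytr).score(Xte, yte)  # mean accuracy
\end{verbatim}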
\figref{fig:acc-class} shows the Micro-F1 results when varying the percentage of nodes used for training from 10\% to 90\% ({\it i.e.},\xspace 0.1 to 0.9). The results of Macro-F1 are similar and thus omitted for brevity. Both versions of $\mathsf{PANE}$\xspace consistently outperform all competitors on all datasets, which demonstrates that our proposed solutions effectively capture the topology and attribute information of the input attributed networks.
Specifically, compared with the competitors, $\mathsf{PANE}$\xspace (single thread) achieves a remarkable improvement, up to $3.7\%$ on {\em Cora}, {\em Citeseer}, {\em Pubmed} and {\em Flickr}, and up to $11.6\%$ on {\em Facebook}. On the large graphs {\em Google+}, {\em TWeibo} and {\em MAG}, most existing solutions fail to finish within a week and thus their results are omitted. Furthermore, $\mathsf{PANE}$\xspace (single thread) outperforms $\mathsf{NRP}$\xspace by at least $3.4\%$ and $6\%$ on {\em Google+} and {\em TWeibo} as displayed in Figures \ref{fig:acc-class-google} and \ref{fig:acc-class-tweibo}, respectively. In addition, $\mathsf{PANE}$\xspace (single thread) and $\mathsf{PANE}$\xspace (parallel) are superior to $\mathsf{NRP}$\xspace with a significant gain up to $17.2\%$ on {\em MAG}. Over all datasets, $\mathsf{PANE}$\xspace (parallel) has similar performance to that of $\mathsf{PANE}$\xspace (single thread), while as shown in \secref{sec:exp-effi}, $\mathsf{PANE}$\xspace (parallel) is significantly faster than $\mathsf{PANE}$\xspace (single thread).
\subsection{Efficiency and Scalability}\label{sec:exp-effi}
\figref{fig:time-small} and \figref{fig:time:large} together report the running time required by each method on all datasets. The $y$-axis is the running time (seconds) in $\log$-scale. The reported running time does not include the time for loading datasets and outputting embedding vectors. We omit any methods with processing time exceeding one week.
Both versions of $\mathsf{PANE}$\xspace are significantly faster than all ANE competitors, often by orders of magnitude. For instance, on {\em Pubmed} in Figure \ref{fig:time-small}, $\mathsf{PANE}$\xspace takes 1.1 seconds and $\mathsf{PANE}$\xspace (single thread) requires 8.2 seconds, while the fastest ANE competitor $\mathsf{TADW}$\xspace consumes 405.3 seconds, meaning that $\mathsf{PANE}$\xspace (single thread) (resp. $\mathsf{PANE}$\xspace) is $49\times$ (resp. $368\times$) faster.
On large attributed networks including {\em Google+}, {\em TWeibo}, and {\em MAG}, most existing ANE solutions cannot finish within a week, while our proposed solutions $\mathsf{PANE}$\xspace (single thread) and $\mathsf{PANE}$\xspace are able to handle such large-scale networks efficiently.
$\mathsf{PANE}$\xspace is up to $9$ times faster than $\mathsf{PANE}$\xspace (single thread) over all datasets. For instance, on the {\em MAG} dataset, which has $59.3$ million nodes, when using $10$ threads, $\mathsf{PANE}$\xspace requires $11.9$ hours while $\mathsf{PANE}$\xspace (single thread) takes about five days, which indicates the power of our parallel techniques in \secref{sec:parallel}.
{ Figure \ref{fig:time-nb} displays the speedups of parallel $\mathsf{PANE}$\xspace over the single-thread version on {\em Google+} and {\em TWeibo} when varying the number of threads $n_b$ from $1$ to $20$.
As $n_b$ increases, parallel $\mathsf{PANE}$\xspace becomes much faster than single-thread $\mathsf{PANE}$\xspace, demonstrating the parallel scalability of $\mathsf{PANE}$\xspace with respect to $n_b$. Figures \ref{fig:time-k} and \ref{fig:time-eps} illustrate the running time of $\mathsf{PANE}$\xspace when varying space budget $k$ from $16$ to $256$ and error threshold $\epsilon$ from $0.001$ to $0.25$, respectively.}
In Figure \ref{fig:time-k}, when $k$ is increased from $16$ to $256$, the running time is quite stable and goes up slowly, showing the efficiency robustness of our solution.
In \figref{fig:time-eps}, the running time of $\mathsf{PANE}$\xspace decreases considerably when increasing $\epsilon$ in $\{0.001,0.005,0.015,0.05,0.25\}$. When $\epsilon$ increases from $0.001$ to $0.25$, the running time on {\em Google+} and {\em TWeibo} reduces by about $10$ times, which is consistent with our analysis in \secref{sec:parallel} that the running time of $\mathsf{PANE}$\xspace is linear in $\log\left({1}/{\epsilon}\right)$.
\section{Conclusion}\label{sec:ccl}
This paper presents $\mathsf{PANE}$\xspace, an effective solution for ANE computation that scales to massive graphs with tens of millions of nodes, while obtaining state-of-the-art result utility. The high scalability and effectiveness of $\mathsf{PANE}$\xspace are mainly due to a novel problem formulation based on a random walk model, a highly efficient and sophisticated solver, and non-trivial parallelization. Extensive experiments show that $\mathsf{PANE}$\xspace achieves substantial performance enhancements over state-of-the-art methods in terms of both efficiency and result utility. Regarding future work, we plan to develop GPU / multi-GPU versions of $\mathsf{PANE}$\xspace, and to adapt $\mathsf{PANE}$\xspace to heterogeneous graphs, as well as time-varying graphs where attributes and node connections change over time.
\section{Proofs}\label{sec:proof}
\subsection{Proof of \lemref{lem:papa}}
\begin{proof}
According to Line 3 in \algref{alg:rpapr}, we can see that
\begin{equation*}
\mathbf{R}\xspace_r=
\begin{bmatrix}
{\mathbf{P}\xspace_f}^{(0)}_1 & {\mathbf{P}\xspace_f}^{(0)}_2 & \cdots & {\mathbf{P}\xspace_f}^{(0)}_{n_b}
\end{bmatrix},
\end{equation*}
where ${\mathbf{P}\xspace_f}^{(0)}_1,\cdots,{\mathbf{P}\xspace_f}^{(0)}_{n_b-1}\in \mathbb{R}^{n\times \frac{d}{n_b}}$ and ${\mathbf{P}\xspace_f}^{(0)}_{n_b}\in \mathbb{R}^{n\times (d\%n_b)}$, and
\begin{equation*}
\mathbf{R}\xspace_c=
\begin{bmatrix}
{\mathbf{P}\xspace_b}^{(0)}_1 & {\mathbf{P}\xspace_b}^{(0)}_2 & \cdots & {\mathbf{P}\xspace_b}^{(0)}_{n_b}
\end{bmatrix},
\end{equation*}
where ${\mathbf{P}\xspace_b}^{(0)}_1,\cdots,{\mathbf{P}\xspace_b}^{(0)}_{n_b-1}\in \mathbb{R}^{n\times \frac{d}{n_b}}$ and ${\mathbf{P}\xspace_b}^{(0)}_{n_b}\in \mathbb{R}^{n\times (d\%n_b)}$. After $t$ iterations, by Lines 4-6 in \algref{alg:rpapr}, we have
\begin{align*}
{\mathbf{P}\xspace_f}^{(t)}_i&=\alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\ell}{\mathbf{P}\xspace_f}^{(0)}_i}\ \textrm{and}\\
{\mathbf{P}\xspace_b}^{(t)}_i&=\alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\top\ell}{\mathbf{P}\xspace_b}^{(0)}_i}.
\end{align*}
Thus, we can derive that
\begin{align*}
\mathbf{P}\xspace^{(t)}_f=\begin{bmatrix} {\mathbf{P}\xspace_f}^{(t)}_1 & \cdots & {\mathbf{P}\xspace_f}^{(t)}_{n_b}
\end{bmatrix}=\alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\ell}\mathbf{R}\xspace_r},\\
\mathbf{P}\xspace^{(t)}_b=\begin{bmatrix}
{\mathbf{P}\xspace_b}^{(t)}_1 & \cdots & {\mathbf{P}\xspace_b}^{(t)}_{n_b}
\end{bmatrix}=\alpha\sum_{\ell=0}^{t}{(1-\alpha)^{\ell}\mathbf{P}\xspace^{\top\ell}\mathbf{R}\xspace_c}.
\end{align*}
According to \iequref{eq:fwd-v-r} and \iequref{eq:bwd-v-r}, for every pair
$(v_i,r_j)\in V\times R$, $$\max\{0,\mathbf{P}\xspace_f[v_i,r_j]-\epsilon\}\le \mathbf{P}\xspace^{(t)}_f[v_i,r_j]\le \mathbf{P}\xspace_f[v_i,r_j]$$ $$\max\{0,\mathbf{P}\xspace_b[v_i,r_j]-\epsilon\}\le \mathbf{P}\xspace^{(t)}_b[v_i,r_j]\le \mathbf{P}\xspace_b[v_i,r_j].$$
By Lines 9-10 in \algref{alg:rpapr}, for the $i$-th block and every pair $(v_l,r_j)\in V\times R_i$,
\begin{align*}
{\widehat{\mathbf{P}\xspace}_{f}}^{(t)}[v_l,r_j]={\widehat{\mathbf{P}\xspace}_{f_i}}^{(t)}[v_l,r_j]&=\frac{{\mathbf{P}\xspace_f}^{(t)}_i[v_l,r_j]}{\sum_{v_h\in V}{{\mathbf{P}\xspace_f}^{(t)}_i[v_h,r_j]}}\\
&=\frac{\mathbf{P}\xspace^{(t)}_f[v_l,r_j]}{\sum_{v_h\in V}{\mathbf{P}\xspace^{(t)}_f[v_h,r_j]}},\\
{\widehat{\mathbf{P}\xspace}_{b}}^{(t)}[v_l,r_j]&=\frac{{\mathbf{P}\xspace_b}^{(t)}_i[v_l,r_j]}{\sum_{R_i\in \mathcal{R}}\sum_{r_h\in R_i}{{\mathbf{P}\xspace_b}^{(t)}_i[v_l,r_h]}}\\
&=\frac{\mathbf{P}\xspace^{(t)}_b[v_l,r_j]}{\sum_{r_h\in R}{\mathbf{P}\xspace^{(t)}_b[v_l,r_h]}}.
\end{align*}
By Lines 11-13 in \algref{alg:rpapr}, the results in \lemref{lem:papa} are now at hand.
\end{proof}
\subsection{Proof of \lemref{lem:smsvd}}
\begin{proof}
Let the outputs of \algref{alg:isvd} be $\mathbf{X}_f\xspace,\mathbf{X}_b\xspace,\mathbf{Y}\xspace,\mathbf{S}\xspace_f$ and $\mathbf{S}\xspace_b$, and the results returned by \algref{alg:split-merge} be $\widehat{\mathbf{X}\xspace}_f$,$\widehat{\mathbf{X}\xspace}_b$,$\widehat{\mathbf{Y}\xspace}$ and $\widehat{\mathbf{S}\xspace}_f,\widehat{\mathbf{S}\xspace}_b$. According to \cite{musco2015randomized}, $t=\infty$ implies that $\mathsf{RandSVD}$ produces the same factorized results as that returned by exact SVD. Therefore, $\mathbf{X}_f\xspace\cdot \mathbf{Y}\xspace^{\top}=\mathbf{F}\xspace^{\prime}, \mathbf{S}\xspace_f=\mathbf{0}, \mathbf{X}_b\xspace=\mathbf{B}\xspace^{\prime}\mathbf{Y}\xspace$ and $\mathbf{Y}\xspace$ is unitary, {\it i.e.},\xspace $\mathbf{Y}\xspace^{\top}\mathbf{Y}\xspace=\mathbf{I}$. This leads to $\mathbf{S}\xspace_b\mathbf{Y}\xspace = (\mathbf{X}_b\xspace\mathbf{Y}\xspace^{\top}-\mathbf{B}\xspace^{\prime})\mathbf{Y}\xspace=\mathbf{0}$.
On the other hand, we consider \algref{alg:split-merge}. Based on Lines 2-3 and Lines 5-6, we have $\mathbf{U}\xspace_i\mathbf{V}\xspace^{\top}_i=\mathbf{F}\xspace^{\prime}[V_i]$, $\mathbf{W}\xspace\widehat{\mathbf{Y}\xspace}^{\top}=\mathbf{V}\xspace$ and unitary matrix $\widehat{\mathbf{Y}\xspace}$, {\it i.e.},\xspace $\widehat{\mathbf{Y}\xspace}^{\top}\widehat{\mathbf{Y}\xspace}=\mathbf{I}$. Then by Line 8 and Line 10, we derive that
\begin{align*}
\mathbf{F}\xspace^{\prime}=\begin{bmatrix}
\mathbf{F}\xspace^{\prime}[V_1]\\
\mathbf{F}\xspace^{\prime}[V_2]\\
\vdots\\
\mathbf{F}\xspace^{\prime}[V_{n_b}]
\end{bmatrix}
&=
\begin{bmatrix}
\mathbf{U}\xspace_1 & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \mathbf{U}\xspace_2 & \cdots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{U}\xspace_{n_b}
\end{bmatrix}
\cdot
\begin{bmatrix}
\mathbf{V}\xspace^{\top}_1\\
\mathbf{V}\xspace^{\top}_2\\
\vdots\\
\mathbf{V}\xspace^{\top}_{n_b}
\end{bmatrix}\\
&=
\begin{bmatrix}
\mathbf{U}\xspace_1 & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \mathbf{U}\xspace_2 & \cdots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{U}\xspace_{n_b}
\end{bmatrix}
\cdot \begin{bmatrix}
\mathbf{W}\xspace_1\\
\mathbf{W}\xspace_2\\
\vdots\\
\mathbf{W}\xspace_{n_b}
\end{bmatrix}
\cdot \widehat{\mathbf{Y}\xspace}^{\top}\\
&=
\begin{bmatrix}
\widehat{\mathbf{X}\xspace}_f[V_1]\\
\widehat{\mathbf{X}\xspace}_f[V_2]\\
\vdots\\
\widehat{\mathbf{X}\xspace}_f[V_{n_b}]
\end{bmatrix}
\cdot \widehat{\mathbf{Y}\xspace}^{\top} = \widehat{\mathbf{X}\xspace}_f\cdot \widehat{\mathbf{Y}\xspace}^{\top},
\end{align*}
and thus $\widehat{\mathbf{S}\xspace}_f=\mathbf{0}$. In addition, according to Line 9 and Line 11, we have $\widehat{\mathbf{X}\xspace}_b=\mathbf{B}\xspace^{\prime}\widehat{\mathbf{Y}\xspace}$ and $\widehat{\mathbf{S}\xspace}_b\widehat{\mathbf{Y}\xspace}=(\widehat{\mathbf{X}\xspace}_b\widehat{\mathbf{Y}\xspace}^{\top}-\mathbf{B}\xspace^{\prime})\widehat{\mathbf{Y}\xspace}=\mathbf{0}$. The proof is complete.
\end{proof}
|
1,116,691,498,341 | arxiv | \section{Introduction}
The theory of Eisenstein series on finite dimensional Lie groups has had profound applications in number theory, geometry, and mathematical physics. To name one important example, Langlands' monograph {\em Euler Products} \cite{eulerproducts} gave meromorphic continuations to the new $L$-functions he discovered in their constant terms. This work not only led him to formulate his functoriality conjectures, but also initiated the Langlands-Shahidi method of constructing and analyzing $L$-functions. The method frequently obtains the most powerful results on analytic continuations of $L$-functions we presently know, but does not treat all $L$-functions; instead, it only applies when an underlying representation occurs in the adjoint action of the Levi component of a parabolic on its nilradical. Some of the most fascinating and important applications to number theory and the Langlands program come from maximal parabolic subgroups of exceptional groups. Unfortunately, these are finite in number and all $L$-functions under the purview of the Langlands-Shahidi method have already been treated.
It has been an intriguing possibility to expand the landscape of possible parabolic subgroups by considering infinite-dimensional Kac-Moody groups such as loop groups.
This setting features a vastly wider range of Levi adjoint actions (some of which we have identified and which will be the subject of a future paper), and thus the possibility of new $L$-functions.
This paper is a continuation of a series of papers \cite{ga:abs,ga:ragh,ga:ms1, ga:ms2, ga:ms3, ga:ms4,ga:zuck} by the first-named author on the loop Eisenstein series.
Loop Eisenstein series induced from quasi-characters on a minimal parabolic subgroup were shown to converge in a region
analogous to the shifted Weyl chamber established by Godement \cite[\S3]{go:lang} in the finite-dimensional setting. By developing an analog of the Maass-Selberg relations, they were then meromorphically continued to a larger region in \cite{ga:ms1, ga:ms2, ga:ms3, ga:ms4}. In contrast to the finite dimensional case, however, these minimal parabolic loop Eisenstein series cannot be meromorphically continued everywhere, and it was observed in \cite{ga:ragh} that there is a natural boundary or ``wall" beyond which the series cannot be continued.
Because the Levi components of parabolic subgroups of loop groups are finite dimensional, there are also \emph{cuspidal loop Eisenstein series} induced from cusp forms on these classical Levi components. In \cite{ga:zuck}, their convergence in the Godement-type region was deduced by a domination argument from the convergence of the minimal-parabolic loop Eisenstein series. Moreover, their extension up to the location of the aforementioned ``wall'' was also shown using the Maass-Selberg relations \cite{ga:ms1, ga:ms2, ga:ms3, ga:ms4}. Somewhat surprisingly, as was observed in \cite{ga:zuck}, the Maass-Selberg relations here consist of just a single holomorphic term, and consequently these cuspidal Eisenstein series have no poles in this region.
In this paper we show a much stronger result holds in the case of cuspidal loop Eisenstein series, at least those induced from the maximal parabolic whose Levi component comes from the underlying finite-dimensional root system. The following result analytically continues these series beyond the ``wall'':
\begin{nthm} \label{maininintro} Let $\phi$ be a spherical cusp form for a Chevalley group $G$ over ${\mathbb{Q}}$. Then the cuspidally-induced Eisenstein series $ E_{\phi, \nu}$ (\ref{cusp:es}) on the loop group of $G$ converges absolutely for any $\nu \in {\mathbb{C}}$ to an entire function of $\nu$. \end{nthm}
\noindent For example, this theorem applies to Eisenstein series on the loop group $E_9$ induced from cusp forms $\phi$ on $E_8$. By {\em spherical} we mean a cusp form which is $K$-fixed at all places of ${\mathbb{Q}}$. This restriction is undoubtedly not essential to our proof,
but is made because of an obstacle coming from the local theory of loop groups:~the usual definition of $K$-finite Eisenstein series (e.g., \cite[(6.3.1)]{shahidi}) relies on a matrix coefficient construction whose generalization to loop groups has not been developed. Thus the loop Eisenstein series induced from $K$-finite cusp forms have not even been defined, and so it is premature to study their convergence.
The $K$-finite condition at nonarchimedean places amounts to a congruence group condition, and hence a smaller group of summation in the Eisenstein series definition. Since our arguments involve dominating by larger, convergent sums, relaxing the $K$-fixed condition is therefore likely to make the convergence only easier.
The $K$-fixed condition at the
archimedean place also arises in a paper by Kr\"otz and Opdam \cite{Kr:opdam} on Bernstein's theorem that cusp forms decay exponentially.
Though those methods apply to $K$-finite cusp forms as well (e.g., in unpublished notes by those authors), there is no proof in the literature. These decay results imply our important ingredient Theorem~\ref{decay:body}. For completeness, and in order to allow our convergence argument to apply to $K$-finite Eisenstein series once their definition has been given, we have included a proof of Theorem~\ref{decay:body} for arbitrary $K$-finite forms in the Appendix. Together, this provides the full analytic machinery anticipated to be necessary to handle the convergence of the presently-undefined $K$-finite Eisenstein series. The appendix also discusses some related questions about decay estimates posed by string theorists.
A function-field analog of this result was announced by A.~Braverman and D.~Kazhdan in \cite[Theorem 5.2(3)]{bk:ad}.
Their proof relies on a geometric interpretation of the Eisenstein series unavailable over number fields. In fact, at
each fixed element in the appropriate symmetric space, their cuspidal Eisenstein series is a finite sum (independent of the spectral parameter). This finiteness crucially uses the classical theorem of G.~Harder that cusp forms vanish outside of a compact subset of the fundamental domain. This vanishing outside of a compact set is far from true in the number field case, where such forms instead merely decay rapidly in the cusps. While rapid (e.g., polynomial) decay statements are well-known, it was somewhat striking to us that the classical decay results were insufficient for our purposes (see Remark~\ref{remark731} for further details). Instead we require the {\em exponential decay} result Theorem~\ref{mainthm} (or its equivalent statement Theorem~\ref{decay:body}) mentioned above to obtain convergence in the number field setting.
Another key ingredient in our work is Theorem~\ref{iwineq} and its Corollary~\ref{cor:iwineq:2}, a growth estimate on the diagonal Iwasawa components in the classical and central directions as one varies over a discrete group. In the function field setting, this estimate -- coupled with Harder's compact support theorem -- is enough to deduce the Braverman-Kazhdan result. In the number field setting, the analysis is more involved and the entirety result we prove here is instead shown by establishing a reduction to the half-plane convergence result \cite{ga:zuck} recalled in Theorem~\ref{deg:es}.
\vspace{0.15in}
\emph{Acknowledgements:} We would like to thank Alexander Braverman for informing us of his results with David Kazhdan and of Bernstein's exponential decay estimates. We would also like to thank William Casselman, Michael B.~Green, Bernhard Kr\"otz, and Wilfried Schmid for their helpful comments about rapid decay.
\section{Notation} \label{section:notation}
In this section we shall recall some background material which is used later in the paper.
Part 2A concerns the finite-dimensional situation, part 2B discusses affine loop groups, part 2C focuses on the Weyl group, and part 2D summarizes some useful decompositions. Finally, part 2E redescribes some of this material from the adelic point of view. A general reference for most of this material is Kac's book~\cite{kac}.
\subsection*{A. Finite Dimensional Lie Algebras and Groups}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{finrtsystems} Let $\bf{C}$ be an irreducible, classical, Cartan matrix of size $\ell \times \ell.$ Let $\mf{g}$ be the corresponding real, split, simple, finite-dimensional Lie algebra of rank $\ell.$ Choose a Cartan subalgebra $\mf{h} \subset \mf{g}$ and denote the set of roots with respect to $\mf{h}$ by $\Delta.$ Let $\Pi= \{ \alpha_1, \ldots, \alpha_{\ell} \}\subset {\frak h}^*$ denote a fixed choice of positive simple roots, and $\Pi^{\vee} = \{ h_1, \ldots, h_{\ell} \}\subset {\frak h}$ the corresponding set of simple coroots. We let $\Delta_{+}$ and $\Delta_{-}$ be the sets of positive and negative roots, respectively. For a root $\alpha \in \Delta$ we denote by $h_{\alpha}$ the corresponding coroot. Let $(\cdot, \cdot)$ denote the Killing form on $\mf{h}$ normalized so that $(\alpha_0, \alpha_0)=2$, where $\alpha_0$ is the highest root of $\mf{g}$, and write $\langle \cdot,\cdot \rangle$ for the natural pairing between ${\frak h}^*$ and $\frak h$.
Let $W$ be the corresponding Weyl group of the root system defined above and $Q$ and $Q^{\vee}$ the root and coroot lattices, which are spanned over $\mathbb{Z}$ by $\Pi$ and $\Pi^{\vee}$, respectively. We define $\Lambda \supset Q$ and $\Lambda^{\vee}\supset Q^\vee$ to be the weight and coweight lattices, respectively. The fundamental weights will be denoted by $\omega_i$ for $i=1, \ldots, \ell$; recall that these are defined by the conditions that
\begin{equation}\label{new2.1}
\langle \omega_i, h_j \rangle \ \ = \ \ \left\{
\begin{array}{ll}
1\,, & i\,=\,j\, \\
0\,, & i\,\neq\,j\,
\end{array}
\right. \ \ \
\text{ for ~} 1 \,\le \, i,j\, \le \, \ell \,.
\end{equation}
We also define the fundamental coweights $\omega_j^{\vee} \in \mf{h}$ for $j=1, \ldots, \ell$ by the conditions that
\begin{equation}\label{new2.2}
\langle \alpha_i, \omega^{\vee}_j \rangle \ \ = \ \ \left\{
\begin{array}{ll}
1\,, & i\,=\,j\, \\
0\,, & i\,\neq\,j\,
\end{array}
\right. \ \ \ \text{ for ~} 1 \,\le \, i,j\, \le \, \ell \,.
\end{equation}
Note that we have \be{cowt:corts} ( \omega_i^{\vee}, h_j ) =\ \ \left\{
\begin{array}{ll}
\f{2}{(\alpha_i,\alpha_i)}\,, & i\,=\,j\, \\
\ \ \ \ 0\,, & i\,\neq\,j\,
\end{array}
\right. \ \ \ \text{ for ~} 1 \,\le \, i,j\, \le \, \ell \,. \end{eqnarray}
As usual we set \be{rho} \rho \ \ = \ \ \smallf{1}{2} \sum_{\alpha \, \in \, \Delta_+} \alpha
\ \ = \ \ \sum_{j\,=\,1}^\ell\omega_j
\,, \end{eqnarray}
which satisfies the condition \be{rho:h_i} \langle \rho, h_i \rangle \ \ = \ \ 1\; \text{ for ~} 1\,\le \, i \, \le \, \ell \,.\end{eqnarray} We record here the following elementary statement which will be useful later. It is equivalent to the positivity of the inverse of the Cartan matrix.
\begin{nlem}[\cite{ragh}*{p.111, Lemma 6}] \label{ragh} Each fundamental weight $\omega\in\{\omega_1,\ldots,\omega_\ell\}$ can be written as a strictly positive linear combination of positive simple roots $\alpha_1,\ldots,\alpha_\ell$.
\end{nlem}
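\noindent {\em Example:} To illustrate Lemma~\ref{ragh}, consider the root system of type $A_2$, where the Cartan matrix and its inverse are
\begin{equation*}
{\bf C} \ \ = \ \ \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} \ \ \ \ \text{and} \ \ \ \ {\bf C}^{-1} \ \ = \ \ \smallf{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.
\end{equation*}
Accordingly $\omega_1 = \smallf{2}{3}\,\alpha_1 + \smallf{1}{3}\,\alpha_2$ and $\omega_2 = \smallf{1}{3}\,\alpha_1 + \smallf{2}{3}\,\alpha_2$ (as one checks directly against (\ref{new2.1})), and the coefficients are indeed strictly positive.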
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{sd1} Let $G$ be a finite dimensional, simple, real Chevalley group with Lie algebra $\frak g.$ Let $B \subset G$ be a Borel subgroup with unipotent radical $U$ and split torus $T.$ Let $A \subset T$ be the connected component of the identity of $T$, and assume (as we may) that $B$ is arranged so that $\Lie(A)=\frak h$ and that $\Lie(U)$ is spanned by the root vectors for $\D_+$. Then $G$ has a maximal compact subgroup $K$ such that the Iwasawa decomposition
\be{iw:fd} G \ \ = \ \ U \, A\, K \end{eqnarray}
holds with uniqueness of expression.
We denote the natural projection onto the $A$-factor by the map $g\mapsto \iw_A(g)$.
Any linear functional $\lambda: \mf{h} \rightarrow {\mathbb{C}}$ gives rise to a
quasi-character $a \mapsto a^{\lambda}$ of $A$ by the formula
\be{atolambdef} a^{\lambda} \ \ := \ \ \exp( \langle \lambda , \ln a \rangle ). \end{eqnarray}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{sd2} Let $\Gamma \subset G$ denote the stabilizer of the fixed Chevalley lattice defining $G.$ We then use the following notion of Siegel set as defined in \cite{bor:groupes}. Let $U_{\mc{D}} \subset U$ denote a fundamental domain for the action of $\Gamma \cap U$ on $U.$ For any $t > 0$ we set \be{Atdef} A_t \ \ := \ \ \{ a \in A \, |\, a^{\alpha_i} > t \, \ \text{for each}\ 1\,\le\,i \,\le\,\ell \}\,. \end{eqnarray} A Siegel set has the form $\mf{S}_t = U_\mc{D} A_t K$.
The following is shown in \cite{bor:groupes}.
\begin{nprop} Suppose that $t < \sqrt{3}/2.$ Then for every $g \in G$ there exists $\gamma \in \Gamma$ such that $\gamma g \in \mf{S}_t$, i.e., the $\G$-translates of ${\frak S}_t$ cover $G$. \end{nprop}
\tpoint{Remark}\label{remark231} One of the key points in the proof of this proposition is the following minimum principle (which will be important for us later). Let $V^\rho$ be the highest weight representation of $G$ corresponding to the dominant integral weight $\rho: \mf{h} \rightarrow {\mathbb{C}}$ from (\ref{rho}).
This representation comes equipped with a $K$-invariant norm $|| \cdot ||$. Let $v_{\rho} \in V^{\rho}$ denote a highest weight vector. Then for any $g \in G,$ it is not hard to see that the function
\begin{equation}\label{Psig}
\aligned
\Psi_g\, : \ \Gamma \ \, &\rightarrow \ {\mathbb{R}}_{>0}\, , \\ \gamma \, \ &\mapsto \ || \gamma \, g \, v_{\rho} ||
\endaligned
\end{equation}
achieves a minimum value (see \cite{bor:groupes}). Furthermore, if $\gamma \in \Gamma$ minimizes $\Psi_{g}$, then $\gamma g \in \mf{S}_t.$
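\noindent {\em Example:} For instance, when $G = SL_2({\mathbb{R}})$ and $\Gamma = SL_2({\mathbb{Z}})$ one has $\rho = \omega_1$, so that $V^{\rho}$ may be taken to be the standard representation ${\mathbb{R}}^2$ with highest weight vector $v_{\rho} = e_1$ and $||\cdot||$ the Euclidean norm (which is invariant under $K = SO(2)$). In this case $\Psi_g(\gamma)$ is simply the length of the first column of the matrix $\gamma g$, and the minimum principle asserts that any $\gamma$ minimizing this column length moves $g$ into a Siegel set.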
\subsection*{B. Affine Lie Algebras }
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} Let $\wh{\bf{C}}$ denote the $(\ell+1)\times (\ell+1)$ affine Cartan matrix corresponding to $\bf C,$ and $\hf{g}$
the one-dimensional central extension of the loop algebra corresponding to the finite dimensional Lie algebra $\frak{g}$
associated to {\bf C}.
The extension \begin{equation}\label{new2.4.1}
\hf{g}^e \ \ = \ \ \hf{g} \,\oplus \, {\mathbb{R}} \mathbf{D}
\end{equation}
of $\hat{\frak{g}}$ by the \emph{degree derivation} $\mathbf{D}$ is the (untwisted) affine Kac-Moody algebra associated to $\wh{\bf{C}}$ (see \cite[Chapters~6 and 7]{kac}).
Let $\hf{h} \subset \hf{g}$ be a Cartan subalgebra and define the extended Cartan subalgebra \be{h:ext} \hf{h}^e \ \ := \ \ \hf{h} \, \oplus \, {\mathbb{R}} \mathbf{D} \,. \end{eqnarray}
Let $(\hf{h}^e)^*$ denote the (real) algebraic dual of $\hf{h}^e$ and continue to denote the natural pairing as \be{dualpair} \langle \cdot, \cdot \rangle \, :\, (\hf{h}^e)^* \times \hf{h}^e \ \rightarrow \ {\mathbb{R}}\,. \end{eqnarray}
The set of simple affine roots will be denoted by \be{simp:aff:roots} \wh{\Pi} \ \ = \ \ \{ a_1, \ldots, a_{\ell+1} \} \ \ \subset \ \ (\hf{h}^e)^*\,. \end{eqnarray} Similarly, we write \be{simp:aff:coroots} \wh{\Pi}^{\vee} \ \ = \ \ \{ h_1, \ldots, h_{\ell+1} \} \ \ \subset \ \ \hf{h} \ \ \subset \ \ \hf{h}^e \end{eqnarray} for the set of simple affine coroots. Note that the simple roots $a_i: \hf{h}^e \rightarrow {\mathbb{R}}$ satisfy the relations\begin{equation}\label{new2.16}
\aligned \langle a_i , \mathbf{D} \rangle & \ \ = \ \ 0 \ \ \text{ for } \ i \, =\, 1,\, \ldots, \,\ell \\ \text{and} \ \ \ \ \ \
\langle a_{\ell+1} , \mathbf{D} \rangle &\ \ = \ \ 1\,.
\endaligned
\end{equation}
A more explicit description of the $a_i$ will be given in section~\ref{sec:2.5}.
We define the affine root lattice as $\wh{Q} = \mathbb{Z} a_1 + \cdots + \mathbb{Z} a_{\ell+1}$
and the affine coroot lattice as $\wh{Q}^{\vee} = \mathbb{Z} h_1 + \cdots + \mathbb{Z} h_{\ell+1}$.
We shall denote the subset of non-negative integral linear combinations of the $a_i$ (respectively, $h_i$) as $\wh{Q}_+$ (respectively, $\wh{Q}^{\vee}_+$). The integral weight lattice is defined as
\begin{equation}\label{integralweightlatticedef}
\wh{\Lambda} \ \ := \ \ \{ \lambda \in \hf{h}^* \ | \ \langle \lambda, h_i \rangle \, \in \, \mathbb{Z} \, \ \text{ for } \, i\,=\, 1, \ldots, \ell+1 \}\,.
\end{equation} We regard $\wh{\Lambda}$ as a subset of $(\hf{h}^e)^*$ by declaring that \be{2.20} \langle \lambda , \mathbf{D} \rangle \ \ = \ \ 0 \, \ \ \ \text{for} \ \ \lambda \,\in\,\wh{\Lambda}\,. \end{eqnarray} The lattice $\wh{\Lambda}$ is spanned by the fundamental affine weights $\Lambda_1, \ldots, \Lambda_{\ell+1},$ which are defined by the conditions that \be{aff:wts} \langle \Lambda_i, h_j \rangle \ \ = \ \ \left\{
\begin{array}{ll}
1\,, & i\,=\,j\, \\
0\,, & i\,\neq\,j\,
\end{array}
\right. \ \ \
\text{ for ~} 1 \,\le \, i,j\, \le \, \ell+1 \,.\end{eqnarray}
Note that the dual space $(\hf{h}^e)^*$ is spanned by $a_1, a_2, \ldots, a_{\ell+1}, \Lambda_{\ell+1}.$
For any $a\in \wh{Q}$ define
\begin{equation}\label{rootspacedef}
\hf{g}^{a} \ \ := \ \ \{ x \in \hf{g} \ |\ [h, x] \,= \,\langle a, h \rangle x\, , \ \text{ for all } h \in \hf{h} \}\,.
\end{equation}
The set of nonzero $a\in \wh{Q}$ such that $\dim \hf{g}^{a} > 0$ will be called the roots of $\hf{g}$ and denoted by $\wh{\Delta}.$
The sets $\wh{\Delta}_+:= \wh{\Delta} \cap \wh{Q}_+$ and $\widehat{\Delta}_{-}:= \widehat{\Delta} \cap ( - \wh{Q}_{+})$ will be called the sets of positive and negative affine roots, respectively. We have that $\wh{\Delta}=\wh{\Delta}_+\sqcup \wh{\Delta}_-$, i.e., every element in $\widehat{\Delta}$ can be written as a linear combination of elements from $\wh{\Pi}$ with all positive integral or all negative integral coefficients.
For each $i \in \{ 1, 2, \ldots, \ell+1 \}$ we define the reflection $w_i: (\hf{h}^e)^* \rightarrow (\hf{h}^e)^*$ by the formula \be{sim:ref} w_i : \lambda \ \mapsto \ \lambda \, - \, \langle \lambda, h_i \rangle \,a_i \ \ \ \ \text{ for } \ \ \ \lambda \, \in\, (\hf{h}^e)^*\,. \end{eqnarray} The Weyl group $\wh{W} \subset \operatorname{Aut}((\hf{h}^e)^*)$ is the group generated by the elements $w_1,\ldots,w_{\ell+1}$. The dual action of $\wh{W}$ on $\hf{h}^e$ is defined by the formula
\be{affw:hact} \ \langle \lambda, w \cdot h \rangle \ \ = \ \ \langle w^{-1} \lambda, h \rangle \ \ \ \ \text{ for all } \ \lambda \,\in\, (\hf{h}^e)^* \, , \ h \,\in \,\hf{h}^e \, , \ \text{and} \ w \,\in \,\wh{W}\,. \end{eqnarray}
The roots decompose as
\begin{equation}\label{DDpDm}
\wh{\Delta} \ \ = \ \ \ \widehat{\Delta}_W \ \sqcup \ \wh{\Delta}_I \,,
\end{equation}
where $\widehat{\Delta}_W$ (known as the ``real roots'' or ``Weyl roots'') is the $\wh{W}$-orbit of $\wh{\Pi}$, and $\wh{\Delta}_I$ (known as the ``imaginary roots'') is its complement in $\wh{\Delta}$. These sets will be described explicitly in (\ref{deltawhat})-(\ref{deltaihat}) below. Each imaginary root is fixed by $\wh{W}$. The space $\hf{g}^{a}$ is 1-dimensional for $a\in \widehat{\Delta}_W$ and is $\ell$-dimensional for $a\in\widehat{\Delta}_I$.
Coroots of elements
$a \in \widehat{\Delta}_W$ can be defined by the formula
\begin{equation}\label{gencorootdef}
h_a \ \ := \ \ w^{-1} h_j \, \in \, \hf{h}\, ,
\end{equation}
where $w$ is an element of $\wh{W}$ such that $w a =a_j$ (this is shown to be well-defined in \cite[\S5.1]{kac}).
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{sec:2.5} We shall now give a more concrete description of the affine roots and coroots in terms of the underlying finite dimensional Lie algebra $\mf{g}$. It is known that $\hf{g}$ is a one-dimensional central extension of the \emph{loop algebra} of the finite dimensional Lie algebra $\mf{g}.$ We denote this one-dimensional center by ${\mathbb{R}} \mathbf{c},$ where $\mathbf{c} \in \hf{h}.$
The Cartan subalgebra in the extended affine algebra $\hf{g}^e$ can then be written as \be{extended:cartan:c} \hf{h}^e \ \ = \ \ {\mathbb{R}} \mathbf{c} \, \oplus \, \mf{h} \, \oplus \, {\mathbb{R}} \mathbf{D}\,, \end{eqnarray}
in which we have assumed (as we may) that $\frak h$ coincides with the fixed Cartan subalgebra of $\frak g$ chosen in section~\ref{finrtsystems}. The finite dimensional roots $\alpha \in \Delta$ can then be extended to elements of $(\hf{h}^e)^*$ by stipulating that
\be{2.28} \langle \alpha, \mathbf{c} \rangle \ = \ \langle \alpha, \mathbf{D} \rangle \ = \ 0 \ \ \ \text{ for each } \ \alpha \, \in \, \Delta \, . \end{eqnarray}
In particular the element $\rho$ defined in (\ref{rho}), which for us always represents an object from the classical group, extends to an element of $(\hf{h}^e)^*$ that is trivial on $\mathbf{c}$ and $\mathbf{D}$.
We may then identify the first $\ell$ simple roots of the affine Lie algebra with those of its classical counterpart:
\be{a:alpha}a_i \ \ = \ \ \alpha_i \ \text{ for } \ i =1, 2, \ldots, \ell\,. \end{eqnarray} We likewise extend the fundamental weights $\omega_j$ for $j =1, \ldots, \ell$ to elements of $(\hf{h}^e)^*$ by setting \be{omega:ext} \langle \omega_j , \mathbf{c} \rangle \ = \ \langle \omega_j, \mathbf{D} \rangle \ = \ 0\,. \end{eqnarray} Also, we can identify the affine coroots $h_1, h_2, \ldots, h_{\ell}$ with the corresponding classical coroots defined in \S\ref{finrtsystems} under the same name. It remains to describe both $a_{\ell+1}$ and $h_{\ell+1}$ in terms of the underlying finite-dimensional root data.
Let $\iota \in \wh{\Delta}$ be the minimal positive imaginary root. It is characterized as the unique linear map $\iota \in (\hf{h}^e)^*$ satisfying the condition that
\begin{equation}\label{iotaX}
\aligned \langle \iota, X \rangle & \ \ = \ \ 0 \ \ \ \ \text{ for } \ X \, \in \,{\mathbb{R}} \mathbf{c} \oplus \mf{h} \\
\text{and} \ \ \ \ \ \ \langle \iota , \mathbf{D} \rangle & \ \ = \ \ 1\,. \endaligned
\end{equation}
We then have the following explicit description of the real and imaginary roots of $\hf{g}^e$:
\be{affine:roots}
\widehat{\Delta}_W & = & \ \ \{ \alpha \, + \, n \,\iota \,\mid \, \alpha \, \in \, \Delta\,, \ n \, \in \, {\mathbb{Z}} \} \label{deltawhat} \\
\wh{\Delta}_I \ & = & \ \ \{n\,\iota \,|\, n\,\neq\,0 \,,\, n \in \, {\mathbb{Z}} \}\,. \label{deltaihat}
\end{eqnarray}
The classical roots $\Delta$ can be regarded as the subset of $\wh{\Delta}_W$ with $n=0$ in this parametrization. One then has \be{newroot} a_{\ell+1} \ \ = \ \ -\, \alpha_0 \, + \, \iota\,, \end{eqnarray}
where we recall that $\alpha_0$ denotes the highest root of the underlying finite-dimensional root system.
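\noindent {\em Example:} Consider the case $\mf{g} = \mf{sl}_2$, i.e., $\ell = 1$ and $\wh{\bf C}$ of type $A_1^{(1)}$. Here the highest root is $\alpha_0 = \alpha_1$, so (\ref{newroot}) reads $a_2 = -\alpha_1 + \iota$, and (\ref{deltawhat})-(\ref{deltaihat}) become
\begin{equation*}
\widehat{\Delta}_W \ \ = \ \ \{ \pm\,\alpha_1 + n\,\iota \ | \ n \,\in\, {\mathbb{Z}}\} \ \ \ \ \text{and} \ \ \ \ \wh{\Delta}_I \ \ = \ \ \{ n\,\iota \ | \ n\,\in\,{\mathbb{Z}}\,,\ n \,\neq\, 0 \}\,,
\end{equation*}
the positive roots being $\alpha_1 + n\iota$ ($n \geq 0$), $-\alpha_1 + n\iota$ ($n \geq 1$), and $n\iota$ ($n \geq 1$). We shall return to this rank one example several times below for illustration.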
We shall now give a similar description of $h_{\ell+1}.$ To do so, we recall from \cite[(6.2.1)]{kac} that one can define a symmetric, non-degenerate, invariant bilinear form $(\cdot | \cdot)$ on $\hf{h}^e.$ The form is first defined on $\hf{h}$ in terms of certain labels of the affine Dynkin diagram (coming from the coefficients of $\a_0$ when expanded as a sum of the finite simple roots), and then is extended to all of $\hf{h}^e$ by setting
\begin{equation}\label{2.35}
\aligned
(h_i \mid \mathbf{D}) & \ \ = \ \ 0\,, \ \ \ \text{ for } \ i\,=\, \, 1, \ldots, \ell\,, \\ (h_{\ell+1} \mid \mathbf{D} ) & \ \ = \ \ 1\,, \\
\text{and} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
(\mathbf{D} \mid \mathbf{D}) &\ \ = \ \ 0\,. \endaligned
\end{equation}
The form $(\cdot \mid \cdot)$ also induces a symmetric bilinear form on $(\hf{h}^e)^*$, which we continue to denote by $(\cdot \mid \cdot)$, and which has the following characterization:
\begin{equation}\label{norm:affkilling}
\aligned (a_i \mid a_j) & \ \ = \ \ (\alpha_i, \alpha_j)\,, \ \ \ \text{ for } \ i, j \, \in \, \{ 1, \ldots, \ell \}\,, \\
(a_i \mid a_{\ell + 1}) & \ \ = \ \ (\alpha_i, - \alpha_0)\,, \ \ \text{ for } \ i\,\in\,\{1, \ldots, \ell\} \, , \\
\text{and} \ \ \ \ \ \ \ (a_{\ell+1} \mid a_{\ell+1}) & \ \ = \ \ (\alpha_0, \alpha_0) \ \ = \ \ 2 \,. \endaligned
\end{equation} It is easy to see that the above equations imply that \be{norm:affkilling:iota} (\iota \mid \iota) \ = \ 0 \ \text{ and } \ (\iota \mid a_i ) \ = \ 0 \text{ for } i=1, \ldots, \ell. \end{eqnarray} (Indeed, writing $\iota = \alpha_0 + a_{\ell+1}$ as in (\ref{newroot}) and expanding by bilinearity using (\ref{norm:affkilling}), one finds $(\iota \mid \iota) = (\alpha_0, \alpha_0) - 2\,(\alpha_0, \alpha_0) + (\alpha_0, \alpha_0) = 0$ and $(\iota \mid a_i) = (\alpha_i, \alpha_0) + (\alpha_i, -\alpha_0) = 0$.)
Note that together with the relations
\begin{equation}\label{2.37}
\aligned (\Lambda_{\ell+1} \mid \Lambda_{\ell+1} ) & \ \ = \ \ 0\,, \\
(a_i \mid \Lambda_{\ell+1} ) & \ \ = \ \ 0 \,, \ \ \ \text{ for } i =1, \ldots, \ell\,, \\
\text{and} \ \ \ \ \ \ \ (a_{\ell+1} \mid \Lambda_{\ell+1} ) &\ \ = \ \ 1\,, \endaligned
\end{equation}
(\ref{norm:affkilling}) and (\ref{norm:affkilling:iota}) completely specify the form $(\cdot | \cdot)$ on $(\hf{h}^e)^*$ (and hence $\hf{h}^e$). We define normalized coroots by the formula
\be{hprimeh}
h'_{a_i} \ \ := \ \ \smallf{ (a_i| a_i)}{2}\,h_i \, , \ \ i\,\in\,\{1,\,\ldots,\,\ell+1\}\,. \end{eqnarray} More generally for
\begin{equation}\label{2.39}
b \ \ = \ \ \sum_{i\,=\,1}^{\ell+1} \kappa_i \,a_i
\end{equation}
we set
\begin{equation}\label{2.40}
h'_b \ \ := \ \ \sum_{i\,=\,1}^{\ell+1} \kappa_i \,h'_{a_i}\,.
\end{equation} Using (\ref{a:alpha}) and (\ref{newroot}) these conventions give a definition for $h'_{\iota}$.
For any real root $b$ we have defined the corresponding coroot $h_b$ in (\ref{gencorootdef}) in terms of the Weyl group action. One then has that \be{h:h'} h'_b \ \ = \ \ \smallf{ (b|b) }{2}\,h_b \end{eqnarray} (see for example \cite[(5.1.1)]{kac}).
We now set
\begin{equation}\label{2.41}
h_{\iota} \ \ := \ \ \smallf{2}{(\alpha_0, \alpha_0)}\, h'_{\iota} \ \ = \ \ h'_{\iota}\,,
\end{equation}
using $(\alpha_0,\alpha_0)=2$. Suppose $a = \alpha + n \iota \in \wh{\Delta}_W$ with $\alpha \in \Delta$ and $n \in \mathbb{Z}$ as in (\ref{deltawhat}). Then in fact \begin{equation} \label{h_a:h'} h_a \ \ = \ \ \smallf{(\alpha \mid \alpha)}{(a \mid a)} \,h_{\alpha} \ + \ \smallf{2}{ (a \mid a) } \, n \,h_{\iota}\, \ \ = \ \ h_{\alpha} \ + \ \smallf{2}{ (\alpha \mid \alpha) } \, n \,h_{\iota}\,,
\end{equation} where in the second equality we have used the fact that $(a \mid a) = ( \alpha \mid \alpha)$, a consequence of (\ref{norm:affkilling:iota}). In particular we also have the formula\be{newcoroot} h_{\ell+1} \ \ = \ \ h_{- \alpha_0} \ + \ h_{\iota}\,, \end{eqnarray}
in analogy to (\ref{newroot}).
One may check that $h_{\iota}$ is also a generator for the one-dimensional center ${\mathbb{R}} {\bf c}$ of $\hf{g}^e$ and that we have the direct
sum decompositions
\be{extended:cartan} \hf{h}^e \ \ = \ \ {\mathbb{R}} h_\iota \, \oplus \, \mf{h} \, \oplus \, {\mathbb{R}} \mathbf{D}\ \ \ \ \text{and} \ \ \ \
\hf{h} \ \ = \ \ {\mathbb{R}} h_\iota\,\oplus\,\mf{h}
\end{eqnarray} similar to (\ref{extended:cartan:c}).
This decomposition of $\hf{h}^e$ will be frequently used later in this paper.
\subsection*{C. Lemmas on the Affine Weyl Group}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{sec:What} The affine Weyl group $\wh{W}$ was defined above as the group generated by the reflections $w_1,\ldots,w_{\ell+1}$ from (\ref{sim:ref}). It also has a more classical description as the semidirect product \be{affweyl:semidirect} \wh{W} \ \ = \ \ W \ltimes Q^{\vee}. \end{eqnarray}
More concretely, the elements $b$ of the coroot lattice $Q^{\vee}$ correspond to translations $T_b$ in the Weyl group $\wh{W}.$ Recall that $T_b$ fixes $\iota$, as do all elements in $\wh{W}$. The action of $T_b$ on $\l\in \operatorname{Span}(a_1,\ldots,a_\ell)$ is given by the formula
\begin{equation}\label{Tbdef}
T_b\,:\,\l \ \mapsto \ \l\,+\,\langle \l, b \rangle \iota \,.
\end{equation} It is also possible to obtain a general formula for the action of $T_b$ on an element in $(\hf{h}^e)^*,$ though we will not need it here.
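\noindent {\em Example:} In type $A_1^{(1)}$, for instance, take $b = m\,h_1$ with $m \in {\mathbb{Z}}$. Then (\ref{Tbdef}) gives
\begin{equation*}
T_{m h_1}\, : \ \alpha_1 \ \mapsto \ \alpha_1 \ + \ \langle \alpha_1, m\,h_1 \rangle\, \iota \ \ = \ \ \alpha_1 \ + \ 2m\,\iota\,.
\end{equation*}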
We shall now state
a formula for the action of $\wh{W}$ on $\hf{h}^e$, which we shall make frequent use of later.
The general element of $\hf{h}^e$ can be written in terms of the decomposition (\ref{extended:cartan}) as $m h_{\iota} + h + r \mathbf{D}$. If we use (\ref{affweyl:semidirect}) to factor $w \in \wh{W}$ as $w= \t{w} T_b$ for some $\t{w} \in W$ and $b \in Q^{\vee},$ then we have
\be{affw:actionandcenter} w \cdot \left( m h_{\iota} \, + \, h \, + \, r \,\mathbf{D}\right ) \ \ = \ \ \left[-\,\frac{r\,(b, b)}{2} \, + \, (h, b) \, + \, m\right] h_{\iota} \ - \ r \, \t{w}(b) + \t{w}(h) \ + \ r\, \mathbf{D} \end{eqnarray}
(see \cite[p. 309]{ga:ragh}). Note that in particular $\wh{W}$ fixes $h_\iota$, $\iota$, and $\mathbf{c}$. Combined with the fact that (\ref{gencorootdef}) is well-defined, this shows that the coroots satisfy
\begin{equation}\label{weyloncoroot}
w\,h_a \ \ = \ \ h_{wa}
\end{equation}
for any $w\in \wh{W}$ and $a\in\wh{\Delta}$.
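\noindent {\em Example:} To illustrate (\ref{affw:actionandcenter}) in type $A_1^{(1)}$, take $w = T_b$ with $b = n\,h_1$, $n \in {\mathbb{Z}}$ (so that $\t{w} = 1$ and $(b,b) = 2n^2$), applied to the element $r\,\mathbf{D}$, i.e., with vanishing $h_\iota$- and $\mf{h}$-components. Then
\begin{equation*}
T_{n h_1} \cdot \left( r\,\mathbf{D} \right) \ \ = \ \ -\,r\,n^2\, h_{\iota} \ - \ r\,n\,h_1 \ + \ r\,\mathbf{D}\,,
\end{equation*}
so for $r > 0$ the $h_\iota$-coordinate decreases quadratically in $n$ while the classical component moves only linearly. Tradeoffs of exactly this type are quantified in section~\ref{section:iwasawa}.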
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} For each $w \in \wh{W}$ we denote by $\ell(w)$ the length of $w$, i.e., the minimal length of a word in $w_1,\ldots,w_{\ell+1}$ which represents $w$. We then have the following estimate:
\begin{nlem} \label{l(w):ineq} Let $\|\cdot\|$ be an arbitrary norm on $\frak h$. Then there exist constants $E,\, E' > 0$ depending only on the (finite-dimensional) root system $\Delta$ and the choice of norm $\|\cdot\|$ such that
\begin{equation}\label{new2.49}
\ E' \,|| b ||\, -\, \#(\Delta_+) \, \ \ \leq \ \ \ell(w) \ \ = \ \ \ell(w^{-1}) \ \ \leq \
\ E \,|| b ||\, +\, \#(\Delta_+) \,,
\end{equation}
for any element $w \in \wh{W}$ of the form $w=T_b \t{w}$ or $w=\t{w}T_b$, where $b \in Q^{\vee}$ and $\t{w} \in W.$
\end{nlem}
\begin{proof}
Assume that $w$ has the form $T_b\t{w}$ (the case $w=\t{w}T_b$ then follows, since $\ell(w)=\ell(w^{-1})$, $(\t{w}\,T_b)^{-1}=T_{-b}\,\t{w}^{-1}$, and $||-b||=||b||$).
From \cite[Proposition 1.23]{iwamat} one has \be{iwm:len} \ell(w) \ \ = \ \ \sum_{ \alpha \, \in \, \Delta_+} | \langle \alpha, b \rangle - \chi_{\Delta_-}(\t{w}^{-1} \alpha) |\,, \end{eqnarray} where $\chi_{\Delta_-}$ is the characteristic function of $\Delta_-.$ By the triangle inequality
\be{l(w):ineq1} \ell(w) \ \ \leq \ \ \sum_{\alpha \, \in \, \Delta_+} (| \langle \alpha, b \rangle | + 1) \ \ = \ \
\sum_{\alpha \, \in \, \Delta_+} | \langle \alpha, b \rangle | \ + \ \#(\Delta_+)
\,. \end{eqnarray} Similarly, we deduce from (\ref{iwm:len}) that
\be{l(w):ineq1b} \ell(w) \ \ \geq \ \ \sum_{\alpha \, \in \, \Delta_+} (| \langle \alpha, b \rangle | - 1) \ \ = \ \
\sum_{\alpha \, \in \, \Delta_+} | \langle \alpha, b \rangle | \ - \ \#(\Delta_+)
\,. \end{eqnarray}
Writing $b = \sum_{i=1}^\ell d_i \omega_i^{\vee}$ in terms of the fundamental coweights $\omega_i^{\vee}$ and letting \be{2.52} M \ \ = \ \ \max_{\srel{ \ \alpha\,\in\,\Delta_+}{1\le i \le \ell}}\langle \alpha,\omega_i^{\vee} \rangle\, ,\end{eqnarray}
we have
$| \langle \alpha, b \rangle |\leq M \,\sum_{i=1}^{\ell} | d_i |$
for any $\alpha \in \Delta_+$. On the other hand,
\be{2.54} \sum_{\alpha \, \in \, \Delta_+} | \langle \alpha, b \rangle | \ \ \geq
\ \ \sum_{\alpha \, \in \, \Pi} | \langle \alpha, b \rangle | \ \ =
\ \ \sum_{i\,=\,1}^\ell | \langle \alpha_i, b \rangle | \ \ = \ \ \sum_{i\,=\,1}^{\ell} | d_i |\,. \end{eqnarray}
As all norms on a finite dimensional vector space are equivalent, the assertions of the lemma follow. \end{proof}
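\noindent {\em Example:} As a check on (\ref{iwm:len}), take $w = T_{m h_1}$ in type $A_1^{(1)}$, so that $\t{w} = 1$ and $\Delta_+ = \{\alpha_1\}$. Then $\chi_{\Delta_-}(\alpha_1) = 0$ and (\ref{iwm:len}) gives
\begin{equation*}
\ell(T_{m h_1}) \ \ = \ \ |\langle \alpha_1, m\,h_1 \rangle| \ \ = \ \ 2\,|m|\,,
\end{equation*}
which indeed grows linearly in $||b||$, in accordance with (\ref{new2.49}).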
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{flip} For any element $w \in \wh{W}$ we set
\begin{equation}\label{Deltawdef}
\wh{\Delta}_w \ \ = \ \ \{ a \in \wh{\Delta}_+\,\mid\, w^{-1} a < 0 \}\,.
\end{equation}
It is a standard result that
\begin{equation}\label{lengthandcard}
\#( \wh{\Delta}_w ) \ \ = \ \ \ell(w) \ \ = \ \ \ell(w^{-1}) \,.
\end{equation}
Moreover, given a reduced decomposition $w= w_{i_r} \ldots w_{i_1},$ $r=\ell(w)$, the set $\wh{\Delta}_w$ can be described as \be{delta_w} \wh{\Delta}_w \ \ = \ \ \{ \beta_1, \ldots, \beta_r \}\,, \end{eqnarray} where
$\beta_j = w_{i_r} \cdots w_{i_{j+1}} (a_{i_j})$
for $j=1, \ldots, r.$ We also set \be{neg:flipped} \wh{\Delta}_{-, w} \ \ = \ \ \{ a \in \wh{\Delta}_-\, |\, w a > 0 \} \ \ = \ \ w^{-1}\,\wh{\Delta}_w \,. \end{eqnarray} Similarly to (\ref{delta_w}) it equals
\be{delta_-w} \wh{\Delta}_{-, w} \ \ = \ \ \{ \gamma_1, \ldots, \gamma_r \}, \end{eqnarray} where
$\gamma_j = w^{-1} \beta_j = w_{i_1} \cdots w_{i_{j}}(a_{i_j}) =
- w_{i_1} \cdots w_{i_{j-1}}(a_{i_j})$
for $j=1,\ldots,r$.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} Considering the classical Weyl group $W$ as the subgroup of $ \wh{W}$ generated by $w_1,\ldots,w_\ell$, let $W^{\theta}$ denote the {\em Kostant coset representatives} for the quotient $W \backslash \wh{W} $:
\be{kostant} W^\theta \ \ = \ \ \{ w \,\in\,\wh{W} \,\mid \, w^{-1} \alpha_i > 0 \, \ \text{ for } \, \ i=1, \ldots,\ell \}\,. \end{eqnarray}
For any $i =1, \ldots, \ell$ and $w \in W^{\theta}$ let us write
\be{kappa_i} w^{-1} \alpha_i \ \ = \ \ \sigma_i \ + \ \kappa_i(w^{-1}) \iota\,, \ \ \ \text{ where } \ \sigma_i \,\in\, \Delta \ \ \, \text{and} \ \ \kappa_i(w^{-1}) \, \in\, \mathbb{Z}_{\geq 0}\,, \end{eqnarray}
which is possible since the Weyl translates of the $\alpha_i$ lie in $\wh{\Delta}_W$ (see (\ref{deltawhat})).
\begin{nlem} \label{kap:ineq} For any $w \in W^{\theta}$ and $i=1, \ldots, \ell,$ we have that
\be{map:ineq} \kappa_i(w^{-1}) \ \ \leq \ \ \ell(w) \ + \ 1 \ \ = \ \ \ell(w^{-1}) + 1 \,.\end{eqnarray} \end{nlem}
\begin{proof}
Suppose first that $\sigma_i$ in (\ref{kappa_i}) is positive. Then \be{w:app} w^{-1} ( - \alpha_i + n \iota ) \ \ = \ \ - \sigma_i \ + \ (n - \kappa_i(w^{-1}) )\, \iota \end{eqnarray}
since $\wh{W}$ preserves the imaginary root $\iota$.
The root $-\alpha_i + n \iota$ is positive if $n >0$, while $w^{-1} ( - \alpha_i + n \iota)$ is negative if $n - \kappa_i(w^{-1}) \leq 0.$ Hence $\wh{\Delta}_{w}$ from (\ref{Deltawdef}) includes the roots \be{flipped:w} -\alpha_i+ \iota, \,- \alpha_i + 2 \iota, \,\ldots, \,- \alpha_i + \kappa_i(w^{-1}) \iota\,. \end{eqnarray}
From this and (\ref{lengthandcard}) we conclude that $\kappa_i(w^{-1}) \leq \ell(w^{-1}) = \ell(w)$.
A similar analysis for the case of $\sigma_i <0$ shows $\wh{\Delta}_{w}$ contains the string \be{string:2} -\alpha_i + \iota,\, \ldots,\, - \alpha_i + (\kappa_i(w^{-1})-1)\iota, \end{eqnarray} and so
$\kappa_i(w^{-1}) \leq \ell(w^{-1}) + 1 $.
\end{proof}
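\noindent {\em Example:} The bound (\ref{map:ineq}) is sharp. In type $A_1^{(1)}$ the reflection $w_2$ lies in $W^{\theta}$, since
\begin{equation*}
w_2^{-1}\,\alpha_1 \ \ = \ \ w_2\,\alpha_1 \ \ = \ \ \alpha_1 \, - \, \langle \alpha_1, h_2 \rangle\, a_2 \ \ = \ \ \alpha_1 \, + \, 2\,a_2 \ \ = \ \ -\,\alpha_1 \, + \, 2\,\iota \ \ > \ \ 0\,.
\end{equation*}
Here $\sigma_1 = -\alpha_1 < 0$ and $\kappa_1(w_2^{-1}) = 2 = \ell(w_2) + 1$, so that equality holds in (\ref{map:ineq}).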
\subsection*{D. Loop Groups and Some Decompositions}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{groups:section} We next introduce some notation related to loop groups following \cite{ga:ihes}, which we briefly review here. Let $\, \mf{g}_{\mathbb{Z}} \subset \mf{g}\,$ and $\,\hf{g}^e_{\mathbb{Z}} \subset \hf{g}^e\,$ be Chevalley $\mathbb{Z}$-forms of the Lie algebras defined above. Given a dominant integral weight $\lambda \in \hf{h}^*$ extended to $(\hf{h}^e)^*$ as in (\ref{integralweightlatticedef}-\ref{2.20}), let $V^{\lambda}$ be the corresponding irreducible highest-weight representation of $\hf{g}^e$. Also, let $V^{\lambda}_{\mathbb{Z}} \subset V^{\lambda}$ denote a Chevalley $\mathbb{Z}$-form for this representation, chosen compatibly with the Chevalley forms $\mf{g}_{\mathbb{Z}}$ and $\hf{g}^e_{\mathbb{Z}}$ and with suitable divided powers as in \cite{ga:la}.
For any commutative ring $R$ with unit, let $\mf{g}_R$, $\hf{g}^e_R$, and $V^{\lambda}_R$ be the objects obtained by tensoring the respective objects $\mf{g}_{\mathbb{Z}}$, $\hf{g}^e_{\mathbb{Z}}$, and $V^{\lambda}_{\mathbb{Z}}$ over ${\mathbb{Z}}$ with $R$. For any field $k$, we let $\wh{G}^{\lambda}_k \subset \operatorname{Aut}(V_k^{\lambda})$ be the group defined in \cite[(7.21)]{ga:ihes} by exponentiation of the corresponding Lie algebra representation. When no subscript is present, we shall implicitly assume that $k= {\mathbb{R}}$, so that for example $\widehat{G}^{\lambda}$ is taken to mean $\wh{G}^{\lambda}_{{\mathbb{R}}},$ etc.
For $a \in \wh{\Delta}_W$ and $u \in k$ we define $\chi_a(u)$ as in \cite[(7.14)]{ga:ihes}, which parameterizes the one-parameter root group corresponding to $a.$ Using $\chi_a(u)$ we may then define the elements
\begin{equation}\label{wasdef}
w_a(s) \ \ := \ \ \chi_a(s)\,\chi_{-a}(-s^{-1})\,\chi_a(s)
\end{equation}
and
\begin{equation}\label{hasdef}
h_a(s) \ \ := \ \ w_a(s)\,w_a(1)^{-1}
\end{equation}
for $s \in k^*$. We shall use the abbreviation $h_i(s)=h_{a_i}(s)$ for $i=1,\ldots,\ell+1$. We shall fix $\lambda$ throughout and shorten our notation to $\wh{G}:= \wh{G}^{\lambda}_{{\mathbb{R}}}$ and make a similar convention for $\wh{G}_k.$
In case $k= {\mathbb{R}},$ we let $\{\cdot, \cdot \}$ denote the real, positive-definite inner product on $V^{\lambda}_{{\mathbb{R}}}$ (see \cite[\S16]{ga:ihes} and the references therein). We then define subgroups
\begin{equation}\label{disc:cpt}
\aligned
\wh{\Gamma} & \ \ = \ \ \left\{ \gamma \in \wh{G} \ | \ \gamma \, V_{\mathbb{Z}}^{\lambda} \, = \, V_{\mathbb{Z}}^{\lambda} \right\} \\
\text{and} \ \ \ \ \wh{K} & \ \ = \ \ \left\{ k \in \wh{G} \ | \ \{ k \xi, k \eta \} = \{ \xi, \eta \} \; \text{ for } \xi, \eta \in V^{\lambda}_{\mathbb{R}} \right\}.
\endaligned
\end{equation}
Fix a coherently ordered basis $\mc{B}$ of $V_{\mathbb{Z}}^{\lambda}$ in the sense of \cite[p.~60]{ga:ihes}, and let $\widehat{H}$, $\wh{U}$, and $\wh{B}$ respectively denote the subgroups of $\wh{G}$ consisting of diagonal, unipotent upper triangular, and upper triangular matrices with respect to $\mc{B}$. The subgroup $\wh{U}$ contains
the one-parameter subgroups $\chi_a(u)$ for each $a\in\wh{\D}_+$.
The subgroup $\widehat{H}$ normalizes $\widehat{U}$ and
\be{Bsemi} \widehat{B} \ \ = \ \ \widehat{H} \, \ltimes \, \widehat{U} \, \end{eqnarray}
is their semi-direct product.
The group $\widehat{H}$
is isomorphic to $({\mathbb{R}}^*)^{\ell+1}$ via the map \be{new2.74} (s_1, \ldots, s_{\ell+1}) \ \ \mapsto \ \ h_1(s_1) \, \cdots \,h_{\ell+1}(s_{\ell+1})\,. \end{eqnarray}
We use this parametrization to define the map $h\mapsto h^{a_i}$ on $\wh{H}$ for any simple root $a_i$, via the formula $h_{j}(s)^{a_i}=s^{\langle a_i,h_j \rangle}$. The products on the righthand side of (\ref{new2.74}) with each $s_i>0$ form a subgroup $\wh{A}\cong {\mathbb{R}}_{>0}^{\ell+1}$ of $\wh{H}$, so that $\wh{A}$ is isomorphic to $\wh{\frak h}$ by the logarithm map
\begin{equation}\label{lndef}
\ln \, : \, h_1(s_1)\cdots h_{\ell+1}(s_{\ell+1}) \ \ \mapsto \ \ \ln(s_1)\,h_1 \ + \ \cdots \ + \ \ln(s_{\ell+1})\,h_{\ell+1}\,.
\end{equation}
Both $\wh{H}$ and $\wh{A}$ have Lie algebra $\wh{\frak h}$, and in fact
\be{Hhat} \widehat{H} \ \ = \ \ \widehat{A} \,\times \,(\widehat{H} \cap \widehat{K}) \end{eqnarray}
holds as a direct product decomposition.
The notation (\ref{atolambdef}) extends to give a character $a\mapsto a^\l$ of $\wh{A}$ for any $\l\in(\wh{\frak h}^e)^*$.
For any linear combination
\begin{equation}\label{new2.75}
X \ \ = \ \ c_1 \, a_{1} \ + \ \cdots \ + \ c_{\ell+1}\, a_{\ell+1} \, , \ \ \ \ \ c_i \, \in \, \mathbb{Z} \,,
\end{equation}
we define the map \be{new2.76} h_X(s) \ \ := \ \ h_1(s)^{c_1}\, \cdots \, h_{\ell+1}(s)^{c_{\ell+1}} \ \ \ \ \ \text{for} \ \, s \,\in\,{\mathbb{R}}^*\,.\end{eqnarray} Using the relation $\iota=\a_0+a_{\ell+1}$ from (\ref{newroot}), this then allows us to define $h_{\iota}(s)$ for $s \in {\mathbb{R}}^*$.
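For instance, in type $A_1^{(1)}$ one has $\alpha_0 = \alpha_1 = a_1$, so that $\iota = a_1 + a_2$ and (\ref{new2.76}) gives simply $h_{\iota}(s) = h_1(s)\,h_2(s)$.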
Define
\begin{equation}\label{hh}
\aligned
\widehat{H}_{cen} & \ \ := \ \ \left\{\, h_{\iota}(s) \,|\, s \,\in\, {\mathbb{R}}^* \right\} \\
\text{and} \ \ \ \ \ \ \ \widehat{H}_{cl}\, & \ \ := \ \ \left\{ \prod_{i=1}^{\ell} h_{i}(s_i) \ |\ s_1,\ldots,s_{\ell}\, \in\, {\mathbb{R}}^* \right\},
\endaligned
\end{equation}
and then set $\widehat{A}_{cen} = \widehat{A} \cap \widehat{H}_{cen}$ and $\widehat{A}_{cl} = \widehat{A} \cap \widehat{H}_{cl}$. We then have a direct product decomposition \be{hh:dp} \widehat{A} \ \ = \ \ \widehat{A}_{cen} \, \times \, \widehat{A}_{cl}\,. \end{eqnarray} Indeed, it suffices to show that $\widehat{A}_{cl} \cap \widehat{A}_{cen}=\{1\}.$ To see this, note that any element $x \in \widehat{A}_{cl} \cap \widehat{A}_{cen}$ can be written as
\be{x:2} x \ \ = \ \ h_{\iota}(s) \ \ = \ \ h \,, \ \ \ \text{ with } \ s \, \in \,{\mathbb{R}}_{>0} \ \ \text{and} \ \ h \, \in \, \widehat{A}_{cl}\, . \end{eqnarray}
Recall from above that the group $\wh{G}$ is defined with respect to a dominant integral weight $\l$.
Because of \cite[Proposition 20.2]{ga:ihes}, there exists a positive integer $m$ together with a homomorphism $\pi(\lambda, m \Lambda_{\ell+1}): \widehat{G}^{\lambda} \hookrightarrow \widehat{G}^{m \Lambda_{\ell+1}}.$ According to \cite[p.~107]{ga:ihes} this map identifies the elements $h_i(s)$ in the two groups, for any $1\le i \le \ell+1$.
From (\ref{aff:wts}) and (\ref{newcoroot}) we see that the fundamental weight $\Lambda_{\ell+1}$ satisfies $\langle \Lambda_{\ell+1}, h_{\iota} \rangle =1$ and $\langle \Lambda_{\ell+1}, h_{i} \rangle = 0$ for $i=1, \ldots, \ell.$
In the group $\widehat{G}^{m \Lambda_{\ell+1}}$, which acts on the highest weight representation $V^{m \Lambda_{\ell+1}}$ with highest weight vector $v_{m \Lambda_{\ell+1}}$, \cite[Lemma~11.2]{ga:ihes} shows
\be{Acen:Acl} h_{\iota}(s)\, v_{m \Lambda_{\ell+1}} \ \ = \ \ s^{ m \langle \Lambda_{\ell+1}, h_{\iota} \rangle }\, v_{m \Lambda_{\ell+1}} \ \ = \ \ s^m\, v_{m \Lambda_{\ell+1}}\,. \end{eqnarray}
However, using (\ref{x:2}) the same lemma shows
$h_{\iota}(s) v_{m \Lambda_{\ell+1}} = h v_{m \Lambda_{\ell+1}} = v_{m \Lambda_{\ell+1}}$
because $h \in \widehat{A}_{cl}.$ Thus $s^m=1$, and since $s>0$ we conclude that $s=1$.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{bruhat:section} For any field $k$, we define $\widehat{H}_k$ to be the subgroup of $\widehat{G}_k$ consisting of diagonal matrices with respect to the coherent basis $\mc{B}$. Similarly we set $\widehat{B}_k$ and $\widehat{U}_k$ to be the groups of upper triangular and unipotent upper triangular matrices with respect to $\mc{B}$, respectively. As in (\ref{Bsemi}), $\widehat{B}_k$ is the semi-direct product of $\widehat{H}_k$ and $\widehat{U}_k$.
Moreover, the group $\widehat{H}_k$ is isomorphic to $(k^*)^{\ell+1}$ via (\ref{new2.74}) (recall that the elements $h_a(s)$ from (\ref{hasdef}) are defined for $s \in k^*$).
The group $\widehat{G}_k$ is equipped with a Tits system (see \cite[\S13-14]{ga:ihes}) which identifies the affine Weyl group $\wh{W}$ from section~\ref{sec:What} with the quotient $N(\widehat{H}_k)/\widehat{H}_k$, where $N(\widehat{H}_k)$ is the normalizer of $\widehat{H}_k$ in $\widehat{G}_k.$
Thus each $w \in \wh{W}$ has a representative in $N(\widehat{H}_k)$, which we continue to denote by $w$. Moreover, if $w$ is written as a word in the generators $w_1,\ldots,w_{\ell+1}$ from (\ref{sim:ref}), a representative is furnished by the corresponding word in the elements $w_{a_1}(1),\ldots,w_{a_{\ell+1}}(1)$ from (\ref{wasdef}). We shall tacitly identify each $w\in \wh{W}$ with this particular representative.
In the special case $k = {\mathbb{R}}$ these representatives lie in $\widehat{K}$. With these conventions, there exists a Bruhat decomposition \be{bruhat} \widehat{G}_k \ \ = \ \ \bigcup_{w \,\in \,\wh{W}} \widehat{B}_k \, w \, \widehat{B}_k \,, \end{eqnarray}
where in fact each double coset $\widehat{B}_k w \widehat{B}_k$ is independent of the chosen representative $w$.
Because of (\ref{Bsemi}) and the fact that $\wh{W}$ normalizes $\widehat{H}_k$,
every element $g \in \wh{G}_k$ can be written as
\be{bruhat:elt} g \ \ = \ \ u_1 \,z \,w \,u_2\,, \ \ \ \ \text{ where } \ u_1,\, u_2 \ \in \ \widehat{U}_k\, , \ \ w \ \in \ \wh{W}\,, \ \ \text{ and } \ z \ \in\ \widehat{H}_k\, .\end{eqnarray}
If $k= {\mathbb{R}}$, the elements $w\in \wh{W}$ and $z\in \widehat{H}$ are uniquely determined by $g$, though $u_1$ and $u_2 \in \wh{U}$ are not in general.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{iwasawa:section}
Next, we recall from \cite[\S16]{ga:ihes} that there exist Iwasawa decompositions \be{iwasawa} \wh{G} \ \ = \ \ \wh{U} \, \wh{A}\, \wh{K} \ \ = \ \ \wh{K} \, \wh{A} \, \wh{U} \end{eqnarray}
with respect to the subgroups $\wh{U}$, $\wh{A}$, and $\wh{K}$ defined in section~\ref{groups:section}, with uniqueness of expression once either order of the factors is fixed. Given an element $g \in \widehat{G}$ we denote by $\iw_{\ha}(g) \in \widehat{A}$ its projection onto the $\widehat{A}$ factor in the first of the above decompositions. Note that $\iw_{\ha}: \widehat{G} \rightarrow \widehat{A}$ is left $\widehat{U}$-invariant and right $\widehat{K}$-invariant by construction.
For $r \in {\mathbb{R}},$ write $s =e^{r}$ and define the exponentiated degree operator on $V^{\lambda}_{\mathbb{R}}$ by the formula \be{eta(s)} \eta(s) \ \ = \ \ \exp(r \mathbf{D})\,. \end{eqnarray}
The element $\eta(s)$ acts on the one-parameter subgroups $\chi_{\a+n\iota}(\cdot)$ by
\begin{equation}\label{etaandchi}
\eta(s)\chi_{\a+n\iota}(u)\eta(s)^{-1} \ \ = \ \ \chi_{\a+n\iota}(s^nu)\,.
\end{equation}
It furthermore acts as a diagonal operator with respect to the coherently ordered basis $\mc{B}$, and consequently it normalizes $\wh{U}$ and commutes with $\wh{A}.$
It then follows that \be{ida:deg} \eta(s)\,\wh{G} \ \ = \ \ \{ \eta(s) g \,|\,g \,\in \,\widehat{G} \} \ \ = \ \ \wh{U} \, \eta(s) \, \wh{A} \, \wh{K} . \end{eqnarray} We then extend the function $\iw_{\ha}$ above to a function $ \iw_{\eta(s) \ha}\, : \eta(s) \widehat{G} \rightarrow \eta(s) \widehat{A} $ by defining
\be{iweadef} \iw_{\eta(s) \ha}\,( \eta(s)\, g ) \ \ = \ \ \eta(s)\, \iw_{\ha}(g) \ \ \in \ \ \eta(s) \,\widehat{A}\, .
\end{eqnarray}
This extension is obviously right $\widehat{K}$-invariant, and is also left $\widehat{U}$-invariant because $\eta(s)$ normalizes $\widehat{U}.$ It furthermore satisfies
\begin{equation}\label{iweaequivariant}
\iw_{\eta(s) \ha}\,( \eta(s)\, a_1\,u\,a_2\,k ) \ \ = \ \ \iw_{\eta(s) \ha}\,( \eta(s)\, a_1)\,a_2 \ \ = \ \ \eta(s)\, a_1 \,a_2
\end{equation}
for any $u\in \wh{U}$, $a_1, a_2\in \wh{A}$, and $k\in \wh{K}$.
We extend the logarithm map $\ln:\wh{A}\cong\wh{\frak h}$ defined in (\ref{lndef}) to $\eta(s)\wh{A}$ by the rule
\begin{equation}\label{logslicerule}
\ln(\eta(s)\,a) \ \ = \ \ r \,\mathbf{D} \ + \ \ln(a) \ \ \ \ \text{for} \ \ a\,\in\,\wh{A}
\end{equation}
(cf. (\ref{eta(s)})).
For the duration of the paper we make the important restriction to consider \emph{only} the case that $r > 0.$\footnote{Note that the convention in \cite{ga:ihes}
is to also consider $r>0$, although $\eta(s)$ from (\ref{eta(s)}) is instead parameterized there as $\eta(s)=\exp(-r\mathbf{D})$. This is because here we work with the Iwasawa decomposition (\ref{iwasawa}) having $\wh{K}$ on the right, whereas in \emph{op.~cit.}\ $\wh{K}$ is on the left.}
\subsection*{E. Adelic Loop Groups}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} In this section, we review some aspects of adelic loop groups and their decompositions; further details can be found in \cite{ga:ragh, ga:ms2}. Let $\mc{V}$ denote the set of finite places of ${\mathbb{Q}}$, each of which can be identified with a prime number $p$ and the $p$-adic norm $| \cdot |_p$. For each $p \in \mc{V}$ the field ${\mathbb{Q}}_p$ is the corresponding completion of ${\mathbb{Q}}$ and has ring of integers ${\mathbb{Z}}_p.$ We write $\mc{V}^e = \mc{V} \cup \{\infty \}$ for the set of all places of ${\mathbb{Q}}$, where the place $\infty$ corresponds to the archimedean valuation $| \cdot |_{\infty}$\,, i.e.,
the usual absolute value on ${\mathbb{Q}}_{\infty} = {\mathbb{R}}.$ The adeles are defined as the ring
$
{\mathbb{A}} = \prod'_{p\in\mc{V}^e} {\mathbb{Q}}_p,
$
where the prime indicates the restricted direct product of the factors with respect to the ${\mathbb{Z}}_p$. Likewise, the finite adeles ${\mathbb{A}}_f$ are the restricted direct product of all ${\mathbb{Q}}_p$, $p\in\mc{V}$, with respect to the ${\mathbb{Z}}_p$. For each $a= (a_p) \in {\mathbb{A}}$ the adelic valuation is defined as $|a|_{{\mathbb{A}}} := \prod_{p \in \mc{V}^e} | a_p |_p$.
We also write ${\mathbb{I}}={\mathbb{A}}^*$ for the group of ideles and ${\mathbb{I}}_f={\mathbb{I}}\cap {\mathbb{A}}_f$ the group of finite ideles.
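For example, for any $q \in {\mathbb{Q}}^*$ embedded diagonally in ${\mathbb{I}}$, the product formula gives $|q|_{{\mathbb{A}}} = \prod_{p \, \in \, \mc{V}^e} |q|_p = 1$; e.g., for $q = 2$ one has $|2|_{\infty} = 2$, $|2|_2 = \smallf{1}{2}$, and $|2|_p = 1$ for all other $p$.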
In section~\ref{groups:section} we introduced the exponentiated group $\widehat{G}_k$ for any field $k$, in particular $k={\mathbb{Q}}_p$ for any $p\in\mc{V}^e$. For shorthand denote $\widehat{G}_p \ := \ \widehat{G}_{{\mathbb{Q}}_p}$, so that $\widehat{G}_{\infty}$ is just the real group $\widehat{G}$. For each $p \in \mc{V},$ we set
\be{Kp} \widehat{K}_p \ \ = \ \ \{ \, g \in \widehat{G}_p \,\mid \, g V^{\lambda} _{\mathbb{Z}_p} \, =\, V_{\mathbb{Z}_p}^{\lambda} \,\} .\end{eqnarray} By convention, we also set $\widehat{K}_{\infty}$ to be the group $\widehat{K}$ introduced in (\ref{disc:cpt}). The adelic loop group is then defined as
\be{Gad} \widehat{G}_{{\mathbb{A}}} \ \ := \ \ \prod'_{p \,\in \, \mc{V}^e} \widehat{G}_p\,, \end{eqnarray}
where the product is restricted with respect to the family of subgroups $\{ \widehat{K}_p \}_{p \in \mc{V}}.$ We also define
\be{Kad} \widehat{K}_{{\mathbb{A}}} \ \ = \ \prod_{p \, \in \, \mc{V}^e} \widehat{K}_p \ \ \subset \ \ \widehat{G}_{\mathbb{A}}\,. \end{eqnarray}
Analogously, the groups $\widehat{G}_{{\mathbb{A}}_f}$ and $\widehat{K}_{{\mathbb{A}}_f}$ are defined
by replacing $\mc{V}^e$ with $\mc{V}$ in (\ref{Gad}) and (\ref{Kad}), respectively.
We set $\widehat{H}_p:= \widehat{H}_{{\mathbb{Q}}_p} \subset \widehat{G}_{p}$ to be the group defined at the beginning of \S\ref{bruhat:section}, where we remarked that it is generated by the elements $h_1(s)$, $h_2(s)$,\ldots, $h_{\ell+1}(s)$ for $s \in {\mathbb{Q}}_p^*$; thus for example, $\widehat{H}_{\infty}=\widehat{H}$. Define
$ \hh_{\A} = \prod'_{p \in \mc{V}^e} \widehat{H}_{p}$,
where the product is restricted with respect to the family of subgroups $\{ \widehat{H}_{p} \cap \widehat{K}_p \}_{p \in \mc{V}}.$ Analogously to (\ref{new2.74}), every element $h \in \hh_{\A}$ has an expression
\be{h:ad}
h \ \ = \ \ \prod_{i\,=\,1}^{\ell+1} h_i(s_i)\,, \ \ \ \text{ where each }\, s_i \, \in \,{\mathbb{I}}\,.
\end{eqnarray}
For such an expression, we define its norm to be the element of $\widehat{A}$ given by the product
\be{h:norm}
|h| \ \ = \ \ \prod_{i \,=\,1}^{\ell+1} h_i( |s_i|_{\mathbb{A}})\,,
\end{eqnarray}
which can be shown to be uniquely determined by $h \in \hh_{\A}$ independently of its factorization (\ref{h:ad}). We set
$\hu_{\A} = \prod'_{p \in \mc{V}^e} \widehat{U}_{{\mathbb{Q}}_p}$,
where the product is restricted with respect to the family $\{ \widehat{U}_{{\mathbb{Q}}_p} \cap \widehat{K}_p \}_{p \in \mc{V}}.$ We shall also write
$\hb_{\A} = \hu_{\A} \cdot \hh_{\A}$,
which itself is the restricted direct product of all $ \widehat{U}_{{\mathbb{Q}}_p} \cdot \widehat{H}_{p}$ with respect to their intersections with $\wh{K}_p$.
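\noindent {\em Example:} If $z \in \widehat{H}_{{\mathbb{Q}}}$ is embedded diagonally into $\hh_{\A}$, say $z = \prod_{i=1}^{\ell+1} h_i(q_i)$ with each $q_i \in {\mathbb{Q}}^*$, then the product formula gives $|z| = \prod_{i=1}^{\ell+1} h_i(|q_i|_{{\mathbb{A}}}) = 1$. This observation will be used at the start of the proof of Lemma~\ref{lemma:iwasawadecomp} below.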
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{adelic:iwasawa} We next state the adelic analogues of the Iwasawa decompositions (\ref{iwasawa}) and (\ref{ida:deg}). First, for each $p \in \mc{V}$ there is the $p$-adic Iwasawa decomposition $\widehat{G}_{{\mathbb{Q}}_p} = \widehat{U}_{{\mathbb{Q}}_p} \widehat{H}_{{\mathbb{Q}}_p} \widehat{K}_{p}$,
in which the factorization is not unique because $\widehat{H}_{{\mathbb{Q}}_p} \cap \widehat{K}_{p}$ is nontrivial. Together with the $p=\infty$ decomposition (\ref{iwasawa}), these local decompositions give the adelic Iwasawa decomposition \be{ad:iwasawa} \hg_{\A} \ \ = \ \ \hu_{\A} \, \hh_{\A} \, \hk_{\A}\,. \end{eqnarray}
Although the adelic Iwasawa factorization
\be{ad:iwa:elta} g \ \ = \ \ u_g \, h_g \, k_g\,, \ \ \ \ \ \ u_g \,\in\, \hu_{\A}, \, h_g \,\in \,\hh_{\A}, \, k_g \,\in\, \hk_{\A}\,, \end{eqnarray} is not in general unique, the element $|h_g|$ defined using (\ref{h:norm}) is independent of it. We shall write the projection onto this element as the map
\be{ad:iwa:1}
\aligned
\iw^{\A}_{\widehat{A}} \ : \ \hg_{\A} \ & \rightarrow \ \widehat{A} \, , \\
g \ & \mapsto \ |h_g| \,
\endaligned
\end{eqnarray}
onto the Iwasawa $\wh{A}$-factor of $\wh{G}$. Note that $|h_g|$ is an element of the real group $\wh{G}$; we do not adelize $\wh{A}$.
Recall that we have defined $\eta(s)$ in (\ref{eta(s)}) in the context of real groups, where it has a nontrivial action on $\wh{G}=\wh{G}_{{\mathbb{Q}}_\infty}$ by conjugation. After extending this action trivially to each $\wh{G}_{{\mathbb{Q}}_p}$, $p\in \mc{V}$, there is then a twisted Iwasawa decomposition \be{twist:iwa:ad} \eta(s)\,\hg_{\A} \ \ = \ \ \hu_{\A} \, \eta(s) \, \hh_{\A} \,\hk_{\A}\,. \end{eqnarray}
Moreover, if we write $\eta(s) g \in \eta(s)\hg_{\A}$ with respect to the above decomposition as \be{ad:iwa:eltb} \eta(s) \, g \ \ = \ \ u_g \, \eta(s) \, h_g \, k_g\,, \ \ \ \ \ \ u_g \, \in\, \hu_{\A}, \, h_g \, \in \, \hh_{\A}, \, k_g \, \in \, \hk_{\A}\,, \end{eqnarray} then the element $|h_g|$ from (\ref{h:norm}) is again uniquely determined. We shall denote this projection by \be{ad:iwa:2}
\aligned
\iw^{\A}_{\eta(s) \widehat{A}} \ : \ \eta(s) \,\hg_{\A} \ & \rightarrow \ \eta(s)\, \widehat{A} \, , \\
g \ & \mapsto \ \eta(s) \,|h_g| \,,
\endaligned \end{eqnarray}
generalizing (\ref{iweadef}).
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{sec:2.15} In \S\ref{groups:section} the group $\widehat{G}_{\mathbb{Q}}$ was defined over the field $k={\mathbb{Q}}$. It has embeddings $\widehat{G}_{{\mathbb{Q}}} \hookrightarrow \widehat{G}_{{\mathbb{Q}}_p}$ for each $p \in \mc{V}^e$, and hence a diagonal embedding
\be{diag} i \ : \ \widehat{G}_{{\mathbb{Q}}} \ \ \hookrightarrow \ \ \prod_{p \, \in\, \mc{V}^e} \widehat{G}_{{\mathbb{Q}}_p} \,.\end{eqnarray}
Note that the righthand side is the direct product, not the restricted direct product:~this is because
$i(\widehat{G}_{\mathbb{Q}})$ is actually not contained in $\hg_{\A}$ (see \cite[\S2]{ga:ragh})\footnote{In contrast to the finite-dimensional situation, an element of $\wh{G}_{\mathbb{Q}}$ can involve different prime denominators in each of infinitely many root spaces.}. Define
\be{gamma:Q} \wh{\Gamma}_{{\mathbb{Q}}} \ \ := \ \ i^{-1} (\hg_{\A}) \ \ \subset \ \ \widehat{G}_{\mathbb{Q}}\ \end{eqnarray}
as the subgroup of $\widehat{G}_{\mathbb{Q}}$ which does embed into $\hg_{\A}$. We shall generally follow the common convention of identifying $\wh{\G}_{\mathbb{Q}}$ with its diagonally embedded image $i(\wh{\G}_{\mathbb{Q}})$.
\section{Iwasawa Inequalities} \label{section:iwasawa}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} In (\ref{Deltawdef})
we associated to each $w \in \wh{W}$ the finite set of roots
\be{re:wflip} \wh{\Delta}_{w^{-1}} \ \ = \ \ \{a \in \wh{\Delta}\,|\,a>0, \ w a<0\}\,.\end{eqnarray}
Recall that the Weyl group fixes all imaginary roots, so $\wh{\Delta}_{w^{-1}}\subset \wh{\Delta}_W$.
For each root $a \in \wh{\Delta}_W$ let $U_a$ denote the corresponding root group consisting of the elements $\{ \chi_a(s) | s \in {\mathbb{R}} \}$, and fix any order on $\wh{\Delta}$. Let $\wh{U}_{-}$ be the subgroup of $\wh{G}$ which acts by unipotent lower triangular matrices on the coherent basis $\mathcal B$ from section~\ref{groups:section}. We then have the \emph{subgroups}
\be{Uw}
U_{w} \ \ = \ \ \prod_{\ \ a \, \in \,\wh{\Delta}_{w^{-1}}} \!\!\! \! U_{a} \ \ = \ \ \wh{U}\,\cap\,w^{-1}\wh{U}_{-} \, w \ \ \subset \ \ \wh{U}
\end{eqnarray}
and
\be{U-w} U_{-, w} \ \ = \ \ w \, U_{w} \, w^{-1} \ \ = \ \ \prod_{ \ \ \ \ \gamma\, \in \,\wh{\Delta}_{-,w^{-1}}}\!\!\!\!\!\! U_{\gamma} \ \ = \ \ \wh{U}_{-}\,\cap\,w\wh{U}w^{-1} \ \ \subset \ \ \wh{U}_{-}\,, \end{eqnarray}
where the product is taken with respect to the fixed order and one has uniqueness of expression
(cf.~\cite[Lemma~6.4 and Corollary~6.5]{ga:lg2}).
Similarly, each $U_a$ has a rational subgroup $U_{a,{\mathbb{Q}}}=\{ \chi_a(s) | s \in {\mathbb{Q}} \}$. Rational subgroups $U_{w, {\mathbb{Q}}}\subset U_w$ and $U_{-, w ,{\mathbb{Q}}}\subset U_{-,w}$ are defined as products of $U_{a,{\mathbb{Q}}}$ and $U_{\g,{\mathbb{Q}}}$ over the roots $a$ and $\g$ appearing in (\ref{Uw}) and (\ref{U-w}), respectively. Because $\chi_a(s)$ is also defined for $s\in {\mathbb{Q}}_p$, we likewise have subgroups $U_{a,{\mathbb{Q}}_p}$, $U_{w,{\mathbb{Q}}_p}$, and $U_{-,w,{\mathbb{Q}}_p}$, and their adelic variants $U_{w,{\mathbb{A}}}$ and $U_{-, w, {\mathbb{A}}}$ defined as restricted direct products.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} Recall the notation for the Iwasawa decomposition and its adelic variant introduced in (\ref{iweadef}), (\ref{ad:iwa:1}), and (\ref{ad:iwa:2}). For elements $x,y$ in any group we shall use the shorthand notation $x^y := y x y^{-1}$.
\begin{nlem}\label{lemma:iwasawadecomp}
i) Let $\gamma_{\mathbb{Q}} \in \wh{\Gamma}_{{\mathbb{Q}}}$, regarded as diagonally embedded in $\wh{G}_{\mathbb{A}}$ as in \S\ref{sec:2.15}, and let $g \in \widehat{G}.$ Use (\ref{bruhat:elt}) to write $\gamma_{\mathbb{Q}} = u_1 z w u_2 $ with $u_1, u_2 \in \widehat{U}_{{\mathbb{Q}}},\, w \in \wh{W},$ and $z \in \widehat{H}_{{\mathbb{Q}}}.$ Regard the element $\gamma_{\mathbb{Q}} \eta(s) g \in \eta(s) \hg_{\A}$ and define the element $\iw^{\A}_{\eta(s) \widehat{A}}(\gamma_{\mathbb{Q}} \eta(s) g) \in \eta(s) \widehat{A}$ as in (\ref{ad:iwa:2}).
Then \begin{equation}\label{4:2:-1a}
\iw^{\A}_{\eta(s) \ha}\,(\,\g_{\mathbb{Q}} \,\eta(s)\, g\,) \ \ = \ \ \iw_{\eta(s) \ha}\,( \eta(s) g)^w \,\cdot\, \iw^{\A}_{\ha}(w u_w)
\end{equation} for some element $u_w \in U_{w, {\mathbb{A}}}$ (depending on $\gamma_{\mathbb{Q}}$ and $\eta(s) g$).
ii) If $\g\in \wh{\G}\subset \wh{G}$,
\begin{equation}\label{4:2:-1b}
\iw_{\eta(s) \ha}\,(\,\g \,\eta(s)\, g\,) \ \ = \ \ \iw_{\eta(s) \ha}\,( \eta(s) g)^w \,\cdot\, \iw^{\A}_{\ha}(w u_w)
\end{equation}
for some element $u_w \in U_{w, {\mathbb{A}}}$ (again depending on $\gamma$ and $\eta(s) g$).
\end{nlem}
\begin{proof}
First we note that since $z \in \widehat{H}_{{\mathbb{Q}}}$ we have that $\iw^{\A}_{\ha}(z)=1.$ Using the left $\hu_{\A}$-invariance, the commutativity of $\wh{A}$ with $\eta(s)$, and the right $\hk_{\A}$-invariance of $\iw^{\A}_{\eta(s) \ha}\,$, we compute
\begin{equation} \label{gammag:-1} \aligned \iw^{\A}_{\eta(s) \ha}\,( \, \gamma_{\mathbb{Q}} \, \eta(s) \, g \, ) & \ \ = \ \ \iw^{\A}_{\eta(s) \ha}\,( \, u_1 \, z \, w \, u_2 \, \eta(s) \, g \, ) \\ & \ \ = \ \ \iw^{\A}_{\eta(s) \ha}\,(\, z \, w \, u_2 \, \eta(s) \, g \, ) \\ & \ \ = \ \ \iw^{\A}_{\eta(s) \ha}\,(\, w \, u_2 \, u'\,\eta(s)\,h' \, )\,,
\endaligned
\end{equation}
where
\be{4.2:0}
\eta(s) \, g \ \ = \ \ u'\, \eta(s)\,h'\,k'\ , \ \ \ \text{ with } \ u' \,\in\, \widehat{U} \,, \ h' \,=\, \iw_{\ha}(g) \, \in\, \wh{A}\,, \ \, \text{and} \, \ k' \, \in \, \wh{K}\,,
\end{eqnarray}
is the (real, and hence also adelic) Iwasawa decomposition of $\eta(s)g$. Let $u= u_2 u' \in \hu_{\A}.$
Again using these properties plus (\ref{iweaequivariant}) we then have
\begin{equation}\label{gammag:-2}
\aligned
\iw^{\A}_{\eta(s) \ha}\,( \, \gamma_{\mathbb{Q}} \, \eta(s) \, g \, ) \ \ & = \ \
\iw^{\A}_{\eta(s) \ha}\,( w\,u\,\eta(s)\,h' \, )\\
& = \ \ \iw^{\A}_{\eta(s) \ha}\,( \, w \, \eta(s)\,h'\, w^{-1}\,\cdot\,w\, \cdot\,(\eta(s)h')^{-1}\,\cdot\,u\,\cdot\,(\eta(s)h')\,
) \\
& = \ \
\ \iw^{\A}_{\eta(s) \ha}\,(\,
(\eta(s) h')^w\,\cdot\,w\,u''\,
) \\
& = \ \ (\eta(s) h')^w \, \iw_{\ha}^{\mathbb{A}}(\,w\,u''\,
)
\,,
\endaligned
\end{equation}
where $u''=(\eta(s)h')^{-1}u(\eta(s)h')$ lies in $\hu_{\A}$ because $\eta(s)$ and $h'\in \widehat{A}$ both normalize $\hu_{\A}$. Furthermore, $u''$ can be factored as $u''=u'''{u}_w$, where $wu'''w^{-1}\in \hu_{\A}$ and ${u}_w\in U_{w,{\mathbb{A}}}$.
Thus $wu''=(wu'''w^{-1})w u_w$ and
formula (\ref{4:2:-1a}) now follows by once more applying the left $\hu_{\A}$-invariance of $\iw^{\A}_{\eta(s) \ha}\,$. This proves part i).
To prove part ii), let $\g_{\mathbb{Q}}\in \wh{\G}_{\mathbb{Q}}$ be the diagonal embedding of $\g$ into $\wh{G}_{\mathbb{A}}$. Since $\eta(s)g\in \wh{G}$, the archimedean component of $\g_{\mathbb{Q}}\eta(s)g$ is $\g\eta(s)g$. At the same time, its nonarchimedean components are $\g_p$. These lie in $\wh{K}_p$ since they preserve the lattice $V_{{\mathbb{Z}}_p}^\l$ (as a consequence of the defining fact that $\g\in \wh{\G}$ preserves $V_{\mathbb{Z}}^\l$). Hence the projection of $\g_{\mathbb{Q}}\eta(s)g$ onto $\wh{G}_{{\mathbb{A}}_f}$ lies in $\wh{K}_{{\mathbb{A}}_f}$, and by definition (\ref{ad:iwa:1}) the left hand sides of (\ref{4:2:-1a}) and (\ref{4:2:-1b}) agree.
\end{proof}%
In particular part ii) of the lemma asserts that for $\g\in \wh{\G}$,
\begin{equation}\label{hpartandlogs}
\ln \(\iw_{\eta(s) \ha}\,\,(\,\g \,\eta(s)\, g\,)\) \ \ = \ \ \ln \(\eta(s)^w\) \ + \ \ln\(\iw_{\ha}( g)^w\) \ + \ \ln \(\iw^{\A}_{\ha}(\,w\, u_w\,)\)
\end{equation}
for some element $u_w\in U_{w,{\mathbb{A}}}$, where the logarithm map $\ln$ was defined in (\ref{lndef}) and (\ref{logslicerule}).
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} We next establish some estimates on the individual terms in (\ref{hpartandlogs}) for later use in section~\ref{section:convergence}. For any parameter $t>0$ let
\be{At}
\widehat{A}_t \ \ := \ \ \{ \,h \in \widehat{A} \ |\ h^{a_i} > t \; \text{ for each } \ i\,=\,1,\, \ldots, \,\ell+1 \,\}\,,
\end{eqnarray}
and let $\widehat{U}_{\mc{D}}$ be a fundamental domain for the action of $\wh{\Gamma} \cap \widehat{U}$ on $\widehat{U}$ by left translation. Siegel sets for $\eta(s)\wh{G}$ were defined in \cite{ga:ihes} as sets of the form
\be{loopsiegel}
\wh{\mathfrak{S}}_t \ \ := \ \ \widehat{U}_{\mc{D}} \, \eta(s)\, \widehat{A}_t\, \widehat{K}\,,
\end{eqnarray}
for some choices of $t>0$ and $\widehat{U}_{\mc{D}}$.
According to (\ref{newcoroot}) and (\ref{extended:cartan}), any element $X \in \hf{h}$ can be decomposed as
\be{hhat:decomp}
X \ \ = \ \ X_{cl} \ + \ \langle \Lambda_{\ell+1}, X \rangle h_{\iota} \ , \ \ \ \ \ \text{with} \ \ X_{cl} \, \in \, \mf{h} \ \ \text{and} \ \ \langle \Lambda_{\ell+1} , X \rangle \,\in\,{\mathbb{R}}\, ,
\end{eqnarray}
where we recall that $\Lambda_{\ell+1}:\wh{\frak h}\rightarrow {\mathbb{R}}$ from (\ref{aff:wts}) is the ($\ell+1$)-st fundamental weight (it is trivial on all classical coroots).
Note that because $h_\iota$ and $\mathbf{c}$ are nonzero multiples of each other, properties (\ref{2.28}) and (\ref{omega:ext}) assert that all classical roots $\a\in \Delta$, classical coroots, fundamental weights $\omega_1,\ldots,\omega_\ell$, and $\rho$ from (\ref{rho}) extend trivially to ${\mathbb{R}} h_\iota\oplus {\mathbb{R}}\mathbf{D}$.
The classical component $X_{cl}$ can be expanded as a linear combination \be{Xcl} X_{cl} \ \ = \ \ \sum_{j\,=\,1}^\ell \, \langle \omega_j, X \rangle \, h_j \end{eqnarray} of the coroot basis $\{h_1,\ldots,h_{\ell}\}$ of $\frak h$ using (\ref{omega:ext}).
Because of property (\ref{rho:h_i}) we have that \be{norm:H} \langle \rho , X \rangle \ \ = \ \ \langle \rho , X_{cl} \rangle \ \ = \ \ \sum_{j\, =\,1}^\ell \, \langle \omega_j, X \rangle \,. \end{eqnarray}
With the above notation, we can now state the following result, which is one of the main technical tools in this paper.
\begin{nthm} \label{iwineq} Fix an element $\eta(s)g$ of a Siegel set $\wh{\mathfrak{S}}_t \subset \eta(s)\wh{G}$ as in (\ref{loopsiegel}).
Recall from (\ref{eta(s)}) that $\eta(s)=\exp(r \mathbf{D})$ for $s=e^r$, $r\in {\mathbb{R}}_{>0}$.
Then there exist positive constants $E =E(s,t,\D),$ $C_1=C_1(s, t, \Delta),$ $C_2=C_2(s, t, \Delta)$ depending only on $s$, $t$, and the underlying classical root system $\Delta$, and another positive constant $C_3=C_3(g, s, t, \Delta)$ depending also on $g$ with the following properties:~for any element $w \in W^{\theta}$ as in (\ref{kostant}) and $u_w\in U_{w, {\mathbb{A}}}$, the vectors $H_1, H_2,$ and $H_3 \in \wh{\frak h}$ defined by
\begin{equation}\label{iwineqH1H2def}
H_1 \ \ := \ \ \ln(\,\iw^{\A}_{\ha}(wu_w)\,)\ ,\ \ \ \ \ \ H_2 \ \ := \ \
\ln( \eta(s)^w ) \ - \ r \,\mathbf{D}\ , \ \ \ \ \text{and} \ \ \ \ H_3 \ \ := \ \ \ln ( \iw_{\ha}(g)^w)
\end{equation}
satisfy the inequalities
\begin{equation}\label{iwineq1}
\langle \omega_j, H_1 \rangle \ \ \ge \ \ 0 \ \ \ \ \text{and} \ \ \ \ \langle \omega_j, H_2 \rangle \ \ \ge \ \ 0 \ \ \ \ \ \ \ \ \ \ \text{for} \, \ j\,=\,1,\,\ldots,\,\ell\,,
\end{equation}
\be{iwineq3} | \langle \omega_j, H_3 \rangle | \ \ \le \ \ E \ \ \ \ \ \ \ \ \ \ \text{for} \, \ j\,=\,1,\,\ldots,\,\ell\,, \end{eqnarray} and
\begin{equation}\label{iwineq2}
\aligned
\langle \Lambda_{\ell+1} , H_1 \rangle \ \ & \geq \ \ - \,C_1 \ ( \ell(w) + 1) \ \langle \rho , H_1 \rangle\,,\\
\langle \Lambda_{\ell+1}, H_2 \rangle \ \ & \geq \ \ - \,C_2\ (\ell(w) +1 ) \ \langle \rho , H_2 \rangle\,, \\ \text{and} \ \ \ \ \ \ \ \ \ \ \
\langle \Lambda_{\ell+1}, H_3 \rangle \ \ & \geq \ \ -\,C_3 \ ( \ell(w) +1 ) \,.
\endaligned
\end{equation}
\end{nthm}
\noindent
{\em Remark:} In fact, one can choose $C_1=1$ in the simply-laced case (as the proof below will show). The dependence of $C_3$ on $g$ is relatively mild, and only enters through $\langle \L_{\ell+1},\ln(\iw_{\ha}(g))\rangle$ in (\ref{new3.43}). In particular it is locally uniform in $g$.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{sec:3.4} The proof of the above inequalities will occupy
the rest of this section. We shall first draw a corollary which will be useful in the sequel. Recall from (\ref{hh:dp}) that the group $\widehat{A}$ has a direct product decomposition
\be{new3.18} \widehat{A} \ \ = \ \ \widehat{A}_{cen} \ \times \ \wh{A}_{cl}\,, \end{eqnarray}
where $\widehat{A}_{cen}$ is the connected component of the one-dimensional central torus of $\widehat{G}$ and $\wh{A}_{cl}$ is the connected component of the split torus from the finite dimensional group $G$ underlying $\widehat{G}$.
As before we identify $\mathbb{R}_{> 0}$ with $\widehat{A}_{cen}$ via the map $s \mapsto h_{\iota}(s)$. There is a natural projection from $\eta(s) \widehat{A}$ onto $\wh{A}$, and subsequently onto each of the factors $\widehat{A}_{cen}$ and $\widehat{A}_{cl}$ in the decomposition (\ref{new3.18}).
\begin{ncor} \label{cor:iwineq:2} There exist constants $C, D > 0$, depending only on $s$, $t$, and the underlying classical root system $\Delta$, and depending on $g$ only locally uniformly, with the following property:~for any $w\in W^\th$, $\g\in \wh{\Gamma} \cap \wh{B} w \wh{B}$ and $\eta(s) g \in \wh{\mathfrak{S}}_t$ we have the estimate
\be{main:ineq} x \ \ \geq \ \ D \, y^{-(C(\ell(w)+1))^{-1}}, \end{eqnarray}
where $a \in \widehat{A}_{cl}$ and $y \in \widehat{A}_{cen} \cong {\mathbb{R}}_{>0}$ are the projections of
\begin{equation}\label{coriwineq1}
\iw_{\eta(s) \ha}\,(\,\g\,\eta(s)\,g\,) \ \ \in \ \ \eta(s) \widehat{A}
\end{equation} onto $\widehat{A}_{cl}$ and $\widehat{A}_{cen}$, respectively, and $x = a^\rho \in {\mathbb{R}}_{>0}.$
\end{ncor}
\begin{proof}
We start by
using part ii) of Lemma~\ref{lemma:iwasawadecomp} (in particular, its consequence (\ref{hpartandlogs})) to write the logarithm of (\ref{coriwineq1}) as $r\,\mathbf{D}+H_1+H_2+H_3$, where $H_1$, $H_2$, and $H_3$ are defined in (\ref{iwineqH1H2def}).
This allows us to factor
\be{3.21} \begin{array}{lcr} x \ \ = \ \ x_1 \, x_2 \, x_3 & \text{ and } & y \ \ = \ \ y_1\, y_2\, y_3\,, \end{array} \end{eqnarray} where $x_i = e^{\langle \rho, H_i \rangle}$ and $y_i = e^{\langle \Lambda_{\ell+1}, H_i \rangle }$ for $i=1,2, 3$.
From the inequalities (\ref{iwineq2}), there exists a constant $C>0$ depending only on $s, t, \Delta$, and $g$ (in which it is locally uniform) such that
\begin{equation}\label{new3.22}
\aligned
y_1 & \ \ \ge \ \ x_1^{-C(\ell(w)+1)}\,, \\ y_2 & \ \ \ge \ \ x_2^{-C(\ell(w)+1)}\,, \\
\text{and} \ \ \ \ \ \ \ \ \ y_3 & \ \ \ge \ \ e^{-C(\ell(w)+1)}\,. \endaligned
\end{equation}
Thus
\begin{equation}\label{3.22andahalf}
y^{-(C(\ell(w)+1))^{-1}} \ \ = \ \ (y_1\,y_2\,y_3)^{-(C(\ell(w)+1))^{-1}} \ \ \le \ \ x_1\,x_2\,e \ \ = \ \ x\,\smallf{e}{x_3}\,.
\end{equation}
Because of (\ref{rho}) and (\ref{iwineq3}), the ratio $\f{e}{x_3}$ is bounded above and below by positive constants depending only on $s$, $t$, and $\Delta$, establishing (\ref{main:ineq}).
\end{proof}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} In this subsection we prove the first inequalities of (\ref{iwineq1}) and (\ref{iwineq2}).
We begin by recalling from \cite[Lemma~6.1]{ga:ms2}\footnote{The equivalent result stated there uses the $\wh{G}=\wh{K}\wh{A}\wh{N}$ Iwasawa decomposition as opposed to the $\wh{G}=\wh{N}\wh{A}\wh{K}$ Iwasawa decomposition used here.} that for any $w \in W^{\theta}$ and $u_w \in U_{w, {\mathbb{A}}}$,
\be{arthur} H_1 \ \ = \ \
\ln(\iw^{\A}_{\ha}(wu_w))
\ \ = \sum_{ \ \ \ \ \ \ \ \g\,\in\,\Delta_{-, w^{-1} }} c_\g \,h_{\gamma} \ \ \ \ \
\text{with} \ \ c_\g \,\ge\,0\,,\end{eqnarray}
where $\Delta_{-, w^{-1} }=\{\g\in \wh{\D}_{-}|w^{-1}\g>0\}$ was introduced in (\ref{neg:flipped}). Recall from the definition (\ref{kostant}) of $W^\theta$ that $\g$ cannot be the negative of a simple classical root $\a_1,\ldots,\a_{\ell}$, nor lie in their span. Since $\wh{W}$ acts trivially on $\wh{\D}_I$, the set $\Delta_{-, w^{-1} }$ is a subset of $\wh{\D}_W$ and its elements have the form (\ref{deltawhat}).
\begin{nlem} \label{rtsystem} Assume that $w \in W^{\theta}$ and $ \g \in \Delta_{-, w^{-1} }$. Then $\g$ has the form $\g = \beta + n \iota$ with $\beta \in \Delta_+$ and $n <0$.\end{nlem}
\begin{proof} Since $\g<0$ we must have that $n\le 0$, and in fact $n<0$ since it is not in the span of $\a_1,\ldots,\a_{\ell}$. Let us write $\beta$ as an integral linear combination $ \beta = \sum_{i=1}^\ell d_i \alpha_i$ of the positive simple roots. If $d_1,\ldots,d_\ell \leq 0$ then \be{iw1:lemma} w^{-1} \g \ \ = \ \ \sum_{i=1}^\ell \, d_i \ w^{-1} \alpha_i \ + \ n \iota \end{eqnarray}
exhibits the positive root $w^{-1}\g$ as a nonpositive integral combination of positive roots, a contradiction. Thus at least one $d_i > 0$, and hence all $d_i \geq 0$, so that $\b$ is a positive root. \end{proof}
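To illustrate in the simplest case of untwisted affine $\mf{sl}_2$ (so that $\ell=1$, with classical simple root $a_1$ and affine simple root $a_2=-a_1+\iota$ as in (\ref{newroot})): take $w=w_2$, the simple reflection in $a_2$, so that $\ell(w)=1$. Then $w^{-1}a_1=a_1+2(\iota-a_1)=-a_1+2\iota>0$, so $w\in W^\theta$, and $\Delta_{-,w^{-1}}$ consists of the single root $-a_2=a_1-\iota$, which indeed has the form $\b+n\iota$ with $\b=a_1\in\Delta_+$ and $n=-1<0$.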
If we write $\gamma = \beta + n \iota\in \Delta_{-, w^{-1}}$ with $\beta \in \Delta_+$ and $n<0$ as in the lemma, then (\ref{h_a:h'}) asserts the existence of a positive constant $m$ such that $h_{\gamma} = h_{\beta} + m n h_{\iota}.$ It follows from (\ref{arthur}), (\ref{new2.1}), (\ref{omega:ext}), (\ref{2.39}-\ref{2.40}), and the lemma that $\langle \omega_j ,H_1 \rangle \geq 0$ for $j=1, \ldots, \ell$, which proves the first inequality of (\ref{iwineq1}). Expand $H_1=\sum_{j=1}^\ell \langle \omega_j, H_1 \rangle h_j+\langle \Lambda_{\ell+1}, H_1 \rangle h_\iota$ as in (\ref{hhat:decomp}-\ref{Xcl}), so that
\be{wH} w^{-1}H_1 \ \ = \ \ \sum_{j\,=\,1}^\ell \langle \omega_j , H_1 \rangle \,h_{w^{-1} a_j} \ + \ \langle \Lambda_{\ell+1}, H_1\rangle \,h_{\iota } \end{eqnarray}
by (\ref{gencorootdef}).
For each $j=1, \ldots, \ell$ we write the positive root $w^{-1} a_j$ as $ \beta(j) + \kappa_j(w^{-1}) \iota$, with $\beta(j) \in \Delta$ and $\kappa_j(w^{-1}) \geq 0$. Hence, again from (\ref{h_a:h'}) there exists a positive constant $m_{w^{-1}a_j}$ (which equals $1$ in the simply-laced case, and which can only take on a finite number of values in general), such that \be{hw} h_{w^{-1} a_j} \ \ = \ \ h_{\beta(j)} \ + \ m_{w^{-1}a_j} \, \kappa_j(w^{-1}) \, h_{\iota}\,. \end{eqnarray}
The coefficient of $h_{\iota}$ in (\ref{wH}) is then given by the sum
\be{3.23.5} \langle \Lambda_{\ell+1}, H_1 \rangle \ + \ \sum_{j\,=\,1}^\ell m_{w^{-1}a_j} \, \langle \omega_j, H_1 \rangle \, \kappa_j(w^{-1}) \,. \end{eqnarray} On the other hand, from (\ref{weyloncoroot}), (\ref{arthur}), and the definition of $\gamma \in \Delta_{-, w^{-1}}$ it follows that $w^{-1}H_1$ is a nonnegative integral combination of positive coroots and so (\ref{3.23.5}) is nonnegative, in particular \be{m:bound} - \, \sum_{j\,=\,1}^{\ell} m_{w^{-1}a_j} \, \langle \omega_j , H_1 \rangle \, \kappa_j(w^{-1})
\ \ \le \ \ \langle \Lambda_{\ell+1} , H_1 \rangle \ \ < \ \ 0
\, . \end{eqnarray}
The second inequality here comes from (\ref{arthur}), the lemma, and (\ref{h_a:h'}).
Lemma~\ref{kap:ineq} asserts that $\kappa_j(w^{-1}) \le \ell(w)+1.$ Since the values $m_{w^{-1}a_j}$ are bounded by an absolute constant (and again all equal to $1$ in the simply-laced case), the first inequality of (\ref{iwineq2}) now follows from (\ref{norm:H}).
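Continuing the affine $\mf{sl}_2$ illustration above: for $w=w_2$ one has $w^{-1}a_1=-a_1+2\iota$, so that $\beta(1)=-a_1$ and $\kappa_1(w^{-1})=2=\ell(w)+1$. The bound of Lemma~\ref{kap:ineq} is thus already attained in this simplest example.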
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} Next, let us turn to the second of the inequalities listed in (\ref{iwineq1}) and (\ref{iwineq2}). Recall that we have defined $H_2$ in (\ref{iwineqH1H2def}) through the formula
\be{m2:conj} w \,(r \, \mathbf{D}) \ \ = \ \ r\, \mathbf{D} \ + \ H_2\,. \end{eqnarray}
Now, let us write $w = \t{w} T_b$ with $\t{w} \in W$ and $b \in Q^{\vee}.$
Decomposing $H_2=H_{2,cl}+ \langle \Lambda_{\ell+1}, H_2 \rangle h_{\iota} $ as in (\ref{hhat:decomp}), formula (\ref{affw:actionandcenter}) implies
\be{iw2:1} \begin{array}{lcr} \langle \Lambda_{\ell+1}, H_2 \rangle \ \ = \ \ -r\,\f{(b, b)}{2} & \text{ and } & H_{ 2,cl} \ = \ -r \ \t{w}\,b\,. \end{array} \end{eqnarray}
We claim that
\be{pos:rts}
d_j \ \ := \ \ \langle a_j , \t{w}\,b\rangle \ \ \leq \ \ 0 \ \ \ \ \text{ for } \ j\,=\,1,\, \ldots,\, \ell\,
. \end{eqnarray} Indeed, from (\ref{Tbdef}) we have that \be{iw2:2}
w^{-1} a_j \ \ = \ \ T_{-b} \t{w}^{-1}a_j \ \ = \ \ \t{w}^{-1}a_j \, - \, \langle \t{w}^{-1}a_j, b \rangle \, \iota
\,. \end{eqnarray} By definition (\ref{kostant}), $w^{-1}a_j >0$ for $w\in W^\theta$ and $j=1,\ldots,\ell$, hence
\be{new3.35} - \langle \t{w}^{-1}a_j, b \rangle \ \ \geq \ \ 0\,, \end{eqnarray}
i.e.,
$d_j = \langle a_j,\t{w} b\rangle = \langle \t{w}^{-1}a_j, b \rangle \leq 0$ because of (\ref{affw:hact}).
By Lemma~\ref{ragh} the inequality (\ref{pos:rts}) implies that \be{pos:cowts}
q_j \ \ := \ \ \langle \omega_j , \t{w}\,b \rangle \ \ \leq \ \ 0 \ \ \ \ \text{ for } \ j\,=\,1, \ldots, \ell\,
. \end{eqnarray} Hence, \be{pos:cowts:2}
\langle \omega_j , H_2 \rangle \ \ = \ \ \langle \omega_j , H_{2,cl} \rangle \ \ = \ \ - r \,\langle \omega_j , \t{w}\,b \rangle \ \ \geq \ \ 0 \ \ \ \ \text{ for } \ j\,=\,1, \ldots, \ell\,
. \end{eqnarray} This proves the second inequality in (\ref{iwineq1}).
Let us now turn to the second inequality of (\ref{iwineq2}). From (\ref{pos:rts}) and (\ref{pos:cowts}) we have the simultaneous expressions
\begin{equation}\label{wb:f}
\aligned \t{w}\, b \ \ & \ \ = \ \ \ \ \sum_{j=1}^\ell\, q_j\, h_j \ \ \ \ \text{ with } q_j \ \ \leq \ \ 0\\ \text{and} \ \ \ \ \ \ \ \ \ \
\t{w}\, b \ \ & \ \ = \ \ \ \ \sum_{j=1}^\ell\, d_j\, \omega^{\vee}_j \ \ \ \ \text{ with } d_j \ \ \leq \ \ 0\,, \endaligned
\end{equation}
where $\omega_j$ and $\omega_j^{\vee}$ are defined in (\ref{new2.1})-(\ref{new2.2}). Using (\ref{cowt:corts}) we find that \be{m2:1} (\t{w}b, \t{w}b) \ \ = \ \ \sum_{i\,=\,1}^\ell \smallf{2}{(\alpha_i, \alpha_i)}\, d_i \, q_i. \end{eqnarray}
Since the denominators take on only a finite number of positive values, there exists a constant $B> 0$ such that
\be{m2:1.1} (\t{w}b, \t{w}b)
\ \ \leq \ \ B \, \sum_{i\,=\,1}^\ell d_i \, q_i
\ \ \leq - B \,\cdot\, \max_i |d_i| \,\cdot\, \sum_{j\,=\,1}^\ell q_j\,. \end{eqnarray}
Applying Lemma~\ref{l(w):ineq} results in the estimate $\max_i |d_i| \leq C' (\ell(w) +1 )$ for some constant $C' >0 $ independent of $w$. Thus there exists some constant $C'' >0$, again independent of $w$, such that
\begin{equation}\label{m2:2}
\aligned
-\,\langle \Lambda_{\ell+1}, H_2 \rangle & \ \ = \ \ r\,\frac{(b, b)}{2} \ \ = \ \ r\,\frac{(\t{w}b, \t{w}b)}{2} \ \ \leq \ \ - \,C'' \; (\ell(w) +1 ) \; \sum_{j\,=\,1}^\ell r \,q_j \,.
\endaligned \end{equation}
On the other hand, we have from (\ref{rho:h_i}), (\ref{iw2:1}), and (\ref{wb:f}) that
\begin{equation} \label{m2:3}
\langle \rho, H_{2, cl} \rangle \ \ = \ \ -\, \langle \rho, r \, \t{w} \, b \rangle \ \ = \ \ - \,r\,\langle \rho, \sum_{j\,=\,1}^{\ell} q_j\, h_j \rangle \ \ = \ \ - \sum_{j\,=\,1}^{\ell} r \, q_j\,.
\end{equation}
The second inequality of (\ref{iwineq2}) follows from (\ref{m2:2}) and (\ref{m2:3}).
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} Finally, we turn to (\ref{iwineq3}) and the last inequality of (\ref{iwineq2}) in Theorem~\ref{iwineq}. Let $H_g:=\ln ( \iw_{\ha}(g))$, and write
\be{Hg}
H_g \ \ = \ \ H_{g, cl} \ + \ \langle \Lambda_{\ell+1}, H_g \rangle\, h_{\iota}\,,\end{eqnarray}
where $H_{g, cl} \in
\mf{h}$ as in (\ref{hhat:decomp}).
If we write $w = \t{w} T_b$ with $\t{w} \in W$ and $b \in Q^{\vee},$ we have
\begin{equation}\label{new3.42}
\aligned H_3\ \ := \ \ w H_g & \ \ = \ \ \t{w}\, T_b \, H_{g, cl} \ + \ \langle \Lambda_{\ell+1}, H_g \rangle \, h_{\iota} \\ & \ \ = \ \ \t{w} \,H_{g,cl} \ + \ \( \langle \Lambda_{\ell+1}, H_g \rangle + (H_{g,cl} , b ) \) h_{\iota} \,,\endaligned
\end{equation}
where we have used (\ref{affw:actionandcenter}) and the fact that $w$ fixes $h_\iota$.
This implies \be{H3:cl} (H_3)_{cl} \ \ = \ \ \t{w}\, H_{g, cl}\, , \end{eqnarray} in particular.
Recall that we have assumed that $\eta(s) g \in \hf{S}_t$ with $ t > 0.$ Because of (\ref{At}) we must have that \be{new3.44} \iw_{\ha}( \eta(s) g )^{a_i} \ \ = \ \ e^{ \langle a_i, r \,\mathbf{D}+ H_g \rangle } \ \ > \ \ t \ \ \ \ \ \text{ for } \ \ i\,=\,1,\, 2,\, \ldots,\, \ell+1\,, \end{eqnarray}
i.e., $\langle a_i, r \,\mathbf{D} +H_g\rangle > \ln(t)$ for $i=1, \ldots, \ell+1$.
Since (\ref{2.28})-(\ref{a:alpha}) imply that $ \langle a_i, h \rangle=0$ for $h \in {\mathbb{R}} h_\iota\oplus {\mathbb{R}} \mathbf{D}$ and $i=1, \ldots, \ell$,
we furthermore have that \be{Hgcl:abound} \langle a_i, r \,\mathbf{D} +H_g \rangle \ \ = \ \ \langle a_i, H_{g, cl} \rangle \ \ > \ \ \ln(t) \ \ \ \ \ \text{ for } i=\,1,\, \ldots, \,\ell\,. \end{eqnarray}
On the other hand, from (\ref{iotaX}) and (\ref{newroot}) we also have that \be{Hgcl:abound:2} \langle a_{\ell+1},r \,\mathbf{D} +H_g \rangle \ \ = \ \ \langle - \alpha_0 + \iota, r \,\mathbf{D} +H_g \rangle \ \ = \ \ - \, \langle \alpha_0, H_{g, cl} \rangle \,+\, r \ \ > \ \ \ln(t)\, , \end{eqnarray} where we recall that $\alpha_0$ is the highest root for the finite-dimensional Lie algebra $\mf{g},$ and as such is a positive linear combination of the roots $a_i,$ $i=1, \ldots, \ell.$
Thus, from (\ref{Hgcl:abound}) and (\ref{Hgcl:abound:2}) we conclude that as $\eta(s) g $ varies over the Siegel set $\hf{S}_t$, the quantity $| \langle a_i, H_{g, cl} \rangle |$ is bounded for $i=1, \ldots, \ell$ by a constant which depends only on $t, r,$ and the root system $\Delta.$ Since $\t{w}$ varies over a finite set, inequality (\ref{iwineq3}) now follows from (\ref{H3:cl}) and Lemma~\ref{ragh}.
Formula (\ref{new3.42}) also implies that \be{new3.43} \langle \Lambda_{\ell+1} , H_3 \rangle \ \ = \ \ \langle \Lambda_{\ell+1}, H_g \rangle \ + \ ( H_{g, cl} , b )\,. \end{eqnarray} Using (\ref{iwineq3}), Lemma~\ref{l(w):ineq}, and the Cauchy-Schwarz inequality, we see that there exists a constant $C= C(s,t, \Delta)$ depending only on $s$, $t$, and $\Delta$ such that $ | (H_{g,cl}, b) | \leq C (\ell(w) + 1).$ Since $\langle \Lambda_{\ell+1}, H_g \rangle$ depends locally uniformly on
$g$, the last inequality of (\ref{iwineq2}) follows.
\section{Entire absolute convergence on loop groups} \label{section:convergence}
In this section, we combine our analysis from section~\ref{section:iwasawa} with a decay estimate on cusp forms to conclude that cuspidal loop Eisenstein series are entire. We begin with some discussion of the structure of parabolic subgroups of loop groups.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{Mtheta} Let $k$ be any field and consider the group $\widehat{G}_k$ constructed in section~\ref{groups:section}. For any subset $\theta \subset \{ 1, \ldots, \ell+1 \}$ let $\wh{W}_\th$ denote the subgroup of $\wh{W}$ generated by the reflections $\{w_i|i \in \th\}$, and define the parabolic subgroup $\wh{P}_{\theta, k} \subset \widehat{G}_k$ as \be{parabdef} \wh{P}_{\theta,k} \ \ := \ \ \wh{B}_k \; \wh{W}_\th \; \wh{B}_k\,,
\end{eqnarray}
where each $w\in \wh{W}_{\theta}$ is identified with a representative in $N(\wh{H}_k)$ as in section \ref{bruhat:section}. Thus $\wh{P}_{\theta,k}=\wh{G}_k$ when $\theta=\{1,\ldots,\ell+1\}$ (cf.~(\ref{bruhat})). Denote by $\wh{U}_{\th,k}$ the pro-unipotent radical of $\wh{P}_{\th,k}.$ If $\theta \subsetneq \{ 1, \ldots, \ell+1 \}$ the group $\wh{W}_\th$ is finite, and the group $\widehat{U}_{\th, k}$ can be written as the intersection of $\wh{U}_k$ with its conjugate by the longest element of $\wh{W}_\theta.$ Recall the subgroup $\widehat{H}_k\subset \wh{G}_k$ defined at the beginning of section \ref{bruhat:section} as the diagonal operators with respect to the coherent basis ${\cal B}$. We now set
\begin{equation}\label{a:theta}
\aligned
\widehat{H}_{\theta,k} \ \ & = \ \ \langle h \in \widehat{H}_k \,|\, h^{a_i}\,=\, 1 \ \ \text{for all} \ i \, \in \,\theta \rangle \\ \text{and} \ \ \ \ \ \ H( \theta )_k \ \ & = \ \ \langle h_i(s) \,|\, \text{ for } i \in \theta \ \text{and} \ s \in k^* \rangle\,.
\endaligned
\end{equation}
Analogously to (\ref{hh:dp}), there is an almost direct product decomposition $\widehat{H}_k = \widehat{H}_{\theta,k} \times H(\th)_k$. Let \be{L:theta} L_{\theta,k} \ \ = \ \ \langle \chi_\beta(s) \mid \beta \in [\theta], \; s \in k \rangle\,, \end{eqnarray} where $[\theta]$ denotes the set of all roots in $\wh{\Delta}$ which can be expressed as linear combinations of elements of $\theta.$ We now define
\be{M=AL} M_{\th,k} \ \ = \ \ L_{\th,k} \,\widehat{H}_{\th,k} \, , \, \end{eqnarray}
its semi-simple quotient \be{new4.5} L'_{\th,k} \ \ = \ \ M_{\th,k} / Z(M_{\th,k}) \ \ = \ \ L_{\th,k} / ( Z(M_{\th,k}) \cap L_{\th,k})\,, \end{eqnarray} and the natural projection \be{new4.6} \pi_{L'_{\th,k}}: M_{\th,k} \ \rightarrow \ L'_{\th,k} \,.\end{eqnarray} With the notation above, $\wh{P}_{\th,k}$ decomposes into the semidirect product \be{P=MU} \wh{P}_{\th,k} \ \ = \ \ \widehat{U}_{\th,k} \rtimes M_{\theta,k} \end{eqnarray}
(see \cite[Theorem 6.1]{ga:lg2}).
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} We now suppose that $k= {\mathbb{R}}$ and continue our convention of dropping the subscript $k$ when referring to real groups. Recall the subgroup $\widehat{A}$ of $\widehat{H}$ defined in section~\ref{groups:section}. Setting $\widehat{A}_{\th} = \widehat{H}_{\th} \cap \widehat{A}$ and $A(\th) = H(\th) \cap \widehat{A}$,
it has the direct product decomposition
\be{ha:th}
\widehat{A} \ \ = \ \ \widehat{A}_\th \, \times \, A(\th)
\end{eqnarray}
as a consequence of the fact that the Cartan submatrix corresponding to $\theta$ is positive definite \cite[Lemma~4.4]{kac}.
The decomposition (\ref{M=AL}) is not a direct product, but can be refined to one as follows. Define \be{Mth:1} M_\th^1 \ \ = \ \ \bigcap_{\chi \,\in \, X(M_\th) } \ker(\chi^2)\, ,\end{eqnarray} where $X(M_\th)$ denotes the set of real algebraic characters of $M_\th.$\footnote{The group $M_\th^1$ was denoted $\t{L}_\th$ in \cite{ga:zuck}.} Then the direct product decomposition \be{mth:dp} M_\th \ \ = \ \ M_\th^1 \, \times \, \widehat{A}_\th \end{eqnarray}
holds.
The group $\wh{G}$ has an Iwasawa decomposition with respect to the parabolic $\wh{P}_\theta$, \be{iwasawa:pr} \widehat{G} \ \ = \ \ \wh{P}_{\th} \, \widehat{K} \ \ = \ \ \widehat{U}_\th \, M_\th \, \widehat{K}\, . \end{eqnarray} In contrast to (\ref{iwasawa}), this Iwasawa decomposition is typically not unique since $K_\th:= \widehat{K} \cap M_\th$ may be nontrivial; in general only the map \be{G:Mth1} \iw_{M_\th}: \widehat{G} \ \ \rightarrow \ \ M_\th/ K_\th
\end{eqnarray}
is well-defined. Noting that $K_\th $ is a compact subgroup of the finite-dimensional group $M_{\th},$ one in fact has that $K_\th \subset M^1_\th$. This and the direct product (\ref{mth:dp}) allow us to define the map \be{iw:ahat_th} \iw_{\widehat{A}_\th}: \widehat{G} \ \ \rightarrow \ \ M_\th/ M^1_\th \ \ \cong \ \ \widehat{A}_\th, \end{eqnarray}
which factors through $\iw_{M_\theta}$.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{Lprime:section} The finite-dimensional, real semi-simple Lie group $L'_\th$ admits an Iwasawa decomposition
\be{iw:L} L'_\th \ \ = \ \ U'_\th \, A'(\th) \, K'_\th \end{eqnarray}
of its own, where $U'_\th \subset L'_\th$ is a unipotent subgroup, $A'(\th) = \pi_{L'_\th}( A(\th) )$, and \be{K'th} K'_\th \ \ = \ \ \pi_{L'_\th}(K_\th) \ \ = \ \ \pi_{L'_\th}(\widehat{K} \cap M_\th) \,. \end{eqnarray} The projection from $L'_{\th}$ onto $A'(\th)$ will be denoted by \be{iw:L'} \iw_{A'(\th)}: L'_{\th} \rightarrow A'(\th). \end{eqnarray} The factors $\wh{A}_\theta$ and $A(\theta)$ have trivial intersection in (\ref{ha:th}) and the factors in (\ref{mth:dp}) commute. Therefore the intersection $Z(M_\theta)\cap A(\theta)=Z(M_\theta^1)\cap \wh{A}_\theta\cap A(\theta)$ is also trivial, and hence
the map $\pi_{L'_\theta}$ induces an isomorphism
\be{a':a}
\pi_{L'_\theta}\,:\,A(\th) \ \ \cong \ \ A'(\th) \,.
\end{eqnarray}
Define the map $\iw_{A(\th)}$ as the composition of the Iwasawa $\widehat{A}$-projection of $\widehat{G}$ defined after (\ref{iwasawa}) together with the projection onto the second factor in (\ref{ha:th}), \be{iwA(th)} \iw_{A(\th)}\,: \ \widehat{G} \ \ \stackrel{\iw_{\widehat{A}}}{\rightarrow} \ \ \widehat{A} \ \ \rightarrow \ \ A(\th)\,.\end{eqnarray}
Then the composition
\be{iw:a'} \widehat{G} \ \ \stackrel{\iw_{M_{\th}}}{\rightarrow} \ \ M_\th / K_\th \ \ \stackrel{\pi_{L'_\th}}{\rightarrow} \ \ L'_{\th} / K'_\th \ \ \stackrel{\iw_{A'(\th)}}{\rightarrow} \ \ A'(\th)\, \end{eqnarray}
coincides with $\pi_{L'_\theta}\circ \iw_{A(\th)}$.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{iw:iota}
We shall fix the choice $\theta=\{ 1, \ldots, \ell \}$ for the remainder of this section, so that $\wh{W}_{\theta}= W$.
The groups $\wh{A}_\theta$ and $A(\theta)$ are then respectively isomorphic to the groups $\wh{A}_{cen}$ and $\wh{A}_{cl}$ defined just after (\ref{hh}).
We will thus denote the maps $\iw_{\widehat{A}_\th}$ and $\iw_{A(\th)}$ simply by $\iw_{\widehat{A}_{cen}}$ and $\iw_{\widehat{A}_{cl}}$, respectively.
Since $\eta(s)$ normalizes $\wh{U}$
we may extend $\iw_{\ha_{cen}}$ and $\iw_{\widehat{A}_{cl}}$ to maps on the slice $\eta(s) \widehat{G}$ which we continue to denote by the same names, \be{mcen:eta} \iw_{\ha_{cen}}: \eta(s) \widehat{G} \ \ \rightarrow \ \ \widehat{A}_{cen} \ \ \ \ \text{ and } \ \ \ \ \iw_{\widehat{A}_{cl}}: \eta(s) \widehat{G} \ \ \rightarrow \ \ \widehat{A}_{cl} \, ,\end{eqnarray}
similarly to (\ref{iweadef}). Likewise we may furthermore define the map \be{iwM:eta} \iw_{M_{\th}}: \eta(s) \widehat{G} \ \ \rightarrow \ \ M_{\th} / K_\theta
\ \ = \ \ M_{\th} / (\widehat{K} \cap M_{\th}) ,\end{eqnarray}
noting from (\ref{etaandchi}) that $\eta(s)$ acts trivially on $M_{\theta}=M_{\{1,\ldots,\ell\}}$.
Recall that the group $\wh{A}_{cen}$ consists of the elements $h_{\iota}(s)$ for $s>0$. For a complex number $\nu \in {\mathbb{C}}$ define $h_{\iota}(s)^\nu=s^\nu$, and consider the function
\begin{equation}\label{Phi_nu}\aligned \eta(s) \widehat{G} & \ \ \rightarrow \ \ {\mathbb{C}}^* \\ \eta(s) \, g & \ \ \mapsto \ \ \iw_{\ha_{cen}}( \eta(s) \, g ) ^{\nu}\,. \endaligned\end{equation}
Let $\widehat{P}$ be the parabolic $\widehat{P}_{\th}=\widehat{P}_{\{ 1, \ldots, \ell \}}$.
The following convergence theorem for Eisenstein series\footnote{The result in \cite{ga:zuck} states absolute convergence for the left half plane $\re(\nu)< -2 h^{\vee}$, since the order of the Iwasawa decomposition there is reversed.} has been proven by the first named author.
\begin{nthm}[Theorem 3.2 in \cite{ga:zuck}] \label{gar:main} The Eisenstein series \be{deg:es} \sum_{\gamma \; \in \; ( \wh{\Gamma} \, \cap \, \widehat{P} ) \backslash \wh{\Gamma}} \iw_{\ha_{cen}}(\gamma \eta(s) \, g)^\nu \end{eqnarray} converges absolutely for $\Re{\nu}>2h^\vee$, where $h^{\vee}= \langle \rho, h_{\iota} \rangle$ is the dual Coxeter number. Moreover, the convergence is uniform when
$g$ is constrained to a subset of the form $\widehat{U}_{\mc{D}} \; \widehat{A}_{cpt} \; \widehat{K}$, where $\widehat{A}_{cpt} \subset \widehat{A}$ is compact and $\widehat{U}_{\mc{D}}$ is as in (\ref{loopsiegel}). \end{nthm}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} Our aim is to prove the convergence of the cuspidal analogs of (\ref{deg:es}) for all $ \nu \in {\mathbb{C}}$. These were defined in \cite{ga:zuck}, where they were shown to converge in a right half plane.
For any right $K_\th$-invariant function $f: M_{\theta} \rightarrow {\mathbb{C}}$ the assignment \be{f:K}
\qquad\qquad\qquad
\eta(s) g \ \ \mapsto \ \ f( \iw_{M_\th}(\eta(s) g ) ) \ \ \ \ \ \ \ \ \ \ \qquad\qquad
\text{(cf.~(\ref{iwM:eta}))}
\end{eqnarray}
is a well-defined function from $\eta(s)\wh{G}$ to $ {\mathbb{C}}.$
Recall that the finite-dimensional semisimple group $L'_{\th}$ was defined in (\ref{new4.5}) together with the projection map $\pi_{L'_\th}: M_\th \rightarrow L'_\th$. Set \be{gamma:th} \Gamma_\th \ \ := \ \ \wh{\Gamma} \, \cap \, M_\th\end{eqnarray}
and $\Gamma'_\th = \pi_{L'_\th} (\Gamma_\th).$ Recall that $K'_{\th}=\pi_{L'_\th}(K_\theta)$ from (\ref{K'th}) and let $\phi$ be a $K'_\th$-invariant cusp form on $\Gamma'_\th \setminus L'_\th.$ Set
\be{vphi} \Phi\,= \, \phi \circ \pi_{L'_\th}: \Gamma_\th \setminus M_\th \ \ \rightarrow \ \ {\mathbb{C}}\,, \end{eqnarray} which is right $K_\th$-invariant. The \emph{cuspidal loop Eisenstein series} from \cite{ga:zuck} is defined as the sum \be{cusp:es} E_{\phi, \nu}( \eta(s) g ) \ \ := \ \ \sum_{\gamma \, \in\, (\wh{\Gamma} \cap \widehat{P}) \backslash \wh{\Gamma} }\iw_{\ha_{cen}}(\gamma \,\eta(s)\, g)^{\nu}\, \Phi(\iw_{M_\th}(\gamma\, \eta(s)\, g )) \,. \end{eqnarray} We can now state the main result of this paper.
\begin{nthm} \label{main} The cuspidal loop Eisenstein series $ E_{\phi, \nu}(\eta(s) g )$ converges absolutely for any $\nu \in {\mathbb{C}}.$ Moreover, the convergence is uniform when
$g$ is constrained to a subset of the form $\widehat{U}_{\mc{D}} \; \widehat{A}_{cpt} \; \widehat{K},$ where $\widehat{A}_{cpt} \subset \widehat{A}$ is compact and $\widehat{U}_{\mc{D}}$ is as in (\ref{loopsiegel}). \end{nthm}
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} The proof of Theorem~\ref{main} is based on two main ingredients, namely the inequalities proved in Theorem~\ref{iwineq} and also the following decay estimate. To state it, we keep the notation as in $\S \ref{finrtsystems}- \ref{sd2}.$ Let $\phi:\Gamma \setminus G\rightarrow{\mathbb{C}}$ be a $K$-finite cusp form. The following result can be deduced from Theorem~\ref{thm:uniform} in the appendix. Stronger results are known for $K$-fixed cusp forms due to Bernstein and Kr\"otz-Opdam \cite{Kr:opdam}, and those are in fact sufficient for the applications here since we only consider Eisenstein series induced from $K$-fixed cusp forms. Their techniques extend to the $K$-finite setting, but have not been published. As mentioned in the introduction, we anticipate the $K$-finite statement will be useful for proving the convergence of $K$-finite loop Eisenstein series once a definition has been given, and so we give a complete proof of the following result in the appendix (using a different argument).
\begin{nthm} \label{decay:body} Let $\phi$ be a $K$-finite cusp form on $\Gamma \setminus G$. Then there exists a constant $C > 0$ which depends only on $G$ and $\phi$ such that for every natural number $N \geq 1,$ we have \be{cor:ufrho} |\phi(g )| \ \ \leq \ \ (CN)^{CN} \ \iw_A(g)^{-N \rho}\,. \end{eqnarray} \end{nthm}
\noindent This is exactly the statement of (\ref{thm:uf}) if $g$ is in some Siegel set $\mf{S}_t$. In general, if $g \in G$ we can find $\gamma \in \Gamma$ such that $\gamma g \in \mf{S}_t.$ By Remark~\ref{remark231} we have that \be{mineq} \iw_A(\gamma g )^{\rho} \ \ \geq \ \ \iw_A(g)^{\rho}\,, \end{eqnarray}
and so the asserted estimate for $\phi(g)=\phi(\g g)$ reduces to the estimate on $\mf{S}_t.$
\tpoint{Remark}\label{remark731} The explicit $N$-dependence in the estimate (\ref{cor:ufrho}) is needed in showing the boundedness of (\ref{est:-1}) below. Indeed, since $N$ there depends on the Bruhat cell, the classical rapid decay statements (in which $(CN)^{CN}$ is replaced by some constant depending on $N$) are insufficient.
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{sec:4.8.1}
This subsection contains the proof of Theorem~\ref{main}. Since cusp forms are bounded, the convergence for $\re \nu > 2 h^{\vee}$ follows from that of (\ref{deg:es}) (as observed by Garland in \cite{ga:zuck}).
We shall prove the theorem for $\re \nu \leq 2 h^{\vee}$ by leveraging the decay of the cusp form to dominate the series (\ref{cusp:es}) by a convergent series of the form (\ref{deg:es}), but with $\nu$ replaced by some $\nu_0 \gg \re\nu.$
\tpoint{ Step 1:~Main Analytic Estimate.} For $w \in W^\th$ let $\wh{\Gamma}(w) = \wh{\Gamma} \cap \wh{P} w \wh{P}$, which is independent of the representative in $N(\wh{H})$ taken for $w$.
Each set $\wh{\G}(w)$ is invariant under $\wh{\G}\cap \wh{P}$.
Using the Bruhat decomposition, the Eisenstein series can then be written as \be{eis:w:r} E_{\phi, \nu}( \eta(s) \, g) \ \ = \ \ \sum_{w \, \in \, W^\th} \sum_{ \gamma \, \in \, (\wh{\Gamma} \cap \wh{P}) \backslash \wh{\Gamma}(w) } \iw_{\ha_{cen}}(\gamma \, \eta(s) \, g )^{\nu} \ \Phi(\iw_{M_\th}(\gamma \, \eta(s) \, g ))\,.
\end{eqnarray}
Recall that we have assumed $\Re{\nu} \le 2h^\vee$. Choose $\nu_0 \in {\mathbb{R}}$ with $ \nu_0 > 2 h^\vee \ge \re \nu $, so that
the series (\ref{deg:es}) converges when $\nu$ is replaced by $\nu_0.$ Consider the inner sum on the right hand side of (\ref{eis:w:r}) for a fixed element $w \in W^{\theta},$
\be{es:w}
\sum_{\gamma \, \in \, (\wh{\Gamma} \cap \wh{P}) \backslash \wh{\Gamma}(w) } \iw_{\ha_{cen}}(\gamma \, \eta(s) \, g)^{\nu} \ \Phi(\iw_{M_{\th}}(\gamma \, \eta(s) \, g ) ) \, .
\end{eqnarray}
In the notation of Corollary~\ref{cor:iwineq:2}, let
\be{y}
y \ \ = \ \ \iw_{\ha_{cen}}(\gamma \, \eta(s) \, g) \ \ \in \ \ {\mathbb{R}}_{> 0} \ \ \ \text{ and } \ \ \ x \ \ = \ \ \iw_{\wh{A}_{cl}} ( \gamma \, \eta(s) \, g )^{\rho} \ \ \in \ \ {\mathbb{R}}_{ > 0}\,.
\end{eqnarray}
By the definition of $\Phi$ given in (\ref{vphi}),
$
\Phi(\iw_{M_{\th}}(\gamma \eta(s) g)) = \phi( \pi_{L'_{\th}}(\iw_{M_{\th}}(\gamma \eta(s) g) ) )$.
Since $\widehat{A}_{cl}=A(\th),$ we may conclude from what we have noted after (\ref{iwA(th)})
that
\be{A':Acl} \iw_{A'(\th)}\( \pi_{L'_{\th}} ( \iw_{M_\th}(\gamma \eta(s) \, g)) \) \ \ = \ \ \pi_{L'_\th} ( \iw_{\widehat{A}_{cl}}(\gamma \, \eta(s)\, g) )\,.\end{eqnarray}
Applying
Theorem~\ref{decay:body}
we conclude that there exists a constant $C_1 >0$ such that \be{estimate:cusp}\big|\Phi(\iw_{M_{\th}}(\gamma \,\eta(s) \, g))\big| \ \ = \ \ \big|\phi\( \pi_{L'_{\th}} ( \iw_{M_\th}(\gamma \eta(s) \, g))\)\big| &\leq& (C_1 N)^{C_1 N} x^{-N} \end{eqnarray}
for any $N\in {\mathbb{Z}}_{>0}$.
\tpoint{Step 2:~Comparing the central contribution.} From Corollary \ref{cor:iwineq:2} there exist constants $C_2, D >0$ independent of $\gamma \in \wh{\Gamma} \cap \widehat{B} w \widehat{B}$, but depending locally uniformly on $g$, such that
\be{y:inequ:2}
x \ \ \geq \ \ D \, y^{-(C_2 (\ell(w)+1))^{-1}}\,.
\end{eqnarray} Choose a positive real $d$ such that $dC_2 \in \mathbb{Z}_{>0}$ and
\be{d} d \ \ > \ \ \nu_0 \ - \ \re \nu \ \ > \ \ 0\,, \end{eqnarray} and let $N= d C_2 (\ell(w) + 1).$ From (\ref{y:inequ:2}) we obtain \be{est2} x^{-N}\, y^{\re\!\nu - \nu_0} \ \ \leq \ \ D^{-N} \, y^{ \frac{N}{C_2 (\ell(w)+1)} } y^{\re\!\nu - \nu_0} \ \ = \ \ D^{-N} \, y^{d} \, y^{\re\!\nu - \nu_0}\,. \end{eqnarray} Thus
\begin{equation} \label{estimate:term}
\aligned
\iw_{\ha_{cen}}(\gamma \, \eta(s) \, g)^{\re\!{\nu} }\ \big|\Phi(\iw_{M_\th}(\gamma \, \eta(s) \, g ))\big| \ \ & = \ \ y^{\re\! \nu} \ \big|\Phi(\iw_{M_\th}(\gamma \, \eta(s) \, g ))\big| \\ & \leq \ \ (C_1N)^{C_1N} \ y^{\re\!\nu} \ x^{-N} \\ & \leq \ \ (C_1N)^{C_1N} \ y^{ \nu_0} \ ( x^{-N} y^{\re \! \nu - \nu_0 })\, \\
& \leq \ \ (C_1N)^{C_1N} \,y^{ \nu_0 }\, D^{-N} \, y^a\,, \endaligned \end{equation}
where $a=d+\re\nu-\nu_0$ is positive by (\ref{d}).
On the other hand,
\be{lny}
\ln y \ \ = \ \ \langle \Lambda_{\ell+1} , \ln\(\, \iw_{\eta(s)\ha}(\gamma \, \eta(s) \, g)\,\) \rangle \ \ = \ \ \langle \Lambda_{\ell+1}, H_1 \rangle \ + \ \langle \Lambda_{\ell+1} , H_2 \rangle \ + \ \langle \Lambda_{\ell+1} , H_3 \rangle
\end{eqnarray}
in the notation of Theorem~\ref{iwineq}. The second inequality in (\ref{m:bound}) states that $ \langle \Lambda_{\ell+1}, H_1 \rangle < 0$. From (\ref{iw2:1}) and Lemma~\ref{l(w):ineq}, the term $\langle \Lambda_{\ell+1} ,H_2 \rangle$ is bounded above by a quadratic polynomial in $\ell(w)$ with strictly negative quadratic term. Furthermore, for fixed $\eta(s) g \in \hf{S}_t,$ it follows from (\ref{new3.43}) that $| \langle \Lambda_{\ell+1}, H_3 \rangle | $ is bounded above by a linear polynomial in $\ell(w)$ with coefficients that depend locally uniformly in $g$. Hence, in total $y$ is bounded above by an expression of the form $e^{p(\ell(w))}$ where $p(\cdot)$ is a quadratic polynomial with strictly negative quadratic term. By our choice of $N$
the term $D^{-N} \, (C_1N)^{C_1N}$ is bounded above by a constant times $e^{c' \ell(w) \ln \ell(w)}$ for some positive constant $c'$. Hence, the expression \be{est:-1} (C_1 N)^{C_1N} \, D^{-N} \, y^a \end{eqnarray} is bounded as $\ell(w) \rightarrow \infty,$ and so we can dominate (\ref{cusp:es}) by a constant multiple of a sum of the form
\be{est:0}
\sum_{w \, \in \, W^\th} \sum_{ \gamma \, \in \, (\wh{\Gamma} \cap \wh{P}) \backslash \wh{\Gamma}(w) }
\iw_{\ha_{cen}}(\gamma \eta(s) \, g)^{\nu_0} \ \ = \ \
\sum_{\gamma \,\in\,( \wh{\Gamma} \, \cap \, \widehat{P}) \backslash \wh{\Gamma} } \iw_{\ha_{cen}}(\gamma \eta(s) \, g)^{\nu_0} \,. \end{eqnarray} The convergence of (\ref{cusp:es}) then follows from the fact that $\nu_0>2h^\vee$ is in the range of convergence of (\ref{deg:es}).
\vspace{3mm}\par \noindent \refstepcounter{subsection}{\bf \thesubsection.} \label{funfield} Instead of working over ${\mathbb{Q}},$ we can also study cuspidal loop Eisenstein series over a function field $F$ of a smooth projective curve $X$ over a finite field. It was in this setting that Braverman and Kazhdan originally noticed the entirety of cuspidal Eisenstein series \cite{bk:ad}. In fact, they observed something stronger which is peculiar to the function field setting. Namely, for every fixed element in the appropriate symmetric space, the Eisenstein series is a finite sum. Their argument is geometric in nature and relies on estimating the number of points in certain moduli spaces which arise in a geometric construction of Eisenstein series.
Alternatively, one can also easily adapt the arguments in this paper to the function field setting to reprove the result of Braverman and Kazhdan. We briefly indicate the argument here. As in the proof of Theorem~\ref{main}, we first break up the Eisenstein series into a sum over Bruhat cells indexed by $w \in W^{\theta}.$ On each such cell, it is easy to see using the arguments in \cite[Lemma~2.5]{ga:ms2} that there can be at most finitely many elements whose classical Iwasawa component lies in the support of any compactly supported function on $(Z(M_{\th,\mathbb{A}_F}) M_{\th, F})\backslash M_{\th, \mathbb{A}_F}/(K_{\mathbb{A}_F}\cap M_{\th, \mathbb{A}_F})$. It is a result of Harder (see \cite[Lemma~I.2.7]{mw}) that cusp forms do indeed have compact support on their fundamental domain. Hence, only finitely many terms contribute to the cuspidal loop Eisenstein series on each cell. Moreover, one can use an analogue of our main Iwasawa inequalities Theorem~\ref{iwineq} to show that cells corresponding to $w$ with $\ell(w)$ sufficiently large cannot contribute \emph{any} non-zero elements to the Eisenstein series. Indeed, as in the proof of Theorem~\ref{main}, one may again show that as $\ell(w) \rightarrow \infty$ the central term of the Iwasawa component is bounded above by $e^{p(\ell(w))}$ where $p(\cdot)$ is a quadratic polynomial with strictly negative quadratic term. On the other hand, the analogue of (\ref{y:inequ:2}) dictates that as $\ell(w) \rightarrow \infty$ the central piece grows smaller and the classical piece must grow large and therefore eventually lie outside the support of the cusp form.
\section{Introduction}
Current cosmic acceleration could be due to a cosmological constant,
a constant vacuum energy density. Quintessence poses an alternative where
there is a dynamical scalar field, rolling in a potential. Cosmic
acceleration is also possible without a potential, through changing the
kinetic structure of the scalar field, as in purely kinetic k-essence
\cite{kess1,kess2,0705.0400,gubilin}. Here we explore using neither the potential
nor kinetic structure, but modified gravity to deliver an effective
cosmological constant, so the cosmic expansion is not only accelerating but
identical to the $\Lambda$CDM cosmology.
Alternatives are interesting and useful since
numerous issues exist with employing a potential or a cosmological constant,
as both are generically altered by quantum corrections. An effective cosmological
constant, without any actual vacuum energy, is therefore an interesting idea
to pursue, especially if we do
so within the framework of a shift symmetric theory, which ameliorates many
quantum corrections.
Here we investigate the cosmology where Horndeski gravity acts like an
effective cosmological constant. We explore the observational impact on
the modified gravitational strengths for cosmic structure growth and light
propagation, the prediction for the growth rate measurable by redshift
space distortions in galaxy surveys,
the consequences for the form of the Horndeski functions, and the theoretical
soundness of the theory. Section~\ref{sec:eqs} lays out the basic equations of motion
and classes of theories, while Section~\ref{sec:sols} derives the
solutions and observational impacts. Section~\ref{sec:sound} treats
the soundness of the theories and we conclude in Section~\ref{sec:concl}.
\section{How to Get a Cosmological Constant} \label{sec:eqs}
The Horndeski action is
\begin{equation}
S = \int d^{4}x \sqrt{-g} \,\Bigl[ G_4(\phi)\, R + K(\phi,X) -G_3(\phi,X)\Box\phi + {\cal L}_m[g_{\mu\nu}]\,\Bigr]\,,
\label{eq:action}
\end{equation}
where $\phi$ is the scalar field, $X=-(1/2)g^{\mu\nu}\phi_\mu\phi_\nu$,
$R$ is the Ricci scalar, ${\cal L}_m$ the matter Lagrangian, and
$K$, $G_3$, $G_4$ the Horndeski terms. We will work within a
Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) cosmology. Note that taking
$G_5=0$ and $G_4=G_4(\phi)$ ensures the speed of gravitational wave
propagation is equal to the speed of light.
The equations of motion, writing $2G_4=\mpl+A(\phi)$, are
\begin{eqnarray}
3H^2(\mpl+A)&=&\rho_m+2XK_X-K+6H\dot\phi XG_{3X}
-2XG_{3\phi}-3H\dot\phi A_\phi\label{eq:fried}\\
-2\dot H\,(\mpl+A)&=&
\ddot\phi\,(A_\phi-2XG_{3X})-H\dot\phi\,(A_\phi-6XG_{3X})
+2X\,(A_{\phi\phi}-2G_{3\phi})\notag\\
&\qquad&+2XK_X+\rho_m+P_m \label{eq:fulldh}\\
0&=&\ddot\phi\,\left[K_X+2XK_{XX}-2G_{3\phi}-2XG_{3\phi X}+6H\dot\phi(G_{3X}+XG_{3XX})\right]\notag\\
&\qquad&+3H\dot\phi\,(K_X-2G_{3\phi}+2XG_{3\phi X})-K_\phi\notag\\
&\qquad&+2X\,\left[K_{\phi X}-G_{3\phi\phi}+3G_{3X}(\dot H+3H^2)\right]
-3A_\phi(\dot H+2H^2)\,. \label{eq:fullddphi}
\end{eqnarray}
Thus the Horndeski and matter terms determine the cosmic expansion rate
$H$ and scalar field evolution $\phi$. Note the mix of dependent variables:
we have for example $G_3(\phi,X)$ but $\rho_m(a)$, $H(a)$.
We must solve the coupled equations to figure out how they are related, i.e.\
$a(\phi,X(\phi))$. This can be quite involved except for the simplest
forms of the Horndeski functions, e.g.\ power laws.
Here we will take a different approach, and specify not the Horndeski
functions but the Hubble parameter $H$, in particular as exactly that
of $\Lambda$CDM:
\begin{equation}
3\mpl H^2=\rho_m+\rho_\Lambda=(3\mpl H_0^2-\rhol)\,a^{-3}+\rhol\,,
\label{eq:hlam}
\end{equation}
where $H_0=H(a=1)$ and $\rhol$ is a constant (effective) energy density.
From this we will attempt to derive the Horndeski
functional forms and scalar field evolution.
This approach cannot be done in general. Consider the effective dark
energy density and pressure,
\begin{eqnarray}
\rho_{\rm de}&=&2XK_X-K+6H\dot\phi XG_{3X}-2XG_{3\phi}-3H\dot\phi A_\phi
\label{eq:rhode}\\
\rho_{\rm de}+P_{\rm de}&=&
\ddot\phi\,(A_\phi-2XG_{3X})-H\dot\phi\,(A_\phi-6XG_{3X})
+2X\,(A_{\phi\phi}-2G_{3\phi})+2XK_X\,. \label{eq:rhopp}
\end{eqnarray}
For our effective cosmological constant, $\rho_{\rm de}+P_{\rm de}=0$,
but we have a mix of $\dot\phi(t)$ and the Horndeski functions which
cannot be separately determined without further assumptions.
Following the idea of quintessence, which uses only the kinetic function
in the action Eq.~\eqref{eq:action},
(in the form $K=\dot\phi^2/2-V(\phi)$), or purely kinetic k-essence, which
uses only the kinetic function $K(X)$, let us explore one Horndeski function
at a time. If we try to use only $G_4(\phi)$, then the scalar field equation
\eqref{eq:fullddphi} is merely $A_\phi(\dot H+2H^2)=0$; this is only valid
for FLRW cosmology if $A_\phi=0$, i.e.\ $G_4$ is just the usual constant
Planck mass squared (over 2). If we take $G_3(\phi)$, i.e.\ no dependence
on $X$, this can be converted to k-essence \cite{2009.01720} (though not
purely kinetic k-essence in general, rather $K(\phi,X)$). For the
(noncanonical) kinetic term alone, the $\dot H$ equation~\eqref{eq:fulldh}
gives simply $XK_X=0$ for an effective cosmological constant; then
Eq.~\eqref{eq:fullddphi} implies $K_\phi=0$ too, so there is no such
solution with cosmological constant behavior.
Thus we are left with $G_3(X)$. This also has the nice property of being
shift symmetric, though we will relax that at times.
Such an action -- though with a kinetic term -- has been used in
kinetic gravity braiding dark energy \cite{kgb}, inflation \cite{ginfl},
and to address the original cosmological constant problem \cite{temper}.
Now Eq.~\eqref{eq:rhopp}
becomes
\begin{equation}
0=-2XG_{3X}\,(\ddot\phi-3H\dot\phi)\,, \label{eq:gphi}
\end{equation}
with solution
\begin{eqnarray}
\ddot\phi&=&3H\dot\phi\\
\dot\phi&=&\dot\phi_i\,\left(\frac{a}{a_i}\right)^3\,. \label{eq:dphia}
\end{eqnarray}
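Indeed, the first line is a total logarithmic derivative,
\begin{equation}
\frac{d\ln\dot\phi}{dt}\ =\ 3H\ =\ 3\,\frac{d\ln a}{dt}\,,
\end{equation}
which integrates immediately to the second, Eq.~\eqref{eq:dphia}.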
With the solution for $\dot\phi(a)$ (and implicitly $\phi(a))$, we can
then use Eq.~\eqref{eq:rhode} (or equivalently Eq.~\ref{eq:fullddphi})
to determine $G_3(X)$. This demonstrates the inverse path of the
usual construction -- starting with $\Lambda$CDM and deriving $G_3(X)$,
rather than
starting with the Horndeski functions and deriving the cosmic expansion
evolution $H(a)$.
Interestingly, the scalar field motion is independent of the form of
$G_3$; the kinetic energy simply grows with scale factor as $X\sim a^6$
(recall no potential was introduced). This is like a time reversed
version of ``skating'' \cite{skate1,skate2}, where a field on a flat
potential glides with diminishing kinetic energy $X\sim a^{-6}$;
here the modified gravity speeds up the field rather than the field slowing
due to Hubble friction ultimately to a stop (cosmological constant).
\section{Deriving $G_3$ and Cosmological Influences} \label{sec:sols}
Let us proceed to derive $G_3(X)$ without allowing any other Horndeski term,
including the kinetic term $K(X)$. While we hold to no $K(X)$, we will
also explore the effect of having a potential as part of $K$ (recall that
for quintessence, for example, $K=X-V$).
\subsection{Zero Potential} \label{sec:nov}
In the absence of any potential, Eq.~\eqref{eq:rhode} becomes
\begin{equation}
6\sqrt{2}\,HX^{3/2}G_{3X}=\rhol\,. \label{eq:geqnov}
\end{equation}
Using Eq.~\eqref{eq:hlam} and recalling that $\rhol$ is a constant, the
solution is
\begin{equation}
G_3(X)=G_3(X_i)-\frac{\rhol}{\rhomo}\sqrt{\frac{2\mpl}{3X_i a_i^{-6}}}\,
\left[\sqrt{\rhomo a_i^{-3}(X/X_i)^{-1/2}+\rhol}-\sqrt{\rhomo a_i^{-3}+\rhol}\right]\,.
\end{equation}
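A brief check of the integration: using $dX/da=6X/a$ from Eq.~\eqref{eq:dphia}, and $2H\,dH/da=-(\rhomo/\mpl)\,a^{-4}$ from Eq.~\eqref{eq:hlam}, Eq.~\eqref{eq:geqnov} becomes
\begin{equation}
\frac{dG_3}{da}=G_{3X}\,\frac{dX}{da}=\frac{\rhol}{6\sqrt{2}\,HX^{3/2}}\,\frac{6X}{a}=\frac{\rhol}{\sqrt{2X_i a_i^{-6}}}\,\frac{1}{Ha^4}=-\,\frac{2\mpl\rhol}{\rhomo\sqrt{2X_i a_i^{-6}}}\,\frac{dH}{da}\,,
\end{equation}
so $G_3-G_3(X_i)=-\sqrt{2}\,\mpl\,(\rhol/\rhomo)\,(H-H_i)/\sqrt{X_i a_i^{-6}}$, which is the expression above upon using $\sqrt{3\mpl}\,H=\sqrt{\rhomo a_i^{-3}(X/X_i)^{-1/2}+\rhol}$.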
Recall that $\rhomo=3\mpl H_0^2-\rhol$. It is interesting that even
in this extremely simple situation of a single Horndeski function of
a single variable, the functional form corresponding to the concordance
cosmology is not trivial. That is, if one attempted to parametrize the
Horndeski functions a priori, one might not guess forms like
$\sqrt{bX^{-1/2}+c}=\sqrt{b}\,X^{-1/4}\sqrt{1+(c/b)X^{1/2}}$.
Interestingly, a similar complexity was found for the reconstruction
of $K(X)$ for a simple physical system with kinetic gravity braiding
\cite{kgb2011}.
\subsection{Linear Potential} \label{sec:linv}
We can make a slight elaboration by adding a linear potential. Such a potential
$V=\lambda^3\phi$ is shift symmetric and has been
used for cosmic acceleration in inflation and dark energy \cite{linpoti,linpotde}.
We keep $K_X=0$ but now $K_\phi=-\lambda^3$.
This does not affect Eq.~\eqref{eq:gphi} or its solution
Eq.~\eqref{eq:dphia} for $\dot\phi(a)$, but does impact the solution for
the functional form $G_3(X)$. Equation~\eqref{eq:geqnov} now
becomes
\begin{equation}
6\sqrt{2}\,HX^{3/2}G_{3X}=\rhol-\lambda^3\phi\,. \label{eq:geqlinv}
\end{equation}
To solve this we need $\phi(X(a))$. The integral of Eq.~\eqref{eq:dphia}
yields
\begin{eqnarray}
\phi(a)&=&\phi(a_i)+\dot\phi_i a_i^{-3}\int dt\,a^3\\
&=&\phi(a_i)+\dot\phi_i a_i^{-3}\int_{a_i}^a dA \,A^2
\sqrt{\frac{3\mpl}{\rhomo A^{-3}+\rhol}}\\
&=&\phi(a_i)-\dot\phi_i a_i^{-3} \frac {\rhomo}{2\rhol} \sqrt{\frac{\mpl}{3\rhol}}
\left(\ln\left[\frac{\left(\sqrt{3\mpl H^2/\rhol+1}\right)\left(\sqrt{3\mpl H_i^2/\rhol-1}\right)}{\left(\sqrt{3\mpl H^2/\rhol-1}\right)\left(\sqrt{3\mpl H_i^2/\rhol+1}\right)}\right]-\frac{2\sqrt{3\mpl H^2/\rhol}}{3\mpl H^2/\rhol-1}+\frac{2\sqrt{3\mpl H_i^2/\rhol}}{3\mpl H_i^2/\rhol-1}\right) \label{eq:phih}\\
&=&\phi(a_i)-\dot\phi_i a_i^{-3} \frac{\rhomo}{2\rhol} \sqrt{\frac{\mpl}{3\rhol}}
\left(\ln\left[\frac{\left(1+\sqrt{\olm}\right)\left(1-\sqrt{\oli}\right)}{\left(1-\sqrt{\olm}\right)\left(1+\sqrt{\oli}\right)}\right]-\frac{2\sqrt{\olm}}{1-\olm}+\frac{2\sqrt{\oli}}{1-\oli}\right)\,. \label{eq:phio}
\end{eqnarray}
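In the matter dominated limit $\rhomo a^{-3}\gg\rhol$ the integral simplifies, giving the early time behavior
\begin{equation}
\phi(a)\ \approx\ \phi(a_i)\ +\ \frac{2}{9}\,\dot\phi_i\,a_i^{-3}\,\sqrt{\frac{3\mpl}{\rhomo}}\,\left(a^{9/2}-a_i^{9/2}\right)\,,
\end{equation}
so $\phi-\phi_i\propto a^{9/2}$ at early times, a behavior used in Section~\ref{sec:sound}.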
Recall that in terms of $X$,
\begin{eqnarray}
H^2(X)&=&\frac{\rhomo\,a_i^{-3}(X/X_i)^{-1/2}+\rhol}{3\mpl} \label{eq:hasx}\\
\olm(X)&=&\left[1+\frac{\rhomo}{\rhol} a_i^{-3}\left(\frac{X}{X_i}\right)^{-1/2}\right]^{-1}\,. \label{eq:olmasx}
\end{eqnarray}
Using that
\begin{equation}
G_{3X}=\frac{dG_3}{dH^2}\,\frac{dH^2}{dX}=\frac{dG_3}{dH^2}\,X^{-3/2}\,\frac{-\rhomo\sqrt{X_i a_i^{-6}}}{6\mpl}\,,
\end{equation}
gives to $G_3(a)$ a contribution from the second term of
Eq.~\eqref{eq:geqlinv} as an integral of the form
\begin{equation}
\int dH\,\phi=\sqrt{\frac{\rhol}{3\mpl}}\,\left[(y+1)\ln(y+1)-(y-1)\ln(y-1)-(y_i+1)\ln(y_i+1)+(y_i-1)\ln(y_i-1)-\ln\frac{y^2-1}{y_i^2-1}\right]\,,
\end{equation}
where $y=\sqrt{1/\olm}=\sqrt{3\mpl H^2/\rhol}$.
The form of $G_3$ is then
\begin{eqnarray}
G_3(X)&=&G_3(X_i)-\frac{\rhol}{\rhomo}\sqrt{\frac{2\mpl}{3X_i a_i^{-6}}}\,
\left[\sqrt{\rhomo a_i^{-3}(X/X_i)^{-1/2}+\rhol}-\sqrt{\rhomo a_i^{-3}+\rhol}\right]\notag\\
&+&\frac{\lambda^3\mpl\sqrt{2}}{\rhomo\sqrt{X_ia_i^{-6}}}
\left\{\phi_i\,(H-H_i)+\dot\phi_i a_i^{-3}\frac{\rhomo}{2\rhol}\sqrt{\frac{\mpl}{3\rhol}}\left[\ln\frac{\sqrt{1/\oli}+1}{\sqrt{1/\oli}-1}-\frac{2\sqrt{\oli}}{1/\oli-1}\right]\,(H-H_i)\right.\notag \\
&\qquad&-\dot\phi_i a_i^{-3}\,\frac{\rhomo}{6\rhol}\,\left[(\sqrt{1/\olm}+1)\ln(\sqrt{1/\olm}+1)-(\sqrt{1/\oli}+1)\ln(\sqrt{1/\oli}+1)\right.\notag\\
&\qquad&\quad-\left.\left.(\sqrt{1/\olm}-1)\ln(\sqrt{1/\olm}-1)+(\sqrt{1/\oli}-1)\ln(\sqrt{1/\oli}-1)+\ln\frac{1/\oli-1}{1/\olm-1}\right]\right\}\,. \label{eq:g3full}
\end{eqnarray}
For the full glory, one would expand the notation using Eqs.~\eqref{eq:hasx} and \eqref{eq:olmasx} to show
$G_3(X)$ for this very simple model, with only a linear potential and no $K(X)$ or $G_4(\phi)$,
or $G_3(\phi)$ -- we emphasize that $G_3(X)$ in the above expression, with $\phi(X)$ from
Eq.~\eqref{eq:phio} and $\dot\phi=(2X)^{1/2}$ in Eq.~\eqref{eq:dphia}, gives the $\Lambda$CDM
solution to the Friedmann equations.
It seems quite unlikely that someone starting from $G_3(X)$ would have chosen such a functional form;
we see that assuming simple forms, e.g.~power laws, carries no expectation of capturing
accurately the detailed physics of a cosmic expansion near $\Lambda$CDM.
\subsection{Cosmological Impact}
Apart from the expansion history, modified gravity affects the growth of
cosmic structure through the time varying effective gravitational strength --
the gravity history. Generally there are two gravitational strengths, seen
in two modified Poisson equations: those relating the time-time metric
potential to the matter density perturbation and relating the sum of the
time-time and space-space metric potentials to the matter density
perturbation. These can be abbreviated $\gm$ and $\gl$ respectively.
They are related to the Horndeski functions and their derivatives, but
a compact notation in terms of property functions $\al_i$ was given by
\cite{belsaw}. For computing the gravitational strengths we need
\begin{eqnarray}
\alm=\frac{d\ln 2G_4}{d\ln a}\\
\alb=\frac{\dot\phi XG_{3X}}{HG_4}\,,
\end{eqnarray}
for our class of modified gravity, where $\alpha_T=0$ since we took
the speed of gravitational waves to be the speed of light. As our
$2G_4=\mpl$ is constant, we also have $\alm=0$. Thus our theory is
part of the No Run Gravity class \cite{nrg}, and $\gm=\gl$ (thus
there is no gravitational slip) so we will
simply refer to $\geff$.
Evaluating our solutions for $G_3$, we have
\begin{equation}
\alb(a)=\olm(a)\,\left(1-\frac{\lambda^3 \phi}{\rhol}\right)\,, \label{eq:alb}
\end{equation}
where $\phi(a)$ is given by Eq.~\eqref{eq:phih} or \eqref{eq:phio}.
Note the beautifully simple form for the no potential case:
$\alb(a)=\olm(a)$. This would go to zero in the early universe,
indicating an unmodified general relativity, and freeze to unity in
the de Sitter future.
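This form can be verified directly: Eq.~\eqref{eq:geqlinv} gives $\dot\phi\,XG_{3X}=\sqrt{2}\,X^{3/2}G_{3X}=(\rhol-\lambda^3\phi)/(6H)$, so with $G_4=\mpl/2$,
\begin{equation}
\alb=\frac{2\dot\phi\,XG_{3X}}{H\mpl}=\frac{\rhol-\lambda^3\phi}{3\mpl H^2}=\olm\left(1-\frac{\lambda^3\phi}{\rhol}\right)\,,
\end{equation}
with the no potential case recovered for $\lambda^3=0$.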
The gravitational strength for the No Run Gravity class is
\begin{equation}
\geff=\frac{\alb+\alb'}{\alb-\alb^2/2+\alb'}\ ,
\end{equation}
where $\geff$ is in units of Newton's constant and
a prime denotes $d/d\ln a$. For the no potential and
linear potential models this yields
\begin{equation}
\geff=\frac{\olm(4-3\olm)(1-\lambda^3\phi/\rhol)-\lambda^3\dot\phi_i a_i^{-3}(\olm/\rhol)(a^3/H)}{\olm(4-3\olm)(1-\lambda^3\phi/\rhol)-\lambda^3\dot\phi_i a_i^{-3}(\olm/\rhol)(a^3/H)-(1/2)\olm^2(1-\lambda^3\phi/\rhol)^2}\ .
\end{equation}
At early times the $\olm^2$ term from $\alb^2$ is negligible
compared to the linear $\alb$ term, and $\geff\to1$. Again this
indicates general relativity holds in the early universe. At late
times, $\geff\to2$ in the no potential case ($\lambda^3=0$)
and $\geff\to0$ in the linear potential case (not surprising, as
the field rolls to infinity and $G_3$, and hence $\alb$, must grow to match it in
order to keep $\rho_{\rm de}=\rhol$ for the $\Lambda$CDM background).
For the no potential case the gravitational strength takes a
simple form,
\begin{equation}
\geff=\frac{1-(3/4)\olm}{1-(7/8)\olm}\ . \qquad{\rm [No\ potential]}
\end{equation}
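As a quick numerical illustration (a minimal sketch, not part of the derivation; it assumes the same $\Omega_{\Lambda,0}=0.7$ used in the figures below), this expression can be evaluated directly:
\begin{verbatim}
# Minimal sketch: no-potential G_eff(a) on the LCDM background.
import numpy as np

OL0 = 0.7                                # Omega_Lambda today (assumed)

def OL(a):                               # Omega_Lambda(a) for LCDM
    return OL0 / (OL0 + (1.0 - OL0) * a**-3)

def geff(a):                             # no-potential formula above
    x = OL(a)
    return (1.0 - 0.75 * x) / (1.0 - 0.875 * x)

print(round(geff(2.0 / 3.0), 2))         # a=2/3, i.e. z=0.5: prints 1.08
\end{verbatim}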
Figure~\ref{fig:geff} plots the braiding property function $\alb$
and the gravitational strength $\geff$ for the no potential and
linear potential cases. For the linear potential model there are
two parameters, essentially the initial energy density $\lambda^3\phi_i$
and initial velocity $\dot\phi_i$. We use dimensionless
parameters $\kappa_i=\lambda^3\phi_i/\rhol$ and
$\kappa_0=\lambda^3\phi(a=1)/\rhol$ for better physical
interpretation, and set $\Omega_{\Lambda,0}=0.7$.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.6\columnwidth]{gcases.ps}
\caption{The effective gravitational strength $\geff$ is plotted
for the no potential and various linear potential models (solid
curves). The dotted curve is $\alb$ for the no potential case;
for the linear potential cases $\alb$ goes very negative near
the present. The linear potential cases have
$(\kappa_i,\kappa_0-\kappa_i)=(1,1)$, $(1, 2)$, $(1, 5)$, $(2, 1)$ for the
blue, red, magenta, dark green curves respectively, from
top to bottom at $a=0.7$.
}
\label{fig:geff}
\end{figure}
All cases indeed behave as general relativity in the past.
The no potential case has no free parameters, and shows
strengthening gravity, $\geff>1$, with $\geff(z=0.5)=1.08$.
It asymptotes to $\geff\to2$ in the future, where $\alb\to1$.
The linear potential cases exhibit weakened
gravity, with the behavior shown for a variety of initial conditions
given by $\kappa_i$ and velocities, or field distance rolled,
characterized by $\kappa_0-\kappa_i$. Increasing $\kappa_i$ causes the deviation from
general relativity to occur earlier, while increasing
$\kappa_0-\kappa_i$ increases the deviation nearer the
present and determines the future behavior, though they
all eventually asymptote to $\geff\to0$ in the future (while
$\alb\to-\infty$). However, during the observable epoch
these models are all consistent with growth measurements,
e.g.\ $\geff\in[0.93,1]$ for $z>0.5$.
Figure \ref{fig:fsig} shows the growth rate $f\sigma_8(z)$
as can be measured from redshift space distortions in
galaxy redshift surveys. The ongoing survey with the
Dark Energy Spectroscopic Instrument (DESI \cite{desisci,desi19})
can make percent level measurements over a wide range
of redshifts. The low redshift region where the modified
gravity effects are strongest can gain further precision
from peculiar velocity surveys \cite{pv1,pv2,pv3}. Note
that the different shapes of the predicted curves help lift
degeneracy with the amplitude of density perturbations,
e.g.\ $\sigma_{8,0}$.
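For orientation, the curves of Fig.~\ref{fig:fsig} can be approximated with a minimal numerical sketch (our own simplified setup rather than the computation behind the figure: flat $\Lambda$CDM background, the subhorizon growth equation $\delta''+(2-\frac{3}{2}\Omega_m)\,\delta'=\frac{3}{2}\Omega_m\,\geff\,\delta$ with $'=d/d\ln a$, and an assumed normalization $\sigma_{8,0}=0.8$):
\begin{verbatim}
# Minimal growth-rate sketch for the no-potential model (illustrative).
import numpy as np
from scipy.integrate import solve_ivp

OL0, sig80 = 0.7, 0.8                    # assumed parameters

def OL(a):
    return OL0 / (OL0 + (1.0 - OL0) * a**-3)

def geff(a):                             # set to 1.0 to recover GR
    x = OL(a)
    return (1.0 - 0.75 * x) / (1.0 - 0.875 * x)

def rhs(lna, y):                         # y = (delta, d delta/d ln a)
    a = np.exp(lna)
    Om = 1.0 - OL(a)
    return [y[1], -(2.0 - 1.5 * Om) * y[1] + 1.5 * Om * geff(a) * y[0]]

lna = np.linspace(np.log(1e-3), 0.0, 400)
sol = solve_ivp(rhs, (lna[0], 0.0), [1e-3, 1e-3], t_eval=lna, rtol=1e-8)
delta, ddelta = sol.y
fsig8 = sig80 * ddelta / delta[-1]       # f sigma_8(a)
print(fsig8[-1])                         # today; exceeds the GR value
\end{verbatim}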
\begin{figure}[htb!]
\centering
\includegraphics[width=0.6\columnwidth]{fsig.ps}
\caption{The cosmic growth rate $f\sigma_8$ is
plotted vs redshift, for general relativity (GR),
the Horndessence no potential theory, and two
Horndessence linear potential models corresponding
to the same color curves as in Fig.~\ref{fig:geff} and
labeled by $(\kappa_i,\kappa_0-\kappa_i)$.
}
\label{fig:fsig}
\end{figure}
\section{Soundness} \label{sec:sound}
The property functions $\alpha_i$ are also convenient
for checking the soundness of the theory, specifically whether
it is ghost free and stable to scalar perturbations. The ghost
free condition is given by ${\cancel{g}}\equiv\alk+(3/2)\alb^2\ge0$.
For our class of theory,
\begin{equation}
\alk=\frac{12\dot\phi X(G_{3X}+XG_{3XX})}{H\mpl}\,.
\end{equation}
For the no potential and linear potential models this becomes
\begin{equation}
\alk=-\frac{3}{2}\olm(1+\olm)\left(1-\frac{\lambda^3\phi}{\rhol}\right)-\frac{\lambda^3\dot\phi/H}{3\mpl H^2}\,.
\end{equation}
Combining this with Eq.~\eqref{eq:alb}, the ghost free condition is
\begin{equation}
\cancel{g}=-\frac{3}{2}\olm+\frac{3}{2}\frac{\olm(1-\olm)\,\lambda^3\phi}{\rhol}-\frac{\lambda^3\dot\phi/H}{3\mpl H^2}+\frac{3}{2}\left(\frac{\olm\,\lambda^3\phi}{\rhol}\right)^2\ge0\,. \label{eq:noghost}
\end{equation}
We immediately see that this is violated for the no potential ($\lambda^3=0$)
model. For the linear potential model it can be ghost free.
In the asymptotic future, $\phi\sim a^3$ and the last term dominates
so the condition holds. In the asymptotic past, $\phi\sim\phi_i+a^{9/2}$,
and
\begin{equation}
\cancel{g}\to -\frac{3}{2}\olm\left(1-\frac{\lambda^3\phi_i}{\rhol}\right)\,.
\end{equation}
So we would require the field to start from a frozen state with
$\phi_i>\rhol/\lambda^3$ to have a ghost free theory.
To ensure stability against scalar perturbations, the sound speed
squared must be nonnegative,
\begin{equation}
\cancel{g}c_s^2=\frac{\alb}{2}(3\olm-1-\alb)+\alb'\ge0\,.
\end{equation}
The expression evaluates to
\begin{equation}
\cancel{g}c_s^2=\frac{5}{2}\olm\left(1-\frac{\lambda^3\phi}{\rhol}\right)-\frac{\olm^2}{2}\left(1-\frac{\lambda^3\phi}{\rhol}\right)^2
-\frac{3\olm^2}{2}\left(1-\frac{\lambda^3\phi}{\rhol}\right)-\frac{\olm}{\rhol}\frac{\lambda^3\dot\phi_i(a/a_i)^3}{H}\,. \label{eq:stable}
\end{equation}
Recalling that at late times $\phi\sim a^3$, we see the second term
dominates and $\cancel{g}c_s^2\to-(1/2)(\lambda^3\phi/\rhol)^2<0$, giving a
late time instability. At early times $\cancel{g}c_s^2\to(5/2)\olm(1-\lambda^3\phi_i/\rhol)$
so we would require $\phi_i<\rhol/\lambda^3$ -- the exact opposite
of the ghost free condition! Thus neither the no potential nor the
linear potential model is sound.
Can we extend this no go situation to an arbitrary potential (giving up the
desired property of shift symmetry)? For a general potential $V(\phi)$,
the solution $G_3(X)$ will change, but we can write the $\alpha_i$ and
the soundness conditions without solving for $G_3(X)$, just using
Eq.~\eqref{eq:geqlinv} with $\lambda^3\phi$ replaced with $V$.
This gives the minor change
\begin{eqnarray}
\alb&=&\olm\,\left(1-\frac{V}{\rhol}\right)\\
\alk&=&-\frac{3}{2}\olm(1+\olm)\left(1-\frac{V}{\rhol}\right)-\frac{\dot V/H}{3\mpl H^2}\,.
\end{eqnarray}
The no ghost condition is Eq.~\eqref{eq:noghost}, simply
with $\lambda^3\phi$ replaced with $V$ (so $\lambda^3\dot\phi\to\dot V$);
the stability condition is Eq.~\eqref{eq:stable} with the same
substitution (noting that $\lambda^3\dot\phi_i(a/a_i)^3=\lambda^3\dot\phi\to\dot V$).
However, the new element is that we now have freedom
in $\dot V$: it no longer has to go as $\dot\phi\sim a^3$. Studying
the equations, we see that at early times
\begin{eqnarray}
\cancel{g}\to -\frac{3\olm}{2}\left[1-\frac{V}{\rhol}-\olm\left(\frac{V}{\rhol}\right)^2\right]-\frac{\olm\dot V}{\rhol H}\\
\cancel{g}c_s^2\to \frac{5\olm}{2}\left(1-\frac{V}{\rhol}\right)-\frac{\olm^2}{2}\left(1-\frac{V}{\rhol}\right)^2-\frac{\olm\dot V}{\rhol H}\,.
\end{eqnarray}
If $V\lesssim\rhol$ then
we need $\dot V<0$, and the magnitude must be
$|\dot V|\gtrsim H\rhol\gtrsim HV$. But with $V$ changing rapidly
it will eventually break the criterion $V\lesssim\rhol$. If $V\gg\rhol$
then if $\olm V/\rhol<1$ we need $\dot V<0$ and $|\dot V|>HV$,
and eventually $\olm V/\rhol<1$ is overturned. If $\olm V/\rhol>1$
then we need $\dot V<0$ with $\olm|\dot V|/(\rhol H)>(\olm V/\rhol)^2$, or
$|\dot V|>HV\,[V/(\mpl H^2)]$.
At late times,
\begin{eqnarray}
\cancel{g}\to \frac{3}{2}\left[\left(\frac{V}{\rhol}\right)-1\right]-\frac{\dot V}{\rhol H}\\
\cancel{g}c_s^2\to -\frac{1}{2}\left[\left(\frac{V}{\rhol}\right)-1\right]-\frac{\dot V}{\rhol H}\,.
\end{eqnarray}
If $V<\rhol$ then we need $\dot V<0$ and $|\dot V|>\rhol H$,
which again will eventually become inconsistent with $V<\rhol$.
For $V>\rhol$, we need $\dot V<0$ and $|\dot V|\gtrsim HV\,(V/\rhol)$.
It is unclear whether this can be realized: as $V$ is driven smaller by $\dot V<0$,
it may become inconsistent with $V>\rhol$. However, if $V\to\rhol$
then both the ghost free and stability conditions may be satisfied.
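Indeed, exactly at $V=\rhol$ the late time limits above collapse to $\cancel{g}\to-\dot V/(\rhol H)$ and $\cancel{g}c_s^2\to-\dot V/(\rhol H)$, so both conditions hold simultaneously provided $\dot V\le0$.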
Another possibility is to allow a negative potential, so $V$ is
just driven more negative. (This does not have the usual Big Crunch
doomsday since the modified gravity term in
Eq.~\ref{eq:geqlinv} with $\lambda^3\phi$ replaced by $V$
compensates, so the cosmic expansion remains as in $\Lambda$CDM.)
\section{Conclusions} \label{sec:concl}
Horndessence generalizes scalar field evolution to achieve cosmic
acceleration in a manner parallel to how quintessence does with a
nonconstant potential or k-essence does with a noncanonical kinetic
term. Here it is modified gravity through the Horndeski braiding
function $G_3(X)$ that pushes the scalar field. While usually one
would start with a Lagrangian function $G_3(X)$, somehow
motivated, and derive the resulting cosmic expansion, here we
explore the inverse path of requiring a $\Lambda$CDM expansion
history, as consistent with observations, and investigating the
derived necessary function $G_3(X)$.
The results show that even for this simple expansion behavior,
the functional form of $G_3(X)$ can be more involved than a
simple a priori parametrization such as a power law or polynomial
in $X$ -- see Eq.~\eqref{eq:g3full}! In fact, it is not even a rational
function. Adding the simplest
possible elaboration in terms of a linear (shift symmetric) potential
greatly increases the complication. Thus one must take care
when starting from a parametrization of the Horndeski action
(or effective field theory) property functions -- it is not clear that
any measure or prior on a simple functional parametrization
space will properly sample the observationally viable cosmologies.
Horndessence as treated in this article was limited to a
theory that preserves shift symmetry. This made it extremely
predictive: the cosmic expansion history is that of $\Lambda$CDM by
construction, but the gravitational strength for matter (cosmic
growth of structure) and light (gravitational lensing) differ
from general relativity. We have $\gm=\gl\ne G_{\rm Newton}$
so there is no gravitational slip, and Horndessence is a type of
No Run Gravity so gravitational wave propagation is not affected.
General relativity holds in the early universe. We exhibit the
braiding evolution $\alb(a)$ and $\geff(a)$, as well as the
observable cosmic structure growth rate $f\sigma_8(a)$, for
the two models of no potential and linear potential. For the no
potential model there are
the same number of free parameters as in $\Lambda$CDM.
However, the two simple shift symmetric models, with no potential and
with a linear potential, cannot satisfy the ghost free and stability
conditions at all times. We have outlined a way around this
by using a more general (not shift symmetric) potential. One could
also increase the complexity of the action by allowing a $K(X)$
term. We leave such extensions to future work, but do not
expect them to change the key result that cosmic expansion behavior
such as $\Lambda$CDM does not lead to simple functional
parametrizations of action functions like $G_3(X)$.
\acknowledgments
I thank Stephen Appleby for helpful discussions.
This work is supported in part by the Energetic Cosmos
Laboratory and the
U.S.\ Department of Energy, Office of Science, Office of High Energy
Physics, under contract no.~DE-AC02-05CH11231.
\section{Introduction}
A major open problem in mathematical physics is the existence of an
Anderson
transition in dimension three and higher for random Schrödinger operators.
These operators model transport in disordered media, a
classical example being electrical conductivity in metals with impurities.
In this paper, we consider the quantum mechanical problem of an electron
moving on a lattice $\mathbb Z^{d}$ and interacting with a random potential.
The corresponding mathematical model is the so-called
discrete Random Schrödinger operator,
or Anderson's tight binding model \cite{anderson1958},
acting on the Hilbert space $l^{2} (\mathbb Z^{d})$ and defined by
\begin{align*}
H \coloneqq - \Delta_{\mathbb{Z}^{d}} + \lambda V,
\end{align*}
where $\Delta_{\mathbb{Z}^{d}}$ is the lattice Laplacian
$(\Delta \psi) (j)=\sum_{k:|j-k|=1} (\psi (k) -\psi (j))$,
and $V$ is a multiplication operator $(V\psi)(j) = V_j \psi(j)$.
Here, $\{V_{j} \}_{j\in \mathbb{Z}^d}$ is a collection of random variables
(independent or correlated)
and $\lambda>0$ is a parameter expressing the strength of disorder.
Physical information is encoded in the spectral properties of $H$.
For a large class of random potentials $V$ localization
of the eigenfunctions has been proved
in $d=1$ for arbitrary disorder
and in $d\geq 2$ for large disorder or at the band edge.
A localization-delocalization transition
has been proved on tree graphs,
and is conjectured to hold on $\mathbb{Z}^{d},$ for $d\geq 3.$
A detailed up-to-date review on the model, known results and tools
can be found in the book by Aizenman and Warzel \cite{AizWar}.
Finite volume criteria allow one to reconstruct properties of $H$
from the Green's function (or resolvent)
of a finite volume approximation $H_{\Lambda},$
by taking the thermodynamic limit $\Lambda \uparrow \mathbb{Z}^d.$
More precisely, let $\Lambda\subset \mathbb Z^d$ be a finite cube
centered around the origin with volume $|\Lambda| =N$.
We define the Random Schrödinger operator
$H_{\Lambda}$ on $l^2(\Lambda)$ as
\begin{align}\label{eq:Hfinitevol}
H_{\Lambda }= -\Delta+\lambda V ,
\end{align}
where $\Delta=\Delta_{\Lambda } $ is the discrete Laplacian on $\Lambda$
\begin{align*}
(\Delta \psi)(j) = \sum_{k\in \Lambda :|j-k|=1} (\psi (k)- \psi (j))
+ \mbox{ possible boundary terms. }
\end{align*}
The relevant quantities are expressions of the form
\begin{align}\label{eq:prod}
\mathbb{E}[ G_{\Lambda}(z_{1})_{j_{1},k_{1}}\dots
G_{\Lambda}(z_{n})_{j_{n},k_{n}}],
\end{align}
where $G_{\Lambda } (z)\coloneqq (z\mathds{1}_{\Lambda }-H_{\Lambda })^{-1},$
$z\in \mathbb{C}\setminus \sigma (H),$ and
$\mathbb{E}$ denotes the average with respect
to the random vector $V.$
In particular the (averaged) density of states $\bar\rho_{\Lambda } (E)$
satisfies the relation
\begin{align*}
\int_{\mathbb R} \frac{1}{z-E}\ \bar\rho_{\Lambda } (E) \,\mathrm{d} E
= \frac{1}{|\Lambda|}\mathbb{E}[{\rm Tr\,} G_{\Lambda }(z )],
\end{align*}
hence (see for example \cite[Section 4 and Appendix B]{AizWar})
\begin{align*}
\bar\rho_\Lambda(E)
\coloneqq -\frac{1}{\pi|\Lambda|} \lim_{\varepsilon\to 0^{+}}
\mathbb{E}[ {\rm Im \,} {\rm Tr\,} G_{\Lambda }(E+i\varepsilon )],
\end{align*}
where $E\in\mathbb{R}.$
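The limit is understood in the distributional sense: applying the
spectral decomposition of $H_{\Lambda }$ one uses
\begin{align*}
\lim_{\varepsilon\to 0^{+}} {\rm Im \,} \frac{1}{E-E'+i\varepsilon }
= -\pi\, \delta (E-E'),
\end{align*}
so that each eigenvalue $E'$ of $H_{\Lambda }$ contributes a delta peak
of weight $|\Lambda |^{-1}$ to $\bar\rho_\Lambda.$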
Regularity properties of $\bar\rho_\Lambda(E)$ and its derivatives
can be inferred from the generating function
\begin{align}\label{eq:generatorG}
\mathcal{G}_{\varepsilon }(E,\tilde E ) =
\mathbb{E}\left[
\frac{\det ((E+ i\varepsilon) \mathds{1}_{\Lambda } - H_{\Lambda })}
{\det ((\tilde E + i \varepsilon)\mathds{1}_{\Lambda }- H_{\Lambda })}
\right].
\end{align}
For example
\begin{align}\label{eq:gree-repr}
\mathbb{E}[{\rm Tr\,} G_{\Lambda }(E+i\varepsilon )]
= -\partial_{\tilde E} \mathcal{G}_{\varepsilon }(E,\tilde E)|_{\tilde E= E}
=\partial_{ E} \mathcal{G}_{\varepsilon }(E,\tilde E)|_{\tilde E= E}.
\end{align}
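Both identities follow from Jacobi's formula
$\partial_{t}\det M (t)=\det M (t)\,{\rm Tr\,} [M (t)^{-1}\partial_{t}M (t)]$;
for instance,
\begin{align*}
\partial_{\tilde E}\,
\frac{1}{\det ((\tilde E + i \varepsilon)\mathds{1}_{\Lambda }- H_{\Lambda })}
= -\,\frac{{\rm Tr\,} G_{\Lambda }(\tilde E+i\varepsilon )}
{\det ((\tilde E + i \varepsilon)\mathds{1}_{\Lambda }- H_{\Lambda })}\,,
\end{align*}
and at $\tilde E=E$ the two determinants in \eqref{eq:generatorG} cancel.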
Information on the nature of the spectrum can be deduced
from the thermodynamic limit of
\begin{align*}
\mathbb{E}[|G_{\Lambda }(E+i\varepsilon )_{jk} |^2],
\qquad \mbox{or}\qquad
\rho_2(E, E+\omega)
\coloneqq \mathbb{E}[ \rho_\Lambda(E) \rho_\Lambda(E+\omega )]
\end{align*}
where the spectral parameter $\varepsilon $ and the energy difference $\omega $
must be taken of order $|\Lambda |^{-1}.$
\vspace{0.2cm}
A possible tool to analyse these objects is the so-called
supersymmetric (SUSY) approach.
It allows one to rewrite averages of the form \eqref{eq:prod}
as an integral involving only the Fourier transform
of the probability distribution, at the cost
of introducing Grassmann variables in the intermediate steps.
A short introduction to Grassmann variables and their application
in our context is given in Appendix \ref{app:susy}.
For more details see for example the following monographs:
\cite{Varadarajan2004, Berezin1987,Wegner2016,DeWitt1992}.
This formalism proved to be especially useful in the case of random operators
arising from quantum diffusion problems \cite{Efetov1999}.
The supersymmetric approach was applied with success to study
Anderson localization as well as phase transitions on tree-graphs
\cite{Wang2001,bovier1990,campanino1986supersymmetric,klein1986rigorous}.
All these applications are based on variations of the following key fact.
\begin{theo}\label{theo:generalrepresentation}
Let $H_{\Lambda }$ be as in Eq. \eqref{eq:Hfinitevol}
and assume the $V_j$ are independent random variables with probability
measure $\mu_{j}$ such that
$\int v_{j}^{2}d\mu_{j} (v_{j})<\infty $ $\forall j$, i.e.,
its Fourier transform
$\hat{\mu}_{j} (t)\coloneqq \int e^{-itv_{j}}d\mu_{j} (v_{j})$
is twice differentiable with bounded first and second derivatives.
Let $\mathcal{A} = \mathcal{A}[\{\chi_{j},\bar\chi_{j} \}_{j\in \Lambda}]$
be a Grassmann algebra, $z\in \mathbb C^\Lambda$ a family of
complex variables and set
$\Phi_{j} \coloneqq (z_{j},\chi_{j})^t ,$ $\Phi^*_{j} \coloneqq (\bar z_{j}, \bar \chi_{j})$
such that
$\Phi^*_{j}\Phi_{k}= \bar z_{j} z_{k}+ \bar \chi_{j}\chi_{k}$
is an even element in $\mathcal{A}$ for all $j,k\in \Lambda $.
For any matrix $A\in \mathbb{C}^{\Lambda \times \Lambda }$, we define
\begin{align*}
\Phi^* A\Phi
\coloneqq \Phi^* {\rm diag\,} (A, A)\Phi
=\sum_{j,k\in \Lambda }A_{jk}\Phi^{*}_{j} \Phi_{k},
\end{align*}
where $ {\rm diag\,} (A, A)$ is a $2|\Lambda |\times 2 |\Lambda |$
block diagonal matrix.
In particular $\Phi^* \Phi= \sum_{j\in \Lambda }\Phi^{*}_{j} \Phi_{j}.$
Finally, for any even element $a=b_{a}+n_{a}$ in $\mathcal{A}^0$
with $n_{a}^{3}=0$ we define (cf. Eq. \eqref{eq:upgradefunc})
\begin{align}\label{eq:fouriera}
\hat{\mu}_j(a)
= \mathbb{E}[e^{-iaV_{j}}]
\coloneqq \hat{\mu}_j(b_{a})+\hat{\mu}'_j(b_{a}) n_{a}
+\tfrac{1}{2}\hat{\mu}''_j(b_{a})n_{a}^{2} .
\end{align}
Then the generating function \eqref{eq:generatorG} can be written as
\begin{align}\label{eq:susyav1}
\mathcal{G}_{\varepsilon }(E,\tilde E)
= \int [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi]\
\,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon + \Delta)\Phi}
\prod_{j\in \Lambda }\hat{\mu}_j(\lambda \Phi^*_j \Phi_j),
\end{align}
where we defined
$[\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi]
\coloneqq \prod_{j\in \Lambda } (2\pi )^{-1} \,\mathrm{d} \bar z_{j} \,\mathrm{d} z_{j}
\,\mathrm{d} \bar \chi_{j} \,\mathrm{d} \chi_{j}$,
$ \Phi^* \varepsilon \Phi=\varepsilon \Phi^* \Phi$
and $\mathbf{E}
= {\rm diag\,} (\tilde E \mathds{1}_{|\Lambda|}, E\mathds{1}_{|\Lambda|})$
is a diagonal matrix. Moreover
\begin{align}\label{eq:susyav2}
\begin{split}
\mathbb{E}[|G_{\Lambda }(E+i\varepsilon )_{jk}|^2]
=&
\int [\,\mathrm{d} {\Phi}^* \,\mathrm{d} \Phi]\, [\,\mathrm{d} {\tilde \Phi}^* \,\mathrm{d} \tilde \Phi]\,
\,\mathrm{e}^{i {\Phi}^* (\mathbf{E} + i \varepsilon + \Delta)\Phi
-i {\tilde \Phi}^* (\mathbf{E} -i \varepsilon + \Delta)\tilde \Phi} \\
& \times z_{j}\bar z_k \tilde{z}_{k} \overline{\tilde z}_j\,
\prod_{l\in\Lambda}\hat{\mu}_{l}
(\lambda (\Phi_l^* \Phi_l- {\tilde \Phi_l}^* \tilde \Phi_l)).
\end{split}
\end{align}
A similar representation holds for the two-point function $\rho_2(E,\tilde E)$.
\end{theo}
\begin{rem}
In the formulas above both $\hat{\mu}_{j}(\lambda (\Phi_j^* \Phi_j))$ and
$\hat{\mu}_{j}(\lambda (\Phi_j^* \Phi_j- {\tilde \Phi_j}^* \tilde \Phi_j))$
are well defined.
Indeed, the even elements $a_{1}\coloneqq\Phi_j^* \Phi_j$ and
$a_{2}\coloneqq\Phi_j^* \Phi_j- {\tilde \Phi_j}^* \tilde \Phi_j,$
have nilpotent part
$n_{a_{1}}=\bar \chi_{j}\chi_{j}$ and
$n_{a_{2}}=\bar \chi_{j}\chi_{j}-\bar {\tilde{\chi}}_{j}\tilde{\chi}_{j} $,
respectively.
The result then follows from $n_{a_{1}}^{2}=0=n_{a_{2}}^{3},$
together with Eq. \eqref{eq:fouriera}.
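Concretely, since $n_{a_{1}}^{2}=0$, Eq. \eqref{eq:fouriera} reduces
for $a=\lambda a_{1}=\lambda \Phi_j^{*}\Phi_j$ to the two-term expansion
\begin{align*}
\hat{\mu}_{j}(\lambda \Phi_j^{*}\Phi_j)
= \hat{\mu}_{j}(\lambda \bar z_{j} z_{j})
+ \lambda\,\hat{\mu}'_{j}(\lambda \bar z_{j} z_{j})\,\bar \chi_{j}\chi_{j},
\end{align*}
while for $a_{2}$ the square
$n_{a_{2}}^{2}=-2\,\bar \chi_{j}\chi_{j}\,\bar{\tilde{\chi}}_{j}\tilde{\chi}_{j}$
survives, so the second derivative $\hat{\mu}''_{j}$ enters.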
Note that we have taken independent variables above
only to simplify notation.
In the general case, the product of one-dimensional Fourier transforms
is replaced by a joint Fourier transform.
The generalized formula will hold as long as the Fourier
transform admits enough derivatives.
\end{rem}
\begin{proof}
We write $\mathcal{G}_{\varepsilon }(E,\tilde E) $
and $\mathbb{E}[|G_{\Lambda }(E+i\varepsilon )_{jk}|^2]$
as a supersymmetric integral (cf. Theorem \ref{theo:susyapproach})
\begin{align*}
&\mathcal{G}_{\varepsilon }(E,\tilde E)
= \mathbb{E}\left[ \int [\,\mathrm{d} \Phi^* \,\mathrm{d}\Phi]\,
\,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon + \Delta - \lambda V) \Phi}\right]\\
&\mathbb{E}[|G_{\Lambda }(E+i\varepsilon )_{jk}|^2] =\\
&\qquad \quad \mathbb{E}\left[ \int [\,\mathrm{d} \Phi^* \,\mathrm{d}\Phi]\,
\, [\,\mathrm{d} {\tilde \Phi}^* \,\mathrm{d} \tilde \Phi]\,
\,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon + \Delta - \lambda V) \Phi
-i {\tilde \Phi}^* (\mathbf{E} -i \varepsilon + \Delta-\lambda V)\tilde \Phi}
z_{j}\bar z_k \tilde{z}_{k} \overline{\tilde z}_j\right ]
\end{align*}
This step holds for any choice of $V\in \mathbb R^{\Lambda}.$
Note that we need two copies of SUSY variables to represent
$\mathbb{E}[|G_{\Lambda }(E+i\varepsilon )_{jk}|^2]$.
When $\,\mathrm{d} \mu_{j}$ admits two finite moments, we can move the
average inside. The result follows.
\end{proof}
The aim of this paper is to extend this representation
to probability distributions with less regularity.
To this purpose we introduce a supersymmetric version of polar coordinates,
which allows us to reexpress
$e^{i\lambda V_{j} \Phi^*_{j}\Phi_{j}}$ as $e^{i\lambda V_{j} x_{j}},$
where $x_{j}\in \mathbb R$ is a real variable.
As a result, the formula can be extended to any probability distribution
on $N=|\Lambda |$ real variables.
In contrast to the ordinary ones,
supersymmetric polar coordinates introduce correction terms
due to the boundary of the integration domain.
The simple formula above will then be replaced by a sum of integrals.
As a concrete example, we consider the so-called Lloyd model,
with $V$ defined as
$V_j\coloneqq \sum_{k\in \Lambda } T_{jk} W_k$,
where $\{W_{k} \}_{k\in \Lambda }$
is a family of i.i.d. random variables with Cauchy distribution
$\,\mathrm{d} \mu(x)= \pi^{-1}(1+x^2)^{-1} \,\mathrm{d} x.$
The standard (uncorrelated) Lloyd model corresponds to $T_{jk}=\delta_{jk}.$
In this case the variables
$\{V_{j} \}_{j\in \Lambda }$ are independent and Cauchy distributed.
Note that $ \,\mathrm{d} \mu (x)$ has no finite moments.
For this model, the averaged Green's function
(and hence the density of states) can be computed exactly whenever
$T_{jk}\geq 0$ $\forall j,k$ (non-negative correlation)
\cite{Lloyd1969,Simon1983}.
Using supersymmetric polar coordinates, we show here that
for the linearly correlated Lloyd model with non-negative correlations, Eqs.
\eqref{eq:susyav1} and \eqref{eq:susyav2} remain valid,
with an appropriate redefinition of $\hat{\mu} (b_{a}+n_{a}).$
In this case, one can easily recover the exact formula
for the averaged Green's function.
The formula remains valid also in the case of negative linear correlations,
at the price of additional correction terms due to boundary effects.
We expect the supersymmetric representation will help to study
problems not yet accessible via other tools, such as negative correlations
or the two-point function at weak disorder.
As a first test, we considered a simplified model
with small negative correlations localized on one site.
For this toymodel we use the supersymmetric representation to prove
that the density of states remains close to the
exact formula valid for non-negative correlations. Our result holds in any dimension and in arbitrary volume.
\paragraph{Overview of this article.}
In Section \ref{sec:main} we state the main results of the paper,
and give some ideas about the proofs.
More precisely, Section \ref{sec:main1} introduces
supersymmetric polar coordinates
(Theorem \ref{theo:polar}), with a general integrated function $f,$
not necessarily compactly supported.
Applications to $\mathcal{G}_{\varepsilon }(E,\tilde E) $ and
$\mathbb{E}[|G_{\Lambda }(E+i\varepsilon )_{jk}|^2]$
are given in Theorem \ref{theo:generalpolar}.
The detailed proofs of both theorems can be found in Section \ref{sec:polar}.
In Subsection \ref{sec:main2} we consider the Lloyd model
and give an application of the formula to a simple toymodel.
The corresponding proofs are in Section \ref{sec:application}.
\section{Main results} \label{sec:main}
\subsection{Supersymmetric polar coordinates}\label{sec:main1}
For an introduction to the supersymmetric formalism
see Appendix \ref{app:susy}.
Consider first $\mathcal{A} [\bar \chi,\chi]$
a Grassmann algebra with two generators.
The idea of supersymmetric polar coordinates is to transform between generators
$(\bar z, z,\bar\chi,\chi)$ of
$\mathcal{A}_{2,2}(\mathbb C)$ and $(r,\theta,\bar \rho,\rho)$ of
$\mathcal{A}_{2,2}(\mathbb R^+\times(0,2\pi))$
\footnote{cf. Definition \ref{def:generators}.
Note that $\bar \rho,\rho \in \mathcal{A}^1[\bar \chi,\chi]$.}
such that $\bar z z + \bar \chi \chi = r^2$.
A reasonable change is
\begin{align}\label{eq:polarcoordinates}
\Psi (r,\theta,\bar\rho,\rho)=
\begin{pmatrix}
z (r,\theta,\bar\rho,\rho)\\
\bar z (r,\theta,\bar\rho,\rho)\\
\chi(r,\theta,\bar\rho,\rho) \\
\bar\chi(r,\theta,\bar\rho,\rho)
\end{pmatrix}
\coloneqq
\begin{pmatrix}
\,\mathrm{e}^{i\theta } (r-\tfrac12 \bar\rho \rho )\\
\,\mathrm{e}^{-i\theta } (r-\tfrac12 \bar\rho \rho )\\
\sqrt{r} \rho \\
\sqrt{r} \bar\rho
\end{pmatrix}
\end{align}
Indeed, since $(\bar\rho\rho)^{2}=0$, we have
$\bar z z + \bar \chi \chi
= (r-\tfrac 12 \bar\rho \rho)^2 + r \bar\rho\rho
= r^{2}-r\bar\rho\rho+r\bar\rho\rho=r^2$.
Note that $0$ is a boundary point for polar coordinates,
since they map $\mathbb R^+\times (0,2\pi)$ into $\mathbb C\backslash \{0\} $.
For functions with compact support in $U = \mathbb C \backslash \{0\}$
a SUSY version of the standard coordinate change formula applies,
where the Jacobian is replaced by a Berezinian,
c.f. Theorem \ref{theo:changeofvariables}.
In contrast, functions with $f(0) \neq 0$ do not have compact support in the domain
$U= \mathbb C\backslash \{0\}$ and we collect additional boundary terms
as the following theorem shows.
\begin{theo}[Supersymmetric polar coordinates]\label{theo:polar}
Let $N\in\mathbb N$, $\mathcal{A}_{2N}$ the complex Grassmann algebra
generated by $\{\bar\chi_j ,\chi_j \}_{j=1}^N$ and
$\{\Phi_j^*,\Phi_j\}_{j=1}^N$ a set of supervectors
defined as in Theorem \ref{theo:generalrepresentation}.
Let $f\in \mathcal{A}_{2N,2N}(\mathbb C^N)$ be integrable,
i.e., all $f_I:\mathbb C^N\to \mathbb C$ are integrable.
Then
\begin{align}\label{eq:Ipolarsum}
I(f) = \int_{\mathbb C^N} [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \ f (\Phi^*,\Phi) =
\sum_{\alpha\in\{0,1\}^N} I_\alpha(f)
\end{align}
with multiindex $\alpha$ and
\begin{align}\label{eq:I2polar}
I_\alpha(f) = \pi^{-|1-\alpha|}
\int_{(\mathbb R^+\times(0,2\pi))^{1-\alpha}}
(\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1-\alpha}
\ f\circ \Psi_\alpha (r,\theta, \bar\rho, \rho),
\end{align}
where
$(\,\mathrm{d} r)^{1-\alpha}= \prod_{j:\alpha_j=0}\,\mathrm{d} r_j$ and $\Psi_\alpha$
is given by
$\Psi_\alpha : (r,\theta,\bar\rho,\rho) \mapsto (z,\bar z,\chi,\bar \chi)$ with
\begin{align*}
\begin{cases}
z_j(r_j,\theta_j,\bar\rho_j,\rho_j)
&=\delta_{\alpha_j0 }\, \,\mathrm{e}^{i\theta_j} (r_j-\tfrac12 \bar\rho_j\rho_j),\\
\bar z_j(r_j,\theta_j,\bar\rho_j,\rho_j)
&=\delta_{\alpha_j0 }\, \,\mathrm{e}^{-i\theta_j} (r_j-\tfrac12 \bar\rho_j\rho_j),\\
\chi_j (r_j,\theta_j,\bar\rho_j,\rho_j)
&= \delta_{\alpha_j0 }\, \sqrt{r_j} \rho_j,\\
\bar\chi_j (r_j,\theta_j,\bar\rho_j,\rho_j)
&= \delta_{\alpha_j0}\, \sqrt{r_j} \bar\rho_j.
\end{cases}
\end{align*}
\end{theo}
\begin{proof}
See Section \ref{sec:polar}.
\end{proof}
\begin{rem}
For $f$ compactly supported in $(\mathbb C\backslash \{0\})^{N}$
(this means in particular $f(0) = 0$),
we recover the result of Theorem \ref{theo:changeofvariables}.
Namely for $\alpha = 0$, we obtain the right-hand side
of Theorem \ref{theo:changeofvariables}
while all contributions from $\alpha \neq 0$ vanish.
\end{rem}
\begin{example}
To illustrate the idea behind the above result,
consider the following simple example.
Let $\varphi:\mathbb R\to\mathbb R$ be the smooth function,
vanishing for $x\geq\frac{1}{2}$, given by
\begin{align*}
\varphi(x) =
\begin{cases}
\,\mathrm{e}^{-(1-2x)^{-1}} & \text{if }x<\frac{1}{2}\\
0 & \text{otherwise}.
\end{cases}
\end{align*}
Note that $\varphi(0) = \,\mathrm{e}^{-1} \neq 0,$
hence $f(\bar z,z,\bar \chi,\chi) = \varphi(\bar z z + \bar \chi \chi)$
is a smooth function without compact support in $\mathbb C\backslash\{0\}$.
By a straightforward computation, we have
\begin{align*}
I(f) &= \int_{|z|<\frac{1}{\sqrt 2}}
[ \,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \
\,\mathrm{e}^{-(1-2\bar z z)^{-1}} (1-2 (1-2\bar z z)^{-2} \bar\chi \chi )\\
&= \frac{1}{2\pi}\int_0^{\frac{1}{\sqrt 2}}
\,\mathrm{d} r \int_0^{2\pi} \,\mathrm{d} \theta \
4r \,\mathrm{e}^{-(1-2r^2)^{-1}} (1-2r^2)^{-2} = e^{-1},
\end{align*}
where we expand the integrand in the Grassmann variables, integrate
them out, and then change to ordinary polar coordinates.
Applying formulas \eqref{eq:Ipolarsum} and \eqref{eq:I2polar},
we obtain directly
\begin{align*}
I (f)
= \pi^{-1}\int_{\mathbb R^+\times(0,2\pi)} \mkern-30mu
\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \
f\circ \Psi (r,\theta,\bar\rho,\rho)\ +\ f(0)
= e^{-1},
\end{align*}
where the first integral vanishes, since $f\circ \Psi$
is independent of $\bar\rho$ and $\rho$.
\end{example}
Now consider the generating function \eqref{eq:generatorG}.
In the case of an integrable density without further regularity
assumptions, we obtain the following result.
\begin{theo}\label{theo:generalpolar}
Let $\Lambda\subset \mathbb Z^{d}$ be a finite volume and
$H_{\Lambda } = -\Delta + \lambda V$
be the Schrödinger operator introduced in Eq. \eqref{eq:Hfinitevol},
where $\{V_{j} \}_{j\in \Lambda }$ is a family of real random variables
with integrable joint density $\mu$.
Then the generating function \eqref{eq:generatorG} can be written as
\begin{align}\label{eq:polarrepresentation}
\mathcal{G}_\varepsilon(E,\tilde E)
=\hspace{-0.4cm} \sum_{\alpha\in\{0,1\}^\Lambda} \pi^{-|1-\alpha|}
\int_{(\mathbb R^+\times(0,2\pi))^{1-\alpha}} \mkern-40mu
(\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1-\alpha}
\hat{ \mu}(\{\lambda r_j^2\}_{j\in\Lambda})\,
g\circ \Psi_\alpha (r,\theta,\bar \rho,\rho )
\end{align}
where $g (\Phi^*,\Phi)
= \,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon + \Delta)\Phi}$,
$\mathbf {E}
= {\rm diag\,} (\tilde E \mathds{1}_{|\Lambda|}, E\mathds{1}_{|\Lambda|})$
and $\hat{ \mu}(\{\lambda r_j^2\}_{j\in\Lambda})$
is the $|\Lambda|$-dimensional joint Fourier transform of $\mu$.
Similarly
\begin{align*}
&\mathbb{E}[|G_\Lambda(E+ i\varepsilon)_{jk}|^2]\\
&=
\sum_{\substack{\alpha\in\{0,1\}^\Lambda\\
\tilde\alpha\in\{0,1\}^\Lambda}}\pi^{-|1-\alpha| - |1-\tilde \alpha|}
\int_{\substack{(\mathbb R^+\times(0,2\pi))^{1-\alpha}\\
\times (\mathbb R^+\times(0,2\pi))^{1-\tilde\alpha}}}
(\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1-\alpha}
(\,\mathrm{d} \tilde r \,\mathrm{d} \tilde \theta
\,\mathrm{d} \bar{\tilde \rho} \,\mathrm{d} \tilde\rho)^{1-\tilde\alpha}\\
&\qquad
\hat{ \mu}(\{\lambda (r_j^2-\tilde r_j^2)\}_{j\in\Lambda})\,
g^+\circ \Psi_\alpha (r,\theta,\bar \rho,\rho ) \
g^-\circ \Psi_{\tilde\alpha}(\tilde r,\tilde \theta,\bar{\tilde \rho},\tilde \rho ),
\end{align*}
where $g^+ (\Phi^*,\Phi)
= \bar z_k z_j\,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon + \Delta)\Phi}$ and
$g^- (\tilde\Phi^*,\tilde\Phi) =
\overline{\tilde z_j} \tilde z_k
\,\mathrm{e}^{-i {\tilde\Phi}^* (\mathbf{E} - i \varepsilon + \Delta)\tilde\Phi}$.
\end{theo}
\begin{proof}[Idea of the proof]
Again we write $\mathcal{G}_\varepsilon(E,\tilde E) $ and
$|G_\Lambda(E+i\varepsilon)_{jk}|^2$ as a supersymmetric integral
(Theorem \ref{theo:susyapproach}).
Note that we need two copies of SUSY variables to represent
$|G_\Lambda(E+i\varepsilon)_{jk}|^2$.
Taking the average inside at this point is not justified, since the
distribution may have no finite moments.
Hence we first apply our polar coordinate formula, Theorem \ref{theo:polar}.
Since $r$ is now real, the expression
$\mathbb{E}[\,\mathrm{e}^{-i\lambda \sum_{j} V_j r_j^2}] $ is the
standard Fourier transform $\hat{ \mu}(\{\lambda r_j^2\}_{j\in\Lambda})$.
Details can be found in Section \ref{sec:polar}.
\end{proof}
\subsection{Applications to the Lloyd model}\label{sec:main2}
As a concrete example, we consider the Lloyd model with linearly correlated random potentials, i.e.
$V_j= \sum_{k} T_{jk} W_k$, where
$W_k \sim \text{Cauchy}(0,1)$ are i.i.d. random variables, $T_{jk}=T_{kj} \in \mathbb R$ and $\sum_{j} T_{jk}> 0$.
We discuss three cases:
\begin{enumerate}
\item the classical Lloyd model, where $T_{jk} = \delta_{jk}$, hence $V_j \sim \text{Cauchy}(0,1)$ are i.i.d.
\item the (positively) correlated Lloyd model, where $T_{jk} \geq 0$ with $\sum_{j}T_{jk} >0$.
\item a toymodel with a single negative correlation, i.e. $T_{jj} = 1$ and $T_{21} = T_{12} = -\delta^2 $ with $0<\delta < 1$ and $T_{jk} = 0$ otherwise. The indices $1$ and $2$ denote two fixed nearest-neighbour points $i_{1},i_{2}\in \Lambda$ with $|i_{1}-i_{2}|=1.$
\end{enumerate}
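For later use, note that in Case 3 the joint Fourier transform of $V$
can be computed explicitly from the Cauchy characteristic function
$\mathbb{E}[\,\mathrm{e}^{itW_{k}}]=\,\mathrm{e}^{-|t|}$
(Lemma \ref{lem:princvalue} below): for $t\in \mathbb R^{\Lambda }$,
\begin{align*}
\hat{\mu}(\{t_{j}\}_{j\in \Lambda })
= \exp\Big[-|t_{1}-\delta^{2}t_{2}|-|t_{2}-\delta^{2}t_{1}|
-\sum_{j\neq 1,2}|t_{j}|\Big].
\end{align*}
With $t_{j}=\lambda r_{j}^{2}$ the two absolute values change their sign
pattern precisely at $r_{1}=\delta r_{2}$ and $r_{1}=r_{2}/\delta$,
which produces the three smoothness regions appearing in the following
proposition.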
\begin{prop}\label{prop:Cauchyrepresentations} When $T_{jk} \geq 0 $ for all $j,k$ (Cases 1 and 2 above) we have
\begin{align*}
\mathcal{G}_\varepsilon (E,\tilde E)
= \int [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi ]\
g(\Phi^*,\Phi)
\,\mathrm{e}^{-\sum_{k}\lambda \sum_{j} T_{jk}\Phi_j^* \Phi_j},
\end{align*}
where $g(\Phi^*,\Phi) \coloneqq\,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon + \Delta)\Phi}$.
For the toymodel (Case 3 above) a similar formula holds with additional correction terms. Precisely
\begin{align*}
\begin{split}
\mathcal{G}_\varepsilon (E,\tilde E)
= \hspace{-0.2cm}&\sum_{ \beta \in \{++,+-,-+\} } \int_{{ \mathcal{I}}_\beta}[\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \ h(\Phi^*,\Phi) \,\mathrm{e}^{-\lambda \sum_{j=1}^2 T^\beta_j\Phi_j^* \Phi_j} + R(h)
\end{split}
\end{align*}
where $h (\Phi^*, \Phi) = g(\Phi^*,\Phi) \,\mathrm{e}^{-\lambda \sum_{j\neq 1,2}\Phi^*_j \Phi_j}$, we defined $T^{++} = (1-\delta^2)(1,1) $,
$T^{+-} = (1+\delta^2)(1,-1)$ and
$T^{-+} = (1+\delta^2)(-1,1)$ and \begin{align}\label{eq:Ibeta}
\begin{split}
\mathcal{I}_{++} &=
\{z\in \mathbb C^N : \delta |z_2| < |z_1| < |z_2|/\delta\} ,\\
\mathcal{I}_{+-} &=
\{z\in \mathbb C^N : |z_1| > |z_2|/\delta\} ,\\
\mathcal{I}_{-+} &=
\{z\in \mathbb C^N : |z_1| < \delta |z_2|\} .
\end{split}
\end{align}
Moreover, the additional boundary term is given by
\begin{align}\label{eq:R(h)}
\begin{split}
R(h)= & -\frac{1}{\pi^2}\int_{\mathbb R^+ \times (0,2\pi)^2} \,\mathrm{d} r_2 \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2 [\,\mathrm{d} \hat{\Phi}^* \,\mathrm{d} \hat \Phi] \ h \circ \Psi_{12}(r_2,\theta_1,\theta_2, \hat\Phi^*,\hat\Phi) \\
&\times \lambda r_2 \delta^2 \left[\,\mathrm{e}^{-\lambda (1-\delta^4)r_2^2} +\delta^2 \,\mathrm{e}^{-\lambda (1-\delta^4)\delta^2 r_2^2} \right],
\end{split}
\end{align}
where $\hat\Phi = (\Phi_j)_{j\in \Lambda \backslash \{1,2\}}$ and
\begin{align*}
\Psi_{12} (r_2,\theta_1,\theta_2)
= (\Phi_1, \Phi_2)
=
\begin{pmatrix}
\,\mathrm{e}^{i \theta_1} \delta r_2 & \,\mathrm{e}^{i \theta_2} r_2 \\ 0& 0\\
\end{pmatrix}
\end{align*}
The same formulas hold for $\mathbb{E}[{\rm Tr\,} G_\Lambda(E+i \varepsilon)]$ with $g$ replaced by $g_1 (\Phi^*,\Phi) =\sum_{j} |z_j|^2 \,\mathrm{e}^{i \Phi^* ( E + i \varepsilon + \Delta)\Phi}.$
\end{prop}
\begin{proof}[Idea of the proof]
We use the representation from Theorem \ref{theo:generalpolar} and insert the Fourier transform of the given
density. For non-negative correlations we can then undo the coordinate change. When negative correlations are present this operation generates additional correction terms. For details see Section~\ref{sec:application}.
\end{proof}
\noindent In the case of non-negative correlations we recover exact formulas, as follows.
\begin{theo}\label{theo:mainresult}
Let $T_{jk}=\delta_{jk}$ (classical Lloyd model). We have
\begin{align}\label{eq:mainresult}
\lim_{\varepsilon\to 0}\mathcal{G}_\varepsilon(E,\tilde E) = \frac{\det ((E+ i\lambda)\mathds{1}_\Lambda - H_0)}{\det ((\tilde E+ i\lambda)\mathds{1}_\Lambda - H_0)},
\end{align}
where $H_0 = -\Delta$. In particular
\begin{align}\label{eq:mainresult2}
\lim_{\varepsilon\to 0}\mathbb{E}[ {\rm Tr\,} G_\Lambda(E+ i\varepsilon)] = {\rm Tr\,} ((E+ i\lambda)\mathds{1}_\Lambda - H_0)^{-1}.
\end{align}
For $T_{jk}\geq 0$ (non-negative correlations) Eqs. \eqref{eq:mainresult} and \eqref{eq:mainresult2} still hold, with
$\lambda\mathds{1}_\Lambda$ replaced
by the diagonal matrix $\lambda \hat T$, where $\hat T_{ij} = \delta_{ij} \sum_{k} T_{jk}$.
In particular, both the classical and the positively correlated Lloyd model have the same (averaged) density of states as the
free Laplacian $H_0=-\Delta$ with imaginary mass $\lambda$ and $\lambda \hat T$, respectively.
\end{theo}
\begin{proof}[Idea of the proof]
Follows from Proposition \ref{prop:Cauchyrepresentations}. For details see Section \ref{sec:application}.
\end{proof}
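To illustrate the result, assume periodic boundary conditions, so that
$H_{0}=-\Delta$ is diagonalized by plane waves with eigenvalues
$\varepsilon (p)=2\sum_{i=1}^{d}(1-\cos p_{i})$,
$p\in \frac{2\pi }{L}\{0,\dots ,L-1\}^{d}$, $|\Lambda |=L^{d}$.
Then Eq. \eqref{eq:mainresult2} gives for the classical Lloyd model
\begin{align*}
\bar\rho_{\Lambda }(E)
= \frac{1}{\pi |\Lambda |}\sum_{p}
\frac{\lambda }{(E-\varepsilon (p))^{2}+\lambda^{2}},
\end{align*}
i.e. the density of states of the free Laplacian smeared by a Lorentzian
of width $\lambda$; in particular it is real analytic in $E$ for every
$\lambda >0$.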
Note that the results on the density of states above can also be derived by other methods (cf. \cite{Lloyd1969} and \cite{Simon1983}).
In the case of a localized negative correlation (the toymodel in Case 3 above) we obtain the following result.
\begin{theo}[Toymodel]
\label{theo:toymodel}
Let $T_{jk}$ be as in Case 3 above, $\lambda >0$ and
$0<\delta \ll (1+\lambda^{-1})^{-1}.$ Then
\begin{align*}
&\lim_{\varepsilon\to 0}\mathbb{E}[ {\rm Tr\,} G_\Lambda(E+ i\varepsilon)] =\\
&\mkern100mu {\rm Tr\,} (E\mathds{1}_\Lambda +i \lambda \hat T- H_0)^{-1} \left [1+
\mathcal{O}\Big ((\delta(1+\lambda^{-1}))^{2}\Big )+\mathcal{O} (|\Lambda |^{-1})\right ].\nonumber
\end{align*}
\end{theo}
\begin{proof}[Idea of the proof]
Follows from Proposition \ref{prop:Cauchyrepresentations} by integrating first over the uncorrelated variables in $\Lambda$ and estimating the remaining integral. For details see Section \ref{sec:application}.
\end{proof}
\section{Supersymmetric polar coordinates}\label{sec:polar}
\subsection{Proof of Theorem \ref{theo:polar}}
\begin{proof}[Proof of Theorem \ref{theo:polar}]
The idea is to apply the coordinate change $\Psi$ from Eq. \eqref{eq:polarcoordinates} for each $j\in\{1,\dots,N\}$.
To simplify the procedure, we decompose it as $\Psi_1\circ\Psi_2\circ\Psi_3$,
where $\Psi_1$ is a change from ordinary polar coordinates into complex variables,
$\Psi_2$ rescales the odd variables
and $\Psi_3$ shifts the radii by a nilpotent even element.
Note that only the last step mixes ordinary and Grassmann variables and produces boundary terms.
We first change the complex variables $z_j,\bar z_j$ for all $j$ into polar coordinates
\begin{align*}
\psi_1:(0,\infty) \times (0,2\pi) &\to \mathbb C\backslash \{0\}\\
(r,\theta)& \mapsto z(r,\theta), \quad z_j(r_j,\theta_j) = r_j \,\mathrm{e}^{i\theta_j} \quad\forall j.
\end{align*}
The Jacobian is $\prod_{j=1}^N 2r_j$ and by an ordinary change of variables
\begin{align*}
I(f)= \frac{1}{(2\pi)^N} \int_{(\mathbb R^+\times(0,2\pi))^N} \,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar\chi \,\mathrm{d} \chi
\prod_{j=1}^N 2r_j \ f \circ \Psi_1 (r,\theta,\bar \chi, \chi),
\end{align*}
where $\Psi_1 = \psi_1 \times \mathds{1}$.
Note that no boundary terms arise.
Now we rescale the odd variables by
\begin{align*}
\psi_2(\bar\rho,\rho) \coloneqq(\bar\chi(\bar\rho,\rho),\chi(\bar\rho,\rho)) \quad
\begin{cases} \bar\chi_j (\bar\rho_j,\rho_j) \coloneqq \sqrt{r_j} \bar\rho_j \\
\chi_j (\bar\rho_j,\rho_j)\coloneqq \sqrt{r_j} \rho_j \end{cases}\quad \forall j.
\end{align*}
There are again no boundary terms since we have a purely odd transformation.
The Berezinian is given by $\prod_{j=1}^N r_j^{-1}$. Since $\psi_2$ is a linear transformation, this can also be computed directly.
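The direct computation is instructive: expanding $f$ in the pair
$(\bar\chi_{j},\chi_{j})$ (the coefficients may still depend on all
remaining variables), only the top coefficient contributes to the
Berezin integral in these variables, and under $\psi_{2}$ it picks up
a factor $r_{j}$,
\begin{align*}
f = f_{0}+f_{\bar\chi}\,\bar\chi_{j}+f_{\chi}\,\chi_{j}+f_{2}\,\bar\chi_{j}\chi_{j}
\quad\Longrightarrow\quad
f\circ \psi_{2}
= f_{0}+\sqrt{r_{j}}f_{\bar\chi}\,\bar\rho_{j}+\sqrt{r_{j}}f_{\chi}\,\rho_{j}
+ r_{j}\,f_{2}\,\bar\rho_{j}\rho_{j},
\end{align*}
so the integral is preserved precisely when the measure carries the
compensating factor $r_{j}^{-1}$.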
This cancels with the Jacobian from $\Psi_1$ up to a constant.
Hence
\begin{align*}
I(f) = \frac{1}{\pi^N}\int_{(\mathbb R^+\times(0,2\pi))^N} \,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar\rho \,\mathrm{d} \rho
\ f \circ \Psi_1\circ \Psi_2(r,\theta, \bar \rho,\rho),
\end{align*}
where $\Psi_2 = \mathds{1} \times \psi_2$.
After these transformations, we have $\bar z_j z_j+\bar \chi_j \chi_j = r_j^2+r_j \bar\rho_j\rho_j = (r_j + \tfrac12 \bar\rho_j\rho_j)^2$.
We set
$\Psi_3(r,\theta,\bar\rho,\rho) =(r-\frac12 \bar \rho \rho,\theta,\bar\rho,\rho) $.
Hence $\Psi = \Psi_1 \circ \Psi_2 \circ \Psi_3 $ is the $\Psi$ from Eq. \eqref{eq:polarcoordinates}:
\begin{align*}
z_j &&\overset{\Psi_1}{\mapsto}&& r_j e^{i\theta_j}
&&\overset{\Psi_2}{\mapsto}&& r_j e^{i\theta_j}
&&\overset{\Psi_3}{\mapsto}&& \left(r_j-\tfrac 12 \bar \rho_j \rho_j\right) e^{i\theta_j}, \\
\chi_j &&\overset{\Psi_1}{\mapsto}&&\chi_j
&&\overset{\Psi_2}{\mapsto}&& \sqrt{r_j} \rho_j
&&\overset{\Psi_3}{\mapsto}&& \sqrt{r_j-\tfrac 12 \bar \rho_j \rho_j}\ \rho_j = \sqrt{r_j} \rho_j .
\end{align*}
We expand
$\tilde f = f\circ\Psi_1\circ\Psi_2\circ\Psi_3$ as follows
\begin{align}
\label{eq:expansionpsi3}
f\circ\Psi_1\circ\Psi_2(r,\theta,\bar\rho,\rho)
= \tilde f (r+\tfrac{\bar\rho\rho}{2}, \theta,\bar\rho,\rho)
= \mkern-10mu\sum_{\alpha\in\{0,1\}^N} \mkern-10mu \left(\tfrac{\bar\rho\rho}{2}\right)^\alpha \partial_r^\alpha \tilde f(r,\theta, \bar\rho,\rho).
\end{align}
Note that we can set $\rho_j= 0$ and $\bar \rho_{j}=0$ for $\alpha_j = 1$ in $\partial_r^\alpha \tilde f$, since these terms are multiplied by $(\bar\rho\rho)^\alpha$ and all odd variables square to zero. We use the short-hand notation $\partial_r^\alpha \tilde f(r,\theta, \bar\rho,\rho)|_{ \bar\rho^\alpha = \rho^\alpha = 0}$.
Inserting this into the integral $I$ and applying integration by parts in $r^\alpha$, we obtain
\begin{align}\label{eq:integrationbyparts}
I(f) =& \tfrac{1}{\pi^N}\int_{(\mathbb R^+\times(0,2\pi))^N} \,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar\rho \,\mathrm{d} \rho
\sum_{\alpha\in\{0,1\}^N}2^{-|\alpha|}\left(\bar\rho\rho\right)^\alpha \partial_r^\alpha \tilde f(r,\theta, \bar\rho,\rho)\\ \nonumber
=& \sum_{\alpha\in\{0,1\}^N} \tfrac{1}{2^{|\alpha|}\pi^N}
\int_{(\mathbb R^+)^{1-\alpha}\times(0,2\pi)^N} \mkern-40mu (\,\mathrm{d} r)^{1-\alpha} \,\mathrm{d} \theta (\,\mathrm{d} \bar\rho \,\mathrm{d} \rho)^{1-\alpha}\tilde f(r,\theta, \bar\rho,\rho)|_{r^\alpha = \bar\rho^\alpha = \rho^\alpha = 0},
\end{align}
where we applied $\int_{(\mathbb R^{+})^{\alpha }} (\,\mathrm{d} r)^{\alpha } \partial_{r}^{\alpha }\tilde{f}
= (-1)^{|\alpha| } \tilde{f}_{|r^{\alpha }=0}$ and
$\int(\,\mathrm{d}\bar\rho\,\mathrm{d}\rho)^\alpha (\bar\rho\rho)^\alpha = (-1)^{|\alpha|}.$
Note that $\tilde f (r,\theta, \bar\rho,\rho)|_{r^\alpha = \bar\rho^\alpha = \rho^\alpha = 0} = f\circ \Psi_\alpha$ is independent of $\theta_j$ for $\alpha_j =1 $ and we can integrate $\int (\,\mathrm{d} \theta)^\alpha = (2\pi)^{|\alpha|}$. This proves the theorem.
\end{proof}
\subsection{Proof of Theorem \ref{theo:generalpolar}}
\begin{proof}[Proof of Theorem \ref{theo:generalpolar}]
Applying Theorem \ref{theo:susyapproach} to $\mathcal{G}_\varepsilon(E,\tilde E)$ yields
\begin{align*}
\mathcal{G}_\varepsilon(E,\tilde E) =
\mathbb{E}\left[
\int [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi ]\,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon- \lambda V + \Delta)\Phi} \right].
\end{align*}
Note that we cannot interchange the average with the integral, since the average of the supersymmetric expression $\,\mathrm{e}^{i\lambda \Phi^* V \Phi}$, defined via Eq. \eqref{eq:fouriera}, requires finite moments. But after applying Theorem \ref{theo:polar} we get
\begin{align*}
\mathcal{G}_\varepsilon(E,\tilde E) = \hspace{-0.4cm}
\sum_{\alpha\in\{0,1\}^\Lambda} \hspace{-0.2cm}\pi^{-|1-\alpha|}
\mathbb{E}\left[\int_{(\mathbb R^+ \times(0,2\pi))^{1-\alpha}} \hspace{-1.4cm}
(\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1-\alpha}
\,\mathrm{e}^{-i\lambda \sum_{j}V_j r_j^2}
g\circ \Psi_\alpha (r,\theta,\bar \rho,\rho )
\right],
\end{align*}
where $g(\Phi^*, \Phi) = \,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon+ \Delta)\Phi} $.
Now we can take the average inside the integral.
The same arguments hold for $ \mathbb{E}[|G_\Lambda(E+ i\varepsilon)_{jk}|^2]$.
\end{proof}
\section{Applications to the Lloyd model} \label{sec:application}
\subsection{Proof of Proposition \ref{prop:Cauchyrepresentations}}
We will need the following well-known result for the proof of the proposition.
\begin{lemma}\label{lem:princvalue}
Let $A\sim\text{Cauchy}(0,1)$ and $t\in\mathbb R$.
Then $\mathbb{E}[\,\mathrm{e}^{itA}] = \,\mathrm{e}^{-|t|}$.
\end{lemma}
\begin{proof}
Let $t\geq 0$. We take the principal value and apply the residue theorem.
\begin{align*}
\lim_{R\to\infty} \int_{[-R,R]}\frac{\,\mathrm{e}^{it x}}{\pi(1+x^2)}\,\mathrm{d} x
=\lim_{R\to\infty}\left[ 2\pi i {\rm Res \,}_{i}\frac{\,\mathrm{e}^{it x}}{\pi(1+x^2)} -
\int_{\gamma}\frac{\,\mathrm{e}^{it x}\,\mathrm{d} x}{\pi(1+x^2)}\right]
= \,\mathrm{e}^{-t},
\end{align*}
where $\gamma(s) = R \,\mathrm{e}^{is}$ for $s\in[0,\pi]$. The case $t <0$ follows analogously by
closing the contour from below.
\end{proof}
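By independence of the $W_{k}$, the lemma immediately yields the joint
characteristic function used below:
\begin{align*}
\mathbb{E}\big[\,\mathrm{e}^{i\sum_{k\in \Lambda }s_{k}W_{k}}\big]
= \prod_{k\in \Lambda }\mathbb{E}[\,\mathrm{e}^{is_{k}W_{k}}]
= \,\mathrm{e}^{-\sum_{k\in \Lambda }|s_{k}|},
\qquad s\in \mathbb R^{\Lambda }.
\end{align*}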
\begin{proof}[Proof of Proposition \ref{prop:Cauchyrepresentations}]
Starting from the representation \eqref{eq:polarrepresentation} of Theorem \ref{theo:generalpolar}, we use Lemma \ref{lem:princvalue} to determine the Fourier transform
\begin{align*}
\hat{ \mu}(\{\lambda r_j^2\}_{j\in\Lambda}) = \mathbb{E}[\,\mathrm{e}^{-i \lambda \sum_{j, k} T_{jk} W_k r_j^2}]
= \,\mathrm{e}^{-\sum_{k} \lambda | \sum_{j} T_{jk} r_j^2|}.
\end{align*}
As long as $r_j\in \mathbb R$, this is well-defined and the integral remains finite for arbitrary correlation $T$. When $T_{jk} \geq 0$ for all $j,k$, we can drop the absolute value and obtain
\begin{align*}
\hat{ \mu}(\{\lambda r_j^2\}_{j\in\Lambda}) = \,\mathrm{e}^{-\sum_{k} \lambda \sum_{j} T_{jk} r_j^2}
= \tilde \mu \circ \Psi_\alpha,
\end{align*}
where $\tilde \mu(\Phi^*, \Phi) = \exp[-\sum_{k} \lambda \sum_{j} T_{jk} \Phi^*_j \Phi_j]$ is a smooth, integrable function in
$\mathcal{A}_{2N,2N}(\mathbb C^N)$, which can be transformed back to ordinary supersymmetric coordinates by Theorem \ref{theo:polar}.
In the case of the toymodel, the resulting function is continuous but only piecewise smooth. We partition the integration domain into regions where it is smooth.
In polar coordinates the regions \eqref{eq:Ibeta} become
\begin{align*}
\mathcal{I}_{++} &=\{0< \delta r_2 < r_1 < \tfrac{r_2}{\delta}\} \times (0,\infty)^{\Lambda\backslash\{1,2\}} &=&
\{r\in (0,\infty)^\Lambda : \delta r_2 < r_1 < \tfrac{r_2}{\delta}\}, \\
\mathcal{I}_{+-} &= \{0< \tfrac{r_2}{\delta}< r_1\} \times (0,\infty)^{\Lambda\backslash\{1,2\}} &=&
\{r\in (0,\infty)^\Lambda : r_1 > \tfrac{r_2}{\delta}\}, \\
\mathcal{I}_{-+} &= \{0< r_1 < \delta r_2\} \times (0,\infty)^{\Lambda\backslash\{1,2\}} &=&
\{r\in (0,\infty)^\Lambda : r_1 < \delta r_2\}.
\end{align*}
Hence $(0,\infty)^\Lambda$ can be written as the disjoint union $ \mathcal{I}_{++} \cup \mathcal{I}_{+-} \cup \mathcal{I}_{-+} \cup \mathcal{N}$, where $\mathcal{N}$ is a set of measure $0$.
Using $T^\beta$ defined above, we can write
\begin{align*}
I_1 = &\sum_{\alpha\in\{0,1\}^\Lambda} \pi^{-|1-\alpha|} \int_{(\mathbb R^+\times(0,2\pi))^{1-\alpha}} \mkern-45mu
(\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1-\alpha}
\hat{ \mu}(\{\lambda r_j^2\}_{j\in\Lambda})|_{r^\alpha = 0} \
g\circ \Psi_\alpha (r,\theta,\bar \rho,\rho)
\\=&
\sum_{\beta}
\sum_{\alpha\in\{0,1\}^\Lambda} \pi^{-|1-\alpha|} \int_{(\mathbb R^+\times(0,2\pi))^{1-\alpha}} \mkern-40mu
(\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1-\alpha} \chi(\mathcal{I}_\beta)|_{r^\alpha = 0} \\
&\mkern150mu\times
\,\mathrm{e}^{-\lambda (\delta_{\alpha_1 0} r_1^2 T^\beta_1+ \delta_{\alpha_2 0} r_2^2 T^\beta_2)}
h\circ \Psi_\alpha (r,\theta,\bar\rho,\rho),
\end{align*}
where $\beta \in \{++,+-,-+\}$ and $h (\Phi^*, \Phi) = g (\Phi^*,\Phi) \,\mathrm{e}^{-\lambda \sum_{j\neq 1,2} \Phi_j^* \Phi_j}$ is independent of $\beta$. Finally, $\chi(\mathcal{I}_\beta)$ is the characteristic function of $\mathcal{I}_\beta$ and $r^\alpha = 0$ means $r_j = 0$ for $\alpha_j = 1$.
To transform back we need to repeat the proof of Theorem \ref{theo:polar} on the different domains. Consider the integral
\begin{align*}
I_2 = \sum_{\beta } \int_{{ \mathcal{I}}_\beta}[\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \ h(\Phi^*,\Phi) \,\mathrm{e}^{-\lambda \sum_{j=1}^2 T^\beta_j\Phi_j^* \Phi_j} ,
\end{align*}
where ${\mathcal{I}}_\beta $ are the corresponding subsets of $\mathbb C^\Lambda$ (cf. Eq. \eqref{eq:Ibeta}).
We will show that inserting polar coordinates in $I_2$, we recover $I_1$ plus some correction terms.
In each region, the integrated function is smooth and we can apply the first two transformations $\Psi_1$ and $\Psi_2$ from the proof of Theorem \ref{theo:polar} and obtain
\begin{align*}
I_2 = \frac{1}{\pi^{|\Lambda|}}\sum_{\beta}\int_{\mathcal{I}_\beta\times(0,2\pi)^{|\Lambda|}} \mkern-20mu \,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ \,\mathrm{e}^{-\lambda \sum_{j=1}^2 T^\beta_j (r_j+\frac 12 \bar \rho_j \rho_j)^2} h\circ \Psi_1 \circ \Psi_2(r,\theta,\bar\rho,\rho).
\end{align*}
Replacing as in Eq. \eqref{eq:expansionpsi3} the integrand by the Taylor-expansion of $\tilde f_\beta =
\,\mathrm{e}^{-\lambda \sum_{j=1}^2 (T_\beta)_j r_j^2} \tilde h$, with $\tilde h =
h \circ \Psi _1 \circ \Psi_2 \circ \Psi_3$, we obtain
\begin{align*}
I_2 &= \sum_{\alpha \in\{0,1\}^\Lambda} I_\alpha , \text{ where} \\
I_\alpha &= \frac{1}{\pi^{|\Lambda|}2^{|\alpha|}}\sum_{\beta} \int_{\mathcal{I}_\beta\times(0,2\pi)^{|\Lambda|}} \,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ (\bar \rho \rho)^\alpha\ \partial_r^\alpha \tilde f_\beta(r,\theta,\bar \rho,\rho).
\end{align*}
Applying now integration by parts as in Eq. \eqref{eq:integrationbyparts} generates additional boundary terms. More precisely, when $\alpha_1 = \alpha_2 = 0$, no derivatives in $r_1$ and $r_2$ appear and $\mathcal{I}_\beta = \tilde {\mathcal{I}}_\beta\times (0,\infty)^{\Lambda \backslash\{1,2\}}$, where $\tilde {\mathcal{I}}_\beta\subset (0,\infty)^{2}$ denotes the two-dimensional region in the coordinates $r_{1},r_{2}$. Hence no additional terms arise and
\begin{align*}
I_\alpha &= \pi^{-|1-\alpha|}\sum_{\beta}
\int_{(\mathbb R^+\times (0,2\pi))^{1-\alpha} } \mkern-45mu (\,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1-\alpha}
\,\mathrm{e}^{-\lambda \sum_{j=1}^2 (T_\beta)_j r_j^2}
\chi(\tilde {\mathcal{I}}_\beta) \ h\circ \Psi_\alpha(r,\theta,\bar \rho,\rho).
\end{align*}
For $\alpha_1 =1$ and $\alpha_2 = 0$ (or vice versa), additional boundary terms do appear but cancel since the function is continuous:
\begin{align*}
I_\alpha =&
\frac{1}{\pi^{|\Lambda|}2^{|\alpha|}}\sum_{\beta} \int_{\mathcal{I}_\beta\times(0,2\pi)^{|\Lambda|}} \mkern-20mu \,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ (\bar \rho \rho)^\alpha \ \partial_{r_1} \left[h^{(\alpha)}(r,\theta,\bar \rho,\rho) \,\mathrm{e}^{-\lambda \sum_{j=1}^2 T^\beta_j r_j^2}\right] \\
=& \frac{1}{\pi^{|\Lambda|}2^{|\alpha|}} \int_{(\mathbb R^+)^{|\Lambda|-1}\times(0,2\pi)^{|\Lambda|}} \mkern-20mu \,\mathrm{d} \hat r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ (\bar \rho \rho)^\alpha
\left[
h^{(\alpha)}\,\mathrm{e}^{-\lambda \sum_{j=1}^2 T^{-+}_j r_j^2}\right]_{r_1= 0}^{r_1 = \delta r_2}
\\ &
+ \left[
h^{(\alpha)}\,\mathrm{e}^{-\lambda \sum_{j=1}^2 T^{++}_j r_j^2}\right]_{r_1= \delta r_2}^{r_1 = r_2/ \delta}
+\left[
h^{(\alpha)}\,\mathrm{e}^{-\lambda \sum_{j=1}^2 T^{+-}_j r_j^2}\right]_{r_1= r_2/\delta}^{r_1 = \infty}
\\
=& -\frac{1}{\pi^{|\Lambda|}2^{|\alpha|}} \int_{(\mathbb R^+)^{|\Lambda|-1}\times(0,2\pi)^{|\Lambda|}} \mkern-20mu \,\mathrm{d} \hat r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ (\bar \rho \rho)^\alpha
h^{(\alpha)}|_{r_1 = 0}\,\mathrm{e}^{-\lambda T^{-+}_2 r_2^2},
\end{align*}
where $\,\mathrm{d} \hat r = \prod_{j\neq 1} \,\mathrm{d} r_j$, $h^{(\alpha)} = \partial_{r}^{\hat \alpha}\tilde h $, and $\hat\alpha$ is given by $\hat \alpha_j = \alpha_j$ for all $j\neq 1,2$ and $\hat\alpha_1 = \hat{\alpha}_2= 0$.
Note that in the second step all terms except the first one cancel because of continuity: $\sum_{j=1}^2 T^{-+}_j r_j^2|_{r_1 = \delta r_2} = \sum_{j=1}^2 T^{++}_j r_j^2|_{r_1 = \delta r_2}$ and $\sum_{j=1}^2 T^{++}_j r_j^2|_{r_1 = r_2/\delta} = \sum_{j=1}^2 T^{+-}_j r_j^2|_{r_1 = r_2/\delta}$.
We can apply now integration by parts for $r^{\hat\alpha}$ as before.
Note that for $r_1=0$ the sets $\mathcal{I}_{++} = \mathcal{I}_{+-} = \emptyset$ and we obtain only contributions from the set $\mathcal{I}_{-+}= \{r_2 \in \mathbb R^+\}$, which is the same as writing $\sum_{\beta } \chi(\mathcal{I}_\beta)|_{r_1 = 0}$.
\noindent
When $\alpha_1 = \alpha_2 = 1$, we obtain additional boundary terms which do not cancel. Applying integration by parts in $r_1$, we need to evaluate
\begin{align*}
\partial_{r_2} [h^{(\alpha)} \,\mathrm{e}^{-\lambda \sum_{j=1}^2 (T_\beta)_j r_j^2}] = (\partial_{r_2} h^{(\alpha)}-2 \lambda T^\beta_2 r_2 h^{(\alpha)})\,\mathrm{e}^{-\lambda \sum_{j=1}^2 (T_\beta)_j r_j^2}
\end{align*}
on the different boundaries. The contributions of $\partial_{r_2} h^{(\alpha)} \,\mathrm{e}^{-\lambda\sum_{j=1}^2 (T_\beta)_j r_j^2}$ cancel as above by continuity except for the term at $r_1=0$. The contributions from the second summand remain:
\begin{align*}
I_\alpha =&
\frac{1}{\pi^{|\Lambda|}2^{|\alpha|}}\sum_{\beta} \int_{\mathcal{I}_\beta\times(0,2\pi)^{|\Lambda|}} \,\mathrm{d} r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ (\bar \rho \rho)^\alpha \partial_{r_1}\partial_{r_2} \left[h^{(\alpha)} \,\mathrm{e}^{-\lambda \sum_{j=1}^2 (T_\beta)_j r_j^2}\right] \\
=& \frac{1}{\pi^{|\Lambda|}2^{|\alpha|}} \int_{(\mathbb R^+)^{|\Lambda|-1}\times(0,2\pi)^{|\Lambda|}} \mkern-40mu\,\mathrm{d} \hat r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ (\bar \rho \rho)^\alpha \partial_{r_2}\left[
- h^{(\alpha)}\,\mathrm{e}^{-\lambda T^{-+}_2 r_2^2}
\right ]_{r_1= 0} + R_\alpha(h),
\end{align*}
where $R_\alpha (h)$ is the remaining part defined below in Eq. \eqref{eq:Ralpha(h)}. In the first integral, we can apply integration by parts in $r_2$ and $r^{\hat{\alpha}}$ as before and the result is independent of $\beta$. It remains to consider
\begin{align}\label{eq:Ralpha(h)}
&R_\alpha(h) = \frac{1}{\pi^{|\Lambda|}2^{|\alpha|}} \int_{(\mathbb R^+)^{|\Lambda|-1}\times(0,2\pi)^{|\Lambda|}} \mkern-20mu\,\mathrm{d} \hat r \,\mathrm{d} \theta \,\mathrm{d} \bar \rho \,\mathrm{d} \rho \ (\bar \rho \rho)^\alpha \ 2 \lambda r_2
\\
&\times\!\left [ h^{(\alpha)}|_{r_1 = \delta r_2} (T^{++}_2\!\! -T^{-+}_2) \,\mathrm{e}^{-\lambda (1-\delta^4) r_2^2} + h^{(\alpha)}|_{r_1 = \frac{r_2}{\delta} } (T^{+-}_2\!\! -T^{++}_2) \,\mathrm{e}^{-\lambda (1-\delta^4) \delta^{-2} r_2^2}
\right]. \nonumber
\end{align}
Here, we can integrate over $r^{\hat \alpha}$, but the integral over $r_2$ remains:
\begin{align*}
&R_\alpha(h) = - \pi^{-|1-\hat \alpha|}
\int_{(\mathbb R^+\times (0,2\pi))^{1-\hat\alpha}\times \mathbb R^+ \times (0,2\pi)^2 } \mkern-50mu (\,\mathrm{d} r \,\mathrm{d} \theta )^{1- \hat \alpha }(\,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1- \alpha} \,\mathrm{d} r_2 \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2 \ \lambda r_2 \\
&\times \left[
\tilde h|_{r^{\hat \alpha} = \bar \rho^\alpha= \rho^\alpha= 0, r_1 = \delta r_2 } \delta^2 \,\mathrm{e}^{-\lambda (1-\delta^4)r_2^2} + \ \tilde h|_{r^{\hat \alpha} = \bar \rho^\alpha = \rho ^ \alpha = 0, r_1 = r_2/\delta } \,\mathrm{e}^{-\lambda (1-\delta^4)\delta^{-2} r_2^2} \right].
\end{align*}
By rescaling the second term $r_2 \mapsto \delta^2 r_2$, we obtain
\begin{align*}
I_2- I_1 =& - \sum_{\hat\alpha}
\pi^{-|1-\hat \alpha|}
\int_{(\mathbb R^+\times (0,2\pi))^{1-\hat\alpha}\times \mathbb R^+ \times (0,2\pi)^2 } \mkern-50mu (\,\mathrm{d} r \,\mathrm{d} \theta )^{1- \hat \alpha }(\,\mathrm{d} \bar \rho \,\mathrm{d} \rho)^{1- \alpha} \,\mathrm{d} r_2 \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2 \\
&\times
\lambda r_2 \delta^2 \ \tilde h|_{r^{\hat \alpha}= \bar \rho^\alpha = \rho^\alpha = 0, r_1 = \delta r_2 } \left[\,\mathrm{e}^{-\lambda (1-\delta^4)r_2^2} +\delta^2 \,\mathrm{e}^{-\lambda (1-\delta^4)\delta^2 r_2^2} \right].
\end{align*}
Note that we can transform the variables of $\Lambda \backslash \{1,2\}$ back to flat coordinates by Theorem \ref{theo:polar} and obtain $ I_2 - I_1 = R(h)$, which finishes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{theo:mainresult}}
\begin{proof}[Proof of Theorem \ref{theo:mainresult}]
We start from the result of Proposition \ref{prop:Cauchyrepresentations}.
In both models, the classical and the positively correlated one, we have $T_{jk}\geq 0$ and $\sum_k T_{jk} >0$, hence the body of $\lambda \sum_{j} T_{jk} \Phi^*_j \Phi_j$ is strictly positive except on a set of measure $0$. We end up with
\begin{align*}
\mathcal{G}_\varepsilon(E,\tilde E) =
\int [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi ]\,\mathrm{e}^{i \Phi^* (\mathbf{E} + i \varepsilon + i \lambda \hat T+ \Delta)\Phi} ,
\end{align*}
where we can take the limit $\varepsilon \to 0$ and go back to the original representation.
\end{proof}
\subsection{Proof of Theorem \ref{theo:toymodel}}
\begin{proof}[Proof of Theorem \ref{theo:toymodel}]
Using Eq. \eqref{eq:gree-repr} and the result of Proposition \ref{prop:Cauchyrepresentations}, we obtain
\begin{align*}
\mathbb{E}[{\rm Tr\,} G_{\Lambda }(E+i\varepsilon )]&= \mathbb{E}\Big[ \int [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \,\mathrm{e}^{i\Phi^* (E+i\varepsilon-\lambda V + \Delta) \Phi } \sum_{j\in\Lambda} |z_j|^2\Big] \\
&= I_{++}+I_{+-}+I_{-+}+ R(h)
\end{align*}
where for $\beta = (++), (+-)$ or $(-+)$ we have
\begin{align*}
I_\beta &= \int_{\mathcal{I}_\beta} [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \,\mathrm{e}^{i\Phi^* (E+i\varepsilon + \Delta) \Phi } \sum_{j\in\Lambda} |z_j|^2 \,\mathrm{e}^{-\lambda(T_{1}^{\beta }\Phi_1^{*}\Phi_1 + T_{2}^{\beta } \Phi_2^{*}\Phi_2+ \sum_{k\neq 1,2} \Phi_k^* \Phi_k)},
\end{align*}
and for $h(\Phi^*,\Phi)=\sum_{j} |z_j|^2 \,\mathrm{e}^{-\lambda \sum_{k\neq 1,2}\Phi^*_k \Phi_k}\,\mathrm{e}^{i \Phi^* ( E + i \varepsilon + \Delta)\Phi}$ the remainder $R(h)$ is defined in Eq. \eqref{eq:R(h)}.
We will show that the main contribution comes from $\mathcal{I}_{++}$ and indeed
\[
\text{ body } ( T_{1}^{++} \Phi_1^{*}\Phi_1 + T_{2}^{++}\Phi_2^{*}\Phi_2)= (1-\delta^{2}) [|z_{1}|^{2}+|z_{2}|^{2}]>0\quad \forall (z_{1},z_{2})\neq (0,0).
\]
In the following we show that $I_{+-}$ and $I_{-+},$ as well as $R(h)$ are small in terms of $\delta$.
\paragraph{Analysis of the $\pmb{I_{\beta }}$ terms.}
Integrating out the Grassmann variables, we obtain for all $\beta$
\begin{align*}
I_\beta = \int_{\mathcal{I}_\beta} \,\mathrm{d} \bar z \,\mathrm{d} z \sum_{j\in\Lambda} |z_j|^2 \det\left[ \tfrac{C_\beta+\varepsilon }{2\pi} \right] \,\mathrm{e}^{-\bar z ( C_\beta+\varepsilon ) z } ,
\end{align*}
where $C_{\beta }$ has the block structure
\begin{align}
C_\beta& =
\begin{pmatrix}
A_\beta & -iD \\
-iD^T & B
\end{pmatrix},
\quad A_\beta \coloneqq A_0+ \lambda {\rm diag\,} T^\beta, \quad A_0 \coloneqq -i(E+\Delta)|_{\{1,2\}}\nonumber\\
\label{eq:defblockmatrix}
B& \coloneqq (\lambda - i (E+\Delta))_{|\Lambda \setminus \{1,2 \}},\quad D^T\coloneqq (d_{1},d_{2}),
\end{align}
and we defined the vectors $d_{1},d_{2}\in \mathbb R^{\Lambda \setminus \{1,2 \}}$ as
$d_{1} (j)=\delta_{|i_{1}-j|,1},$ $d_{2} (j)=\delta_{|i_{2}-j|,1},$ where $i_{1},i_{2}$ are the positions of $1,2.$
Note that the blocks $B$ and $D$ are independent of $\beta$ and ${\rm Re \,} B>0.$
In contrast, ${\rm Re \,} C_{\beta}>0$ holds only for $\beta = (++).$
We then set $\varepsilon =0$ in our formulas and reorganize $I_{++}+I_{+-}+I_{-+}$ as follows
\begin{align*}
&[I_{++}+I_{+-}+I_{-+}]_{|\varepsilon =0} = \int\,\mathrm{d} \bar z \,\mathrm{d} z \sum_{j\in\Lambda} |z_j|^2 \det\left[ \tfrac{C_{++}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{++} z } \\
&\qquad +
\int_{\mathcal{I}_{+-}} \,\mathrm{d} \bar z \,\mathrm{d} z \sum_{j\in\Lambda} |z_j|^2\left( \det\left[ \tfrac{C_{+-}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{+-} z }- \det\left[ \tfrac{C_{++}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{++} z }\right)\\
&\qquad +
\int_{\mathcal{I}_{-+}} \,\mathrm{d} \bar z \,\mathrm{d} z \sum_{j\in\Lambda} |z_j|^2\left( \det\left[ \tfrac{C_{-+}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{-+} z }- \det\left[ \tfrac{C_{++}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{++} z }\right)\\
&\qquad = {\rm Tr\,} C_{++}^{-1} + \int_{\mathcal{I}_{+-}} (\cdots) + \int_{\mathcal{I}_{-+}}(\cdots)=
{\rm Tr\,} C_{++}^{-1} \left ( 1+ \mathcal{E}_{+-}+ \mathcal{E}_{-+} \right )
\end{align*}
To estimate $\mathcal{E}_{+-}$ and $\mathcal{E}_{-+}$, we integrate over the variables $w = (z_j)_{j\in\Lambda, j\neq 1,2}$ exactly.
We define $z= (\hat z, w), \hat z = (z_1, z_2).$ Then
\begin{align*}
\bar z C_{\beta } z&= \bar w B w -i \bar w D^{t} \hat{z}-i \overline{ \hat z} D w+ \overline{ \hat z} A_{\beta } \hat z ,\\
\sum_{j\in\Lambda} |z_j|^2&=\overline{ \hat z}\hat z+ \sum_{l\in \Lambda \setminus \{1,2 \}} |w_l|^2.
\end{align*}
Integrating over $w$ we get
\begin{align*}
&\int \,\mathrm{d} \bar w \,\mathrm{d} w \det\left[ \tfrac{B}{2\pi} \right] \,\mathrm{e}^{-\bar w B w } \,\mathrm{e}^{-i\bar w D^{t} \hat{z}-i \overline{ \hat z} D w}
(\bar w w+ \overline{ \hat z}\hat z)\\
&= \,\mathrm{e}^{- \overline{ \hat z} DB^{-1}D^{t} \hat{z} }
({\rm Tr\,} B^{-1} - \overline{ \hat z} D B^{-2}D^{t} \hat{z}+ \overline{ \hat z}\hat z)=
\,\mathrm{e}^{- \overline{ \hat z} DB^{-1}D^{t} \hat{z} }
({\rm Tr\,} B^{-1} + \overline{ \hat z} M \hat{z} ),
\end{align*}
where we defined $M \coloneqq 1 - DB^{-2}D^T.$
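Here we used the standard complex Gaussian integral with sources
(valid since ${\rm Re \,} B>0$),
\begin{align*}
\int \,\mathrm{d} \bar w \,\mathrm{d} w\,\det\left[\tfrac{B}{2\pi}\right]
\,\mathrm{e}^{-\bar w B w - i\bar w \eta - i\bar\eta w}
= \,\mathrm{e}^{-\bar\eta B^{-1}\eta },
\qquad \eta \coloneqq D^{t}\hat z,\quad \bar\eta \coloneqq \overline{\hat z}D,
\end{align*}
obtained by completing the square $w\mapsto w-iB^{-1}\eta$; the
insertion of $\bar w w$ is evaluated with the same shift.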
Then for $\beta= (+-),(-+)$ and $\beta'=\beta$ or $\beta'= (++)$ we have
\begin{align*}
&\int_{\mathcal{I}_\beta} \,\mathrm{d} \bar z \,\mathrm{d} z \det\left[ \tfrac{C_{\beta'}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{\beta'} z } \sum_{j\in \Lambda} |z_j|^2 \\& \qquad =
\int_{\mathcal{I}_\beta} \,\mathrm{d} \overline{ \hat z} \,\mathrm{d} \hat z \det \left[ \tfrac{S_{\beta'}}{2\pi} \right] \,\mathrm{e}^{-\overline {\hat z} S_{\beta'} \hat z }
\left(
{\rm Tr\,} B^{-1} + \overline{\hat z } M \hat z
\right),
\end{align*}
where $S_{\beta'} = A_{\beta'}+D B^{-1} D^T$ is the Schur complement of the $2\times 2$ block of $C_{\beta'}$
corresponding to $1,2$. We
also used $\det C_{\beta'}= \det B \det S_{\beta'}$.
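The latter is the Schur determinant identity for the block structure
\eqref{eq:defblockmatrix}: since $B$ is invertible,
\begin{align*}
\det
\begin{pmatrix}
A_{\beta'} & -iD \\
-iD^T & B
\end{pmatrix}
= \det B\,\det\big(A_{\beta'}-(-iD)B^{-1}(-iD^{T})\big)
= \det B\,\det S_{\beta'}.
\end{align*}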
We consider now the error term $\mathcal{E}_{-+};$ the error term $\mathcal{E}_{+-}$ can be treated analogously.
From the results above we get
\begin{align*}
&\mathcal{E}_{-+} =\frac{1}{{\rm Tr\,} C_{++}^{-1}}\int_{\mathcal{I}_{-+}}\,\mathrm{d} \bar z \,\mathrm{d} z \sum_{j\in\Lambda} |z_j|^2\left( \det\left[ \tfrac{C_{-+}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{-+} z }- \det\left[ \tfrac{C_{++}}{2\pi} \right] \,\mathrm{e}^{-\bar z C_{++} z }\right) \\
&= \int_{|z_1| <\delta |z_2|} \,\mathrm{d}\overline{\hat z} \,\mathrm{d} \hat z \det \left[\tfrac{S_{++}}{2\pi}\right]
\,\mathrm{e}^{-\overline {\hat z } S_{++} \hat z}
\, \tfrac{{\rm Tr\,} B^{-1} + \overline{\hat z } M \hat z}{{\rm Tr\,} C_{++}^{-1}}
\left(\,\mathrm{e}^{- \overline{\hat z } X \hat z}\det (1 + S_{++}^{-1}X) -1 \right),
\end{align*}
where we used that $S_{++}$ is invertible, and we defined
\begin{align*}
X\coloneqq A_{-+}- A_{++}=2\lambda
\begin{pmatrix}
-1 & 0 \\ 0 & \delta^2
\end{pmatrix}, \, \text{ hence } \,
\overline{\hat z }X \hat z = 2\lambda (\delta^2 |z_2|^2 - |z_1|^2) >0 .
\end{align*}
Now we change the coordinate $z_1$ to $v = z_1 z_2^{-1} \delta^{-1}.$
As a short-hand notation write $S = S_{++}$. We have
\[
\overline {\hat z } S \hat z= |z_{2}|^{2} (\mathbf{v}^* S\mathbf{v}),\qquad \overline{\hat z } M \hat z=
|z_2|^2 \mathbf{v}^* M \mathbf{v},
\qquad \overline {\hat z } X \hat z= |z_{2}|^{2} (\mathbf{v}^* X\mathbf{v}),
\]
where $\mathbf{v}= (\delta v,1)^t$ and $\mathbf{v}^* = (\delta\bar v,1 )$.
Note that ${\rm Re \,} S>0$ and
\begin{align}\label{eq:Xestimate}
(\mathbf{v}^* X\mathbf{v})= 2\lambda \delta^{2} (1-|v|^{2})\geq 0,
\end{align}
therefore we can perform the integral over $z_{2}$ exactly
\begin{align*}
\mathcal{E}_{-+} =& \det \left[\tfrac{S}{2\pi}\right]
\int_{|v|<1}
\,\mathrm{d} \bar z_2 \,\mathrm{d} z_2 \,\mathrm{d} \bar v \,\mathrm{d} v \ \delta^2 |z_2|^2 \,\mathrm{e}^{-|z_2|^2 \mathbf{v}^* S\mathbf{v}}
\tfrac{{\rm Tr\,} B^{-1} + |z_2|^2 \mathbf{v}^* M \mathbf{v}}{{\rm Tr\,} C_{++}^{-1}}\\
&\times \left(\,\mathrm{e}^{-|z_2|^2 \mathbf{v}^*X \mathbf{v}}\det (1 + S^{-1}X)-1 \right)\\
=&
\delta^2\int_{|v|<1}
\frac{\,\mathrm{d} \bar v \,\mathrm{d} v}{2\pi } \left[
\frac{{\rm Tr\,} B^{-1}}{{\rm Tr\,} C_{++}^{-1}} \left(\frac{\det (S + X)}{(\mathbf{v}^* (S+ X)\mathbf{v})^2}-
\frac{\det S}{(\mathbf{v}^* S\mathbf{v})^2}\right)\right. \\
& \, + \left.
\frac{2 \mathbf{v}^* M \mathbf{v}}{{\rm Tr\,} C_{++}^{-1}} \left(\frac{\det (S + X)}{(\mathbf{v}^* (S+X)\mathbf{v})^3}-
\frac{\det S}{(\mathbf{v}^* S\mathbf{v})^3}\right)
\right]\\
=& \delta^2\int_{|v|<1}
\frac{\,\mathrm{d} \bar v \,\mathrm{d} v}{2\pi }
\left(\frac{\det (S + X)}{(\mathbf{v}^* (S+ X)\mathbf{v})^2}-
\frac{\det S}{(\mathbf{v}^* S\mathbf{v})^2}\right) + O (|\Lambda |^{-1}),
\end{align*}
where we applied Lemma \ref{lem:matrixentries} below and
\begin{align}\label{eq:traceBC}
\frac{{\rm Tr\,} B^{-1}}{{\rm Tr\,} C_{++}^{-1}}=1- \frac{{\rm Tr\,} S^{-1}_{++}M}{{\rm Tr\,} C^{-1}_{++}}= 1+O (|\Lambda |^{-1}).
\end{align}
Applying Lemma \ref{lem:matrixentries} again, together with Eq. \eqref{eq:Xestimate} we get
\begin{align*}
&\frac{\det (S + X)}{(\mathbf{v}^* (S+ X)\mathbf{v})^2}-
\frac{\det S}{(\mathbf{v}^* S\mathbf{v})^2} =
-\frac{(\mathbf{v}^*X\mathbf{v})\det S}{(\mathbf{v}^* (S+ X)\mathbf{v})^2(\mathbf{v}^* S\mathbf{v})}
\left[2+\frac{(\mathbf{v}^*X\mathbf{v})}{(\mathbf{v}^* S\mathbf{v})} \right]\\
&+ \frac{X_{11}S_{22}+X_{22}S_{11}+X_{11}X_{22}}{(\mathbf{v}^* (S+ X)\mathbf{v})^2}= \mathcal{O}\Big ( (1+\frac{1}{\lambda^{2}})
\left[1+\delta^{2}(1+\frac{1}{\lambda^{2}}) \right]\Big ).
\end{align*}
\paragraph{Analysis of $\pmb{R(h)}$.}
Note that we can set $\varepsilon = 0$ in $R(h)$. Using the notation of Eq.~\eqref{eq:defblockmatrix}, we can write
\begin{align*}
R(h) =&- \tfrac{1}{\pi^2} \int_{\mathbb R^+ \times (0,2\pi)^2} \mkern-20mu \,\mathrm{d} r_2 \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2 [\,\mathrm{d} \hat{\Phi}^* \,\mathrm{d} \hat \Phi] \lambda r_2 \delta^2 \left[\,\mathrm{e}^{-\lambda (1-\delta^4) r_2^2} +\delta^2 \,\mathrm{e}^{-\lambda (1-\delta^4)\delta^2 r_2^2} \right]
\\&\times
\left((1+\delta^2) r_2^2 +\sum_{j} |w_j|^2\right) \,\mathrm{e}^{-\hat{\Phi}^* B \hat \Phi }
\,\mathrm{e}^{ir_2 (\bar w D^T v_\theta+ \bar v_\theta D w)}
\,\mathrm{e}^{-r_2^2 \bar v_\theta A_0 v_\theta},
\end{align*}
where $v_\theta^t = (\,\mathrm{e}^{i \theta_1}\delta, \,\mathrm{e}^{i \theta_2})$.
Integrating over the Grassmann variables, we obtain
\begin{align*}
R(h) =& - \tfrac{1}{\pi^2} \int_{\mathbb R^+ \times (0,2\pi)^2} \mkern-20mu \,\mathrm{d} r_2 \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2\,\mathrm{d} \bar w \,\mathrm{d} w \lambda r_2 \delta^2 \left[\,\mathrm{e}^{-\lambda (1-\delta^4)r_2^2} +\delta^2 \,\mathrm{e}^{-\lambda (1-\delta^4)\delta^2 r_2^2} \right]
\\&\times
\left((1+\delta^2) r_2^2 +\sum_{j} |w_j|^2\right) \det\left[\tfrac{B}{2\pi}\right]\,\mathrm{e}^{-\bar w B w }
\,\mathrm{e}^{ir_2 (\bar w D^T v_\theta+ \bar v_\theta D w)}
\,\mathrm{e}^{- r_2^2\bar v_\theta A_0 v_\theta}.
\end{align*}
Define $S_0 = A_0 + DB^{-1}D^T$. Integrating over $w$ and $r_2$, we obtain
\begin{align*}
\frac{R(h)}{{\rm Tr\,} C_{++}^{-1}} = & - \tfrac{1}{\pi^2}\tfrac{1}{{\rm Tr\,} C_{++}^{-1}}\int_{\mathbb R^+ \times (0,2 \pi)^2 } \mkern-30mu \,\mathrm{d} r_2 \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2 \lambda r_2 \delta^2 \left[\,\mathrm{e}^{-\lambda (1-\delta^4)r_2^2} +\delta^2 \,\mathrm{e}^{-\lambda (1-\delta^4)\delta^2 r_2^2} \right]
\\&\times
\left( \bar v_\theta M v_\theta r_2^2 +{\rm Tr\,} B^{-1}\right) \,\mathrm{e}^{-r_2^2 \bar v_\theta S_0 v_\theta} \\
=&- \tfrac{1}{\pi^2} \tfrac {{\rm Tr\,} B^{-1}}{{\rm Tr\,}{C_{++}^{-1}}}\int_{ (0,2 \pi)^2 } \mkern-20mu \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2 \tfrac{\lambda \delta^2 }{2}\left[
\tfrac{1}{\lambda (1-\delta^4)+ \bar v_\theta S_0 v_\theta } + \tfrac{\delta^2}{\lambda \delta^2 (1-\delta^4)+ \bar v_\theta S_0 v_\theta }
\right]
\\& - \tfrac{1}{\pi^2}\int_{(0,2\pi)^2} \mkern-20mu \,\mathrm{d} \theta_1 \,\mathrm{d} \theta_2 \tfrac{\lambda \delta^2}{2} \tfrac{\bar v_\theta M v_\theta }{{\rm Tr\,}{C_{++}^{-1}}}
\left[
\tfrac{1}{(\lambda (1-\delta^4)+ \bar v_\theta S_0 v_\theta )^2} + \tfrac{\delta^2}{(\lambda \delta^2 (1-\delta^4)+ \bar v_\theta S_0 v_\theta)^2 }
\right].
\end{align*}
Similar to the estimates above, we insert absolute values and use Lemma \ref{lem:matrixentries} and Eq. \eqref{eq:traceBC} to bound the first term by $\delta^2 (1+ \mathcal{O}(|\Lambda|^{-1}))$ and the second one by $\delta^2\mathcal{O}(\lambda^{-1}|\Lambda|^{-1})$.
\end{proof}
\begin{lemma}\label{lem:matrixentries}
Let
$\eta >0$ and $\mu_{\lambda } = \frac{\lambda \eta}{\lambda + 4 d \,\mathrm{e}^\eta}$. Let $B, M, C_{++}$ and $S_{++}$ be
the matrices in the proof above. Set $0<\delta\leq \frac{1}{2}. $ Then
\begin{enumerate}[(i)]
\item \label{lem:1} $|B^{-1}_{ij}| \leq \frac{2}{\lambda} \,\mathrm{e}^{-\mu_{\lambda }|i-j|}$ and
${\rm Re \,} (f^{*}B^{-1}f)\geq \frac{\lambda }{\lambda^{2}+ (4d)^{2}}\|f\|^2$ $\forall f\in \mathbb C^{\Lambda \setminus\{1,2 \}}$
\item \label{lem:6}$|{\rm Tr\,} C_{++}^{-1} | \geq \frac{ |\Lambda| \lambda }{K(\lambda+1)^{2}}.$
\item \label{lem:3} ${\rm Re \,} (f^{*}S_{++}f)\geq \frac{\lambda }{2 }\|f\|^2$ $\forall f\in \mathbb C^{\Lambda \setminus\{1,2 \}}.$
Moreover
$|(S_{++})_{jk}| \leq K (\lambda+\tfrac{1}{\lambda })$ for all $j,k= 1,2$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ We have $B = i (-\Delta_{|\Lambda \setminus \{1,2 \}}- (E+i\lambda))$. The upper bound follows from a Combes--Thomas estimate, see \cite[Sect.~10.3]{AizWar}. For the lower bound note that
\[
f^{*}{\rm Re \,} B^{-1} f = \lambda \|B^{-1} f\|^2.
\]
Moreover, $\| B g \|^2 = \lambda^2 \|g\|^2+ \|(E+\Delta_{|\Lambda \setminus \{1,2 \}} )g\|^2 \leq (\lambda^{2}+ (4d)^{2}) \|g\|^2.$
The result follows setting
$g= B^{-1} f$.
\\
$(ii)$ As in $(i)$ above $ f^{*}{\rm Re \,} C_{++}^{-1} f \geq \lambda (1-\delta^{2})\|C_{++}^{-1}f\|^2.$
We can write $C_{++}=\lambda -\lambda \delta^{2}\mathds{1}_{1,2}-i (E+\Delta),$ where $\mathds{1}_{1,2}$ is the diagonal matrix
$(\mathds{1}_{1,2})_{ij}=\delta_{ij}[\delta_{j1}+\delta_{j2}]. $ Hence
\[
C_{++}^{*}C_{++}= (\lambda -\lambda \delta^{2}\mathds{1}_{1,2})^{2}+ (E+ \Delta)^2+i\lambda \delta^{2} [\mathds{1}_{1,2},\Delta ].
\]
The result follows by inserting this decomposition in $\|C_{++}g\|^{2}$ for $g=C_{++}^{-1}f.$
\\
$(iii)$ Using $(i)$ we have
\[
{\rm Re \,} f^{*}S_{++}f=\lambda (1-\delta^{2})\|f\|^{2} +{\rm Re \,} f^{*}DB^{-1}D^{t}f\geq \lambda (1-\delta^{2})\|f\|^{2}\geq \tfrac{\lambda}{2}\|f\|^{2},
\]
where the last inequality uses $\delta\leq \frac12$. The upper bound on the matrix entries follows from $(i)$ too.
\end{proof}
\begin{appendix}
\section{Super analysis}\label{app:susy}
We collect here only a minimal set of definitions for our purpose.
For details, see \cite{Berezin1987, Varadarajan2004,Wegner2016,DeWitt1992}.
\subsection{Basic definitions}
Let $q\in\mathbb N$.
Let $\mathcal{A}=\mathcal{A}_q = \mathcal{A}[\chi_1,\dots,\chi_q]$
be the Grassmann algebra over $\mathbb C$ generated by $\chi_1, \dots,\chi_q$, i.e.
\begin{align*}
\mathcal{A} = \oplus_{i=0}^q V^i,
\end{align*}
where $V$ is the complex vector space with basis $(\chi_1,\dots,\chi_q)$,
$V^0 = \mathbb C$, $V^1 = V$, and $V^{j} = V^{j-1} \wedge V$
for $j\geq 2$ with the anticommutative product $\wedge$
\begin{align*}
\chi_i \wedge \chi_j = - \chi_j \wedge \chi_i.
\end{align*}
As a short hand notation, we write in the following
$\chi_i \chi_j = \chi_i \wedge\chi_j$ and for
$I\subset \{1,\dots,q\}$ denote $\chi^I = \prod_{j\in I} \chi_j$
the \emph{ordered} product of the $\chi_j$ with $j\in I$.
Then each $a\in\mathcal{A}$ has the form
\begin{align}\label{eq:grassmannelement}
a = \sum_{I\in\mathcal{P}(q)} a_I \chi^I,
\end{align}
where $\mathcal{P}(q)$ is the power set of $\{1,\dots, q\}$ and $a_I\in\mathbb C$
for all $I\in\mathcal{P}(q)$.
We distinguish even and odd elements
$\mathcal{A}=\mathcal{A}^0 \oplus\mathcal{A}^1$, where
\begin{align*}
\mathcal{A}^0 = \oplus_{i=0}^{\lfloor q/2\rfloor} V^{2i},
\quad
\mathcal{A}^1 = \oplus_{i=0}^{\lfloor q/2\rfloor} V^{2i+1}.
\end{align*}
The parity operator $p$ for homogeneous
(i.e. purely even, resp. purely odd) elements is defined by
\begin{align*}
p(a) =
\begin{cases}
0 & \text{if } a\in\mathcal{A}^0,\\
1 & \text{if } a\in\mathcal{A}^1.
\end{cases}
\end{align*}
Note that even elements commute with all elements in $\mathcal{A}$
and two odd elements anticommute.
For an even element $a\in\mathcal{A}^0$, we write $a = b_a+ n_a$,
where $n_a$ is the nilpotent part
and $b_a = a_\emptyset \in\mathbb C$ is called the body of $a$.
Let $U\subset \mathbb R$ open. For any function $f\in C^{\infty}(U)$, we define
\begin{align}
\label{eq:upgradefunc}
\begin{split}
f:\mathcal{A}^0 &\to \mathcal{A}^0\\
a & \mapsto f(a) = f(b_a+ n_a)
= \sum_{k=0}^\infty \frac{1}{k!}f^{(k)} (b_a) n_a^k
\end{split}
\end{align}
via its Taylor expansion. Note that the sum above is always finite.
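As a simple illustration, take $q=2$ and $a = x + c\,\chi_1\chi_2$ with $x\in U$ and $c\in\mathbb C$. Then $n_a=c\,\chi_1\chi_2$ satisfies $n_a^2=0$, so the expansion terminates after two terms:
\begin{align*}
f(a) = f(x) + c\, f'(x)\, \chi_1\chi_2 .
\end{align*}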
\subsection{Differentiation}
Let $I' \subset I$.
We define the signs $\sigma_l (I,I')$ and $\sigma_r (I,I')$ via
\begin{align*}
\chi^I = \sigma_l (I,I') \chi^{I'} \chi^{I\backslash I'}
\quad
\chi^I = \sigma_r (I,I') \chi^{I \backslash I'} \chi^{I'} .
\end{align*}
Then the left- resp. right-derivative of an element $a$
of the form \eqref{eq:grassmannelement} is defined as
\begin{align*}
\overrightarrow{\frac{\partial}{\partial \chi_j}} a
\coloneqq \sum_{I\in\mathcal{P}(q): j \in I}
a_I\ \sigma_l (I,\{j\}) \ \chi^{I\backslash\{j\}}, \\
a \overleftarrow{\frac{\partial}{\partial \chi_j}}
\coloneqq \sum_{I\in\mathcal{P}(q): j \in I}
a_I\ \sigma_r (I,\{j\}) \ \chi^{I\backslash\{j\}}.
\end{align*}
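For example, for $q=2$ and $a=\chi_1\chi_2$ we have $\sigma_l(\{1,2\},\{1\})=1$ and $\sigma_r(\{1,2\},\{1\})=-1$, hence
\begin{align*}
\overrightarrow{\frac{\partial}{\partial \chi_1}}\, \chi_1\chi_2 = \chi_2 ,
\qquad
\chi_1\chi_2\, \overleftarrow{\frac{\partial}{\partial \chi_1}} = -\chi_2 ,
\end{align*}
so left- and right-derivatives may differ by a sign.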
\subsection{Integration}
The integration over a subset of (odd) generators $\chi_j, j\in I$ is defined by
\begin{align*}
\int \,\mathrm{d} \chi^I a
\coloneqq \left(\overrightarrow{\frac{\partial}{\partial \chi}}\right)^I a
= \sum_{J\in\mathcal{P}(q): I \subset J}
a_J\ \sigma_l (J,I) \ \chi^{J\backslash I},
\end{align*}
where $a$ has the form \eqref{eq:grassmannelement}
and $\,\mathrm{d} \chi^I = \prod_{j\in I} \,\mathrm{d} \chi_j$ is again an ordered product.
Note that the one forms $\,\mathrm{d} \chi_i$ are anticommutative objects and e.g.
$\int \,\mathrm{d} \chi_i \,\mathrm{d} \chi_j \chi_i \chi_j
= - \int \,\mathrm{d} \chi_i \,\mathrm{d} \chi_j \chi_j \chi_i = - 1$.
\paragraph{Gaussian integral.}
There is a useful Gaussian integral formula for Grassmann variables.
We rename our basis as $(\chi_1,\dots,\chi_q,\bar\chi_1,\dots\bar\chi_q)$.
Then for $M\in \mathbb C^{q\times q}$
\begin{align}\label{eq:susygauss}
\int \,\mathrm{d} \bar\chi \,\mathrm{d} \chi \,\mathrm{e}^{-\sum_{i,j}\bar\chi_i M_{ij} \chi_j}
= \det M,
\end{align}
where $\,\mathrm{d} \bar \chi \,\mathrm{d} \chi
= \prod_{j=1}^q \,\mathrm{d} \bar\chi_j \,\mathrm{d} \chi_j$.
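As a quick consistency check of the sign conventions, take $q=1$ and $M=(m)$: expanding the exponential gives $\,\mathrm{e}^{-m\bar\chi\chi}=1-m\bar\chi\chi$, and $\int \,\mathrm{d}\bar\chi\,\mathrm{d}\chi\ \bar\chi\chi=-1$ by the rule above, so the integral indeed equals $m=\det M$.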
Combining this with complex Gaussian integral formulas,
we obtain the following result.
\begin{theo}[Supersymmetric integral representation]\label{theo:susyapproach}
Let $A_1,A_2\in \mathbb C^{n\times n }$ with ${\rm Re \,} A_1 >0$.
Let $\Phi = (z,\chi)^t \in \mathbb C^n \times V^n$ be a supervector and
$\Phi^* = (\bar z, \bar \chi) \in \mathbb C^n \times V^n$ its adjoint (row) vector.
With the notations $[\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] =
(2\pi)^{-n} \,\mathrm{d} \bar z \,\mathrm{d} z \,\mathrm{d} \bar \chi \,\mathrm{d} \chi$ and
$\Phi^* A \Phi
= \sum_{j,k=1}^n \bar z_j (A_1)_{jk} z_k + \bar \chi_j (A_2)_{jk} \chi_k$
for a block matrix
$A =
\begin{pmatrix}
A_1 & 0 \\ 0 & A_2
\end{pmatrix}$
(a supermatrix with odd parts $0$) we can write
\begin{align*}
\frac{\det A_2}{\det A_1}
= \int [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \,\mathrm{e}^{-\Phi^* A \Phi}
\end{align*}
and
\begin{align*}
(A_1^{-1})_{jk}
=\int [\,\mathrm{d} \Phi^* \,\mathrm{d} \Phi] \ \bar z_k z_j\,\mathrm{e}^{-\Phi^* \hat A_1 \Phi},
\end{align*}
where
$\hat A_1 =
\begin{pmatrix}
A_1 & 0 \\ 0 & A_1
\end{pmatrix}$.
\end{theo}
\begin{proof}
Combine Eq. \eqref{eq:susygauss} with the complex Gaussian integral formulas
\begin{align*}
\det A_1 =
\frac{1}{(2\pi)^n} \int \,\mathrm{d} \bar z \,\mathrm{d} z \,\mathrm{e}^{-\bar z A_1 z},
\quad
(A_1^{-1})_{jk}
= \frac{\det A_1}{(2\pi)^n} \int \,\mathrm{d} \bar z \,\mathrm{d} z
\ \bar z_k z_j\,\mathrm{e}^{-\bar z A_1 z} .
\end{align*}
Note that while Eq. \eqref{eq:susygauss} holds for all matrices
$A\in \mathbb C^{n\times n}$, we need the additional condition ${\rm Re \,} A >0$
for the complex ones to ensure that the complex integral is finite.
\end{proof}
\subsection{Grassmann algebra functions and change of variables}
In this section, we denote the body of an even element $a$ by $b(a)$
instead of $b_a$.
\begin{defi}\label{def:generators}
Let $U\subset \mathbb R^p$ open. The
\emph{algebra of smooth $\mathcal{A}[\chi_1,\dots,\chi_q]$-valued functions}
on a domain $U$ is defined by
\begin{align*}
\mathcal{A}_{p,q}(U)
\coloneqq \left\{
f = f(x,\chi)= \sum_{I\in\mathcal{P}(q)}f_I(x)\chi^I: f_I \in C^\infty (U)
\right\}.
\end{align*}
We call $y_i(x,\chi)$, $ \eta_j(x,\chi)$, for ${i=1,\dots p, j = 1,\dots,q}$
\emph{generators} of $\mathcal{A}_{p,q}(U)$
if $p(y_i)= 0$, $p(\eta_j)=1$ and
\begin{enumerate}[(i)]
\item $ \{(b(y_1(x,0)),\dots, b(y_p(x,0))), x\in U\} $ is a domain in $\mathbb R^p$,
\item we can write all $f\in\mathcal{A}_{p,q}(U)$ as $f= \sum_{I}f_I(y)\eta^I$.
\end{enumerate}
\end{defi}
Note that $(x,\chi)$ are generators for $\mathcal{A}_{p,q}(U)$.
A change of variables is then a parity preserving transformation
between systems of generators of $\mathcal{A}_{p,q}(U)$.
A practical change of variable formula for super integrals
is currently only known for functions with compact support,
i.e. functions $f\in\mathcal{A}_{p,q}(U)$
such that $f_I\in C_c^\infty(U)$ for all $I\in\mathcal{P}(q)$.
\begin{theo}\label{theo:changeofvariables}
Let $U\subset\mathbb R^p$ open, $x,\chi$ and $y(x,\chi)$, $ \eta(x,\chi)$
two sets of generators of $\mathcal{A}_{p,q}(U)$.
Denote the isomorphism between the generators by
\begin{align*}
\psi: (x,\chi) \mapsto(y(x,\chi), \eta(x,\chi))
\end{align*}
and $V= b( \psi(U)) = \{(b(y_1(x,0)),\dots, b(y_p(x,0))), x\in U\}$.
Then for all $f\in \mathcal{A}_{p,q}(V)$ with compact support, we have
\begin{align*}
\int_V \,\mathrm{d} y \,\mathrm{d}\eta f(y,\eta)
= \int_U \,\mathrm{d} x \,\mathrm{d}\chi f\circ \psi(x,\chi) {\rm Sdet \,}(J\psi) ,
\end{align*}
where ${\rm Sdet \,} (J\psi)$ is called the Berezinian defined by
\begin{align*}
J\psi =
\begin{pmatrix}
\tfrac{\partial y}{\partial x}
& y \overleftarrow{\tfrac{\partial }{\partial \chi} }\\
\tfrac{\partial \eta}{\partial x}
& \eta \overleftarrow{\tfrac{\partial }{\partial \chi}}
\end{pmatrix},
\qquad
{\rm Sdet \,}
\begin{pmatrix}
a & \sigma \\ \rho &b
\end{pmatrix}
= \det (a-\sigma b^{-1} \rho) \det b^{-1}.
\end{align*}
Integration over even elements $x$ and $y$ means
integration over the body $b(x)$ and $b(y)$
in the corresponding regions $U$ and $V$.
\end{theo}
\begin{proof}
See \cite[Theorem 2.1]{Berezin1987} or \cite[Theorem 4.6.1]{Varadarajan2004}.
\end{proof}
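\begin{rem}
Two instructive special cases: if $\psi$ changes only the even variables, i.e. $\eta=\chi$, then ${\rm Sdet \,}(J\psi)=\det(\tfrac{\partial y}{\partial x})$ and Theorem \ref{theo:changeofvariables} reduces to the classical substitution rule. If $\psi$ changes only the odd variables linearly, $y=x$ and $\eta=T\chi$ with $T\in\mathbb C^{q\times q}$ invertible, then ${\rm Sdet \,}(J\psi)=\det T^{-1}$, the reciprocal of the factor by which the top monomial $\chi^{\{1,\dots,q\}}$ transforms.
\end{rem}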
\begin{rem}
Applying an isomorphism $\psi$ that changes only the odd elements,
Theorem \ref{theo:changeofvariables} holds
also for smooth, integrable functions
that are not necessarily compactly supported.
Changing also the even elements for a non compactly supported function,
boundary integrals can occur.
\end{rem}
\end{appendix}
\bibliographystyle{alpha}
\section{Introduction}
Scheduling problems with incomplete knowledge of the input data have been
studied extensively. There are different ways to model such uncertainty, the major
frameworks being {\em online optimization}, where parts of the input are
revealed incrementally, {\em stochastic optimization}, where parts of the input
are modeled as random variables, and {\em robust optimization}, where
uncertainty in the data is bounded.
Most scheduling research in this context assumes uncertainty about the job
characteristics. Examples include online scheduling, where the job set is a priori unknown
\cite{AlbersH17,PruhsST04}, mixed-criticality scheduling, where
the processing time
comes from a given set \cite{BaruahBDLMMS12},
stochastic scheduling, where the processing times are modeled as random
variables with known distributions~\cite{Nino-Mora09}, robust scheduling, where
the unknown processing times are within a given interval \cite{KouvelisYu97},
two/multi-stage stochastic and robust scheduling with recourse, where the set of
jobs that has to be scheduled
stems from a known super set and is revealed in stages \cite{ChenMRS15,ShmoysS07},
and scheduling with explorable uncertainty,
where the processing time of a job can potentially be reduced by testing the job at extra cost~\cite{DuerrEMM20}.
A lot less research addresses uncertainty about the machine
environment, particularly, where the processing speeds of machines change in an
unforeseeable manner. The majority of such research focuses on the special case of
scheduling with unknown non-availability periods, that is, machines break down
temporarily~\cite{AlbersS01,DiedrichJST09} or permanently~\cite{SteinZ20}.
Arbitrarily changing machine speeds have been considered for scheduling on a
single machine~\cite{EpsteinLMMMSS12}.
Fluctuations in the processing speeds of machines are pervasive in real-world
environments. For example, machines can be shared computing resources in
data centers, where a sudden increase of workload may cause a general slowdown
or, for some users, the machine may become unavailable due to priority issues.
As another example, machines that are production units may change their
processing speed due to aging or, unexpectedly, break down completely. In any
case,~(unforeseen) changes in the processing speed may have a drastic impact on
the quality of a given schedule.
In this paper, we are concerned with the question of how to design a partial
schedule by committing to groups of jobs, to be scheduled on the same machine,
before knowing the actual machine speeds. This question is motivated, for
example, by MapReduce computations done in large data centers.
A MapReduce
function typically groups workload before knowing the actual number or precise
characteristics of the available computing resources~\citep{MapReduce}.
We consider a two-stage robust scheduling problem in which we aim for a schedule
of minimum makespan on multiple machines of unknown speeds.
Given a set of~$n$ jobs and~$\nrbags$ machines, we ask
for a partition of the jobs into~$\nrbags$ groups, called {\em bags}, that have to be scheduled on the machines after their speeds are revealed without being split up.
That is, in the second stage, when the machine speeds are known, a feasible schedule must assign all jobs in the same
bag to the same machine.
The goal is to minimize the second-stage makespan.
More formally, we define the {\em \srs} problem as follows. We are given~$n$
jobs with processing times~$p_j\geq 0$, for~$j\in\{1, \ldots, n\}$, and the
number of machines,~$\nrbags \in \N$. Machines run in parallel but their speed
is a priori unknown.
In the first stage, the task is to group jobs into at most~$\nrbags$ bags. In
the second stage, the machine speeds~$s_i\geq 0$, for~$i\in\{1,\ldots,m\}$, are
revealed. The time needed to execute job~$j$ on machine~$i$ is~$p_j/s_i$, if~$s_i>0$.
If a machine has speed~$s_i=0$, then it cannot process any job; we say the
machine {\em fails}. Given the machine speeds, the second-stage task is to
assign bags to the machines such that the makespan~$\cmax$ is minimized, where
the makespan is the maximum sum of execution times of jobs assigned to the same
machine.
Given a set of bags and machine speeds, the second-stage problem emerges as
classical makespan minimization on related
parallel machines. It is well-known that this problem can
be solved arbitrarily close to optimality by polynomial-time approximation
schemes~\cite{AlonAWY1998,HochbaumS87,Jansen10}. As we are interested in the
information-theoretic tractability, we
allow superpolynomial running times for our algorithms -- ignoring any computational concern -- and assume that
the second-stage problem is solved optimally. Thus, an \emph{algorithm} for \srs defines a job-to-bag allocation, i.e., it gives a solution to the first-stage problem. We may use non-optimal bag-to-machine assignment to simplify the analysis.
We evaluate the performance of algorithms by a worst-case analysis comparing the
makespan of the algorithm with the optimal makespan achievable when all machine
speeds are known in advance. We say that an algorithm is~$\rho$-{\em robust} if,
for any instance, its makespan is within a factor~$\rho\geq 1$ of the optimal
solution. The {\em robustness factor} of the algorithm is defined as the infimum over all such~$\rho$.
The special case of \srs where all machine speeds are either~$0$ or~$1$ has
been studied previously by Stein and Zhong~\cite{SteinZ20}. They introduced the
problem with identical machines and an unknown number of machines that fail (speed
$0$) in the second stage. They present a simple lower bound of~$4/3$ on the
robustness factor with equal jobs and design a general~$5/3$-robust algorithm.
For infinitesimal jobs, they give an improved $1.2333$-robust algorithm complemented by a lower bound for each number of machines which tends
to~$(1+\sqrt{2})/2\approx 1.207$ for large~$m$. Stein and Zhong also consider
the objective of minimizing the maximum difference between the most loaded and
the least loaded machine,
motivated by questions on fair allocation.
\subsection*{Our Contribution}
We introduce the \srs problem and present robust algorithms. The
algorithmic difficulty of this problem is to construct bags in the first stage that are robust
under any choice of machine speeds in the second stage. The straight-forward
approach of using any makespan-optimal solution on~$m$ identical machines is not
sufficient. \Cref{lem:optnotrobust} shows that such an algorithm might
have an arbitrarily large robustness factor. Using \emph{Longest Processing Time}
first (\lpt) to create bags does the trick and is~$(2 - \frac1m)$-robust
for arbitrary job sizes (\Cref{thm:speedLPT}). While this was known for speeds in~$\{0,1\}$~\cite{SteinZ20}, our most
general result is much less obvious.
Note that \lpt aims at ``balancing'' the bag sizes which cannot lead to a
better robustness factor than~$2-\frac1m$ as we show
in~\Cref{lem:balancedlowerbound}. Hence, to improve upon this factor, we need to
carefully construct bags with imbalanced bag sizes. There are two major
challenges with this approach: (i) finding the ideal imbalance in the bag sizes
independent from the actual job processing times that would be robust for all
adversarial speed settings simultaneously and (ii) to adapt bag sizes to
accommodate discrete jobs.
A major contribution of this paper is an optimal solution to the first challenge
by considering infinitesimal jobs. In this case, the speed-robust scheduling problem boils down
to identifying the best bag sizes as placing the jobs into bags becomes trivial.
We give, for any number of machines, optimally imbalanced bag sizes and prove a
best possible robustness factor of
\[
\rhosand = \frac{\nrbags^\nrbags}{\nrbags^\nrbags-(\nrbags-1)^\nrbags} \leq
\frac{e}{e-1} \approx 1.58\,.
\]
For infinitesimal jobs in the particular machine environment in which all
machines have either speed~$0$ or~$1$, we obtain an algorithm with robustness
factor
\[
\rhosandid(m) = \max_{t\leq \frac m2,~t\in\mathbb N}\ \frac{1}{\frac t{m-t}
+\frac{m-2t}m} \ \leq\ \frac{1+\sqrt{2}}{2} = \rhoid \approx 1.207\,.
\]
This improves the previous upper bound of~$1.233$ by Stein and
Zhong~\cite{SteinZ20} and matches exactly their lower bound for each~$m$.
Furthermore, we show that the lower bound in \cite{SteinZ20} holds even for
randomized algorithms and, thus, our algorithm is optimal for both,
deterministic and randomized scheduling.
The above tight results for infinitesimal jobs are crucial for our further
results for discrete jobs. Building on those ideal bag sizes, our approaches
differ substantially from the methods in \cite{SteinZ20}. When all jobs have equal processing time, we obtain a~$1.8$-robust solution through a careful analysis of the trade-off
between using slightly imbalanced bags and a scaled version of the ideal bag sizes computed for the infinitesimal setting (\Cref{theo:speedbrickUB}).
When machines have only speeds in~$\{0,1\}$ and jobs have arbitrary equal sizes,
i.e., unit size, then we give an optimal~$\frac{4}{3}$-robust algorithm. This is
an interesting class of instances as the best known lower bound of~$\frac43$ for
discrete jobs uses only unit-size jobs~\cite{SteinZ20}. To achieve this
result, we, again, crucially exploit the ideal bag sizes computed for
infinitesimal jobs by using a scaled variant of these sizes. Some cases,
depending on~$m$ and the optimal makespan on~$m$ machines, have to be handled
individually. Here, we use a direct way of constructing bags with at most four
different bag sizes, and some cases can be solved by an integer linear program.
We summarize our results in Table~\ref{tab:results}. Missing proofs can be found in the Appendix.
Inspired by traditional one-stage scheduling problems where jobs have {\em
machine-dependent execution times} (unrelated machine scheduling), one might ask
for such a generalization of our problem. However, it is easy to rule out any robustness
factor for such a setting: Consider four machines and five
jobs, where each job may be executed on a unique pair of machines. Any algorithm
must build at least one bag with at least two jobs. For this bag there is
at most one machine to which it can be assigned with finite makespan. If this
machine fails, the algorithm cannot complete the jobs whereas an optimal
solution can split this bag on multiple machines to get a finite makespan.
\begin{table}
\renewcommand*\arraystretch{1.5}
\centering
\def\arraystretch{1.1}
\begin{tabular}{lcccc}
&\multicolumn{2}{c}{General speeds}& \multicolumn{2}{c}{Speeds from~$\{0,1\}$ } \\
\cmidrule{2-5}
&Lower bound&Upper bound&Lower bound&Upper bound\\
\midrule
\multirow{2}{*}{Discrete jobs}&~$\rhosandm m~$&~$2 - \tfrac1m$ &$ \tfrac43$&~$\tfrac53$ \\
\footnotesize & (\cref{lem:speedsandLB}) & (\cref{thm:speedLPT}) & \cite{SteinZ20} & \cite{SteinZ20} \\
\midrule
\multirow{2}{*}{Equal-size jobs}&$\rhosandm m~$&$1.8$&\multicolumn{2}{c}{$\tfrac43$} \\
& (\cref{lem:speedsandLB}) & (\cref{theo:speedbrickUB}) & \multicolumn{2}{c}{(\cite{SteinZ20}, \cref{thm:BricksUB})} \\
\midrule
\multirow{2}{*}{Infinitesimal jobs}& \multicolumn{2}{c}{$\rhosandm m \leq\tfrac{e}{e-1} \approx 1.58$}
& \multicolumn{2}{c}{$\rhoid(m) \leq \tfrac{1+\sqrt{2}}{2} \approx 1.207$} \\
& \multicolumn{2}{c}{(\cref{lem:speedsandLB}, \cref{thm:RelatedWSand})}
& \multicolumn{2}{c}{(\cite{SteinZ20}, \cref{theo:UBSandOnParallel})} \\
\bottomrule
\end{tabular}
\caption{Summary of results on \srs.}\label{tab:results}
\end{table}
\section{Speed-Robust Scheduling with Infinitesimal Jobs}
In this section, we assume that jobs have infinitesimal processing time.
We give optimal algorithms for \srs in both, the general case and for the
special case with speeds in~$\{0,1\}$.
\subsection{General Speeds}\label{sec:SandOnRelated}
\begin{restatable}{theorem}{RelatedWSand}
\label{thm:RelatedWSand}
There is an algorithm for \srs with
infinitesimal jobs that is~$\rhosand$-robust for all~$m\geq 1$, where
\[
\rhosand = \frac{\nrbags^\nrbags}{\nrbags^\nrbags-(\nrbags-1)^\nrbags}
\leq \frac{e}{e-1} \approx 1.58\,.
\]
This is the best possible robustness factor that can be achieved by any algorithm.
\end{restatable}
To prove Theorem~\ref{thm:RelatedWSand}, we first show that, even when we
restrict the adversary to a particular set of speed configurations, no
algorithm can achieve a robustness factor better than~$\rhosand$. Note that
since we can scale all speeds equally by an arbitrary factor without
influencing the robustness factor, we can assume that the sum of the speeds is
equal to~$1$. Similarly, we can assume that the total processing time of the
jobs is equal to~$1$, such that the optimal makespan of the adversary is equal
to~$1$ and the worst-case makespan of an algorithm is equal to its robustness
factor.
Intuitively, the idea
behind the set of $m$ speed configurations is that the adversary can set
$\nrbags-1$ machines to equal low speeds and one machine to a high speed. The low speeds are set such
that one particular bag size just fits on that machine when aiming for the given robustness
factor. This immediately implies that all larger bags have to be put on the fast
machine together. This way, the speed configuration can \emph{target} a certain
bag size. We provide specific bag sizes that achieve a robustness
of~$\rhosand$ and show that for the speeds targeting these bag sizes, other bag sizes would result in even larger robustness factors.
We define~$U=\nrbags^\nrbags$,~$L=\nrbags^\nrbags-(\nrbags-1)^{\nrbags}$, and
$t_k = (\nrbags-1)^{\nrbags-k} \nrbags^{k-1}$ for~$k\in\{1,\ldots,\nrbags\}$.
Intuitively, these values are chosen such that the bag sizes~$t_i/L$ are optimal
and~$t_i/U$ corresponds to the low speed of the~$i$-th speed configuration.
It is easy to verify that~$\rhosand=U/L$ and for all~$k$ we have
\begin{equation}\label{eq:ULt}
\sum_{i<k} t_i = (\nrbags-1) t_{k} -U + L\,.
\end{equation}
In particular, this implies that~$\sum_{i\leq \nrbags} t_i = \nrbags t_\nrbags -U+L = L$
and therefore that the sum of the bag sizes is~$1$.
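For concreteness, for $\nrbags=2$ we have $U=4$, $L=3$, $(t_1,t_2)=(1,2)$, giving the bag sizes $(1/3,2/3)$ defined below and $\rhosandm{2}=4/3$; for $\nrbags=3$ we have $U=27$, $L=19$, $(t_1,t_2,t_3)=(4,6,9)$, bag sizes $(4/19,6/19,9/19)$, and $\rhosandm{3}=27/19\approx 1.421$.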
Let~$a_1\le\cdots\le a_\nrbags$ denote the bag sizes chosen by an algorithm
and~$s_1\le\cdots\le s_\nrbags$ the speeds chosen by the adversary.
\begin{lemma}
\label{lem:speedsandLB}
For any~$\nrbags\geq 1$, no algorithm for \srs
with
infinitesimal jobs can have a robustness factor less than~$\rhosand$.
\end{lemma}
\begin{proof}
We restrict the adversary to the following~$\nrbags$ speed configurations
indexed by~$k\in\{1,\ldots,\nrbags\}$:
\[
\mc S_k := \big\{s_1 = t_k/U,~ s_2 = t_k/U,~ \dots ,~ s_{\nrbags-1}
= t_k/U,~ s_\nrbags = 1 - (\nrbags-1) t_k/U\big\}\,.
\]
Note that for all~$k\in\{1,\ldots,\nrbags\}$, we have~$\nrbags t_k \leq U$ and, thus,~$s_\nrbags\geq s_{\nrbags-1}$.
We show that for any bag sizes~$a_1,\ldots,a_\nrbags$, the adversary can force
the algorithm to have a makespan of at least~$U/L$ with some~$\mc S_k$. Since the
optimal makespan is fixed to be equal to~$1$ by assumption, this implies a robustness
factor of at least~$U/L$.
Let~$k^\star$ be the smallest index~$k$ such that~$a_k\ge t_k/L$. Such an index exists
because the sum of the~$t_i$'s is equal to~$L$ (\Cref{eq:ULt}) and the sum of
the~$a_i$'s is equal to~$1$. Now, consider the speed configuration~$\mc S_{k^\star}$. If
one of the bags~$a_i$ for~$i\geq k^\star$ is not scheduled on the~$\nrbags$-th machine,
the makespan is at least~$a_i/s_1\geq a_{k^\star}U/t_{k^\star} \geq U/L$. Otherwise, all~$a_i$
for~$i\geq k^\star$ are scheduled on machine~$\nrbags$. Then, using Equation~\eqref{eq:ULt}, the load on that machine
is at least
\begin{align*}
\sum_{i\geq k^\star} a_i = 1 - \sum_{i< k^\star} a_i \geq 1 - \frac 1L \sum_{i< k^\star} t_i
= \frac{1}{L}\left(L - (\nrbags-1)t_{k^\star} + U - L\right)
= \frac UL s_\nrbags\,.
\end{align*}
Thus, either a machine~$i<m$ with a bag~$i'\geq k^\star$
or machine~$i=\nrbags$ has a load of at least~$s_i \cdot U/L$ and determines the makespan.
\end{proof}
For given bag sizes, we call a
speed configuration that maximizes the minimum makespan a \emph{worst-case speed configuration}.
Before we provide the strategy that obtains a matching robustness factor,
we state a property of such best strategies for the adversary.
\begin{restatable}{lemma}{LemmaFullAssignment}\label{clm:fullassignment}
Given bag sizes and a worst-case speed configuration, for each machine~$i$, there exists an optimal assignment of the bags to the machines such that only machine~$i$ determines the makespan.
\end{restatable}
Note that, by~\Cref{clm:fullassignment}, for a worst-case speed configuration, there are many different bag-to-machine assignments that obtain the
same optimal makespan. \Cref{clm:fullassignment} also implies that for such
speed configurations all speeds are non-zero.
Let \sandalg denote the algorithm that creates~$\nrbags$ bags of the following sizes
\[
a_1 = t_1/L,~ a_2 = t_2/L,~ \dots ,~ a_m = t_m/L\,.
\]
Note that this is a valid algorithm since the sum of these bag sizes is equal to~$1$.
Moreover, these bag sizes are exactly such that if we take the speed configurations
from Lemma~\ref{lem:speedsandLB}, placing bag~$j$ on a slow machine in configuration~$j$
results in a makespan that is equal to~$\rhosand$.
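To make the construction concrete, the following self-contained Python sketch (ours; it is not part of the formal development) computes the bag sizes of \sandalg in exact rational arithmetic and verifies, by exhaustive bag-to-machine assignment for small~$\nrbags$, that every speed configuration~$\mc S_k$ from Lemma~\ref{lem:speedsandLB} forces a makespan of exactly~$\rhosand$.
\begin{verbatim}
from fractions import Fraction
from itertools import product

def sandalg_bags(m):
    # Bag sizes t_k / L with t_k = (m-1)^(m-k) * m^(k-1)
    # and L = m^m - (m-1)^m; they sum to 1.
    L = m**m - (m - 1)**m
    return [Fraction((m - 1)**(m - k) * m**(k - 1), L)
            for k in range(1, m + 1)]

def best_makespan(bags, speeds):
    # Optimal second stage by brute force over the m^m possible
    # assignments; only sensible for tiny m. All speeds are positive.
    best = None
    for assign in product(range(len(speeds)), repeat=len(bags)):
        load = [Fraction(0)] * len(speeds)
        for size, machine in zip(bags, assign):
            load[machine] += size
        ms = max(load[i] / speeds[i] for i in range(len(speeds)))
        best = ms if best is None else min(best, ms)
    return best

m = 3
bags = sandalg_bags(m)
rho = Fraction(m**m, m**m - (m - 1)**m)
for k in range(1, m + 1):
    t_k = Fraction((m - 1)**(m - k) * m**(k - 1), m**m)
    speeds = [t_k] * (m - 1) + [1 - (m - 1) * t_k]
    # Total speed and total volume are both 1, so the adversary's
    # optimal makespan is 1 and rho bounds the robustness.
    assert best_makespan(bags, speeds) == rho
\end{verbatim}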
We proceed to show that \sandalg has a robustness factor of~$\rhosand$.
\begin{lemma}\label{lem:speedsandUB}
For any~$m\geq 1$, \sandalg is~$\rhosand$-robust for \srs with infinitely many infinitesimal jobs.
\end{lemma}
\begin{proof}
Let~$a_1,\ldots,a_\nrbags$ be the bag sizes as prescribed by \sandalg and let
$s_1,\ldots,s_\nrbags$ be a speed configuration that maximizes the minimum
makespan given these bag sizes. Further, consider an optimal assignment of
bags to machines and let~$\cmax^*$ denote its makespan. We use one particular
(optimal) assignment to obtain an upper bound on~$\cmax^*$. By
\Cref{clm:fullassignment}, there exists an optimal assignment where only
machine~$1$ determines the makespan, i.e., machine~$1$ has load~$\cmax^*\cdot
s_1$ and any other machine~$i$ has load strictly less than~$\cmax^* \cdot
s_i$. Consider such an assignment. If there are two bags assigned to
machine~$1$, then there is an empty machine with speed at least~$s_1$.
Therefore, we can put one of the two bags on that machine and decrease the
makespan. This contradicts that~$\cmax^*$ is the optimal makespan, so there is
exactly one bag assigned to machine~$1$. Let~$k$ be the index of the unique
bag placed on machine~$1$, i.e.,~$\cmax^*=a_k/s_1$, and let~$\ell$ be the
number of machines of speed~$s_1$.
If~$a_k>a_\ell$, machine~$i\in\{1,\ldots,\ell\}$ with speed~$s_1$ can
be assigned bag~$i$ with a load that is strictly less than~$\cmax^*\cdot s_1$.
Thus, given the current assignment, we can remove bag~$a_k$ from machine~$1$ and place
the~$\ell$ smallest bags on the~$\ell$ slowest machines, one per machine, e.g., bag~$a_i$ on machine~$i$ for~$i\in\{1,\ldots,\ell\}$. This empties at least one
machine of speed strictly larger than~$s_1$. Then, we can place bag~$a_k$ on this, now
empty, machine, which yields a makespan that is strictly smaller than~$\cmax^*$. This
contradicts the assumption that~$\cmax^*$ is the optimal makespan and,
thus,~$a_k\le a_\ell$, which implies~$k \leq \ell$.
Let~$\machineload_i$ denote the total processing time of bags that are
assigned to machine~$i$ and let~$C$ be the total \emph{remaining capacity} of
the assignment, that is,~$C:=\sum_{i=1}^{m} (s_i\cmax^*-\machineload_i)$. We
construct an upper bound on~$C$, which allows us to bound~$\cmax^*$.
Machines in the set~$\{2,\ldots,\ell\}$ cannot be assigned a bag of size
larger than~$a_k$ since their load would be greater than~$\cmax^* \cdot s_1$,
causing a makespan greater than~$\cmax^*$. Therefore, we assume without loss
of generality that all bags~$a_j<a_k$ are assigned to a machine with
speed~$s_1$. The total remaining capacity on the first~$k$ machines is
therefore equal to~$(k-1) a_k - \sum_{i< k} a_i$.
Consider a machine~$i>k$. If the remaining capacity of this machine is greater than~$a_k$, then
we can decrease the makespan of the assignment by moving bag~$k$ to
machine~$i$. Therefore, the remaining capacity on machine~$i$ is at most~$a_k$.
Combining the above and using \eqref{eq:ULt}, we obtain:
\begin{align*}
C &\leq (m-1) a_k - \sum_{i< k} a_i = \frac{1}{L}\left( (m-1) t_k - \sum_{i<
k} t_i\right) = \frac{1}{L}\left(U-L\right)\,.
\end{align*}
The total processing time is~$\sum_{i=1}^m a_i = 1$, and the maximum total
processing time that the machines could process with makespan~$\cmax^*$
is~$\sum_{i=1}^m s_i \cmax^* = \cmax^*$. Since the latter is equal to the total processing time
plus the remaining capacity, we have~$\cmax^* = 1+C \leq U/L$, which proves the~lemma.
\end{proof}
The robustness factor~$\rhosandm{m}$ is not best possible for every~$m$ when we allow
algorithms that make randomized decisions and compare to an oblivious
adversary. For~$m=2$, uniformly randomizing between bag sizes~$a_1=a_2=1/2$
and~$a_1=1/4$,~$a_2=3/4$ yields a robustness factor that is slightly better
than~$\rhosandm{2} = 4/3$. Interestingly, with speeds in~$\{0,1\}$ the optimal
robustness factor is equal for deterministic and randomized~algorithms.
\subsection{Speeds in \texorpdfstring{$\{0,1\}$}{0,1}}\label{sec:SandOnParallel}
\begin{restatable}{theorem}{IdenticalWSand}\label{thm:IdenticalWSand}\label{theo:UBSandOnParallel}
For all~$m\geq 1$, there is a deterministic~$\rhoid(m)$-robust algorithm for \srs with speeds in~$\{0,1\}$ for infinitesimal jobs,
where
\[
\rhosandid(m) = \max_{t\in\N,~t\leq \frac m2}\ \frac{1}{\frac t{m-t}
+\frac{m-2t}m} \ \leq\ \frac{1+\sqrt{2}}{2} = \rhoid \approx 1.207\,.
\]
This is the best possible robustness factor that can be achieved by any algorithm, even a randomized algorithm against an oblivious adversary.
\end{restatable}
The deterministic version of the lower bound and some useful insights were
already presented in~\cite{SteinZ20}. We recall some of these
insights here, because they are used in the proof. To do so, we introduce
some necessary notation used in the remainder this paper. The number
of failing machines (i.e., machines with speed equal to~$0$) is referred to as~$t\geq0$, and
we assume w.l.o.g.\ that these are machines~$1,\dots,t$. Furthermore, we assume
for this subsection again w.l.o.g.\ that the total volume of infinitesimal jobs
is~$m$, and we will define bags~$1,\dots,m$ with respective sizes
$a_1\leq\dots\leq a_m$ summing to at least~$m$ (the potential excess being unused).
\begin{lemma}[Statement (3) in~\cite{SteinZ20}]\label{lem:folding}
For all~$m\ge1$ and~$t \leq m/2$, there exists a makespan-minimizing allocation of bags to
machines for \srs with speeds in~$\{0,1\}$ and infinitely many infinitesimal jobs that assigns the smallest~$2t$ bags to machines~$t+1,\ldots,2t$.
\end{lemma}
Since \Cref{lem:folding} only works for~$t\leq m/2$, one may worry that, for larger~$t$,
there is a more difficult structure to understand. The following insight shows
that this worry is unjustified. Indeed, if~$\nrmach< m/2$ is the number of
machines that do \emph{not} fail, one can simply take the solution for
$2\nrmach$ machines and assign the bags from any two machines to one machine.
The optimal makespan is doubled and that of the algorithm is at most
doubled, so the robustness is conserved.
\begin{lemma}[Proof of Theorem 2.2 in~\cite{SteinZ20}]\label{lem:m-half}
Let~$\rho>1$. For all~$m\ge1$, if an algorithm is~$\rho$-robust for
\srs with speeds in~$\{0,1\}$ and infinitely many infinitesimal jobs
for~$t\leq m/2$, it is~$\rho$-robust for $t\leq m-1$.
\end{lemma}
We will thus focus on computing bag sizes such that the makespan of a best
allocation according to Lemma~\ref{lem:folding} is within a~$\rhosandid(m)$
factor of the optimal makespan when~$t\leq m/2$. The approach
in~\cite{SteinZ20} to obtain the (as we show tight) lower bound~$\rhosandid(m)$
is as follows. Given some~$t\leq m/2$ and a set of bags allocated according to Lemma~\ref{lem:folding},
\begin{enumerate}
\item[(i)] the makespan on machines~$t+1,\dots,2t$ is at most~$\rhosandid(m)$
times the optimal makespan~\mbox{$m/(m-t)$}, and
\item[(ii)] the makespan on machines~$2t+1,\dots,m$ is at most
$\rhosandid(m)$ because those machines only hold a \emph{single} bag after a
simple ``folding'' strategy for assigning bags to machines, which we define
below.
\end{enumerate}
In particular, since~$t=0$ is possible, (ii) implies that all bag sizes are at most
$\rhosandid(m)$.
Combining the fact that a total processing volume of~$m$ has to be accommodated with a
maximization over~$t$ yields the lower bound given in Theorem~\ref{thm:IdenticalWSand}.
In order to define the bag sizes leading to a matching upper bound, we further
restrict our choices when~$t \leq m/2$ machines fail. Of course, as we match
the lower bound, the restriction is no limitation but rather a simplification.
When~$t \leq m/2$ machines fail, we additionally assume that the machines~$t+1,
\ldots, 2t$ receive exactly two bags each: Assuming~$t\leq m/2$, the
\simplefold of these bags onto machines assigns bags~$i\geq t+1$ to machine
$i$, and bag~$i=1,\dots,t$ (recall machine~$i$ fails) to machine~$2t-i+1$.
Hence, bags~$1,\dots,t$ are ``folded'' onto machines~$2t,\dots,t+1$ (sic),
visualized in Figure~\ref{fig:folding}.
For given~$m$, let~$t^\star$ be an optimal adversarial choice for~$t$ in
\cref{theo:UBSandOnParallel}. Assuming there are bag sizes~$a_1,\ldots,a_m$
that match the bound~$\rhosandid(m)$ through simple folding, by (i) and (ii), we
precisely know the makespan on all machines after folding when~$t=t^\star$.
That fixes~$a_{i}+a_{2t+1-i}=\rhosandid(m)\cdot m/(m-t)$ for all~$i=1,\dots,t$
and~$a_{2t+1},\dots,a_m=\rhosandid(m)$, see Figure~\ref{fig:folding}. In
contrast to~\cite{SteinZ20}, we show that defining~$a_i$ for~$i=1,\dots,t$ to
be essentially a linear function of~$i$, and thereby fixing all bag sizes,
suffices to match~$\rhosandid(m)$. The word ``essentially'' can be dropped when
replacing~$\rhosandid(m)$ by~$\rhosandid$.
\begin{figure}
\begin{tikzpicture}[fct/.style={very thick, domain=0:1}, samples=100, font=\scriptsize]
\begin{scope}[xscale=5.58,yscale=2.66]
\draw[fill=black!5,draw=black!20] (0,0) rectangle (0.05,0.530504);
\draw[fill=black!5,draw=black!20] (0.05,0) rectangle (0.1,0.530504);
\draw[fill=black!5,draw=black!20] (0.1,0) rectangle (0.15,0.66313);
\draw[fill=black!5,draw=black!20] (0.15,0) rectangle (0.2,0.66313);
\draw[fill=black!5,draw=black!20] (0.2,0) rectangle (0.25,0.795756);
\draw[fill=black!5,draw=black!20] (0.25,0) rectangle (0.3,0.795756);
\draw[fill=black!25] (0.3,0) rectangle (0.35,0.928382);
\draw[fill=black!25] (0.35,0) rectangle (0.4,0.928382);
\draw[fill=black!25] (0.4,0) rectangle (0.45,1.06101);
\draw[fill=black!25] (0.45,0) rectangle (0.5,1.06101);
\draw[fill=black!25] (0.5,0) rectangle (0.55,1.19363);
\draw[fill=black!25] (0.55,0) rectangle (0.6,1.19363);
\draw[fill=black!25] (0.6,0) rectangle (0.65,1.2069);
\draw[fill=black!25] (0.65,0) rectangle (0.7,1.2069);
\draw[fill=black!25] (0.7,0) rectangle (0.75,1.2069);
\draw[fill=black!25] (0.75,0) rectangle (0.8,1.2069);
\draw[fill=black!25] (0.8,0) rectangle (0.85,1.2069);
\draw[fill=black!25] (0.85,0) rectangle (0.9,1.2069);
\draw[fill=black!25] (0.9,0) rectangle (0.95,1.2069);
\draw[fill=black!25] (0.95,0) rectangle (1,1.2069);
\draw[fill=black!25] (0.3,0.928382) rectangle (0.35,1.7241);
\draw[fill=black!25] (0.35,0.928382) rectangle (0.4,1.7241);
\draw[fill=black!25] (0.4,1.06101) rectangle (0.45,1.7241);
\draw[fill=black!25] (0.45,1.06101) rectangle (0.5,1.7241);
\draw[fill=black!25] (0.5,1.19363) rectangle (0.55,1.7241);
\draw[fill=black!25] (0.55,1.19363) rectangle (0.6,1.7241);
\draw[->] (0.1,0.8) to [bend left = 55] (0.25,1.3);
\node at (1.17,-0.1) {bags/};
\node at (1.17,-0.2) {machines};
\end{scope}
\begin{axis}[
axis lines=middle,
axis line style={->},
ylabel near ticks,
xlabel near ticks,
xtick = \empty,
extra x ticks = {0.025,0.275,0.975},
extra x tick labels = {1,$t=t^\star$,$m=20$},
extra y ticks={1.2069,1.7241},
extra y tick labels={$\bar{\rho}_{01}(20
$,$\frac{20\bar{\rho}_{01}(20)}{14}
$},
domain=0:1, ymin=0, xmax=1.15, xmin=0, ymax=2,
width=8cm]
\addplot[domain=0:2.36, gray, densely dotted, thick] (1,x);
\addplot[domain=0:0.6, gray, densely dotted, thick] (x,1.207);
\addplot[domain=0:0.3, gray, densely dotted, thick] (x,1.724);
\end{axis}
\end{tikzpicture}
\begin{tikzpicture} [fct/.style={thick, domain=0:1}, samples=100, font=\scriptsize]
\begin{scope}[xscale=5.58,yscale=2.66]
\draw (0.0,0) rectangle (0.05,0.53017766953);
\draw (0.05,0) rectangle (0.1,0.590533008589);
\draw (0.1,0) rectangle (0.15,0.650888347648);
\draw (0.15,0) rectangle (0.2,0.711243686708);
\draw (0.2,0) rectangle (0.25,0.771599025767);
\draw (0.25,0) rectangle (0.3,0.831954364826);
\draw (0.3,0) rectangle (0.35,0.892309703886);
\draw (0.35,0) rectangle (0.4,0.952665042945);
\draw (0.4,0) rectangle (0.45,1.013020382);
\draw (0.45,0) rectangle (0.5,1.07337572106);
\draw (0.5,0) rectangle (0.55,1.13373106012);
\draw (0.55,0) rectangle (0.6,1.19408639918);
\draw (0.6,0) rectangle (0.65,1.20710678119);
\draw (0.65,0) rectangle (0.7,1.20710678119);
\draw (0.7,0) rectangle (0.75,1.20710678119);
\draw (0.75,0) rectangle (0.8,1.20710678119);
\draw (0.8,0) rectangle (0.85,1.20710678119);
\draw (0.85,0) rectangle (0.9,1.20710678119);
\draw (0.9,0) rectangle (0.95,1.20710678119);
\draw (0.95,0) rectangle (1.0,1.20710678119);
\node at (.5,-0.1) {\phantom{bags/}};
\node at (.5,-0.2) {\phantom{machines}};
\end{scope}
\begin{axis}[
axis lines=middle,
axis line style={->},
ylabel near ticks,
xlabel near ticks,
xtick = \empty,
extra y ticks = {1.2071},
extra y tick labels = {$\bar{\rho}_{01}$},
extra x ticks = {0.2,0.4,0.586,0.8,1},
extra x tick labels = {$0.2$,$0.4$,$\beta\approx0.586$,$0.8$,$1$},
domain=0:1, ymin=0, xmax=1.15, xmin=0, ymax=2,
width=8cm]
\addplot[fct, domain=0:0.586] (x,0.5+1.2071*x) node[pos=0.6,above] {$\bar f$};
\addplot[fct, domain=0.586:1] (x,1.2071);
\addplot[domain=0:2.36, gray, densely dotted, thick] (1,x);
\addplot[domain=0:1.2071, gray, densely dotted, thick] (0.586,x);
\addplot[domain=0:0.586, gray, densely dotted, thick] (x,1.2071);
\end{axis}
\end{tikzpicture}
\caption{Left: The situations prior to and after folding the optimally sized
bags when~$t^\star$ machines fail. Right: The profile function~$\bar f$ and
the equidistant ``sampling'' to obtain actual bag sizes.} \label{fig:folding}
\end{figure}
A clean way of thinking about the bag sizes is through \emph{profile functions}
which reflect the distribution of load over bags in the limit case
$m\rightarrow\infty$. Specifically, we identify the set~$\{1,\dots,m\}$ with
the interval~$[0,1]$ and define a continuous non-decreasing profile function
$\bar{f}:[0,1]\rightarrow\mathbb{R_+}$ integrating to~$1$. A simple way of
getting back from the profile function to actual bag sizes of total size
approximately~$m$ is equidistantly ``sampling''~$\bar{f}$, i.e., defining
$a_i:=\bar{f}(\frac{i-1/2}m)$ for all~$i$.
Our profile function~$\bar f$ implements the above observations and ideas in
the continuous setting. Indeed, our choice
\[
\bar f(x)= \min\left\{\frac12+\rhoid\cdot x,\rhoid\right\}=\min\left\{\frac12+\frac{(1+\sqrt2)\cdot x}{2},\frac{1+\sqrt2}{2}\right\}
\]
is linear up to~$\beta = 2-\sqrt{2}=\lim_{m\rightarrow\infty}2t^\star/m$ and then
constantly~$\rhoid=\lim_{m\rightarrow\infty}\rhoid(m)$. We give some intuition
for why this function works using the continuous counterpart of folding. When
$t\leq t^\star$ machines fail, i.e., a continuum of machines with measure
$x\leq \beta/2$, we fold the corresponding part of~$\bar f$ onto the interval
$[x,2x]$, yielding a rectangle of width~$x$ and height~$\bar f(0)+\bar
f(2x)=2\bar f(x)$. We have to prove that the height does not exceed the optimal
makespan~$1/(1-x)$ by more than a factor of~$\rhoid$. Equivalently, we maximize~$2 \bar f(x)(1-x)$ (even over~$x \in \R$) and observe the maximum of~$\rhoid=(1+\sqrt2)/2$ at
$x=\beta/2$. When~$x\in(\beta/2,1/2]$, note that by folding we \emph{still}
obtain a rectangle of height~$2\bar f(x)$ (but width~$\beta-x$), dominating the
load on the other machines. Hence, the makespan is at most~$\rhoid/(1-x)$ for every~$x \in [0,1/2]$.
Directly ``sampling''~$\bar f$, we obtain a weaker bound (stated below) than that in Theorem~\ref{thm:IdenticalWSand}. The proof (and the algorithm) is substantially easier than that of the main theorem: Firstly, we translate the above continuous discussion into a discrete proof. Secondly, we exploit that~$\bar f$ is concave to show that the total volume of the ``sampled'' bags is larger than~$m$ for every~$m \in \N$.
Later, we make use of the corresponding simpler algorithm.
Let $\sandalg_{01}$ denote the algorithm that creates~$m$ bags of size
$a_i:=\bar{f}(\frac{i-1/2}m)$, for~$i \in \{1,\ldots,m\}$.
\begin{restatable}{theorem}{theoidenticalsteps}\label{prop:identical-steps}
$\sandalg_{01}$ is~$\rhoid$-robust for \srs with speeds in~$\{0,1\}$ and
infinitely many infinitesimal jobs for all~$m\geq 1$.
\end{restatable}
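Before turning to the refined construction matching~$\rhoid(m)$, we illustrate $\sandalg_{01}$ with a short Python sketch (ours; the tolerance accounts for floating-point arithmetic). It samples the profile function, checks that the sampled bags can hold the total volume~$m$, and verifies the folding bound~$\rhoid\cdot m/(m-t)$ for all~$t\leq m/2$.
\begin{verbatim}
import math

RHO = (1 + math.sqrt(2)) / 2        # limiting robustness factor
EPS = 1e-9                          # tolerance for float comparisons

def profile(x):
    # Profile function f(x) = min(1/2 + RHO * x, RHO) on [0, 1].
    return min(0.5 + RHO * x, RHO)

def sandalg01_bags(m):
    # Equidistant "sampling": a_i = f((i - 1/2) / m), non-decreasing.
    return [profile((i - 0.5) / m) for i in range(1, m + 1)]

def folded_makespan(bags, t):
    # Simple folding: bag i (i <= t) joins bag 2t + 1 - i; the bags
    # 2t + 1, ..., m stay alone on machines 2t + 1, ..., m.
    pairs = [bags[i] + bags[2 * t - 1 - i] for i in range(t)]
    return max(pairs + bags[2 * t:])

for m in range(1, 60):
    bags = sandalg01_bags(m)
    # Midpoint sampling of the concave profile over-counts its
    # integral, so the bags can hold the whole volume m.
    assert sum(bags) >= m - EPS
    for t in range(m // 2 + 1):
        opt = m / (m - t)           # optimal makespan with t failures
        assert folded_makespan(bags, t) <= RHO * opt + EPS
\end{verbatim}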
As the profile function disregards specific machines, obtaining bag sizes
through this function seems too crude to match~$\rhoid(m)$ for every~$m$.
Indeed, our proof of \Cref{thm:IdenticalWSand} is based on a much more
careful choice of the bag sizes.
\section{Speed-Robust Scheduling with Discrete Jobs}\label{sec:JobsOnRelated}
We focus in this part on the general case of \srs. By a scaling argument, we
may assume w.l.o.g.\ that the machine speeds satisfy~$\sum_{i=1}^m s_i =
\sum_{j=1}^n p_j$. We first notice that obtaining a robust algorithm is not
trivial in this case, as even an algorithm that minimizes the largest bag size
may fail to have a constant robustness factor.
\begin{lemma}\label{lem:optnotrobust}
Algorithms for \srs that minimize the size of the largest bag may not have a constant robustness factor.
\end{lemma}
\begin{proof}
Consider any integer~$k\geq 1$, a number of machines~$m=k^2+1$,~$k^2$ unit-size
jobs and one job of processing time~$k$. The maximum bag size is equal to~$k$,
so an algorithm building~$k+1$ bags of size~$k$ respects the conditions of the
lemma. Consider the speed configuration where~$k^2$ machines have speed~$1$ and
one machine has speed~$k$. It is possible to schedule all jobs within a
makespan~$1$ on these machines. However, the algorithm must either place a bag on
a machine of speed 1 or all bags on the machine of speed~$k$, hence leading to
a makespan of~$k$, and proving the result. Note that by adding~$k^2$ unit-size
jobs, we can build a similar example where the algorithm does not leave empty
bags, which is always beneficial.
\end{proof}
Note that such algorithms are~$(2-\frac2m)$-robust with machine speeds in~$\{0,1\}$:
once the number~$\nrmach$ of speed-1 machines is revealed, simply combine the
two smallest bags repetitively if~$\nrmach<m$. The makespan obtained is then at
most twice the average load on~$\nrmach+1$ machines, so~$\frac{2\nrmach}{\nrmach+1}$
times the average load on~$\nrmach$ machines.
One feature of the algorithm considered in \Cref{lem:optnotrobust} that is
exploited in the lower bound is that the bag sizes are too unbalanced. A way
to prevent this behavior would be to maximize the size of the minimum bag as
well. But this criterion becomes useless if we consider the same example as
above with~$m=k^2+2$. Then, the minimum bag size is~$0$ as there are more
machines than jobs, and the same lower bound holds.
Hence, in order to obtain a robust algorithm in the general case, we focus on
algorithms that aim at balanced bag sizes, for which the best lower bound is
described in the following lemma. An algorithm is called balanced if, for an instance of unit-size jobs, the bag sizes created by the algorithm differ by at most one unit. In particular, a balanced algorithm creates~$m$ bags of size~$k$ when
confronted with~$mk$ unit-size jobs and~$m$ bags. For balanced algorithms,
we give a lower bound in Lemma~\ref{lem:balancedlowerbound} and a matching upper bound in Theorem~\ref{thm:speedLPT}.
\begin{lemma}\label{lem:balancedlowerbound}
No balanced algorithm for \srs can obtain a better robustness factor
than~$2-\frac{1}{m}$ for any~$m\geq 1$.
\end{lemma}
\begin{proof}
Consider any~$m\geq 1$ and~$km$ unit-size jobs with~$k=2m-1$. Let the adversary
choose one machine of speed~$m$ and~$m-1$ machines of speed~$2m$; an optimal schedule
puts~$m$ jobs on the slow machine and~$2m$ jobs on each of the fast machines,
achieving makespan~$1$. An algorithm that uses evenly balanced bags builds~$m$ bags of size
$k$. It must either place a bag of size~$k$ on the machine of speed~$m$ or~$2k$
jobs on a machine of speed~$2m$. In any case, the robustness factor is at
least~$2-\frac1m$.
\end{proof}
We now show that this lower bound is attained by a simple algorithm, commonly
named as {\em Longest Processing Time First} (\lpt) which considers jobs in
non-increasing order of processing times and assigns each job to the bag that
currently has the smallest size, i.e., the minimum allocated processing time.
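For reference, the following minimal Python sketch (ours; not part of the formal analysis) implements the first-stage bag construction of \lpt; the second-stage bag-to-machine assignment is computed separately once the speeds are revealed.
\begin{verbatim}
import heapq

def lpt_bags(processing_times, m):
    # First stage of LPT: consider jobs in non-increasing order of
    # processing time and put each into a currently smallest bag
    # (min-heap keyed by bag size).
    heap = [(0.0, i) for i in range(m)]   # (bag size, bag index)
    bags = [[] for _ in range(m)]
    for p in sorted(processing_times, reverse=True):
        size, i = heapq.heappop(heap)
        bags[i].append(p)
        heapq.heappush(heap, (size + p, i))
    return bags

# Example: five jobs into two bags -> bag sizes 8 and 7.
assert sorted(map(sum, lpt_bags([5, 4, 3, 2, 1], 2))) == [7, 8]
\end{verbatim}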
\begin{theorem}\label{thm:speedLPT}
\lpt is~$(2-\frac{1}{m})$-robust for \srs for all~$m\ge1$.
\end{theorem}
\begin{proof}
While we may assume that the bags are allocated optimally to the machines once
the speeds are given, we use a different allocation for the analysis. This
can only worsen the robustness factor.
Consider the~$m$ bags and let~$b$ denote the size of a largest bag,~$B$, that
consists of at least two jobs. Consider all bags of size strictly larger
than~$b$, each containing only a single job, and place them on the same machine
as \opt{} places the corresponding jobs. We define for each machine~$i$ with
given speed~$s_i$ a capacity bound of~$(2-\frac{1}{m}) \cdot s_i$. Then, we
consider the remaining bags in non-increasing order of bag sizes and iteratively assign
them to the -- at the time of assignment --
least loaded machine with sufficient remaining capacity.
By the assumption~$\sum_{i=1}^m s_i = \sum_{j=1}^n p_j$ and the capacity
constraint~$(2-\frac{1}{m}) \cdot s_i$, it is sufficient to show that \lpt can
successfully place all bags.
The bags larger than~$b$ fit by definition as they contain a single job.
Assume by contradiction that there is a bag which cannot be assigned. Consider
the first such bag and let~$T$ be its size. Let~$k<m$ be the number of bags
that have been assigned already. Further, denote by~$w$ the size of a smallest
bag. Since we used \lpt in creating the bags, we can show that~$w \geq
\frac{1}{2}b$. To see that, consider bag~$B$ and notice that the smallest job
in it has a size at most~$\frac 12b$. When this job was assigned to its
bag,~$B$ was a bag of smallest size; since jobs are allocated in \lpt-order,
this smallest job of~$B$ was added last, so~$B$ already had size at
least~$b-\frac 12b = \frac 12b$ at that moment. As the minimum bag size never
decreases afterwards, the size of a smallest bag
is~$w\geq \frac{1}{2}b\geq \frac 12 T$, where the second inequality holds as
all bags larger than~$b$ can be placed.
We use this inequality to give a lower bound on the total remaining capacity on
the~$m$ machines when the second-stage algorithm fails to place the~$(k+1)$\nobreakdash-st bag.
The~$(m-k)$ bags that were not placed have a combined volume of at
least~$V_\ell = (m-k-1)w+T \geq (m-k+1)\frac{T}{2}$. The bags that were placed
have a combined volume of at least~$V_p = kT$. The remaining capacity is then
at least~$C =(2-\frac{1}{m}) V_\ell + (1-\frac{1}{m})V_p$, and we have
\begin{align*}
C & = \left(2-\frac 1m \right)V_\ell+\left(1-\frac 1m \right)V_p
\geq \left(2-\frac 1m \right)(m-k+1)\frac{T}{2} + \left(1-\frac 1m \right)kT \\
& = (m-k+1)T - (m-k+1)\frac T{2m} + kT -\frac 1m kT
= m T + T - \frac{m+k+1}{2m}T\\
&\geq m T,
\end{align*}
where the last inequality uses~$k\leq m-1$.
Thus, there is a machine with remaining capacity at least~$T$, which contradicts the
assumption that the bag of size~$T$ does not fit.
\end{proof}
\section{Speed-Robust Scheduling with Equal-Size Jobs}
We consider the special case in which jobs have equal processing times. By a
scaling argument, we may assume that all jobs have unit processing time. Before
focusing on a specific speed setting, we show that in both settings we can use
any algorithm for infinitesimal jobs with a proper scaling to obtain a
robustness factor which is degraded by a factor decreasing with~$n/m=:\avg$.
Assume~$\avg>1$, as otherwise the problem is trivial.
We define the algorithm \sandtobricks that builds on the optimal algorithm for
infinitesimal jobs,~$\sandalg^*$, which is~$\sandalg$ for general speeds
(Section~\ref{sec:SandOnRelated}) or~$\sandalg_{01}$ for speeds
in~$\{0,1\}$~(Section~\ref{sec:SandOnParallel}). Let~$a_1,\ldots,a_m$ be the
bag sizes constructed by~$\sandalg^*$ scaled such that a total processing
volume of~$n$ can be assigned, that is,~$\sum_{i=1}^m a_i = n$. For unit-size
jobs, we define bag sizes as~$a'_i=(1+\frac{1}{\avg}) \cdot a_i$ and assign
the jobs greedily to the bags.
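As a small illustration with our own numbers: for $n=20$ unit-size jobs and $m=2$, the bag sizes of $\sandalg^*$ scaled to total volume~$20$ are $(20/3, 40/3)$, so $a'=(22/3, 44/3)$, and the greedy assignment places, e.g., $7$ jobs in the first bag and the remaining $13$ in the second; the slack of $n/\avg=m$ units guaranteed by the scaling is what makes this always possible.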
\begin{lemma}\label{lem:speedScaledSand}
For~$n$ jobs with unit processing times and~$m$ machines,
\sandtobricks for \srs is $(1+\frac{1}{\avg})\cdot \rho(m)$-robust,
where~$\avg=n/m$ and~$\rho(m)$ is the robustness factor for
$\sandalg^*$ for~$m$~machines.
\end{lemma}
\begin{proof}
To prove the lemma, it is sufficient to show that all~$n$ unit-size jobs can be
assigned to the constructed bags of sizes~$a'_1,\ldots,a'_m$. Suppose there is
a job~$j$ that does not fit into any bag without exceeding the bag size. The
remaining volume in the bags is at least the total capacity minus the
processing volume of all jobs except~$j$, that is,
\[
\sum_{i=1}^m a'_i - (n-1) = \left(1+\frac{1}{\avg}\right) \cdot n - n +1
>\frac{1}{\avg} \cdot n = m\,.
\]
Hence, there must exist some bag that has a remaining capacity of at least~$1$ and can fit job~$j$.
\end{proof}
\subsection{General Speeds}\label{sec:BricksOnSpeed}
For unit-size jobs, we show how to beat the factor~$2-\frac1m$ (Theorem
\ref{thm:speedLPT}) for \srs with a~$1.8$-robust algorithm. For~$m=2$ and~$m=3$,
we give algorithms with best possible robustness factors~$\frac43$ and~$\frac32$,
respectively.
We argued earlier that the algorithm \lpt has robustness factor~$2-\frac1m$
(Theorem~\ref{thm:speedLPT}), even for unit-size jobs. However, in this case we
can show, for a slightly different algorithm \oddalgo a robustness factor
increasing with the ratio between the number of jobs and the number of
machines. \oddalgo builds bags of three possible sizes: for~$q\in \mathbb N$
such that~$\avg=\frac nm\in[2q-1,2q+1]$, bags of sizes~$2q-1$ and~$2q+1$ are built,
with possibly one additional bag of size~$2q$.
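For example, for $n=17$ and $m=5$ we have $\avg=3.4\in[3,5]$ and $q=2$, so \oddalgo may build four bags of size~$3$ and one bag of size~$5$.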
\begin{restatable}{lemma}{LemSpeedLPTavg}\label{lem:speedLPTavg}
For~$n$ unit-size jobs,~$m$ machines and~$q\in\mathbb N$ with~$\lambda\in[2q-1,2q+1]$,
\oddalgo is~$(2-\frac{1}{q+1})$-robust for \srs.
\end{restatable}
\begin{proof}[Proof sketch]
Using the set of bags built by \oddalgo, we can show in a manner similar to the
proof of \Cref{thm:speedLPT} that the robustness factor is smaller than~$2$. The
worst case happens for instance when a bag of size~$2q+1$ needs to be scheduled
on a machine of speed~$q+1$.
\end{proof}
Notice that the robustness guarantees in Lemmas \ref{lem:speedScaledSand} and
\ref{lem:speedLPTavg} are functions that are decreasing in~$\avg$ and increasing in
$\avg$, respectively. By carefully choosing between \oddalgo and \sandtobricks, depending on
the input, we obtain an improved algorithm for unit-size jobs. For~$\avg< 8$, we
execute \oddalgo, which yields a robustness factor of at most~$1.8$ by
\Cref{lem:speedLPTavg}, as~$q\leq 4$ for~$\avg<8$. Otherwise, when~$\avg \geq 8$,
we run \sandtobricks having a guarantee of~$\frac{9}{8}\cdot \frac{e}{e-1}
\approx 1.78$ by Lemma~\ref{lem:speedScaledSand}.
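Explicitly, the two branches combine as follows (using the bound $\rho(m)\leq\frac{e}{e-1}$ implicit above):
\begin{align*}
\avg < 8 &\;\Rightarrow\; q \leq 4 \;\Rightarrow\; 2-\frac{1}{q+1} \leq 2 - \frac{1}{5} = 1.8\,,\\
\avg \geq 8 &\;\Rightarrow\; \left(1+\frac{1}{\avg}\right)\cdot\frac{e}{e-1} \leq \frac{9}{8}\cdot\frac{e}{e-1} \approx 1.78 < 1.8\,.
\end{align*}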
\begin{theorem}\label{theo:speedbrickUB}
There is an algorithm for \srs with unit-size jobs that has a robustness
factor of at most~$1.8$ for any~$m\geq 1$.
\end{theorem}
We give a general lower bound on the best achievable robustness factor. Note
that the lower bound of~$\rhosand$ from \Cref{lem:speedsandLB} remains valid in
this setting and is larger than~$1.5$ for~$m\geq 6$.
\begin{restatable}{lemma}{LemmaLBOneFive}\label{lem:LB3:1.5}
For every~$m\geq3$, no algorithm for \srs can have a robustness factor smaller
than~$1.5$, even restricted to unit-size jobs.
\end{restatable}
For special cases with few machines, we give best possible algorithms which
match the previously mentioned lower bounds. We also show for~$m=6$ a lower
bound larger than~$\rhosandm{6}>1.5$; the details are deferred to the appendix. Similar
lower bounds have been found by computer search for many larger values of~$m$, for
which the gap to~$\rhosand$ tends to zero as~$m$ grows.
\begin{restatable}{lemma}{genSpeedsTwoThree} \label{lem:speedTwoThreeMach}
The optimal robustness factor for \srs with unit-size jobs is~$4/3$
on~$m=2$ machines and~$3/2$ on~$m=3$ machines, and it is larger
than~$\rhosandm{6}>1.5$ for~$m=6$.
\end{restatable}
\subsection{Speeds in \texorpdfstring{$\{0,1\}$}{0,1}}\label{sec:BricksUB}
We consider \srs of unit-size jobs on machines of speed~$0$ or~$1$.
This type of instance is of particular interest as the currently best known
lower bound for discrete jobs is~$\frac43$ and uses only unit-size
jobs~\cite{SteinZ20}. We present an algorithm with a matching upper bound.
\begin{theorem}\label{thm:BricksUB}
There exists a~$\frac 4 3$-robust algorithm for \srs with~$\{0,1\}$-speeds and unit-size jobs.
\end{theorem}
In the proof, we handle different cases depending on~$\nrbags$
and~$\opt{m}$, using carefully tailored methods. Recall that~$\opt{m}$ denotes the optimal makespan on~$m$ machines.
We only give an overview of the four cases and defer all details to \Cref{apx:BricksUB}.
When~$\opt{\nrbags}\geq 11$, we use \sandtobricks based on \sandalgid and
obtain a robustness factor of at most~$4/3$ by Lemma~\ref{lem:speedScaledSand}. The
proof uses a volume argument to show that jobs fit into the scaled optimal bag
sizes for infinitesimal jobs (\sandalgid), even after rounding bag sizes down
to the nearest integer. Here the loss in volume due to rounding is upper
bounded by~$1$ per bag.
When~$\opt{\nrbags}\in\{9,10\}$, this method is too crude. We refine it to show
that for~$m\geq 40$ it is still possible to properly scale bag sizes from
\sandalgid and round them to integral sizes such that all~$n$ unit-size jobs
can be placed. We exploit in the analysis an amortized bound on the loss due to
rounding over consecutive bags.
For the case that~$\opt{\nrbags}\leq 8$ and~$m\geq 50$ we use a more direct,
constructive approach and give a strategy that utilizes at most four different
bag sizes. More precisely, if~$\optm \in \{1,2\}$, then packing bags according
to the optimal schedule on~$\nrbags$ machines is~$\frac43$-robust. If~$3 \leq
\optm \leq 8$, we create bags so that roughly~$\frac 2 5$ of the bags have
size either~$\ceil{\frac 2 3 \optm}$ or~$\ceil{\frac 2 3 \optm}-1$,~$\frac 1
5$ of the bags have size~$\optm$, and~$\frac 2 5$ of the bags have
size~$\floor{\frac 4 3 \optm}$. We show that this
strategy~is~$\frac{4}{3}$-robust.
The remaining cases,~$\opt{\nrbags}\leq 10$ and~$m\leq 50$, can be verified by
enumerating over all possible instances and using an integer linear program to
verify that there is a solution of bag sizes that is~$\frac{4}{3}$-robust.
\section*{Concluding Remarks}
In this work, we have been able to establish matching lower and upper bounds
for the \srs problem with infinitesimal jobs and design optimal algorithms when
speeds are restricted to~$\{0,1\}$ and either infinitesimal jobs or equal-size
jobs. We believe that the insights from our optimal algorithms will be useful
to improve the more general~upper~bounds.
We have also shown that randomization does not help when the speeds belong to
$\{0,1\}$ and jobs are infinitesimal. However, the other known lower bounds do
not hold in a randomized setting, so designing better randomized algorithms
remains an interesting challenge.
We conclude with an observation about adversarial strategies which might be
useful for further research. We give two somewhat orthogonal examples proving
the lower bound of~$\frac 43$ for \srs with unit processing time jobs and
speeds from~$\{0,1\}$. In both examples, there are only two relevant
adversarial strategies: either one machine fails or none. This may seem
sub-optimal, but the lower bound of~$\frac 43$ is tight for unit-size jobs
(\Cref{thm:BricksUB}). Further, we show in the proof of
\Cref{thm:IdenticalWSand} (\Cref{apx:SandOnIdentical}) that, for infinitesimal
jobs, an adversary only requires two strategies to force all algorithms to have
a robustness factor at least~$\rhosandid(m)$, which is optimal.
{\em Example 1} (from \cite{SteinZ20}). Consider~$2m$ jobs and~$m>2$
machines. If an algorithm places~$2$ jobs per bag, let one machine fail. This
leads to a makespan of~$4$ while the optimal makespan is~$3$ which gives a lower
bound of~$\frac 43$. Otherwise, one bag has at least three jobs, and, if no
machine fails, the algorithm's makespan is~$3$ while the optimal makespan is~$2$
yielding a lower bound of~$\frac 32$.
{\em Example 2.} Our new dual example has~$3m$ jobs for~$m>3$ machines.
If an algorithm places~$3$ jobs per bag, let one machine fail. This leads to a
makespan of~$6$ while the optimal makespan is~$4$, implying a lower bound of
$\frac 32$. Otherwise, one bag has at least~$4$ jobs, and if no machine fails,
the algorithm's makespan is~$4$ whereas the optimal makespan is~$3$, which again
gives a lower bound~of~$\frac 43$.
\bibliographystyle{abbrvnat}
\section{Introduction}
Spin current plays central roles in spintronics.
Moreover, intrinsic spin current in solids has been shown to induce several peculiar interactions.
As spin current breaks inversion symmetry while keeping time reversal symmetry, the interactions induced by spin current have the same symmetry \cite{TataraReview19}, like the antisymmetric Dzyaloshinskii-Moriya (DM) interaction between spins.
It was theoretically shown that the DM interaction constant is $D_i^a=\hbar a^3 j_{{\rm s},i}^a$, where $i$ and $a$ represent directions in space and spin, respectively, and $a$ is the lattice constant. Namely, it is proportional to the expectation value of the spin current $j_{{\rm s},i}^a$ intrinsically induced by the spin-orbit interaction when inversion symmetry is broken \cite{Kikuchi16}.
In this picture, the origin of the DM interaction is the Doppler shift due to the intrinsic flow of spin.
The picture naturally explains the Doppler shift of spin waves in the presence of the DM interaction observed in Refs. \cite{Iguchi15,Seki16}.
The spin current picture was explored further in Refs. \cite{FreimuthDM17,Koretsune18}.
The relation between the DM constant and spin current has been experimentally confirmed by injecting external spin current \cite{Karnad18,Kato19}.
In the case of insulator magnets, Ref. \cite{Katsura05} pointed out that an electric polarization is induced by the vector chirality, which is equivalent to an intrinsic spin current.
Similar flexo-magnetoelectric effects due to intrinsic spin current were discussed in Refs. \cite{62Bruno,Zvezdin12}.
Second harmonic generation arising from intrinsic spin current under the action of an optical electromagnetic wave was also discussed \cite{Wang1,Werake1,Karashtin1,Karashtin3}.
The Doppler shift picture can be applied for discussing anomalous optical properties of solids.
When a spin-orbit interaction breaking inversion symmetry, like the Rashba spin-orbit interaction, coexists with interactions breaking time-reversal invariance, like the coupling to a magnetization, intrinsic charge flow is allowed, resulting in a directional dichroism of light \cite{Shibata16,Shibata18}.
The dichroism here is induced by the effective gauge field proportional to $\uv\equiv \bm{\alpha}_{\rm R}\times\bm{M}$, where $\bm{\alpha}_{\rm R}$ and $\bm{M}$ denote the Rashba field and the magnetization, respectively.
The effective gauge field induces charge current, and the effect is viewed as a Doppler shift of light, as mentioned in Ref. \cite{Sawada05}.
It was in fact demonstrated theoretically that the Rashba interaction and magnetization leads to a coupling proportional to $\uv\cdot(\Ev\times\Bv)$ between the electric and magnetic fields, $\Ev$ and $\Bv$ \cite{Kawaguchi16}.
This coupling is equivalent to switching to a moving frame with velocity $\uv$.
As seen above, various intrinsic flows or effective gauge fields causing Doppler shifts in solids have been detected as asymmetric transport effects \cite{TataraReview19}.
Spin current is manipulated by spin gauge fields \cite{TataraReview19}, and the spin-orbit interaction discussed above is an example of a spin gauge field driving spin current.
Slowly-varying spin textures also act as spin gauge field for conduction electron \cite{Volovik87,TKS_PR08}.
In this paper, we study nonlinear effects of intrinsic spin currents induced by spin gauge field on the optical properties of metallic systems.
We include the spin gauge field to the second order, and derive linear response expression of the optical conductivity.
We consider the case of uniform spin current, i.e., spin gauge field uniform in space and time.
We have in mind the case where one spin gauge field is due to an intrinsic spin-orbit interaction like the Rashba interaction and another is generated by magnetization structures.
As the magnetization structure, we consider a spiral where the spins rotate with a constant pitch.
\section{Formalism}
The spin gauge field approach for spin structures is valid in the adiabatic regime where the electron spin follows the magnetization structure due to $sd$ exchange interaction \cite{TataraReview19}.
The present manuscript therefore studies a strong $sd$ exchange case, different from the perturbative treatment of the $sd$ exchange interaction carried out in Ref. \cite{Shibata16}.
The model we consider is the $sd$ exchange model described by a Hamiltonian
\begin{align}
H&=\intr c^\dagger \lt(-\frac{\nabla^2}{2m}-\ef-M\nv(\rv)\cdot\sigmav \rt)c
\end{align}
where $c$ and $c^\dagger$ are electron field operators, $\ef$ is the Fermi energy, $m$ is the electron mass, $M$ is the spin splitting energy due to the $sd$ exchange interaction, and $\nv(\rv)$ is a unit vector field representing the magnetization direction, $\sigmav$ being a vector of Pauli matrices.
We apply a unitary transformation in the spin space to diagonalize the exchange interaction \cite{TKS_PR08}.
The new electron field in the transformed rotated frame is $\tilde{c}\equiv U^{-1}(\rv)c$, where $U(\rv)$ is a $2\times 2$ unitary matrix, which is chosen to satisfy $U^{-1}(\nv(\rv)\cdot\sigmav)U=\sigma_z$.
The Hamiltonian in the rotated frame reads
\begin{align}
H&=\intr \tilde{c}^\dagger \lt(-\frac{1}{2m}\lt(\nabla_i+iA_{{\rm s},i}\rt)^2-\ef-M\sigma_z \rt)\tilde{c}
\end{align}
where $A_{{\rm s},i}\equiv -iU^{-1}\nabla_i U$ is the spin gauge field of a $2\times2$ matrix.
For uniform spin gauge fields, the conductivity tensor at linear response to the applied electric field with angular frequency $\Omega$ can be calculated straightforwardly to all orders in the spin gauge field, because the spin gauge field carries neither wave vector nor angular frequency. Nevertheless, we focus on the second-order effects at the end.
We denote in this section the uniform total spin gauge field as $A_{{\rm s},i}^\alpha$, where $i$ and $\alpha$ are directions in space and spin space, respectively, and derive the expression for the conductivity tensor.
Later in the next section various spin gauge fields are introduced, and the results in this section should be read replacing $A_{{\rm s},i}^\alpha$ by the sum of all the spin gauge fields of interest.
The conductivity tensor is
\begin{align}
\sigma_{ij}(\Omega)&=\frac{1}{-i\Omega}\sumom\sumkv
\tr[v_i G_{\kv\omega}v_j G_{\kv,\omega+\Omega}+\delta_{ij}G_{\kv\omega}]^<
\label{sigmadef}
\end{align}
Here the velocity operator is
\begin{align}
v_i &= \frac{k_i}{m}+A_{{\rm s},i}^\alpha \sigma_\alpha
\end{align}
and the Green's function includes spin gauge field $A_{\rm s}$ and $^<$ denotes the lesser component.
The last term of Eq. (\ref{sigmadef}) arises from the 'diamagnetic' contribution to the electric current, namely the second term of $j_i=v_i+A_i$, where $A$ represents the gauge field of electromagnetism.
Retarded Green's function is
\begin{align}
G^\ret_{\kv\omega} &= \frac{1}{\omega-\frac{k^2}{2m}-\sum_{\alpha}
(\gammav_k)_\alpha\sigma_\alpha +i\eta}
\end{align}
where
\begin{align}
(\gammav_k)_\alpha & \equiv M_\alpha+\sum_i k_i A_{{\rm s},i}^\alpha
\end{align}
$\eta$ represents a small positive imaginary part due to the electron damping, and
$\Mv\equiv M\sigma_z$ is the diagonalized spin splitting.
(Being in the rotated frame, $\Mv$ is diagonalized along the $z$ axis.)
Evaluating the lesser component, we have
\begin{align}
\sigma_{ij}(\Omega)
&= \frac{1}{-i\Omega}\sumom\sumkv \nnr
& \tr\biggl[
(f(\omega+\Omega)-f(\omega)) v_i G^\ret_{\kv\omega}v_j G^\adv_{\kv,\omega+\Omega}
+f(\omega) v_i G^\adv_{\kv\omega}v_j G^\adv_{\kv,\omega+\Omega}
-f(\omega+\Omega) v_i G^\ret_{\kv\omega}v_j G^\ret_{\kv,\omega+\Omega}
\nnr
& +\delta_{ij}f(\omega) \lt( G^\adv_{\kv\omega}-G^\ret_{\kv\omega}\rt)
\biggr]
\label{sigmaijra}
\end{align}
where $f(\omega)\equiv [e^{\beta \omega}+1]^{-1}$ is the Fermi distribution function ($\beta=(\kb T)^{-1}$ is the inverse temperature).
The trace over spin ($a,b=\ret,\adv$),
\begin{align}
\sumkv \tr[v_i G^a_{\kv\omega}v_j G^b_{\kv,\omega'}]
& \equiv \sumkv K^{ab}_{ij} (\kv,\omega,\omega')
\end{align}
is written defining
\begin{align}
G^a_{\kv\omega}=\frac{\pi_{k\omega}^a+\gammav_k\cdot\sigma}{\Pi_{k\omega}^a}
\end{align}
where ($\ekv=\frac{k^2}{2m}-\ef$)
\begin{align}
\pi_{k\omega}^\ret & \equiv \omega-\ekv+i\eta , &
\Pi_{k\omega}^a \equiv (\pi_{k\omega}^a)^2 -\gamma_k^2 ,
\end{align}
as
\begin{align}
K^{ab}_{ij}&(\kv,\omega,\omega') =
\frac{1}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b } \nnr
& \times
\tr \biggl[ \lt(\frac{k_i}{m}+{\Av}_{{\rm s},i}\cdot\sigmav\rt) \lt( \pi_{k\omega}^a+(\Mv+k_k {\Av}_{{\rm s},k})\cdot\sigmav \rt)
\lt(\frac{k_j}{m}+{\Av}_{{\rm s},j}\cdot\sigmav\rt)
\lt( \pi_{k,\omega'}^b+(\Mv+k_l {\Av}_{{\rm s},l})\cdot\sigmav \rt) \biggr]
\end{align}
Focusing on the second order contribution in the spin gauge field, we obtain
\begin{align}
\sumkv & K^{ab}_{ij}(\kv,\omega,\omega')
= 2\biggl[
\frac{1}{m^2} (\kappa_{ij}^{(2)ab}+M^2\kappa_{ij}^{(0)ab})
+ \frac{1}{m} \lt( \kappa_i^{ab} ({\Av}_{{\rm s},j}\cdot\Mv) +\kappa_j^{ab} ({\Av}_{{\rm s},i}\cdot\Mv) \rt) \nnr
& + \frac{1}{m}(\kappa_{ik}^{(1)ab}({\Av}_{{\rm s},j}\cdot{\Av}_{{\rm s},k})+\kappa_{jk}^{(1)ab}({\Av}_{{\rm s},i}\cdot{\Av}_{{\rm s},k}) )
+ \frac{2}{m^2}\kappa_{ijk}^{ab} ({\Av}_{{\rm s},k}\cdot\Mv) +
\frac{1}{m^2}\kappa_{ijkl}^{ab} ({\Av}_{{\rm s},k}\cdot{\Av}_{{\rm s},l})\nnr
&
+\lt[ \lambda^{(2)ab} -M^2 \lambda^{(0)ab} \rt] ({\Av}_{{\rm s},i}\cdot{\Av}_{{\rm s},j})
+ 2\lambda^{(0)ab}({\Av}_{{\rm s},i}\cdot\Mv)({\Av}_{{\rm s},j}\cdot\Mv)
- i \Omega \lambda^{(0)ab}({\Av}_{{\rm s},i}\times {\Av}_{{\rm s},j})\cdot\Mv\biggr]
\label{Kijcalculation2}
\end{align}
where the coefficients are
\begin{align}
\sumkv \frac{1}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
& \equiv \lambda^{(0)ab}(\omega,\omega') ,
\sumkv \frac{\pi_{k\omega}^a\pi_{k\omega'}^b}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
\equiv \lambda^{(2)ab}(\omega,\omega') \nnr
\sumkv \frac{k_ik_j}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
& \equiv \kappa_{ij}^{(0)ab}(\omega,\omega') , \sumkv k_ik_j \frac{\pi_{k\omega}^a+\pi_{k\omega'}^b}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
\equiv \kappa_{ij}^{(1)ab}(\omega,\omega') , \sumkv k_ik_j \frac{\pi_{k\omega}^a\pi_{k\omega'}^b}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
\equiv \kappa_{ij}^{(2)ab}(\omega,\omega') \nnr
\sumkv k_i \frac{\pi_{k\omega}^a+\pi_{k\omega'}^b}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
& \equiv \kappa_{i}^{ab}(\omega,\omega') ,
\sumkv k_ik_jk_k \frac{1}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
\equiv \kappa_{ijk}^{ab}(\omega,\omega') ,
\sumkv k_ik_jk_k k_l \frac{1}{ \Pi_{k\omega}^a\Pi_{k,\omega'}^b }
\equiv \kappa_{ijkl}^{ab}(\omega,\omega')
\end{align}
Note that contributions odd in $\kv$ such as $ \kappa_{i}^{ab}$ and $\kappa_{ijk}^{ab}$ are finite because the energy $\ekv\pm\gamma_k$ contains a contribution odd in $\kv$ when $M\neq0$.
The summations over $\kv$ are evaluated by expanding $\gamma_k$ in $\Pi_{k\omega}^a$ with respect to the spin gauge field.
The odd terms turn out to be
\begin{align}
\kappa_{i}^{ab}(\omega,\omega')
&= \kappa_{1}^{ab}(\omega,\omega') ( {\Av}_{{\rm s},i} \cdot\Mv) \nnr
\kappa_{ijk}^{ab}(\omega,\omega')
&=\kappa_3^{ab}(\delta_{ij} {\Av}_{{\rm s},k}+\delta_{ik} {\Av}_{{\rm s},j}+\delta_{jk} {\Av}_{{\rm s},i})\cdot \Mv
\end{align}
where $\kappa_{1}^{ab}$ and $\kappa_{3}^{ab}$ do not depend on the spin gauge field to the lowest order.
Other coefficients depend on the gauge field to the lowest order as ($\mu=0,1,2$)
\begin{align}
\lambda^{(\mu)ab}
&= \lambda^{(\mu)ab}_{0}+ \lambda^{(\mu)ab}_{2}\sum_{k}({\Av}_{{\rm s},k}\cdot\Mv)^2 \nnr
\kappa_{ij}^{(\mu)ab}
&= \delta_{ij} [ \kappa_{0}^{(\mu)ab} +\kappa_{{\rm d}2}^{(\mu)ab} \sum_{k}({\Av}_{{\rm s},k}\cdot\Mv)^2 ]
+\kappa_{2}^{(\mu)ab} ({\Av}_{{\rm s},i} \cdot \Mv)({\Av}_{{\rm s},j}\cdot\Mv) \nnr
\kappa_{ijkl}^{ab}
&= \kappa_4^{ab}(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl} +\delta_{il}\delta_{jk})
\end{align}
where ${\rm d}$ denotes diagonal and $\lambda^{(\mu)ab}_{\nu}$, $\kappa_{\nu}^{(\mu)ab} $ ($\nu=0,2, {\rm d}2$) and $ \kappa_4^{ab}$ do not depend on the gauge field to the lowest order.
\section{Result}
From the above consideration, the conductivity tensor to the second order in the spin gauge field is written as
\begin{align}
\sigma_{ij}
&= \delta_{ij}[\chi_0+\chi_{{\rm d}2} \sum_{k} ({\Av}_{{\rm s},k}\cdot{\Av}_{{\rm s},k})+\chi_{{\rm d}2}^{\rm (ad)} \sum_{k} ({\Av}_{{\rm s},k}\cdot\Mv)({\Av}_{{\rm s},k}\cdot\Mv)] \nnr
& +\chi_2({\Av}_{{\rm s},i}\cdot{\Av}_{{\rm s},j})
+\chi_{2}^{\rm (ad)} ({\Av}_{{\rm s},i}\cdot\Mv)({\Av}_{{\rm s},j}\cdot\Mv)
+\chi_3 ({\Av}_{{\rm s},i}\times {\Av}_{{\rm s},j})\cdot\Mv
\label{sigmafinalresult}
\end{align}
where $\chi_i$'s are functions of the external angular frequency $\Omega$.
The scalar product $({\Av}_{{\rm s},i}\cdot\Mv)$ represents the adiabatic component
(denoted by $^{\rm (ad)}$) of the spin gauge field, i.e., the component along the magnetization.
The nonadiabatic (perpendicular) components of the gauge field affect the terms with $\chi_{{\rm d}2}$, $\chi_2$ and $\chi_3$.
Although the form in Eq. (\ref{sigmafinalresult}) is natural from symmetry considerations, the expressions for the coefficients are known in principle from the present microscopic study.
For example, we have
\begin{align}
\chi_2(\Omega) &= \frac{1}{-i\Omega} \sumom \biggl[
(f(\omega+\Omega)-f(\omega)) \tilde{\chi_2}^{\ret\adv}(\omega,\omega+\Omega)
+f(\omega) \tilde{\chi_2}^{\adv\adv}(\omega,\omega+\Omega)
-f(\omega+\Omega) \tilde{\chi_2}^{\ret\ret}(\omega,\omega+\Omega)
\biggr] \nnr
\chi_3(\Omega) &= \sumom \biggl[
(f(\omega+\Omega)-f(\omega)) \lambda^{(0)\ret\adv}(\omega,\omega+\Omega)
+f(\omega) \lambda^{(0)\adv\adv}(\omega,\omega+\Omega)
-f(\omega+\Omega) \lambda^{(0)\ret\ret}(\omega,\omega+\Omega)
\biggr]
\label{chi2chi3def}
\end{align}
where
\begin{align}
\tilde{\chi_2}^{ab}(\omega,\omega')
&\equiv 2\lt[
\lambda^{(2)}(\omega,\omega')-M^2 \lambda^{(0)}(\omega,\omega') +\frac{2}{m}\kappa_{0}^{(1)}(\omega,\omega')
+\frac{2}{m^2} \kappa_4(\omega,\omega') \rt]^{ab}
\end{align}
The coefficients $\chi_i$ are finite at $\Omega=0$, in spite of the factor of $\Omega^{-1}$ in the definitions, Eqs. (\ref{sigmaijra}) and (\ref{chi2chi3def}). This is checked easily based on Eq. (\ref{sigmaijra}).
In fact, the square bracket in Eq. (\ref{sigmaijra}) vanishes linearly at $\Omega\ra0$, as
$\sumkv \tr[v_i G_{\kv\omega}^\adv v_j G_{\kv\omega}^\adv]
=\sumkv \tr[(G_{\kv\omega}^\adv)^{-1}(\partial_{k_i} G_{\kv\omega}^\adv) v_j G_{\kv\omega}^\adv]
= -\delta_{ij}\sumkv \tr[G_{\kv\omega}^\adv]$, where we used
$(\partial_{k_i} G_{\kv\omega}^\adv)= G_{\kv\omega}^\adv v_i G_{\kv\omega}^\adv$ and integral by parts with respect to $\kv$.
In the low frequency limit ($\Omega\ra0$), the expression of the conductivity Eq. (\ref{sigmaijra}) is simplified to be
\begin{align}
\sigma_{ij}(\Omega\ra0)
& = \sumom f'(\omega) \sumkv \lt[
K^{\ret\adv}_{ij} (\kv,\omega,\omega)
-\frac{1}{2}\lt( K^{\adv\adv}_{ij} (\kv,\omega,\omega)+ K^{\ret\ret}_{ij} (\kv,\omega,\omega) \rt)\rt]
\nnr
&\simeq - \frac{1}{2\pi} \sumkv \lt[
K^{\ret\adv}_{ij} (\kv,0,0)
-\frac{1}{2}\lt( K^{\adv\adv}_{ij} (\kv,0,0)+ K^{\ret\ret}_{ij} (\kv,0,0) \rt)\rt]
\end{align}
where we used $f'(\omega)\simeq - \delta(\omega)$ assuming low temperatures in the last line.
The parameter $\chi_2$ in Eq. (\ref{sigmafinalresult}) characterizes the magnitude of the anisotropy of the optical response.
Let us derive an explicit expression for $\chi_2$.
Using
\begin{align}
\frac{1}{ \Pi_{k\omega}^a } =\frac{1}{2\gamma_\kv}\sum_{\sigma=\pm} \sigma g^a_{\kv\omega\sigma}
\end{align}
where
\begin{align}
g^a_{\kv\omega\sigma} &\equiv \frac{1}{ \pi_{k\omega}^a -\sigma \gamma_{\kv}}
\end{align}
is the spin-polarized Green's function,
we obtain
\begin{align}
\lambda^{(2)ab}(\omega,\omega')
&= \frac{1}{4}\sum_{\kv\sigma}(g^a_{\kv\omega\sigma} g^b_{\kv\omega'\sigma}+g^a_{\kv\omega\sigma}g^b_{\kv\omega',-\sigma}) \nnr
\lambda^{(0)ab}(\omega,\omega')
&= \frac{1}{4}\sum_{\kv\sigma}\frac{1}{(\gamma_k)^2} g^a_{\kv\omega\sigma}( g^b_{\kv\omega'\sigma}-g^b_{\kv\omega',-\sigma}) \nnr
\kappa_{0}^{(1)ab}(\omega,\omega')
&= \frac{1}{12}\sum_{\kv\sigma}\frac{k^2}{\gamma_k} \sigma g^a_{\kv\omega\sigma} g^b_{\kv\omega'\sigma} \nnr
\kappa_4^{ab}(\omega,\omega')
&= \frac{1}{60}\sum_{\kv\sigma}\frac{k^4}{(\gamma_k)^2} g^a_{\kv\omega\sigma}( g^b_{\kv\omega'\sigma}-g^b_{\kv\omega',-\sigma})
\end{align}
Those coefficients with $a=\ret$ and $b=\adv$ turn out to be dominant for $\eta/M\ll1$.
Moreover, contributions containing the Green's functions with different spins are neglected for $\eta/M\ll1$.
Considering the low-frequency ($\Omega\ra0$) limit, the coefficient $\chi_2$ is
\begin{align}
\chi_2(\Omega) &= -i \sum_{\sigma}\nu_\sigma \tau_\sigma\lt(1+\frac{(k_\sigma)^2}{3mM}+\frac{(k_\sigma)^4}{15m^2M^2}\rt)
\label{chi2_2}
\end{align}
where $\nu_\sigma$, $k_\sigma$ and $\tau_\sigma(\equiv 1/(2\eta_\sigma))$ are spin-resolved electron density of states, Fermi wave vector and elastic lifetime, respectively.
Noting that the Boltzmann conductivity is $\sigma_0\sim \sum_\sigma \nu_\sigma (k_\sigma)^2 \tau_\sigma$, the anisotropic terms induced by the spin gauge field are of the relative order of
$(A_{\rm s}/\kf)^2$ ($\kf$ being the Fermi wave vector) compared to $\sigma_0$.
For a spiral magnetization structure with a pitch $Q$, this ratio is $A_{\rm s}/\kf \sim Q/\kf$ and for the Rashba spin gauge field,
$A_{\rm R}/\kf \sim \alpha_{\rm R}\kf/\ef$, as will be discussed in the next section.
For a short spiral wavelength (several nanometers like in Ho \cite{Koehler65}) and for a large Rashba coupling like in BiTeI \cite{Ishizaka11}, the anisotropy would be easily detected experimentally.
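As a rough numerical illustration (the material parameters below are illustrative assumptions on our part, not values taken from the cited references):
\begin{verbatim}
import math
kF  = 10.0             # assumed Fermi wave vector, nm^-1
Q   = 2*math.pi/3.0    # spiral of ~3 nm period, nm^-1
A_s = Q/2              # spiral spin gauge field, cf. the next section
print((A_s/kF)**2)     # relative anisotropy, of order 1e-2
\end{verbatim}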
\section{Application to spiral structures}
We consider examples of spiral magnetization structures (Fig. \ref{FIGspiral}).
\begin{figure}
\includegraphics[width=0.4\hsize]{spiral_n}
\includegraphics[width=0.4\hsize]{spiral_b}
\caption{Magnetization structure of two spirals, N\'eel (left) and Bloch (right). The direction of the spiral denoted by $\bm{Q}$ is the $x$-axis.\label{FIGspiral}}
\end{figure}
\subsection{N\'eel type spiral}
The first one is a N\'eel type spiral along the $x$-direction;
\begin{align}
\nv(\rv)&= \xvhat \sin Qx+\zvhat \cos Qx \label{Neelspiral}
\end{align}
where $\hat{\ }$ denotes the unit vector along the coordinate axis and $Q$ is the pitch of the spiral.
Equilibrium spin current induced by magnetization textures is given by \cite{TataraReview19}
$\jv_{{\rm s},i}=\nv\times\nabla_i\nv$, which in the present case is perpendicular to the magnetization plane ($xz$-plane);
$\jv_{{\rm s},i}=\delta_{i,x}Q\yvhat$.
The unitary transformation to diagonalize the $sd$ exchange interaction for this magnetization structure is $U=\mv\cdot\sigmav$, where
\begin{align}
\mv=\lt(\sin\frac{Qx}{2},0,\cos\frac{Qx}{2} \rt)
\end{align}
The spin gauge field arising from the structure is $A_{{\rm s},i}^\alpha=\frac{-i}{2}\tr[\sigma_\alpha U^{-1}\nabla_i U]$, which for the N\'eel structure is
\begin{align}
A_{{\rm s},i}^{{\rm N},\alpha} &= \frac{Q}{2}\delta_{i,x} \delta_{\alpha,y}
\label{AsN}
\end{align}
The direction of the spin polarization of the gauge field is $y$, which is consistent with the equilibrium spin current flow.
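This can be cross-checked symbolically; the following sympy sketch (our own verification, not part of the original derivation) evaluates $-\frac{i}{2}{\rm tr}[\sigma_\alpha U^{-1}\partial_x U]$ for the N\'eel spiral:
\begin{verbatim}
import sympy as sp

Q, x = sp.symbols('Q x', real=True)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])

# U = m.sigma with m = (sin(Qx/2), 0, cos(Qx/2)); U is its own inverse
U = sp.sin(Q*x/2)*sx + sp.cos(Q*x/2)*sz
grad = sp.simplify(U.inv() * U.diff(x))   # U^{-1} d_x U = i(Q/2) sigma_y
for name, s in (('x', sx), ('y', sy), ('z', sz)):
    print(name, sp.simplify(-sp.I/2 * (s*grad).trace()))
# only the y component is nonzero and equals Q/2, cf. Eq. (AsN)
\end{verbatim}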
\subsection{Bloch type spiral}
The second one is a Bloch type spiral along the $x$-direction, where the magnetization rotates in the plane perpendicular to the $x$-direction,
\begin{align}
\nv(\rv)&= \yvhat \sin Qx+\zvhat \cos Qx \label{Blochspiral}
\end{align}
The vector $\mv$ is
\begin{align}
\mv=\lt(0,\sin\frac{Qx}{2},\cos\frac{Qx}{2} \rt)
\end{align}
and the equilibrium spin current is
$\jv_{{\rm s},i}=-\delta_{i,x}Q\xvhat$, and the spin gauge field arising from the structure is
\begin{align}
A_{{\rm s},i}^{{\rm B},\alpha} &= - \frac{Q}{2}\delta_{i,x} \delta_{\alpha,x}
\label{AsB}
\end{align}
For both N\'eel and Bloch type spirals, the spin gauge field is finite only for the spatial direction of the spiral, i.e., the $x$-axis, and so the second-order contribution to the conductivity tensor (Eq. (\ref{sigmafinalresult})) has only diagonal components.
If the direction of the spiral deviates from the coordinate axis, symmetric off-diagonal components
$\sigma_{ij}=\sigma_{ji}\propto \Av_{{\rm s},i}\cdot \Av_{{\rm s},j}$ appear, where $ij$ denotes the directions in the plane containing the spiral direction.
The optical response can thus detect the direction of the intrinsic spin current induced by spin gauge field.
However, Bloch and N\'eel spirals cannot be distinguished by the present optical response.
This is because the optical response does not see the spin polarization direction (denoted by $\alpha$ of $A_{{\rm s},i}^\alpha$) but only the scalar product or the trace in the spin index (Eq. (\ref{sigmafinalresult})).
The spin direction affects the optical response if an additional spin polarization is introduced by an external field or a spin-orbit interaction, which we consider in the next two subsections.
\subsection{Spirals in an external magnetic field}
We consider the case of a magnetic field applied along the $x$-axis for Bloch and N\'eel spirals.
We simply assume that the magnetization structure acquires a constant component of magnitude $\beta$ along the field, without deriving the magnetization profile in the field, and so the argument may not be applicable for large $\beta$.
The magnetization profiles with $\beta$ are
\begin{align} \label{Neelfield}
\nv_N &= \xvhat \frac{\beta + \sin Qx}{\sqrt{1 + \beta^2 + 2 \beta \sin Qx}} + \zvhat \frac{\cos Qx}{\sqrt{1 + \beta^2 + 2 \beta \sin Qx}},
\end{align}
\begin{align} \label{Blochfield}
\nv_B &= \xvhat \frac{\beta}{\sqrt{1 + \beta^2}} + \yvhat \frac{\sin Qx}{\sqrt{1 + \beta^2}} + \zvhat \frac{\cos Qx}{\sqrt{1 + \beta^2}}.
\end{align}
The tilted Bloch case is the one representing the excitation around a magnetic skyrmion lattice \cite{Petrova11,Tatara14}.
Note that the limit $\beta \to 0$ corresponds to (\ref{Neelspiral}), (\ref{Blochspiral}), while $\beta \to \infty$ stands for a uniformly magnetized medium.
The vector $\mv$ which generates the unitary transformation to diagonalize the exchange interaction now has the form
\begin{equation} \label{mNeel}
\mv_N = \left(
\sin \frac{Q x}{2} \sqrt{\frac{1 - \frac{\cos Qx}{\sqrt{1 + \beta^2 + 2 \beta \sin Qx}}}{1 - \cos Qx}},0,
\cos \frac{Q x}{2} \sqrt{\frac{1 + \frac{\cos Qx}{\sqrt{1 + \beta^2 + 2 \beta \sin Qx}}}{1 + \cos Qx}} \right),
\end{equation}
\begin{align} \label{mBloch}
\bm{m}_B = \left( \beta \sin \frac{Qx}{2} \sqrt{\frac{1-\frac{\cos Qx}{\sqrt{1+\beta^2}}}{\left(1 - \cos Qx\right) \left(\beta^2 + \sin^2 Qx\right)}},
\sin \frac{Qx}{2}\sqrt{\frac{\left(1-\frac{\cos Qx}{\sqrt{1+\beta^2}}\right) \sin^2 Qx}{2 \left(\beta^2 + \sin^2 Qx\right)}},
\cos \frac{Qx}{2}\sqrt{\frac{1+\frac{\cos Qx}{\sqrt{1+\beta^2}}}{1 + \cos Qx}} \right)
\end{align}
for N\'eel and Bloch magnetization.
In the N\'eel case, the vector-potential takes the form
\begin{align} \label{NeelA}
A_{{\rm s},i}^{{\rm N},\alpha} = \delta_{i,x} \delta_{\alpha,y}
\frac{Q}{2} \frac{\sin Q x}{\beta + \sin Q x} \sqrt{\frac{\left(\beta + \sin Q x\right)^2}{\sin^2 Q x}} \frac{1 + \beta \sin Q x}{1 + \beta^2 + 2 \beta \sin Q x}.
\end{align}
Obviously, it has the same component structure as in the case with no magnetic field applied (see (\ref{AsN})). After averaging over the coordinates we arrive at
\begin{align} \label{NeelAaver}
\left<A_{{\rm s},i}^{{\rm N},\alpha}\right> = \left\{
\begin{matrix}
\delta_{i,x} \delta_{\alpha,y} \frac{Q}{2} \left( 1 - \frac{2}{\pi} \arctan \left|\beta\right|\right), \, \left|\beta\right| < 1 \\
\delta_{i,x} \delta_{\alpha,y} \frac{Q}{2} \frac{1}{\pi} \arcsin \frac{2 \left|\beta\right|}{1 + \beta^2}, \, \left|\beta\right| > 1.
\end{matrix}
\right.
\end{align}
where $\left<\ \right>$ stands for spatial averaging. This function continuously decreases from $\frac{Q}{2}$ to zero as $\beta$ grows.
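Eq. (\ref{NeelAaver}) can be cross-checked numerically; in the sketch below (ours), we use that the first three factors of Eq. (\ref{NeelA}) combine into a sign function:
\begin{verbatim}
import numpy as np

def avg_A_neel(beta, Q=1.0, n=400001):
    # midpoint sampling avoids the points where sin(Qx) = 0
    x = (np.arange(n) + 0.5) * (2*np.pi/Q) / n
    s = np.sin(Q*x)
    sgn = np.sign((beta + s)*s)  # = [s/(beta+s)]*sqrt((beta+s)^2/s^2)
    return np.mean(0.5*Q*sgn*(1 + beta*s)/(1 + beta**2 + 2*beta*s))

for beta in (0.3, 0.7, 2.0):
    closed = (0.5*(1 - 2/np.pi*np.arctan(beta)) if beta < 1
              else 0.5/np.pi*np.arcsin(2*beta/(1 + beta**2)))
    print(beta, avg_A_neel(beta), closed)   # the two columns agree
\end{verbatim}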
A gauge field component with perpendicular spin polarization emerges if we apply an out-of-plane magnetic field
in the Bloch case (\ref{mBloch}):
\begin{align} \label{BlochA}
A_{{\rm s},i}^{{\rm B},\alpha} = \delta_{i,x} \frac{Q}{2} \left(
- \frac{\beta^2 \cos Qx + \sqrt{1+\beta^2} \sin^2 Qx}{\sqrt{1+\beta^2} \left(\beta^2 + \sin^2 Qx\right)} ,
\frac{\beta}{\sqrt{1+\beta^2}} \frac{\sqrt{\sin^2 Q x}}{1 + \frac{\cos Q x}{\sqrt{1+\beta^2}}},
\beta \frac{\tan \frac{Q x}{2} \cos Q x \left(1 - \frac{\cos Q x}{\sqrt{1 + \beta^2}}\right)}{\sqrt{\tan^2 \frac{Q x}{2}} \left(\beta^2 + \sin^2 Q x\right)}
\right).
\end{align}
After averaging over coordinates, $\left<A^{{\rm B},z}_{{\rm s} ,x} \right>= 0$, while two other
components are finite:
\begin{align} \label{BlochAaverX}
\left<A_{{\rm s},x}^{{\rm B},x}\right> = -\frac{Q}{2} \left(1 - \frac{\left|\beta\right|}{\sqrt{1+\beta^2}}\right),
\end{align}
\begin{align} \label{BlochAaverXz}
\left<A_{{\rm s},x}^{{\rm B},y}\right> = \frac{Q}{2} \frac{\beta}{\pi \sqrt{1 + \beta^2}} \log \frac{\sqrt{1 + \beta^2} + 1}{\sqrt{1 + \beta^2} - 1}.
\end{align}
One can see that (\ref{BlochAaverX}) coincides with (\ref{AsB}) at $\beta = 0$ and decays as $\beta$ grows. The $y$-component $\left<A_{{\rm s},x}^{{\rm B},y}\right>$ is odd with respect to $\beta$. It is zero at $\beta = 0 $ but has an infinite derivative at this point, thus growing very fast at small applied field. It reaches its maximum at $\beta^* \approx 0.66$ with the value $\left<A_{{\rm s},x}^{{\rm B},y}\right> \left(\beta^*\right) \approx 0.4 \left<A_{{\rm s},x}^{{\rm B},x}\right> \left(\beta = 0\right)$.
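These numbers follow from a direct numerical maximization of Eq. (\ref{BlochAaverXz}), in units of $Q/2$ (a sketch of ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

g = lambda b: -(b/np.sqrt(1 + b**2)) / np.pi * np.log(
        (np.sqrt(1 + b**2) + 1) / (np.sqrt(1 + b**2) - 1))
res = minimize_scalar(g, bounds=(0.05, 3.0), method='bounded')
print(res.x, -res.fun)   # beta* ~ 0.66, value ~ 0.42 (units of Q/2)
\end{verbatim}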
As we saw, the response of the gauge field to a magnetic field depends strongly on the magnetic structure.
Observing the optical response in an applied field is therefore expected to be useful for distinguishing the structures.
\subsection{Spin-orbit interaction}
Let us consider spin-orbit interactions that break the inversion symmetry.
The first one is the Rashba interaction, whose Hamiltonian is
\begin{align}
H_{\rm R} &= -\frac{i}{2} \intr c^\dagger \alphav_{\rm R}\cdot(\nablalr\times\sigmav) c
\end{align}
We consider first the case where the Rashba field vector $\alphav_{\rm R}$ is along the $z$-axis.
In the rotated frame, the interaction reads
\begin{align}
H_{\rm R} &= -\frac{i}{2} \intr \tilde{c}^\dagger \alphav_{\rm R}\cdot(\nablalr\times\tilde{\sigmav}) \tilde{c} + \mbox{\rm spin density part}
\end{align}
where the first term is the interaction between the spin current and the Rashba spin gauge field, while the last term, describing the spin density, is neglected.
The electron field in the rotated frame is $ \tilde{c}\equiv U^{-1} c$ and $\tilde{\sigmav}=U^{-1} \sigmav U$ is the spin operator in the rotated frame.
The Rashba spin gauge field read from the interaction is
\begin{align}
A_{{\rm R},i} &= -im\epsilon_{ijk}\alpha_{{\rm R},j}\tilde{\sigma}_k \label{Rashbagf}
\end{align}
Explicit form for each magnetization profile is calculated using $\tilde{\sigma}_k=2m_k(\mv\cdot\sigmav)-\sigma_k$.
For the N\'eel type spiral, Eq. (\ref{Neelspiral}),
\begin{align} \label{tildesigmaN}
\tilde{\sigma}_k &=(-\cos (Qx) \sigma_x+\sin (Qx) \sigma_z,-\sigma_y, \sin (Qx) \sigma_x+\cos (Qx) \sigma_z)
\end{align}
and we have (in the vector representation with respect to spatial direction $i$)
\begin{align}
\Av^{\rm N}_{\rm R} &= -im\alpha_{{\rm R}} (\sigma_y,-\cos (Qx) \sigma_x+\sin (Qx) \sigma_z,0)
\end{align}
whose Fourier transform is
\begin{align}
\Av^{\rm N}_{{\rm R}}(\qv) &= -im\alpha_{{\rm R}}\delta_{\qv_\perp,0}
\lt[ \delta_{q_x,0} \sigma_y \xvhat - \frac{1}{2}\sum_\pm \delta_{q_x,\pm Q}(\sigma_x \pm i \sigma_z)\yvhat \rt]
\end{align}
where $\qv_\perp \equiv (0,q_y,q_z)$.
Uniform component is $\Av^{\rm N}_{{\rm R}}(\qv=0) = -im\alpha_{{\rm R}} \sigma_y \xvhat $.
For the Bloch type spiral, Eq. (\ref{Blochspiral}),
\begin{align} \label{tildesigmaB}
\tilde{\sigma}_k &=(-\sigma_x, -\cos (Qx) \sigma_y+\sin (Qx) \sigma_z,\sin (Qx) \sigma_y+\cos (Qx) \sigma_z)
\end{align}
and we have (in the vector representation with respect to spatial direction $i$)
\begin{align}
A^{\rm B}_{{\rm R},i} &= -im\alpha_{{\rm R}} (\cos (Qx) \sigma_y-\sin (Qx) \sigma_z,-\sigma_x,0)
\end{align}
and
\begin{align}
\Av^{\rm B}_{{\rm R}}(\qv) &= -im\alpha_{{\rm R}}\delta_{\qv_\perp,0}
\lt[ -\delta_{q_x,0} \sigma_x \yvhat + \frac{1}{2}\sum_\pm \delta_{q_x,\pm Q}(\sigma_y \pm i \sigma_z)\xvhat \rt]
\end{align}
Uniform component is $\Av^{\rm B}_{{\rm R}}(\qv=0) = im\alpha_{{\rm R}} \sigma_x \yvhat $.
Similar calculations for $\alphav_{\rm R}$ along $\xvhat$ or $\yvhat$ with the use of (\ref{tildesigmaN}) and (\ref{tildesigmaB}) give uniform components shown in Table~\ref{TableSGF}.
For the Weyl type spin-orbit interaction,
\begin{align}
H_{\rm W} &= -\lambda_{\rm W}\frac{i}{2} \intr c^\dagger (\nablalr\cdot\sigmav) c
\end{align}
the gauge field is $A_{{\rm W},i} = \lambda_{\rm W}\tilde{\sigma}_i$ and its uniform component is given in Table \ref{TableSGF}.
The spin-structure, Rashba, and Weyl spin gauge fields are summarized in Table \ref{TableSGF}.
The Rashba spin gauge field for different directions of $\alphav_{\rm R}$ in the magnetization plane has different coordinate components that are determined by the choice of the coordinate reference frame and do not differ physically; the third $\alphav_{\rm R}$ direction gives zero. Hence the most representative case is $\alphav_{\rm R} || \zvhat$, which is considered in detail above.
Besides, it is seen that $ A_{{\rm s},i}$ and the uniform component of the Rashba gauge field have the same spin polarization direction, i.e., perpendicular to the vector $\mv$ and the diagonalization axis $\zvhat$.
The adiabatic components of the conductivity, $\chi_i^{\rm (ad)}$, therefore do not arise from the spin structure and the uniform contribution of the Rashba gauge field.
The antisymmetric term $\chi_3$ does not arise either.
\begin{table}[hbt]
\begin{tabular}{c|c|c|c|c|c}
& & \multicolumn{3}{c|}{Rashba ($q=0$), $A_{{\rm R},i}$}
& \\ \cline{3-5}
&\raisebox{2.6ex}[0cm][0cm]{Spin structure, $A_{{\rm s},i}$}&$\bm{\alpha}_{\rm R} || \hat{x}$&$\bm{\alpha}_{\rm R} || \hat{y}$&$\bm{\alpha}_{\rm R} || \hat{z}$& \raisebox{2.6ex}[0cm][0cm]{Weyl ($q=0$), $A_{{\rm W},i}$} \\
\hline
N\'eel spiral & $ \frac{Q}{2}\delta_{i,x} \sigma_y $& $im\alpha_{{\rm R}} \sigma_y \delta_{i,z}$ & 0 & $-im\alpha_{{\rm R}} \sigma_y \delta_{i,x}$ &
$- \lambda_{\rm W}\sigma_y \delta_{i,y}$ \\
Bloch spiral & $ -\frac{Q}{2}\delta_{i,x} \sigma_x $ & 0 & $-im\alpha_{{\rm R}} \sigma_x \delta_{i,z}$ & $im\alpha_{{\rm R}} \sigma_x \delta_{i,y}$ &
$- \lambda_{\rm W}\sigma_x \delta_{i,x}$
\end{tabular}
\caption{ Table of uniform components of spin gauge field and Rashba spin gauge fields for N\'eel and Bloch type spirals. \label{TableSGF}}
\end{table}
\begin{figure}
\includegraphics[width=0.3\hsize]{spinGF}
\includegraphics[width=0.3\hsize]{spinGFyy}
\caption{Schematic figure showing spin polarized flow induced by $A_{{\rm s},x}^y$ and
$A_{{\rm s},y}^y$. \label{FIGAs}}
\end{figure}
Having two spin gauge fields from different origins offers interesting possibilities of manipulation of spin and charge.
A spin gauge field $A_{{\rm s},i}^\alpha$ induces a flow in the direction $i$ polarized along spin direction $\alpha$, namely spin current $j_{{\rm s},i}^\alpha$ (Fig. \ref{FIGAs}).
Such spin polarized flow does not directly trigger optical responses of the material, as those responses are governed by the charge sector.
The same spin gauge field maps the spin current to a charge current, but the effect on the conductivity tensor is diagonal in simple settings, as the resultant charge flow is along the original direction of the spin flow.
Rich possibilities appear if there is another spin gauge field with different symmetry.
For instance, if we have $A_{{\rm so},j}^\beta$ arising from spin-orbit interaction (like $A_{{\rm R}}$ or $A_{{\rm W}}$), the cross product of the two gauge fields $\sum_\alpha A_{{\rm s},i}^\alpha A_{{\rm so},j}^\alpha$ can induce off-diagonal charge correlation, $i\neq j$, as a result of conversion of spin polarization along $\alpha$-direction to charge flow in spatial direction $j$.
From Table \ref{TableSGF}, we see that such off-diagonal optical response arises for the Weyl spin-orbit interaction with N\'eel spiral and Rashba interaction with Bloch spiral structure.
Optical response can thus be used to identify spin structures.
Particularly, sudden changes of magnetization structures and the formation of domains would be detected as the emergence of anisotropic and/or off-diagonal optical responses when the external field or the temperature is varied.
\section{Directional effects}
As the spin current and the corresponding spin gauge field breaks inversion symmetry, the gauge field appears in the uniform component of the conductivity tensor from the second order.
The information on the direction of the spin current flow induced by the spin gauge field is therefore smeared in the uniform optical response considered so far.
Direct effects due to the spin current flow are contained in the directional effects which depend on the wave vector $\qv$ of the external electric field.
The effects linear in $\qv$ turn out to be linear (or higher-odd order) in the spin gauge field.
Let us briefly study these directional effects.
For the non-uniform component of the conductivity tensor, we need to calculate ($a,b=\ret,\adv$)
\begin{align}
K_{ij}^{ab}(\qv,\omega,\Omega) &\equiv
\sumkv \tr[v_i(\kv) G^a_{\kv-\frac{\qv}{2},\omega}v_j(\kv) G^b_{\kv+\frac{\qv}{2},\omega+\Omega}]
\end{align}
The Green's function is expanded with respect to $\qv$ as
\begin{align}
G_{\kv\mp\frac{\qv}{2},\omega}^a
&= G_{\kv,\omega}^a \mp \frac{q_k}{2} (\partial_{k_k} G_{\kv,\omega}^a)+O(q^2),
\end{align}
to obtain
\begin{align}
K^{ab}_{ij}(\qv,\omega,\Omega)&=
\sum_k {q_k} [ K_{ijk}^{ab} -K_{jik}^{ba} ]+K^{ab}_{ij}(\qv=0,\omega,\Omega)
\end{align}
where
\begin{align}
K^{ab}_{ijk}(\omega,\Omega) & \equiv
\sumkv \tr[v_i G_{\kv,\omega}^a v_j G_{\kv,\omega+\Omega}^b v_k G_{\kv,\omega+\Omega}^b]
\end{align}
The trace is calculated in the same way as Eq. (\ref{Kijcalculation2}).
To the first order in the gauge field, the result is ($\omega'=\omega+\Omega$)
\begin{align}
K^{ab}_{ijk}(\omega,\Omega)
& =
\sumkv \frac{2}{ 3\Pi_{k\omega}^a (\Pi_{k,\omega'}^b)^2 }
\lt[\delta_{ij} (\Av_{{\rm s},k}\cdot\Mv) \lt[k^2 (\pi_{k\omega'}^b)^2+\frac{2}{5}k^4 \pi_{k\omega'}^a \rt] \rt. \nnr
& \lt.
+\delta_{ik}(\Av_{{\rm s},j}\cdot\Mv) \lt[ k^2[(\pi_{k\omega'}^b)^2+M^2] + \frac{2}{5}k^4 \pi_{k\omega'}^a \rt]
+\delta_{jk}(\Av_{{\rm s},i}\cdot\Mv) \lt[ k^2[(\pi_{k\omega'}^b)^2+M^2] + \frac{2}{5}k^4 \pi_{k\omega'}^a \rt]
\rt]
\end{align}
Here $\Av_{{\rm s}}$ denotes the total spin gauge field including the one due to magnetization structure and spin-orbit interaction. We therefore obtain the conductivity tensor linear (denoted by $^{(1)}$) in both $\qv$ and the spin gauge field as
\begin{align}
\sigma^{(1)}_{ij}(\qv,\Omega)
&=\delta_{ij} q_k (\Av_{{\rm s},k}\cdot\Mv) \gamma_1
+ [ q_i(\Av_{{\rm s},j}\cdot\Mv) +q_j(\Av_{{\rm s},i}\cdot\Mv) ] \gamma_2
\label{sigma1result}
\end{align}
where $\gamma_i$'s are functions of $\Omega$.
Contributions linear in $\qv$ change sign when the light propagation direction is reversed, resulting in directional effects like directional dichroism.
The directional feature arises in the symmetric components in the conductivity to the linear order in the spin gauge field.
As seen from Eq. (\ref{sigma1result}), directional effects arise from the adiabatic component of the gauge field, $\Av_{{\rm s},i}\cdot\Mv $, which vanishes for the spin configurations considered in Table \ref{TableSGF}.
The directional effects predicted by Eq. (\ref{sigma1result}) emerge when $\Av_{\rm s}$ is due to the spin-orbit interaction and the magnetization $\Mv$ is uniform.
(Note that Eq. (\ref{sigma1result}) applies to arbitrary direction of $\Mv$ if $\Mv$ is uniform.)
In fact, a directional effect was pointed out in Ref. \cite{Shibata16} for the case of the Rashba spin-orbit interaction, treating uniform $M$ perturbatively.
For the Rashba gauge field, Eq. (\ref{Rashbagf}), its uniform component is
$A_{{\rm R},i}^\alpha(q=0)=im\epsilon_{ij\alpha}\alpha_{{\rm R},j}$ and thus
the diagonal term of Eq. (\ref{sigma1result}) is proportional to $q_k(\Av_{{\rm R},k}\cdot\Mv)\propto \qv\cdot(\alphav_{\rm R}\times \Mv)$.
The vector $\alphav_{\rm R}\times \Mv$, sometimes called a toroidal moment, describes the intrinsic velocity of charge as noted in Refs. \cite{Shibata16,Kawaguchi16,TataraReview19}.
For Weyl type, the gauge field is $A_{{\rm W},i}^\alpha =\lambda_{\rm W}\delta_{i\alpha}$, connecting the space and spin diagonally, and so the directional dichroism is with respect to the magnetization direction, $q_k(\Av_{{\rm W},k}\cdot\Mv)\propto \qv\cdot\Mv$.
\section{Summary}
We have theoretically explored optical properties induced by the second-order effects of spin gauge fields.
The conductivity matrix was calculated in the slowly-varying limit with vanishing wave vector and angular frequency of the spin gauge field.
The possibility of optical detection of spin structures was pointed out. Additional information is provided by studying the behavior of the optical response under an external magnetic field.
Wave-vector ($q$)-resolved optical response, partially studied here as directional effects, is expected to provide detailed information on the magnetization structures, and this is to be studied in a future work.
\acknowledgements
This investigation was supported by
a Grant-in-Aid for Exploratory Research (No.16K13853)
and
a Grant-in-Aid for Scientific Research (B) (No. 17H02929) from the Japan Society for the Promotion of Science
and
a Grant-in-Aid for Scientific Research on Innovative Areas (No.26103006) from The Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
E.K. would like to thank the Russian Science Foundation (Grant No. 16-12-10340).
\section{Single particle molecular orbitals}
\subsection{General remarks}
Unlike the NV center in diamond and other similar defects such as the axial divacancy in SiC, the singly charged Si vacancy, although still of $C_{3\nu}$ symmetry, can be interpreted as a case of very weakly broken $T_d$ symmetry. This is because all four nearest neighbors of the vacancy are carbon atoms with very similar distances (differing by ${\sim}1\%$) from the removed silicon atom in a perfect crystal, and we do not expect this qualitative feature to change much upon removal of the silicon atom. We thus anticipate that the form of the $A_1$ symmetry molecular orbitals will be very close to those of $T_d$:
\begin{align}
\begin{split}
u^{T_d} &= a+b+c+d,\\
v^{T_d} &= a+b+c-3d,
\end{split}\label{1}
\end{align}
where the normalization has been omitted for brevity. Based on the `nearly-$T_d$' symmetry we also anticipate that state $v$ will be near-degenerate with states $e$ (in $T_d$ they are degenerate). As we show below, our single-particle molecular orbitals obtained using DFT indeed confirm these qualitative expectations. Combining results from DFT with analytical calculations we can derive further information, such as the coefficients in the molecular orbitals, overlap integrals and on-site Coulomb energies.
\subsection{First principles calculations}
In order to complement the main group theoretic results, density functional theory (DFT) was used to obtain single particle molecular orbitals (MOs) of the charged Si-vacancy center in 4H-SiC. The ordering of the defect states is obtained from the calculated Kohn-Sham eigenstates around the bandgap of 4H-SiC. The spin-polarized calculations were carried out using the Quantum-ESPRESSO package~\cite{QE-2009}, within the generalized gradient approximation (GGA)~\cite{GGA} of Perdew-Burke-Ernzerhof (PBE)~\cite{PBE}. In this work, we report the results for the V$_{\mathrm{Si}}^{-1}$ at the h-site in a $6\times6\times2$ (576-atom) supercell with $\Gamma$-centered $2\times2\times2$ k-point sampling according to the Monkhorst-Pack method.
\begin{figure}[ht]
\centering
\subfigure[\,$\bar{u}$ ($A_1$-symmetry)]{\includegraphics[width=3.5 cm]{SuppFig1a.png}}\hspace{1cm}
\subfigure[\,$\bar{v}$ ($A_1$-symmetry) ]{\includegraphics[width=3.5 cm]{SuppFig1b.png}} \\
\subfigure[\,$\bar{e}_{x,y}$ ($E$-symmetry) ]{\includegraphics[width=3.5 cm]{SuppFig1c.png}}
\caption{(color online) Isosurface plots ($5\times10^{-3} e/a.u^{-3}$) for the optically-active minority spin MOs of the negatively charged silicon vacancy center $\textrm{V}_{\textrm{Si}}^-$ in 4H-SiC: (a) the highest occupied orbital, $\bar{u}$, (b) the lowest unoccupied orbital, $\bar{v}$, and (c) the next higher unoccupied orbital, $\bar{e}_{x,y}$.}
\label{Fig1_suppl}
\end{figure}
The large size of the supercell considered here ensures a reduction in the defect-defect interactions. This produces nearly-flat defect states that are labeled as $u$/$\bar{u}$ ($A_1$-symmetry), $v$/$\bar{v}$ ($A_1$-symmetry), and $e$/$\bar{e}$ ($E$-symmetry). Here, the letters with a bar overhead represent the minority spin states, with the excess of three electrons in the majority spin states. The MOs of the defect plotted in Figure~\ref{Fig1_suppl} differ from those obtained with group theoretic methods (using symmetry-adapted $sp^3$-orbitals) in that they are not restricted to the dangling bonds only. In DFT calculations no such restriction is made, and contributions from other electronic states of the crystal are included as well. Nonetheless, the defect states can be seen to be highly localized on the carbon atoms surrounding the defect. The majority spin $u$ is found to be resonant with the valence band, while the higher energy defect states lie in the band gap. This ordering of the defect states can be seen in Fig.\ref{Fig2_suppl}. Thus, the DFT results reproduce the correct symmetries expected from the group theoretic results and provide the ordering of the defect states relative to each other.
\begin{figure}[ht]
\centering
\includegraphics[width=7.5 cm]{SuppFig2.pdf}
\caption{The energy ordering of the defect-induced majority- and minority-spin states.}
\label{Fig2_suppl}
\end{figure}
In the main text we used a group-theoretic approach to obtain single-particle MOs from the symmetry-adapted linear combinations of the $sp^3$-orbitals belonging to the four carbons surrounding the silicon vacancy. However, group theory does not yield the relative ordering of states with the same symmetry, which can be obtained from the DFT calculations. In Fig.\ref{Fig3_suppl}, we choose a different isosurface (compared to the isosurface plots in Fig.\ref{Fig1_suppl}) to showcase the bonding and anti-bonding characters of the $A_1$ symmetry states $u$ and $v$, respectively. Thus, DFT results can be used to shed light on the relative ordering of the MOs qualitatively (bonding vs. anti-bonding) and quantitatively (Fig.\ref{Fig2_suppl}).
\begin{figure}[ht]
\centering
\subfigure[\,$\bar{u}$ ($A_1$-symmetry)]{\includegraphics[width=3.5 cm]{SuppFig3a.png}}\hspace{1cm}
\subfigure[\,$\bar{v}$ ($A_1$-symmetry) ]{\includegraphics[width=3.5 cm]{SuppFig3b.png}} \\
\caption{(color online) Isosurface plots ($5\times10^{-4} e/a.u^{-3}$) for the optically-active minority spin MOs with $A_1$-symmetry, showing: (a) bonding character of $u$, and (b) anti-bonding character of $v$.}
\label{Fig3_suppl}
\end{figure}
\subsection{Coulomb interaction and overlap integrals}
The Coulomb interaction Hamiltonian can be grouped as $V_c=\sum_{i\neq j}v_{ij}+\sum_i v_{ii}$ in terms of interactions between different sites (denoted by $ij$) and on-site ($ii$) interactions. Therefore, the Schr\"{o}dinger equation in the basis of $sp^3$ dangling bonds \cite{Huckel31} takes the form
\begin{equation}
\left(\begin{array}{cccc}
v_{aa} & v_{ab} & v_{ab} & v_{ad}\\
v_{ab} & v_{aa} & v_{ab} & v_{ad}\\
v_{ab} & v_{ab} & v_{aa} & v_{ad}\\
v_{ad} & v_{ad} & v_{ad} & v_{dd}
\end{array}\right)\vec{c}_n=E_n\left(\begin{array}{cccc}
1 & \lambda_1 & \lambda_1 & \lambda_2\\
\lambda_1 & 1 & \lambda_1 & \lambda_2\\
\lambda_1 & \lambda_1 & 1 & \lambda_2\\
\lambda_2 & \lambda_2 & \lambda_2 & 1
\end{array}\right)\vec{c}_n\label{2}
\end{equation}
in terms of the overlap integrals $\lambda_1=\int \psi_a\psi_b\,d^3r$ and $\lambda_2=\int \psi_a\psi_d\,d^3r$ between the bonds, where $\vec{c}_n$ is the vector of dangling-bond coefficients of the $n$-th molecular orbital.
\begin{equation}
\begin{vmatrix}
v_{aa}-E_n & v_{ab}-E_n\lambda_1 & v_{ab}-E_n\lambda_1 & v_{ad}-E_n\lambda_2\\
v_{ab}-E_n\lambda_1 & v_{aa}-E_n & v_{ab}-E_n\lambda_1 & v_{ad}-E_n\lambda_2\\
v_{ab}-E_n\lambda_1 & v_{ab}-E_n\lambda_1 & v_{aa}-E_n & v_{ad}-E_n\lambda_2\\
v_{ab}-E_n\lambda_2 & v_{ab}-E_n\lambda_2 &v_{ab}-E_n\lambda_2 & v_{dd}-E_n
\end{vmatrix}=0\label{3}
\end{equation}
Note that the sites $a,b,c$ are equivalent due to the symmetry of the basal plane. The Coulomb interaction between the sites $a$ and $d$ is roughly equal to that between $a$ and $b$, i.e. $|v_{ad}|=(1-\delta) |v_{ab}|$ where $\delta\gtrapprox 0$, since the bond length along the c-axis is only slightly distorted from the basal ones, as shown by the density functional theory calculations. Since all sites $a$--$d$ host carbon atoms, $|v_{dd}|=|v_{aa}|=v_0$. Moreover, the off-site Coulomb interactions are smaller than the on-site interactions because of the $1/r$ dependence of the electrostatic potentials, which can be expressed as $|v_{ab}|=\epsilon |v_{aa}|$.
Solving Eq.~(\ref{3}) in the realistic limit $\delta\to 0$ leads to the energies of the MOs:
\begin{align}
E_u=&-\frac{v_0(\kappa+\Delta\kappa)}{1+2\lambda_1-3\lambda_2^2},\nonumber\\
E_v=&-\frac{v_0(\kappa-\Delta\kappa)}{1+2\lambda_1-3\lambda_2^2},\label{4}\\
E_{e_{x,y}}=&-\frac{v_0(1-\epsilon)}{1-\lambda_1},\nonumber
\end{align}
up to $O(\epsilon^2)$. The coefficients $\kappa$ and $\Delta\kappa$ are given by
\begin{align}\begin{split}
\kappa=&1+\lambda_1+\epsilon(1-3\lambda_2),\\
\Delta\kappa=&\left[\lambda_1^2+3\lambda_2^2+\epsilon^2 (6\lambda_1-6\lambda_2+4)\right.\\
&\left.+\epsilon\left(-6\lambda_1\lambda_2-2\lambda_1+6\lambda_2^2-6\lambda_2\right)\right]^{1/2},
\end{split}\label{5}\end{align}
in terms of the overlap integrals $\lambda_1$, $\lambda_2$, and the off-site to on-site Coulomb ratio $\epsilon$. In the case of zero overlap between the bonds ($\lambda_1=\lambda_2=0$), according to Eq.~(\ref{4}), the energies $E_v$ and $E_{e_{x,y}}$ become equal, i.e. $E_v=E_{e_{x,y}}=-v_0(1-\epsilon)$ and $E_u=-v_0(1+3\epsilon)$, also indicating that $E_u<E_v$. This can be understood as the asymptotic approach of the defect to tetrahedral symmetry.
The true benefit of the above treatment is realized once it is used in conjunction with the energies calculated by DFT. By using the MO energies obtained by DFT (Fig.\ref{Fig2_suppl}) in Eq.~(\ref{4}), we find the previously unknown overlap integrals, the on-site potential energy and the Coulomb ratio of the defect to be $\lambda_1=0.0034$, $\lambda_2=0.054$, $v_0=1.177$eV, and $\epsilon=0.285$, respectively. Furthermore, we calculate the eigenfunctions satisfying Eq.~(\ref{3}) as,
\begin{align}\begin{split}
u &= \alpha_u (a+b+c)+\beta_u d,\\
v &= \alpha_v (a+b+c)+\beta_v d,\\
e_x &= \alpha_x (2c-a-b),\\
e_y &= \alpha_y (a-b),
\end{split}\label{6}\end{align}
with the coefficients obtained as $\alpha_u=0.523$, $\beta_u=0.423$, $\alpha_v=-0.272$, $\beta_v=0.882$, $\alpha_x=0.408$, and $\alpha_y=0.707$. The coefficients of $u$ and $v$ only slightly differ from the readily known coefficients of $T_d$ symmetry \cite{Tinkham2003}, i.e. $\alpha_u=\beta_u=0.5$, $\alpha_v=-0.289$, and $\beta_v=0.866$. Later on, we use these coefficients to estimate the zero-field splitting of the ground state leading to a remarkable agreement with the experimentally measured values.
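As a consistency check (our sketch; we assume the sign convention $v_{aa}=v_{dd}=-v_0$ and $v_{ab}=v_{ad}=-\epsilon v_0$ in the limit $\delta\to0$), inserting the fitted parameters back into the generalized eigenvalue problem of Eq. (\ref{2}) reproduces the energies of Eq. (\ref{4}) and the coefficients above:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

lam1, lam2, v0, eps = 0.0034, 0.054, 1.177, 0.285
H = -v0*np.array([[1, eps, eps, eps],
                  [eps, 1, eps, eps],
                  [eps, eps, 1, eps],
                  [eps, eps, eps, 1]])
S = np.array([[1, lam1, lam1, lam2],
              [lam1, 1, lam1, lam2],
              [lam1, lam1, 1, lam2],
              [lam2, lam2, lam2, 1]])
E, C = eigh(H, S)        # eigenvectors satisfy C.T @ S @ C = I
print(E)                 # E_u < E_v < E_ex = E_ey, cf. Eq. (4)
print(C[:, 0], C[:, 1])  # (alpha_u, beta_u), (alpha_v, beta_v) up to sign
\end{verbatim}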
\subsection{Energy order of the doublets}
Due to the many-particle nature of the doublets, we cannot obtain the ordering of the states using DFT, which is an effective single-particle description of the system. Therefore, we analyzed the ordering via Coulomb Hamiltonian $H_c{=}\sum h_i+\sum_{i,j}V_{ee}(r_i,r_j)$ using the wave functions of the states $\Psi^i_{\textrm{d1-d5}}$ given in Table. I of the main text. One electron (hole) Coulomb terms are included in $h_i$, whereas $V_{ee}(r_i,r_j)=e^2/(4\pi\epsilon_0 |r_i-r_j|)$ is the two-particle Coulomb repulsion potential. Eigen values of $h_i$ in MOs basis are represented by $\chi$ and can be estimated from DFT. Many-particle Coulomb integrals are given as $j^0_{ll}{=}{\int}\rho_{ll}(1)V_{ee}\rho_{ll}(2)d^3r_1 d^3r_2$, $j_{lm}{=}{\int}\rho_{ll}(1)V_{ee}\rho_{mm}(2)d^3r_1 d^3r_2$, and $k_{lm}{=}{\int}\rho_{lm}(1)V_{ee}\rho_{lm}(2)d^3r_1 d^3r_2$. The integrals $j^0_{ll}$, $j_{lm}$, and $k_{lm}$ are the one-center Coulomb integral, two-particle Coulomb repulsion direct and exchange integrals, respectively. Charge density is defined as $\rho_{lm}(i)=\psi_l(i)^*\psi_m(i)$ belonging to the $i^\textrm{th}$ particle in the basis of sp$^3$ hybridized dangling bond wave functions with $l,m{=}\{a,b,c,d\}$ and $l{\neq}m$. We obtain the Coulomb energies of doublets as,
\begin{align}
\textrm{E}^{E}_{e^3}=&\chi_{e^3}+0.67 j^0_{aa}+2.33 j_{ab}-0.33 k_{ab}\label{7}\\
\textrm{E}^{A_2}_{ve^2}=&\chi_{ve^2}+0.22 j^0_{aa}+1.22 j_{ab}+1.56 j_{ad}\nonumber\\
&-1.22 k_{ab}+0.78 k_{ad}\label{8}\\
\textrm{E}^{E}_{ve^2}=&\chi_{ve^2}+0.41 j^0_{aa}+1.04 j_{ab}+1.56 j_{ad}\nonumber\\
&-0.04 k_{ab}-0.78 k_{ad}\label{9}\\
\textrm{E}^{A_1}_{ve^2}=&\chi_{ve^2}+0.74 j^0_{aa}+0.70 j_{ab}+1.56 j_{ad}\nonumber\\
&+1.30 k_{ab}-0.78 k_{ad}\label{10}\\
\textrm{E}^{E}_{v^2e}=&\chi_{v^2e}+0.09 j^0_{aa}+0.61 j^0_{dd}+0.40 j_{ab}+1.90 j_{ad}\nonumber\\
&-0.30 k_{ab}-0.89 k_{ad}\label{11}
\end{align}
where the relationship $\chi\gg j^0\gg j \gg k$ holds and, due to the nearly $T_d$ symmetry of the center, the charge localization on the basal and $z$-axis carbon atoms is assumed to be similar, i.e., $j^0_{aa}\simeq j^0_{dd}$. Furthermore, we obtain the ground state energy in a similar way:
\begin{equation}
\textrm{E}_g=1.44 (j_{ab}-k_{ab})+1.56(j_{ad}-k_{ad}).\label{12}
\end{equation}
Assuming $\chi_{v^2e}>\chi_{ve^2}>\chi_{e^3}$, the ordering of the doublets becomes $\textrm{E}^{E}_{e^3}$, $\textrm{E}^{A_2}_{ve^2}$, $\textrm{E}^{E}_{ve^2}$, $\textrm{E}^{A_1}_{ve^2}$, and $\textrm{E}^E_{v^2e}$, in increasing order of energy.
\section{Three-particle states}
Because of the near-degeneracy of state $v$ with states $e_x$ and $e_y$, it is energetically favorable for two electrons to occupy the $e$ states instead of paying the energetic cost of doubly occupying the state $v$, which lies only slightly lower in energy. As a result, in the ground state the occupied states are $v$, $e_x$ and $e_y$, in the three-hole picture.
For the $ve^2$ ground state, the three-hole configuration space is spanned by $32$ ($2\otimes 4\otimes 4$) basis functions in the form of single-particle Kronecker products ${f_\kappa^j}=(\{v\}\otimes\{\alpha,\beta\})\otimes(\{e_x,e_y\}\otimes\{\alpha,\beta\})\otimes(\{e_x,e_y\}\otimes\{\alpha,\beta\})$. However, the Pauli exclusion principle discards $8$ of them, leaving $24$ basis states, as the short enumeration below illustrates.
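The counting is easily verified by explicit enumeration (a schematic sketch; the state labels are ours):
\begin{verbatim}
from itertools import product

# Single-hole states: v with two spins, e_x/e_y with two spins each
v_states = [("v", s) for s in ("up", "dn")]                      # 2 states
e_states = [(o, s) for o in ("ex", "ey") for s in ("up", "dn")]  # 4 states

basis = list(product(v_states, e_states, e_states))
print(len(basis))                    # 32 = 2*4*4 product functions

# Pauli exclusion: the two e-holes must not share the same orbital-spin state
allowed = [b for b in basis if b[1] != b[2]]
print(len(allowed))                  # 24 (8 functions discarded)
\end{verbatim}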
Moreover, the single-particle irreducible matrix representations for the cases where the degeneracy lies only in the orbital, only in the spin, or in both spaces are simply $\Gamma_{E}(R)\otimes\mathbb{1}_s$, $\mathbb{1}_o\otimes\Gamma_{E_{1/2}}(R)$, or $\Gamma_{E}(R)\otimes\Gamma_{E_{1/2}}(R)$, respectively. Note that the identity matrices are defined as $\mathbb{1}_o$ for the orbital and $\mathbb{1}_s$ for the spin subspace. The explicit forms of the matrices $\Gamma(R)$ are given in Table \ref{St1}.
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c}
$R$ & $\Gamma_E (R)$ & $\Gamma_{E_{1/2}}(R)$\\
\hline\hline
$\left\{\begin{array}{c} E\\ \bar{E}\end{array}\right\}$ & $\left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right]$ & $\pm\left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right]$\Tstrut\\[12pt]
$\left\{\begin{array}{c} C_3^+ \\ \bar{C}_3^+\end{array}\right\}$ & $\left[\begin{array}{cc} -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & -\frac{1}{2} \end{array}\right]$ & $\pm\left[\begin{array}{cc} \bar{\epsilon} & 0 \\ 0 & \bar{\epsilon}^* \end{array}\right]$\\[12pt]
$\left\{\begin{array}{c} C_3^- \\ \bar{C}_3^-\end{array}\right\}$ & $\left[\begin{array}{cc} -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ -\frac{\sqrt{3}}{2} & -\frac{1}{2} \end{array}\right]$ & $\pm\left[\begin{array}{cc} \bar{\epsilon}^* & 0 \\ 0 & \bar{\epsilon} \end{array}\right]$\\[12pt]
$\left\{\begin{array}{c} \sigma_{\nu 1} \\ \bar{\sigma}_{\nu 1}\end{array}\right\}$ & $\left[\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right]$ & $\pm\left[\begin{array}{cc} 0 & \bar{1} \\ 1 & 0 \end{array}\right]$\\[12pt]
$\left\{\begin{array}{c} \sigma_{\nu 2} \\ \bar{\sigma}_{\nu 2}\end{array}\right\}$ & $\left[\begin{array}{cc} -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\ -\frac{\sqrt{3}}{2} & \frac{1}{2} \end{array}\right]$ & $\pm\left[\begin{array}{cc} 0 & \bar{\epsilon}^* \\ \epsilon & 0 \end{array}\right]$\\[12pt]
$\left\{\begin{array}{c} \sigma_{\nu 3} \\ \bar{\sigma}_{\nu 3}\end{array}\right\}$ & $\left[\begin{array}{cc} -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{array}\right]$ & $\pm\left[\begin{array}{cc} 0 & \bar{\epsilon} \\ \epsilon^* & 0 \end{array}\right]$
\end{tabular}
\caption{Irreducible matrix representations of $E$ and $E_{1/2}$ for the orbital and spin degrees of freedom, respectively. $\Gamma_{E_{1/2}}$ is given in the helicity basis with $\epsilon=e^{i2\pi/3}$.}
\label{St1}
\end{table}
For the multi-particle $ve_xe_y$ ground state configuration, the irreducible matrix representation $\Gamma_{\lambda\kappa}^{(j)}(R)$ can be decomposed into its orbital and spin components for each particle, i.e. $\Gamma_{\lambda\kappa}^{(j)}(R)=\left[(\mathbb{1}\otimes\Gamma_{E_{1/2}})\otimes(\Gamma_E\otimes\Gamma_{E_{1/2}})\otimes(\Gamma_E\otimes\Gamma_{E_{1/2}})\right]_{\lambda\kappa}^{(j)}(R)$. In this form, application of the projection operator \cite{Tinkham2003} on each basis function,
\begin{equation}
\mathcal{P}^{(j)}f_\kappa^j=(I_j/h)\sum_R\sum_\lambda^{I_j}\chi^{(j)}(R)^*\Gamma_{\lambda\kappa}^{(j)}(R)f_\lambda^j,\label{13}
\end{equation}
yields the symmetry-adapted basis functions belonging to the $j^{\textrm{th}}$ representation of the ground state, where $I_j$ is the dimension of the $j^{\textrm{th}}$ irreducible representation, $h$ is the order of the group, and the characters $\chi^{(j)}(R)$ of the $C_{3\nu}$ double group are given in Table \ref{Table1}. This gives us a prescription for generating all the partners of any basis function belonging to a given representation; a minimal orbital-only illustration is sketched below. Further combinations of these symmetry-adapted basis functions are then formed according to the spin configurations listed in Table \ref{St2} to finally obtain all the quartet and doublet wave functions of the $ve^2$ configuration listed in Table II of the main text. The wave functions for the $uve$ excited-state quartet (q2) are produced in the same way.
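As a concrete illustration of the projection technique of Eq.~(\ref{13}), the following minimal orbital-only sketch (our own; it uses the six unbarred elements of the single group, the $\Gamma_E$ matrices of Table~\ref{St1}, and the characters of Table~\ref{Table1}) recovers the decomposition $E\otimes E=A_1\oplus A_2\oplus E$ of a two-particle orbital product space:
\begin{verbatim}
import numpy as np

# Gamma_E matrices for the unbarred elements of C_3v (Table S1)
c, s = -0.5, np.sqrt(3) / 2
G = {"E":   np.eye(2),
     "C3+": np.array([[c, -s], [s, c]]),
     "C3-": np.array([[c, s], [-s, c]]),
     "sv1": np.array([[1, 0], [0, -1]]),
     "sv2": np.array([[c, -s], [-s, -c]]),
     "sv3": np.array([[c, s], [s, -c]])}

# Characters of the single-group irreps of C_3v
chars = {"A1": {"E": 1, "C3+": 1, "C3-": 1, "sv1": 1, "sv2": 1, "sv3": 1},
         "A2": {"E": 1, "C3+": 1, "C3-": 1, "sv1": -1, "sv2": -1, "sv3": -1},
         "E":  {"E": 2, "C3+": -1, "C3-": -1, "sv1": 0, "sv2": 0, "sv3": 0}}
dims = {"A1": 1, "A2": 1, "E": 2}

# Character projector on the 4-dim product space {e_x,e_y} (x) {e_x,e_y};
# the trace of each projector equals the dimension of the projected subspace
for j in ("A1", "A2", "E"):
    P = sum(chars[j][R] * np.kron(G[R], G[R]) for R in G) * dims[j] / 6.0
    print(j, int(round(np.trace(P))))  # A1: 1, A2: 1, E: 2
\end{verbatim}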
\setlength{\tabcolsep}{0.5pt}
\renewcommand{\arraystretch}{1.2}
\begin{table}[!hb]
\centering
\begin{tabular}{|c|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|C{0.5cm}|}
\hline
& $E$ & $C_3^+$ & $C_3^-$ & $\sigma_{\nu 1}$ & $\sigma_{\nu 2}$ & $\sigma_{\nu 3}$ & $\bar{E}$ & ${\bar{C}_3^+}$ & $\bar{C}_3^-$ & $\bar{\sigma}_{\nu 1}$ & $\bar{\sigma}_{\nu 2}$ & $\bar{\sigma}_{\nu 3}$\\
\hline
$A_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \rowcolor{gray!10}
$A_2$ & 1 & 1 & 1 & -1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 & -1\\
$E$ & 2 & -1 & -1 & 0 & 0 & 0 & 2 & -1 & -1 & 0 & 0 & 0 \\ \rowcolor{gray!10}
$E_{1/2}$ & 2 & 1 & 1 & 0 & 0 & 0 & -2 & -1 & -1 & 0 & 0 & 0 \\
$\prescript{1}{}E_{3/2}$ & 1 & -1 & -1 & i & i & i & -1 & 1 & 1 & -i & -i & -i \\ \rowcolor{gray!10}
$\prescript{2}{}E_{3/2}$ & 1 & -1 & -1 & -i & -i & -i & -1 & 1 & 1 & i & i & i \\
\hline
\end{tabular}
\caption{Character table of the $C_{3\nu}$ double group. The orbital (spin) degrees of freedom of the three-particle states transform according to the irreducible representations in the first (last) three rows.}
\label{Table1}
\end{table}
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.1}
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|l|}
\hline
\multicolumn{4}{|c|}{$D^{1/2}\otimes D^{1/2}\otimes D^{1/2}$} \\
\hline
$\Gamma_s$ & $S$ & $m_s$ & \multicolumn{1}{c|}{$\psi_{S}^{m_s}$} \\
\hline
\multirow{4}{*}{$D^{3/2}$} & \multirow{4}{*}{$3/2$} & $+3/2$ & $|\alpha\alpha\alpha\rangle$ \\
& & $+1/2$ & $|\alpha\alpha\beta\rangle+|\alpha\beta\alpha\rangle+|\beta\alpha\alpha\rangle$ \\
& & $-1/2$ & $|\beta\beta\alpha\rangle+|\beta\alpha\beta\rangle+|\alpha\beta\beta\rangle$ \\
& & $-3/2$ & $|\beta\beta\beta\rangle$ \\
\hline
\multirow{2}{*}{$D^{1/2}$} & \multirow{2}{*}{$1/2$} & $+1/2$ & $|\alpha\beta\alpha\rangle-|\beta\alpha\alpha\rangle$ \\
& & $-1/2$ & $|\beta\alpha\beta\rangle-|\alpha\beta\beta\rangle$ \\
\hline
\multirow{2}{*}{$D^{1/2}$} & \multirow{2}{*}{$1/2$} & $+1/2$ & $|\alpha\beta\alpha\rangle+|\beta\alpha\alpha\rangle-2|\alpha\alpha\beta\rangle$ \\
& & $-1/2$ & $|\beta\alpha\beta\rangle+|\alpha\beta\beta\rangle-2|\beta\beta\alpha\rangle$ \\
\hline
\end{tabular}
\caption{Free-space spin configurations of three holes in terms of spin-up ($\alpha$) and spin-down ($\beta$) states, reduced into the irreducible representations of a quartet $D^{3/2}$ and two doublets $D^{1/2}$ (normalization factors omitted).}
\label{St2}
\end{table}
\section{Spin-orbit assisted transitions amongst dark doublet states}
We show the spin-orbit coupling matrix elements between all doublet manifolds $\textrm{d1}{-}\textrm{d}5$ in Table \ref{St3}, obtained from Eq.~(1) of our manuscript with the symmetry-adapted basis functions given in Table I. The spin-orbit coupling parameters perpendicular and parallel to the $C_3$ axis of the defect are represented by $\lambda_\bot$ and $\lambda_{z}$, respectively. Each element of the matrix is evaluated as $\langle \Psi_i || \mathscr{H}_{SO} || \Psi_j \rangle$, where $\Psi_i$ and $\Psi_j$ are the wave functions given as the row and column headings. We omit the dark doublets, much higher in energy, lying in between the excited quartet states q1 and q2. These will either decay to the lowest excited quartet state (q1) or cascade along the doublet ladder to the five lower doublet states. The key point to notice is that, as shown in Table \ref{St3}, all doublet states except d5 have spin-orbit-allowed transitions to the lowest doublet d1. The d5 doublet, however, can reach d1 through the intervening doublets and, just like d1, also has a strong transition rate into the ground-state spin $m_s=\pm 3/2$ levels, which assists the optical spin-polarization process. Therefore, any higher-lying doublet states omitted from this fine structure will follow the general paths shown in our manuscript and will not affect the dominant spin-polarization channel, identified in our manuscript as proceeding through the d1 doublet. Note that d1 is energetically the closest doublet to the ground state (as shown above) and is also the only one connected to the q1 quartet by a directly allowed spin-orbit assisted transition.
\setlength{\tabcolsep}{0.5pt}
\renewcommand{\arraystretch}{1.2}
\begin{table*}[!ht]
\centering
\begin{tabular}{|c|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|C{0.8cm}|}
\hline
& $\Psi_{\textrm{d}1}^1$ & $\Psi_{\textrm{d}1}^2$ & $\Psi_{\textrm{d}1}^3$ & $\Psi_{\textrm{d}1}^4$ & $\Psi_{\textrm{d}2}^1$ & $\Psi_{\textrm{d}2}^2$ & $\Psi_{\textrm{d}3}^1$ & $\Psi_{\textrm{d}3}^2$ & $\Psi_{\textrm{d}3}^3$ & $\Psi_{\textrm{d}3}^4$ & $\Psi_{\textrm{d}4}^1$ & $\Psi_{\textrm{d}4}^2$ & $\Psi_{\textrm{d}5}^1$ & $\Psi_{\textrm{d}5}^2$ & $\Psi_{\textrm{d}5}^3$ & $\Psi_{\textrm{d}5}^4$ \\
\hline
$\Psi_{\textrm{d}1}^1$ & $-\frac{\lambda_z}{2}$ & 0 & 0 & 0 & 0 & $\frac{-\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & $\frac{i\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}1}^2$ & 0 & $-\frac{\lambda_z}{2}$ & 0 & 0 & $\frac{-\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & $\frac{i\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 & 0 \\
$\Psi_{\textrm{d}1}^3$ & 0 & 0 & $\frac{\lambda_z}{2}$ & 0 & 0 & 0 & $\frac{-i\lambda_\bot}{2\sqrt{2}}$ & $\frac{-i\lambda_\bot}{2\sqrt{2}}$ & 0 & $\frac{-\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 & 0 & 0 \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}1}^4$ & 0 & 0 & 0 & $\frac{\lambda_z}{2}$ & 0 & 0 & $\frac{i\lambda_\bot}{2\sqrt{2}}$ & $\frac{i\lambda_\bot}{2\sqrt{2}}$ & 0 & $\frac{-\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 & 0 & 0 \\
$\Psi_{\textrm{d}2}^1$ & 0 & $\frac{-\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{i\lambda_z}{\sqrt{3}}$ & 0 & 0 & $\frac{-i\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}2}^2$ & $\frac{-\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{i\lambda_z}{\sqrt{3}}$ & $\frac{i\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 & 0 \\
$\Psi_{\textrm{d}3}^1$ & 0 & 0 & $\frac{i\lambda_\bot}{2\sqrt{2}}$ & $\frac{-i\lambda_\bot}{2\sqrt{2}}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{-\lambda_\bot}{2\sqrt{2}}$ & $\frac{\lambda_\bot}{2\sqrt{2}}$ \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}3}^2$ & 0 & 0 & $\frac{i\lambda_\bot}{2\sqrt{2}}$ & $\frac{-i\lambda_\bot}{2\sqrt{2}}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{-\lambda_\bot}{2\sqrt{2}}$ & $\frac{\lambda_\bot}{2\sqrt{2}}$ \\
$\Psi_{\textrm{d}3}^3$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}3}^4$ & 0 & 0 & $\frac{-\lambda_\bot}{2}$ & $\frac{-\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{-i\lambda_\bot}{2}$ & $\frac{-i\lambda_\bot}{2}$ \\
$\Psi_{\textrm{d}4}^1$ & 0 & $\frac{-i\lambda_\bot}{2}$ & 0 & 0 & $\frac{-i\lambda_z}{\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{-\lambda_\bot}{2}$ & 0 & 0 \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}4}^2$ & $\frac{-i\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 & $\frac{-i\lambda_z}{\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{\lambda_\bot}{2}$ & 0 & 0 & 0 \\
$\Psi_{\textrm{d}5}^1$ & 0 & 0 & 0 & 0 & 0 & $\frac{-i\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & $\frac{\lambda_\bot}{2}$ & $\frac{\lambda_z}{2}$ & 0 & 0 & 0 \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}5}^2$ & 0 & 0 & 0 & 0 & $\frac{i\lambda_\bot}{2\sqrt{3}}$ & 0 & 0 & 0 & 0 & 0 & $\frac{-\lambda_\bot}{2}$ & 0 & 0 & $\frac{\lambda_z}{2}$ & 0 & 0 \\
$\Psi_{\textrm{d}5}^3$ & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{-\lambda_\bot}{2\sqrt{2}}$ & $\frac{-\lambda_\bot}{2\sqrt{2}}$ & 0 & $\frac{i\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 & $\frac{-\lambda_z}{2}$ & 0 \\ \rowcolor{gray!10}
$\Psi_{\textrm{d}5}^4$ & 0 & 0 & 0 & 0 & 0 & 0 & $\frac{\lambda_\bot}{2\sqrt{2}}$ & $\frac{\lambda_\bot}{2\sqrt{2}}$ & 0 & $\frac{i\lambda_\bot}{2}$ & 0 & 0 & 0 & 0 & 0 & $\frac{-\lambda_z}{2}$\\
\hline
\end{tabular}
\caption{Spin-orbit matrix elements amongst the dark doublet states. The spin-orbit parameters of the defect along the $C_3$ axis and in the basal plane are given by $\lambda_z$ and $\lambda_\bot$, respectively.}
\label{St3}
\end{table*}
\section{Spin-spin interaction}
\subsection{Spherical tensor components}
The spin dipole-dipole operator, given in terms of the single-particle operators as $S_d=\bm{s}^i\cdot \bm{s}^j-3\left(\bm{s}^i\cdot \bm{\hat{r}}^{ij}\right)\left(\bm{s}^j\cdot \bm{\hat{r}}^{ij}\right)$, can be expressed as $\left\{ A+B+C+D+E+F\right\}$ using the following spherical tensor components,
\begin{align}
&A=-4\sqrt{\pi/5}\; Y_2^0\ s_z^i s_z^j, \quad B=\sqrt{\pi/5}\; Y_2^0\left(s_-^is_+^j+s_+^is_-^j\right),\nonumber\\
&C,D=\mp\sqrt{6\pi/5}\; Y_2^{\mp 1}\left(s_\pm^is_z^j+s_z^is_{\pm}^j\right),\nonumber\\
&E,F=-\sqrt{6\pi/5}\; Y_2^{\mp 2}s_{\pm}^is_{\pm}^j.\label{14}
\end{align}
The orbital parts of the $A$ and $B$ terms involving the spherical harmonic $Y_2^0$ belong to the $A_1$ symmetry, whereas all other terms belong to the $E$ symmetry. Since the ground-state wave functions (Table II of the main text) possess $A_2$ orbital symmetry, only the $A$ and $B$ terms of Eq.~(\ref{14}) contribute to the zero-field spin splitting of the ground state; however, for the q2 excited state with $E$ orbital symmetry and the corresponding spin symmetries listed in Table II of the main text, all terms can contribute to the splitting.
We first calculate the matrix elements of $S_d$ for each wave function listed in Table II of the main text by direct evaluation of its spin components. The remaining spatial dependence of the matrix elements can then be analyzed through the Wigner-Eckart theorem using the spatial components of the spherical tensors listed above.
\subsection{Ground state zero-field spin splitting}
In the main text, we report the zero-field splitting (ZFS) of the ground state in the compact form
\begin{equation}
\gamma_g=\gamma_0 \langle\phi^{A_2}_{ve^2}||I_2||\phi^{A_2}_{ve^2}\rangle/\sqrt{10},\label{15}
\end{equation}
where $I_2$ is an irregular solid harmonic of second rank, i.e. $I_l^m=\sqrt{4\pi/(2l+1)}Y_l^m/r^{l+1}$ and $\gamma_0=\mu_0g^2\mu_B^2/(4\pi)$. In its open form, it can be written as
\begin{equation}
\gamma_g=\gamma_0 \sqrt{\frac{\pi}{5}}\sum_{i>j} \langle ve_xe_y||\frac{Y_{2,ij}^0}{r_{ij}^3}||ve_xe_y\rangle,\label{16}
\end{equation}
where $Y_{2,ij}^0/r_{ij}^3$ can be treated as a pair operator. In terms of the direct and exchange integrals \cite{Tinkham2003,Mahan2000}, the expectation value of any pair operator $F$ is given by,
\begin{align}\begin{split}
\langle X||F||X\rangle=\sum_{i>j}&\left\{\langle a_i a_j |f(i,j)|a_i a_j\rangle\right.\\
&\left.-\langle a_i a_j |f(i,j)|a_j a_i\rangle\right\},
\end{split}\label{17}
\end{align}
where $F=\sum_{i>j}f(i,j)$ is the total pair operator and $X$ is the multi-particle antisymmetrized product (Slater determinant) defined as $AP\left[a_1(1)a_2(2)\cdots a_N(N)\right]$. In the case of the ground-state ZFS, these are $f(i,j)=Y_{2,ij}^0/r_{ij}^3$ and $X=AP\left[v(1)e_x(2)e_y(3)\right]$.
A quantitative estimate of ZFS splitting can be obtained by switching back to the atomic orbitals,
\begin{align}
\gamma_g&=\langle\Psi_g^{1,2}|\mathcal{H}_{S}|\Psi_g^{1,2}\rangle_{\pm\frac{3}{2}}-\langle\Psi_g^{3,4}| \mathcal{H}_{S}|\Psi_g^{3,4}\rangle_{\pm\frac{1}{2}}\nonumber\\
&=\frac{\gamma_0}{4}\left[\eta_{ad}\langle r_{ad}^{-3}\rangle(1-3\cos^2\theta_{ad})+\eta_{ab}\langle r_{ab}^{-3}\rangle\right],\label{18}
\end{align}
where $\eta_{ab}=1.443$ and $\eta_{ad}=1.557$ are the respective weight factors of the expectation value $\langle Y_{2,ij}^0/r_{ij}^3\rangle$ originating from the total $ab$ and $ad$ pair contributions of the MOs, obtained by evaluating the determinantal multi-particle wave functions according to Eq.~(\ref{17}) and using the explicit forms of $u$, $v$, and $e_{x,y}$ given in Eq.~(\ref{6}). This equation can also be written in a more familiar form, starting from the spin dipole-dipole interaction, as
\begin{equation}
\gamma_g=\frac{3}{2}\gamma_0\left\langle \frac{1-3\cos^2\theta}{r_{ij}^3}\right\rangle_{\phi_{ve^2}^{A_2}}\left[S_{z}^2-\frac{1}{3}S(S+1)\right],\label{19}
\end{equation}
where $\gamma_g$ becomes $D[S_z^2-S(S+1)/3]$. So far we assumed that all the charge of the unpaired electrons is localized on the neighboring carbon atoms. However, as previously reported \cite{Mizuochi2002}, only $62.3\%$ of the total charge is localized on the neighboring carbon atoms, which reduces $\gamma_g$ by roughly a factor $\tau=1-(0.377)^2=0.858$, i.e., $\gamma_g\rightarrow\tau\,\gamma_g$.
Evaluating Eq.~(\ref{18}) with these weight factors and the structure parameters calculated via DFT, i.e., $r_{ab}=3.3563$~\AA, $r_{ad}=3.3567$~\AA, and $\theta_{ad}=35.259^\circ$, and accounting for the missing charge results in a ground-state ZFS of $2\gamma_g\approx -68$~MHz ($D<0$) for an h-site $\textrm{V}_{\textrm{Si}}^-$ defect, in good agreement with the experimentally observed values \cite{Kraus_NP14,Carter_preprint15}.
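This estimate is straightforward to reproduce; the sketch below (our own, using SI constants) evaluates Eq.~(\ref{18}) with the parameters quoted above, including the reduction factor $\tau$:
\begin{verbatim}
import numpy as np

mu0_over_4pi = 1e-7                      # T m / A
g, muB, h = 2.0023, 9.274e-24, 6.626e-34
gamma0 = mu0_over_4pi * (g * muB)**2     # J m^3

eta_ab, eta_ad = 1.443, 1.557            # weight factors from Eq. (18)
r_ab, r_ad = 3.3563e-10, 3.3567e-10      # m
theta_ad = np.deg2rad(35.259)

bracket = (eta_ad * r_ad**-3 * (1 - 3 * np.cos(theta_ad)**2)
           + eta_ab * r_ab**-3)
gamma_g = gamma0 / 4 * bracket           # J

tau = 1 - 0.377**2                       # ~62.3% charge localization
print(2 * tau * gamma_g / h / 1e6)       # ~ -67 MHz, close to the quoted -68 MHz
\end{verbatim}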
\section{Lambda system}
A $\Lambda$-type three-level system can be created by a magnetic field transverse to the $C_3$-axis. Such a field will mix states in the ground and excited manifolds in different ways. This is because in the ground state manifold there is a small spin-spin splitting between states with $|S_z|=3/2$ and $|S_z|=1/2$, whereas the corresponding states in the excited manifold are split by the much larger spin-orbit interaction. We assume a weak enough magnetic field such that the coupling of the spin states is much smaller than the spin-orbit term $\Delta_e$. This allows the eigenstates in the excited manifold to remain in the form shown in the main text (without a B-field). There are several choices for the composition of the $\Lambda$ system. Below we present some of these options. In all cases, the lower levels are eigenstates of $\hat{S}_x$, which in terms of the states in Table I of the main text are given by
\begin{align}
\Psi_{g,x}^{1} &\simeq \left[\frac{(1-i)}{4}\Psi_{\textrm{g}}^1+\frac{(1+i)}{4}\Psi_{\textrm{g}}^2 + \sqrt{\frac{3}{8}} (\Psi_{\textrm{g}}^3+\Psi_{\textrm{g}}^4) \right]\nonumber
\\
\Psi_{g,x}^{2} &\simeq \left[-\frac{\sqrt{3}}{4}(1+i)\Psi_{\textrm{g}}^1-\frac{\sqrt{3}}{4}(1-i)\Psi_{\textrm{g}}^2- \frac{1}{\sqrt{8}}(\Psi_{\textrm{g}}^3-\Psi_{\textrm{g}}^4) \right]\nonumber
\\
\Psi_{g,x}^{3} &\simeq \left[\frac{\sqrt{3}}{4}(1-i)\Psi_{\textrm{g}}^1+\frac{\sqrt{3}}{4}(1+i)\Psi_{\textrm{g}}^2- \frac{1}{\sqrt{8}}(\Psi_{\textrm{g}}^3+\Psi_{\textrm{g}}^4) \right]\nonumber
\\
\Psi_{g,x}^{4} &\simeq \left[-\frac{(1+i)}{4}\Psi_{\textrm{g}}^1-\frac{(1-i)}{4}\Psi_{\textrm{g}}^2 + \sqrt{\frac{3}{8}} (\Psi_{\textrm{g}}^3-\Psi_{\textrm{g}}^4) \right]\nonumber,
\end{align}
in descending order of the $\langle S_x\rangle$ value ($3/2$, $1/2$, $-1/2$, $-3/2$), where, for simplicity, we have ignored a small correction from the ZFS.
In the first approach for a $\Lambda$ system we can select states with the same weight of $|\uparrow\uparrow\uparrow\rangle$, e.g., the states $\{\Psi_{g,x}^{1},\Psi_{g,x}^{4}\}$ or $\{\Psi_{g,x}^{2},\Psi_{g,x}^{3}\}$. In the excited-state manifold we then select $\Psi_e^+$, which is defined in the main text and has a well-defined projection of spin along the $z$ (or $C_3$) axis, due to the suppression of Zeeman mixing originating from the large SO interaction. The effect is similar to the selection rules in self-assembled quantum dot electron-trion systems under a Voigt B field. Because of this composition of the $\Lambda$ system, the polarization of the two transitions is the same. The frequency, however, is different, and that degree of freedom can be used as the `handle' with which the emitted photon can be manipulated.
An alternate scheme for a $\Lambda$ system is to select as lower levels the eigenstates of $\hat{S}_x$ with eigenvalues -1/2 and 3/2, $\Psi_{g,x}^{3}$ and $\Psi_{g,x}^{1}$. In the excited state manifold the relevant state is then $(\Psi_{\textrm{q2}}^7+\Psi_{\textrm{q2}}^8)/\sqrt{2}$, i.e., again mixing to the states with different spin projection along $z$ has been ignored in the excited manifold due to the large SO splitting. We note that here the two transitions have the same polarization but, unlike the scheme above, different dipole moments, originating from the different coefficients of $\Psi_{\textrm{g}}^3$, $\Psi_{\textrm{g}}^4$ in the states $\Psi_{g,x}^{3}$ and $\Psi_{g,x}^{1}$.
\section{Introduction}
Iron-chalcogenide superconductors are commonly derived from the selenium-deficient compound FeSe$_{1-x}$, which has a transition temperature $T_{\rm c}\simeq8$~K.\cite{Hsu2009, Pomjakushina2009} Higher $T_{\rm c}$'s can be accessed by applying hydrostatic pressure $p$,\cite{Medvedev2009} by inducing chemical pressure,\cite{Mizuguchi2009, Sales2009} or by intercalating alkali atoms between the Fe$_2$Se$_2$-layers, yielding $A_x$Fe$_{2-y}$Se$_2$ ($A$ = K, Rb, Cs).\cite{Guo2010, Krzton2011, Li2011} Besides superconductivity, many iron-chalcogenides feature coexisting magnetic order, where subtle modifications of the crystal structure lead to drastic changes in the superconducting and magnetic properties. This is the case for the compound Rb$_{x}$Fe$_{2-y}$Se$_{2}$, which is superconducting below $T_{\rm c}\simeq33$~K and antiferromagnetic below a N\'eel temperature $T_{\rm N}$ as high as 500~K to 540~K.\cite{Shermadini2011, Liu2011} In addition to these superconducting and magnetic orders, iron-vacancy ordering accompanied by a structural distortion at the temperature $T_{\rm s}$, as well as phase separation into magnetic and nonmagnetic domains at the temperature $T_{\rm p}$, are observed.\cite{Pomjakushin2011}
\\\indent
Although it was shown by various groups that $A_x$Fe$_{2-y}$Se$_2$ exhibits bulk superconductivity,\cite{Ying2011, Tsurkan2011, Bosma2012} muon spin rotation ($\mu$SR) experiments reported that only a minor volume fraction of $\sim$10\% of the sample is superconducting, whereas $\sim$90\% of the volume is antiferromagnetic.\cite{Shermadini2012} From neutron experiments the minority phase was identified to have the $I4/mmm$ space group with a small in-plane lattice constant $a$ and a large out-of-plane lattice constant $c$.\cite{Pomjakushin2012} It was discussed whether $A_x$Fe$_{2-y}$Se$_2$ should be treated as a filamentary or granular superconductor.\cite{Shen2011} Besides, the mesoscopic phase separation in Rb$_x$Fe$_{2-y}$Se$_2$ was reported to prevail down to the nanoscale.\cite{Ricci2011,Ricci2011_b, Lei2011_c, Yan2011, Wang2011, Bosak2011} Microscopic techniques probing the stoichiometry of these distinct phases yield on average the composition Rb$_2$Fe$_4$Se$_5$ for the antiferromagnetic vacancy-ordered majority phase (245-phase) and the composition Rb$_{1-x}$Fe$_2$Se$_2$ for the superconducting Rb-deficient minority phase (122-phase).\cite{Texier2012, Speller2012} Thus, the studied material may be pictured as follows: the minority 122-phase is superconducting and is embedded in an antiferromagnetic matrix of the vacancy-ordered 245-phase.
\\\indent
Interestingly, it was observed that some post-annealed iron-chalcogenide samples may become superconducting despite their insulating as-grown behavior.\cite{Ryu2011, Ozaki2011, Lei2011, Han2012} A possible change in the vacancy ordering and the related phase separation was discussed as the origin of the observed changes in the electronic properties.\cite{Han2012} Obviously, by carefully tuning the annealing conditions, one may gain direct control of the phase separation in $A_x$Fe$_{2-y}$Se$_2$ and thereby of the superconducting and magnetic properties. In order to examine this scenario and to investigate the influence of vacancy ordering and phase separation on superconductivity and magnetism, we performed an extended study of thermally treated Rb$_{x}$Fe$_{2-y}$Se$_2$ single crystals.
\\\indent
\section{Experimental details}
\begin{figure}[b!]
\centering
\includegraphics[width=\linewidth]{fig1.pdf}
\caption{(color online) Differential heat $\Delta Q$ for a Rb$_x$Fe$_{2-y}$Se$_2$ single crystal recorded between 400 and 600~K with a constant heating rate of 20 K/min. Three distinct peaks are observed, related to the three onset temperatures $T_{\rm p}\simeq489$~K, $T_{\rm N}\simeq517$~K, and $T_{\rm s}\simeq540$~K (see text). The three annealing temperatures 413~K, 488~K, and 563~K were chosen to post-anneal the as-grown Rb$_x$Fe$_{2-y}$Se$_2$ crystals for the subsequent experiments.}
\label{fig1}
\end{figure}
A set of Rb$_x$Fe$_{2-y}$Se$_2$ single crystals with nominal composition Rb$_{0.85}$Fe$_{1.90}$Se$_2$ was grown by the Bridgman method, similarly as described in Refs.~\onlinecite{Krzton2011} and \onlinecite{Krzton2012}. Here, a mixture of high purity Fe, Se, and Rb (at least 99.99\%; Alfa Aesar) was sealed in an evacuated quartz ampoule. This ampoule, protected by a surrounding evacuated quartz tube, was heated to $1030~^\circ$C for 2~h. The melt was cooled first with $-6~^\circ$C/h to $750~^\circ$C and finally to room temperature at a fast rate of $-200~^\circ$C/h. After synthesis the ampoule was transferred to a glove box and opened there to protect the crystals from degradation in air.
\\\indent
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig2.pdf}
\caption{(color online) STEM images of an as-grown Rb$_x$Fe$_{2-y}$Se$_2$ single crystal taken with the direction of the electron beam perpendicular to the tetragonal $c$-axis. Picture (a) was taken on a square of $\sim1.5\times1.5~\mu$m$^2$, (b) on a square of $\sim250\times250$~nm$^2$, and (c) on a square of $\sim50\times50$~nm$^2$. The atomic composition of the darker regions was found to correspond to Rb$_{0.5}$Fe$_{2}$Se$_{2}$, whereas the brighter regions have the Fe- and Rb-deficient composition Rb$_{0.4}$Fe$_{1.6}$Se$_{2}$.}
\label{fig2}
\end{figure}
\begin{table}[t!]
\caption{List of all as-grown and annealed Rb$_x$Fe$_{2-y}$Se$_2$ single-crystal samples investigated by the various experimental techniques in this work. Samples with almost identical $T_{\rm c}$ were annealed at a temperature $T_{\rm ann}$ for a time $t_{\rm ann}$. The as-grown samples are those with $t_{\rm ann}=0$~h. The sample exhibiting the highest $T_{\rm c}$ among the as-grown crystals was annealed at 488~K and is labeled A$^*_{488}[t_{\rm ann}]$.}
\label{table0}
\begin{tabular}{p{25mm} c c c }
\hline\hline
Sample & $T_{\rm ann}$ & $t_{\rm ann}$ & Experiment \\\hline\hline
A$_{413}[0~{\rm h}]$ & 413~K & 0~h & magnetometry \\
A$_{413}[3~{\rm h}]$ & 413~K & 3~h & magnetometry \\
A$_{413}[36~{\rm h}]$ & 413~K & 36~h & magnetometry \\\hline
A$_{488}[0~{\rm h}]$ & 488~K & 0~h & magnetometry \\
A$_{488}[3~{\rm h}]$ & 488~K & 3~h & magnetometry \\
A$_{488}[36~{\rm h}]$ & 488~K & 36~h & magnetometry \\\hline
A$_{563}[0~{\rm h}]$ & 563~K & 0~h & magnetometry \\
A$_{563}[3~{\rm h}]$ & 563~K & 3~h & magnetometry \\
A$_{563}[36~{\rm h}]$ & 563~K & 36~h & magnetometry \\\hline
A$^*_{488}[0~{\rm h}]$ & 488~K & 0~h & magnetometry \\
A$^*_{488}[3~{\rm h}]$ & 488~K & 3~h & magnetometry \\
A$^*_{488}[36~{\rm h}]$ & 488~K & 36~h & magnetometry \\\hline
B$_{488}[0~{\rm h}]$ & 488~K & 0~h & transport \\
B$_{488}[3~{\rm h}]$ & 488~K & 3~h & transport \\\hline
C$_{488}[0~{\rm h}]$ & 488~K & 0~h & $\mu$SR \\
C$_{488}[60~{\rm h}]$ & 488~K & 60~h & $\mu$SR \\\hline
D$_{488}[0~{\rm h}]$ & 488~K & 0~h & STEM \\
D$_{488}[3~{\rm h}]$ & 488~K & 3~h & STEM \\\hline\hline
\end{tabular}
\end{table}
In order to study the thermal evolution of the mesoscopic phase separation, an as-grown Rb$_x$Fe$_{2-y}$Se$_2$ single crystal was initially characterized by differential scanning calorimetry (DSC). With DSC, the differential amount of heat $\Delta Q$ required to increase the sample temperature $T$ by $\Delta T$ with respect to a reference is recorded.\cite{Wunderlich1990} Measurements were performed with a \emph{Netzsch} DSC 204F1 system by heating from 290~K to 670~K at a constant rate of 20~K/min. Both sample and reference were always maintained at the same temperature throughout the experiment. In Fig.~\ref{fig1} the measured $\Delta Q$ in the temperature range between 400~K and 600~K for the as-grown single crystal is presented. The three peaks at the temperatures $T_{\rm s}$, $T_{\rm N}$, and $T_{\rm p}$ are related to three distinct onset temperatures of this system: (i) $T_{\rm s}\simeq540$~K corresponds to the onset temperature of iron-vacancy ordering, at which the unit cell transforms from the high-temperature $I4/mmm$ structure into the low-temperature superstructure $I4/m$, (ii) $T_{\rm N}\simeq517$~K is the N\'eel temperature, and (iii) $T_{\rm p}\simeq489$~K corresponds to the onset temperature of the phase separation between the coexisting $I4/mmm$ and $I4/m$ phases.\cite{Pomjakushin2012}
\\\indent
The mesoscopic phase separation of as-grown Rb$_x$Fe$_{2-y}$Se$_2$ is visualized with scanning transmission electron microscopy (STEM) at room temperature using a {\it Titan} 80-300 {\it Cubed} instrument operating at $300$~keV. The specimens for the STEM investigations were carefully prepared by a focused ion beam (FIB) to avoid degradation upon air exposure. The STEM images taken with the electron beam perpendicular to the tetragonal $c$-axis are shown in Fig.~\ref{fig2}. The brightness of the STEM images makes it possible to distinguish the actual composition of the sample. According to the results of energy dispersive x-ray spectroscopy (EDXS), the composition of the darker and brighter regions is Rb$_{0.5}$Fe$_{2}$Se$_{2}$ and Rb$_{0.4}$Fe$_{1.6}$Se$_{2}$, respectively.
\\\indent
Although the transition temperatures $T_{\rm N}$ and $T_{\rm s}$ both correspond to thermodynamic ordering phenomena in this system, the onset of phase separation $T_{\rm p}$ is of different origin. It can be presumed that the thermal history of this material crucially influences the phase separation in the sample. This raises the question whether it might be possible to tune the phase separation in Rb$_x$Fe$_{2-y}$Se$_2$ by proper thermal treatment, and by that to control the superconducting and magnetic properties. In order to study the influence of post annealing on the properties of Rb$_x$Fe$_{2-y}$Se$_2$ single crystals, a set of samples was annealed with an \emph{Elite Thermal Systems Ltd.} single-zone high-temperature furnace at three annealing temperatures characteristic of the studied samples (see Fig.~\ref{fig1}): (i) $T\simeq413$~K (well below $T_{\rm p}$), (ii) $T\simeq488$~K (just at $T_{\rm p}$), and (iii) $T\simeq563$~K (well above $T_{\rm p}$). For this purpose the samples were loaded into the furnace, which was heated from room temperature at a fast rate of $\sim10$~K/min. After the desired annealing temperature $T_{\rm ann}$ was reached, the temperature was kept constant for a time $t_{\rm ann}$, after which the samples were removed from the hot furnace and rapidly cooled back to room temperature.
\\\indent
As-grown and annealed samples were systematically studied by various experimental methods. The superconducting and normal-state magnetization was studied with a \emph{Quantum Design} Magnetic Property Measurement System (MPMS) XL with a differential superconducting quantum interference device (SQUID) equipped with a reciprocating sample option (RSO). In order to prevent the samples from degradation in air, all investigated crystals were vacuum sealed in quartz ampoules of $5$~mm diameter and approximately $10$~cm length. The plate-like crystals were oriented with their crystallographic $c$-axis along the ampoule axis and were fixed between two quartz cylinders of approximately $5$~cm length. The diameter of the crystals was adapted to the inner diameter of the quartz tube. Such a sample mounting provides a homogeneous surrounding of the examined crystal and produces only a minor background signal during the measurements.
\\\indent
Resistivity measurements with the electrical current flowing in the $ab$-plane were performed with a \emph{Quantum Design} Physical Property Measurement System (PPMS). The Rb$_x$Fe$_{2-y}$Se$_2$ single crystal was cleaved along the $ab$-plane in argon atmosphere inside a glove box and contacted on the cleaved surface by the four-probe technique with gold wires (50~$\mu$m diameter) and silver epoxy. The as-grown sample was sealed directly after the initial measurements inside a quartz ampoule and was subsequently annealed and remeasured. By this procedure we ensured that the measurement geometry stayed exactly the same for all the measurements. The $\mu$SR investigations with magnetic fields applied along the $c$-axis were performed with the General Purpose Surface (GPS) $\mu$SR instrument located at the $\pi$M3 beam line at the Swiss Muon Source (S$\mu$S) at the Paul Scherrer Institute. The $\mu$SR time spectra were analyzed using the free software package MUSRFIT.\cite{Suter} STEM measurements were done as described above. A list of the various as-grown and annealed samples studied in this work is presented in Table~\ref{table0}.
\\\indent
\section{Results}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig3.pdf}
\caption{(color online) Normalized zfc magnetization $M(T)/M(0)$ for the Rb$_x$Fe$_{2-y}$Se$_2$ single crystals A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ in a magnetic field $\mu_0 H=0.3$~mT applied along the $c$-axis. The panels present the data for the as-grown samples with $t_{\rm ann}=0$~h (a), annealed samples for $t_{\rm ann}=3$~h (b), and for $t_{\rm ann}=36$~h (c). The respective insets show close-ups of the onset of diamagnetism.}
\label{fig3}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig4.pdf}
\caption{(color online) Zero-field cooled (zfc) magnetization curves for the Rb$_x$Fe$_{2-y}$Se$_2$ single crystals A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ measured at 2.0~K as a function of $H_{\rm int}$ along the $c$-axis. The corresponding $t_{\rm ann}$ of the different panels are $t_{\rm ann}=0$~h (a), $t_{\rm ann}=3$~h (b), and $t_{\rm ann}=36$~h (c). }
\label{fig4}
\end{figure}
In Fig.~\ref{fig3} the zero-field cooled (zfc) magnetization, measured in a magnetic field $\mu_0H=0.3~{\rm mT}$ applied along the $c$-axis, is shown for the samples A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ (see Table~\ref{table0}) with $t_{\rm ann}=0$, 3, and 36~h. The magnetization $M$ was normalized to the individual linearly extrapolated value of $M(0)$. This allows us to directly compare the curves of the various crystals to each other despite their different masses and shapes. In a first step the properties of the pristine as-grown samples ({\it i.e.}, for $t_{\rm ann}=0$~h) were investigated [see Fig.~\ref{fig3}(a)]. After these measurements, the samples were annealed at $T_{\rm ann}$ for 3~h and were remeasured afterwards [see Fig.~\ref{fig3}(b)], then again annealed at $T_{\rm ann}$ for another 33~h (leading to a total annealing time of $t_{\rm ann}=36$~h), and finally remeasured [see Fig.~\ref{fig3}(c)]. During all the measurements and annealings the samples were kept inside the sealed ampoules. The as-grown samples A$_{413}[0~{\rm h}]$, A$_{488}[0~{\rm h}]$, and A$_{563}[0~{\rm h}]$ show very similar behavior, exhibiting superconducting diamagnetism with a rather broad transition width. Only the sample A$^*_{488}[0~{\rm h}]$ exhibits a slightly higher $T_{\rm c}$ and a narrower transition width. The insets to Fig.~\ref{fig3} present a close-up of the onset of diamagnetism. Importantly, the transition temperature $T_{\rm c}$ clearly changes for most of the samples after annealing for $t_{\rm ann}=3$~h and for $t_{\rm ann}=36$~h. Only $T_{\rm c}$ for the sample A$_{413}[t_{\rm ann}]$ is essentially independent of $t_{\rm ann}$. Note that both samples A$_{488}[36~{\rm h}]$ and A$^*_{488}[36~{\rm h}]$ exhibit a clearly narrower transition to the superconducting state with a higher $T_{\rm c}$. In contrast, sample A$_{563}[36~{\rm h}]$ shows a drastically lower $T_{\rm c}$. The transition width $\Delta T_{\rm c}$ was defined as the inverse of the maximal slope of the normalized magnetization $M/M(0)$ as a function of $T$:
\begin{equation}
\label{eq1}
\Delta T_{\rm c}=\left(\frac{1}{M(0)}\cdot{\rm max}\left[\frac{dM}{dT}\right]\right)^{-1}.
\end{equation}
The estimated values of $T_{\rm c}$ and $\Delta T_{\rm c}$ for all the samples studied are listed in Table~\ref{table1}. In order to quantify the change of a measured property $P$ with annealing time $t_{\rm ann}$, we introduce the quantity:
\begin{equation}
\label{eq2}
\delta_{t_{\rm ann}}(P)=\frac{P(t_{\rm ann})-P(0~{\rm h})}{P(0~{\rm h})}.
\end{equation}
With this formula, a clear increase of $T_{\rm c}$ by $\sim5-6\%$ is found for the samples A$_{488}[36~{\rm h}]$ and A$^*_{488}[36~{\rm h}]$ in comparison to the as-grown specimens (see Table~\ref{table1}), whereas $T_{\rm c}$ decreases for sample A$_{563}[36~{\rm h}]$ by $\simeq27.3\%$ and remains almost constant for sample A$_{413}[36~{\rm h}]$. The relative transition width $\Delta T_{\rm c}/T_{\rm c}$ of the samples A$_{413}[t_{\rm ann}]$ and A$_{563}[t_{\rm ann}]$ changes only slightly with annealing, whereas for the samples A$_{488}[t_{\rm ann}]$ and A$^*_{488}[t_{\rm ann}]$ a clear improvement is seen. Note that the transition for A$^*_{488}[t_{\rm ann}]$ becomes almost ideally sharp with long annealing.
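For concreteness, Eqs.~(\ref{eq1}) and (\ref{eq2}) can be implemented as follows (a minimal sketch; the function names are ours):
\begin{verbatim}
import numpy as np

def transition_width(T, M):
    # Eq. (1): inverse of the maximal slope of the zfc magnetization
    # normalized to its extrapolated M(0) value; T in K (ascending)
    Mn = M / M[0]                        # ~1 at low T, ~0 above T_c
    return 1.0 / np.abs(np.gradient(Mn, T)).max()

def delta_ann(P_ann, P_asgrown):
    # Eq. (2): relative change of a property P with annealing
    return (P_ann - P_asgrown) / P_asgrown

# Example with the T_c values of sample A_488 (as-grown vs. 36 h annealed)
print(delta_ann(31.8, 30.0))             # ~ +0.06, i.e. the +6.0% quoted above
\end{verbatim}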
\\\indent
\begin{table}[t!]
\caption{Evolution of the transition temperature $T_{\rm c}$ and transition width $\Delta T_{\rm c}$ [see Eq.~(\ref{eq1})] of the samples A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ with annealing time $t_{\rm ann}$. The changes with annealing $\delta_{t_{\rm ann}}(T_{\rm c})$ and $\delta_{t_{\rm ann}}(\Delta T_{\rm c})$ were calculated applying Eq.~(\ref{eq2}).}
\label{table1}
\begin{tabular}{p{20mm} c c c c c}
\hline\hline
Sample & $T_{\rm c}$ & $\delta_{t_{\rm ann}}(T_{\rm c})$ & $\Delta T_{\rm c}$ & $\delta_{t_{\rm ann}}(\Delta T_{\rm c})$ & $\Delta T_{\rm c}/T_{\rm c}$ \\
& (K) & & (K) & & \\\hline\hline
A$_{413}[0~{\rm h}]$ & 30.1(1) & & 14(1) & & 47(2)\% \\
A$_{413}[3~{\rm h}]$ & 30.1(1) & $\pm0.0$\% & 14(1) & $\pm0$\% & 47(2)\% \\
A$_{413}[36~{\rm h}]$ & 29.5(1) & $-$2.0\% & 16(1) & +14\% & 54(2)\% \\\hline
A$_{488}[0~{\rm h}]$ & 30.0(1) & & 16(1) & & 53(3)\% \\
A$_{488}[3~{\rm h}]$ & 31.7(1) & +5.7\% & 9.5(5) & $-$41\% & 30(1)\% \\
A$_{488}[36~{\rm h}]$ & 31.8(1) & +6.0\% & 7.0(4) & $-$56\% & 22(1)\% \\\hline
A$_{563}[0~{\rm h}]$ & 30.0(1) & & 17(1) & & 57(3)\% \\
A$_{563}[3~{\rm h}]$ & 28.0(1) & $-$6.7\% & 15(1) & $-$12\% & 54(3)\% \\
A$_{563}[36~{\rm h}]$ & 21.8(1) & $-$27.3\% & 10(1) & $-$41\% & 46(4)\% \\\hline
A$^*_{488}[0~{\rm h}]$ & 31.6(1) & & 13(1) & & 41(2)\% \\
A$^*_{488}[3~{\rm h}]$ & 33.1(1) & +4.7\% & 2.2(2) & $-$83\% & 6.6(3)\% \\
A$^*_{488}[36~{\rm h}]$ & 33.3(1) & +5.4\% & 2.1(2) & $-$84\% & 6.3(3)\% \\
\hline\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\caption{Evolution of the superconducting susceptibility $\chi_{\rm sc}(\mu_0H_{\rm int})$ [see Eq.~(\ref{eq5})] of the samples A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ with annealing time $t_{\rm ann}$.}
\label{table2}
\begin{tabular}{p{25mm} c c }
\hline\hline
Sample & $\chi_{\rm sc}(1~{\rm mT})$ & $\chi_{\rm sc}(10~{\rm mT})$ \\\hline\hline
A$_{413}[0~{\rm h}]$ & -0.954(6) & -0.172(8) \\
A$_{413}[3~{\rm h}]$ & -0.962(5) & -0.174(9) \\
A$_{413}[36~{\rm h}]$ & -0.920(7) & -0.134(5) \\\hline
A$_{488}[0~{\rm h}]$ & -0.915(7) & -0.175(9) \\
A$_{488}[3~{\rm h}]$ & -0.955(9) & -0.453(9) \\
A$_{488}[36~{\rm h}]$ & -0.984(3) & -0.881(4) \\\hline
A$_{563}[0~{\rm h}]$ & -0.906(3) & -0.124(6) \\
A$_{563}[3~{\rm h}]$ & -0.908(6) & -0.162(8) \\
A$_{563}[36~{\rm h}]$ & -0.977(4) & -0.322(9) \\\hline
A$^*_{488}[0~{\rm h}]$ & -0.976(3) & -0.240(9) \\
A$^*_{488}[3~{\rm h}]$ & -0.989(2) & -0.940(2) \\
A$^*_{488}[36~{\rm h}]$ & -0.990(2) & -0.954(2) \\
\hline\hline
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig5.pdf}
\caption{(color online) In-plane resistivity $\rho$ of the Rb$_x$Fe$_{2-y}$Se$_2$ samples B$_{488}[0~{\rm h}]$ and B$_{488}[3~{\rm h}]$. The pronounced hump in the normal-state resistivity of the as-grown sample B$_{488}[0~{\rm h}]$ decreases dramatically after annealing and the superconducting $T_{\rm c}$ increases from 31.5~K to 33.1~K.}
\label{fig5}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig6.pdf}
\caption{(color online) Resistivity of the samples B$_{488}[t_{\rm ann}]$ for magnetic fields between 0 and 9~T, varied in steps of 0.5~T, applied in the $ab$-plane and along the $c$-axis. The measurements were performed for the as-grown sample B$_{488}[0~{\rm h}]$ [panels (a) and (b)] and for the annealed sample B$_{488}[3~{\rm h}]$ [panels (c) and (d)]. The dashed lines denote 50\% of the extrapolated normal-state resistivity, which was used as the criterion to determine $H_{\rm c2}(T)$, shown in panel (e). The transition temperature $T_{\rm c}$ increases by 1.6~K as a result of annealing. The solid lines are guides to the linear part of the $H_{\rm c2}(T)$ curves, used in the WHH approximation [Eq.~(\ref{eq6})].}
\label{fig6}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig7.pdf}
\caption{(color online) Measured magnetic moment $m(T)$ of pristine and annealed Rb$_x$Fe$_{2-y}$Se$_2$ single crystals in the temperature range between 50 and 370~K for magnetic fields of 1~T (a) and 3~T (b), applied along the $c$-axis.}
\label{fig7}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig8.pdf}
\caption{(color online) (a) Antiferromagnetic susceptibility $\chi_{\rm AFM}(T)$ in the normal state of pristine and annealed Rb$_x$Fe$_{2-y}$Se$_2$ single crystals, determined from the data shown in Fig.~\ref{fig7} using Eq.~(\ref{eq8}). For clarity, the curves representing the four different annealing sets are vertically shifted with respect to each other. Whereas no change of $\chi_{\rm AFM}(T)$ upon annealing is found for sample A$_{413}[t_{\rm ann}]$, for all other samples $\chi_{\rm AFM}(T)$ increases with increasing $t_{\rm ann}$. (b) Ferromagnetic component $M_{\rm FM}(T)$, which remains constant as a function of $t_{\rm ann}$ for sample A$_{413}[t_{\rm ann}]$. For all other samples $M_{\rm FM}(50~{\rm K})$ increases substantially with increasing $t_{\rm ann}$.}
\label{fig8}
\end{figure}
\begin{table*}[t!]
\caption{Evolution of $T_{\rm c}$, $-dH_{\rm c2}^{||c}/dT$, and $-dH_{\rm c2}^{||ab}/dT$ with annealing time $t_{\rm ann}$ for fields applied parallel to the $c$-axis and to the $ab$-plane for samples B$_{488}[0~{\rm h}]$ and B$_{488}[3~{\rm h}]$. The changes with annealing $\delta_{t_{\rm ann}}(T_{\rm c})$, $\delta_{t_{\rm ann}}(dH_{\rm c2}^{||c}/dT)$, and $\delta_{t_{\rm ann}}(dH_{\rm c2}^{||ab}/dT)$ were calculated applying Eq.~(\ref{eq2}).}
\label{table3}
\begin{tabular}{ p{30mm} c c c c c c c c c c c c}
\hline\hline
Sample & $T_{\rm c}$ & $\delta_{t_{\rm ann}}(T_{\rm c})$ & $-\mu_0dH_{\rm c2}^{||c}/dT$ & $\delta_{t_{\rm ann}}(dH_{\rm c2}^{||c}/dT)$ & $-\mu_0dH_{\rm c2}^{||ab}/dT$ & $\delta_{t_{\rm ann}}(dH_{\rm c2}^{||ab}/dT)$ \\
& (K) & & (T/K) & & (T/K) & \\\hline\hline
B$_{488}[0~{\rm h}]$ & 31.54(5) & & 1.58(3) & & 4.6(1) & \\
B$_{488}[3~{\rm h}]$ & 33.07(5) & +4.9\% & 1.59(2) & +0.6\% & 5.8(1) & +26.1\% \\
\hline\hline
\end{tabular}
\end{table*}
\begin{table*}[t!]
\caption{Evolution of $H_{\rm c2}^{||c}(0)$, $H_{\rm c2}^{||ab}(0)$, and $\gamma_H$ with annealing time $t_{\rm ann}$ for samples B$_{488}[0~{\rm h}]$ and B$_{488}[3~{\rm h}]$. The changes with annealing $\delta_{t_{\rm ann}}(H_{\rm c2}^{||c}(0))$, $\delta_{t_{\rm ann}}(H_{\rm c2}^{||ab}(0))$, and $\delta_{t_{\rm ann}}(\gamma_H)$ were calculated applying Eq.~(\ref{eq2}).}
\label{table4}
\begin{tabular}{ p{30mm} c c c c c c c c c c c c}
\hline\hline
Sample & $\mu_0H_{\rm c2}^{||c}(0)$ & $\delta_{t_{\rm ann}}(H_{\rm c2}^{||c}(0))$ & $\mu_0H_{\rm c2}^{||ab}(0)$ & $\delta_{t_{\rm ann}}(H_{\rm c2}^{||ab}(0))$ & $\gamma_H$ & $\delta_{t_{\rm ann}}(\gamma_H)$ \\
& (T) & & (T) & & & \\\hline\hline
B$_{488}[0~{\rm h}]$ & 34.6(7) & & 101(3) & & 2.9(2) & \\
B$_{488}[3~{\rm h}]$ & 36.5(5) & +5.5\% & 133(3) & +31.7\% & 3.6(2) & +24.1\% \\
\hline\hline
\end{tabular}
\end{table*}
Field dependent magnetization measurements were performed to further investigate the superconducting properties of the samples A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$. In Fig.~\ref{fig4} the corresponding zfc magnetization curves measured at $T=2.0$~K with variable $t_{\rm ann}$ are presented. The internal magnetic field $H_{\rm int}$ was calculated by correcting the applied magnetic field $H$ for the demagnetization of the samples
\begin{equation}
\label{eq3}
H_{\rm int}=H-DM,
\end{equation}
where $D$ is the demagnetization factor. The dimensions of the crystals used in this experiment were \mbox{$\sim2\times2\times0.5$~mm$^3$}, yielding $D\simeq0.8$ for the measurements with $H$ applied along the $c$-axis, the shortest dimension.\cite{Osborn1945} Hence, it was possible to determine the magnetization $M$ as a function of $H_{\rm int}$. In Fig.~\ref{fig4}(a) the $M(H_{\rm int})$ data for $t_{\rm ann}=0$~h are presented. All samples show rather poor superconducting properties. Although $M(H_{\rm int})\simeq -H_{\rm int}$ for low magnetic fields (almost ideal diamagnetism), the $M(H_{\rm int})$ curves strongly deviate from this linear behavior for fields exceeding $1-2$~mT, indicating a rather small out-of-plane lower critical field $H_{\rm c1}^{||c}$. By means of the relation between $H_{\rm c1}^{||c}$ and the in-plane magnetic penetration depth $\lambda_{ab}$:
\begin{equation}
\label{eq4}
\mu_0H_{\rm c1}^{||c}=\frac{\Phi_0}{4\pi\lambda_{ab}^2}\left(\ln\kappa_{ab}+\frac{1}{2}\right),
\end{equation}
it was argued that a very small $\mu_0H_{\rm c1}\simeq0.3$~mT is consistent with a large $\lambda\simeq1-2$~$\mu$m.\cite{Bosma2012} However, this behavior changes drastically with annealing, as seen in Figs.~\ref{fig4}(b) and \ref{fig4}(c). Although the measurements for sample A$_{413}[t_{\rm ann}]$ reveal no obvious change with increasing $t_{\rm ann}$, the samples A$_{488}[t_{\rm ann}]$ and A$^*_{488}[t_{\rm ann}]$ both show a considerably stronger diamagnetic response at higher $H_{\rm int}$, indicating an improved screening of the applied magnetic field. Defining $H_{\rm c1}$ as the magnetic field at which the curves deviate from ideal diamagnetism, the best sample A$^*_{488}[t_{\rm ann}]$ yields a considerably larger $\mu_0H_{\rm c1}\simeq10$~mT compared to the estimate $\lesssim1$~mT for the as-grown samples. Such a large value of 10~mT is consistent with $\lambda\simeq270$~nm, assuming a realistic Ginzburg-Landau parameter $\kappa_{ab}\simeq100$ in Eq.~(\ref{eq4}). For a quantitative comparison of the superconducting properties of the different samples, the superconducting susceptibility $\chi_{\rm sc}(\mu_0H_{\rm int})$ was estimated using the relation:
\begin{equation}
\label{eq5}
\chi_{\rm sc}(\mu_0H_{\rm int})=\frac{M(\mu_0H_{\rm int})}{H_{\rm int}}.
\end{equation}
In Table~\ref{table2} $\chi_{\rm sc}(1~{\rm mT})$ and $\chi_{\rm sc}(10~{\rm mT})$ are listed. Comparing $\chi_{\rm sc}(\mu_0H_{\rm int})$ for the sample A$_{413}[t_{\rm ann}]$ with increasing $t_{\rm ann}$, no improvement of the diamagnetic response was found with annealing. However, for all other samples A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ both $\chi_{\rm sc}(1~{\rm mT})$ and $\chi_{\rm sc}(10~{\rm mT})$ increase substantially with increasing $t_{\rm ann}$. Whereas the improved screening at 10~mT indicates an increase of the critical current density, the changes observed in very low magnetic fields are rather related to an increase of $H_{\rm c1}$ connected with a decrease of $\lambda$. This suggests that the changes induced by annealing directly influence the density and the mobility of the charge carriers in the superconducting phase.
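The field scales discussed here are easily checked numerically; the sketch below (our own, with constants in SI units) applies the demagnetization correction of Eq.~(\ref{eq3}) and inverts Eq.~(\ref{eq4}) for the penetration depth:
\begin{verbatim}
import numpy as np

Phi0 = 2.0678e-15                         # flux quantum (Wb)

def H_internal(H, M, D=0.8):
    # Demagnetization correction, Eq. (3); D ~ 0.8 for H || c (thin plate)
    return H - D * M

def lambda_from_Hc1(mu0Hc1, kappa=100.0):
    # Invert Eq. (4) for lambda_ab (in m); mu0Hc1 in tesla,
    # kappa is the assumed Ginzburg-Landau parameter
    return np.sqrt(Phi0 * (np.log(kappa) + 0.5) / (4 * np.pi * mu0Hc1))

print(lambda_from_Hc1(0.3e-3) * 1e9)      # ~ 1700 nm for the as-grown estimate
print(lambda_from_Hc1(10e-3) * 1e9)       # ~ 290 nm, of the order of the
                                          # ~270 nm quoted for A*_488
\end{verbatim}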
\\\indent
Besides magnetization, resistivity experiments are also expected to exhibit pronounced changes with annealing. Resistivity studies provide independent and complementary information to the magnetization experiments. Whereas magnetization measurements probe the global macroscopic properties of a sample, its resistivity is sensitive to microscopic currents flowing through this mesoscopically phase-separated material. Figure~\ref{fig5} shows the in-plane resistivity $\rho$ of the Rb$_x$Fe$_{2-y}$Se$_2$ single crystal, measured in zero magnetic field on cooling from 300 to 5~K. The measurements were performed on the as-grown sample (B$_{488}[0~{\rm h}]$) and were repeated after annealing at $488$~K for 3~h (B$_{488}[3~{\rm h}]$) using the same contacts. A clear reduction of $\rho$ in the normal state was found, together with an increase of $T_{\rm c}$ from 31.5~K in the pristine sample to 33.1~K in the annealed sample (see Table~\ref{table3}), in very good agreement with the increase observed by magnetization (see Table~\ref{table1}). The hump in $\rho(T)$ between 100 and 150~K for the as-grown sample B$_{488}[0~{\rm h}]$ seen in Fig.~\ref{fig5} was earlier interpreted as a possible metal-insulator transition.\cite{Han2012} Such a transition would likely be related to the mesoscopic phase separation present in Rb$_x$Fe$_{2-y}$Se$_2$. In this picture the minority phase forms percolative paths along which the electrical current may flow.\cite{Shen2011} Interestingly, this hump is strongly suppressed by annealing at 488~K for 3~h, indicating that the normal-state electrical conductivity is enhanced in the annealed sample.
\\\indent
In Figs.~\ref{fig6}(a)-(d) the low-temperature resistivity measurements performed on the pristine and annealed Rb$_x$Fe$_{2-y}$Se$_2$ single crystal B$_{488}[t_{\rm ann}]$ for various magnetic fields applied along the $c$-axis and in the $ab$-plane are presented. The transition temperature $T_{\rm c}$ is reduced with increasing $H$ for all configurations. In order to quantify this phase transition, the upper critical field $H_{\rm c2}$ was determined by following the field and temperature at which the resistivity drops to 50\% of its normal-state value [dashed lines in Figs.~\ref{fig6}(a)-(d)]. Figure~\ref{fig6}(e) shows the estimated upper critical fields along the $c$-axis [$H_{\rm c2}^{||c}(T)$] and in the $ab$-plane [$H_{\rm c2}^{||ab}(T)$] for the as-grown and the annealed sample. An increase of $T_{\rm c}(H)$ with annealing is observed in the whole temperature-field phase diagram. The slopes $-\mu_0dH_{\rm c2}^{||\alpha}/dT$ ($\alpha=c,ab$) of the phase boundaries for sufficiently high $H$ are summarized in Table~\ref{table3}. From these, the upper critical fields at zero temperature were estimated by applying the Werthamer-Helfand-Hohenberg (WHH) approximation\cite{Werthamer}
\begin{equation}
\label{eq6}
H_{\rm c2}(0)=-0.69\cdot T_{\rm c}\frac{dH_{\rm c2}}{dT},
\end{equation}
where $-dH_{\rm c2}/dT$ is defined as the maximal slope of the $H_{\rm c2}(T)$ curve in the vicinity of $T_{\rm c}$. Here we considered the linear part of the curve well below but not too far from $T_{\rm c}$, emphasized in Fig.~\ref{fig6}(e), which yields a more reliable estimate for the upper critical field of superconductors with an upturn curvature close to $T_{\rm c}$.\cite{Bukowski2009} Interestingly, the upper critical field anisotropy
\begin{equation}
\label{eq7}
\gamma_H=H_{\rm c2}^{||ab}/H_{\rm c2}^{||c},
\end{equation}
increases with annealing by 24.1\% (see Table~\ref{table4}). This suggests that thermally treated iron-chalcogenide superconductors with improved macroscopic physical properties are more anisotropic.
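The entries of Table~\ref{table4} follow directly from Table~\ref{table3} via Eq.~(\ref{eq6}); a minimal sketch (our own) reproduces them within the quoted uncertainties:
\begin{verbatim}
def Hc2_WHH(Tc, slope):
    # WHH estimate of the zero-temperature upper critical field, Eq. (6);
    # Tc in K, slope = -dHc2/dT in T/K taken near Tc
    return 0.69 * Tc * slope

# (Tc, slope||c, slope||ab) for B_488 as-grown and after 3 h annealing
for Tc, sc, sab in [(31.54, 1.58, 4.6), (33.07, 1.59, 5.8)]:
    Hc, Hab = Hc2_WHH(Tc, sc), Hc2_WHH(Tc, sab)
    print(f"Hc2||c = {Hc:5.1f} T, Hc2||ab = {Hab:6.1f} T, "
          f"gamma_H = {Hab / Hc:.2f}")
# ~34.4 T / 100.1 T / 2.9 (as-grown) and ~36.3 T / 132.4 T / 3.6 (annealed)
\end{verbatim}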
\\\indent
Besides investigating the properties in the superconducting state, it is also important to monitor the changes in the normal-state properties of the Rb$_x$Fe$_{2-y}$Se$_2$ single crystals as a result of post annealing. In Fig.~\ref{fig7} we present the magnetic moment $m$ measured in 1~T and in 3~T for A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ with $t_{\rm ann}=0$, 3, and 36~h. The magnetic moment in the normal state, recorded between 50 and 370~K, systematically increases with $t_{\rm ann}$ for {\it all} investigated samples. In the normal state the major component of the magnetic moment stems from the antiferromagnetic phase. However, a small ferromagnetic contribution is present in all Rb$_x$Fe$_{2-y}$Se$_2$ crystals, most likely due to a ferromagnetic impurity phase. From the measurements presented in Fig.~\ref{fig7} we determined the antiferromagnetic susceptibility $\chi_{\rm AFM}(T)$ according to
\begin{equation}
\label{eq8}
\chi_{\rm AFM}(T)=\frac{1}{\mathcal{M}}\cdot\frac{m(\mu_0H)-m(\mu_0H')}{H-H'},
\end{equation}
where $\mathcal{M}$ denotes the sample mass. Here, $\mu_0H$ and $\mu_0H'$ are 1 and 3~T, respectively. The ferromagnetic contribution to the magnetization is assumed to be constant in field and is derived accordingly
\begin{equation}
\label{eq9}
M_{\rm FM}(T)=\frac{m(\mu_0H)}{\mathcal{M}}-\chi_{\rm AFM}(T)\cdot H.
\end{equation}
The antiferromagnetic susceptibility for all the as-grown samples and those annealed for 3~h and for 36~h is shown in Fig.~\ref{fig8}(a). The ferromagnetic component of the magnetization $M_{\rm FM}(T)$ is shown in Fig.~\ref{fig8}(b). Sample A$_{413}[t_{\rm ann}]$ remains unaffected by annealing, as already observed in the zfc magnetization experiments performed in the superconducting state. However, for the samples A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ the high-field susceptibility $\chi_{\rm AFM}(T)$ increases substantially with increasing $t_{\rm ann}$. In Table~\ref{table5} we list the observed values of $\chi_{\rm AFM}(50~{\rm K})$ for all samples and $t_{\rm ann}$. Obviously, the change in $\chi_{\rm AFM}(50~{\rm K})$ is most pronounced for the sample A$_{563}[t_{\rm ann}]$, annealed at 563~K. In addition, the ferromagnetic component $M_{\rm FM}(T)$ is almost unchanged for sample A$_{413}[t_{\rm ann}]$, but increases for the samples A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$ with increasing $t_{\rm ann}$. Again, the change in $M_{\rm FM}(50~{\rm K})$ is maximal for sample A$_{563}[t_{\rm ann}]$.
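The decomposition of Eqs.~(\ref{eq8}) and (\ref{eq9}) amounts to a two-point separation of the field-linear and field-independent parts of the measured moment; a minimal sketch (our own; the argument names are ours) reads:
\begin{verbatim}
MU0 = 4e-7 * 3.141592653589793     # vacuum permeability (T m / A)

def chi_M_decomposition(m1, m3, mass, B1=1.0, B3=3.0):
    # Eqs. (8) and (9): split the moments m(B1), m(B3) (in A m^2, mass in kg)
    # into the antiferromagnetic susceptibility (m^3/kg) and the
    # field-independent ferromagnetic magnetization (A m^2/kg)
    H1, H3 = B1 / MU0, B3 / MU0    # fields in A/m
    chi_afm = (m1 - m3) / (H1 - H3) / mass
    M_fm = m1 / mass - chi_afm * H1
    return chi_afm, M_fm
\end{verbatim}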
\\\indent
\begin{table*}[t!]
\caption{Evolution of $\chi_{\rm AFM}(50~{\rm K})$ [see Eq.~(\ref{eq8})] and $M_{\rm FM}(50~{\rm K})$ [see Eq.~(\ref{eq9})] with annealing time $t_{\rm ann}$ for the samples A$_{413}[t_{\rm ann}]$, A$_{488}[t_{\rm ann}]$, A$_{563}[t_{\rm ann}]$, and A$^*_{488}[t_{\rm ann}]$. The changes with annealing $\delta_{t_{\rm ann}}(\chi_{\rm AFM}(50~{\rm K}))$ and $\delta_{t_{\rm ann}}(M_{\rm FM}(50~{\rm K}))$ were calculated applying Eq.~(\ref{eq2}).}
\label{table5}
\begin{tabular}{p{25mm} c c c c }
\hline\hline
Sample & $\chi_{\rm AFM}(50~{\rm K})$ & $\delta_{t_{\rm ann}}(\chi_{\rm AFM}(50~{\rm K}))$ & $M_{\rm FM}(50~{\rm K})$ & $\delta_{t_{\rm ann}}(M_{\rm FM}(50~{\rm K}))$ \\
& $(10^{-8}~{\rm m}^3/{\rm kg})$ & & $(10^{-3}~{\rm Am}^2/{\rm kg})$ & \\\hline\hline
A$_{413}[0~{\rm h}]$ & 2.020(1) & & 2.79(1) & \\
A$_{413}[3~{\rm h}]$ & 2.006(1) & $-$0.7\% & 2.75(1) & $-$1.4\% \\
A$_{413}[36~{\rm h}]$ & 2.044(1) & +1.2\% & 2.79(1) & $\pm0.0$\% \\\hline
A$_{488}[0~{\rm h}]$ & 1.808(1) & & 2.77(1) & \\
A$_{488}[3~{\rm h}]$ & 1.883(1) & +4.1\% & 5.91(1) & +113\% \\
A$_{488}[36~{\rm h}]$ & 1.953(1) & +8.0\% & 10.73(1) & +287\% \\\hline
A$_{563}[0~{\rm h}]$ & 2.400(1) & & 6.91(1) & \\
A$_{563}[3~{\rm h}]$ & 2.598(1) & +8.3\% & 17.25(1) & +150\% \\
A$_{563}[36~{\rm h}]$ & 2.778(1) & +15.8\% & 28.78(1) & +317\% \\\hline
A$^*_{488}[0~{\rm h}]$ & 1.796(1) & & 2.67(1) & \\
A$^*_{488}[3~{\rm h}]$ & 1.883(1) & +4.8\% & 3.43(1) & +28.5\% \\
A$^*_{488}[36~{\rm h}]$ & 1.947(1) & +8.4\% & 15.92(1) & +496\% \\\hline\hline
\end{tabular}
\end{table*}
The effect of annealing on the magnetic and superconducting properties of Rb$_x$Fe$_{2-y}$Se$_2$ single crystals was further investigated by means of transverse-field (TF) and zero-field (ZF) $\mu$SR experiments. The $\mu$SR measurements are based on the observation of the time evolution of the muon spin polarization. (For a detailed description of the $\mu$SR technique see, \emph{e.g.}, Ref.~\onlinecite{Amato}.) For these experiments two mosaics of samples were prepared: \mbox{(i) C$_{488}[0~{\rm h}]$}, consisting of three as-grown Rb$_x$Fe$_{2-y}$Se$_2$ single crystals, and (ii) C$_{488}[60~{\rm h}]$, consisting of three Rb$_x$Fe$_{2-y}$Se$_2$ single crystals simultaneously annealed at 488~K for 60~h. Previous $\mu$SR experiments revealed that Rb$_x$Fe$_{2-y}$Se$_2$ consists of a magnetic ($\sim90\%$) and a non-magnetic superconducting ($\sim10\%$) phase.\cite{Shermadini2012} In order to investigate the superconducting properties, a field of 70~mT was applied transverse to the initial muon spin polarization and parallel to the crystallographic $c$-axis. In this TF configuration the muons probe the local magnetic field distribution $P(B)$ of the vortex lattice formed in the superconducting areas. Simultaneously, the signal stemming from the magnetic regions of the sample is suppressed, since the superposition of the strong internal field and the weak external field leads to a fast depolarization and to a loss of asymmetry.
\\\indent
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig9.pdf}
\caption{(color online) Results of the TF $\mu$SR investigation of as-grown and 60~h annealed Rb$_x$Fe$_{2-y}$Se$_2$ single crystals, C$_{488}[0~{\rm h}]$ and C$_{488}[60~{\rm h}]$, in a magnetic field of 70~mT applied along the $c$-axis. (a) $P(B)$ for both samples at 5~K. The line shape for C$_{488}[60~{\rm h}]$ is more asymmetric compared to that for the as-grown sample C$_{488}[0~{\rm h}]$. (b) and (c) $\mu$SR time spectra at 40~K for samples C$_{488}[0~{\rm h}]$ and C$_{488}[60~{\rm h}]$, respectively. The thin solid line is a fit to the data assuming a single relaxation rate $\sigma$. The thick solid line is the envelope of the oscillating function. The data for the annealed sample C$_{488}[60~{\rm h}]$ exhibit a significantly faster damping.}
\label{fig9}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig10.pdf}
\caption{(color online) Results of the ZF $\mu$SR investigation of as-grown and 60~h annealed Rb$_x$Fe$_{2-y}$Se$_2$ single crystals, C$_{488}[0~{\rm h}]$ and C$_{488}[60~{\rm h}]$. All data were modeled assuming two internal magnetic fields $B_{\rm int,1}\approx 1$~T and $B_{\rm int,2}\approx 3$~T.}
\label{fig10}
\end{figure}
Consistent with the above presented macroscopic magnetization and resistivity results, the intrinsic superconducting properties are also significantly improved after annealing. The lineshape of the local magnetic field distribution $P(B)$ of C$_{488}[60~{\rm h}]$ shown in Fig.~\ref{fig9}(a) is more asymmetric as compared to that of C$_{488}[0~{\rm h}]$, indicating the presence of a more homogeneous and more regular vortex lattice in the superconducting regions. Note that the sharp peak of $P(B)$ at 70~mT stems from the signal of background muons, whose spins rotate simply in the applied magnetic field. A more detailed analysis of the obtained $P(B)$ yields that the shielding of the magnetic field for C$_{488}[60~{\rm h}]$ is substantially larger, as reflected by a reduction of the first moment $\langle B\rangle$ of $P(B)$ by $\sim5\%$. This is surprising, since the microscopic in-plane magnetic penetration depth $\lambda_{ab}(0)\simeq258(2)$~nm,\cite{Shermadini2012} as well as the total asymmetry of the superconducting part, remain essentially unchanged after 60~h annealing [see Fig.~\ref{fig9}(b)]. These results imply that the volume fractions of the magnetic and the non-magnetic phase are unaffected by annealing, in contradiction to the conclusions of a neutron diffraction study reporting a reduction of the minority phase after annealing of Rb$_x$Fe$_{2-y}$Se$_2$ single crystals for 100~h at 488~K.\cite{Pomjakushin2012} This discrepancy might arise from the difference in $T_{\rm p}$ of the samples studied here (489~K) and in Ref.~\onlinecite{Pomjakushin2012} (475~K).
\\\indent
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{fig11.pdf}
\caption{(color online) STEM images of as-grown and annealed Rb$_x$Fe$_{2-y}$Se$_2$ single crystals D$_{488}[0~{\rm h}]$ and D$_{488}[3~{\rm h}]$. The microstructure caused by mesoscopic phase separation in the annealed sample D$_{488}[3~{\rm h}]$, shown in panel (b), is modified compared to the one of the as-grown sample D$_{488}[0~{\rm h}]$, shown in panel (a).}
\label{fig11}
\end{figure*}
Importantly, the normal-state relaxation rate $\sigma$ of the $\mu$SR time spectra derived from the data at 40~K (well above $T_{\rm c}$) increases drastically with $t_{\rm ann}$ [see Fig.~\ref{fig9}(b) and (c)]. Whereas $\sigma=0.141(33)~\mu$s$^{-1}$ for the as-grown sample C$_{488}[0~{\rm h}]$, the relaxation rate of the 60~h annealed sample C$_{488}[60~{\rm h}]$ is considerably larger ($\sigma=0.303(43)~\mu$s$^{-1}$). This indicates a substantially increased field inhomogeneity in the non-magnetic part of the sample. Since the volume fraction is unchanged during annealing, this suggests that the microstructure of the sample caused by mesoscopic phase separation is modified by annealing at 488~K, in such a way that the individual size of the non-magnetic regions is reduced and their number is increased, {\it but their total volume remains unaffected}.
\\\indent
In order to examine our samples for the internal magnetic field distribution when no magnetic field is applied, low-temperature ZF $\mu$SR experiments were performed on the same Rb$_x$Fe$_{2-y}$Se$_2$ single crystals. Consistent with the results of the TF experiments, the total volume of the non-magnetic regions was found to be only $\sim10\%$ of the total sample volume. In the ZF data a clear oscillating signal is found in all samples at very short time scales, as shown in Fig.~\ref{fig10}. An analysis of the time evolution of this signal revealed that two internal magnetic fields $B_{\rm int,1}\approx1$~T and $B_{\rm int,2}\approx3$~T are present in the samples. In analogy to the evolution of the magnetic volume fraction, $B_{\rm int,1}$ and $B_{\rm int,2}$ are not affected by annealing at 488~K. They are directly proportional to the iron moment in the antiferromagnetic phase. Moreover, annealing does not affect the ratio $B_{\rm int,1}/B_{\rm int,2}$ either. Hence, no changes in the internal magnetic fields were observed by $\mu$SR after annealing the as-grown Rb$_x$Fe$_{2-y}$Se$_2$ single crystals, even though the macroscopic superconducting properties were substantially improved (see Figs.~\ref{fig3} and \ref{fig4}).
\\\indent
In order to visualize microscopic changes in the phase separation of our Rb$_x$Fe$_{2-y}$Se$_2$ single crystals with annealing, additional STEM images were taken on as-grown and annealed samples D$_{488}[0~{\rm h}]$ and D$_{488}[3~{\rm h}]$ (see Fig.~\ref{fig11}). The microstructure caused by mesoscopic phase separation in the annealed sample D$_{488}[3~{\rm h}]$, shown in Fig.~\ref{fig11}(b), is modified compared to the one of the as-grown sample D$_{488}[0~{\rm h}]$, shown in Fig.~\ref{fig11}(a). Whereas only a few inclusions of the minority phase are observed at the surface of D$_{488}[0~{\rm h}]$, sample D$_{488}[3~{\rm h}]$ reveals many such inclusions in the same area. However, the inclusions of the minority phase of sample D$_{488}[3~{\rm h}]$ are in general smaller in size than those of the as-grown sample D$_{488}[0~{\rm h}]$, in agreement with the results of the above $\mu$SR experiments.
\\\indent
\section{Discussion}
The superconducting and normal-state properties of mesoscopically phase separated Rb$_x$Fe$_{2-y}$Se$_2$, where non-magnetic regions exist in a magnetic surrounding, are strikingly similar to those expected for granular superconductors. From early work on granular superconductors it is known that the macroscopic properties of such materials studied by various techniques may vary substantially, depending on the particular grain-size distribution and their coupling by Josephson links.\cite{Ambegaokar1963, Ebner1985, Clem1988} Importantly, granular superconductors may easily appear to be bulk superconductors; however, their superconducting-state parameters, such as the magnetic penetration depth $\lambda$, the coherence length $\xi$, and the lower and upper critical fields ($H_{\rm c1}$ and $H_{\rm c2}$), differ substantially from those of related non-granular superconductors. Such a scenario may also hold for mesoscopically phase separated Rb$_x$Fe$_{2-y}$Se$_2$, since various experimental techniques provide quite different values for $\lambda$. For Rb$_x$Fe$_{2-y}$Se$_2$ recent $\mu$SR studies yielded $\lambda_{ab}(0)\simeq250-260$~nm,\cite{Shermadini2012, Charnukha2012_b} in agreement with $\lambda_{ab}(0)\simeq290$~nm obtained for K$_x$Fe$_{2-y}$Se$_2$ by means of high-field nuclear magnetic resonance (NMR) experiments.\cite{Torchetti2011} These values are considerably smaller than those usually obtained by macroscopic techniques ($\lambda_{ab}(0)\simeq1.6-2.2~\mu$m).\cite{Bosma2012, Charnukha2012, Homes2012} In a mesoscopically phase separated superconductor macroscopic experiments yield an effective magnetic penetration depth which is a measure of the length scale over which the magnetic field penetrates the sample. On the other hand, $\mu$SR is a microscopic probe of the vortex state and is only sensitive to the superconducting fraction of the sample. Therefore, $\mu$SR measures a value of the magnetic penetration depth which is closer to the intrinsic value than the values usually obtained by macroscopic techniques. Since so far no single-phase superconducting $A_x$Fe$_{2-y}$Se$_2$ sample has been synthesized, it should not be excluded that granularity might be an important ingredient for the appearance of superconductivity in this system.
\\\indent
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{fig12.pdf}
\caption{(color online) Series of temperature dependent zfc magnetization measurements on a Rb$_x$Fe$_{2-y}$Se$_2$ single crystal in 0.3~mT. The curves obtained after the various post-annealings of the sample are labeled by the respective number.}
\label{fig12}
\end{figure}
As strongly suggested by the presented magnetization and resistivity data, pronounced changes of the physical properties of Rb$_x$Fe$_{2-y}$Se$_2$ are caused by tuning the annealing conditions. Whereas annealing at 413~K, well below $T_{\rm p}$, does not lead to any significant change in magnetic and transport properties, annealing just at $T_{\rm p}$, the onset of phase separation, favors the enhancement of superconductivity. Accordingly, $T_{\rm c}$ increases, the transition sharpens, the normal-state resistivity decreases, and $H_{\rm c2}$ increases. However, after annealing at 563~K, well above $T_{\rm p}$, all superconducting properties are drastically suppressed. In addition, the antiferromagnetic susceptibility and the ferromagnetic saturation magnetization of the investigated samples systematically increase. This may be related to the change in iron valency as observed in annealed K$_{0.8}$Fe$_{1.6}$Se$_2$,\cite{Simonelli2012} or to an increase of Fe-based impurity phases.
\\\indent
A recent neutron diffraction study of the Rb$_x$Fe$_{2-y}$Se$_2$ system reports a pronounced reduction of the 122 minority phase when the samples were annealed at 488~K for 100~h.\cite{Pomjakushin2012} However, the present $\mu$SR experiments yield clear evidence that the volume fraction of the two phases remains unchanged by annealing, while the field inhomogeneity in the non-magnetic parts of the sample increases substantially. This implies that the microstructure caused by mesoscopic phase separation in the sample is modified by annealing just at $T_{\rm p}$ in such a way that the size of the non-magnetic regions is reduced and the number of regions is increased, but their total volume remains unaffected. Since the $\mu$SR results clearly demonstrate that the total volume of the minority phase is constant, even after 60~h of annealing, this rearrangement of the coexisting phases leads to the conclusion that changes of the coupling between these regions must be related to the improvement of the superconducting properties. Whereas 488~K was chosen to match the onset of phase separation $T_{\rm p}\simeq489$~K in the single crystals studied here, the samples used in the neutron diffraction study had a significantly lower $T_{\rm p}\simeq475$~K.\cite{Pomjakushin2012} Therefore, the observed reduction of the minority phase found by the neutron study might be due to a partial degradation of the minority phase as a result of 100~h annealing at temperatures exceeding $T_{\rm p}$. That this scenario is reasonable is further supported by the data presented in Fig.~\ref{fig12}, where a series of magnetization measurements is shown for a Rb$_x$Fe$_{2-y}$Se$_2$ single crystal from a batch similar to the one used above. Here, the same temperature-dependent zfc magnetization measurement in a magnetic field of $\mu_0H=0.3$~mT along the $c$-axis was repeated after each subsequent annealing of the single crystal sealed in a quartz ampoule. Note that $T_{\rm c}$ of the as-grown sample is easily shifted to higher values by annealing at 488~K for a few hours. However, after the subsequent annealings during which the temperature was modestly increased up to 563~K, superconductivity is strongly suppressed, as seen by the decrease of $T_{\rm c}$ and the broadening of the transition. During the final annealing, again the optimal annealing temperature of 488~K was chosen, this time for a very long annealing time of up to 72~h. However, superconductivity did not fully recover. Obviously, the short annealings at temperatures exceeding $T_{\rm p}$ formed additional magnetic phases, a change which cannot be reversed, even by choosing a very long annealing time.
\\\indent
All changes of the superconducting and magnetic properties caused by annealing are evidently related to changes in the microstructure of the sample caused by mesoscopic phase separation in Rb$_x$Fe$_{2-y}$Se$_2$. The differences in the superconducting properties between the as-grown and annealed single crystals are likely explained by assuming that inhomogeneities (in particular phase boundaries and/or stripes) are necessary to enhance superconductivity.\cite{Bianconi1996, Moon2010, Das2011, Andersen2011, Wittlin2012} In the present case, the existing boundaries between the magnetic majority regions and non-magnetic minority regions may play the role of such inhomogeneities. Reviewing the observed changes of the superconducting and normal-state properties with annealing, it is likely that the intergrain coupling between magnetic and non-magnetic domains is crucial. Annealing of Rb$_x$Fe$_{2-y}$Se$_2$ single crystals just at $T_{\rm p}$ favors the mesoscopic phase separation in such a way that domain boundaries are further developed, improving all superconducting properties. However, if the samples are annealed at higher temperature, the superconducting phase degrades, making it more difficult to build up a percolative network favorable for superconductivity. In total, $\sim10\%$ of the sample remains superconducting in a magnetic field of 70~mT, while the macroscopic properties strongly depend on the coupling between the superconducting regions, which is itself strongly field and temperature dependent. This scenario appears similar to that of a granular superconductor, in which the macroscopic physics is directly connected to the microscopic Josephson coupling between the individual grains. In addition, all changes in the phase separation may be related to changes in crystal structure and lattice parameters.\cite{Pomjakushin2012} Thus, internal pressure on the superconducting and non-superconducting domains is likely involved in the appearance of superconductivity. Besides, metallic nano-clusters have also been reported to show enhanced superconducting properties.\cite{Kresin2012}
\section{Conclusions}
Extended magnetization and resistivity measurements of Rb$_{x}$Fe$_{2-y}$Se$_{2}$ single crystals revealed that post-annealing at a temperature well below the onset temperature of phase separation $T_{\rm p}$ changes neither the magnetic nor the superconducting properties of the crystals. Annealing at a temperature above $T_{\rm p}$ reduces the value of $T_{\rm c}$ drastically and suppresses antiferromagnetic order. However, annealing at 488~K, just at $T_{\rm p}$, leads to a substantial increase of $T_{\rm c}$ and sharpens the transition to the superconducting state. These results suggest that the superconducting properties of mesoscopically phase separated Rb$_{x}$Fe$_{2-y}$Se$_{2}$ can be tuned by the annealing temperature. In addition, $\mu$SR and STEM investigations indicate that non-magnetic regions of the sample rearrange with annealing at 488~K in such a way that their individual size is reduced and the number of regions is increased, but their total volume remains unaffected. At temperatures exceeding $T_{\rm p}$, where the majority $I4/m$ phase prevails, ferromagnetism is enhanced with annealing time, but is presumably detrimental to the formation of the superconducting phase. In conclusion, by annealing single crystals of Rb$_{x}$Fe$_{2-y}$Se$_{2}$ the microstructure of the crystals arising from mesoscopic phase separation is changed, leading to an improvement of the superconducting properties and an enhancement of $T_{\rm c}$.\\
\begin{acknowledgments}
Helpful discussions with K.~A.~M\"uller, V.~Yu.~Pomjakushin, and B.~M.~Wojek are gratefully acknowledged. S.~W.~thanks A.~Feierls and S.~R\"osch for their help during part of the experiments. The $\mu$SR measurements were performed at the Swiss Muon Source, Paul Scherrer Institute, Villigen, Switzerland. This work was supported by the Swiss National Science Foundation, the NCCR program MaNEP, the National Science Centre of Poland based on decision No.~DEC-2011/01/B/ST3/02374, and the European Regional Development Fund within the Innovative Economy Operational Programme 2007-2013 No~POIG.02.01-00-14-032/08.
\end{acknowledgments}
\section{Introduction}
Not all supernova remnants (SNRs) show the ``classical'' shell-like structure.
Some of them present instead a filled-center structure: the prototype is the
Crab Nebula, but other objects with similar properties have been discovered
since then. They are called filled-center SNRs, or Crab-like SNRs, or {\it
plerions} (Weiler \& Panagia 1978).
Plerions are recognized not just on the basis of their morphology, but also
of other properties, like:
{\it a flat power-law radio spectrum,} with a spectral index ranging from 0.0
to $-$0.3;
{\it a high radio polarization,} with a well organized pattern (not true for
all plerions);
{\it a power-law X-ray spectrum,} with photon index close to $-$2
(Asaoka \& Koyama 1990);
{\it the detection of an associated pulsar} (not true for all plerions --- see
Pacini, these Proceedings).
Although the details of the nature and structure of plerions are still unclear,
there is common agreement on the following points:
{\it a plerion is an expanding bubble, formed essentially by magnetic fields
and relativistic electrons,} and the observed synchrotron emission originates
from these two components;
{\it a continuous supply of magnetic flux and relativistic particles is
required,} in order to explain the typical synchrotron emissivities, as well as
the high frequency emission (from particles with synchrotron lifetimes shorter
than the SNR age).
In a simplified approach, one may assume that magnetic fields and particles are
uniformly distributed within the plerionic bubble: this approach (Pacini \&
Salvati 1973; Reynolds \& Chevalier 1984; Bandiera et al.\ 1984) is usually
adequate to explain the evolution of the overall nebular spectrum. However the
homogeneity assumption is likely to be incorrect. New particles and magnetic
flux are released by the associated pulsar, presumably near the pulsar wind
termination shock (Rees \& Gunn 1974; Kennel \& Coroniti 1984a,b).
Therefore the degree of homogeneity of the electron distribution depends on
how efficient the mechanisms (diffusion or advection) are by which particles
propagate through the nebula (see Amato, these Proceedings).
Also the structure of the magnetic field can be rather complex. From
considerations on the MHD relations it follows that spherical models cannot
account adequately for the field structure: it can have at most a cylindrical
symmetry (Begelman \& Li 1992), but more complicated patterns are suggested by
observations. The comparison of high resolution maps at various frequencies may
then give important clues on the structure of plerions, and on the processes
governing the evolution of the magnetic field as well as of the particle
distribution.
The outline of this paper is the following: I begin by reviewing the
classical, simplified approach to the evolution of a plerion and of its
emission; then I consider the case of the Crab Nebula, the prototype of this
class, as well as of some other plerions with characteristics different from
the Crab; finally I describe the results and perspectives of a multifrequency
study, with high spatial resolution, of the Crab Nebula and other plerions.
\section{Classical models of the evolution of the synchrotron emission}
Classical models for the evolution of the synchrotron emission from a plerion
as a whole (Pacini \& Salvati 1973; Reynolds \& Chevalier 1984; Bandiera et
al.\ 1984) are based on the original analysis of Kardashev (1962). The starting
points are the two basic synchrotron equations, that for the radiated power
from an electron with energy $E$ ($W_s=c_1B^2E^2$), and that for the typical
frequency of radiation ($\nu_s=c_2BE^2$): these formulae are averages over
pitch angles, under the assumption of isotropy. If $N(E)$ is the present
distribution of electrons, and the magnetic field $B$ is constant throughout
the nebula, it immediately follows that the synchrotron spectrum is given by
$L(\nu)=(c_1/2c_2)BEN(E)$. This relation establishes the well known connection
between the particle distribution and the radiated spectrum (if
$L(\nu)\propto\nu^{-\alpha}$, then $N(E)\propto E^{-(1+2\alpha)}$).
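For convenience, we spell out the standard change of variables behind this
relation (not given explicitly above): equating $L(\nu)\,d\nu=W_s\,N(E)\,dE$
and using $d\nu=2c_2BE\,dE$ gives
\[
L(\nu)=\frac{c_1B^2E^2\,N(E)}{2c_2BE}=\frac{c_1}{2c_2}\,BE\,N(E)\;,
\]
so that $N(E)\propto E^{-p}$ implies $L(\nu)\propto BE^{1-p}\propto
\nu^{(1-p)/2}$, i.e.\ $\alpha=(p-1)/2$ and $p=1+2\alpha$.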
Let us consider, for instance, the synchrotron spectrum of the Crab Nebula.
One may identify various spectral regions with different spectral indices: the
radio, with $\alpha\simeq0.3$; the optical, with $\alpha\simeq0.8$; the X rays,
with $\alpha\simeq1.0$; and a further steepening above 100 keV. All the changes
of slope in $L(\nu)$ correspond to breaks in $N(E)$: the issue is to determine
which of them are intrinsic to the injected particle distribution, and which
result from the evolution. Usually an original power-law distribution is
assumed, with the aim of explaining all breaks as originated just from the
evolution.
The basic ingredients are:
1. {\it The evolution of the energy input from the spinning down pulsar} (this
input typically lasts for a time $\tau_o$, and then falls down).
2. {\it The fraction by which this power is shared between injected particles
and field} (usually assumed to be constant).
3. {\it The expansion law of the nebula,} $R(t)$, that at earlier times may
be linear or even accelerated; but later on, with the passage of the reverse
shock coming from the outer blast wave, the plerion may shrink, and then
re-expand at a lower rate (Reynolds \& Chevalier 1984).
The evolution of the magnetic field in the nebula is modelled by including the
effects of the adiabatic losses, while for the evolution of particles both
adiabatic and synchrotron losses must be taken into account.
A special particle energy is that at which the timescales for adiabatic and
synchrotron losses are comparable ($E_b\sim1/c_1B^2t$): a break in the
spectrum occurs at the frequency that corresponds to $E_b$.
Before the time $\tau_o$ this is the only evolutive
break present in the spectrum. Kardashev (1962) showed that the change in the
spectral index must be $\Delta\alpha=0.5$: this result can be directly tested
on the data.
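The origin of this value can be recalled in two lines (a standard argument,
reproduced here for convenience): for continuous injection $Q(E)\propto E^{-p}$
and synchrotron losses $\dot E\propto-E^2$, the steady-state solution of
$\partial_E[\dot E\,N(E)]=Q(E)$ above the break is
\[
N(E)=\frac{1}{|\dot E|}\int_E^\infty Q(E')\,dE'\propto E^{-(p+1)}\;,
\]
one power steeper than the injected $E^{-p}$ kept below the break; with
$\alpha=(p-1)/2$ this corresponds exactly to $\Delta\alpha=0.5$.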
After the time $\tau_o$, a second break should appear in the distribution, at
the energy $E_c(t)=E_b(\tau_o)R(\tau_o)/R(t)$: this is the ``fossil break'',
namely the adiabatic evolution of the break located at $E_b$ at the time
$\tau_o$. This break is at a frequency smaller than that of the truly evolutive
break.
The Crab Nebula fits rather well into the scenario described above. In fact:
1. {\it The classical law for the pulsar spin-down seems to be verified:} the
total energy released by the pulsar since its birth ($\sim10^{49}$~erg) is also
consistent with the kinetic energy excess in the optical thermal filaments (due
to their dynamical coupling with the plerionic bubble).
2. {\it The efficiency in particle production is reasonably high:} the present
pulsar spin-down power ($4.5\times10^{38}\,{\rm erg\,s^{-1}}$) is comparable
with the total synchrotron luminosity ($\sim0.7\times10^{38}\,{\rm
erg\,s^{-1}}$).
3. {\it The efficiency in magnetic field production is rather high:} from the
position of the break ($\nu_b\simeq10^{13}$~Hz) one may derive a nebular
magnetic field $B\sim0.4$~mG, i.e.\ a magnetic energy $\sim2\times10^{49}$~erg,
close to the total energy released by the pulsar (an order-of-magnitude
numerical sketch of this estimate is given after this list).
4. {\it The measured secular variation in radio} ($-$0.17\% per yr; Aller \&
Reynolds 1985) {\it agrees with the theoretical estimate} (V\'eron-Cetty \&
Woltjer 1991).
5. {\it The change in the spectral slope from radio to optical is
$\Delta\alpha=0.5$,} in agreement with Kardashev; however the further breaks at
higher frequencies cannot be explained in this way, unless breaks in the
injected distribution are invoked.
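The field estimate in point~3 can be sketched numerically using standard
synchrotron relations. The following is an order-of-magnitude sketch only:
the coefficients are textbook cgs values, not taken from the models discussed
here, and pitch-angle factors of order unity are ignored.
\begin{verbatim}
# Order-of-magnitude estimate of the evolutive break frequency
# for a nebular field B [G] and age [yr]; coefficients are
# standard synchrotron textbook values (cgs), and order-unity
# pitch-angle factors are ignored.
YR = 3.156e7     # seconds per year
NU_L = 2.8e6     # electron Larmor frequency [Hz] per Gauss
T_SYN = 7.7e8    # synchrotron lifetime [s] ~ T_SYN/(gamma*B^2)

def break_frequency(B, age_yr):
    gamma_b = T_SYN / (B**2 * age_yr * YR)  # lifetime = age
    return gamma_b**2 * NU_L * B            # nu ~ gamma^2 nu_L B

print(f"{break_frequency(4e-4, 950):.1e} Hz")
# ~3e13 Hz for B = 0.4 mG and an age of ~950 yr, consistent
# with the observed nu_b ~ 1e13 Hz quoted above.
\end{verbatim}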
It can be noticed that in the Crab Nebula fields and electrons are in near
equipartition. Is this equipartition typical for all plerions? Does
equipartition hold at injection, or is it the result of a subsequent
field-particle coupling?
\section{Non Crab-like plerions}
There is a group of plerions characterized by a spectral break at frequencies
much lower than in the Crab Nebula: for instance, in 3C58 and in G21.5$-$0.9
the break is at 50~GHz; while in CTB87 it is at 20~GHz. Woltjer et
al.\ (1997) discussed properties and evolutive implications for these objects.
Beyond the low-frequency position of the break, they share other features, like
a sharp break, with $\Delta\alpha$ larger than the canonical value 0.5.
Furthermore, in none of these objects have pulsations from the expected
neutron star been detected yet.
It is hard to explain the observed break as the main evolutive break, since it
would imply a very large nebular field. The most extreme case is that of CTB87,
an extended plerion for which the estimated magnetic energy is greater than
$6\times10^{51}$~erg, well above that of the supernova explosion itself. This
paradox can be solved if the break observed is the fossil break: in this case
also the theoretical limit on $\Delta\alpha$ across the break can be overcome,
under the condition that the pulsar slow-down follows a very steep law (even
though some difficulties remain, in modelling the break sharpness). This
implies that the associated pulsar has slowed down considerably,
and therefore has become much fainter than at the origin: this
may be a reason why no pulsations are detectable.
Among these plerions, 3C58 is the one posing the most problems for models. In this
object the sharpness of the break requires an abrupt decrease in the rate of
injected particles. Beyond that, models must also account for the measured
increase in the radio emission (Green 1987, and references therein). Woltjer et
al.\ (1997) show that, in order to match the observations, a sudden change in
the relative efficiencies of field and particle production is required: this
could actually be associated with a recent ``phase change'' in the pulsar
magnetosphere.
\section{Plerionic components in composite SNRs}
Composite SNRs are those in which a shell-like component (due to the
interaction of the supernova ejecta with the ambient medium) co-exists with a
plerionic component. Helfand \& Becker (1987) introduced the class of
composite SNRs and outlined their properties: they are old enough to have
developed a shell component, but still young enough to host a detectable
plerionic component.
Studying a composite SNR is more interesting than just studying independently a
shell-like and a plerionic remnant, since the two components have the same
origin (thus the same age and distance), and are interacting. Slane et
al.\ (1998) discuss what kind of diagnostic tools can be used. For instance,
the X-ray spectrum of the thermal shell allows one to evaluate its pressure and
therefore, by assuming a (rough) pressure equilibrium with the plerion, the
plerionic magnetic field. This is an estimate alternative to that based on the
position of the evolutive break, and may then represent a test on the nature of
that break.
In the cases in which the associated pulsar has not been detected yet, its most
likely parameters may be estimated. If a pulsar has been already detected, its
spin-down time may be used to estimate the SNR age, that should agree with the
age derived from the thermal X-ray spectrum. Moreover one can derive the
present pulsar energy output, and then the efficiency by which this
energy is converted into magnetic fields and particles.
This analysis has already been carried out for various composite SNRs,
like G11.2$-$0.3 (Bandiera et al.\ 1996), CTA1 (Slane et al.\ 1997), N157B
(Cusumano et al.\ 1998), MSH11$-$62 (Harrus et al.\ 1998), G327.1$-$1.1 (Sun et
al.\ 1999), G39.2$-$0.3 (Harrus \& Slane 1999).
\section{The Crab Nebula: results of a multifrequency analysis}
The above considerations on the Crab Nebula, as well as those on other
plerions, are mainly referring to their global properties. But a much deeper
insight should follow from a combined study of the spatial-spectral properties
of the synchrotron emission from these objects. I present here some results
coming from a comparison of optical and X-ray maps of the Crab Nebula, and a
preliminary analysis of millimetric observations of this plerion.
The Crab Nebula is a very bright object, over a wide range of frequencies:
therefore it is an ideal target for a detailed multifrequency investigation
with high spatial resolution. Our project has been inspired by the study of
V\'eron-Cetty \& Woltjer (1993) on the Crab synchrotron emission in the
optical range: they produced a map of the optical spectral index, and showed
that a spectral steepening occurs going outwards; they also noticed that
the region with optical spectral index flatter than $-$0.7 matches the
shape of the Crab in X rays.
We decided to carry on a detailed, quantitative comparison between optical and
X rays: for this reason we have re-analyzed the V\'eron-Cetty \& Woltjer (1993)
optical data, while for the X rays we have used public data of ROSAT HRI. In
the X-ray data reduction we have prioritized sensitivity to regions with very
low surface brightness over resolution. For this purpose we
have deconvolved the map in order to eliminate the wings of the instrumental
Point Spread Function, as well as the halo due to dust scattering. Details on
the data analysis and on their interpretation are given by Bandiera et
al.\ (1998).
As already pointed out by Hester et al.\ (1995), the faint outer X-ray
emission extends almost to the boundary of the optical nebula. We have then
performed a quantitative analysis over a large area, by using $5"\times5"$
pixels. Fig.~1 gives a plot of the optical-to-X averaged spectral index
($\alpha_{OX}$) versus the optical spectral index ($\alpha_{Opt}$), where each
dot refers to a single pixel: unfortunately no high resolution spectral map in
the X rays is available yet.
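For reference, the per-pixel indices entering this plot are two-point spectral
slopes between two co-registered maps. A minimal sketch of such a computation
is given below; the band frequencies are illustrative placeholders, not the
calibrated values used for the actual maps.
\begin{verbatim}
# Minimal sketch: per-pixel two-point spectral index between
# two co-registered flux maps, with S(nu) ~ nu^(-alpha).
# The frequencies below are illustrative placeholders only.
import numpy as np

def spectral_index_map(S1, S2, nu1, nu2):
    return -np.log(S2 / S1) / np.log(nu2 / nu1)

S_opt = np.array([[1.0, 0.8], [0.5, 0.2]])        # "optical" map
S_x   = np.array([[1.2e-3, 5e-4], [1e-4, 1e-5]])  # "X-ray" map
print(spectral_index_map(S_opt, S_x, 5.5e14, 2.4e17))
\end{verbatim}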
In the interpretation of this plot, our underlying assumption is that, due to
the evolution, the spectral indices of the particle distribution, in all
energy ranges, tend to steepen: this corresponds to a softening of the emission
spectrum at all frequencies, and translates into the requirement that the time
evolution of a given bunch of particles produces a drift of the related dot
towards the upper right direction of the plot.
\begin{figure}
\plotone{bandiera_fig1.eps}
\caption{
Plot of the $\alpha_{OX}$ versus $\alpha_{Opt}$ spectral index.
Each dot corresponds to a $5''\times5''$ pixel in the image of the Nebula.
See text for the meaning of the various lines.
}
\end{figure}
The area of the plot can then be subdivided into zones. Most of the dots are in
the region confined by the two lines labelled by $a$ and $b$: let us call it
the ``Main Region'' of the plot. All the pixels corresponding to the dots in
the Main Region are located in the main body of the Crab Nebula, namely that
obtained by cutting out the N-W and S-E elongations. The line
$a'$, parallel to $a$, indicates the cases of no spectral bending between
optical and X rays ($\alpha_{Opt}=\alpha_{OX}$). Pixels with $\alpha_{OX}<1$
outline rather well the region of the X-ray ``torus'' (see e.g.\ Hester et al.\
1995): in the plot they are located on the lower left side, in agreement with
our expectation that this is the main location where to find freshly injected
particles.
We introduced the quantity $m$, which
gives the position of a dot across the strip bounded by lines $a$ and $b$:
this quantity is related to the spectral bending between optical and X rays. A
map of $m$ is given in Fig.~2-L: points with lower $m$ (namely points with a
lower bending; brighter pixels in the map) are generally located in a thick
``equatorial'' belt. In particular, pixels with a very low bending coincide
with some prominent thermal filaments. If the absence of bending is a sign of
freshly injected, or re-accelerated, particles, then in these regions some
secondary acceleration processes could take place.
Let us go back to the plot of Fig.~1. Following the scheme introduced above,
the dots above line $b$ (``Secondary Region'') cannot be the result of a
continuous evolution from dots originally located in the Main Region.
They are more likely to represent the evolution of particles originally
emitting a spectrum with $\alpha_{OX}$ not smaller than 1.75
(line $c$), and that afterwards, while moving outwards, have considerably
softened their spectrum. Fig.~2-R gives a map of $\alpha_{OX}$ for all the points
confined to the two lobes (brighter pixels indicate higher values of
$\alpha_{OX}$): here a softening of the spectrum in the outer zones is
apparent. In the lobes, the zones with harder spectra seem to lie on the
prolongation of the X-ray ``jets'' (see Hester et al.\ 1995). This is
very clear for the S-E lobe, and may be the indication that these
particles are directly provided by the jets.
\begin{figure}
\plotone{bandiera_fig2.eps}
\caption{
{\bf Left:} map of the quantity $m$, for the Main Region (regions with less
bent spectra are brighter).
{\bf Right:} map of $\alpha_{OX}$, for the Secondary Region (steeper spectra
are brighter).
}
\end{figure}
All these considerations are largely empirical. Moreover, some conclusions may
be partly biased by projection effects; nevertheless, the general results
should remain valid. The first result is the great variety of
local spectra in the nebula. This may be partly explained in terms of the
evolution of electrons injected at the central torus; however, both in the
regions with low optical-to-X spectral bending and in the polar lobes further
particle components seem to be present, suggesting the
presence of secondary acceleration (or re-acceleration) processes.
A similar result follows from recent observations at 230~GHz, with the
IRAM 30~m telescope, in collaboration with
R.Cesaroni and R.Neri: a comparison with
a map at 1.4~GHz shows variations in the spectral index between the two
frequencies. If confirmed, this result is very important. In fact the
spectral index of the Crab Nebula in the radio shows very little spatial
variations (Bietenholz et al.\ 1997) and, if there is only one channel of
injection, the same behaviour is expected up to the
evolutive spectral break ($\sim10^{13}$~Hz for the Crab).
On the contrary, flatter spectra are found, between 1.4 and 230~GHz, in the
central regions of the Crab Nebula, similar to what is seen
in the optical range. However, further observations are required in order to
confirm this result.
\section{Conclusions}
The comparison of high-resolution maps at different frequencies may provide a
wealth of information on the plerions, and will help to clarify some open
issues on these objects: How, where, and by how many different mechanisms
can particles be accelerated inside plerions? How do they propagate through the
nebula? What is the structure of the magnetic field? How does the evolution of a
plerion and of the associated neutron star affect the synchrotron emission?
A big step towards
the understanding of these objects will hopefully come with the
arrival of the new-generation X-ray telescopes, which will be able to perform
spectral mapping with arcsec resolution. X-ray emitting electrons have very
short lifetimes, typically of the order of the light crossing time of the
nebula, and therefore from an X-ray spectral mapping one could derive
information directly on the sites of the present-time injection.
But another promising spectral range for effectively probing the physical
conditions in the nebula is the millimetric one. In various plerions a
spectral break is located near that range: spectral maps may then give
indications on the spatial variations of the break position, and therefore
on the magnetic structure of the nebula.
Even in the presence of such high quality observations, one may wonder how
powerful the synchrotron emission is as a diagnostic tool. In fact it can
provide only mixed information on particles and magnetic field, and only
projected along the line of sight. Therefore even the interpretation of high
resolution spectral maps is generally not straightforward, unless theoretical
models are developed at a level of detail similar to that of the
observations.
\acknowledgements
Many of the results presented here come from discussions and collaborations
with various persons: E.Amato, F.Bocchino, R.Cesaroni, R.Neri, F.Pacini,
M.Salvati, P.Slane, L.Woltjer.
This work is partly supported by the Italian Space Agency (ASI) through grant
ARS-98-116.
\section{Introduction}
Cryptoeconomic incentives in the form of blockchain-based tokens are seen as an enabler of the sharing economy~\cite{pazaitis2017blockchain,ferraro2018distributed} that could shift society toward greater sustainability~\cite{heinrichs2013sharing,fanitabasi2021self}. One of the resources that is shared in these economies is information~\cite{richter2019data,raweewan2018information,nonaka1994dynamic}, which due to its growing utilization in data-intensive technologies~\cite{bennati2018machine} is becoming increasingly important~\cite{helbing2022socio, economist2017world}. This has resulted in the collection of large data sets by organizations~\cite{cai2015challenges}. Nevertheless, in this age of vast data quantities, obtaining high-quality information is a challenge~\cite{cai2015challenges,gao2016big} (e.g. the accuracy of the collected data is low). Moreover, because organizations collect massive amounts of unstructured data, such as customer behavior (product choices and sleeping patterns), opinions (e.g. Facebook likes), medical health records, or IoT data, the amount of data collected exceeds the processing power available to analyze it~\cite{helbing2015digital}, possibly resulting in sampling biases. Furthermore, these "Big Data" approaches often involve the danger of collapsing the complexity of entire human personalities into assumptions constructed from simple data (e.g. website clicks) and usually miss the unique domain-specific knowledge users have~\cite{lukyanenko2011citizen}. Thus, it has been suggested that information providers should structure their input in a contextualized way when sharing their data, utilizing semantic web technologies~\cite{ballandies2022improving}, such as linked data and ontologies~\cite{berners2001semantic,w32004semantic}, and evaluate the quality of information shared by other providers~\cite{paulheim2017knowledge}.
Nevertheless, as this would require additional effort on the part of the information providers~\cite{paulheim2017knowledge}, incentives such as gamification~\cite{re2018framework}, reputation~\cite{zhou2020smart}, money~\cite{luo2019improving}, or auctions~\cite{chen2019toward} are suggested to motivate the data providers and thus improve the characteristics of the collected information. However, previous work on the incentivization of information sharing focuses on the quantity of collected information while excluding quality characteristics such as accuracy or contextualization~\cite{restuccia2016incentive}.
Increasingly, cryptoeconomic incentives in the form of blockchain-based tokens are proposed to be awarded to participants of information-sharing communities~\cite{shrestha2018blockchain,zou2019reportcoin, makhdoom2020privysharing,jung2021mechanism,naz2019secure}. In these studies, the performance of the applied incentives is investigated using simulations~\cite{lakhani2021token}, game-theoretical methodologies~\cite{imanimehr2019token}, and case studies~\cite{hunhevicz2020incentivizing}. Nonetheless, behavioral data of users in comparison with treatment groups with and without incentivization have not been collected, which limits these approaches as the utilized models cannot be calibrated with real-world data~\cite{wittekcrypto}. Controlled experiments are therefore required that investigate the impact of these cryptoeconomic incentives on human information-sharing behavior. In particular, such an empirical approach could assess and validate the accuracy of the utilized theoretical models~\cite{pazaitis2017blockchain}.
Likewise, although the application of \textit{multiple} token incentives has been proposed to improve the maintenance and sharing of a common resource~\cite{hunhevicz2020incentivizing,kleineberg2021social} and has been investigated in games~\cite{imanimehr2019token} and simulations~\cite{pardi2021chemical}, the impact of simultaneously applying these incentives has not been investigated empirically in a controlled experiment.
This paper addresses these identified gaps with the following research question:
"What is the effect of multiple blockchain-based tokens on human information-sharing behavior measured in the quantity, accuracy, and contextualization of the shared information?"
By testing hypotheses that are informed by self-determination theory~\cite{deci1991motivational,ryan2000self} with an experimental methodology in the form of a randomized control trial utilizing a 2x2 factorial design, the impact of two types of cryptoeconomic incentives in the form of blockchain-based tokens on the information-sharing behavior of humans is investigated.
The contributions of this paper can be summarized as follows:
\begin{itemize}
\item A conceptual impact model (Figure \ref{fig:hypotheses_model}) links cryptoeconomic incentives to human motivation and information-sharing behavior in consideration of self-determination theory~\cite{deci1991motivational,ryan2000self}.
\item The living lab experimental methodology~\cite{pournaras2022how} is augmented with a 2x2 factorial design to investigate the impacts of blockchain-based cryptoeconomic incentives on human information-sharing behavior.
\item Four effects of cryptoeconomic tokens on human behavior are identified: i) a hitherto unreported interaction effect between two types of cryptoeconomic tokens when applied simultaneously; ii) an internalization effect of cryptoeconomic tokens in the form of improved information-sharing behavior even after the incentivization period has ended; iii) a crowding-out effect on intrinsic motivation when cryptoeconomic tokens are applied; iv) a time effect resulting in a variation of the impact of cryptoeconomic incentivization over time.
\item A novel high-quality dataset illustrates user information-sharing behavior under multiple token incentives that facilitates causal inferences about human behavior under cryptoeconomic incentivization.
\item The work demonstrates how self-determination theory can be applied in the formulation and testing of hypotheses in Token Engineering and Token Economics.
\item The implications of the findings for the design and engineering of one-dimensional and multi-dimensional token systems are discussed critically, taking into account ethical impacts.
\end{itemize}
Since these contributions inform an improved construction of blockchain-based incentives, they are of relevance for the community, which is increasingly utilizing and investigating such incentives in various application domains~\cite{cai2019analysis,barreiro2019blockchain,gan2020token}.
This paper is structured as follows: In Section \ref{sec:rel_work}, related work in information sharing is discussed. The research methodology is introduced in Section \ref{sec:research_methodology}, while Section \ref{sec:results} presents the evaluation. Section \ref{sec:discussion} summarizes the findings and discusses their implications. Finally, Section~\ref{sec:conclusion} draws the conclusion and provides an outlook for future work.
\section{Related Work in information sharing}
\label{sec:rel_work}
\subsection{Self-determination theory and incentives}
\label{sec:std_theory}
Humans are intrinsically and extrinsically motivated to share information~\cite{osterloh2000motivation}. Intrinsic motivation refers to when people perform a task such as information sharing out of the pleasure they derive from the task itself, whereas extrinsic motivation stems from incentives, such as monetary payments, reputation gains, or punishments. When compared to extrinsic motivation for a specific task, intrinsic motivation leads to enhanced performance, persistence, creativity, learning capacity, and endurance in humans~\cite{ryan2000self} and may therefore be more important than extrinsic motivation for specific scenarios such as contributing computer code~\cite{dapp2009effects} or sharing information~\cite{palmisano2008motivational}.
Introduced by \citet{deci2013intrinsic,deci1991motivational}, self-determination theory illustrates the conditions under which humans are intrinsically motivated to work on a task: three innate psychological needs must be satisfied, namely "competence", "autonomy", and "relatedness". In particular, a feeling of competence does not enhance intrinsic motivation unless accompanied by a sense of autonomy~\cite{ryan2000self,harder2008rewards}. In this context, applying misaligned incentives may infringe on the perceived autonomy of humans and thereby reduce their intrinsic motivation~\cite{amabile1993motivational}; this is referred to as the crowding-out effect~\cite{osterloh2000motivation}.
However, competence-enhancing incentives may support intrinsic motivation ~\cite{amabile1993motivational,deci1991motivational} and this is referred to as internalization~\cite{deci1991motivational}.
\begin{figure*}[tbh!]
\begin{center}
\includegraphics[width=0.75\textwidth]{findings_model_2.pdf
\end{center}
\caption{Impact of incentives (money/ access/ reputation) on motivation (intrinsic/ extrinsic) and information-sharing behavior (quality/ quantity). Impacts (arrows) are derived from related work (in parentheses). }\label{fig:hypotheses_model}
\end{figure*}
Figure \ref{fig:hypotheses_model} illustrates findings from literature about the dependencies between different types of incentives, the two types of motivation, and their impact on characteristics of information. Extrinsic and intrinsic motivation are not separate systems, but influence each other~\cite{amabile1993motivational}. Extrinsic motivation can be integrated to become intrinsic motivation (Arrow I in Figure \ref{fig:hypotheses_model})~\cite{deci1991motivational}. People are extrinsically motivated to share a higher quantity of information by money~\cite{christin2013s,pournaras2022how}, the access to the collected information~\cite{christin2013s,deangelis2014systemic}, reputation~\cite{christin2013s}, and their intrinsic motivation~\cite{christin2013s} (Arrows II and VII in Figure \ref{fig:hypotheses_model}). In particular, the strongest increase in quantity is observed for money, followed by the access to information, reputation, and then intrinsic motivation~\cite{christin2013s}.
In contrast to quantity, it has been observed that the quality of shared information remains unaffected under monetary incentives when measuring it in terms of the prediction accuracy of stock recommendations~\cite{chen2019monetary}, or is even worse than before incentivization when measured
by a word count index~\cite{khern2018extrinsic}, or usability (helpfulness of reviews)~\cite{khern2018extrinsic}, or quality of produced images~\cite{laske2017quantity} (Arrow III in Figure \ref{fig:hypotheses_model}).
It has been found that monetary incentives decrease intrinsic motivation (Arrow IV in Figure \ref{fig:hypotheses_model}), while they increase extrinsic motivation (Arrow V in Figure \ref{fig:hypotheses_model})~\cite{feng2011effect,vilnai2018motivating,harder2008rewards}, which might explain the impact of monetary incentives on quality: As intrinsic motivation has been observed to predict quality (Arrow VI in Figure \ref{fig:hypotheses_model} )~\cite{cerasoli2014intrinsic,dapp2009effects} and only to a lesser extent quantity (Arrow VII in Figure \ref{fig:hypotheses_model})~\cite{christin2013s}, whereas extrinsic motivation predicts the quantity of shared information (Arrow VIII in Figure \ref{fig:hypotheses_model})~\cite{cerasoli2014intrinsic}, the use of monetary incentives would result in an increased extrinsic motivation and decreased intrinsic motivation, thereby leading to a higher quantity but lower quality of shared information.
In contrast to monetary incentives, because they may enhance an individual's feeling of competence~\cite{amabile1993motivational,deci1991motivational}, reputation systems impact extrinsic motivation positively~\cite{hung2011influence}, while also having a positive impact on intrinsic motivation~\cite{kuwabara2015reputation} (Arrows IX and X in Figure \ref{fig:hypotheses_model}).
Moreover, it has been found that rewarding each information-sharing action is more effective than summarized payments~\cite{yang2008knowledge}, and that small subgroups have shown moderate to strong aversion to incentives~\cite{sadler2018incentives}, which has been confirmed by Pournaras et al.~\cite{pournaras2022how}.
\begin{table*}[t] \caption{Related work that studies blockchain technology and tokens to improve information sharing. Fram.: Framework; Impl.: Implementation; Sim.: Simulation; Anly.: Analytical; Work.: Workshop; Exp.: Experiment; Mult.: Multiple; Des.: Design; Imp.: Implication} \label{tab:rel_work}
\begin{tabular}{llcccccccccc} \hline \\
\textbf{ID} & \textbf{Paper} & \multicolumn{2}{c}{\textbf{Artifact}} & \multicolumn{4}{c}{\textbf{Evaluation}} & \multicolumn{3}{c}{\textbf{Token}} & \multicolumn{1}{l}{\textbf{Ethics}} \\
& & \textit{Fram.} & \textit{Impl.} & \textit{Sim.} & \textit{Anly.} & \textit{Work.} & \textit{Exp.} & \textit{Mult.} & \textit{Des.} & \textit{Imp.} & \textit{} \\ \hline \\
1 & \citet{pazaitis2017blockchain} & x & & & & & & x & & & x \\
2 & \citet{shrestha2018blockchain} & x & & & & & & & & & \\
3 & \citet{hulsemann2019walk} & x & & x & & & & & & x & \\
4 & \citet{naz2019secure} & x & x & x & & & & & & & \\
5 & \citet{imanimehr2019token} & x & & x & x & & & x & & x & \\
6 & \citet{manoj2020incentive} & x & & & & & & & & & \\
7 & \citet{hunhevicz2020incentivizing} & x & x & & & x & & & & & \\
8 & \citet{zhang2020design} & x & x & & & & & & & & \\
9 & \citet{wittekcrypto} & x & & & & & & & & & \\
10 & \citet{jaiman2021user} & x & x & x & & & & & x & & \\
11 & \citet{jung2021mechanism} & x & & x & x & & & & & & \\
12 & \textbf{This paper} & \textbf{x} & \textbf{x} & \textit{} & \textit{} & \textit{} & \textbf{x} & \textbf{x} & \textbf{x} & \textbf{x} & \textbf{x} \\ \hline
\end{tabular}
\end{table*}
\subsection{Tokens for information sharing}
Cryptoeconomic incentives in the form of blockchain-based tokens span a multi-dimensional incentive system~\cite{kleineberg2021social} enabling a differentiated pricing of a broader spectrum of externalities ~\cite{kleineberg2021social, helbing2021qualified}. This can result in the improved self-organization of society when compared to one-dimensional incentive systems such as the current monetary system~\cite{dapp2021finance,dapp2018finance}.
These tokens are defined as \textit{"a unit of value issued within a DLT
system [or blockchain system] and which can be used as a medium of exchange or
unit of account"}~\cite{ballandies2021decrypting} and are increasingly utilized in communities to encourage the sharing of information.
Table \ref{tab:rel_work} illustrates related work that utilizes blockchain-based tokens in information-sharing scenarios.
All of these works contribute a conceptual framework about blockchain and tokens and how they can be applied to improve information sharing in a community (Column Fram. in Table \ref{tab:rel_work}). Four of these frameworks are implemented in a software artifact (Column Impl. in Table \ref{tab:rel_work}): \citet{naz2019secure} implement a software artifact that integrates IPFS\footnote{InterPlanetary File System, a peer-to-peer protocol for exchanging files: https://ipfs.io/ (last accessed 2022-02-07)} with a blockchain to improve the quality of shared data by incentivizing stakeholders with tokens to review the shared information. Similarly, \citet{hunhevicz2020incentivizing} use tokens to incentivize high-quality datasets in a construction process by awarding those that provide complete and accurate information. \citet{zhang2020design} use tokens in their prototype to incentivize the provision of credit data. Finally, \citet{jaiman2021user} utilize tokens as representations of ownership and access rights to data sets.
Two of these implementations are evaluated in simulations (ID 4 and 10 in Table \ref{tab:rel_work}). Moreover, three of the proposed concepts that are not implemented in a software artifact are tested in simulations (Column Sim.; ID 3, 5 and 11 in Table \ref{tab:rel_work}). \citet{hulsemann2019walk} apply an agent-based modeling approach to investigate three different types of tokens. Likewise, \citet{imanimehr2019token} investigate with a simulation the application of multiple tokens to incentivize the optimal utilization of video stream layers. Moreover, they analytically investigate their scenario with methods from game theory (Column Anly. in Table \ref{tab:rel_work}). Similarly, \citet{jung2021mechanism} utilize both methods from game theory/ mechanism design as well as simulations to evaluate their framework that improves the provision and maintenance of patient health records. \citet{hunhevicz2020incentivizing} evaluate their framework and implementation with stakeholders in a workshop (Column Work.; ID 7 in Table \ref{tab:rel_work}).
Three frameworks are not evaluated (ID 1, 2 and 6 in Table \ref{tab:rel_work}).
By utilizing tokens in their frameworks and implementations, only two of the contributions investigate the implications of introduced tokens on system properties (Column Imp.; ID 3 and 5 in Table \ref{tab:rel_work}). Moreover, only one of the contributions illustrates the token design (Column Des.; ID 10 in Table \ref{tab:rel_work}): The token is a modified ERC-721 token that has a source of value of ownership/access rights to data, its supply is uncapped and the token is transferable. Nevertheless, a standard illustration as utilized by \citet{dobler2019extension,ballandies2021finance} that would make different designs comparable has not been applied.
Moreover, the impact of a specific token design on user information-sharing behavior is not rigorously investigated with a controlled experiment (Column Exp. in Table \ref{tab:rel_work}). Assumptions thus have to be utilized in the above-mentioned simulations and analyses, which limits the applicability of the findings to real-world scenarios.
Two of the works utilize multiple tokens in their application scenario (Column Mult.; ID 1 and 5 in Table \ref{tab:rel_work}): \citet{pazaitis2017blockchain} enable the setup of multiple tokens to capture the value created in different decentralised communities, and \citet{imanimehr2019token} use multiple tokens for optimal sharing of bandwidth.
Finally, despite touching on sensitive application domains such as health or credit data, only one of the works discusses the ethical implications of their contributions and findings (Column Ethics; ID 1 in Table \ref{tab:rel_work}).
This paper (ID 12 in Table \ref{tab:rel_work}) addresses these limitations by evaluating the impact of two cryptoeconomic incentives on human information-sharing behavior in a controlled experiment involving 132 participants over four days (Section \ref{sec:results}). The utilized token designs are illustrated (Section \ref{sec:exp_treatment}) and the (ethical) implications (Section \ref{sec:discussion}) of this work are discussed.
\section{Research methodology}
\label{sec:research_methodology}
The impact of two token incentives on human information-sharing behavior is investigated with an experimental methodology. The conducted experiment is explained below (Section \ref{sec:exp_meth}), followed by the measured variables (Section \ref{sec:variables_measures}), tested hypotheses (Section \ref{sec:hypotheses}) and the analysis methods (Section \ref{sec:analysis_meth}).
\subsection{Experiment}
\label{sec:exp_meth}
The experiment has been conducted by modifying the mixed-mode "living lab"~\cite{pournaras2022how} experimental methodology such that the randomized control trial is augmented with a 2x2 factorial pre-test/ post-test design that utilizes two token incentives (Figure \ref{fig:variables_methods}).
\begin{figure*}[th!]
\begin{floatrow}
\ffigbox{%
\includegraphics[width=0.5\textwidth]{scenario.png}
}{%
\caption{Experiment scenario: Participants share information with a library institution and obtain blockchain-based tokens in return. Tokens collected by other users can be discovered in interactions. }\label{fig:scenario}
}
\ffigbox{%
\includegraphics[width=0.35\textwidth]{factorial_desgin.pdf}
}{%
\caption{Treatment groups in the 2x2 factorial design experiment vary according to the received token incentive: N = no token incentives; C = context token incentive; M = money token incentive; B = both token incentives. }\label{fig:factorial_design}
}
\end{floatrow}
\end{figure*}
The experiment consists of three phases (Figure \ref{fig:variables_methods}): The entry and exit phases are facilitated by the ETH Decision Science Laboratory\footnote{\label{fn:descil}ETH Decision Science Laboratory provides infrastructures and services for researchers to perform human subject trials in the intersecting areas of Decision and Behavioral Sciences: https://www.descil.ethz.ch/ (last accessed: 2022-03-11)} (DeSciL) of ETH Zurich, using their infrastructure and staff. The core phase is facilitated by the research team and a blockchain-based Web 3.0 application (Section \ref{sec:exp_technical}).
In the entry phase, participants provide their consent to the study and are instructed on the application of the software used in the core phase.
Before the core phase, the participants answer an entry survey consisting of demographic questions. The core phase of the experiment consists of four days in which participants utilize the software artifact to share information. On the second and third day of the core phase, participants obtain token rewards in exchange for their shared information.
In the exit phase, participants answer an exit survey and receive their financial compensation.
The conducted experiment was granted ethical approval by the Decision Science Laboratory (DeSciL) as well as the Ethics Commission of ETH Zurich.
In the following, Section \ref{sec:exp_scenario} illustrates the real-world information-sharing scenario of the core phase, followed by Section \ref{sec:data_model}, where the model of the collected data is introduced. Section \ref{sec:exp_treatment} illustrates the applied incentives and the treatment groups. Then, Section \ref{sec:exp_technical} provides the technical specifications of the utilized software artifact. Finally, Section \ref{sec:exp_recruitment} provides an overview of the recruitment process and the compensation paid to the participants.
\subsubsection{Scenario}
\label{sec:exp_scenario}
The scenario of the experiment has been illustrated by \citet{ballandies2022improving}. A summary is given in the following and depicted in Figure \ref{fig:scenario}: Participants of the experiment share solicited information via their personal devices (e.g. laptop or mobile phone) with an organization, using a Web 3.0 application (Section \ref{sec:exp_technical}). In order to facilitate a realistic setup of the experiment and to comply with the anti-deception policy of DeSciL\footnotemark[\getrefnumber{fn:descil}], the shared information is received by a real-world library organization that has an interest in obtaining feedback from customers and unaware-customers\footnote{Unaware-customers are defined by the library as customers who are not aware that they are customers of the library. For instance, researchers accessing closed-access journals via their university credentials for which the library pays.} of their services. Furthermore, in order to study user behavior in a realistic setting, participants can choose the time of feedback provision such that it is best integrated into their daily routines.
As an incentive for sharing information with the library organization, participants receive units of two types of blockchain-based tokens (Section \ref{sec:exp_treatment}). The amount of token units collected by other users and a subset of their shared information can be discovered in interactions.
The model of the shared information is illustrated in Section \ref{sec:data_model}.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=\textwidth]{information_model_2.pdf}
\end{center}
\caption{Data model of the experiment that visualizes the stakeholders (library organization and its customers), collected information, survey questions, and token incentives.}\label{fig:data_model}
\end{figure*}
\subsubsection{Data Model}
\label{sec:data_model}
Figure \ref{fig:data_model} illustrates the ecosystem of the collected information as an ontology~\cite{ballandies2021mobile}. The library formulated $274$ survey questions, which they wanted to ask customers and unaware-customers of their services. These questions are of one of the following types: single-choice, multiple-choice, Likert scale, open text, or a combination thereof.
In the experiment, participants take the role of customers and share information with the library in the form of answers to the given questions. Participants have the option to enrich their answer to a question with three types of contextualizations (Figure \ref{fig:feed4org_answer}). They can state from their perspective how important the question is for the library to improve their services (Likert scale), how satisfied they are with the answer options to the question (Likert scale), and provide a comment (open text field).
As an incentive to share information, participants obtain units of two types of cryptoeconomic incentives: \textit{Money token} and \textit{Context token}, which are illustrated in greater detail in Section \ref{sec:exp_treatment}.
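To make the data model concrete, the following minimal sketch represents one answer record with its optional contextualizations. It is illustrative only and not the authors' implementation; the field names are assumptions derived from the description above.
\begin{verbatim}
# Illustrative sketch (assumed field names), not the deployed data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    question_id: int            # one of the 274 library survey questions
    participant_id: str
    content: str                # selected option(s) or free text
    # Optional contextualizations; each one earns a context token unit:
    importance: Optional[int] = None    # Likert rating of question importance
    satisfaction: Optional[int] = None  # Likert rating of answer options
    comment: Optional[str] = None       # open text field

    def contextualizations(self) -> int:
        """Number of contextualization actions attached to this answer."""
        return sum(x is not None for x in
                   (self.importance, self.satisfaction, self.comment))
\end{verbatim}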
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.75\textwidth]{feed4org_statistics_2_crop.png}
\end{center}
\caption{Statistics view of the utilized software artifact. It depicts the amount of context token and money token units collected by a user (above). Moreover, the software artifact shows the leaderboard that compares users based on the collected context token units (below).}\label{fig:feed4org_statistics}
\end{figure*}
\subsubsection{Incentives and treatment groups}
\label{sec:exp_treatment}
Two types of cryptoeconomic incentives are utilized in this paper: The money token is a stable coin~\cite{pernice2019monetary} that resembles the Swiss fiat currency. It is a capped, pre-mined, transferable, and non-burnable ERC-20 token whose units are pegged to the Swiss franc at an exchange rate of 1:0.2 CHF. Users obtain a unit of this token whenever they provide an answer to a survey question (Figure \ref{fig:data_model}).
The context token is a utility token~\cite{ballandies2021decrypting} and models reputation in the system: It is an ERC-20 token that is uncapped, transferable, burnable, and not pre-mined. A token unit is created whenever a contextualization (Figure~\ref{fig:data_model}) is performed in the system and is awarded to the user who provided that information. The amount of context token units collected is visible to others on a leaderboard during the experiment (Figure~\ref{fig:feed4org_statistics}) and thus constructs users' reputation, which functions as a source of value for this token. In particular, reputation is a widely adopted incentive mechanism that has been utilized to improve the quality of shared data~\cite{luo2019improving}. Additionally, the context token can be utilized to access a privileged service in the form of voting actions~\cite{ballandies2022improving}, which further provides value to the token~\cite{luo2019improving}.
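To summarize the two designs, the following hedged sketch captures the design dimensions named above as configuration records; the attribute names are illustrative assumptions and do not stem from the deployed smart contracts.
\begin{verbatim}
# Illustrative configuration records for the two token designs (assumed names).
from dataclasses import dataclass

@dataclass(frozen=True)
class TokenDesign:
    standard: str
    capped: bool
    pre_mined: bool
    transferable: bool
    burnable: bool
    source_of_value: str

MONEY_TOKEN = TokenDesign(
    standard="ERC-20", capped=True, pre_mined=True,
    transferable=True, burnable=False,
    source_of_value="pegged to CHF at 1:0.2 (stable coin)")

CONTEXT_TOKEN = TokenDesign(
    standard="ERC-20", capped=False, pre_mined=False,
    transferable=True, burnable=True,
    source_of_value="reputation (leaderboard) and voting rights")
\end{verbatim}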
Figure \ref{fig:factorial_design} illustrates the 2x2 factorial design of the study, whereby the two token types awarded to experiment participants are varied: Group N is the control group that receives no token incentives; Group C obtains the context token; Group M obtains the money token; Group B obtains both the money and the context token.
\begin{figure*}[h!]
\begin{floatrow}
\ffigbox{%
\includegraphics[width=0.5\textwidth]{feed4org_answer_2.png}
}{%
\caption{Answer view of the utilized software artifact: Users can answer questions posed by the library and contextualize it with three contextualization options (importance, satisfaction, and comment).} \label{fig:feed4org_answer}
}
\ffigbox{%
\includegraphics[width=0.5\textwidth]{feed4org_satisfaction_black_crop.png}
}{%
\caption{View of the satisfaction contextualization (Figure \ref{fig:feed4org_answer}). Users can specify how satisfied they are with the answer options provided for a question.}\label{fig:context}
}
\end{floatrow}
\end{figure*}
\subsubsection{Technical infrastructure}
\label{sec:exp_technical}
This research applies the customer feedback system developed by \citet{ballandies2022improving} to enable users to share information with a library organization and to receive two cryptoeconomic incentives (Section \ref{sec:exp_treatment}) in exchange. The software artifact is a Web 3.0 app that utilizes the Finance 4.0 infrastructure~\cite{dapp2021finance} and the Ethereum\footnote{https://ethereum.org/en/ (last accessed: 2022-03-21)} (ETH) blockchain. It enables the collection of solicited and unsolicited feedback from users of an organization. Figure \ref{fig:feed4org_answer} illustrates how users can provide solicited feedback by answering questions posed by an organization. This feedback can be contextualized by i) stating the importance of the question to improve the organization's service (bottom left in Figure \ref{fig:feed4org_answer}), or ii) stating the satisfaction with the answer options to the question (bottom center in Figure \ref{fig:feed4org_answer}), or iii) providing further feedback via a comment field (bottom right in Figure \ref{fig:feed4org_answer}). Figure \ref{fig:context} depicts how users can contextualize an answer with their satisfaction regarding the answer options. Figure \ref{fig:feed4org_statistics} shows how reputation is facilitated in the system by comparing users based on the amount of collected context token units. Moreover, this view gives users an overview of their collected money and context token units.
\subsubsection{Recruitment, compensation, and ethical approval}
\label{sec:exp_recruitment}
The participants were recruited by the ETH Decision Science Laboratory\footnotemark[\getrefnumber{fn:descil}] (DeSciL); following DeSciL's protocols and ethical standards, participants were guaranteed fair compensation, and information regarding participants' identity was separated from the experiment data, thereby enabling anonymity for the participants.
150 participants were recruited, 132 of which completed the exit phase (88 \% completion rate), which is a reasonable number that balances resources (compensation/ infrastructure), rigor, and control of the experimental process~\cite{pournaras2022how}. In particular, the mixed-mode experimental process preserves the realism of the scenario by involving a real-world organization that obtains the shared information, while facilitating controlled experimental conditions that result in a novel high-quality dataset to allow (causal) inferences about human behavior under cryptoeconomic incentivization.
Participants were recruited from the full UAST\footnote{https://www.uast.uzh.ch/ (last accessed: 2021-12-01)} pool (no criterion was applied), which mainly consists of students and researchers of ETH Zurich and the University of Zurich, and thus is subject to sampling biases when making inferences about the behavior of the general population. Nevertheless, as these are exactly the customers and unaware-customers of the real-world library organization around which the use case of information collection in this experiment was constructed, the participants' profiles match the experimental scenario well. Consequently, the findings may be transferable to similar scenarios, where customers share information with an organization.
Four recruitment sessions were performed within the period from May 17, 2021 to June 11, 2021.
The DeSciL requires a fair minimum and average compensation of experiment participants. This is satisfied by compensating each participant $i$ of a treatment group ($N,C,M,B$ in Figure \ref{fig:factorial_design}) in Swiss francs via one of the following payout formulas $p$:
\begin{equation}
\label{eq:payout}
\begin{aligned}
p(M_i) &= \text{min}\big(60 \text{ CHF}, \text{MT}(M_i) \times 0.2\text{ CHF}\big) \\
p(B_i) &= \text{min}\big(60 \text{ CHF}, \text{MT}(B_i) \times 0.2\text{ CHF}\big) \\
p(N_i) &= \text{max}\big(20 \text{ CHF}, \frac{T-P}{N(N) + N(C)}\big)\\
p(C_i) &= \text{max}\big(20 \text{ CHF}, \frac{T-P}{N(N) + N(C)}\big)\\
\end{aligned}
\end{equation}
where,
\begin{equation}
\begin{aligned}
\text{MT}(i) &: \text{number of money token units collected by participant $i$}, \\
T &= N \times 40 \text{ CHF}: \text{total available payout}, \\
P &= \sum_i^{N(M)}p(M_i) + \sum_i^{N(B)}p(B_i): \text{total payout received} \\ & \text{by groups M and B}, \\
N &: \text{total number of participants}, \\
N(j) &: \text{number of participants in treatment group $j$}
\end{aligned}
\end{equation}
This results in a minimum compensation of 20 CHF and an average compensation of 40 CHF (0.5 CHF/min) for the participants.
The payout for participants of treatment groups that received the money token (groups M and B in Figure \ref{fig:factorial_design}) depends on the amount of money token units collected. This amount is multiplied by $0.20$ CHF and then awarded to the participants. The total payout is capped at 60 CHF per participant resulting in a maximum of 150 questions for which a user can be rewarded per day.
The payout for participants of the other treatment groups (groups N and C in Figure \ref{fig:factorial_design}) depends on the payout of the treatment groups that receive the money token (groups M and B in Figure \ref{fig:factorial_design}), such that the average compensation over all experiment participants is 40~CHF.
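As a worked example of the payout rules in Equation \ref{eq:payout}, the following sketch computes illustrative payouts; all concrete numbers (e.g. the sum already paid to groups M and B, or 66 participants in groups N and C) are invented for illustration.
\begin{verbatim}
# Illustrative computation of the payout rules; the numbers are invented.
def payout_money_group(money_token_units: int) -> float:
    # Groups M and B: 0.2 CHF per money token unit, capped at 60 CHF.
    return min(60.0, money_token_units * 0.2)

def payout_control_group(total_budget: float, paid_to_mb: float,
                         n_control: int) -> float:
    # Groups N and C: equal share of the remaining budget, at least 20 CHF.
    return max(20.0, (total_budget - paid_to_mb) / n_control)

# With 132 participants, the budget is T = 132 * 40 = 5280 CHF. A participant
# who collected 250 money token units receives min(60, 250 * 0.2) = 50 CHF.
print(payout_money_group(250))               # 50.0
# Assuming 2800 CHF was paid to groups M and B and 66 participants remain:
print(payout_control_group(5280, 2800, 66))  # ~37.58 CHF each
\end{verbatim}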
\subsection{Variables and measures}
\label{sec:variables_measures}
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=\textwidth]{data_methods.pdf}
\end{center}
\caption{Measured variables in the three phases of the experiment and the applied analysis methods.}\label{fig:variables_methods}
\end{figure*}
Figure \ref{fig:variables_methods} illustrates the measured variables of this paper in the three phases of the experiment. The participants answered demographic questions in the entry phase and another survey in the exit phase.
Two extrinsic incentives, the money token and the context token (Section \ref{sec:exp_treatment}), are manipulated as independent variables on the second and third day of the core phase.
Several dependent variables are measured each day: The quantity of shared information is measured by the number of replies to survey questions. Moreover, two quality characteristics are measured: i) Contextualization is the number of contextualization actions performed by participants in response to survey questions (via the bottom buttons in Figure \ref{fig:feed4org_answer}, as shown in Section \ref{sec:exp_technical}). This is the amount of "metadata" that a user provides with an answer; it contributes to the usability of information and is considered a quality dimension of information~\cite{cai2015challenges}.
Further, ii) accuracy is a quality element of information that contributes to the reliability of information~\cite{cai2015challenges}. Applying the methodology of estimating choice variability~\cite{brus2021sources,polania2019efficient}, accuracy is operationalized in this paper as follows: With equal probability, survey questions are displayed more than once to participants.
The average accuracy with which a participant answers a specific question is then calculated by taking the Jaccard similarity~\cite{jaccard1901bulletin} between the answers provided to that question. The final accuracy for a user is then obtained by taking the average similarity over all questions and days.
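A minimal sketch of this operationalization (illustrative only; the function and variable names are assumptions) is given below: answers to repeated displays of a question are compared pairwise via the Jaccard similarity and then averaged.
\begin{verbatim}
# Illustrative accuracy computation via Jaccard similarity; names are assumed.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

def accuracy(answers_per_question: dict) -> float:
    """answers_per_question maps a question id to the list of answer sets
    (e.g. selected options) a participant gave on repeated displays."""
    scores = []
    for repeats in answers_per_question.values():
        if len(repeats) < 2:
            continue  # question shown only once: nothing to compare
        pairs = list(combinations(repeats, 2))
        scores.append(sum(jaccard(a, b) for a, b in pairs) / len(pairs))
    return sum(scores) / len(scores) if scores else float("nan")

# Identical repeated answers give 1.0; partial overlap lowers the score:
print(accuracy({1: [{"a", "b"}, {"a", "b"}], 2: [{"c"}, {"c", "d"}]}))  # 0.75
\end{verbatim}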
\subsection{Hypotheses}
\label{sec:hypotheses}
The hypotheses of this paper test five assumptions regarding the utilized token incentives (Section \ref{sec:exp_treatment}) and treatment group assembly. They are formulated by connecting these assumptions to the introduced conceptual impact model~(Figure~\ref{fig:hypotheses_model}).
In the following, the five assumptions are first illustrated (Section \ref{sec:assumptions}) before the hypotheses are introduced (Section \ref{sec:hypotheses_formulation}).
\subsubsection{Assumptions}
\label{sec:assumptions}
\begin{assump}
\label{ass:money_token}
\textit{The money token (stable coin) is perceived as a monetary incentive and thus has a similar impact on human information-sharing behavior as money.}
\end{assump}
The money token utilized in the experiment is a stable coin that has a fixed exchange rate with the Swiss franc (Section~\ref{sec:exp_treatment}) and thus resembles fiat money. The impact of fiat money on human behavior has been studied in information-sharing scenarios of related work (Section~\ref{sec:rel_work}). Due to this resemblance, it is hypothesized that the money token has a similar impact on human motivation and information-sharing behavior as monetary incentives. In particular, the money token impacts the extrinsic motivation positively and the intrinsic motivation negatively such that the quantity of shared information is increased and the quality is decreased, as illustrated in Figure~\ref{fig:hypotheses_model}.
\begin{assump}
\label{ass:context_reputation}
\textit{The context token impacts intrinsic motivation positively.}
\end{assump}
The context token is a utility token that has reputation as its source of value (Section \ref{sec:exp_treatment}) and it is thus hypothesized that it is perceived as a competence-enhancing incentive~\cite{amabile1993motivational} that increases the intrinsic motivation of individuals.
\begin{assump}
\label{ass:context_extrinsic}
\textit{The context token impacts extrinsic motivation positively.}
\end{assump}
Since the context token shares some characteristics with money (it is transferable and collectible), it is hypothesized that it has a positive impact on extrinsic motivation, albeit to a lesser extent than the money token.
\begin{assump}
\label{ass:no_interaction}
\textit{No interaction exists between the money and the context token.}
\end{assump}
It is assumed that no interaction between the money and context token exists when they are applied simultaneously.
\begin{assump}
\label{ass:no_bias}
\textit{No bias exists in the assembly of the treatment groups.}
\end{assump}
It is assumed that each treatment group consists of a similar participant structure.
These assumptions are the basis for the hypotheses that are formulated in the following.
\subsubsection{Hypotheses formulation}
\label{sec:hypotheses_formulation}
In order to formulate the hypotheses, the assumptions (Section \ref{sec:assumptions}) are linked to the conceptual impact model (Figure \ref{fig:hypotheses_model}):
Under the assumption of no biases in the assembly of the treatment groups (Assumption~\ref{ass:no_bias}), it is hypothesized that on Day~1 of the experiment no differences in behavior among the treatment groups, measured in the quantity of answers or contextualizations, are observed, because no token incentives are applied on that day:
\begin{hyp}
\textit{Day 1: \\ quantity(M) $=$ quantity(C) $=$ quantity(N) $=$ quantity(B)}
\end{hyp}
\begin{hyp}
\textit{Day 1: \\ context(C) $=$ context(N) $=$ context(M) $=$ context(B)}
\end{hyp}
Due to the impact on extrinsic motivation (Assumptions~\ref{ass:money_token} and~\ref{ass:context_extrinsic}), it is hypothesized from the arguments above that the M group (money token incentive) shares a greater quantity of information during incentivization days when compared to the C group (context token incentive), which in turn shares more information than the N group (control group). Moreover, under the assumption of no interactions (Assumption~\ref{ass:no_interaction}) and because both tokens contribute to extrinsic motivation (Assumptions \ref{ass:money_token} and \ref{ass:context_extrinsic}), it is hypothesized that the B group (both token incentives) shares more information than the M group. Thus for Days 2 and 3, when incentives are applied, the following hypothesis is posed:
\begin{hyp}
\textit{Days 2 \& 3: \\ quantity(B) $>$ quantity(M) $>$ quantity(C) $>$ quantity(N)}
\end{hyp}
Also, it is hypothesized that, because of the competence-enhancing effect of the context token that would increase the intrinsic motivation of individuals (Assumption \ref{ass:context_reputation}), the C group shares information with greater quality characteristics such as contextualization or accuracy when compared to the N group. Moreover, because of the negative impact of the money token on intrinsic motivation (Assumption \ref{ass:money_token}), the M group's quality characteristics are hypothesized to be worse than those of the N group. Finally, because i) the context token offsets the negative impact of the money token on intrinsic motivation (Assumptions \ref{ass:money_token} and \ref{ass:context_reputation}), and ii) there is no interaction effect between the tokens (Assumption \ref{ass:no_interaction}), it is hypothesized that the B group shares information with equal quality when compared to the N group, but less than the C group.
Thus for Days 2 and 3, when incentives are applied, the following hypotheses are stated:
\begin{hyp}
\textit{Days 2 \& 3:\\ context(C) $>$ context(B) $=$ context(N) $>$ context(M)}
\end{hyp}
Since no incentives are applied on the fourth day of the experiment, only the intrinsic motivation of individuals affects the characteristics of shared information on that day (Figure \ref{fig:hypotheses_model}).
Thus, because it is assumed that the money token decreased (Assumption \ref{ass:money_token}) while the context token incentive increased (Assumption \ref{ass:context_reputation}) the intrinsic motivation, it is hypothesized that for the quality characteristics the C group outperforms the N group, which in turn outperforms the M group. Moreover, the N group and B group share an equal number of contextualizations:
\begin{hyp}
\textit{Day 4: \\ context(C) $>$ context(N) $=$ context(B) $>$ context(M)}
\end{hyp}
In contrast, because intrinsic motivation only plays a minor role for the quantity of shared information (Figure \ref{fig:hypotheses_model}), it is hypothesized that the number of answers given on Day 4 does not differ significantly between the groups:
\begin{hyp}
\textit{ Day 4: \\ quantity(M) $=$ quantity(C) $=$ quantity(N) $=$ quantity(B)}
\end{hyp}
\begin{table*}[]
\caption{Results of the chi-squared ($\chi^2$) test for the eight demographic questions and the treatment/wave grouping per treatment group/ recruitment wave, illustrating that no bias is identified in the construction of the groups or the recruitment waves.} \label{tab:treatment_bias}
\begin{tabular}{lllllll} \hline\\
\textbf{} & \textbf{} & \multicolumn{2}{c}{\textbf{Treat. Group}} & \multicolumn{2}{c}{\textbf{Rec. Wave}} & \multicolumn{1}{c}{\textbf{}} \\
\textbf{ID} & \textbf{Question} & \textit{T. stat.} & \textit{p val.} & \textit{T. stat.} & \textit{p val.} & \multicolumn{1}{c}{\textbf{n}} \\ \hline \\
1 & What is your gender? & 1.5 & 0.69 & 2 & 0.57 & 3 \\
2 & How old are you? & 49.5 & 0.42 & 54.7 & 0.23 & 48 \\
3 & How long do you use a mobile smart phone? & 8.9 & 0.45 & 5.5 & 0.79 & 9 \\
4 & How long do you use blockchain/ crypto apps? & 7.2 & 0.84 & 27.6 & 0.01 & 12 \\
5 & Which of the following groups do you belong to? & 32.3 & 0.22 & 22.1 & 0.73 & 27 \\
6 & What is your role or function at {[}..{]} your institution {[}..{]}? & 18.1 & 0.26 & 7.8 & 0.93 & 15 \\
7 & What field/subject do you mainly work/study in? & 26.1 & 0.35 & 25.6 & 0.37 & 24 \\
8 & Do you use the res. and services provided by {[}..{]} library? & 6 & 0.11 & 0 & 1 & 3 \\
9 & Recruitment Wave & 2.1 & 0.99 & 396 & 0 & 9 \\
10 & Treatment Group & 396 & 0 & 2.1 & 0.99 & 9 \\ \hline
\end{tabular}
\end{table*}
\begin{table}[]
\caption{p-values obtained from the normality test of dependent variables per day/ over all days and treatment group. p-values $\geq$ 0.05 indicate that normality cannot be rejected (marked with an asterisk).} \label{tab:normality}
\begin{tabular}{cllll} \hline \\
\multicolumn{1}{l}{\textbf{Day}} & \textbf{Treatment} & \multicolumn{3}{c}{\textbf{p-value}} \\
& & \textit{Answ.} & \textit{Cont.} & \textit{Accu.} \\ \hline \\
\multirow{4}{*}{\textbf{1}} & \textit{No} & 0 & 0 & - \\
& \textit{Context} & 0 & 0.002 & - \\
& \textit{Money} & 0 & 0 & - \\
& \textit{Both} & 0 & 0 & - \\
\multirow{4}{*}{\textbf{2}} & \textit{No} & 0 & 0.222* & - \\
& \textit{Context} & 0 & 0.001 & - \\
& \textit{Money} & 0 & 0 & - \\
& \textit{Both} & 0 & 0 & - \\
\multirow{4}{*}{\textbf{3}} & \textit{No} & 0.762* & 0.05* & - \\
& \textit{Context} & 0.251* & 0.04 & - \\
& \textit{Money} & 0.001 & 0 & - \\
& \textit{Both} & 0 & 0 & - \\
\multirow{4}{*}{\textbf{4}} & \textit{No} & 0.003 & 0 & - \\
& \textit{Context} & 0.16* & 0.01 & - \\
& \textit{Money} & 0 & 0 & - \\
& \textit{Both} & 0 & 0 & - \\
\multirow{4}{*}{\textbf{All}} & \textit{No} & 0.001 & 0.231* & 0 \\
& \textit{Context} & 0.217* & 0.002 & 0 \\
& \textit{Money} & 0.528* & 0 & 0.005 \\
& \textit{Both} & 0 & 0.007 & 0.026 \\ \hline
\end{tabular}
\end{table}
The accuracy is measured over all four days. Consequently, no hypothesis about daily differences among the groups can be formulated. Accuracy is a quality characteristic (Section \ref{sec:variables_measures}). Quality has been found to be positively impacted by intrinsic motivation (Section \ref{sec:rel_work}). Thus, it is hypothesized that the averaged accuracy score of the C group is higher than that of the N group, which in turn has a higher score than the M group. Moreover, because the context token offsets the negative impact of the money token on intrinsic motivation, the N and B groups have a similar accuracy:
\begin{hyp}
\textit{All days: \\ accuracy(C) $>$ accuracy(N) $=$ accuracy(B) $>$ accuracy(M)}
\end{hyp}
\subsection{Analysis methods}
\label{sec:analysis_meth}
Figure \ref{fig:variables_methods} illustrates the methods that are applied to evaluate the hypotheses.
The demographic information from the entry survey in the entry phase is utilized to illustrate the profiles of participants. Moreover, this information is applied in chi-squared ($\chi^2$) tests~\cite{lowry2014concepts} to validate that no treatment group biases are present. In particular, the test is employed to test the null hypothesis that the demographic variables are independent of the treatment group assignment.
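For illustration, such a bias check can be run with SciPy's chi-squared test of independence on a contingency table of demographic category versus treatment group; the counts below are invented and do not reproduce Table \ref{tab:treatment_bias}.
\begin{verbatim}
# Illustrative chi-squared independence test; the counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: gender categories; columns: treatment groups N, C, M, B.
table = np.array([[16, 15, 17, 14],
                  [17, 18, 15, 18],
                  [ 0,  0,  1,  1]])
stat, p, dof, expected = chi2_contingency(table)
print(f"chi2={stat:.2f}, p={p:.2f}")  # p >= 0.05: no group bias detected
\end{verbatim}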
Survey responses from the exit survey are used to validate the experimental setup such as the rewards obtained by the participants.
Histograms, Q-Q plots, the Shapiro-Wilk test~\cite{shapiro1965analysis,razali2011power} and the D'Agostino \& Pearson test~\cite{d1971omnibus,d1973tests} are used to investigate the distribution of the dependent variables.
In order to analyze treatment group differences, the Kruskal-Wallis one-way analysis of variance by ranks for independent samples (H-test) is utilized~\cite{kruskal1952use}; for post-hoc pairwise comparisons of mean rank sums, the Conover-Iman~\cite{conover1979multiple} and the Dunn~\cite{dunn1964multiple} methods are applied. Furthermore, CDF plots are utilized to investigate differences in group behavior.
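A hedged sketch of this analysis pipeline is given below, using SciPy for the H-test and the third-party scikit-posthocs package (an assumption; any implementation of the Dunn or Conover-Iman test can be substituted) for the pairwise comparisons; the samples are invented.
\begin{verbatim}
# Illustrative Kruskal-Wallis test with Dunn post-hoc analysis; data invented.
from scipy.stats import kruskal
import scikit_posthocs as sp

n = [12, 30, 25, 40]     # answers per participant, control group
c = [35, 42, 50, 38]     # context token group
m = [140, 150, 60, 148]  # money token group
b = [150, 145, 150, 70]  # both tokens

h, p = kruskal(n, c, m, b)
print(f"H={h:.2f}, p={p:.3f}")
if p < 0.05:  # pairwise mean-rank comparisons with Holm correction
    print(sp.posthoc_dunn([n, c, m, b], p_adjust="holm"))
\end{verbatim}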
The treatment effect of the applied incentives and the interaction effect among the incentives are analyzed via interaction plots.
\begin{figure*}%
\centering
\subfloat[\centering Day 1]{{\includegraphics[width=0.245\linewidth]{cdf_answers_day1.pdf} \label{fig:quant_day1_cdf}}}%
\subfloat[\centering Day 2]{{\includegraphics[width=0.245\linewidth]{cdf_answers_day2.pdf} \label{fig:quant_day2_cdf}}}%
\subfloat[\centering Day 3]{{\includegraphics[width=0.245\linewidth]{cdf_answers_day3.pdf} \label{fig:quant_day3_cdf}}}%
\subfloat[\centering Day 4]{{\includegraphics[width=0.245\linewidth]{cdf_answers_day4.pdf} \label{fig:quant_day4_cdf}}}%
\qquad
\subfloat[\centering Day 1]{{\includegraphics[width=0.245\linewidth]{cdf_context_day1.pdf}\label{fig:context_day1_cdf} } }%
\subfloat[\centering Day 2]{{\includegraphics[width=0.245\linewidth]{cdf_context_day2.pdf}\label{fig:context_day2_cdf} }}%
\subfloat[\centering Day 3]{{\includegraphics[width=0.245\linewidth]{cdf_context_day3.pdf}\label{fig:context_day3_cdf} }}%
\subfloat[\centering Day 4]{{\includegraphics[width=0.245\linewidth]{cdf_context_day4.pdf} \label{fig:context_day4_cdf}}}%
\qquad
\subfloat[\centering All Days]{{\includegraphics[width=0.245\linewidth]{cdf_accuracy_total.pdf} \label{fig:accurarcy_cdf} }}%
\caption{Cumulative distribution plots for the treatments of the dependent variables over the four days/all days. The plots illustrate the cumulative percentage of users who reach an equal or lower value of the respective variable.}%
\label{fig:cdf}%
\end{figure*}
\begin{figure*}%
\centering
\subfloat[\centering Day 1]{{\includegraphics[width=0.245\linewidth]{quantity_day_1.pdf} \label{fig:quantity_day1_interaction} }}%
\subfloat[\centering Day 2]{{\includegraphics[width=0.245\linewidth]{quantity_day_2.pdf} \label{fig:quantity_day2_interaction}}}%
\subfloat[\centering Day 3]{{\includegraphics[width=0.245\linewidth]{quantity_day_3.pdf} \label{fig:quantity_day3_interaction}}}%
\subfloat[Day 4]{{\includegraphics[width=0.245\linewidth]{quantity_day_4.pdf} \label{fig:quantity_day4_interaction}}}%
\qquad
\subfloat[\centering Day 1]{{\includegraphics[width=0.245\linewidth]{context_day_1.pdf}\label{fig:context_day1_interaction} } }%
\subfloat[\centering Day 2]{{\includegraphics[width=0.245\linewidth]{context_day_2.pdf}\label{fig:context_day2_interaction} }}%
\subfloat[\centering Day 3]{{\includegraphics[width=0.245\linewidth]{context_day_3.pdf}\label{fig:context_day3_interaction} }}%
\subfloat[\centering Day 4]{{\includegraphics[width=0.245\linewidth]{context_day_4.pdf}\label{fig:context_day4_interaction}}}%
\qquad
\subfloat[\centering All Days]{{\includegraphics[width=0.245\linewidth]{accuracy_day_all.pdf} \label{fig:accuracy_interaction}}}%
\caption{Interaction plots among the treatments for the dependent variables over the four days/ all days. A minus indicates that the token has not been applied, whereas a plus indicates that it has been applied. Thus, the dashed line connects the treatment groups that utilized the context token (left: treatment with context token; right: treatment with both tokens). The solid line connects the treatment groups that did not utilize the context token (left: treatment with no tokens (control group); right: treatment with money token).}%
\label{fig:interaction_plots}%
\end{figure*}
\section{Results}
\label{sec:results}
\subsection{Demographics/ Profiles of the participants}
150 candidates were invited to participate in the study, $132$ of which completed all three phases (entry, core, and exit phase in Figure \ref{fig:variables_methods}). The average age of the participants was $23.2$ years. $62$ were male, $68$ were female and $2$ did not specify their gender. $36$ users had used blockchain/crypto apps before the experiment (50\% for 1-6 months, 16.6\% for 6-12 months, 16.6\% for 1-2 years, 16.6\% for more than 2 years). $65$ participants were bachelor students, $56$ were master students, and 11 were "other". 54.5~\% of the participants stated that they were active users of the services of the library that functioned as a use case for the living lab experiment methodology.
\subsection{Treatment groups biases, experiment validation, and dependent variable distribution}
Table \ref{tab:treatment_bias} depicts the results of the chi-squared ($\chi^2$) test for the demographic questions and the treatment/wave per treatment~group/recruitment wave. Neither in the treatment group construction nor in the recruitment waves are biases identified.
On average, the participants found the rewards fair (2.6/4)\footnote{Evaluated on a 5-point Likert scale.} and the onboarding materials useful (2.8/4). In particular, it has been identified in earlier work that learning how to utilize the web application is perceived as easy by the participants~\cite{ballandies2022improving}. Thus, the chosen compensation fulfilled the requirements of DeSciL\footnotemark[\getrefnumber{fn:descil}], and the chosen technology in the form of a blockchain-based Web application did not hinder users from participating in information sharing.
Table \ref{tab:normality} illustrates the distributions of the dependent variables utilized in the analysis. In the majority of cases, these variables are non-normally distributed, thus requiring the Kruskal-Wallis test, which does not assume normally distributed variables, for the analysis~\cite{kruskal1952use}.
\subsection{Group differences and interaction effects}
\begin{table*}[] \caption{Findings from the Kruskal-Wallis and Conover-Iman posthoc analysis. Entries that accept the hypotheses stated in Section \ref{sec:hypotheses} are marked in light green. The following deviations can be explained by adjusting the assumptions of the hypotheses: i - Context token has a negligible effect on intrinsic motivation; ii - Extrinsic motivation has a considerable impact on contextualization; iii - Interaction effects between the tokens are present.} \label{tab:findings}
\begin{tabular}{lllccccccc} \hline \\
{\color[HTML]{4C4C4C} \textbf{ID}} & {\color[HTML]{4C4C4C} \textbf{Character.}} & {\color[HTML]{4C4C4C} \textbf{Hyp.}} & {\color[HTML]{4C4C4C} \textbf{Day}} & \multicolumn{6}{c}{{\color[HTML]{4C4C4C} \textbf{Finding}}} \\
{\color[HTML]{4C4C4C} } & {\color[HTML]{4C4C4C} } & {\color[HTML]{4C4C4C} } & {\color[HTML]{4C4C4C} } & {\color[HTML]{4C4C4C} \textit{a}} & {\color[HTML]{4C4C4C} \textit{b}} & {\color[HTML]{4C4C4C} \textit{c}} & {\color[HTML]{4C4C4C} \textit{d}} & {\color[HTML]{4C4C4C} \textit{e}} & {\color[HTML]{4C4C4C} \textit{f}} \\ \hline \\
{\color[HTML]{4C4C4C} 1} & {\color[HTML]{4C4C4C} Quantity} & {\color[HTML]{4C4C4C} H1} & {\color[HTML]{4C4C4C} 1} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M =C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $C=N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M=N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=M$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=N$} \\
{\color[HTML]{4C4C4C} 2} & {\color[HTML]{4C4C4C} Quantity} & {\color[HTML]{4C4C4C} H3} & {\color[HTML]{4C4C4C} 2} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M > C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $C>N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M>N$} & {\color[HTML]{4C4C4C} $B=M^{iii}$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B>C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B>N$} \\
{\color[HTML]{4C4C4C} 3} & {\color[HTML]{4C4C4C} Quantity} & {\color[HTML]{4C4C4C} H3} & {\color[HTML]{4C4C4C} 3} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M > C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $C>N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M>N$} & {\color[HTML]{4C4C4C} $B=M^{iii}$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B>C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B>N$} \\
{\color[HTML]{4C4C4C} 4} & {\color[HTML]{4C4C4C} Quantity} & {\color[HTML]{4C4C4C} H6} & {\color[HTML]{4C4C4C} 4} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M =C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $C=N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M=N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=M$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=N$} \\
{\color[HTML]{4C4C4C} 5} & {\color[HTML]{4C4C4C} Context} & {\color[HTML]{4C4C4C} H2} & {\color[HTML]{4C4C4C} 1} & {\color[HTML]{4C4C4C} $M<C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $C=N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M=N$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=M$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=N$} \\
{\color[HTML]{4C4C4C} 6} & {\color[HTML]{4C4C4C} Context} & {\color[HTML]{4C4C4C} H4} & {\color[HTML]{4C4C4C} 2} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M < C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $C>N^{ii}$} & {\color[HTML]{4C4C4C} $M=N^{ii}$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B>M$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B<C$} & {\color[HTML]{4C4C4C} $B>N^{ii}$} \\
{\color[HTML]{4C4C4C} 7} & {\color[HTML]{4C4C4C} Context} & {\color[HTML]{4C4C4C} H4} & {\color[HTML]{4C4C4C} 3} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M < C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $C>N^{ii}$} & {\color[HTML]{4C4C4C} $M=N^{ii}$} & {\color[HTML]{4C4C4C} $B=M^{i,iii}$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B < C$} & {\color[HTML]{4C4C4C} $B=N^{i,iii}$} \\
{\color[HTML]{4C4C4C} 8} & {\color[HTML]{4C4C4C} Context} & {\color[HTML]{4C4C4C} H5} & {\color[HTML]{4C4C4C} 4} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M < C$} & {\color[HTML]{4C4C4C} $C=N^{i}$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M < N$} & {\color[HTML]{4C4C4C} $B= M^{i}$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B<C$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $B=N$} \\
{\color[HTML]{4C4C4C} 9} & {\color[HTML]{4C4C4C} Accuracy} & {\color[HTML]{4C4C4C} H7} & {\color[HTML]{4C4C4C} all} & {\color[HTML]{4C4C4C} $M = C$} & {\color[HTML]{4C4C4C} $C=N^{i}$} & \cellcolor[HTML]{E5F5E0}{\color[HTML]{4C4C4C} $M < N$} & {\color[HTML]{4C4C4C} $B= M^{i}$} & {\color[HTML]{4C4C4C} $B=C$} & {\color[HTML]{4C4C4C} $B<N^{i}$} \\
\hline
\end{tabular}
\end{table*}
Table \ref{tab:kruskal} depicts the results of the Kruskal-Wallis test applied to the distributions of the dependent variables for each day/ over all days of the four treatment groups. Tables \ref{tab:post_hoc_days} and \ref{tab:post_hoc_all} illustrate the post-hoc analysis that applies the Conover-Iman test for those days which exhibit significant differences in the Kruskal-Wallis analysis. Moreover, Figure \ref{fig:cdf} depicts the cumulative distribution for each treatment group and Figure \ref{fig:interaction_plots} shows the interactions among the treatments for the analyzed dependent variables.
In the following, the observations for each dependent variable are illustrated in detail.
\subsubsection{Quantity}
The treatment group behaviors for the quantity variable are significantly different for Day~2 and Day~3 (Table \ref{tab:kruskal}). Considering the post-hoc analysis (Table \ref{tab:post_hoc_days}), it is possible to determine that for both days, all treatment group pairs are significantly different, except for the B-M (both token incentives-money token incentive) pair. The CDF plot illustrates this observation (Figure \ref{fig:cdf}): The M and B groups have a similarly higher probability of providing more answers than the C (context token incentive) and N (control group) groups, in this order. Moreover, the M and B distributions show two peaks, one around 60 answers and one around 150 answers, the latter being the maximum number of answers for which a payment is received on a given day (Section \ref{sec:exp_treatment}). These peaks are more clearly visible on Day 3 and are stronger for the B group than for the M group. Moreover, the CDF plot for Day 3 (Figure \ref{fig:quant_day3_cdf}) indicates a tendency for money token receivers to answer a higher number of questions.
The plots in Figure \ref{fig:interaction_plots} illustrate the median interaction effects.
Similarly to the Kruskal-Wallis test, on Day 1 and Day 4 no effect of the incentives is identified (Figures \ref{fig:quantity_day1_interaction} and \ref{fig:quantity_day4_interaction}). On Days~2 and 3 (Figures \ref{fig:quantity_day2_interaction} and \ref{fig:quantity_day3_interaction}), both incentives result in an increase of questions answered when compared to the control group, whereby the money token leads to a considerably stronger increase than the context token. Moreover, on Day~3 an interaction is observed: When compared to the money group, the context token dampens the effect of the money token in the B group, resulting in fewer questions answered.
\subsubsection{Contextualization}
In contrast to quantity, the treatment group behaviors are significantly different for all four days (Table \ref{tab:kruskal}). The interaction plot on Day 1 (Figure \ref{fig:context_day1_interaction}) illustrates how the context token treatment resulted in a higher number of contextualizations. The CDF plot on Day 1 (Figure \ref{fig:context_day1_cdf}) depicts a similar distribution of the treatment groups, with a higher tendency of the context group to provide more contextualizations.
On Day~2, all groups except the M-N pair are significantly different (Table \ref{tab:post_hoc_days}). Nevertheless, on Day 3 no differences between the B-N and B-M pairs are observed any longer. An opposing trend is observed in the M-N and B-C pairs where the p-values become smaller over the two days (Table \ref{tab:post_hoc_days}).
The CDF plots for Day 2 and 3 (Figure \ref{fig:context_day2_cdf} and \ref{fig:context_day3_cdf}) illustrate these trends: On Day 2, the B group distribution is close to the C distribution. Nevertheless, on Day 3 it more closely resembles the M group distribution, where most individuals provide few contextualizations and few individuals many. Moreover, the difference between the M and N groups becomes stronger for individuals that provide few contextualizations.
The interaction plots for Days 2 and 3 (Figures \ref{fig:context_day2_interaction} and \ref{fig:context_day3_interaction}) illustrate an interaction effect between the money and context token resulting in fewer contextualizations when compared to the context token alone. In contrast to the money token, no trend in the interaction is observed over the four days.
After removing the incentives on Day 4, the pairs B-C, C-M, and M-N show a significantly different behavior (Table \ref{tab:post_hoc_days}). This is in contrast to the observation for the quantity variable, where no distinct behavior on Day 4 is identified.
No significant difference between treatments with the context token and the control group is identified. The CDF plot for Day 4 (Figure \ref{fig:context_day4_cdf}) illustrates the similarity between the B-M groups, and respectively between the C-N groups and the difference between each of these pairs.
The interaction plot (Figure \ref{fig:context_day4_interaction}) illustrates how the C group provides the most contextualizations on Day 4, followed by the control group and then the other two groups.
\subsubsection{Accuracy}
A significant difference among the group behaviors for the accuracy is identified over all four days (Table~\ref{tab:kruskal}). Table~\ref{tab:post_hoc_all} indicates that this difference originates from the pairs B-N and M-N. Nevertheless, the p-values of the pairs B-C and M-C are also almost significant (p-value = 0.052). These differences are illustrated in the CDF plot (Figure \ref{fig:accurarcy_cdf}), which depicts higher probabilities for the C and N groups to reach higher accuracy values when compared to the other two groups. Figure \ref{fig:accuracy_interaction} shows that the control group reaches the highest accuracy in their answers, followed by the context, both, and money groups.
\section{Discussion}
\label{sec:discussion}
\subsection{Results}
Table \ref{tab:findings} illustrates the findings from the Kruskal-Wallis and post-hoc analysis with regard to the hypotheses (Section~\ref{sec:hypotheses_formulation}). The results inform an adjustment to the assumptions (Section~\ref{sec:assumptions}) that were utilized in the formulation of these hypotheses:
I) It was assumed that the context token has a positive impact on both the intrinsic and extrinsic motivation (Assumptions \ref{ass:context_reputation} and \ref{ass:context_extrinsic} in Section \ref{sec:hypotheses}). Yet, the findings provide evidence that this token has overall only a small positive or negligible impact on intrinsic motivation. This would explain the parity of the C group and N group for the context characteristic on Day 4 (ID 8/Column b in Table \ref{tab:findings}): No incentives are applied on that day, thus the extrinsic motivation is equally zero for both groups and only the intrinsic motivation defines the contextualization behavior. However, the median number of contextualizations on Day 4 (Figure \ref{fig:context_day4_interaction}) is higher for the C group when compared to the N group, indicating a small positive impact that is also illustrated by the CDF plot (Figure \ref{fig:context_day4_cdf}).
The negligible impact is also illustrated by the parity between the B and M groups on Day 4 (ID 8/Column d in Table \ref{tab:findings}). The money token reduced the intrinsic motivation in both groups, and because the context token did not offset this negative impact, it is the same for both groups on Day 4.
Moreover, the parity of these pairs for the accuracy characteristic (ID 9/Column b and d in Table \ref{tab:findings}) can be explained thus: Since accuracy is mainly impacted by intrinsic motivation, which following the previous considerations is equally pronounced between the M and B groups, the group behaviors are equal. It also explains the inequality between the B and N group (ID 9/ Column f in Table \ref{tab:findings}): Since the intrinsic motivation is reduced in the B group due to the money token, and the context token does not have a significant positive impact on intrinsic motivation, the intrinsic motivation in the B group is lower than in the N group and thus the shared information is of lesser accuracy.
II) Extrinsic motivation has a considerable impact on the context characteristic. i) This explains the parity between the M and N group for Days 2 and 3 (ID 6,7/Column c in Table \ref{tab:findings}). Although the intrinsic motivation is reduced due to the money token, it is replaced by the extrinsic motivation stemming from this incentive, resulting in a similar contextualization-sharing behavior. Consequently, on Day 4, when the incentive is removed, the M group shares fewer contextualizations (ID 8/Column c in Table \ref{tab:findings}). ii) It also explains the inequality between the B and N group on Day 2 (ID 6/Column f in Table \ref{tab:findings}). In contrast to the comparison between the M and N group, the context token in the B group adds to the extrinsic motivation such that the decrease in intrinsic motivation is exceeded, resulting in a greater motivation to share contextualizations when compared to the N group. iii) Moreover, following the same arguments, it also explains the inequality between the B and M group on Day 2 (ID 6/Column d in Table \ref{tab:findings}). iv) Finally, it also explains the inequality between the C and N group on Days 2 and 3 (ID 6,7/Column b in Table \ref{tab:findings}).
III) In contrast to Assumption \ref{ass:no_interaction}, interactions between the money and context token incentive are observed. i) For Day 2, the B group shares more contextualizations than the monetary group (ID 6/Column d in Table \ref{tab:findings}). Nevertheless, on Day 3 no difference is observed (ID 7/Column d in Table \ref{tab:findings}), which indicates that over time the two tokens interact with each other, thereby decreasing their impact on the users' motivation. ii) This might also explain the parity between these groups for the quantity of information shared on Day 2 and 3 (ID 2,3/Column d in Table \ref{tab:findings}): Both tokens interfere such that their combined impact on users' motivation does not differ from a single token incentive. Furthermore, the interaction plot (Figure \ref{fig:interaction_plots}) even indicates a lower positive impact of the combined incentives when compared to the single money token incentive. In addition, the plot indicates that this interaction becomes stronger over the four days, which is also illustrated by the CDF plot.
iii) This interaction also explains the shift from inequality to equality for the B and N group on Days 2 and 3 (ID 6,7/Column f in Table \ref{tab:findings}).
The findings further provide evidence that the money token crowds out intrinsic motivation.
The interaction plots and CDF plots for the accuracy indicate a crowding-out of intrinsic motivation by the context token. Nevertheless, according to the Kruskal-Wallis test and its post-hoc analysis, these latter differences are not significant. In particular, for the number of contextualizations the context token even has a positive impact after incentivization ends, which might be explained by an internalization of the incentive (Section \ref{sec:std_theory}) for this information dimension.
Finally, a time effect is present in both single- and multiple-token scenarios, which indicates that the behavioral change can vary over time.
\subsection{Implications}
\subsubsection{One-dimensional token systems}
\label{sec:one_dimensional_token_system}
The internalization effect of the context token on contextualization actions after incentivization ends illustrates a potential advantage of blockchain-based cryptoeconomic incentives when compared to traditional approaches utilizing monetary incentives. The intrinsic motivation of users might be impacted positively by internalizing these incentives (Section \ref{sec:std_theory}), thus resulting in an improvement of performance measured in the amount of contextualizations provided, even after the incentivization period ends.
However, the findings also indicate that this utility token induces a worse performance in the accuracy of shared data when compared to the control group (Figure \ref{fig:accurarcy_cdf} and \ref{fig:accuracy_interaction}). Therefore, the identified internalization might be limited to information dimensions that are directly incentivized by a utility token.
Moreover, the findings indicate that stable coins such as the utilized money token crowd out intrinsic motivation, which in this work resulted in a reduction of information quality, measured in accuracy and contextualization.
In order to design effective incentives, future work should evaluate the token designs and scenarios under which internalization or crowding-out are observed. In particular, as internalization or crowding-out can vary between different performance measures (e.g. contextualization and accuracy), one is advised to carefully evaluate all impacts a token might have before using it in real-world applications.
\subsubsection{Multi-dimensional token systems}
The findings provide evidence that applying multiple token-based incentives simultaneously can result in a combined improvement of several information characteristics (e.g., as shown for quantity and contextualization) and could therefore improve system performance when compared to a scenario where a single token is utilized.
Nevertheless, the identified interaction effect between the two tokens of this paper indicates that designing multi-token systems is a non-trivial task that has implications for systems in which the application of multiple tokens is considered.
In particular, positive and negative impacts of tokens on human behavior may not simply add up. The findings also show that these effects may only become apparent over time. Thus, it is necessary to carefully analyze the interdependencies between combinations of tokens in longitudinal studies before they are utilized in real-world systems. The results of simulations and formal analyses of multi-dimensional token systems are limited if they do not consider these token interactions.
\subsubsection{(Ethical) risks}
Considering the observation that the current big data paradigm is not challenged by a lack of data~\cite{helbing2015digital}, but by a lack of contextualized and accurate information, the findings of this paper raise the question of whether incentives in the form of blockchain-based tokens should be applied at all to motivate individuals of communities to share information. Such incentives may result in a further increase in the quantity of collected information while reducing its quality (e.g. accuracy).
However, incentives might work differently in data-sharing scenarios where the quality of shared data under different incentivizations is determined by decisions users take a priori, which are then executed a posteriori by an artificial intelligence, as studied by \citet{pournaras2016self,asikis2020optimization} with a computational methodology for privacy-utility decisions.
Yet, neither the user acceptance of these decisions nor the impact they have on the trust of users in decision-support systems has been studied. Therefore, unknown effects could be present in these scenarios that bias users' behavior and which limit the generalizability of findings from such simulations to real-world situations. Furthermore, because users have to make decisions a priori in these scenarios, such approaches might fail to capture the unique situational domain knowledge users possess or their creativity and intuition, which are required for some application domains such as the customer feedback provision analyzed in this paper.
Increasingly, token incentives are applied in various application domains of society such as construction~\cite{hunhevicz2020incentivizing}, health~\cite{jung2021mechanism}, Covid-19 prevention measures~\cite{manoj2020incentive}, electricity production and consumption~\cite{wittekcrypto}, car sharing~\cite{kim2018blockchain,valavstin2019blockchain}, alleviating traffic congestion~\cite{aung2020t}, book-keeping~\cite{cai2019analysis}, decentralized access-control systems~\cite{gan2020token}, or waste reduction~\cite{pardi2021chemical}. Nevertheless, behavioral traits stemming from intrinsic motivation, such as creativity, joy, self-determination, purpose, and endurance may be important for some of those application domains which could be crowded out by these cryptoeconomic incentives and thus would result in reduced performance. For instance, endurance~\cite{hackmann2014social} and creativity~\cite{thornton2014climate} have been identified as important factors for addressing climate change.
Moreover, this increasing tokenization of areas of life that have not been tokenized before could reduce social relations and human interactions to transactions within a market-driven economy~\cite{pazaitis2017blockchain}. This might be in opposition to values that stakeholders in these systems hold~\cite{pazaitis2017blockchain}.
In addition, it has been found that the measurement act itself, which is required in tokenization for quantifying and proving actions~\cite{dapp2021finance}, can reduce intrinsic motivation and thus creativity and endurance in individuals~\cite{etkin2016hidden}.
In addition, the identified effects of this work question the assumptions of controversial token systems in the form of social credit systems~\cite{creemers2018china,liang2018constructing}, which are also discussed in Western democracies as tools for managing society~\cite{bmbf_2020}: Centrally designing and introducing token incentives may fail due to unknown, crowding-out, or interdependent effects that may cascade over time. Considering the large and complex design space of DLT systems~\cite{ballandies2021decrypting}, an iterative, local, and community-driven approach utilizing the wisdom of crowds and self-organization for token designs as illustrated in \citet{dapp2021finance} might be the way to proceed in designing stable token systems. In particular, these principles have been found to enable communities to mitigate the tragedy of the commons and successfully share and maintain a common resource~\cite{baur2021measures}.
Thus, before applying token incentives in an application scenario, this author suggests rigorously considering the values of all stakeholders in the system construction process and analyzing whether applying such incentives could crowd out intrinsic motivation in the scenario under consideration. Only then should the system be iteratively constructed in scenarios that are locally bound. For this, the methodology of this paper in combination with value-sensitive design~\cite{friedman2013value,van2015handbook} and iterative design science research~\cite{hevner2004design,vom2020special} methodologies can be applied, as demonstrated for token-based blockchain systems by \citet{ballandies2021finance,ballandies2022improving}.
\subsubsection{Limitations}
The experiment facilitates realism while enabling the laboratory-like testing of hypotheses~\cite{pournaras2022how}. Due to the realism sought, not all influences on users' information-sharing behavior could be controlled for, which may reduce the quality of the measurements and findings.
In particular, the questions asked were formulated by the library organization, which had a real business interest in the answers. Thus, the questions are not standardized and some of them might be more difficult to answer than others. This could introduce bias to quality characteristics such as accuracy and may have resulted in the lower differentiation between treatment groups in this characteristic (ID~9 in Table~\ref{tab:findings}). Furthermore, the accuracy was summarized over the four days. As a result, a granular daily view on the impact of token incentives on this characteristic is lacking.
\subsubsection{Impact}
\label{sec:impact}
The realistic setup of the experiment illustrates and underlines the importance of the findings of this paper for real-world organizations and communities. The identification of significant positive and negative effects of both token incentives on human sharing behavior and their observed interactions provide evidence that such effects are present in real-world sharing scenarios and should therefore be analyzed and evaluated by organizations and communities before they are applied in their use cases. In particular, a token design may not be robust, with use of the token having a different impact than intended~\cite{ballandies2021financebook}. The methodology of this paper can be applied to analyze such effects in real-world systems.
The identified effects (interaction, internalization, time, and crowding-out) inform the Token Engineering and Token Economics community in the design of stable cryptoeconomies. Currently, methodologies in these fields mainly rely on game theory, mechanism design, and simulations~\cite{barreiro2019blockchain,kim2021token,zhang2019engineering,khamisa2021token,laskowski2020evidence,tan2020economics}. Nevertheless, none of these approaches considers the identified effects of this work on human behavior in their assumptions. Consequently, including these effects could improve the correspondence of findings from these methodologies with reality. Thus, this paper demonstrates the importance of behavioral experiments in the field of Token Engineering and Token Economics.
In addition, this work illustrates the usability of self-determination theory to test hypotheses of token designs on human behavior.
\section{Conclusion and Outlook}
\label{sec:conclusion}
This work evaluates the combined impact of multiple cryptoeconomic incentives in the form of blockchain-based tokens on human information-sharing behavior. By utilizing a rigorous experimental methodology with a $2\times2$ factorial design involving 132 participants, the impact is evaluated in a real-world information-sharing scenario involving a major Swiss organization and its customers. The identified interaction effect between the tokens and the potential crowding-out of intrinsic motivation by these cryptoeconomic incentives are important for researchers and practitioners to consider because they indicate that designing multi-token systems is a non-trivial task: The impact of individual token incentives on human behavior is not independent of the others, and a token design might not be sufficiently robust, with the impact of the token possibly differing from the intended effect. These impacts have to be considered when implementing, simulating or mathematically analyzing token economies, as presented in the Discussion (Section \ref{sec:impact}). In particular, they inform the assumptions taken in theoretical models, validate their accuracy, and may thus facilitate their improved connection with reality.
Therefore, the methodology of this paper and the identified effects might be of use for organizations and communities that intend to apply (multiple) token incentives.
The results point to various avenues for future research. i) Since information quality is a multi-dimensional concept (Section \ref{sec:variables_measures}) and the impact of token incentives can vary between those dimensions (Section \ref{sec:one_dimensional_token_system}), the impact of the chosen token incentives on operationalizations of quality other than accuracy or contextualization can be evaluated to further quantify the impact of these tokens on human information-sharing behavior. ii) In general, considering the broad design space of tokens and blockchain systems, the impact of further instances of cryptoeconomic token incentives should be evaluated in experiments to identify conditions and scenarios that are impaired by or benefit from the introduction of cryptoeconomic incentives. iii) Due to the identified interaction effect and the complexity of potential system layouts, evaluating all these combinations in experimental setups might not be feasible. Thus, simulations should be employed to identify areas of interest in the design space, which, in a second step, are investigated in experiments. Modeling the determined effects of this work as emergent phenomena of a complex system could be a promising approach for these simulations.
Finally, machine learning methods such as k-means or hierarchical clustering could be utilized to identify hidden patterns in the data that may impact human sharing behavior under incentivization.
Thus, to conclude, further research by the cryptoeconomics community is required to identify why, how, and in which situations cryptoeconomic incentives should be applied.
\begin{landscape}
\begin{table}[]\caption{p-values obtained from the Kruskal-Wallis test when comparing the different treatment groups for the five dependent variables. Levels identifying significant differences among the treatment groups' distributions: $\leq$ 0.001 ***, $\leq$ 0.01 **, $\leq$ 0.05 *.}\label{tab:kruskal}
\begin{tabular}{cccc}\hline \\
\textbf{Day} & \multicolumn{3}{c}{\textbf{p-value}} \\
& \textit{Quantity} & \textit{Context.} & \textit{Accuracy} \\\hline \\
\textit{1} & 0.957 & 0.029* & - \\
\textit{2} & 0*** & 0*** & - \\
\textit{3} & 0*** & 0*** & - \\
\textit{4} & 0.795 & 0*** & - \\
\textit{All} & 0*** & 0*** & 0.021* \\ \hline
\end{tabular}
\end{table}
\begin{table}[] \caption{Daily p-values of Conover-Iman post-hoc analysis for the significant values of the Kruskal-Wallis test (Table \ref{tab:kruskal}) for the quantity and contextualization variables.
}\label{tab:post_hoc_days}
\begin{tabular}{cccccc|cccc|cccc|cccc} \hline
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{4}{c}{\textbf{Day 1: No incentives}} & \multicolumn{4}{c}{\textbf{Day 2: Token incentives}} & \multicolumn{4}{c}{\textbf{Day 3: Token incentives}} & \multicolumn{4}{c}{\textbf{Day 4: No incentives}} \\
\multicolumn{1}{l}{} & \textbf{Treat.} & \textit{B} & \textit{C} & \textit{M} & \textit{N} & \textit{B} & \textit{C} & \textit{M} & \textit{N} & \textit{B} & \textit{C} & \textit{M} & \textit{N} & \textit{B} & \textit{C} & \textit{M} & \textit{N} \\ \hline \\
\multirow{4}{*}{\textbf{Quantity}} & \textit{B} & - & - & - & - & 1 & 0 & 0.683 & 0 & 1 & 0.005 & 0.48 & 0 & - & - & - & - \\
& \textit{C} & - & - & - & - & 0 & 1 & 0 & 0.001 & 0.005 & 1 & 0.001 & 0.005 & - & - & - & - \\
& \textit{M} & - & - & - & - & 0.683 & 0 & 1 & 0 & 0.48 & 0.001 & 1 & 0 & - & - & - & - \\
& \textit{N} & - & - & - & - & 0 & 0.001 & 0 & 1 & 0 & 0.005 & 0 & 1 & - & - & - & - \\ \hline
\multirow{4}{*}{\textbf{Context.}} & \textit{B} & 1 & 0.204 & 0.712 & 0.644 & 1 & 0.049 & 0.02 & 0.02 & 1 & 0.001 & 0.098 & 0.393 & 1 & 0.007 & 0.289 & 0.289 \\
& \textit{C} & 0.204 & 1 & 0.035 & 0.712 & 0.049 & 1 & 0 & 0 & 0.001 & 1 & 0 & 0 & 0.007 & 1 & 0 & 0.227 \\
& \textit{M} & 0.712 & 0.035 & 1 & 0.181 & 0.02 & 0 & 1 & 0.926 & 0.098 & 0 & 1 & 0.395 & 0.289 & 0 & 1 & 0.041 \\
& \textit{N} & 0.644 & 0.712 & 0.181 & 1 & 0.02 & 0 & 0.926 & 1 & 0.393 & 0 & 0.395 & 1 & 0.289 & 0.227 & 0.041 & 1 \\ \hline
\end{tabular}
\end{table}
\begin{table}[]\caption{p-values of Conover-Iman post-hoc analysis for the significant values of the Kruskal-Wallis test (Table \ref{tab:kruskal}) for the accuracy variable over all days.
}\label{tab:post_hoc_all}
\begin{tabular}{cc|cccc|}\hline
\multicolumn{1}{l}{} & & \multicolumn{4}{c}{\textbf{Accuracy}} \\
\multicolumn{1}{l}{} & \textbf{Treat.} & \textit{B} & \textit{C} & \textit{M} & \textit{N} \\ \hline \\
\multirow{4}{*}{\textbf{Over all days}} & \textit{B} & 1 & 0.052 & 0.957 & 0.004 \\
& \textit{C} & 0.052 & 1 & 0.052 & 0.603 \\
& \textit{M} & 0.957 & 0.052 & 1 & 0.004 \\
& \textit{N} & 0.004 & 0.603 & 0.004 & 1 \\ \hline
\end{tabular}
\end{table}
\end{landscape}
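For reference, p-values of the kind reported in Table~\ref{tab:kruskal} and Tables~\ref{tab:post_hoc_days} and \ref{tab:post_hoc_all} can be obtained with standard Python tooling. The following minimal sketch is illustrative only: it is not the original analysis code, and the group measurements are hypothetical placeholders.
\begin{verbatim}
# Illustrative sketch: omnibus Kruskal-Wallis test followed by a
# Conover-Iman post-hoc analysis for one dependent variable.
# The measurements below are hypothetical placeholders.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

samples = {  # treatment group -> per-participant measurements
    "B": [12, 15, 9, 14],
    "C": [7, 8, 11, 6],
    "M": [13, 16, 10, 15],
    "N": [5, 9, 6, 7],
}

# Omnibus test over all four treatment groups.
h_stat, p_value = kruskal(*samples.values())
print("Kruskal-Wallis p-value:", round(p_value, 3))

# Pairwise post-hoc comparison, meaningful when the omnibus test
# is significant.
long = pd.DataFrame(
    [(g, v) for g, vals in samples.items() for v in vals],
    columns=["group", "value"],
)
print(sp.posthoc_conover(long, val_col="value", group_col="group"))
\end{verbatim}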
\appendices
\section*{Ethical approval}
\section*{Acknowledgment}
I thank Evangelos Pournaras, Dirk Helbing, Claudio Tessone and Carina I. Hausladen for their valuable comments. Moreover, I would like to thank Stefan Wehrli and the rest of the ETH DeSciL staff members for their support in conducting the experiment. Furthermore, I thank Maximiliane Okonnek and the ETH library lab team for providing the financial support of the experiment, the access to their infrastructure, and their valuable feedback. Finally, I thank Lewis J. Dale for his assistance in the proofreading of this paper.
\bibliographystyle{unsrtnat}
\section{Introduction}\label{chapter=plants:section=introduction}
Agriculture is a sector that requires multi-disciplinary knowledge to steadily evolve \cite{intro:multi_disciplinary, intro:multi_disciplinary_1, intro:multi_disciplinary_2}, since large-scale food production necessitates a deep understanding of every relevant plant species \cite{intro:deep_knowledge, intro:deep_knowledge_1, intro:deep_knowledge_2} and highly advanced machinery \cite{intro:machinery, intro:machinery_1, intro:machinery_2} to ensure an optimized yield. With the emergence and the recent practical successes of the Internet of Things, robotics and artificial intelligence \cite{intro:emergence}, the sector has observed a surge in smart farming solutions which has led the march toward the fourth agricultural revolution \cite{intro:agriculture_4}. Consequently, new fields such as digital agriculture \cite{intro:digital_agriculture} and precision agriculture \cite{intro:precision_agriculture} have become intensively researched, leading to an ever-increasing pace of innovation. Although the integration of artificial intelligence is making its way into digital agricultural applications such as crop planting \cite{intro:planting, intro:planting_1, intro:planting_2} and harvesting \cite{intro:harvesting, intro:harvesting_1, intro:harvesting_2}, it is still limited to mechanical tasks that mainly require environmental awareness \cite{intro:machinery}. A number of issues that are highly concerning for farmers, such as crop disease detection, plant-growth monitoring and need-based irrigation, have only recently started to gain interest in the machine learning community \cite{intro:machine_learning}. Despite the efforts to create automated systems to resolve such issues \cite{intro:efforts, intro:efforts_1}, only highly specialized systems have been created, which work solely with specific species or variants under constrained conditions \cite{intro:disease, intro:growth, intro:irrigation}. Furthermore, due to the lack of voluminous labeled data for each species and for each specific issue \cite{intro:machine_learning}, automated systems mainly rely on advanced visual feature engineering that requires a team of plant specialists, and only partially rely on machine learning to extract useful information from these features \cite{intro:feature_engineering, intro:feature_engineering_1, intro:feature_engineering_2}. In fact, the sheer diversity of plant species, variants, diseases, growth stages and growing conditions makes it unfeasible for manual feature engineering to cover every individual farmer's need. While some attempts at creating massive plant datasets have been successful \cite{intro:dataset, intro:dataset_1, intro:dataset_2}, their usefulness was limited by the species present in them and the initial task they were created for. For instance, a dataset containing species grown in a tropical biome for the purpose of species recognition has limited to no usefulness for farmers working in a grassland biome who want to establish the presence of a disease in certain plants.
\newline\indent
As a result, the EAGL--I system \cite{intro:EAGL-I} has recently been proposed to automatically generate a high number of labeled images in a short time (1 image per second) in an effort to circumvent the problem of the lack of data for specific needs. Consequently, two massive datasets were created with this system \cite{intro:EAGL-I_big_dataset}: one that contains 1.2 million images of indoor-grown crops and weeds common to Canadian prairies, and one that contains 540,000 images of plants imaged in farmland. Also, a publicly available dataset called ``Weed seedling images of species common to Manitoba, Canada" (WSISCMC) \cite{intro:WSISCMC}, which contains 40,000 images of 8 species that are very rarely represented in plants datasets, was created with the EAGL--I system. The purpose of the creation of the dataset was to demonstrate the capability of the EAGL--I system to rapidly generate large amounts of data that are suitable for machine learning applications and that can be used to solve specific digital agriculture problems, such as the ability to recognize several species of grasses which are responsible for the loss of hundreds of millions of dollars. However, in \cite{intro:EAGL-I}, the validity of the dataset was only tested on a binary classification problem consisting of differentiating between grasses and non-grasses without determining the species of the given plant itself. While the model that solves this problem gives an indication of how well the samples of the dataset are distributed to allow for grass differentiation, it does not provide enough information on how detailed the samples of the dataset are at a granular level to allow the identification of the species they belong to. Indeed, the higher the number of mutually exclusive classes, the more distinctive and detailed the spatial features of the plants need to be. Therefore, we chose to tackle the problem of plant species recognition on the WSISCMC dataset, leading to a solution that can help the early eradication of invasive species that are harmful to the growth of certain crops. Furthermore, this constitutes a step further in validating the dataset for machine learning applications, and by extension, the EAGL--I system for the creation of meaningful datasets. Thus, this work aims at maximizing the plant recognition accuracy on the WSISCMC dataset by creating a highly reliable and accurate network using the model development framework shown in Figure \ref{fig:plants_framework}, in order to provide insight on how to improve the data acquisition process to produce cleaner samples for future massive datasets.
\newline\indent
\begin{figure}[h!]
\centering
\includegraphics[scale=0.12]{plants_framework.jpg}
\caption{Block diagram for producing a reliable plant species classification model.}
\label{fig:plants_framework}
\end{figure}
To solve this problem, we decided to use a novel deep learning model that we recently developed, called 1-Dimensional Polynomial Neural Network (1DPNN) \cite{intro:1DPNN}, which was proven to encapsulate more information from its training set and to generalize better than a 1-Dimensional Convolutional Neural Network (1DCNN), in less time and with less memory. A major limitation of the 1DPNN model is that it can only process 1D signals and that there was no way to determine the polynomial degree of each of its layers upon solving a given problem. Therefore, we propose an extension of this model to 2D and 3D signals, such as images and videos respectively, and we generalize its denomination to NDPNN, where ND stands for 1D, 2D or 3D. We also develop a heuristic algorithm that makes use of a polynomial degree reduction formula \cite{intro:reduction} that we have recently discovered and that allows us to determine the smallest degree of each layer of a pre-trained NDPNN that preserves its performance on its test set, thus enabling it to use less memory and less computational power while maintaining the same performance on its test set. Moreover, since the WSISCMC dataset is composed of images with different sizes, we create a method that we call Variably Overlapping Time--Coherent Sliding Window (VOTCSW), which allows the transformation of images with variable sizes to a 3D representation with fixed size that is suitable for 3DCNNs and 3DPNNs. Furthermore, we redistribute the samples of the dataset to maximize the learning efficiency of any machine learning model that may use it. We also train various well-known architectures such as ResNetV2 \cite{intro:resnet}, InceptionV3 \cite{intro:inception} and Xception \cite{intro:xception}, and we evaluate the gain of using the VOTCSW method and the NDPNN degree reduction heuristic with respect to regular 2DCNN architectures.
\newline\indent
The contributions of this work are i) the extension of 1DPNNs to NDPNNs, which can now be used on 2D and 3D signals such as images and videos, ii) the development of a heuristic algorithm for the degree reduction of pre-trained NDPNNs, which creates lighter and faster NDPNNs with little to no compromise to their initial performance, iii) the formalization of the VOTCSW method, which circumvents the need to resize images from an image dataset that has variable sizes by transforming each image to a 3D representation of fixed size that is suitable for 3DCNNs and 3DPNNs and which improves their inference on the WSISCMC dataset, iv) the resampling of the WSISCMC dataset with respect to class distribution and size distribution in order to enhance the performance of any machine learning model trained on it, v) the creation of an NDPNN model development framework that makes use of the degree reduction heuristic and the VOTCSW method and allows the creation of the best fitting neural network architecture on the WSISCMC dataset, vi) the creation of a simple 3DPNN architecture that achieves a state-of-the-art 99.9\% accuracy on the WSISCMC dataset, and which outperforms highly complex neural network architectures such as ResNet50V2 and InceptionV3, in less time and with substantially fewer parameters, and vii) the determination of aberrant samples in the WSISCMC dataset which are not suitable for the single-plant species recognition task that this dataset was created for.
\newline\indent
The outline of this paper is as follows. Section \ref{chapter=plants:section=related work} discusses the most recent works in plant species recognition while Section \ref{chapter=plants:section=theoretical framework} introduces 1DPNNs and their extension to NDPNNs, develops the layer-wise degree reduction heuristic and formulates the theoretical foundation of the VOTCSW method. Section \ref{chapter=plants:section=experiments} presents the model development framework established to produce a highly accurate model for the WSISCMC dataset and discusses the results obtained while providing insights on how the methods developed in Section \ref{chapter=plants:section=theoretical framework} influenced them. Finally, Section \ref{chapter=plants:section=conclusion} presents a summary of what was achieved in this work, discusses the limitations of the methods and the models used, and proposes solutions to overcome these limitations.
\section{Related Work}\label{chapter=plants:section=related work}
Plant species recognition is one of the most important tasks in the application of machine learning to digital agriculture. In this context, behavior specific to a species will inform the identification of a plant's disease or the plant's need for resources such as water or light. The most common approach to identifying the species of a given plant is to analyze its leaves \cite{related:leaves}. In fact, many public datasets \cite{intro:dataset, related:dataset} are only composed of leaves scanned on a uniform background. The methods that are mostly used rely either on combining feature engineering with machine learning classifiers or on using deep learning models for both feature extraction and classification.
\newline\indent
Indeed, Purohit et al. \cite{related:feature_engineering} created morphological features based on the geometric shape of any given leaf and used these features to discriminate between 33 species of plants using different classifiers. They achieved a state-of-the-art $95.42\%$ accuracy on the Flavia dataset \cite{intro:dataset} and they demonstrated that their morphological features are superior to color features or texture features. However, they did not compare the efficiency of their features to ones that are fully determined using deep learning models. In contrast, Wang et al. \cite{related:leaf_deep_learning} created a novel multiscale convolutional neural network (CNN) with an attention mechanism that can filter out the global background information of the given leaf image and highlight its saliency, which allows it to consistently outperform classification models based on morphological feature extraction, as well as classification models based on well-known CNN architectures. They explain that CNNs are better at extracting high-level and low-level features without the need to perform image preprocessing, and that their model, which combines both these types of features in an attempt to discover estimable relationships, outperforms regular CNNs. However, the models that were developed were only trained on the ICL dataset \cite{related:dataset}, which only contains leaf images, and may therefore be hard to apply to images of entire plants.
\newline\indent
Mehdipour Ghazi et al. \cite{related:cnn_full_plant} considered training versatile architectures of CNNs on LifeCLEF 2015 \cite{intro:dataset_2}, which contains $100,000$ images of plant organs from $1000$ species that are mostly captured outdoors. They proposed a data augmentation approach that randomly extracts and scales a number of random square patches from any given image before applying a rotation. These patches and the original image are then resized to a fixed size, and the mean image is subtracted from them in order to keep the most relevant features. The resulting images are then fed to a CNN model which outputs a prediction for each image. The prediction for the original image is then determined by summing these predictions together. This aggregation, as well as fine-tuning pre-trained networks such as VGGNet \cite{related:vgg_net}, GoogLeNet \cite{related:google_net}, and AlexNet \cite{related:alex_net} with different hyperparameters controlling the number of weight updates, the batch size and the number of patches, allowed them to achieve state-of-the-art results on the dataset, notably by fusing VGGNet and GoogLeNet. The authors observe that, when training networks from scratch, simpler architectures are preferred to the kind of architectures they used in their work. Indeed, they could not train any network from scratch to produce satisfactory results, and they did not attempt to create simpler and more specialized architectures for the problem.
\newline\indent
The work presented in this paper differs from the related papers in two ways:
\begin{enumerate}
\item The WSISCMC dataset contains images of entire plants that trace different growth stages, such that a trained classifier may be able to integrate the temporal evolution of a species' organs in its inference, and may produce features richer than the ones that are only determined from separate organs such as leaves.
\item Concomitantly, the neural network architectures that we used are constructed in various stages of simplicity. Starting from a simple 2DCNN architecture, a different data representation suitable for a simple 3DCNN architecture is built, then both architectures are extended to NDPNNs. Finally, highly complex architectures such as InceptionV3 and ResNet50V2 are considered.
\end{enumerate}
\section{Theoretical Framework}\label{chapter=plants:section=theoretical framework}
In order to obtain the most accurate results on the plant species classification problem, we use a novel model called 1DPNN which we proposed for audio-signal-related problems. In this work, we generalize our model so that it can be applied to 2D and 3D signals such as images and videos, and we show that the equations governing its inference and its learning do not change with the dimension of the signal. Hence, we refer to the generalized model as N-Dimensional PNN (NDPNN) since it can theoretically be used to process signals of any dimension. We also develop a heuristic algorithm for reducing the polynomial degree of each layer of a pre-trained NDPNN in a way that preserves its performance on a given test set. We also create a new method that we call variably overlapping time--coherent sliding window (VOTCSW) for changing the representation of images from a 2-dimensional grid of pixels to a 3-dimensional grid of pixels moving through time, in order to solve the problem of the size variability of the images in the WSISCMC dataset considered in this work. This section discusses the inference, the learning and the degree reduction of NDPNNs and details the foundation of the VOTCSW method.
\subsection{N-Dimensional Polynomial Neural Networks and Polynomial Degree Reduction}\label{chapter=plants:sub=NDPNN}
1DPNNs are an extension of 1DCNNs such that each neuron creates a polynomial approximation of a kernel that is applied to its input. Given a network with $L$ layers, such that a layer $l\in[\![1,L]\!]$ contains $N_l$ neurons, a neuron $i\in[\![1,N_l]\!]$ in a layer $l$ produces an output $y_i^{(l)}$ from its previous layer's output $Y_{l-1}$. The neuron possesses a bias $b_i^{(l)}$, an activation function $f_i^{(l)}$, and $D_l$ weight vectors $W_{id}^{(l)}$ corresponding to every exponentiation $d$ of its previous layer's output up to a degree $D_l$. Eq. (\ref{Forward propagation equation}) below shows the output of a 1DPNN neuron.
\begin{flalign}\label{Forward propagation equation}
\forall l\in[\![1,L]\!],\forall i\in[\![1,N_{l}]\!],\
y_{i}^{(l)}=f_{i}^{(l)}\left(\sum_{d=1}^{D_{l}}W_{id}^{(l)}*Y_{l-1}^{d}+b_{i}^{(l)}\right)=f_{i}^{(l)}\left(x_i^{(l)}\right),&&
\end{flalign}
where $*$ is the convolution operator, $Y_{l-1}^{d}=\underbrace{Y_{l-1}\odot\cdots\odot Y_{l-1}}_{d\ times}$, and $\odot$ is the Hadamard product \cite{intro:1DPNN}.
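To make this concrete, the following minimal sketch (our own illustration, assuming a TensorFlow/Keras backend; it is not the reference implementation of \cite{intro:1DPNN}) implements Eq. (\ref{Forward propagation equation}) in the 2D case by applying one convolution per exponentiation degree of the previous layer's output:
\begin{verbatim}
# Sketch of an NDPNN layer (2D case) following Eq. (1):
# y = f(sum_d W_d * Y^d + b), one Conv2D per degree d.
import tensorflow as tf

class PNN2DLayer(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size, degree, activation="relu"):
        super().__init__()
        self.activation = tf.keras.activations.get(activation)
        # One weight set W_d per degree; the bias b is carried by the
        # degree-1 convolution only.
        self.convs = [
            tf.keras.layers.Conv2D(filters, kernel_size,
                                   padding="same", use_bias=(d == 0))
            for d in range(degree)
        ]

    def call(self, y_prev):
        # x = sum_d W_d * (Y_{l-1})^d + b, then the activation f.
        x = sum(conv(tf.math.pow(y_prev, d + 1))
                for d, conv in enumerate(self.convs))
        return self.activation(x)
\end{verbatim}
The same class, with \texttt{Conv1D} or \texttt{Conv3D} in place of \texttt{Conv2D}, yields the 1D and 3D variants, which is precisely the dimension-agnostic behavior discussed below.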
\newline\indent
Eq. (\ref{Forward propagation equation}) shows that the weight $W_{id}^{(l)}$ corresponding to $Y_{l-1}^d$ can be of any dimension as long as the convolution with $Y_{l-1}^d$ remains valid. For instance, if $Y_{l-1}^d$ is a list of 2D feature maps, $W_{id}^{(l)}$ has to be a list of 2D filter masks. Therefore, the equation of a 1DPNN neuron can be applied to images and videos as long as the dimensions of the weights are adjusted accordingly. Hence, the same equation governs the behavior of a 2DPNN, a 3DPNN and an NDPNN in general. This also applies to the gradient estimation of an NDPNN, so even the backpropagation remains unchanged. In addition, we propose a method to reduce the degree of each layer after an NDPNN is fully trained on a given dataset. It makes use of a fast polynomial degree reduction formula that we recently proposed \cite{intro:reduction}, which can generate a polynomial of low degree that behaves the same as a given polynomial of higher degree on a symmetric interval. It is a post-processing method that can compress a fully trained NDPNN, thus making it faster and lighter, without sacrificing its performance on the dataset it was trained on. The memory and computational efficiency gains mainly depend on the topology of the NDPNN and on the performance loss tolerance. Although the polynomial degree reduction is performed on a symmetric interval and an NDPNN can use unbounded activation functions in general, we exploit the fact that, after training, the NDPNN weights no longer change, so the input of each layer is bounded in a certain interval when the NDPNN is fed with samples from the training set. The bounding interval may not be symmetric, but every interval is contained within a symmetric one, which allows us to properly use the polynomial degree reduction formula. Eq. (\ref{Forward propagation equation}) does not clearly show how the polynomial degree reduction can be achieved, which is why we consider the case of 1DPNNs to demonstrate the principle.
\newline\indent
In the 1D case, the output of a given layer $l$ is $Y_l$ which is a list containing the outputs of every neuron in that layer. The output of a neuron $i$ in that layer, $y_i^{(l)}=f_i^{(l)}\left(x_i^{(l)}\right)$, is a vector of size $M_l$. Similarly, the weights of that neuron with respect to the exponentiation degree $d$, $W_{id}^{(l)}$, is a list of $N_{l-1}$ vectors of size $K_l$. Therefore, expanding $x_i^{(l)}$ from Eq. (\ref{Forward propagation equation}) to compute each of its vector element independently produces:
\begin{flalign*}
\begin{split}
\forall m\in[\![0,M_l-1]\!],x_i^{(l)}(m)&=\sum_{d=1}^{D_l}\sum_{j=1}^{N_{l-1}}\sum_{k=0}^{K_l-1}w_{ijd}^{(l)}(k)\left(y_j^{(l-1)}(m+k)\right)^d+b_{i}^{(l)}\\
&=\sum_{j=1}^{N_{l-1}}\sum_{k=0}^{K_l-1}\left(\sum_{d=1}^{D_l}w_{ijd}^{(l)}(k)\left(y_j^{(l-1)}(m+k)\right)^d+\dfrac{b_i^{(l)}}{N_{l-1}K_l}\right)\\
&=\sum_{j=1}^{N_{l-1}}\sum_{k=0}^{K_l-1}P_{ijk}^{(l)}\left(y_j^{(l-1)}(m+k)\right),\\
\end{split}&&
\end{flalign*}
such that
\begin{flalign*}
P_{ijk}^{(l)}(X)=\sum_{d=1}^{D_l}w_{ijd}^{(l)}(k)X^d+\dfrac{b_i^{(l)}}{N_{l-1}K_l},&&
\end{flalign*}
and $X$ is an indeterminate. This shows that the output of a neuron $i$ in a layer $l$ is the result of the summation of $N_{l-1}K_l$ distinct polynomials, and that the layer consists of $N_lN_{l-1}K_l$ distinct polynomials. In the general case of an NDPNN, the number of polynomials in a layer $l$ would be $N_lN_{l-1}$ multiplied by the receptive field of that layer. Algorithm \ref{alg:degree_reduction} describes the process of compressing an NDPNN by means of layer-wise polynomial degree reduction. $F$ represents the trained model which takes as input samples from a given training set $T$ and processes them into an output. Note that given a model $F$, one can access the output of its layer $l$ by using $F_l$. The algorithm needs a performance evaluation function $\epsilon$ which takes a model, a dataset and its labels $T_{true}$ as input and needs a reduction tolerance $\epsilon_{0}$ which stops the algorithm when the performance evaluation of the reduced model is below $\epsilon_{0}$. $\epsilon$ can be any performance evaluation metric which outputs a higher score when the performance of the model is better, such as the accuracy. The algorithm starts by initializing the reduction of each layer $R$ to $0$ and by determining the smallest symmetric interval $[-A_l,A_l]$ where the values of the input of each layer $l$ are bounded. Then, it goes through each layer $l'$, it creates a copy $\tilde{F}$ of the initial model $F$, and reduces every layer's degree $D_l$ of the copy $\tilde{F}$ by the last degree reduced $R$ on the symmetric interval $[-A_l,A_l]$, except for layer $l'$ which has its degree reduced by $R+1$ on the symmetric interval $[-A_{l'},A_{l'}]$ as expressed by the instruction $\tilde{D}_l\gets D_l-\left(R_{l}+\mathds{1}_{\{l'\}}(l)\right)$ where $\mathds{1}$ is the indicator function. The weights of each layer of the initial model copy $\tilde{F}$ are therefore replaced during this process by reduced weights calculated via the polynomial degree reduction formula. If a layer $l'$ has been reduced to the degree $1$, the algorithm does not attempt to reduce it further and ignores it by assigning a nil performance $P_{l'}=0$. It then evaluates the copy $\tilde{F}$ using the performance evaluation function $\epsilon$ and stores the score $P_{l'}$ in a list. After performing these steps for every layer $l'$ in the model, the algorithm determines the layer $\tilde{l}$ whose reduction impacted the performance of the model the least and increases its reduction $R_{\tilde{l}}$ by 1. These steps are repeated until all the layers are reduced to a degree of $1$ or until the best performance of the current reduction is below $\epsilon_{0}$. Following that, the algorithm creates a final copy $\tilde{F}$ of the model $F$ and reduces the degree of each layer $l$ according to the reduction limit $R_l$ determined in the previous steps. The algorithm then returns the most reduced model within the limit of $\epsilon_{0}$.
\begin{figure}[h!]
\resizebox{0.8\linewidth}{!}
{
\begin{algorithm}[H]
\caption{NDPNN layer-wise polynomial degree reduction}\label{alg:degree_reduction}
\KwData
{\newline
$\bullet$ $L, N_l, D_l,W_{id}^{(l)}, b_i^{(l)}, \forall l\in[\![1,L]\!],\forall (i,d)\in[\![1,N_l]\!]\times[\![1,D_l]\!]$\newline
$\bullet$ Trained model $F$\newline
$\bullet$ Training set $T$ and training labels $T_{true}$\newline
$\bullet$ Evaluation function $\epsilon(F, T, T_{true})$\newline
$\bullet$ Reduction tolerance $\epsilon_{0}$\newline
}
\KwResult
{
\newline
$\bullet$ $\tilde{D}_l,\tilde{W}_{id}^{(l)}, \tilde{b}_i^{(l)}, \forall l\in[\![1,L]\!],\forall (i,d)\in[\![1,N_l]\!]\times[\![1,\tilde{D}_l]\!]$\newline
$\bullet$ Reduced model $\tilde{F}$
}
\SetAlgoLined\DontPrintSemicolon
$A\gets(0,...,0)_L$ \;
$R\gets(0,...,0)_L$ \;
$P\gets(\epsilon_{0},...,\epsilon_{0})_L$ \;
$A_1\gets\max\lvert T\rvert$\;
\For{$l\in[\![2,L]\!]$}
{
$A_l\gets\max\left(\left\lvert F_{l-1}(T)\right\rvert\right)$\;
}
$\tilde{l}\gets 0$\;
\While{$P_{\tilde{l}}\geq\epsilon_{0} \land \underset{l\in[\![1,L]\!]}{\max} (D_l-R_l)>1$}
{
\For{$l'\in[\![1,L]\!]$}
{
$\tilde{F}\gets F$\;
\For{$l\in[\![1,L]\!]$}
{
$\tilde{D}_l\gets D_l-\left(R_{l}+\mathds{1}_{\{l'\}}(l)\right)$\;
\eIf{$\tilde{D}_l\geq1$}
{
\For{$i\in[\![1,N_l]\!]$}
{
$\left(\left(\tilde{W}_{id}^{(l)}\right)_{[\![1,\tilde{D}_l]\!]}, \tilde{b}_i^{(l)}\right)\gets$reduce\_degree\_to\_n$\left(\left(W_{id}^{(l)}\right)_{[\![1,D_l]\!]}, b_i^{(l)}, D_l, \tilde{D}_l,A_l\right)$\;
$\tilde{F}\gets$replace\_neuron\_weights$\left(\tilde{F}, \tilde{b}_i^{(l)}, \left(\tilde{W}_{id}^{(l)}\right)_{[\![1,\tilde{D}_l]\!]},\tilde{D}_l,i,l\right)$\;
}
}
{
$P_{l'}\gets0$\;
Break the innermost For loop\;
}
}
$P_{l'}\gets \epsilon(\tilde{F},T, T_{true}) * \left(1-\mathds{1}_{\{0\}}(P_{l'})\right)$ \;
}
$\tilde{l}\gets\underset{l\in[\![1,L]\!]}{argmax}P$\;
\If{$P_{\tilde{l}}\geq\epsilon_{0}$}
{
$R_{\tilde{l}}\gets R_{\tilde{l}} + 1$\;
}
}
$\tilde{F}\gets F$\;
\For{$l\in[\![1,L]\!]$}
{
$\tilde{D}_l\gets D_l-R_{l}$\;
\For{$i\in[\![1,N_l]\!]$}
{
$\left(\left(\tilde{W}_{id}^{(l)}\right)_{[\![1,\tilde{D}_l]\!]}, \tilde{b}_i^{(l)}\right)\gets$reduce\_degree\_to\_n$\left(\left(W_{id}^{(l)}\right)_{[\![1,D_l]\!]}, b_i^{(l)}, D_l, \tilde{D}_l,A_l\right)$\;
$\tilde{F}\gets$replace\_neuron\_weights$\left(\tilde{F}, \tilde{b}_i^{(l)}, \left(\tilde{W}_{id}^{(l)}\right)_{[\![1,\tilde{D}_l]\!]},\tilde{D}_l,i,l\right)$\;
}
}
\end{algorithm}
}
\end{figure}
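For clarity, the greedy structure of Algorithm~\ref{alg:degree_reduction} can be summarized by the following high-level Python sketch. The callables \texttt{reduce\_layer} and \texttt{evaluate}, passed as arguments, respectively stand for the polynomial degree reduction formula of \cite{intro:reduction} applied to one layer and for the performance function $\epsilon$; both are assumed to be supplied by the caller:
\begin{verbatim}
# High-level sketch of the layer-wise degree reduction heuristic.
# `reduce_layer(model, l, target_degree)` and `evaluate(model)` are
# assumed callables standing for the reduction formula and
# eps(F, T, T_true), respectively.
import copy

def layerwise_degree_reduction(model, degrees, reduce_layer,
                               evaluate, eps0):
    L = len(degrees)
    reduction = [0] * L
    while any(degrees[l] - reduction[l] > 1 for l in range(L)):
        scores = []
        for l_prime in range(L):      # try reducing each layer by one
            trial, feasible = copy.deepcopy(model), True
            for l in range(L):
                target = degrees[l] - reduction[l] - (l == l_prime)
                if target < 1:        # cannot go below degree 1
                    feasible = False
                    break
                reduce_layer(trial, l, target)
            scores.append(evaluate(trial) if feasible else 0.0)
        best = max(range(L), key=lambda l: scores[l])
        if scores[best] < eps0:       # tolerance reached: stop
            break
        reduction[best] += 1          # keep the least harmful reduction
    final = copy.deepcopy(model)
    for l in range(L):
        reduce_layer(final, l, degrees[l] - reduction[l])
    return final
\end{verbatim}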
\subsection{Variably Overlapping Time--Coherent Sliding Window}\label{chapter=plants:sub=VOTCSW}
The WSISCMC plant species classification dataset used in this work contains images of varying sizes, which are not suitable for neural networks that only accept an input with a predetermined size. As a result, there is a need to transform these images into a representation with a fixed size. In most cases, shrinking the images to the smallest size present in the dataset is enough to train a network to produce very accurate results. However, image resizing comes at the cost of either losing important details that may be detrimental to the performance of the network when shrinking, or adding synthetic pixels when padding or magnifying. Thus, we created the Variably Overlapping Time--Coherent Sliding Window (VOTCSW) technique, which transforms each image, regardless of its size, to a 3-dimensional representation with fixed size $(h,w,M)$. The VOTCSW allows this representation to be interpreted as a video of size $(h,w)$ containing $M$ frames by ensuring that two consecutive frames are spatially correlated, hence the ``Time--Coherent" term. Therefore, the 3-dimensional representation can be fed into 3DCNNs and 3DPNNs. Furthermore, the method is a safe alternative to resizing as it neither adds nor removes any pixel from the original images, or at least, it minimizes the need to do so under certain conditions that will be discussed below. The VOTCSW is based on the sliding window technique (hence the ``Sliding Window" term) which is a powerful signal processing tool that is used to decompose a signal containing a high number of samples into small chunks called windows containing a smaller number of samples that can be processed faster. Consecutive windows overlap with a certain ratio $\alpha\in[0,1[$ in order to ensure a correlation between them. The classical use of this technique consists of determining a window size that ensures enough representative samples to be present in a single window and an overlap that allows better processing performance. However, there is no consideration as to how many windows are extracted for each different signal length. In contrast, our proposed VOTCSW method aims to extract exactly $M$ windows of a fixed size from any signal regardless of its length. This is achieved by calculating an overlap for each signal length, hence the ``Variably Overlapping" term.
\newline\indent
Given an image of size $(H,W)$ and a desired 3-dimensional representation $(h,w,M)$, we define the following relationships:
\begin{flalign}\label{eq:H&W}
\begin{cases}
H &= h + N_h(1-\alpha)h\\
W &= w + N_w(1-\alpha)w\\
\end{cases},&&
\end{flalign}
where $\alpha\in[0,1[$ is the window overlap, $N_h$ is the number of windows that overlap with their predecessors and that are needed to cover the height of the image, and $N_w$ is the number of windows that overlap with their predecessors and that are needed to cover the width of the image. Following that, the total number of windows $M$ needed to cover the image in its entirety is
\begin{flalign}\label{eq:M}
M=(1+N_h)(1+N_w)=\left(\dfrac{H-h}{(1-\alpha)h}+1\right)\left(\dfrac{W-w}{(1-\alpha)w}+1\right).&&
\end{flalign}
Eq. (\ref{eq:M}) establishes a relationship between the overlap $\alpha$, the size of the image $(H,W)$ which is fixed, the size of the sliding window $(h,w)$ which is fixed and the total number of windows $M$ which is also fixed.
\begin{proposition}\label{prop:alpha}
The value of $\alpha$ with respect to $H$, $h$, $W$, $w$ and $M$ is
\begin{flalign}\label{eq:alpha}
\alpha=1-\dfrac{((H-h)w+(W-w)h)+\sqrt{((H-h)w+(W-w)h)^2+4hw(H-h)(W-w)(M-1)}}{2hw(M-1)}.&&
\end{flalign}
\end{proposition}
\begin{proof}
From Eq.(\ref{eq:M}), we have
\begin{flalign*}
\begin{split}
&M=\left(\dfrac{H-h}{(1-\alpha)h}+1\right)\left(\dfrac{W-w}{(1-\alpha)w}+1\right)\\
&\iff hwM(1-\alpha)^2=\left(H-h+h(1-\alpha)\right)\left(W-w+w(1-\alpha)\right)\\
&\iff hwM(1-\alpha)^2=(H-h)(W-w)+((H-h)w+(W-w)h)(1-\alpha)+hw(1-\alpha)^2\\
&\iff hw(M-1)(1-\alpha)^2-((H-h)w+(W-w)h)(1-\alpha)-(H-h)(W-w)=0.
\end{split} &&
\end{flalign*}
This is a second degree equation with $(1-\alpha)$ as unknown. Therefore, the discriminant is
\begin{flalign*}
\Delta=((H-h)w+(W-w)h)^2+4hw(H-h)(W-w)(M-1).&&
\end{flalign*}
Since $M$ is the number of windows, it is necessarily greater than 1. Moreover, $H>h$ and $W>w$ because the window size is always smaller than the image size. Consequently $\Delta>0$ and we have two solutions expressed as such:
\begin{flalign*}
1-\alpha_{1,2}=\dfrac{((H-h)w+(W-w)h)\pm\sqrt{\Delta}}{2hw(M-1)}.&&
\end{flalign*}
However, $\sqrt{\Delta}> ((H-h)w+(W-w)h)$, so the solution with the minus sign would make $1-\alpha$ negative; since $\alpha\in[0,1[$ by definition, $1-\alpha>0$ and only the following valid solution remains:
\begin{flalign*}
\alpha=1-\dfrac{((H-h)w+(W-w)h)+\sqrt{\Delta}}{2hw(M-1)}.&&
\end{flalign*}
\end{proof}
Eq. (\ref{eq:alpha}) was determined from the fact that $\alpha<1$. However, $\alpha$ needs to be positive as well. Therefore, there is a need to determine a condition on the choice of $h$, $w$ and $M$ with respect to $H$ and $W$ to ensure the positivity of $\alpha$.
\begin{proposition}\label{prop:alpha_positive}
\begin{flalign*}
\alpha\geq0\iff hwM\geq HW.&&
\end{flalign*}
\end{proposition}
\begin{proof}
We suppose that $\alpha\geq0$.
\begin{flalign*}
\begin{split}
&\alpha\geq0\iff 1-\alpha\leq1\iff\\
&\dfrac{((H-h)w+(W-w)h)+\sqrt{((H-h)w+(W-w)h)^2+4hw(H-h)(W-w)(M-1)}}{2hw(M-1)}\leq1\\
&\iff\\
&\sqrt{((H-h)w+(W-w)h)^2+4hw(H-h)(W-w)(M-1)}\\
&\leq 2hw(M-1) - ((H-h)w+(W-w)h)\\
&\iff\\
&((H-h)w+(W-w)h)^2+4hw(H-h)(W-w)(M-1)\\
&\leq 4h^2w^2(M-1)^2 + ((H-h)w+(W-w)h)^2 - 4hw((H-h)w+(W-w)h)(M-1)\\
&\iff 4hw(M-1)\left((H-h)(W-w)+(H-h)w+(W-w)h-hw(M-1)\right)\leq0\\
&\underset{M>1}{\iff}\left((H-h)(W-w)+(H-h)w+(W-w)h-hw(M-1)\right)\leq0\\
&\iff HW - Hw - hW +hw + Hw - hw + hW - hw - hwM + hw \leq0\\
&\iff HW - hwM \leq0\\
&\iff hwM \geq HW
\end{split}&&
\end{flalign*}
\end{proof}
Proposition \ref{prop:alpha_positive} implies that $h$, $w$ and $M$ cannot be chosen arbitrarily. Furthermore, since $h$, $w$ and $M$ need to be fixed before transforming the images of the dataset, they have to verify this condition for every image of size $(H,W)$. As a result, we need to determine tighter conditions to be able to determine $h$, $w$ and $M$ consistently. Let $\beta$ be the aspect ratio of an image of size $(H,W)$ such that $W=\beta H$, and let $\gamma$ be the aspect ratio of a window of size $(h,w)$ such that $w=\gamma h$. We then derive from Eq. (\ref{eq:H&W})
\begin{flalign*}
W = w + (1-\alpha)wN_w\iff\beta H = \gamma h +(1-\alpha)\beta hN_w\iff N_w = \dfrac{\beta H-\gamma h}{(1-\alpha)\gamma h}.&&
\end{flalign*}
And since from Eq. (\ref{eq:H&W}), we have $N_h=\dfrac{H-h}{(1-\alpha)h}$, we deduce by dividing $N_w$ by $N_h$ that $N_w=\dfrac{\beta H-\gamma h}{\gamma(H-h)}N_h$ and thus
\begin{flalign}\label{eq:M_N_h}
M=(N_h+1)\left(\dfrac{\beta H-\gamma h}{\gamma(H-h)}N_h+1\right).&&
\end{flalign}
Since $M$ is an integer constant, and $N_h$ is an integer constant, $\dfrac{\beta H-\gamma h}{\gamma(H-h)}N_h$ should also be an integer constant. As a result, we can define a positive constant $p$ such that $p= \dfrac{\beta H-\gamma h}{\gamma(H-h)}$. The only variables in $p$ are $\beta$ and $H$ since they depend on the image being processed. $\gamma$ and $h$ are not supposed to change with the size of the image being processed. Consequently, we can write
\begin{flalign}\label{eq:p}
p = \dfrac{\beta H-\gamma h}{\gamma(H-h)}\iff \gamma(H-h)p+\gamma h=\beta H\iff H(\beta-\gamma p)=\gamma h(1-p).&&
\end{flalign}
This means that when $p\neq1$ and $\beta\neq\gamma p$, we have $H=\dfrac{\gamma h (1-p)}{\beta-\gamma p}$. This implies that, in order for $M$ to be a valid integer, every image height has to be resized to $H$, which contradicts the purpose of the VOTCSW. Therefore, we consider the case where $\beta=\gamma p$. We can deduce from Eq. (\ref{eq:p}) that, under this condition, $p$ will be equal to $1$ and $\beta=\gamma$. As a result, the aspect ratio of each image should be equal to that of the sliding window for $M$ to be a valid integer, which is a simpler condition than the previous one. In the following, we will only consider the case where $\beta=\gamma$ since the dataset used in this work contains images that have the same aspect ratio. When $\beta=\gamma$, $M=(N_h+1)^2$ which means that $M$ should be a square number. This is yet another condition on how to choose $M$ and this narrows down the possibilities even further. Under this assumption, the calculation of $\alpha$ can be simplified.
\begin{proposition}\label{prop:alpha_simple}
When $\beta=\gamma$, $\alpha$ only depends on $M$, $H$ and $h$ and its value is
\begin{flalign}\label{eq:alpha_simple}
\alpha=\dfrac{\sqrt{M}h-H}{h(\sqrt{M}-1)}.&&
\end{flalign}
\end{proposition}
\begin{proof}
By combining Eq. (\ref{eq:H&W}) and Eq. (\ref{eq:M_N_h}), we have
\begin{flalign*}
\begin{split}
M = (N_h+1)^2=\left(\dfrac{H-h}{(1-\alpha)h}+1\right)^2&\iff(1-\alpha)^2h^2M=(H-\alpha h)^2\\
&\iff(1-\alpha)h\sqrt{M}=H-\alpha h\\
&\iff\alpha=\dfrac{\sqrt{M}h-H}{h(\sqrt{M}-1)}.
\end{split}&&
\end{flalign*}
The positivity of $\alpha$ is verified when $h\geq \dfrac{H}{\sqrt{M}}$.
\end{proof}
When extracting the windows from an image, we should prevent the overlap $\alpha$ from reaching its extrema in order to obtain consistent windows. Therefore, we impose on $\alpha$ two limits $\alpha_{min}$ and $\alpha_{max}$ such that $0\leq\alpha_{min}\leq\alpha\leq\alpha_{max}<1$. Given $H_{max}$, the height of the biggest image in the dataset, and $H_{min}$, the height of the smallest image in the dataset, we can formulate a condition on $h$.
\begin{proposition}\label{prop:h}
The height $h$ of the sliding window has to verify the following condition:
\begin{flalign*}
\dfrac{H_{max}}{\sqrt{M}-\alpha_{min}(\sqrt{M}-1)}\leq h\leq \dfrac{H_{min}}{\sqrt{M}-\alpha_{max}(\sqrt{M}-1)}.&&
\end{flalign*}
\end{proposition}
\begin{proof}
We know that $H_{min}\leq H\leq H_{max}$. Consequently, by using Eq. (\ref{eq:alpha_simple}) we have
\begin{flalign}\label{eq:alpha_min_max}
\alpha_{min}\leq\dfrac{\sqrt{M}h-H_{max}}{h(\sqrt{M}-1)}\leq \alpha\leq\dfrac{\sqrt{M}h-H_{min}}{h(\sqrt{M}-1)}\leq\alpha_{max}.&&
\end{flalign}
Therefore, by extracting $h$ from Eq. (\ref{eq:alpha_min_max}) we obtain
\begin{flalign*}
\dfrac{H_{max}}{\sqrt{M}-\alpha_{min}(\sqrt{M}-1)}\leq h\leq \dfrac{H_{min}}{\sqrt{M}-\alpha_{max}(\sqrt{M}-1)}.&&
\end{flalign*}
\end{proof}
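As a hypothetical numerical illustration of Proposition~\ref{prop:alpha_simple} and Proposition~\ref{prop:h} (the values below are arbitrary and are not taken from the WSISCMC dataset):
\begin{verbatim}
# Hypothetical illustration of Eq. (8) and of the bounds on h.
import math

H_min, H_max = 250, 500        # smallest/largest image heights
M = 16                         # number of windows (a square number)
a_min, a_max = 0.2, 0.8        # allowed overlap range

r = math.sqrt(M)
h_low = H_max / (r - a_min * (r - 1))    # ~147.06
h_high = H_min / (r - a_max * (r - 1))   # 156.25
print(f"valid window heights: {h_low:.2f} <= h <= {h_high:.2f}")

h = 150                        # any valid choice in that range
H = 300                        # height of one image to decompose
alpha = (r * h - H) / (h * (r - 1))      # Eq. (8), ~0.667
print(f"overlap for this image: alpha = {alpha:.3f}")
\end{verbatim}
Note that these example parameters also satisfy the conditions derived in Theorem~\ref{theorem:condition on the parameters} below.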
Although a condition on the choice of $h$ is important, the parameter that will most likely be chosen first when using the VOTCSW technique is $M$. However, $\alpha_{min}$ and $\alpha_{max}$ are as important as $M$ because they specify the maximum and minimum amount of correlation between two consecutive windows. Hence, a condition on their choice is also important.
\begin{theorem}\label{theorem:condition on the parameters}
The parameters $M$, $\alpha_{min}$ and $\alpha_{max}$ can be determined in 6 different ways. For each way, there are conditions that need to be verified in order to extract the windows correctly.
\newline
$\bullet$ When determining $M$ then $\alpha_{min}$ then $\alpha_{max}$, the following conditions apply:
\begin{flalign*}
\begin{cases}
\sqrt{M}&\geq\dfrac{H_{max}}{H_{min}}\\
\alpha_{min}&\leq \dfrac{H_{max}}{H_{min}}+\left(1-\dfrac{H_{max}}{H_{min}}\right)\dfrac{\sqrt{M}}{\sqrt{M}-1}\\
\alpha_{max}&\geq\left(1-\dfrac{H_{min}}{H_{max}}\right)\dfrac{\sqrt{M}}{\sqrt{M}-1}+\dfrac{H_{min}}{H_{max}}\alpha_{min}\\
\end{cases}&&
\end{flalign*}
$\bullet$ When determining $M$ then $\alpha_{max}$ then $\alpha_{min}$, the following conditions apply:
\begin{flalign*}
\begin{cases}
\sqrt{M}&\geq\dfrac{H_{max}}{H_{min}}\\
\alpha_{max}&\geq\left(1-\dfrac{H_{min}}{H_{max}}\right)\dfrac{\sqrt{M}}{\sqrt{M}-1}\\
\alpha_{min}&\leq \dfrac{H_{max}}{H_{min}}\alpha_{max}+\left(1-\dfrac{H_{max}}{H_{min}}\right)\dfrac{\sqrt{M}}{\sqrt{M}-1}\\
\end{cases}&&
\end{flalign*}
$\bullet$ When determining $\alpha_{min}$ then $M$ then $\alpha_{max}$, the following conditions apply:
\begin{flalign}\label{eq:alpha_min_m}
\begin{cases}
\sqrt{M}&\geq\dfrac{H_{max}-H_{min}\alpha_{min}}{(1-\alpha_{min})H_{min}}\\
\alpha_{max}&\geq\left(1-\dfrac{H_{min}}{H_{max}}\right)\dfrac{\sqrt{M}}{\sqrt{M}-1}+\dfrac{H_{min}}{H_{max}}\alpha_{min}\\
\end{cases}&&
\end{flalign}
$\bullet$ When determining $\alpha_{min}$, then $\alpha_{max}$ then $M$, the following conditions apply:
\begin{flalign}\label{eq:alpha_min_alpha_max}
\begin{cases}
\alpha_{max}&\geq1-(1-\alpha_{min})\dfrac{H_{min}}{H_{max}}\\
\sqrt{M}&\geq\dfrac{H_{max}\alpha_{max}-H_{min}\alpha_{min}}{H_{min}(1-\alpha_{min})-H_{max}(1-\alpha_{max})}\\
\end{cases}&&
\end{flalign}
$\bullet$ When determining $\alpha_{max}$ then $M$ then $\alpha_{min}$, the following conditions apply:
\begin{flalign*}
\begin{cases}
\alpha_{max}&\geq1-\dfrac{H_{min}}{H_{max}}\\
\sqrt{M}&\geq\dfrac{H_{max}\alpha_{max}}{H_{min}-(1-\alpha_{max})H_{max}}\\
\alpha_{min}&\leq \dfrac{H_{max}}{H_{min}}\alpha_{max}+\left(1-\dfrac{H_{max}}{H_{min}}\right)\dfrac{\sqrt{M}}{\sqrt{M}-1}\\
\end{cases}&&
\end{flalign*}
$\bullet$ When determining $\alpha_{max}$ then $\alpha_{min}$ then $M$, the following conditions apply:
\begin{flalign*}
\begin{cases}
\alpha_{max}&\geq1-\dfrac{H_{min}}{H_{max}}\\
\alpha_{min}&\leq 1-\dfrac{H_{max}}{H_{min}}(1-\alpha_{max})\\
\sqrt{M}&\geq\dfrac{H_{max}\alpha_{max}-H_{min}\alpha_{min}}{H_{min}(1-\alpha_{min})-H_{max}(1-\alpha_{max})}\\
\end{cases}&&
\end{flalign*}
\end{theorem}
\begin{proof}
The proof of this theorem is based solely on the result of Proposition \ref{prop:h}. The condition for this Proposition to be valid is
\begin{flalign*}
\dfrac{H_{max}}{\sqrt{M}-\alpha_{min}(\sqrt{M}-1)}\leq \dfrac{H_{min}}{\sqrt{M}-\alpha_{max}(\sqrt{M}-1)}&&.
\end{flalign*}
Therefore we get
\begin{flalign}\label{eq:root_eq_theorem}
H_{max}\left(\sqrt{M}-\alpha_{max}(\sqrt{M}-1)\right)\leq H_{min}\left(\sqrt{M}-\alpha_{min}(\sqrt{M}-1)\right).&&
\end{flalign}
This equation establishes a relationship between $M$, $\alpha_{min}$ and $\alpha_{max}$ and the conditions stated in the theorem are all derived from it. We will only prove the case where $\alpha_{max}$ is determined first, then $\alpha_{min}$ then $M$ because it illustrates how the 5 other cases are proved. We begin by isolating $\sqrt{M}$ from Eq. (\ref{eq:root_eq_theorem}) as such:
\begin{flalign*}
\sqrt{M}\left(H_{max}(1-\alpha_{max})-H_{min}(1-\alpha_{min})\right)\leq H_{min}\alpha_{min}-H_{max}\alpha_{max}.&&
\end{flalign*}
The right term of the inequality is negative since $H_{min}\leq H_{max}$ and $\alpha_{min}\leq\alpha_{max}$ by definition. Consequently, if the term $\left(H_{max}(1-\alpha_{max})-H_{min}(1-\alpha_{min})\right)$ was positive, it would result in $\sqrt{M}\leq0$ which is false by definition. Therefore, it must be negative for $M$ to exist. As a result, we obtain the following condition:
\begin{flalign*}
\sqrt{M}&\geq\dfrac{H_{max}\alpha_{max}-H_{min}\alpha_{min}}{H_{min}(1-\alpha_{min})-H_{max}(1-\alpha_{max})}.&&
\end{flalign*}
The fact that $\left(H_{max}(1-\alpha_{max})-H_{min}(1-\alpha_{min})\right)\leq0$ leads to
\begin{flalign*}
\alpha_{min}\leq 1-\dfrac{H_{max}}{H_{min}}(1-\alpha_{max}).&&
\end{flalign*}
However, $\alpha_{min}\geq0$ by definition, so the right term of the inequality has to be positive for $\alpha_{min}$ to exist. Thus, we obtain the following condition:
\begin{flalign*}
\alpha_{max}\geq1-\dfrac{H_{min}}{H_{max}}.&&
\end{flalign*}
The other cases are proved by using the same reasoning.
\end{proof}
Theorem \ref{theorem:condition on the parameters} shows that in 2 particular cases described by Eqs. (\ref{eq:alpha_min_m}) and (\ref{eq:alpha_min_alpha_max}), when $\alpha_{min}$ is determined first, there is no constraint on its value other than that it should be positive and lower than 1. These 2 cases should be preferred over the other more constrained ones when choosing the values of $\alpha_{min}$, $\alpha_{max}$ and $M$. The theorem also ensures the choice of well-defined parameters, given $H_{min}$ and $H_{max}$ only. Nevertheless, in some datasets, the difference between $H_{min}$ and $H_{max}$ is considerable and may lead to the choice of a high number of windows $M$, a maximum overlap $\alpha_{max}$ close to 1, or a minimum overlap $\alpha_{min}$ close to 0. Therefore, one can also arbitrarily choose the parameters that are deemed appropriate and then choose a maximum height $\tilde{H}_{max}$ and a minimum height $\tilde{H}_{min}$ such that:
\begin{flalign}\label{eq:H_max/H_min}
\dfrac{\tilde{H}_{max}}{\tilde{H}_{min}}\leq \dfrac{\sqrt{M}-\alpha_{min}(\sqrt{M}-1)}{\sqrt{M}-\alpha_{max}(\sqrt{M}-1)}.&&
\end{flalign}
After performing this choice, each image in the dataset whose height exceeds $\tilde{H}_{max}$ should be shrunk to $\tilde{H}_{max}$, and each image whose height is less than $\tilde{H}_{min}$ should be padded or magnified to $\tilde{H}_{min}$. Although this defeats the purpose of the VOTCSW method, it is still better than resizing all the images in the dataset since the images whose heights are within $[\tilde{H}_{min},\tilde{H}_{max}]$ will remain the same. Using Proposition \ref{prop:alpha_simple}, Proposition \ref{prop:h} and Theorem \ref{theorem:condition on the parameters} enables any image whose height is between the limits $H_{min}$ and $H_{max}$ and whose aspect ratio $\beta$ is the same as that of the sliding window $\gamma$ to be transformed into a fixed 3-dimensional representation $(h,\gamma h,M)$. However, the order of the windows in the 3-dimensional sequence should not be arbitrary, as it should ensure a correlation between every two consecutive windows. The VOTCSW method is based on the sliding window technique which is widely used on 1-dimensional signals because the inter-window correlation is always ensured. However, this is no longer guaranteed for higher dimensional signals such as images, as shown in Figure \ref{fig:sliding_window}, which illustrates the sliding window technique on an image represented by a rectangle, where the light blue color represents a window and the dark blue color shows the overlap between two windows. As represented by the arrows, the window slides from left to right and returns to its initial position when it reaches the boundaries of the image. It then slides one step downward and continues sliding from left to right until it covers the whole image. This sliding pattern does not ensure that every two consecutive windows are correlated, and this can be noticed in the aforementioned figure where window $n$ and window $n+1$ are totally separated.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{sliding_window.jpg}
\caption{Sliding windows on a rectangle representing an image. The arrows represent the sliding pattern and the dark blue color shows the overlap between two windows.}
\label{fig:sliding_window}
\end{figure}
Nevertheless, this usually does not matter in machine learning applications where each window can be treated independently and the result is determined by aggregating the results obtained on each individual window. However, the VOTCSW method consists of creating a causal 3-dimensional representation that is analogous to a video and which can be processed as a video, in the sense that there is time coherence between two consecutive windows, meaning that one necessarily appears before the other and is spatially correlated with it. To ensure this, we define, given a matrix $I$ of size $H\times W$ representing an image, the following sequences describing a time--coherent left-to-right, top-to-bottom sliding pattern:
\begin{flalign}\label{eq:sliding_pattern}
\begin{split}
&\forall n\in[\![0,M-1]\!],\\
&\delta_n = \left\lfloor\dfrac{n}{\sqrt{M}}\right\rfloor,\\
&a_n=(1-\alpha)h\delta_n,\\
&b_n=a_n+h,\\
&c_n=\left(\left(1-(-1)^{\delta_n}\right)\dfrac{\sqrt{M}-1}{2}+(-1)^{\delta_n}\left(n-\delta_n\sqrt{M}\right)\right)(1-\alpha)\gamma h,\\
&d_n = c_n+\gamma h,\\
&\mathcal{W}_n = \begin{bmatrix}
I_{a_nc_n} & I_{a_nc_{n+1}} & \dots \\
\vdots & \ddots & \\
I_{b_nc_n} & & I_{b_nd_n}
\end{bmatrix},
\end{split}&&
\end{flalign}
where $\mathcal{W}_n$ is the n-th window extracted from the image $I$, $h$ is a constant determined using Theorem \ref{theorem:condition on the parameters} and Proposition \ref{prop:h}, and $\alpha$ is calculated using Proposition \ref{prop:alpha_simple}. This pattern is a modification of the one described in Figure \ref{fig:sliding_window} such that when the window slides to the right edge of the rectangle, it slides downward by a step and slides back to the left until reaching the left edge, before it slides downward again and slides back to the right edge. As a result, this pattern ensures the time--coherence of the 3-dimensional representation created by the VOTCSW method. Another property of the VOTCSW method is that it performs an oversampling of the pixels due to the window overlap, which enables the same pixel to be covered by more than one window.
\begin{proposition}\label{prop:oversampling}
The maximum oversampling factor for a pixel in an image that is processed with the VOTCSW method with an overlap $\alpha$ is $\dfrac{1}{(1-\alpha)^2}$.
\end{proposition}
\begin{proof}
We designate by $(x,y)$ a pixel in a given image. We suppose that $h$, $M$, and $\gamma$ were chosen prior to the use of the VOTCSW method and that the overlap $\alpha$ was calculated for the image. For the pixel $(x,y)$ to be contained in a given window $(n,m)$ such that $(n,m)\in[\![0,\sqrt{M}-1]\!]^2$, the following conditions need to be verified:
\begin{flalign*}
\begin{cases}
(1-\alpha)hn&\leq x\leq (1-\alpha)hn+h\\
(1-\alpha)\gamma hm&\leq y\leq (1-\alpha)\gamma hm+\gamma h\\
\end{cases}.&&
\end{flalign*}
By extracting $n$ and $m$ from the previous conditions, we obtain
\begin{flalign*}
\begin{cases}
\dfrac{x}{(1-\alpha)h}-\dfrac{1}{1-\alpha}&\leq n \leq \dfrac{x}{(1-\alpha)h}\\
\dfrac{y}{(1-\alpha)\gamma h}-\dfrac{1}{1-\alpha}&\leq m \leq \dfrac{y}{(1-\alpha)\gamma h}
\end{cases},&&
\end{flalign*}
which determines what values of $n$ and $m$ are valid for a window to contain the pixel $(x,y)$. Since $n$ is an integer, it can take at most $\dfrac{x}{(1-\alpha)h}-\left(\dfrac{x}{(1-\alpha)h}-\dfrac{1}{1-\alpha}\right)=\dfrac{1}{1-\alpha}$ different values in $[\![0,\sqrt{M}-1]\!]$, and the same can be inferred for $m$. Since every combination of $n$ and $m$ that follows the previous conditions can determine a window that contains the pixel $(x,y)$, the total number of times the pixel $(x,y)$ is present in a window is at most $\dfrac{1}{(1-\alpha)^2}$.
\end{proof}
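The bound can also be checked numerically. The brute-force counter below (our own illustration, not part of the method) enumerates all $M$ windows and counts how many contain a given pixel; the count never exceeds $\frac{1}{(1-\alpha)^2}$.
\begin{verbatim}
import math

def windows_containing(x, y, h, gamma, alpha, M):
    """Count the VOTCSW windows that contain pixel (x, y)."""
    side = int(round(math.sqrt(M)))
    count = 0
    for n in range(side):        # window row index
        for m in range(side):    # window column index
            in_rows = (1 - alpha) * h * n <= x <= (1 - alpha) * h * n + h
            in_cols = ((1 - alpha) * gamma * h * m <= y
                       <= (1 - alpha) * gamma * h * m + gamma * h)
            count += in_rows and in_cols
    return count   # at most 1 / (1 - alpha)**2 by the proposition
\end{verbatim}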
Proposition \ref{prop:oversampling} implies that $\alpha_{min}$ and $\alpha_{max}$ are means to control the maximum oversampling factor of a pixel. Moreover, it implies that the smaller the image, the more its pixels will be oversampled, which alters the class distribution of a classification problem if the size distribution in each class is uneven. This may prove useful in certain cases of image classification where the images representing the least represented class happen to be the smallest in size. Finally, the VOTCSW method can be summarized in the following steps:
\begin{enumerate}
\item[1.] Ensure that the image dataset has a single aspect ratio $\beta$ and determine $H_{max}$ and $H_{min}$.
\item[2.] Choose $\alpha_{min}$, $\alpha_{max}$, $M$ and $h$ as recommended in Theorem \ref{theorem:condition on the parameters} and Proposition \ref{prop:h}. Define $\gamma=\beta$.
\item[3.] Choose a time--coherent sliding pattern such as the one described in Eq. (\ref{eq:sliding_pattern}) and use it for each image in the dataset.
\end{enumerate}
\section{Experiments and results}\label{chapter=plants:section=experiments}
In order to produce meaningful results and to reliably choose one model over another, the framework shown in Figure \ref{fig:plants_framework} was designed. This section details and discusses each block in the diagram and provides an in-depth analysis of the results obtained on the WSISCMC dataset.
\subsection{Preprocessing}
The WSISCMC plant species classification dataset used in this work was constructed specifically to overcome the limitations of the current state-of-the-art datasets used in machine learning, namely a lack of sufficient variety and quantity. It contains $38,680$ high-quality square photos of various sizes, depicting 8 different species of plants taken from different angles and at various growth stages, which can theoretically enable a well-trained classification model to recognize plants at any stage of their growth. Moreover, the plants are potted and placed in front of a uniform background that can easily be substituted with field images, for example. There are also images taken with a mobile phone in various backgrounds, to enable trained models to be tested on unprocessed images. In \cite{intro:EAGL-I}, it was shown that the WSISCMC dataset allows the production of a reliable binary classifier that differentiates between grasses and non-grasses. However, the dataset had not been used to train a more complex plant species classification model. A first attempt to evaluate the difficulty of this task on this dataset was to create a baseline 2DCNN that takes images resized to $224\times224$ and attempts to predict the species of the plant present in the image. The tested accuracy of that baseline model was a mediocre $49.23\%$.
\newline\indent
It was then determined, after an analysis of the dataset, that there was a significant imbalance in the class distribution. Moreover, this distribution was radically different between the training set and the test set. Furthermore, the size distribution was also very different between the training set and test set, such that the maximum image size in the training set was $1226\times1226$ and the maximum image size in the test set was $2346\times2346$. In addition, the distribution of size per class also varied between the training set and the test set. Therefore, the dataset was entirely redistributed into a training set and a test set that have the same class distribution and the same size per class distribution while keeping the same train-test ratio as the initial dataset. Table \ref{table:plants_distribution} shows the initial training set and test set class distributions as well as the class distribution of the reworked dataset. We notice that the ``Wild Oat" class is not even present in the test set and that the ``Smartweed" and ``Yellow Foxtail" classes are the most represented classes in the test set, and the least represented ones in the training set. Figure \ref{fig:size_distribution}.(a) shows the size distribution of the ``Canola" class in the training set and the test set whereas Figure \ref{fig:size_distribution}.(b) shows the size distribution of the ``Canola" class in the reworked dataset. We notice that the size distribution of the ``Canola" class in the training set is widely different from the test set and that the reworked dataset ensures that every size is present with the same proportion in the training set and the test set.
\begin{table}[h!]
\begin{tabular}{llll}
Species&Training set distribution&Test set distribution&Reworked distribution\\\hline
Smartweed&0.03&0.14&0.04\\\hline
Yellow Foxtail&0.10&0.22&0.11\\\hline
Barnyard Grass&0.25&0.12&0.23\\\hline
Wild Buckwheat&0.12&0.14&0.12\\\hline
Canola&0.19&0.14&0.19\\\hline
Canada Thistle&0.14&0.14&0.14\\\hline
Dandelion&0.14&0.10&0.13\\\hline
Wild Oat& 0.03& 0&0.03\\\hline
\end{tabular}
\caption{Training set, test set and reworked distributions of the 8 species in the dataset.}
\label{table:plants_distribution}
\end{table}
\begin{figure}[h!]
\centering
\subcaptionbox{Training set and test set distribution.}{\includegraphics[scale=0.065]{train_test_size_distribution.jpg}}\subcaptionbox{Reworked dataset distribution.}{\includegraphics[scale=0.065]{reworked_size_distribution.jpg}}
\caption{Size distributions for the ``Canola" class in the training set, the test set and the reworked dataset.}
\label{fig:size_distribution}
\end{figure}
\newline\indent
These factors, as well as the aforementioned ones, explain why the baseline model failed to accurately determine the species of the images it processed. Nevertheless, after reworking the dataset, the baseline model achieved an accuracy of $95.1\%$. Although this was a noticeable improvement, there was still room to achieve better accuracy with further preprocessing. One such step is to modify the bias initialization of the classifier's last layer to take into account the inherent data imbalance. Since the last layer uses a softmax activation function, we need to solve the following system of equations:
\begin{flalign*}
\forall k\in[\![1,N]\!], p_k = \dfrac{e^{b_k}}{\sum_{i=1}^{N}e^{b_i}},&&
\end{flalign*}
where $N$ is the number of classes, $p_k$ is the prevalence of class $k$ in the dataset, and $b_k$ is the bias of the neuron $k$ that will predict the probability of an image belonging to class $k$. This system is linear in $e^{b_k}$ and is easily solvable by noticing that $\dfrac{p_j}{p_k}=e^{b_j-b_k}$. This bias modification improved the accuracy of the baseline model by $1.58\%$, reaching $96.68\%$ accuracy. Consequently, we created 6 different versions of the same dataset. The first three versions are produced by the VOTCSW method with the following parameters: $\alpha_{min}=0.1$, $\alpha_{max}=0.9$ and $M=9$. These parameters impose that we shrink the images whose height is greater than $973$ to $973$. This value was determined using Eq. (\ref{eq:H_max/H_min}) for $H_{min}=418$, which is the lowest size present in the dataset. The difference between the first three versions produced by the VOTCSW method is the sliding pattern, which is horizontal (refer to Eq. (\ref{eq:sliding_pattern})), vertical and spiral, respectively. The images were all transformed into 3D tensors of size $348\times348\times9$, meaning that they are equivalent to videos of size $348\times348$ containing $9$ frames. The fourth and fifth versions of the dataset consist of resizing the images to a size that is equivalent, in terms of the number of samples, to the size of the ``videos" generated by the VOTCSW method. This size is $\sqrt{348\times348\times9}=1044$, and the images that are larger than $1044$ are shrunk to $1044$, whereas the ones that are smaller than $1044$ are zero-padded to that size in the fourth version, and magnified to that size in the fifth version. The sixth version of the dataset consists of shrinking all the images to a size of $224\times224$, which is suitable for using well-known architectures such as ResNet or Inception. The 6 versions of the dataset are referred to as WSISCMC--H, WSISCMC--V, WSISCMC--S, WSISCMC--1044P, WSISCMC--1044M, and WSISCMC--224, following the order in which they were introduced above.
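For reference, the system of equations above has a closed-form solution: since the softmax is invariant to a shared additive constant, $b_k=\log p_k$ solves it whenever the $p_k$ sum to one. The short sketch below is our own illustration of this initialization, using the reworked class priors of Table \ref{table:plants_distribution}.
\begin{verbatim}
import numpy as np

# reworked class priors from Table (plants_distribution)
priors = np.array([0.04, 0.11, 0.23, 0.12, 0.19, 0.14, 0.13, 0.03])

# softmax(b)_k = p_k is solved, up to an additive constant, by b_k = log(p_k)
biases = np.log(priors)

# sanity check: the softmax of the biases recovers the (normalized) priors
softmax = np.exp(biases) / np.exp(biases).sum()
assert np.allclose(softmax, priors / priors.sum())
\end{verbatim}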
\subsection{Model development}
After performing the preprocessing described above, we created multiple variations of each kind of model. All the networks share a $3\times3$ ($3\times3\times3$ for 3D) filter mask size for the convolution layers, an initial layer composed of $32$ neurons, a last feature extraction layer composed of $64$ neurons, the use of ReLU \cite{relu} as activation function, the shape of the output of their feature extractor, which is $576$ features, and their densely connected layers, which are composed of 2 layers: one containing $128$ neurons, followed by one containing $8$ neurons corresponding to the $8$ classes of the dataset with a softmax activation. For each dataset version, there is a predetermined depth for the networks created, as shown in Figure \ref{fig:network_architecture}. We performed a grid search on the number of neurons for the 4 penultimate feature extraction layers of every 2D network with 10-fold cross validation, and we ensured that the validation set always has the same class distribution and size distribution as the training set. The possible values for the number of neurons in these layers were $16$, $32$ and $64$, to keep the experiments completable in a reasonable amount of time. The layers that were not searched were composed of $64$ neurons by default. This search allowed us to select the best architecture among $81$ variations (see the sketch below) for each of the 3 2D dataset versions, WSISCMC--224, WSISCMC--1044P and WSISCMC--1044M.
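As a rough sketch of this search space (an illustration, not the actual training code), the $81$ variations correspond to the Cartesian product of the three candidate widths over the four searched layers:
\begin{verbatim}
from itertools import product

candidate_widths = [16, 32, 64]
# one tuple of widths for the 4 penultimate feature-extraction layers
architectures = list(product(candidate_widths, repeat=4))
assert len(architectures) == 3 ** 4 == 81
\end{verbatim}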
\begin{figure}[h!]
\centering
\includegraphics[scale=0.15]{network_architecture.jpg}
\caption{Network architecture for each version of the WSISCMC dataset.}
\label{fig:network_architecture}
\end{figure}
\newline\indent
Following that, for each 2D version of the dataset, a network was created and trained with the best determined architecture 10 times with different initial weights and evaluated on its corresponding test set such that only the best one is chosen. Then, the 2DCNN architecture of the best network among the one trained on the WSISCMC--1044P and the one trained on the WSISCMC--1044M was chosen to create a 3DCNN architecture that has the same number of parameters as the chosen architecture. This is to ensure a fair evaluation of the effect of the VOTCSW method on the performance of the 3DCNN. Indeed, if a 3DCNN with more parameters than a 2DCNN trained on WSISCMC--1044P or WSISCMC--1044M achieves better results than the 2DCNN, then this might be due to the difference in the number of parameters, and not the difference in data representation. Since 2DCNNs and 3DCNNs perform the same fundamental operation, if both have the same number of parameters and one of them performs better than the other, then this is most likely due to the difference in data representation. Naturally, the weight initialization also plays a role in this difference in performance, therefore, we always train the same network 10 times with different initial weights and choose the one that achieves the best accuracy on the test set. The architecture of the 3DCNN that is calculated from the best 2DCNN architecture has to be consistent with the above description in the sense that the initial layer contains $32$ neurons, and the last feature extraction layer contains $64$ neurons. We assume that all the inner layers of the 3DCNN have the same number of neurons $N$. If we denote the number of parameters of the feature extractor of the best network trained on WSISCMC--1044P or WSISCMC--1044M by $N_{1044}$, the receptive field of the 3D convolution layers by $R_3$ ($27$ in this case), and the receptive field of the 2D convolution layers by $R_2$ ($9$ in this case), then the number of neurons $N$ of each inner layer of the 3DCNN is determined by the following equation:
\begin{flalign*}
3\times32\times R_3 + 32R_3 N + 2 R_3 N^2 + R_2N^2+ 64R_2N + 32 + 64 + 4N=N_{1044},&&
\end{flalign*}
which is equivalent to
\begin{flalign}\label{eq:N}
(2R_3+R_2)N^2+4(8R_3+16R_2+1)N+96(R_3+1)-N_{1044}=0.&&
\end{flalign}
This second-degree equation in $N$ has a unique positive solution because $96(R_3+1)-N_{1044}$ is negative and the other coefficients are positive. Since $N$ has to be an integer, the solution is rounded before it is used.
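The computation is easy to reproduce. The following sketch (our own illustration) solves Eq. (\ref{eq:N}) with the receptive fields used here and recovers the value $N=53$ used in the experiments below.
\begin{verbatim}
import math

def inner_width(N_1044, R3=27, R2=9):
    """Unique positive root of Eq. (N), rounded to an integer."""
    a = 2 * R3 + R2
    b = 4 * (8 * R3 + 16 * R2 + 1)
    c = 96 * (R3 + 1) - N_1044    # negative since N_1044 > 96 * (R3 + 1)
    return round((-b + math.sqrt(b * b - 4 * a * c)) / (2 * a))

print(inner_width(257408))   # -> 53, the width used for the 3DCNN layers
\end{verbatim}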
\newline\indent
The resulting architecture was then used to create and train 3DCNNs on WSISCMC--H, WSISCMC--V and WSISCMC--S. The best 3DCNN network and the best 2DCNN network were then selected to be extended to NDPNNs. Consequently, each convolution layer of each network was changed to an NDPNN layer of degree $7$, such that a 2DPNN and a 3DPNN were trained and evaluated on their respective datasets. Finally, both of them were reduced using the layer-wise degree reduction heuristic described in Algorithm \ref{alg:degree_reduction} with $0$ tolerance, to gain computational efficiency and reduce memory usage without sacrificing accuracy. We also fine-tuned a ResNet50V2, an InceptionV3 and an Xception network on the WSISCMC--224 dataset. These networks were also trained from scratch, and only the best among the fine-tuned and the from-scratch versions of each model was selected for evaluation. Every model was trained with the Adam optimizer \cite{adam}, a batch size of $128$ and a learning rate of $10^{-3}$ for $100$ epochs.
\subsection{Results and discussion}
The grid search determined that, for all the networks, $64$ neurons in every layer (except the first) produces the best average results. The number of neurons for the 3DCNN layers was therefore determined to be $53$, according to Eq. (\ref{eq:N}) with $N_{1044}=257408$, which corresponds to the number of parameters of the feature extractor of the 2DCNN--1044P and the 2DCNN--1044M. Furthermore, ResNet50V2, InceptionV3 and Xception failed to produce decent results when trained from scratch. Therefore, only the fine-tuned networks were considered. Table \ref{table:performance} shows the best test accuracy, aggregated precision, aggregated recall, aggregated F1 score, the average inference time per sample and the number of trainable parameters of every model described above. The experiments show that the best 2DCNN model is the one trained on WSISCMC--224, and the best 3DCNN is the one trained with the vertical sliding pattern. Therefore, they were chosen to be extended to NDPNNs, and the 2DPNN achieved a $99.48\%$ accuracy while the 3DPNN achieved a state-of-the-art $99.58\%$ accuracy. Their execution times and numbers of trainable parameters were measured after the polynomial degree reduction described in Algorithm \ref{alg:degree_reduction}, which determined that the first two layers of the 2DPNN could be reduced to degrees $7$ and $2$ respectively while the remaining ones could be reduced to $1$, which represents $4.67$ times fewer parameters, and that the first three layers of the 3DPNN could be reduced to $6$, $2$ and $2$ respectively, while the remaining ones could be reduced to $1$, which represents $4.01$ times fewer parameters. The results also show that ResNet50V2, InceptionV3 and Xception failed to match the performance of the 2DPNN and the 3DPNN despite having more than 30 times the number of parameters. Furthermore, even though the accuracy of the 3DPNN is unmatched, the data generated by the VOTCSW method came with an increase in the spatio-temporal complexity of the model, as it runs $3.28$ times slower than its 2D counterpart and has $1.75$ times more parameters. However, both 2DCNN--1044P and 2DCNN--1044M did not provide satisfactory results compared to the 3DCNN models, which run faster and have fewer parameters; this tends to show that the 3D representation created with the VOTCSW method is better than padding, magnifying or shrinking. Moreover, the 3DCNN models achieve better performance overall than the 2DCNN--224 model, but they have more parameters and run slower, which suggests that the VOTCSW method comes at the cost of slightly heavier but better models.
\newline\indent
\begin{table}[h!]
\begin{tabular}{lllllll}
&\multicolumn{6}{l}{Performance}\\\cline{2-7}
Model&Accuracy&Precision&Recall&F1 Score& Inference time (ms)&Parameters\\\hline
2DCNN--224 & 98.15&98.2&98.08&98.14&4.7&241,992\\\hline
2DCNN--1044P& 97.8&97.83&97.78&97.8&14.49&332,296\\\hline
2DCNN--1044M& 98.03&97.96&98.11&98.03&14.73&332,296\\\hline
3DCNN--H& 98.28&98.37&98.16&98.26&14.26&331,075\\\hline
3DCNN--V & 98.43&98.64&98.3&98.46&14.15&331,075\\\hline
3DCNN--S & 98.28&98.37&98.16&98.26&14.31&331,075\\\hline
2DPNN--224 & 99.48&99.53&99.33&99.42&4.98&265,608\\\hline
\textbf{3DPNN--V} & \textbf{99.58}&\textbf{99.69}&\textbf{99.36}&\textbf{99.52}&16.34&465,670\\\hline
ResNet50V2 & 98.25&98.17&98.28&98.22&28&23,581,192\\\hline
InceptionV3 & 97.78&97.64&97.83&97.73&38&21,819,176\\\hline
Xception & 97.9&97.85&98.03&97.93&31&21,819,176\\\hline
\end{tabular}
\caption{Accuracy, precision, recall, F1 score, inference time and number of trainable parameters of all the models trained on the reworked WSISCMC dataset. Bold values represent the highest value in their respective column.}
\label{table:performance}
\end{table}
An analysis of the generalization behavior of the networks shows that the 3DCNN--V model learns to generalize faster than the equally complex 2DCNN--1044M model and the 2DCNN--224 model, as represented in Figure \ref{fig:accuracy_time}.(a). Furthermore, a more stable convergence is observed for the 3DCNN--V model, which may be explained by the fact that the oversampling inherent to the VOTCSW method (refer to Proposition \ref{prop:oversampling}) has a regularization effect that smooths the weight updates and enables a steadier training with a potential reduction of overfitting. These effects are also observed in the convergence of the NDPNN models represented in Figure \ref{fig:accuracy_time}.(b), where there is a clear gap between the convergence speed and stability of the 3DPNN--V and those of the 2DPNN--224.
\begin{figure}[h!]
\centering
\subcaptionbox{CNNs accuracy over time.}{\includegraphics[scale=0.0635]{CNNs.jpg}}\hspace{5pt}\subcaptionbox{NDPNNs accuracy over time.}{\includegraphics[scale=0.0635]{NDPNNs.jpg}}
\caption{Evolution of the networks' test accuracies per epoch.}
\label{fig:accuracy_time}
\end{figure}
A further analysis of the performance of the 3DPNN--V model shows that it was able to encapsulate enough information to perfectly recognize the least represented class in the dataset, ``Wild Oat", as shown in Table \ref{table:confusion}, which outlines the confusion matrix of the 3DPNN--V model. Since only 17 images were wrongly classified, an in-depth investigation was performed. This investigation revealed that 10 images showed a blue background, as illustrated in Figure \ref{fig:wrongly_classified}.(a), and that 3 images contained multiple plants, as shown in Figure \ref{fig:wrongly_classified}.(b). Moreover, upon further investigation, it was determined that the 3DPNN--V model correctly recognizes one of the plants present in each of the 3 multiple-plant images. Therefore, we can consider that the model is only wrong on 4 images, since the problematic images contradict the task of single plant classification by either showing no plant or multiple ones. As a result, the 3DPNN--V model achieved an effective accuracy of $\dfrac{4004-17}{4004-17+4}=99.9\%$ once the aberrant samples are removed from the test set.
\begin{table}[h!]
\resizebox{\columnwidth}{!}{
\begin{tabular}{lllllllll}
Actual--Predicted&Canola&Dandelion&Canada Thistle&Wild Oat&Wild Buckwheat&Smartweed&Barnyard Grass&Yellow Foxtail\\\hline
Canola&752&1&1&0&1&0&0&0\\\hline
Dandelion&0&540&0&0&0&0&0&0\\\hline
Canada Thistle&0&0&545&0&0&0&0&0\\\hline
Wild Oat&0&0&0&126&0&0&0&0\\\hline
Wild Buckwheat&0&1&0&0&488&0&1&0\\\hline
Smartweed&0&1&1&0&0&145&2&0\\\hline
Barnyard Grass&1&0&0&0&0&0&939&0\\\hline
Yellow Foxtail&0&1&1&0&0&0&5&452\\\hline
\end{tabular}
}
\caption{Confusion matrix of the 3DPNN--V model.}
\label{table:confusion}
\end{table}
\begin{figure}[h!]
\centering
\subcaptionbox{Empty image.}{\includegraphics[scale=0.3]{blue_background.png}}\hspace{5pt}\subcaptionbox{Multiple plants in one image.}{\includegraphics[scale=0.12]{many_plants.png}}
\caption{Examples of images wrongly classified by the 3DPNN-V model.}
\label{fig:wrongly_classified}
\end{figure}
\newline\indent
We now investigate why the VOTCSW method enables the creation of a model that generalizes better than one trained on resized images, aside from its regularization-like behavior. The intuition behind the improved generalization comes from the fact that the VOTCSW method enables a model using a 3D convolution kernel to have a larger effective field of view than a 2D convolution kernel, as illustrated in Figure \ref{fig:comparison}, where the white squares represent $3\times3$ convolution kernels, and the red, green and black dashed squares represent 3 consecutive overlapping windows generated by the VOTCSW method. The VOTCSW convolution kernel has three times more parameters and is more spatially dilated, which enables it to take into account three distinct informative areas that may be distant, such as leaves. Hence, this helps in creating ``spatially aware" models that not only achieve what regular 2D convolution models already do, but also establish a map of more complex spatial features.
\begin{figure}[h!]
\centering
\subcaptionbox{Regular 2D convolution kernel.}{\includegraphics[scale=0.12]{regular_convolution.jpg}}\hspace{5pt}\subcaptionbox{VOTCSW 3D convolution kernel.}{\includegraphics[scale=0.12]{VOTCSW.jpg}}
\caption{Comparison between a regular 2D convolution kernel and VOTCSW 3D convolution kernel.}
\label{fig:comparison}
\end{figure}
\section{Conclusion}\label{chapter=plants:section=conclusion}
The experiments conducted on the WSISCMC dataset demonstrate that the VOTCSW method coupled with 3DPNNs outperforms fine-tuned ResNet50V2, InceptionV3 and Xception with far less spatio-temporal complexity. This confirms the intuition proposed by Mehdipour Ghazi et al. \cite{related:cnn_full_plant}, who state that models with simpler architectures tend to learn better from scratch than complex architectures. Moreover, we determined that the NDPNN layer-wise degree reduction heuristic can significantly compress a pre-trained NDPNN without altering its performance on the test set, which makes it a necessary postprocessing tool to be used in conjunction with NDPNNs. Furthermore, we also demonstrated that the VOTCSW method offers a better alternative to resizing when using the WSISCMC dataset, which contains images with variable sizes, and that the 3D representation it creates is more informative than a resized 2D representation. In addition, we discovered that the current publicly available WSISCMC dataset cannot be used with machine learning models without a mandatory preprocessing step consisting in redistributing the samples of each class by occurrence and size, to create a test set that has the same distribution as the training set. Besides, we also discovered that there were aberrant samples in the dataset which contradict the fact that the dataset should only contain single-plant images. However, despite these minor issues, we can safely declare that the EAGL--I system has the potential to produce highly relevant massive datasets, provided that the authors impose a stricter control on the data acquisition process and ensure that classes are balanced.
\newline\indent
Despite its effectiveness, the NDPNN layer-wise degree reduction heuristic was only applied after the training of an NDPNN was completed, which is the most time consuming process and the most risky in terms of stability, as 1DPNNs have been proven to show some instability when trained with unbounded activation functions, and this instability is expected to be observed in 2DPNNs and 3DPNNs. The instability can potentially be reduced by lowering the degree of each layer of the model either before or during training, hence the potential to use this heuristic in the model validation process instead of after a model is trained. Furthermore, the heuristic is based on determining the smallest symmetric interval that contains the values of a given layer's output regardless of the channel/neuron. This implies that some polynomials are over-reduced, meaning that they are reduced on a bigger interval than the one they produce their output from. A potential solution is to determine the smallest symmetric interval that contains the values of a given neuron's output, to produce a finer and more accurate reduction. As for the VOTCSW method, although it enabled the creation of highly accurate models on the WSISCMC dataset, there is not enough evidence to claim that it can improve the results on any dataset that contains images with variable size. Additionally, the increase in model performance may not always justify the size and parameter overhead that it introduces compared to simply shrinking images, for applications that are memory bound. Besides, there is no clear indication on how to determine the minimum ratio $\dfrac{\tilde{H}_{max}}{\tilde{H}_{min}}$ or the adequate parameters $M$, $\alpha_{min}$, $\alpha_{max}$ and $h$ that maximize the performance of a model on a given dataset.
\newline\indent
Therefore, future work will focus on applying the VOTCSW method to bigger and more varied datasets in order to determine whether the performance improvement that it introduces has any statistical significance. We will also aim to determine how the choice of the VOTCSW parameters influences the performance of the trained models, with an emphasis on creating a more dataset-specific set of rules for the method's use. We also plan on exploiting the NDPNN layer-wise degree reduction heuristic for validating models before training, which can be a safer alternative to the way it was used in this work. In addition, we intend to apply the polynomial reduction at the neuronal level in order to obtain more stable and accurate polynomials than the ones obtained with the layer-wise reduction, which may enable the degree of a layer to be further reduced.
\section*{Acknowledgement}
This work was funded by the Mitacs Globalink Graduate Fellowship Program (no. FR41237) and the NSERC Discovery Grants Program (nos. 194376, 418413). The authors wish to thank The University of Winnipeg’s Faculty of Graduate Studies for the UW Graduate Studies Scholarship, and Dr. Michael Beck for his valuable assistance with the WSISCMC dataset.
{Agoda}~\footnote{{Agoda}.com} is a global online travel agency for
hotels, vacation rentals, flights and airport transfers.
Millions of guests find their accommodations and
millions of accommodation providers list their properties in {Agoda}.
Among these millions of properties listed in {Agoda},
many of their prices are fetched through third party suppliers.
These third party suppliers
do not synchronize the hotel prices to {Agoda}.
Every time, to get a hotel price from these suppliers,
{Agoda}\ needs to make 1 HTTP request call to the supplier to
fetch the corresponding hotel price.
However, due to the sheer volume of the search requests received from users,
it is impossible to forward every request to the supplier.
Hence, a cache database which temporarily stores the hotel prices is built.
For each hotel price received from the supplier,
{Agoda}\ stores it into this cache database for some amount of time and evicts the price from the cache once it expires.
Figure~\ref{fig:flow2} above abstracts the system flow.
Every time a user searches for a hotel on {Agoda},
{Agoda}\ first reads from the cache.
If there is a cached hotel price for this search,
it is a 'hit' and we serve the user with the cached price.
Otherwise, it is a 'miss' and the user will not see a price for that hotel.
For every 'miss',
{Agoda}\ sends a request to the supplier to get the price for that hotel
and puts the returned price into the cache,
so that subsequent users can benefit from the cached price.
However, every supplier limits the number of requests we can send per second. Once we reach the limit, subsequent requests are ignored. This poses four challenges.
\begin{figure}[t]
\centering
\includegraphics[width = 4.3in]{flow.png}
\caption{System flow of third party supplier hotel serving.
If a cached price exists, {Agoda}\ first serves the cached price to the user.
Otherwise, {Agoda}, on a best-effort basis,
sends a request to the supplier to fetch the hotel price and puts it in the cache.}
\label{fig:flow2}
\end{figure}
\textbf{Challenge 1: Time-to-live (TTL) determination}.
For a hotel price fetched from the supplier,
how long should we keep such a hotel price in the cache before expiring it?
We call this duration the time-to-live (TTL).
The larger the TTL, the longer the hotel prices stay in the cache database.
As presented in Figure~\ref{fig:ttl_role}, the TTL plays three roles:
\begin{figure}[t!]
\centering
\includegraphics[height=1.9in]{challenge1.png}
\caption{TTL v.s. cache hit, QPS and price accuracy.}
\label{fig:ttl_role}
\end{figure}
\begin{itemize}
\item \textbf{Cache Hit}.
With a larger TTL, hotel prices are cached in the database for a longer period of time and hence, more hotel prices will remain in the database. When we receive a search from our users, there is a higher chance of getting a hit in the database. This enhances our ability to serve our users with more hotel prices from the third party suppliers.
\item \textbf{QPS}.
As we have limited QPS to each supplier, a larger TTL allows more hotel prices to be cached in database. Instead of spending QPS on repeated queries, we can better utilise the QPS to serve a wider range of user requests.
\item \textbf{Price Accuracy}.
As the hotel prices from suppliers change from time to time, a larger TTL means that the hotel prices in our cache database are more likely to be inaccurate. Hence, we will not be able to serve the users with the most up-to-date hotel prices.
\end{itemize}
There is a trade-off between cache hit and price accuracy, and we need to choose a TTL that caters to both.
To the best of our knowledge,
most Online Travel Agencies (OTAs) typically pick
a small TTL, ranging from 15 minutes to 30 minutes.
However, this is not optimal.
\textbf{Challenge 2: Cross data centre QPS management}.
\begin{figure}
\centering
\includegraphics[height = 1.9in]{qps_management.png}
\caption{Cross data centre QPS management limitation.
Data centre A peaks around $50\%$ QPS around \texttt{18:00} and
data centre B peaks around $50\%$ QPS around \texttt{04:00}. }
\label{fig:qps-multi-dc}
\end{figure}
{Agoda}\ has several data centres globally to handle the user requests.
For each supplier,
we need to set a maximum number of QPS that each data centre is allowed to send.
However, each data centre has its own traffic pattern.
Figure~\ref{fig:qps-multi-dc} presents an example of
the number of QPS sent to a supplier from
two data centres A and B.
For data centre A, it peaks around $50\%$ QPS around \texttt{18:00}.
At the same time, data centre B peaks around $50\%$ QPS around \texttt{04:00}.
If we evenly distribute this $100\%$ QPS to data centre A and data centre B,
then we are not fully utilizing this $100\%$ QPS.
If we allocate more than $50\%$ QPS to each data centre,
how can we make sure that
data centre A and data centre B never exceed $100\%$ QPS in total?
Note that the impact of breaching the QPS limit could be catastrophic,
potentially bringing the supplier offline.
\textbf{Challenge 3: Single data centre QPS utilization}.
\begin{figure}
\centering
\includegraphics[height = 1.9in]{qps_unutilize.png}
\caption{Un-utilized QPS}
\label{fig:qps-utilization}
\end{figure}
As mentioned in the previous section,
each data centre has its own traffic pattern:
there are peak periods when we send the largest number of requests to the supplier,
and non-peak periods when we send far fewer requests.
As demonstrated in Figure~\ref{fig:qps-utilization},
this data centre
sends \textless $40\%$ QPS to the supplier around \texttt{08:00}.
Hence, $100\% - 40\% = 60\%$ of this data centre's QPS is not utilized at that time.
\textbf{Challenge 4: Cache hit ceiling}.
The passive system flow presented in Figure~\ref{fig:flow2}
has an intrinsic limitation to improve the cache hit.
Note that, this design sends a request to supplier to fetch for price
only if there is a miss.
This is passive!
Hence, a cache hit only happens if
the same hotel search happened previously and the TTL is larger than the
time difference between the current and previous hotel search.
Note that we cannot set the TTL to be arbitrarily large, as this will lower the price accuracy, as explained in Challenge 1. As long as the TTL of a specific search is not arbitrarily large, the cached price will expire and the next request for this search will be a miss.
Even if we could set the TTL to be arbitrarily large,
hotel searches that never happened before would always be misses.
For example, if more than 20\% of the requests are new hotel searches,
then it is inevitable that the cache hit rate stays \textless 80\%, regardless of how large the TTL is set.
To overcome the 4 challenges mentioned above,
we propose ${PriceAggregator}$, an intelligent system for hotel price fetching.
As presented in Figure~\ref{fig:superAgg_flow},
before every price is written to the cache (Price DB),
it always goes through a TTL service,
which assigns a different TTL to different hotel searches.
This TTL service is built on historical data
to optimize the trade-off between cache hit and price accuracy,
which addresses Challenge 1.
Apart from passively sending requests to the supplier to fetch hotel prices,
${PriceAggregator}$ re-invents the process by
adding an aggressive service which proactively
sends requests to the supplier to fetch hotel prices
at a constant QPS.
By having a constant QPS,
Challenge 2 and Challenge 3 can be addressed easily.
Moreover, this aggressive service does not wait for a hotel search
to appear before sending requests to the supplier.
Therefore, it can increase the cache hit and hence addresses Challenge 4.
In summary, we make the following contributions in the paper:
\begin{enumerate}
\item We propose ${PriceAggregator}$,
an intelligent system which maximizes the bookings for a limited QPS.
To the best of our knowledge,
this is the first productionised intelligent system which optimises the utilization of QPS.
\item We present a TTL service,
SmartTTL which optimizes the trade-off between cache hit and price accuracy.
\item Extensive A/B experiments were conducted to show that ${PriceAggregator}$ is effective and increases {Agoda}'s revenue significantly.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[height=1.5in]{system_flow.png}
\caption{${PriceAggregator}$ system flow. }
\label{fig:superAgg_flow}
\end{figure}
The rest of the paper is organized as follows.
Section~\ref{sec:definition} presents the necessary definitions before we present the TTL service, {SmartTTL }, in Section 3.
In Section 4, we present the aggressive model.
In Section 5, we present the experiment results and analysis.
Section 6 presents the related work
before concluding the paper in Section 7.
\section{Preliminary and Definition}
\label{sec:definition}
In this section, we give the necessary definitions.
Figure~\ref{fig:bookinig_flow} presents the major steps in the hotel booking process.
In stage 1, a user requests a hotel price.
In stage 2, if the hotel price already exists in the cache, the user is presented with the cached price; otherwise, the user will not see the hotel price.
In stage 3, if the user is happy with the hotel price, the user clicks to book.
In stage 4, {Agoda}\ confirms with the hotel whether the price is eligible to sell. If the price is eligible to sell, then {Agoda}\ confirms the booking in stage 5.
\begin{figure}[t]
\centering
\includegraphics[height = 0.65in]{booking_flow.png}
\caption{{Agoda}\ booking flow}
\label{fig:bookinig_flow}
\end{figure}
\begin{definition}
Let $U=\{u_1, u_2, \dots, u_{|U|}\}$
be the set of users requesting for hotels in {Agoda}.
Let $H=\{h_1, h_2, \dots, h_{|H|}\}$
be the set of hotels that {Agoda}\ have.
Let $C=\{c_1,c_2,\dots, c_{|C|}\}$
be the set of search criteria that {Agoda}\ receives,
and each $c_i$ is in the form of
$ \langle \texttt{checkin,checkout,adults,children,}$
$\texttt{rooms} \rangle$.
\end{definition}
In the definition above, $U$ and $H$ are self-explanatory.
For $C$, $\langle \texttt{2020-05-01,}$
$\texttt{2020-05-02, 2,0,1} \rangle$
means a search criteria having the
checkin date as $\texttt{2020-05}$
$\texttt{-01}$,
the checkout date as $\texttt{2020-05-02}$,
the number of adults as $\texttt{2}$,
the number of children as $\texttt{0}$
and the number of room as $\texttt{1}$.
Therefore, we can define the itinerary request and the user search as follows.
\begin{definition}
Let $R = \{r_1, r_2, \dots, r_{|R|}\}$ be the set of itinerary requests that {Agoda}\ sends to the suppliers, where $r_i \in H \times C$.
Let $S = \{s_1, s_2, \dots, s_{|S|}\}$ be the set of searches that {Agoda}\ receives from the user, where $s_i \in U \times H \times C$.
\end{definition}
For example, an itinerary request $r_i = \langle \texttt{Hilton Amsterdam}$,$\texttt{2020-06-01}$, $\texttt{2020-06-02,1,0,1} \rangle$ means {Agoda}\ sends a request to the supplier to
fetch the price for hotel $\texttt{Hilton Amsterdam}$ with $\texttt{checkin=2020-06-01, checkout=}$\linebreak
$\texttt{2020-06-02}$,
$\texttt{adults=1}$,
$\texttt{children=0}$,
$\texttt{rooms=1}$.
Similarly, an user search
$s_i= \langle \texttt{Alex}$,
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-}$
$\texttt{05-02}$,
$\texttt{2}$,$\texttt{0,1} \rangle$
means $\texttt{Alex}$ searched on hotel $\texttt{Hilton Amsterdam}$ for price
on $\texttt{checkin=2020-05-01}$,
$\texttt{checkout=2020-05-02}$,
$\texttt{adults=2}$,
$\texttt{children=0}$,
$\texttt{rooms=1}$.
Note that, if $\texttt{Alex}$ makes the same searches on
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-05-02}$,
$\texttt{2}$,$\texttt{0,1}$
multiple times in a day, they are counted as multiple user searches. Therefore, $S$ here is a multiset.
\begin{definition}
$P_D(s_i)$ is the probability that a user search $s_i$ hits a cached hotel price.
\end{definition}
For example, suppose $\texttt{Alex}$ makes 10 searches on
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-05-02}$,
$\texttt{2}$,
$\texttt{0,1}$, and 8 out of these 10 searches hit on the price cached.
Then, $P_D(\langle \texttt{Alex},
\texttt{Hilton Amsterdam},
\texttt{2020-05-01},
\texttt{2020-05-02},
\texttt{2},
\texttt{0,1} \rangle )
= \frac{8}{10} = 0.8$
\begin{definition}
$P_B(s_i)$ is the probability that a user search $s_i$ ends up with a booking attempt, given that the hotel price is in the cache.
\end{definition}
Following the above example, for $\langle \texttt{Alex}$,
$\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01}$,
$\texttt{2020-05-02},
\texttt{2},
\texttt{0,1} \rangle$,
$\texttt{Alex}$ has 8 searches returned prices.
And out of these 8 searches,
$\texttt{Alex}$ makes 2 booking attempts.
Then, $P_B(\langle \texttt{Alex},
\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01},
\texttt{2020-05-02},
\texttt{2},
\texttt{0,1} \rangle) = \frac{2}{8} = 0.25$
\begin{definition}
$P_A(s_i)$ is the probability that the hotel price is accurate when a user makes a booking attempt on search $s_i$.
\end{definition}
Continuing the example above,
out of the 2 booking attempts,
1 booking attempt succeeds.
Hence, $P_A(\langle \texttt{Alex},
\texttt{Hilton Amsterdam}$,
$\texttt{2020-05-01},
\texttt{2020-05-02}$,
$\texttt{2},\texttt{0,1} \rangle)$ = $\frac{1}{2} = 0.5$.
Therefore, we can formulate the number of bookings expected as follows.
\begin{definition}
The expected number of bookings is given by
\begin{equation}
K = \sum_{s_i} P_D(s_i) \times P_B(s_i) \times P_A(s_i)
\label{eqn:booking}
\end{equation}
\end{definition}
Therefore, our goal is to maximize $K$.
To do so,
we want $P_D(s_i)$, $P_B(s_i)$ and $P_A(s_i)$ to be as high as possible.
$P_B(s_i)$ reflects user behaviour;
as a hotel price fetching system,
we cannot control it,
but we can learn $P_B$ from historical data.
However, $P_D(s_i)$ and
$P_A(s_i)$ can be tuned by adjusting the TTL.
As illustrated by Figure~\ref{fig:ttl_role},
to increase $P_D(s_i)$, one can simply increase the TTL.
Similarly, to increase $P_A(s_i)$,
one just needs to decrease the TTL.
We discuss how to set the TTL to optimize the bookings in Section~\ref{sec:smartTTL}.
\section{SmartTTL}
\label{sec:smartTTL}
In this section,
we explain how we build a smart TTL service which
assigns an itinerary-request-specific TTL to optimize the bookings.
There are three major steps: price-duration extraction,
price-duration clustering and TTL assignment.
\subsection{Price-Duration Extraction}
Price-duration refers to how long each price stays unchanged. This is approximated by the time difference between two consecutive requests of the same itinerary that {Agoda}\ sends to the supplier. Figure~\ref{fig:ttl_extract} presents an example of extracting the price-duration distribution from empirical data for hotel $\texttt{Hilton}$
$\texttt{Amsterdam}$ and search criteria
$\langle \texttt{2019-10-01,2019-10-02}$, $\texttt{1,0,1} \rangle$.
{Agoda}\ first sends a request to the supplier at $\texttt{13:00}$ to fetch the price;
since this is the first time we fetch the price for this itinerary,
there is no price change and no price-duration is extracted.
Later, at $\texttt{13:31}$,
{Agoda}\ sends the second request to the supplier to fetch the price
and observes that the price has changed.
Hence, the price-duration for the previous price is 31 minutes
(the time difference between \texttt{13:00} and \texttt{13:31}).
Similarly, at $\texttt{14:03}$,
{Agoda}\ sends the third request to the supplier to fetch the price
and again observes that the price has changed.
Hence, the price-duration for the second price is 32 minutes.
Therefore, for each search criterion, we can extract an empirical price-duration distribution.
\begin{figure}
\centering
\includegraphics[height = 1.8in]{ttl_extraction.png}
\caption{Price duration extraction from empirical data}
\label{fig:ttl_extract}
\end{figure}
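A minimal sketch of this extraction step is shown below. It is our own illustration: it assumes a time-ordered list of (timestamp, price) pairs for one itinerary, with timestamps in minutes.
\begin{verbatim}
def price_durations(observations):
    """Approximate price-durations from consecutive supplier fetches
    of one itinerary, as in Figure (fig:ttl_extract)."""
    durations = []
    start_t, cur_price = observations[0]
    for t, price in observations[1:]:
        if price != cur_price:             # a price change was observed
            durations.append(t - start_t)  # lifetime of the previous price
            start_t, cur_price = t, price
    return durations

# Fetches at 13:00, 13:31 and 14:03 (minutes 0, 31, 63), each returning
# a new price, yield durations of 31 and 32 minutes as in the figure.
print(price_durations([(0, 100), (31, 105), (63, 98)]))   # [31, 32]
\end{verbatim}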
\subsection{Price-Duration Clustering}
In {Agoda}, we receive billions of such user searches every day.
It is practically intractable and unnecessary
to store TTLs for such a volume of search criteria in an in-memory cache, e.g. Redis or Memcached.
Therefore, we need to reduce the cardinality of the user searches,
and we do so through clustering.
Figure~\ref{fig:ttl_cluster} presents the price-duration clustering process.
In ${PriceAggregator}$, we used ${XGBoost}$~\cite{xgboost}
for feature ranking,
and the significant features are $\texttt{checkin}$ and $\texttt{price\_availability}$.
We observe that itinerary requests with the same $\texttt{checkin}$ and $\texttt{price\_availability}$ (whether the hotel is sold out or not)
have similar price-duration distributions.
Hence, we group all supplier requests with the same $\texttt{checkin}$ and $\texttt{price\_availability}$ into the same cluster, and use the aggregated price-duration distribution to represent the cluster.
By doing this, we dramatically reduce the cardinality to $\sim1000$ clusters,
whose TTLs can easily be stored in any in-memory data structure.
\begin{figure}[h]
\centering
\includegraphics[height = 1.4in]{TTL_Cluster.png}
\caption{Similar supplier requests are clustered together}
\label{fig:ttl_cluster}
\end{figure}
\subsection{TTL Assignment}
The previous section described the clustering.
Next, we need to assign a TTL to each cluster.
Recall that we want to optimize the bookings as expressed in Equation~\ref{eqn:booking},
and that the TTL affects both the cache hit rate ($P_D$ in Equation~\ref{eqn:booking})
and the booking price accuracy ($P_A$ in Equation~\ref{eqn:booking}).
Hence, we want to assign to each cluster the TTL for which Equation~\ref{eqn:booking} is optimised.
For cache hit, we can easily approximate the cache miss ratio curve~\cite{WWW2020} using the Cumulative Distribution Function (CDF) of the gap time (the time difference between the current request and the previous request for the same itinerary search).
Figure~\ref{fig:cdf} presents the CDF
of the gap time, where the x-axis is the gap time,
and the y-axis is the portion of requests whose gap time $\leq$ a specific gap time.
For example, $80\%$ of the requests have a gap time $\leq$ 120 minutes in Figure~\ref{fig:cdf}.
Hence, by setting the TTL to $120$ minutes, we can achieve an $80\%$ cache hit rate.
Therefore, the cache miss ratio curve as a function of the TTL can easily be found,
giving the approximate cache hit rate for each TTL we choose.
For booking accuracy of a cluster $C$, this can be approximated by
$$
\frac{\sum_{r_i \in C } \min(1,\frac{TTL_{r_i}}{TTL_{assigned}} )}{|C|}
$$
For example, in a specific cluster,
suppose the empirical price-durations observed are $120$ minutes and $100$ minutes, and we assign a TTL of $150$ minutes.
Then the cached price is accurate for $120$ and $100$ minutes, respectively, out of the $150$ minutes it is served.
Hence, the accuracy is $(\frac{120}{150} + \frac{100}{150})/2 = \frac{11}{15}$.
Hence, to optimize the bookings as expressed in Equation~\ref{eqn:booking},
we just need to enumerate different candidate TTLs in each cluster and pick the one that maximizes it; a sketch of this enumeration is given after Figure~\ref{fig:cdf}.
So far, we have completed the major steps of SmartTTL.
\begin{figure}[h]
\centering
\includegraphics[height=2in]{gap_time.png}
\caption{CDF of gap time. x-axis is the gap time in minutes.}
\label{fig:cdf}
\end{figure}
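The enumeration referred to above can be sketched as follows (our own illustration, not the production code): each candidate TTL is scored by the product of the estimated cache hit rate (the empirical CDF of gap times at that TTL) and the estimated price accuracy, mirroring the roles of $P_D$ and $P_A$ in Equation~\ref{eqn:booking}.
\begin{verbatim}
import numpy as np

def assign_ttl(gap_times, price_durations, candidate_ttls):
    """Pick the TTL maximizing (cache hit rate) x (price accuracy)
    for one cluster of itinerary requests (times in minutes)."""
    gaps = np.asarray(gap_times, dtype=float)
    durs = np.asarray(price_durations, dtype=float)
    best_ttl, best_score = None, -1.0
    for ttl in candidate_ttls:
        hit_rate = np.mean(gaps <= ttl)                  # empirical CDF at ttl
        accuracy = np.mean(np.minimum(1.0, durs / ttl))  # accuracy estimate
        if hit_rate * accuracy > best_score:
            best_ttl, best_score = ttl, hit_rate * accuracy
    return best_ttl
\end{verbatim}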
\section{From Passive Model to Aggressive Model}
As mentioned in Section 1, {SmartTTL } addresses Challenge 1.
Three more challenges remain untackled.
We can resolve Challenge 2 and Challenge 3
by guaranteeing that each data centre sends requests to the suppliers at a constant rate $\mu$.
Whenever the passive model sends $\mu_{passive}$ requests to a supplier,
where $\mu_{passive} < \mu$,
we proactively send an extra $\mu - \mu_{passive}$ requests to the supplier.
The question is how to generate these $\mu - \mu_{passive}$ requests.
Next,
we present one way of generating them.
\subsection{Aggressive Model with LRU Cache}
In this section,
we describe an aggressive model which sends requests to the supplier to fetch hotel prices.
These requests are generated from the auxiliary cache ${\mathcal{C}}_{LRU}$. There are two major steps:
\textbf{Cache building}.
The auxiliary cache ${\mathcal{C}}_{LRU}$ is built from historical user searches.
Each user search $s_i$ is always admitted into ${\mathcal{C}}_{LRU}$.
Once ${\mathcal{C}}_{LRU}$ reaches its specified maximum capacity,
it evicts the Least Recently Used (LRU) user search.
\textbf{Request pulling}.
At every second $t_i$,
the passive model needs to send $\mu_{passive}$ requests to the supplier,
and the supplier allows us to send $\mu$ requests per second.
Hence,
the aggressive model will send $\mu_{aggressive} = \mu - \mu_{passive}$ requests to the supplier.
To generate these $\mu_{aggressive}$ requests,
{Agoda}\ pulls from ${\mathcal{C}}_{LRU}$ the $\mu_{aggressive}$ requests that are about to expire (starting from the requests closest to expiry, until the budget $\mu_{aggressive}$ is used up).
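The two steps can be sketched with a standard ordered dictionary, as below. This is a simplified illustration (in particular, refreshing the expiry timestamp on admission is our own assumption), not the production code.
\begin{verbatim}
from collections import OrderedDict

class LRUItineraryCache:
    """Auxiliary cache C_LRU for the aggressive model."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.expiry = OrderedDict()   # itinerary -> expiry timestamp

    def admit(self, itinerary, expires_at):
        """Cache building: always admit; evict the LRU entry at capacity."""
        if itinerary in self.expiry:
            self.expiry.move_to_end(itinerary)   # mark as recently used
        self.expiry[itinerary] = expires_at
        if len(self.expiry) > self.capacity:
            self.expiry.popitem(last=False)      # evict least recently used

    def pull(self, budget):
        """Request pulling: the `budget` itineraries closest to expiry."""
        return sorted(self.expiry, key=self.expiry.get)[:budget]
\end{verbatim}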
It is obvious that the above approach solves Challenge 2 and Challenge 3.
Moreover, it can also help improve the cache hit by requesting hotel prices
before a user searches for them.
However, this is not optimal.
For example,
a specific hotel could be very popular;
however, if the hotel is not price competitive,
then {Agoda}\ does not need to waste QPS
pulling that hotel's price from the supplier.
In the next section, we introduce an aggressive model which optimizes the bookings.
\subsection{Aggressive model with SmartScheduler}
As mentioned, the aggressive model with LRU cache is not optimal.
Moreover, so far the passive model has always had the highest priority,
meaning the aggressive model only sends requests to the supplier if there is spare QPS left.
However, this is again not optimal.
In this section, we present an aggressive model which optimizes the bookings.
It has 5 major steps.
\textbf{Itinerary frequency calculation}. This describes how many times an itinerary needs to be requested to ensure it is always available in the database.
If we want a high cache hit rate, we want an itinerary $r_i$ to always be available in the database,
which means we need to make sure that $r_i$ is re-fetched before it expires.
Moreover, for each $r_i$,
we have the generated $TTL_{r_{i}}$.
Hence, to make sure an itinerary $r_i$ is always available in the database $D$
for 24 hours (1440 minutes),
we need to send $f_{r_i}$ requests to the supplier, where $f_{r_i}$ is
\begin{equation}
f_{r_i}=\left \lceil \frac{1440}{TTL_{r_i}} \right \rceil
\label{eqn:itinerary_frequency}
\end{equation}
\textbf{Itinerary value evaluation}. This evaluates the value of an itinerary by the probability of booking from this itinerary.
With the above itinerary frequency calculation, we can assume an itinerary request is always a 'hit' in the database. Hence, in this step, we evaluate the itinerary value given that the itinerary is always available in our Price DB.
That is, every user search $s_i$ on the itinerary $r_i$, denoted $s_i \in r_i$,
will always be a cache hit, i.e. ${P_D(s_i)}=1$.
Recall from Equation~\ref{eqn:booking},
for each itinerary request $r_i$,
we have now the expected number of bookings as
\begin{equation}
E[K_{r_i}] = \sum_{s_i \in r_i} {P_D(s_i)} \times {P_B(s_i)} \times {P_A(s_i)} = \sum_{s_i \in r_i} {P_B(s_i)} \times {P_A(s_i)}
\label{eqn:itinerary_value}
\end{equation}
\textbf{Request value evaluation}. This evaluates the value of a request by the probability of booking from this request.
By Equation~\ref{eqn:itinerary_value} and Equation~\ref{eqn:itinerary_frequency},
we can have the expected bookings per supplier request as
\begin{equation}
\frac{E[K_{r_i}]}{f_{r_i}}
\label{eqn:bookingperrequest}
\end{equation}
\textbf{Top request generation}. This selects the top requests according to their values.
Within a day, for a specific supplier,
we are allowed to send $M=\mu \times 60 \times 60 \times 24$ requests to the supplier.
Therefore, by Equation~\ref{eqn:bookingperrequest},
we can rank the supplier requests and pick the most valuable $M$ requests.
\textbf{Top request scheduling}. This describes how to schedule the top requests we selected.
Given that we have $M$ requests to be sent to the supplier,
we need to make sure that
(1) each of these requests is sent to the supplier before its previous request expires, and
(2) at every second, we send exactly $\mu$ requests to the supplier.
For all itinerary requests, we group the itinerary requests by their frequency,
where $G(f_i) = [r_1, r_2, r_3, \dots, r_k]$ and the itinerary requests $r_1, r_2, r_3, \dots, r_k$ all have frequency $f_i$ and the same $TTL_{i}$. This means that every request $r_j$, $j = 1, 2, 3, \dots, k$, is scheduled to be sent $f_i$ times, and all $k$ itinerary requests $r_1, r_2, r_3, \dots, r_k$ are to be sent within a period of $TTL_{i}$.
To ensure that each of these $k$ itinerary requests is sent within a period of $TTL_{i}$, we can simply distribute the requests $r_1, r_2, r_3, \dots, r_k$ evenly over the seconds in $TTL_i$. Thus, we schedule $\frac{k}{TTL_i}$ requests per second within $TTL_i$, send the same set of requests every $TTL_i$, and repeat this process $f_i$ times. For example, if $G(4)=[r_1, r_2, \dots, r_{43200}]$,
then we have 43200 itinerary requests with frequency $4$ and $TTL$ = $6$ hours, i.e. 21600 seconds. That means that every 6 hours we need to schedule 43200 itinerary requests, which is $\frac{43200}{21600} = 2$ requests per second. That is, if we do not consider any particular ordering of the 43200 requests, we send requests $r_1$ and $r_2$ in the 1st second, $r_3$ and $r_4$ in the 2nd second, and so on, until $r_{43199}$ and $r_{43200}$ in the 21600th second. In the 21601st second, $r_1$ and $r_2$ are sent again, and so on. These 43200 itinerary requests are thus each sent 4 times in a single day. A sketch of this scheduling loop is given below.
With the above 5 steps, the most valuable $M$ requests are sent by SmartScheduler, which maximizes the bookings.
\section{Experiment and Analysis}
The aggressive model with SmartScheduler has been deployed in production at {Agoda}.
The deployment has yielded significant gains in bookings and other system metrics.
Before the deployment,
we conducted extensive online A/B experiments to evaluate the effectiveness of the model.
In the following sections, we present the experiments conducted in 2019.
As {Agoda}\ is a publicly listed company,
we cannot reveal the exact number of bookings due to data sensitivity,
but we try to be as informative as possible.
Overall, the aggressive model with SmartScheduler outperforms the other baseline algorithms by $10\%$ to $30\%$.
\subsection{Experimentation suppliers}
There are two types of suppliers {Agoda}\ has experimented with:
\begin{enumerate}
\item \textbf{Retailers}.
Retailers are suppliers whose market managers from each OTA deal with hotels directly; they sell hotel rooms online.
\item \textbf{Wholesalers}. Wholesalers are suppliers that sell hotels, hotel rooms and other products in large quantities at a lower price (package rate), mainly to B2B customers or retailers rather than direct consumers.
\end{enumerate}
In this paper, we present the results from 5 suppliers.
\textit{ \textbf{Supplier A} } is a Wholesaler supplier which operates in Europe.
\textit{ \textbf{Supplier B} } is a Wholesaler supplier which operates worldwide.
\textit{ \textbf{Supplier C} } is a Wholesaler supplier which operates worldwide.
\textit{ \textbf{Supplier D} } is a Retailer supplier which operates in Japan.
\textit{ \textbf{Supplier E} } is a Retailer supplier which operates in Korea.
In this section, all the experiments were conducted through online A/B experiments over 14 days,
where half of the allocated users experience algorithm A and
the other half experience algorithm B.
Moreover, for all the plots in this section,
\begin{itemize}
\item x-axis is the nth day of the experiment.
\item bar-plot represents the bookings and line-plot represents the cache hit.
\end{itemize}
\subsection{Fixed TTL v.s. SmartTTL}
In this section, we compare the performance of the passive model with a fixed TTL (A) against the passive model with SmartTTL (B).
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierA.png}
\caption{A/B Experiment on Supplier A}
\label{fig:expa}
\end{figure}
Figure~\ref{fig:expa} presents the results on Supplier A,
and we can see that B variant wins A variant by a small margin.
Overall, the B variant wins by $2-4\%$ for cache hit and $\sim2\%$ for bookings. This is expected, as SmartTTL only addresses Challenge 1.
\subsection{SmartTTL v.s. Aggressive Model with SmartScheduler}
In this section, we compare the performance of the passive model with SmartTTL (A) against the aggressive model with SmartScheduler (B).
We present the A/B experiment results on
Supplier C and Supplier E.
Figure~\ref{fig:expc} presents the results on Supplier C,
and we can see that the B variant wins the A variant significantly in terms of bookings and cache hit ratio.
For both cache hit and bookings, the B variant wins the A variant consistently.
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierC.png}
\caption{A/B Experiment on Supplier C}
\label{fig:expc}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierE.png}
\caption{A/B Experiment on Supplier E }
\label{fig:expe}
\end{figure}
Figure~\ref{fig:expe} presents the results on Supplier E,
and we can see that the B variant wins the A variant significantly in terms of bookings and cache hit ratio.
For cache hit, the B variant wins the A variant consistently.
For bookings, we can see that B never loses to A on any single day.
\subsection{Aggressive Model with LRU Cache v.s. Aggressive Model with SmartScheduler}
In this section, we compare the performance between the aggressive model with LRU cache (A) and the aggressive model with SmartScheduler (B).
We present the A/B experiment results for
Supplier B and Supplier D.
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierB.png}
\caption{A/B Experiment on Supplier B}
\label{fig:expb}
\end{figure}
Figure~\ref{fig:expb} presents the results on Supplier B,
and we can see that the B variant wins the A variant significantly in terms of bookings and cache hit ratio.
For cache hit, the B variant wins the A variant consistently.
It is worthwhile to note that the overall bookings decline along the x-axis;
this could be caused by many factors such as promotions from competitors, seasonality, etc.
However, the B variant still wins the A variant by a consistent margin.
\begin{figure}
\centering
\includegraphics[height=1.8in]{SupplierD.png}
\caption{A/B Experiment on Supplier D}
\label{fig:expd}
\end{figure}
Figure~\ref{fig:expd} presents the results on Supplier D,
and we can see that the B variant wins the A variant significantly in terms of bookings and cache hit ratio.
For cache hit, the B variant wins the A variant consistently.
For bookings, we can see that B consistently wins A by more than $10\%$,
and on certain days, e.g., day 5, B wins by more than $50\%$.
\section{Related Work}
The growth of the travel industry has attracted substantial academic attention~\cite{airbnb,bcom, europricing}.
To increase revenue, much effort has been spent on enhancing pricing strategies.
Aziz et al. proposed a revenue management system framework based on price decisions which optimizes the revenue~\cite{roompricing}.
The authors in~\cite{airbnb} proposed Smart Price, which improves room bookings by guiding the hosts to price the rooms in Airbnb.
As long-term stays are becoming more common, Ling et al.~\cite{long_stay} derived the optimal pricing strategy for long-term stays, which is beneficial to hotels as well as their customers.
Similar efforts in using pricing strategies to increase revenue can be seen in~\cite{noone2016pricing,dynamicpricing}.
Apart from pricing strategy,
some effort has been spent on overbooking~\cite{toh2002hotel,koide}.
For example, Antonio et al.~\cite{ANTONIO2017} built prediction models for predicting cancellation of booking to mitigate revenue loss derived from booking cancellations.
Nevertheless, none of the existing work has studied hotel price fetching strategy.
To the best of our knowledge, we are the first to deploy an optimized price fetching strategy which increases the revenue by a large margin.
\section{Conclusion and Future Work}
In this paper, we presented ${PriceAggregator}$,
an intelligent hotel price fetching system which optimizes the bookings.
To the best of our knowledge,
${PriceAggregator}$ is the first productionized system which addresses the 4 challenges mentioned in Section 1.
It differs from most existing OTA system by having SmartTTL which determines itinerary specific TTL.
Moreover, instead of passively sending requests to suppliers,
${PriceAggregator}$ aggressively fetches the most valuable hotel prices from suppliers which optimizes the bookings.
Extensive online experiments show that
${PriceAggregator}$ is not only effective in improving system metrics like cache hit,
but also grows the company's revenue significantly.
We believe that ${PriceAggregator}$ is a rewarding direction for application of data science in OTAs.
One of the factors which drives bookings is pricing.
In the future, we will explore how to optimize the bookings through a hybrid of pricing strategy and pricing fetching strategy.
\tiny
\printbibliography
\end{document}
|
1,116,691,498,352 | arxiv | \section{Introduction}
Recent studies of rare-earth-based metallic systems have revealed novel electronic states arising from a complex interplay of magnetism and electron-band topology~\cite{hirschberger2016chiral,shekhar2018anomalous,borisenko2019time,soh2019magnetic,wang2016anisotropic,jo2020manipulating,riberolles2020magnetic}. EuMg$_2$Bi$_2$ is one such system that undergoes antiferromagnetic (AFM) ordering below a N\'eel temperature $T_{\rm N} \approx 6.7$~K~\cite{Pakhira2020,may2011structure, kabir2019observation} and is also reported to host multiple Dirac points located at different energies with respect to the Fermi energy~\cite{kabir2019observation}. Various topological states of EuMg$_2$Bi$_2$\ (such as axion or Weyl states) are dependent on the nature of the magnetic order since time-reversal symmetry breaking and magnetic crystalline symmetry may gap or split the Dirac points.
EuMg$_2$Bi$_2$\ crystallizes in the trigonal CaAl$_2$Si$_2$-type crystal structure~\cite{zheng1986site} (space group $P\bar{3}m1$, No. 164), where the Eu atoms form a triangular lattice in the $ab$~plane with simple hexagonal-stacking along the $c$~axis. Recently, our anisotropic magnetic susceptibility $\chi(T)$ data measured in a magnetic field $H= 1$~kOe demonstrated that both the in-plane and out-of-plane magnetic susceptibilities are almost temperature independent below $T_{\rm N}$~\cite{Pakhira2020}. Using our recent formulation of molecular-field theory~\cite{johnston2012magnetic,johnston2015unified}, it has been proposed that the magnetic structure below $T_{\rm N}$ is a $c$-axis helix with a turn angle of $\approx 120^\circ$ between adjacent Eu layers in which the Eu spins are ferromagnetically aligned in the $ab$~plane in each Eu layer~\cite{Pakhira2020}.
Here, we report neutron-diffraction measurements on single-crystal EuMg$_2$Bi$_2$ and determine the zero-field Eu$^{2+}$ spin $S=7/2$ magnetic structure below $T_{\rm N}$ to be \mbox{A-type} AFM order with the moments aligned in the $ab$~plane. We also present $\chi(T)$ results in a low magnetic field $H = 100$~Oe that are consistent with the magnetic structure obtained from neutron diffraction measurements in zero field. The difference between the present AFM structure and that inferred from the previous $\chi(T)$ measurements in $H = 1$~kOe, which reported a $120^\circ$ helical structure \cite{Pakhira2020}, implies that the magnetic texture (i.e., structure and/or domains) is sensitive to the strength of the applied magnetic field and requires additional neutron-diffraction measurements under magnetic field for confirmation.
The experimental details and methods are presented in Sec.~\ref{Sec:ExpDet}. The neutron diffraction measurements and analyses are discussed in Sec.~\ref{Sec:Neutron} and the $\chi(T)$ measurements in Sec.~\ref{Sec:MagSus}. The results are summarized in Sec.~\ref{Sec:Conclu}.
\section{\label{Sec:ExpDet} Experimental Details and Methods}
EuMg$_2$Bi$_2$\ single crystals with hexagonal lattice parameters $a = 4.7724(3)$ and $c = 7.8483(5)$~\AA~\cite{Pakhira2020} were grown by a self-flux method with starting composition EuMg$_4$Bi$_6$ as described previously~\cite{may2011structure}. The $\chi(T)$ measurements were carried out using a magnetic-properties measurement system (MPMS, Quantum Design, Inc.) in the temperature range 1.8--300~K\@. A $\sim 50$~mg crystal was cut into two pieces having masses $\sim 10$~mg and $\sim 40$~mg. The 10~mg piece was used for the magnetization measurements and the 40~mg piece was used for neutron diffraction experiments.
Single-crystal neutron-diffraction experiments were performed in zero applied magnetic field using the TRIAX triple-axis spectrometer at the University of Missouri Research Reactor (MURR). An incident neutron beam of energy $E_i = 30.5$ meV or 14.7 meV was directed at the sample using a pyrolytic graphite (PG) monochromator. Elastic scattering data were acquired with $E_i = 30.5$ meV in order to reduce the absorption caused by highly absorbing Eu, whereas $E_i = 14.7$~meV was used to improve the resolution in a search for possible peaks associated with an incommensurate magnetic structure. A PG analyzer was used to reduce the background. Neutron wavelength harmonics were removed from the beam using PG filters placed before the monochromator and in between the sample and analyzer. Beam divergence was limited using collimators of $60^\prime-60^\prime-40^\prime-40^\prime$ placed before the monochromator, between the monochromator and sample, between the sample and analyzer, and between the analyzer and detector, respectively.
A 40~mg EuMg$_2$Bi$_2$\ crystal was mounted on the cold tip of an Advanced Research Systems closed-cycle refrigerator with a base temperature of 4~K\@. The crystal was aligned in the $(HHL)$ and $(H0L)$ scattering planes whereupon a wide range of reciprocal space was accessible for our comparative diffraction study above (10~K) and below (4~K) $T_{\rm N} = 6.7$~K\@. Reciprocal space was searched extensively using a series of $H$-, $HH$-, and $L$-scans as well as mesh scans in order to identify any commensurate or incommensurate wave vectors that might be present.
\section{\label{Sec:Neutron} Neutron diffraction}
\begin{figure}
\centering
\includegraphics[width=3.4 in]{00l-peaks.pdf}
\caption{(a) Diffraction pattern along $(00L)$ of single-crystal EuMg$_2$Bi$_2$\ at 4 and 10 K as indicated. Aluminum Bragg reflections are from the sample holder. (b) Difference between the $(00L)$ patterns taken at 4 K and 10 K\@. (c) Difference between the $(10L)$ patterns taken at 4 K and at 10 K\@. (d) Difference between the $(11L)$ patterns taken at $T=4$~K and 10 K\@. All three difference patterns show clear magnetic peaks at half-integer $L$ up to $L=3.5$, consistent with A-type AFM, i.e., the $H = 0$ ground state is such that the intraplane ordering is ferromagnetic while adjacent layers are aligned antiferromagnetically.}
\label{Fig:00l}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.4 in]{hh0-peaks.pdf}
\caption{Difference between the pattern taken at $T = 4$~K and that at 10 K for (a) $(H 0 0)$ and for (b) $(H H 0)$ with no indication of nonferromagnetic in-plane magnetic ordering of EuMg$_2$Bi$_2$, that together with Fig.\ \ref{Fig:00l} indicate simple A-type antiferromagnetism. The noise in (b) is due to thermal changes of the Bragg peaks, most prominent are those from the Al can containing the sample.}
\label{Fig:hh0}
\end{figure}
Figure~\ref{Fig:00l}(a) shows diffraction scans along $(00L)$ at 4 and 10 K, where reflections at half-integer $L$ values are apparent at $T = 4$~K\@. For more clarity, Fig.~\ref{Fig:00l}(b) shows the difference between these two scans, where within experimental uncertainty there is no evidence for other reflections associated with a modulated structure along the $c$ axis. Similar differences [i.e., $I$(4 K) $-$ $I$(10 K)] for scans along $(10L)$ and $(11L)$, shown in Figs.~\ref{Fig:00l}(c) and~\ref{Fig:00l}(d), respectively, also reveal new peaks at half-integer $L$ values. Qualitatively, these newly emerging Bragg reflections indicate the doubling of the unit cell along the $c$~axis. We also note that the intensities of the new peaks become weaker at larger $L$ values and also as the total momentum transfer $Q$ gets larger [i.e., $Q_{(11L)} > Q_{(10L)}> Q_{(00L)}$], roughly following the falloff expected from the magnetic form factor. These qualitative observations unequivocally establish that these peaks are associated with A-type AFM ordering with AFM propagation vector $\vec{\tau} = \left(0,0,\frac{1}{2}\right)$ (in reciprocal-lattice units) consisting of ferromagnetic layers with moments aligned in the $ab$~plane that are stacked antiferromagnetically. The $\chi(T)$ data in the following section confirm that the ordered moments lie in the $ab$~plane.
To confirm the in-plane ferromagnetic (FM) structure we carried out more comprehensive scans to search for additional magnetic peaks. In particular, Fig.~\ref{Fig:hh0} shows that no additional magnetic peaks are observed in the difference between scans taken at 4 and 10~K along $(H00)$ (a) and $(HH0)$ (b), consistent with a single AFM propagation vector $\vec{\tau} = \left(0,0,\frac{1}{2}\right)$. The sharp features in these difference scans are artifacts of the subtraction caused by slight shifts in nuclear Bragg-peak positions due to thermal expansion upon heating. We also performed scans in the $(HHL)$ and $(H0L)$ planes and found additional peaks only at the expected half-integer $L$~positions.
A mean-field analysis of previous single-crystal $\chi(T)$ measurements with \mbox{$H = 1$~kOe} (as opposed to the zero applied magnetic field for the present neutron-diffraction experiments) indicated a $c$-axis helical magnetic ground state where each adjacent Eu-moment layer is ferromagnetically-aligned in the $ab$~plane and rotated by $\approx 120^\circ$ with respect to its nearest-neighbor (NN) Eu layers~\cite{Pakhira2020}. If present, such a magnetic structure would give rise to a magnetic unit cell three times that of the chemical unit cell along the $c$ axis, and would be manifested by extra magnetic Bragg reflections shifted from the nuclear Bragg positions by $\pm 1/3$. To search for such reflections or other helical magnetic structures, we conducted scans around prominent magnetic peaks using $E_i = 14.7$ meV. Figure~\ref{Fig:Mesh} shows a $(H0L)$ 2D map of the intensity at $T = 4$~K minus that taken at 10~K\@. As shown, we only find peaks at (0~0~1.5) and at (1~0~1.5) associated with A-type AFM order and observe no other features, in particular no peaks are found at $L \pm 1/3$ that would correspond to the 120$^\circ$ rotation between NN layers. Nevertheless, we note that the magnetic Bragg reflections are elongated along the $(00L)$ direction beyond the instrumental resolution. Such a shape may arise from stacking faults of the FM layers.
\begin{figure}
\centering
\includegraphics[width=2.75 in]{Mesh-17.pdf}
\caption{2D $(00L)$ $(H00)$ mesh at $E_i=14.7$ meV measured at 4 K after subtracting a similar mesh taken at 10 K, i.e., in the paramagnetic state above $T_{\rm N}$. The reflections (0 0 1.5) and (1 0 1.5) are purely magnetic peaks due to a 180$^\circ$ rotation between adjacent layers. The absence of other features in the mesh constitutes evidence that there is no 120$^\circ$ helical order at zero applied magnetic field.}
\label{Fig:Mesh}
\end{figure}
\begin{figure}
\includegraphics[width=3. in]{Fig-Structure.pdf}
\caption{Chemical and A-type AFM spin structure of EuMg$_2$Bi$_2$. (a) FM spins are aligned towards NN and (b) towards NNN. (a1) and (b1) show the corresponding projection of a single layer on the $ab$-plane. Our neutron diffraction data are insensitive to the direction of the FM moment in the plane.}
\label{Fig:structure}
\end{figure}
The proposed A-type AFM structure is shown in Fig.~\ref{Fig:structure}, where adjacent NN FM layers are rotated by 180$^\circ$ with respect to each other. The direction of the FM moment in the Eu layer cannot be determined from neutron diffraction alone. Thus, in Fig.~\ref{Fig:structure} we show two possible magnetic structures where the moments are pointing towards their in-plane NN (a,a1) or towards their next nearest neighbor (NNN) (b,b1) (there are no additional possibilities according to the Bilbao crystallographic server~\cite{Mato2015}). Using published values for the structural parameters, we obtain good agreement with the intensities of the nuclear Bragg peaks, both above and below $T_{\rm N}$. On this basis, we are able to confirm the A-type magnetic structure and obtain an estimate for the ordered magnetic moment $\mu = \langle gS\rangle\,\mu_{\rm B}$ at $T = 4$ K using the FullProf software~\cite{RODRIGUEZCARVAJAL1993}.
Individual Bragg peaks measured by $\theta$-$2\theta$ scans were fit to Gaussian lineshapes to determine their integrated intensities which were then corrected for the geometric Lorentz factor. To account for the significant neutron absorption cross section of Eu, we use the \textsc{mag2pol} \cite{Qureshi2019} software, by supplying the approximate sample shape as a plate of dimensions $2\times2\times0.5$ mm$^3$. For the refinement of the chemical structure with space group $P{\bar 3}m1$ we used published structural parameters~\cite{may2011structure,Pakhira2020} which we find are in good agreement with our refinement. As noted above, the possible magnetic structures that can occur with a second order phase transition from space group $P{\bar 3}m1$ to AFM order with propagation vector $\vec{\tau}= \left(0, 0, \frac{1}{2}\right)$ are consistent with antiparallel $c$~axis stacking of FM layers (A-type AFM order). In our analysis of the magnetic structure, we use the $C_c2/m$ (\# 12.63) symmetry~\cite{Gallego2019} [this is the magnetic structure shown in Fig.\ \ref{Fig:structure}(a) with magnetic moments directed towards NN], and note that our diffraction data eliminate any other minimum symmetry reduction.
Our refinement of the magnetic structure also yields an average magnetic moment $\mu = \langle gS\rangle\,\mu_{\rm B} = (5.3\pm0.5)\,\mu_{\rm B}$ at $T = 4$~K. This value is smaller than the zero-temperature ordered moment $\mu = 7\,\mu_{\rm B}$ expected from the electronic configuration of Eu$^{2+}$~\cite{Cable1977} with $S=7/2,\ L = 0$ and $g=2$ because $\mu$ is not yet saturated to its full $T=0$ value at $T = 4$~K. Figure~\ref{Fig:OP}(a) shows the integrated intensity of the (0 0 0.5) magnetic peak as a function of temperature, where we use a simple power-law function $I_{\rm (0\,0\, 0.5)}(T) = C|1-T/T_{\rm N}|^{2\beta} \propto \mu^2$ to fit the data (solid line with sharp transition). The smooth line around {$T_{\textrm N}$} is obtained by the same power law but weighted by a Gaussian distribution of {$T_{\textrm N}$} (this form is sometimes used to account for crystal inhomogeneities), yielding $T_{\rm N} = 6.2 \pm 0.4$ and $\beta = 0.40 \pm 0.05$. The temperature probe in the neutron diffraction measurements is placed outside the helium-filled aluminum can holding the crystal, likely recording temperatures that are slightly lower than that of the sample. This may explain the discrepancy with the {$T_{\textrm N}$}\ measured by the magnetic susceptibility. Most importantly, the phenomenological fits show that the order parameter is still increasing at $T = 4$~K and not close to its saturated value. Indeed, Fig.~\ref{Fig:OP}(b) shows the square root of the data in Fig.~\ref{Fig:OP}(a) after subtracting the background and normalizing the value at $T = 4$ K to the extracted average magnetic moment $\mu(4~ {\rm K}) = 5.3$ $\mu_{\rm B}$. Extrapolating the power law to $T=0$ yields $(9.5 \pm 1)$ $\mu_{\rm B}$. This approach overestimates the expected 7 $\mu_{\rm B}$ at $T=0$ because the phenomenological power-law fit is only accurate just below $T_{\rm N}$~\cite{johnston2015unified}.
\begin{figure}
\centering
\includegraphics[width=3. in]{OP-17.pdf}
\caption{(a) Integrated intensity as a function of temperature $T$ of the (0 0 $\frac{1}{2}$) magnetic Bragg reflection and (b)~calculated ordered moment~$\mu$ $versus$ $T$, with a power-law fit (solid green line) indicating $T_{\rm N} = (6.2 \pm 0.4)$~K\@. The red curves in (a) and~(b) assume a Gaussian distribution of $T_{\rm N}$.}
\label{Fig:OP}
\end{figure}
\section{\label{Sec:MagSus} Magnetic Susceptibility}
\begin{figure}[ht!]
\includegraphics[width=3. in]{Fig_chi.pdf}
\caption{Temperature dependence of the zero-field-cooled (ZFC) magnetic susceptibility of EuMg$_2$Bi$_2$\ measured in a low applied magnetic field $H = 100$~Oe with the field aligned in the $ab$-plane ($H \parallel ab$) and along the $c$-axis ($H \parallel c$): (a) $\chi(T)$ and (b) the normalized susceptibility $\chi(T)/\chi(T_{\rm N})$. In panel~(b), the two data sets for $H \parallel ab$ and $H \parallel c$ perfectly overlap in the paramagnetic regime with $T \geq T_{\rm N}$\@.}
\label{Fig:chi}
\end{figure}
We presume that the difference between the zero-field A-type AFM order determined from neutron diffraction and the reported 120$^{\circ}$ helical order inferred from bulk susceptibility is caused by the application of a magnetic field~\cite{Pakhira2020}. Accordingly, we measured the temperature dependence of the magnetic susceptibility $\chi \equiv \frac{M}{H}$ at a very low field of $H = 100$~Oe to better approximate the zero-field conditions of our neutron diffraction data (on the same piece of single crystal), as shown in Fig.~\ref{Fig:chi}(a) for $H$ aligned in the $ab$~plane ($\chi_{ab}$) and along the $c$~axis ($\chi_c$). The compound orders antiferromagnetically below the N\'eel temperature $T_{\rm N} \approx 6.7$~K, as reported earlier~\cite{Pakhira2020, may2011structure, kabir2019observation}. Although $\chi_c$ in Fig.~\ref{Fig:chi}(a) is nearly independent of $T$ below $T_{\rm N}$, $\chi_{ab}(H=100$~Oe) in Fig.~\ref{Fig:chi}(a) decreases by about a factor of two upon cooling from $T_{\rm N}$ to 1.8~K\@.
To clarify the nature of the ground-state magnetic structure, we analyzed the low-field $\chi(T)$ data in Fig.~\ref{Fig:chi}(a) using unified molecular-field theory (MFT)~\cite{johnston2012magnetic,johnston2015unified}. This theory holds for systems of identical crystallographically-equivalent Heisenberg spins interacting by Heisenberg exchange, and the magnetic properties are calculated from the exchange interactions between an arbitrary spin and its neighbors. According to the MFT, for a $c$-axis helix $\chi_c$ is independent of $T$ below $T_{\rm N}$, as seen to be approximately satisfied in Fig.~\ref{Fig:chi}(a). However, $\chi_{ab}$ depends on the turn angle $kd$ of the $c$-axis helix, and the two are related by
\begin{eqnarray}
\frac{\chi_{Jab} (T = 0)}{\chi_J (T_{\rm N})} = \frac{1}{2[1 + 2\cos(kd) + 2\cos^2(kd)]},
\label{Eq.Turnangle}
\end{eqnarray}
where $k$ is the magnitude of the $c$-axis helix wave vector in reciprocal-lattice units, $d$ is the distance between the magnetic layers along the $c$~axis, and the subscript $J$ indicates that the anisotropy in $\chi (T \geq T_{\rm N})$ has been removed by spherically averaging the anisotropic \mbox{$\chi (T \geq T_{\rm N})$} data; hence the Heisenberg interactions~$J$ determine the resulting behavior of the spherically-averaged magnetic susceptibility above~$T_{\rm N}$\@.
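As a consistency check of Eq.~(\ref{Eq.Turnangle}) (our own arithmetic, not additional data), the two turn angles relevant to this work give
\begin{eqnarray}
kd = 180^\circ: && \frac{\chi_{Jab}(T=0)}{\chi_J(T_{\rm N})} = \frac{1}{2\,[1-2+2]} = \frac{1}{2}, \nonumber \\
kd = 120^\circ: && \frac{\chi_{Jab}(T=0)}{\chi_J(T_{\rm N})} = \frac{1}{2\,[1-1+\frac{1}{2}]} = 1, \nonumber
\end{eqnarray}
so a low-field ratio near $1/2$ singles out A-type (180$^\circ$) stacking, whereas a temperature-independent $\chi_{ab}$ below $T_{\rm N}$ would instead indicate a $120^\circ$ helix.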
Figure~\ref{Fig:chi}(b) depicts the normalized susceptibility $\chi(T)/\chi(T_{\rm N})$ of EuMg$_2$Bi$_2$\ for $H \parallel ab$ and $H \parallel c$, respectively, obtained from the data in Fig.~\ref{Fig:chi}(a). It is evident that $\chi_{ab}(1.8~{\rm K})/\chi(T_{\rm N}) \approx 0.5$, yielding a turn angle $kd \approx 180^\circ$ from Eq.~(\ref{Eq.Turnangle}). This turn angle corresponds to A-type AFM order, in agreement with the above analysis of the neutron-diffraction measurements below $T_{\rm N}$ in zero applied field. The same value of $\chi_{ab}(1.8~{\rm K})/\chi(T_{\rm N}) \approx 0.5$ at $T=0$ is obtained from a calculation for equal populations of three collinear AFM domains oriented at 120$^\circ$ from each other. We also note that good fits to $\chi_{ab}(T)$ data obtained in $H=1$~kOe for EuCo$_2$P$_2$ and EuNi$_{1.95}$As$_2$ crystals with the tetragonal ${\rm ThCr_2As_2}$ crystal structure were obtained for $c$-axis helical structures with turn angles in good agreement with the respective $c$-axis helical structures previously obtained from zero-field neutron-diffraction measurements~\cite{Sangeetha2016, Sangeetha2019}.
\section{\label{Sec:Conclu} Conclusion}
EuMg$_2$Bi$_2$ has drawn interest as it exhibits electronic topological properties that give rise to Dirac-like bands near the Fermi level. The presence of the large-spin element Eu$^{2+}$ in the compound makes it attractive since magnetic order can introduce a gap or lower the degeneracy of the Dirac-like bands to create more exotic states, for instance Weyl states. Here, we use zero-field single-crystal neutron diffraction and low-field magnetic susceptibility measurements to determine the magnetic ground state of this system.
The neutron-diffraction experiments reveal that the intraplane ordering of Eu$^{2+} (S= 7/2$) is ferromagnetic with $ab$-plane alignment and that adjacent layers are stacked antiferromagnetically (i.e., A-type AFM order). Our detailed analysis also confirms that the ordered magnetic moment, as $T$ approaches 0 K, attains its expected value of $\sim 7\,\mu_{\rm B}$/Eu. The temperature-dependent magnetic susceptibility measurements at a very low magnetic field applied along the $c$-axis and in the $ab$-plane are consistent with the A-type antiferromagnetism below $T_{\rm N} = 6.7$~K and also with the moments being aligned in the $ab$ plane. We note that close examination of the magnetic Bragg-reflection peak shapes reveals broadening along the $(0 0 L)$ direction, indicating imperfect correlations between the antiparallel-stacked FM layers. Previous $\chi(T)$ measurements in $H = 1$~kOe indicated that the magnetic structure is a $c$-axis helix with a $120^\circ$ turn angle instead of the A-type AFM structure (180$^\circ$ $c$-axis helix) obtained from our zero-field neutron-diffraction measurements. Neutron-diffraction studies under applied magnetic fields are required to confirm the evolution of the magnetic structure with field inferred from our zero-field neutron-diffraction measurements and the 1~kOe magnetic-susceptibility measurements and are planned for the future.
\acknowledgments
This research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. S.X.M.R.\ and B.U.\ are supported by the Center for Advancement of Topological Semimetals, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences, through Ames Laboratory. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No.~DE-AC02-07CH11358.
|
1,116,691,498,353 | arxiv | \section{Introduction}
Virtual Testing is one of the crucial steps in the assessment of performance and safety of Autonomous Vehicles (AV).
The core component in this step is a high-fidelity virtual simulator.
These simulators provide a virtual world in which a virtual vehicle can navigate, including the physics engine that model its dynamics.
Given this setup, if this vehicle is controlled by the same Automated Driving System (ADS) software \cite{standard2021j3016} as would be operating on an actual AV, it is possible to make an in-depth study of how the AV would react to specific traffic scenarios.
However, the design of the traffic scenarios is a complex, tedious, and non-standard procedure.
In particular, traffic scenarios can be compared to a scene where the actors (other vehicles or pedestrians) perform their part, and the ego vehicle (the system under test) has to react accordingly.
For example, a simple scenario could involve a pedestrian that, upon the approach of the AV, starts to cross the road directly into its path. This will require a timely reaction from the AV; otherwise, there will be either a collision or a near miss, both outcomes being undesirable.
In this context, it is trivial to observe that the validity of the whole testing procedure is only as good as the test cases used.
Furthermore, the implementation of a designed test case can be hindered by the capability and constraints of tools available, which leads to additional limitations in the overall virtual testing.
This paper stems from the 2021 IEEE Autonomous Test Driving AI Test Challenge \cite{avchallenge}; hence,
the focus of this study is to conduct virtual testing of Baidu Apollo \cite{Apollo} using the SVL \cite{svl} simulation platform.
In order to achieve this, we have developed ViSTA, a framework and the corresponding tools to facilitate the design of scenario testing, scenario execution and analysis.
Our contributions can be summarized as follows:
\begin{itemize}
\item Developed a scenario-generation framework that facilitates the design of tests with varying degrees of automation, ranging from random to manual (controlled) tests.
\item Developed an automated Scenario-execution framework, built on top of the default SVL Python interface, with additional capabilities for varying dynamics of the actors.
\item Generated test cases with our framework from both existing scenario databases and newly crafted scenarios, complemented by expert knowledge and analysis of the Operational Domain (OD) and Operational Design Domain (ODD) \cite{standard2021j3016}, assuming SAE L4+ automation level.
\item Prepared the final test suite with both manual and auto-generated test cases (selection).
\item Executed the final test suite through a comprehensive evaluation process that combines both objective and subjective judgement, leading to the identification of several issues and specific limitations in the ADS under test, i.e., Baidu Apollo.
\end{itemize}
\section{Related Works}
\begin{figure*}[t]
\centering
\includegraphics[trim=0.5cm 5cm 0.5cm 3.5cm,width=\textwidth]{Figures/AVTest_test_generation_pipeline.pdf}
\caption{Overview of ViSTA framework: inputs, automated/manual virtual test generation, execution, evaluation and outcome}
\label{fig:pipeline}
\end{figure*}
The development and testing of AI-based algorithms often rely on testing in equivalent virtual environments before exposing them to their actual physical deployment environments, especially in the case of systems that require physical actuation, such as robotic systems.
To this end, in recent years, tools such as \cite{Brockman2016,Juliani2018} have become popular as they provide generic virtual environments, allowing researchers to focus on developing and testing the AI algorithms.
Two of the most popular open-source tools that have emerged recently are the SVL Simulator (formerly known as LGSVL Simulator) \cite{svl} and CARLA \cite{Dosovitskiy2017}.
These are notable in their use of well-known gaming engines such as the Unity Engine \cite{UnityTech} and the Unreal Engine \cite{UE4}, respectively. Furthermore, they provide bridges towards open-source Automated Driving Systems (ADS) such as AutoWare \cite{kato2018} or Apollo \cite{Apollo}. In this paper, we adopt the SVL Simulator to virtually test and assess the performance of the Apollo ADS, as required by the IEEE AV Test Challenge 2021.
Virtual simulation platforms enable researchers to develop specific procedures targeting specific components \cite{Pylot,PEM} or to generate synthetic data \cite{Richter2016,talwar2020evaluating}.
A crucial part of the simulation, however, is to design the behaviour of actors, i.e. other road users, such as pedestrians or other vehicles that may be termed Traffic Simulation Vehicles (TSV) or Non-Player Characters (NPC) in SVL.
In fact, while random behaviour could potentially be sufficient, it lacks the structure required for proper testing.
On the other hand, Scenario-based testing \cite{openSC,ScenarioCarla} aims to overcome this by providing a format to describe a scenario in a somewhat deterministic manner, although the AV under test itself may exhibit stochastic behavior.
\section{Virtual Testing Procedure}
The method we employed for designing the test cases is a mixture of manual and automatic generation, as shown in Figure \ref{fig:pipeline}. In particular, we favoured manual design to promote diversity and completeness, while the role of automation is to provide slight variations of the same test case.
The automated execution of the virtual test cases produces results in both a machine-readable tabular format\footnote{The Virtual Test Results Data Logging format and code is intended to be published at https://github.com/cetran-sg. At the time of publication of this paper, this is pending for review and approval from stakeholders.} (e.g., \texttt{.csv}) which enables \textit{objective} and/or automated evaluation, as well as detailed videos that facilitate offline evaluation on a \textit{subjective} basis, by various domain experts.
We adopt a scenario-based virtual testing approach to test the safety and general driving performance of an autonomous vehicle, which is intended to be deployed in public urban roads under diverse operating conditions. These operating conditions can cover both (a) the pre-specified conditions it is designed to operate in, i.e., its ODD, as well as (b) the actual real-world conditions it must operate in, regardless of whether the ADS was designed to handle those conditions or not, i.e., its OD.
\subsection{Test Categorization}
Our tests have been organized into various types:
\begin{itemize}
\item Basic functional/behavioral tests: Focusing on Behavioral Safety of the AV in dynamic urban traffic conditions.
\item Negative tests (edge case tests): Focusing on Safety of the Intended Functionality (SOTIF) and extreme conditions (edge/corner cases) in dynamic urban traffic conditions.
\item Environmental tests: Focusing on the operating conditions (e.g., weather, lighting) around the AV that can change/evolve dynamically.
\item OD/ODD coverage tests: Focusing on special aspects of the OD/ODD in which the AV is designed to operate in. This may also cover out-of-ODD conditions, at which the AV should achieve the minimal risk condition (MRC).
\item Regression tests: Focusing on essential items to test for any regressions made due to changes in the ADS periodically; to be useful in long-term testing as the ADS is being developed.
\end{itemize}
\subsection{Test Objectives}
The tests are designed with two broad level objectives.
The primary objective is to test the Vehicle under Test (VUT), i.e., the AV, under different categories of conditions.
Firstly, this should help gain an understanding of its current capabilities in terms of driving safely in a given ODD, typically on public roads in an urban environment, under different environmental conditions including non-optimal weather, traffic, and lighting.
Secondly, this should help check whether it meets the general capability expectations as per the rule of the land, to give confidence to the AV developers, regulators and general public, that the AV is safe for public deployment. Such rules may be outlined in the driving regulations for the traffic jurisdiction, e.g., California Driver handbook from DMV or Technical Reference 68 (TR68) in Singapore.
The second objective is to critically evaluate the usefulness and effectiveness of automated test case generation, in contrast to traditional manual test case development that relies on the skills and knowledge of experienced test personnel, test experts and domain experts. Understanding the strengths and limits of automation helps test engineers to launch a comprehensive test strategy that can make the best use of both modalities.
\subsection{Scenario Diversity}
Specific attention has been dedicated to ensuring diversity in testing, which can be measured in terms of coverage of various important aspects. In particular, we focus on the diversity of the Environment (road layout, signals, weather conditions) and of the Actors, i.e., the other road users that define the traffic scenario.
\subsubsection{Environment}
We primarily choose the San Francisco map available in SVL since it provides a wide variety of road layouts. Additionally, we consider the Borregas Avenue map for richer detail and higher fidelity of map modeling.
In fact, one of the first steps is to observe the map and identify all the relevant road features such as pedestrian crosswalks, traffic-light intersections, non-signalized intersections, bus stops, parking areas, and traffic signs. Special road layouts (e.g., skewed or star-shaped intersections) can also be considered and exploited.
Furthermore, our test cases should include a diverse use of traffic lights, weather conditions, and objects on the road such as construction zones or traffic cones that occur in the OD/ODD. Given the scope of the AV Test Challenge, we exploit all the options available in the SVL Simulator. These considerations enhance the variety and diversity of our test cases and make them meaningful.
\subsubsection{Actors}
The actual dynamic traffic scenarios are determined by the scripted Actors (or NPCs) that are usually designed to reproduce pre-defined and repeatable behavior and trajectories, and that can be activated (triggered) based on some predefined conditions.
Hence, a comprehensive test case selection should provide a proper distribution across the various actors and objects that can be typically found in the deployment area and/or OD:
\begin{itemize}
\item Pedestrian: with varying gender, age, size;
\item Vehicles: cars, trucks, school bus, motorbike, cyclists, emergency vehicles etc.
\end{itemize}
In our framework, we have designed the Actors to perform a variety of maneuvers, e.g., \textit{driving straight}, \textit{turning}, \textit{swerving}, \textit{parking}.
Furthermore, we also implement advanced scenario-modeling capability such that the Actors can violate traffic rules, to produce additional challenges to the ego-vehicle. This includes cases such as a jaywalking pedestrian, or an NPC that may be tailgating the ego vehicle, jumping a red light, or simply refusing to give way.
\subsubsection{Known AV weaknesses and Simulation tool limitations}
Known weaknesses in AV technology, particularly in terms of sensors/perception, prediction, planning, and control, must be exploited through a judicious choice of test parameters.
Furthermore, the limitations of the virtual simulation toolchain (e.g., modular testing in SVL directly provides ground truth, offering perfect perception) must be considered, and scenario parameters can be adjusted so that the test cases are still effective in finding AV issues.
\section{Scenario-based Testing Framework}
In this section, we describe details of the ViSTA framework which is designed to facilitate virtual validation of a given ADS and simulator, such as Apollo and SVL Simulator.
\subsection{Scenario Generation}
Our abstraction of a Scenario includes the elements below:
\begin{itemize}
\item Map ID: which map the scenario is taking place;
\item Ego Vehicle start position and mission:
determines the ego vehicle's starting position and heading, as well as the desired destination coordinates.
\item A time limit: the simulation timeout; any scenario execution is considered a fail if the ego vehicle does not reach its destination within the time limit.
\end{itemize}
For specific scenarios, there is also the option to control traffic lights, place traffic cones on the road in specific positions, and specify the weather conditions using the additional scenario elements below.
\begin{itemize}
\item A list of Actors: the scripted vehicles in the scenario.
\item A sorted list of time-windows and corresponding weather / lighting condition parameters.
\item A list of configurations for traffic lights and other controllable objects (e.g., cones).
\end{itemize}
The core distinction between scenarios is the scripted behaviour of the actors and, optionally, the applied dynamic environment conditions (e.g., weather, lighting, and/or traffic-light conditions). In our approach, the actors and environment have a deterministic behaviour stored in JSON files, which are referenced by the corresponding scenario.
Actors can be of two types: pedestrians and vehicles. Pedestrians are simple to model, as their movement pattern is easily designed by defining waypoints.
However, vehicle behaviour is more complex, and defining paths can be more challenging.
SVL's handling of NPC waypoints is based on linear interpolation, which slightly deviates from plausible vehicle dynamics if applied to distant waypoints. However, it is adequate if the waypoints are in close proximity.
Manually designing a scenario is much simpler if it is possible to specify only a limited number of waypoints and compute the intermediate ones automatically.
Hence, our approach facilitates the design with four different but complementary principles.
\paragraph{Key-Waypoint} Our approach is to define key-waypoints that compose a coarse trajectory.
This trajectory is then refined into a smoother one by a simple adaptation of \cite{quintic}.
In particular, we relaxed the acceleration and jerk limits, since for this task smooth driving of the scripted vehicles is not crucial. On the contrary, we may actually need to model ``bad'' drivers with abrupt driving behaviour.
Furthermore, we decided to introduce a speed-limit parameter, since we want more control in the design phase.
This way, scripting an Actor is more straightforward, while not giving up plausible vehicle dynamics.
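A minimal sketch of this refinement step is given below (our own simplification of the quintic-polynomial planner of \cite{quintic}: endpoint accelerations are set to zero and the speed-limit clamp is omitted for brevity):
\begin{verbatim}
import numpy as np

def quintic_coeffs(x0, v0, xT, vT, T):
    # Quintic x(t) = c0 + c1 t + ... + c5 t^5 matching position and
    # velocity at t = 0 and t = T, with zero endpoint acceleration.
    A = np.array([[T**3,   T**4,    T**5],
                  [3*T**2, 4*T**3,  5*T**4],
                  [6*T,    12*T**2, 20*T**3]])
    b = np.array([xT - (x0 + v0*T), vT - v0, 0.0])
    c3, c4, c5 = np.linalg.solve(A, b)
    return np.array([x0, v0, 0.0, c3, c4, c5])

def smooth_segment(p0, p1, dt=0.1):
    # p0, p1: key-waypoints as dicts with x, y, yaw (rad), v (m/s).
    # Returns intermediate (x, y, speed) waypoints every dt seconds.
    dist = np.hypot(p1["x"] - p0["x"], p1["y"] - p0["y"])
    T = max(dist / max(0.5 * (p0["v"] + p1["v"]), 0.1), dt)
    cx = quintic_coeffs(p0["x"], p0["v"] * np.cos(p0["yaw"]),
                        p1["x"], p1["v"] * np.cos(p1["yaw"]), T)
    cy = quintic_coeffs(p0["y"], p0["v"] * np.sin(p0["yaw"]),
                        p1["y"], p1["v"] * np.sin(p1["yaw"]), T)
    ts = np.arange(0.0, T + dt, dt)
    xs, ys = np.polyval(cx[::-1], ts), np.polyval(cy[::-1], ts)
    vx = np.polyval(np.polyder(cx[::-1]), ts)
    vy = np.polyval(np.polyder(cy[::-1]), ts)
    return list(zip(xs, ys, np.hypot(vx, vy)))
\end{verbatim}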
\begin{figure*}
\centering
\subfloat[b1][Scenario JSON]{\includegraphics[trim=0cm 0.3cm 0cm 0.3cm,width=0.95\columnwidth,clip]{Figures/testCaseJSON.png}\label{fig:tc_a}}
\hspace{3mm}
\subfloat[b1][Actor JSON]{\includegraphics[trim=0cm 0.3cm 0cm 0.3cm,width=0.95\columnwidth,clip]{Figures/actorJSON.png}\label{fig:tc_b}}
\caption{Example of JSON files for TC\_006 and for one the Actor involved.}
\label{fig:json}
\end{figure*}
\paragraph{Local/Global coordinates} In the design phase, the waypoints are only relative to the previous pose. This allows the designer not to consider global coordinates but rather to focus on the vehicle behaviour. By specifying the starting position in the map, all the relative waypoints are easily converted.
\paragraph{Semantic Maneuver} To provide a semantic meaning to the key-waypoint design, we organized the concept of maneuvers in two levels. Level One maneuvers are atomic maneuvers: each translates directly into a single key-waypoint and is parametrized by the Level Two maneuver that uses it. The latter are complex maneuvers that describe a sequence, and a composition, of Level One maneuvers.
\paragraph{Parameter Automation} In the design phase, each maneuver requires a set of parameters to be univocally defined. For example, even the simplest ``driving straight'' is determined by the length of the segment as well as the target speed. A swerve maneuver, in addition to the previous parameters, is parametrized by the lateral offset induced by the swerving.
Carefully defining each value to generate the appropriate challenge is tedious and time-consuming. In our implementation, it is possible to specify any parameter as a distribution that is sampled at the generation phase. Furthermore, we provide a function to generate a defined number of samples for the designed scenario.
Given these functions, during the design phase it is sufficient to specify the sequence of Level Two maneuvers, which are automatically converted in order. By definition, a Level Two maneuver is a sequence and combination of Level One maneuvers. Similarly, a Level One maneuver is by definition directly converted into a key-waypoint. Finally, the key-waypoints are used to generate the list of intermediate waypoints, i.e., the smooth trajectory, using our adaptation of quintic polynomial planning \cite{quintic}.
The final prepared set of waypoints and corresponding actor/config info can then be stored in a JSON file that can be used runtime to script a deterministic actor (see Figure \ref{fig:json}).
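The following sketch illustrates the two-level maneuver abstraction and the parameter sampling (all maneuver names and JSON field names here are illustrative, not the exact ViSTA vocabulary; the resulting key-waypoints would then be smoothed as described above):
\begin{verbatim}
import json, random

# Level One (atomic) maneuvers: one relative key-waypoint each.
def straight(length, speed):
    return {"dx": length, "dy": 0.0, "speed": speed}

def swerve(length, speed, lateral_offset):
    return {"dx": length, "dy": lateral_offset, "speed": speed}

# A Level Two maneuver: an ordered composition of Level One calls.
def cut_in(speed, offset):
    return [straight(20.0, speed),
            swerve(15.0, speed, offset),
            straight(20.0, speed)]

def sample(p):
    # A parameter is either a fixed value or a (min, max) range
    # that is sampled at generation time.
    return random.uniform(*p) if isinstance(p, tuple) else p

def generate_variants(n=5):
    return [{"type": "NPC",
             "keyWaypoints": cut_in(speed=sample((8.0, 14.0)),
                                    offset=sample((2.5, 3.5)))}
            for _ in range(n)]

with open("actor_cut_in.json", "w") as f:
    json.dump(generate_variants(), f, indent=2)
\end{verbatim}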
\subsection{Simulation Runtime}
The automatic execution of test cases starts by providing a selection of scenario IDs to an automation script that executes them sequentially and records the results.
The script will establish the necessary connections towards the simulator and Apollo.
A Scenario Manager will read the selected scenario specification (JSON file), and load the map, traffic lights, traffic cones, and weather conditions accordingly.
Furthermore, the script will set up the Actors and their waypoint instructions, as specified in the associated JSON file.
Then, the script will set the initial state of the ego-vehicle, activate the Apollo modules and provide the target destination.
It will also launch a separate script, running inside Docker, to log the ego vehicle and obstacle positions obtained from the Apollo CyberRT bridge.
After completing these steps, the script will instruct the simulator to run until the ego vehicle's intended mission destination is reached, or the scenario execution time budget is exhausted (times out). Any failure to complete the mission in the allowed time budget, whether or not it is due to a collision or an unsafe situation, can lead to the test case being flagged as a FAIL, even before it is evaluated offline in further detail (as discussed in the next section).
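For concreteness, a reduced version of these steps expressed with the SVL (lgsvl) Python API is sketched below (scenario field names, the ego asset name, and hosts/ports are illustrative and depend on the local deployment; error handling, logging, and per-time-window weather changes are omitted):
\begin{verbatim}
import lgsvl
from lgsvl import dreamview

def run_scenario(scn, time_budget=90.0):
    sim = lgsvl.Simulator("127.0.0.1", 8181)
    if sim.current_scene == scn["mapId"]:
        sim.reset()
    else:
        sim.load(scn["mapId"])

    w = scn["weather"][0]  # first time window only, for brevity
    sim.weather = lgsvl.WeatherState(rain=w["rain"], fog=w["fog"],
                                     wetness=w["wetness"])
    sim.set_time_of_day(w["timeOfDay"])

    # Ego: spawn, connect the bridge, send the mission via Dreamview.
    ego_state = lgsvl.AgentState()
    ego_state.transform = sim.get_spawn()[0]
    ego = sim.add_agent("Lincoln2017MKZ (Apollo 5.0)",
                        lgsvl.AgentType.EGO, ego_state)
    ego.connect_bridge("127.0.0.1", 9090)
    dv = dreamview.Connection(sim, ego, "127.0.0.1")
    dv.setup_apollo(scn["destination"]["x"], scn["destination"]["z"],
                    ["Localization", "Perception", "Routing",
                     "Prediction", "Planning", "Control"])

    # Scripted actors replay their precomputed waypoint lists.
    for actor in scn["actors"]:
        st = lgsvl.AgentState()
        st.transform = sim.map_point_on_lane(
            lgsvl.Vector(actor["start"]["x"], 0, actor["start"]["z"]))
        npc = sim.add_agent(actor["model"], lgsvl.AgentType.NPC, st)
        npc.follow([lgsvl.DriveWaypoint(
                        lgsvl.Vector(p["x"], p["y"], p["z"]), p["speed"])
                    for p in actor["waypoints"]])

    sim.run(time_limit=time_budget)  # timing out => candidate FAIL
    sim.close()
\end{verbatim}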
\subsection{Analysis of Virtual Test Execution Results Data and Safety Performance Evaluation}
When the test cases are executed, the run-time script records the dynamic state (position, velocity, heading, etc.) of the ego-vehicle and of the actors or stationary objects active at each simulation step, into our well-established tabular format as a \texttt{.csv} file.
These files are then parsed and processed by offline evaluator scripts that compute various objective metrics to analyze the AV performance. These can include, but are not limited to, the following.
\paragraph {Occurrence of accidents or collisions}
The ego-vehicle is expected to avoid collisions or accidents with other actors or objects. Exceptions would be when the collision is unavoidable and did not occur due to the ego’s own actions.
\paragraph {Violation of unsafe lateral/longitudinal clearance or safety envelope}
The ego-vehicle is expected to maintain a safety envelope or exclusion zone, represented by lateral/longitudinal clearance distances between the ego-vehicle and an actor or object on, beside, or otherwise relevant to the ego's path. Exceptions would be when the violation is unavoidable and did not occur due to the ego's own actions.
\paragraph {Violation of minimal TTC}
The time taken for the ego-vehicle to collide with an actor or object in the future, at their current velocities and considering their current positions, if they are on a collision course.
\paragraph {Violation of unsafe temporal safety distance}
The time taken to travel the Euclidean distance between the nearest points on the body of the ego-vehicle and that of an actor or object, at the current velocities, irrespective of whether they are on a collision course or not.
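As an illustration, the two time-based metrics above can be computed from the logged states roughly as follows (a point/circle approximation of the vehicle footprints with an illustrative threshold; the actual evaluator and its thresholds differ):
\begin{verbatim}
import numpy as np

def time_to_collision(p_ego, v_ego, p_obj, v_obj, radius=2.0):
    # Time until ego and object (circle-approximated) come within
    # `radius` metres at constant velocities; inf if not on a
    # collision course.
    dp = np.asarray(p_obj, float) - np.asarray(p_ego, float)
    dv = np.asarray(v_obj, float) - np.asarray(v_ego, float)
    a = dv @ dv
    if a < 1e-9:  # same velocity: the gap never shrinks
        return 0.0 if np.linalg.norm(dp) <= radius else np.inf
    b, c = 2.0 * (dp @ dv), dp @ dp - radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:  # closest approach stays above `radius`
        return np.inf
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else np.inf

def temporal_safety_distance(p_ego, p_obj, speed_ego):
    # Time to traverse the current Euclidean gap at the current speed,
    # regardless of whether the two are on a collision course.
    gap = np.linalg.norm(np.asarray(p_obj, float) -
                         np.asarray(p_ego, float))
    return gap / max(speed_ego, 1e-3)
\end{verbatim}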
\paragraph {Violation of road speed limit}
The ego-vehicle is expected not to exceed the road speed limit of the road it is driving in currently.
Furthermore, the behaviour of the ego-vehicle in a given scenario and/or context can also be analysed \textit{subjectively} by domain experts such as the Chief Tester and supporting Test Analysts (at least 3 people in total). The experts take into account the objective evaluation and then analyse the results subjectively to make a final decision on the test outcome. If the experts differ in their individual judgement, a consensus or vote can be reached between the individual opinions.
In many cases, objective judgement may not be achievable in practice; therefore, subjective evaluation is an important tool to be adopted, and it is flexible and scalable; however, it depends on the skills, experience, and knowledge of the experts.
The following are a few high-level metrics designed to subjectively check the safe behavior of the system under test under different scenario conditions.
\paragraph {Occurrence of collision [IF]}
Applicable when the Ego vehicle collides with any of the NPCs involved.
\paragraph {Unnecessary swerving [NC]}
Applicable when the Ego vehicle moves forward while changing its heading continuously, without a valid reason.
\paragraph {Unnecessary braking [NC]}
Applicable when the Ego vehicle slows down, without a valid reason.
\paragraph {Following too close to other road users [NC]}
Applicable when the Ego vehicle is tailgating the leading vehicle or cyclist or similar cases.
\paragraph{Other unacceptable on-road behavioural aspects [IF/NC]}
To be subjectively decided by the experts.
For continuous improvements of the process, new behavioral aspects can be added as additional metrics as the virtual testing process is executed and the testers gain more experience.
The general evaluation procedure is as follows:
For each test case, we can obtain an evaluation for each metric, which may be an Immediate Failure (IF) or Non-conformity (NC). Immediate Failure would mean that the outcome of the test case is a FAIL.
In contrast, non-conformity would mean that the outcome is a PASS but with some special conditions.
Finally, an evaluation with no IF or NC for any metric would imply a direct PASS for the test case.
The final decision on the outcome of a test case is based on a judicious combination of objective and subjective evaluation, with subjective judgement being final and binding.
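This decision rule can be summarized by the following sketch (metric names are illustrative; the final outcome remains subject to the experts' judgement):
\begin{verbatim}
def test_outcome(metric_flags):
    # metric_flags maps metric name -> "IF", "NC" or "OK".
    flags = set(metric_flags.values())
    if "IF" in flags:
        return "FAIL"
    if "NC" in flags:
        return "PASS (with non-conformities)"
    return "PASS"

# e.g. test_outcome({"collision": "OK", "min_ttc": "NC"})
# -> "PASS (with non-conformities)", pending expert review
\end{verbatim}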
\section{Test Cases}
In this section, we present a selection of the Scenarios and Test Cases we developed, summarized in Table \ref{tab:scenario}.
This selection allows us to investigate the limitations of Apollo, or SVL, in addressing specific road situations or infrastructure.
\input{Sections/tab1}
\begin{figure}
\centering
\subfloat[a][SV\_016: Modular Test bypass occlusion.]{\includegraphics[trim= 0 0.3cm 0 0.6cm ,width=0.9\columnwidth,clip]{Figures/screenshot2.PNG}\label{fig:occ}}
\\
\subfloat[b][SV\_029: Collision with vehicle exiting parking lot.]{\includegraphics[trim = 0 0.3cm 0 0.6cm ,width=0.9\columnwidth,clip]{Figures/screenshot1.PNG}\label{fig:coll}}
\caption{Example screenshots of 2 test cases during execution.}
\label{fig:json}
\end{figure}
\paragraph{SV\_011}
In this scenario, the ego-vehicle's mission is to cross an intersection. While it approaches, an NPC violates its right of way. This would lead to a collision, but the ego seems to predict the imminent collision and reacts by braking. Although this scenario does not require the ego to slow down significantly, its rather fast reaction to the NPC causes the ego to violate traffic rules (e.g., potential to cause rear-ending).
\paragraph{SV\_012}
In this scenario, a bus is driving on the left side lane of the ego-vehicle, as both are approaching a bus stop, close to an intersection.
The bus changes lane and stops; the ego reacts accordingly by slowing down and stopping behind the bus. However, the ego-vehicle never attempts to overtake the stationary bus.
This could be proper decision making in the presence of a bus stop, but under modular testing it is not clear whether the ego has access to such information.
While there is no unsafe behaviour, it is unclear why there is no overtake attempt, which can potentially cause road hogging.
\paragraph{SV\_014}
The ego-vehicle is approaching a construction zone signalized by traffic cones on the road. Once again, modular testing does not include traffic cones, so Apollo does not react to them and just drives through.
\paragraph{SV\_016}
In this scenario, a parked truck occludes a jaywalking pedestrian as the ego-vehicle approaches.
However, the modular testing configuration of SVL Simulator, which is based on ground-truth data, allows the ego-vehicle to detect the pedestrian in all cases (see Figure \ref{fig:occ}). This obfuscates the reason for the ego-vehicle slowing down, which, rather than a cautious approach to an area with low visibility, could be an actual reaction to the perceived danger.
\input{Sections/tab2}
\paragraph{SV\_017}
This scenario takes place at a sequence of skewed intersections, where the ego vehicle needs to turn left after a right turn.
This causes the ego vehicle to be in the wrong lane, with not enough space to change lanes.
Interestingly, a static obstacle in the wrong lane leads the ego vehicle to overtake it, facilitating the lane change and the left turn.
\paragraph{SV\_023}
Traffic lights with arrow signals are not available. Nevertheless, the yellow light is detected (due to modular testing), but the ego-vehicle still does not proceed on its path.
This is not a realistic test because the yellow-signal duration is not proper: the yellow light was activated for a very long time, and the starting point of the ego-vehicle was near the stop line.
However, this Test Case can be used to test the behaviour of the ego for a yellow light signal, and as a unit test for yellow-light classification robustness.
\paragraph{SV\_024}
Although a straightforward scenario, Apollo does not detect the traffic signal. Even in modular testing, traffic signals appear to be determined only by the HD-map annotations Apollo uses.
\paragraph{SV\_027} An NPC is tailgating the ego-vehicle, but does not slow down at a signalised intersection and hits the ego.
While the unsafe behaviour is attributed to the NPC, it appears that the ego-vehicle is not reacting to it in any way.
\paragraph{SV\_029}
In this scenario, an NPC exits a parking lot on the right of the approaching ego-vehicle.
This parking lot is not annotated in the Apollo HD-map, and it appears that the ego-vehicle ignores the vehicle by not generating a prediction of its path.
As a result, the imminent danger is not detected and there is no reaction from the ego-vehicle, which leads to a collision, as illustrated in Figure \ref{fig:coll}.
\paragraph{SV\_030}
In this scenario, a pedestrian is walking longitudinally in the middle of the road and stops after a while.
We used this scenario to test weather conditions, but given the modular testing setup, it appears there is no effect at all.
Nevertheless, in this scenario it is interesting to observe how the ego-vehicle follows the pedestrian by driving at a very low speed, and overtakes only when the pedestrian stops. However, the overtake maneuver is not proper and comes very close to the pedestrian, which is a safety concern.
\paragraph{SV\_031}
In this scenario, a leading NPC stops in front of the ego-vehicle. The ego reacts accordingly by slowing down, and it overtakes the stopped vehicle.
Although this is the expected behaviour, it seems to be inconsistent with TC\_012a.
\begin{figure*}[t]
\centering
\includegraphics[trim= 2cm 6.5cm 1cm 7cm, width=\textwidth, clip]{Figures/resultsSummary.pdf}
\caption{Outcome Distribution of 50 Test Cases organized by the main test objective.}
\label{fig:result}
\end{figure*}
\subsection{Identified Issues}
In Table \ref{tab:issues} we report all the issues identified using our designed test cases. We can summarize them as follows:
\begin{itemize}
\item Lack of proper controls: 1, 2, 5, 6, 9, 10, 22, 23, 24. This set contains situations where better control is desirable, even if it is not necessary to improve safety.
\item Lack of functionalities: 3, 13, 16. Apollo 5.0 lacks the capability to perform parking and U-turn maneuvers, so we assume that these are unavailable in the tested configuration.
\item Lack of evasive maneuver: 4, 17, 18, 21, 25. In this set of scenarios and Test Cases it appears evident that Apollo is not capable of performing an evasive maneuver, even when an adjacent empty lane is available.
\item Implementation issues: 7, 8, 11, 12, 14, 15, 19. These specific Test Cases highlight limitations in the current implementation of modular testing or of the SVL-Apollo interaction in general.
\item Unique situation: 20. This case is particularly interesting since it is very specific. Apparently Apollo does not consider the NPC exiting the parking lot since that area is not part of the HD-map.
\end{itemize}
\subsection{Summary}
Alternatively, we can classify the Test Cases based on their \textit{main test objective category}, i.e., the most challenging aspect of the scenario that the AV is being tested against in the particular test case.
These challenging aspects can vary based on the AV's interactions with other actors, ranging from normal behaviour to edge cases such as traffic-rule violations, to involving specific road features, traffic infrastructure, or specific weather conditions.
Figure \ref{fig:result} summarizes the outcome of the Test Cases according to the aforementioned classification.
We note that not all of the 50 designed Test Cases were implementable, given missing functionalities of the baseline simulator (i.e., no plugins are allowed) or missing specific road features on the map.
Furthermore, these results are influenced by the modular testing mode, which provides unfair advantages as well as limitations. In particular, test cases involving weather are not effective in modular testing, since any perception challenge is bypassed. Conversely, modular testing also bypasses occlusions, which is unrealistic under actual deployment.
Nevertheless, the test case set should also include features that are not directly implementable or testable within the current testing platform or settings.
\section{Conclusions}
In this paper, we describe details of ViSTA, a virtual testing framework developed to generate scenarios and execute them for validating an ADS driving SAE L4+ AVs.
This enabled us to design specific test cases and identify a wide set of limitations, inconsistencies, and problems within the chosen ADS (Apollo), simulator (SVL), and their integration.
It is noteworthy that our tests can fail not just in cases when the AV causes accidents or incidents. They can also fail when AV behavior leads to unsafe situations (e.g., TC\_030) that are avoidable through better path planning and responses, such as evasive maneuvers. Also, perfect perception
allows us to focus on such behavioral issues and inconsistencies of the ADS.
This study also highlights the importance of developing better tools for virtual testing, as safety assessment is crucial to ensure the growth and safe deployment of AVs on public roads.
Furthermore, the adoption of interpretable and focused test cases is critical to discovering latent issues that may not necessarily be highlighted when driving in a purely randomized environment.
In the future, we hope to extend the ViSTA framework to cover the following aspects. Firstly, we aim to achieve a balanced trade-off between automated and manual design of test cases. Secondly, we hope to implement a robust selection/filtering of meaningful scenario parameters from the large space of feasible parameters for urban driving ODDs, which is a non-trivial task.
Finally, we plan to include both the actual Sensing and Perception as well as equivalent Perception Error Models \cite{PEM} into the virtual testing loop, which can allow further diversity and more effective testing.
\section{Introduction} \label{Intro}
The growing complexity of modern software systems stimulated the use of com\-ponent-based approaches and the enforcement of the separation of concerns \cite{Djoudi2016}. In Context-aware computing the separation is made between the functions the system is built for, that can change in time owing to different conditions, and the context into which the system must operate, which sets the current environmental situation.
Among the most widely used definitions of Context, and of Context-aware Computing, those proposed by A. Dey \cite{Dey2001} state: {\sl{``Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.''}} and {\sl{``A system is Context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task.''}}
Context-aware applications have been used to: (i) tailor the set of application-relevant data, (ii) increase the precision of information retrieval, (iii) discover services, (iv) build smart environments, et cetera, and different models of the context have been proposed \cite{Bolchini2009,Bolchini2007}. \\
However, new application domains such as self-adapting systems \cite{Schreiber2017}, safety critical applications, autonomous vehicle design and manufacturing, disaster prevention, or healthcare management, require a very high level of dependability that can only be achieved by formally determining their behaviors. To this end, Bigraphs and Model-checking \cite{Djoudi2016,Cherfia2014} approaches have been proposed. In \cite{Padovitz2004,Padovitz2005} Padovitz et al. consider a state-space approach to describe the \emph{situation} dimension and to determine the likelihood of transitions between \emph{situation subspaces}, all other Context dimensions remaining constant; the likelihood of the transition is evaluated by assuming notions analogous to those of velocity and acceleration in mechanical systems.\\
Properties as:
\begin{itemize}
\item the existence of \textit{stable equilibrium points};
\item the absence of \textit{undesired} oscillations (limit cycles);
\item \textit{observability} - the measure of how well internal states of a system can be inferred from knowledge of its external outputs (and, possibly, of its corresponding inputs);
\item \textit{controllability} - the ability of an external input (the vector of control variables) to drive the internal state of a system from any initial state to any other final state in a finite time interval;
\item \textit{reconstructibility} - when the knowledge of the input and output vectors in a discrete time interval allows to uniquely determine the system final state;
\end{itemize}
are but some of the features, together with \textit{fault detection}, that make it possible to guarantee the expected and safe operation of a system.
Systems theorists are well acquainted with the techniques to prove such properties and in \cite{Diao2005} the authors explore {\sl{``... the extent to which control theory can provide an architectural and analytic foundation for building self-managing systems ..."}}. However, control systems are typically described by means of differential equations and by Matrix Algebra, while Context-aware systems are digital and mostly based on Logics.
Through the introduction of Boolean Control Networks (BCN) and of the semitensor product of matrices, the representative equations of a logic system have been converted into an equivalent algebraic form \cite{Cheng2010b,Cheng2010a}, and solutions to problems such as controllability, observability, stability and reconstructibility have been proposed \cite{Cheng2009,EF_MEV_BCN_obs2012,Fova2016,Zhang_obs}. \\
In a previous paper \cite{SchreiberValcher2019}, we proposed and analyzed a BCN model of an open loop Context-aware early-warning hydrogeological system for which we proved: i) the existence of equilibrium points corresponding to constant inputs; ii) the absence of limit cycles; iii) its reconstructibility; iv) the possibility of detecting stuck-in-faults.
In this paper we consider a multiple feedback loops system
as it naturally arises when modeling the evolution of a patient's health status, subjected to medical therapies, whose vital parameters are, in turn, used as inputs to update the therapies
to be administered to the patient. The model provides the mathematical formalization of a possible algorithm, running on
the mobile device of a nurse in a hospital, aiming at providing him/her with all and only the information on the therapies the patients in his/her ward are to be given.
To focus on the ideas and on the modeling techniques, rather than on the Boolean math, we
have chosen to address the model structure and properties without assigning specific numerical values to the logic matrices involved in the system description. Thus we have derived general results that can be tailored to the specific needs
and choices of the illness forms, therapies and vital parameters. We believe that this is the power of the proposed modelling approach: its flexibility and generality. \\
Finally, we provide here a deterministic model of the patient's health evolution, that represents the evolution of the average case of a patient affected by a specific form of illness. Accordingly, we are giving certain interpretations to the patient's symptoms, as captured by the values of his/her vital parameters, and, based on them, we apply well-settled medical protocols to prescribe therapies and locations where such therapies need to be administered.
A probabilistic model of the patient's reaction to therapies, that also keeps into account the probabilistic correlation between actual health status and the measured values of his/her vital parameters, requires the use of Probabilistic Boolean Networks and will be the subject of our future research.\\
The paper is organised as follows. In Section \ref{case} we describe the case study; we model the Context as well as the functional system as Boolean Control Networks, as explained in Section \ref{model}. In Section \ref{reallife} the mathematical formalisation of real life requirements is presented and Section \ref{last} brings some conclusive remarks.
\section{The Case Study} \label{case}
A hospital keeps a Database that stores all the data relevant both to the patients and to the administrative, medical, and assistance employees. The work of a nurse is guided by an application on his/her mobile device.
The App assists the nurse in his/her routine work and is fed by the physician's diagnostic and prescription activities.
\\
Each patient is provided with healthcare wearable sensors measuring the variables that characterize his/her Medical Status, in our example: the body temperature (\textit{bt}), the blood pressure (\textit{bp}), and the heartbeat frequency (\textit{hf}) \cite{wearable2013, wearable2020}. For ease of representation, all of these variables are discretized and take values in the finite set\ $S=\{low, medium, high\}$.\\
\begin{figure*}[h]
\vspace{-0.8 cm}
\includegraphics[scale=.5]{DB.pdf}
\vspace{-2.1 cm}
\caption{The hospital Database}
\label{fig:DB}
\end{figure*}
Figure \ref{fig:DB} shows a portion of the schema of the hospital Database, which must be dynamically tailored in order to store, on the mobile device of each nurse in a shift, \emph{all and only the treatments each patient in his/her ward is to be given in that shift}; treatments are defined in the Therapy Protocol adopted for the diagnosed illness. The numerical values coming from the sensors - registered in the Medical Record - are converted into their symbolic aggregate counterparts $ \{low, medium, high\}$ in the Sensors Data Processing block and affect the Estimated Patient Status, which can take five values: Healthy (\textit{H}), Convalescent (\textit{C}), Under Observation (\textit{UO}), Ill (\textit{I}), and Life Critical (\textit{LC}). The Estimated Patient Status determines the physician's decision on both the therapy and the patient's location - at home (\textit{h}), in hospital ward (\textit{hw}), in an intensive care unit (\textit{icu}). Clearly, the prescribed therapies depend also on the current location and on the location that is recommended for the patient. For instance, some therapies can be given in a hospital \textit{icu} or in a \textit{ward}, but cannot be given at \textit{home}.
On the other hand, the medical context can require a relocation of the patient. Figure \ref{fig:tailor} shows the overall tailoring process.
\\
Thus, Data tailoring is made on the basis of two different criteria:
\begin{itemize}
\item the work profile of the nurse, which is used to select all and only the patients he/she must attend; it is downloaded at the beginning of the shift and is not affected by external events (Listing \ref{lst:list1});
\item the medical status of the patient, which dynamically requires different treatments.
\end{itemize}
{\tt
\begin{lstlisting}[frame=single,label=lst:list1,caption={Tailoring the Nurse work profile}]
select P_id,bed_n
from nurse,shift,ward,patient
where N_id="A" AND S_id="X" AND S_date="yy/mm/dd"
\end{lstlisting}
\label{list1}}
The query on the nurse's mobile device is shown in Listing \ref{lst:list2}. For the purpose of this work, in the following, we focus only on the medical and not on the administrative issues. The schema of the tailored data, stored on the mobile device, is shown in Figure \ref{fig:DBmob}.\\
{\tt
\begin{lstlisting}[frame=single,label=lst:list2,caption={Querying the nurse's device}]
select D_id, quantity, location
from patient,therapy,made_of
where P_id="pp" AND time_of_day="hh:mm"
\end{lstlisting}
\label{list2}}
\begin{figure*}[h]
\vspace{-1.0 cm}
\hspace{-1cm}
\includegraphics[scale=.5]{tailor.pdf}
\vspace{-1.4 cm}
\caption{The tailoring process}
\label{fig:tailor}
\end{figure*}
\begin{figure*}[h]
\vspace{-1.5 cm}
\includegraphics[scale=.5]{DBmob.pdf}
\vspace{-3.5 cm}
\caption{The tailored Database}
\label{fig:DBmob}
\end{figure*}
There are $3^3=27$ possible combinations (triples) of the sensors' symbolic data, summarized in Table 1.
\begin{table}[ht]\label{tab:tab1}
\caption{Possible input combinations}
\centering
\scriptsize
\begin{tabular}{p{0.4cm}|p{1.6cm}|p{1.6cm}|p{1.6cm}}
\hline\hline
& bt & bp & hf \\ [0.5ex] \hline\hline
1 & low & low & low \\ \hline
2 & low & low & mid \\ \hline
3 & low & low & high \\ \hline
4 & low & mid & low \\ \hline
5 & low & mid & mid \\ \hline
6 & low & mid& high \\ \hline
7 & low & high & low \\ \hline
8 & low & high & mid \\ \hline
9 & low & high & high \\ \hline
10 & mid & low & low \\ \hline
11 & mid & low & mid \\ \hline
12 & mid & low & high \\ \hline
13 & mid & mid & low \\ \hline
14 & mid & mid & mid \\ \hline
15 & mid & mid & high \\ \hline
16 & mid & high & low \\ \hline
17 & mid & high & mid \\ \hline
18 & mid & high & high \\ \hline
19 & high & low & low \\ \hline
20 & high & low & mid \\ \hline
21 & high & low & high \\ \hline
22 & high & mid & low \\ \hline
23 & high & mid & mid \\ \hline
24 & high & mid & high \\ \hline
25 & high & high & low \\ \hline
26 & high & high & mid \\ \hline
27 & high & high & high \\ \hline
\end{tabular}
\end{table}
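The enumeration in Table 1 follows the lexicographic order of the triples (\textit{bt, bp, hf}). As a sanity check, the 27 combinations can be generated programmatically; the following Python snippet is our own illustration and is not part of the system:
\begin{lstlisting}[frame=single,caption={Enumerating the 27 input triples (illustrative sketch)}]
from itertools import product

levels = ("low", "mid", "high")
for n, (bt, bp, hf) in enumerate(product(levels, repeat=3), start=1):
    print(n, bt, bp, hf)
# rows 1-9 have bt=low, rows 10-18 bt=mid, rows 19-27 bt=high
\end{lstlisting}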
As detailed in Section \ref{model}, the Patient Context is constituted by a set of variables
and it determines the therapy to be given (e.g., the drugs, their amount, and their timing). The treatment should change the actual Patient Status, thus changing the sensors' output, and could possibly require repositioning the patient to a different location, thereby creating feedback loops. Furthermore, even though the model is general and we do not enter into diagnosis and prescription issues, we suppose that \textit{the therapies are effective and that the patient will eventually be dismissed}.\\
In Figure \ref{fig:status} the global system structure is represented showing the Moore state diagrams of the Estimated and the Actual Patient Status, and of the Location respectively, as it will be detailed in Section \ref{model}. \\
\begin{center}
\begin{figure*}[ht]
\hspace{-1cm}
\includegraphics[scale=.55]{status.pdf}
\vspace{-1 cm}
\caption{The system structure}
\label{fig:status}
\end{figure*}
\end{center}
\section{The System Model}\label{model}
Before proceeding, we introduce some minimal notions about the left semi-tensor product and the algebraic representations of Boolean Networks and Boolean Control Networks. The interested reader is referred to
\cite{BCNCheng} for a general introduction to this class of models and to their fundamental properties. Additional references for the specific properties and results we will use in the paper will be introduced in the following.\\
We consider Boolean vectors and matrices, taking values in ${\mathcal B} = \{0,1\}$, with the usual
logical operations (And, Or, and Negation). $\delta^i_{k}$ denotes the $i$th canonical vector of size $k$,
namely the $i$th column of the $k$-dimensional identity matrix $I_k$. ${\mathcal L}_{k}$ is
the set of all $k$-dimensional canonical vectors, and ${\mathcal L}_{k\times n}\subset {\mathcal B}^{k\times n}$ the set of all $k\times n$ {\em logical matrices}, namely $k\times n$ matrices whose columns are canonical vectors of size $k$.
Boolean variables $X\in {\mathcal B}$ and vectors ${\bf x}\in {\mathcal L}_2$ are related by a bijective correspondence, defined by the identity
$${\bf x} = \left[\begin{matrix} X\cr \overline{X}\end{matrix}\right].$$
The {\em (left) semi-tensor product} $\ltimes$ between matrices (in particular, vectors) is defined as follows \cite{BCNCheng}:
given $L_1\in {\mathcal L}_{r_1 \times c_1}$ and $L_2\in {\mathcal L}_{r_2\times c_2}$, we set
$$L_1\ltimes L_2 := (L_1 \otimes I_{T/c_1})(L_2 \otimes I_{T/r_2}),
\quad {\rm with}\quad T:= {\rm l.c.m.}\{c_1,r_2\}.$$
The semi-tensor product generalizes the standard matrix product, meaning that when $c_1=r_2$, then
$L_1 \ltimes L_2=L_1L_2$.
In particular, when ${\bf x}_1\in {\mathcal L}_{r_1}$ and ${\bf x}_2\in {\mathcal L}_{r_2}$, we have
${\bf x}_1 \ltimes {\bf x}_2\in {\mathcal L}_{r_1r_2}.$
By resorting to the semi-tensor product, the previous correspondence extends to a bijective correspondence \cite{BCNCheng} between ${\mathcal B}^n$ and ${\mathcal L}_{2^n}$. Indeed, given $X= \left[\begin{matrix}X_1 & X_2 & \dots & X_n\end{matrix}\right]^\top\in {\mathcal B}^n$,
one can set
$${\bf x} := \left[\begin{matrix}X_1\cr \overline{X}_1\end{matrix}\right] \ltimes \left[\begin{matrix}X_2\cr \overline{X}_2\end{matrix}\right]\ltimes \dots \ltimes \left[\begin{matrix}X_n\cr \overline{X}_n\end{matrix}\right],$$
which corresponds to
$${\bf x}= \left[\begin{matrix}X_1X_2\dots X_{n-1} X_n & X_1X_2\dots X_{n-1} \ \overline{X}_n & X_1X_2 \dots \overline{X}_{n-1} X_n & \dots &
\overline{X}_1\overline{X}_2\dots \overline{X}_{n-1} \overline{X}_n\end{matrix}\right]^\top.$$
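To make this correspondence concrete, the following Python/NumPy sketch (our own illustration; the helper names \texttt{stp} and \texttt{canon} are ours) implements the left semi-tensor product and the encoding of Boolean vectors into canonical vectors:
\begin{lstlisting}[frame=single,caption={Semi-tensor product and canonical encoding (illustrative sketch)}]
import numpy as np

def stp(L1, L2):
    # left semi-tensor product: (L1 (x) I_{T/c1})(L2 (x) I_{T/r2})
    c1, r2 = L1.shape[1], L2.shape[0]
    T = np.lcm(c1, r2)
    return np.kron(L1, np.eye(T // c1)) @ np.kron(L2, np.eye(T // r2))

def canon(i, k):
    # i-th canonical vector delta_k^i (1-indexed), as a column
    v = np.zeros((k, 1)); v[i - 1, 0] = 1.0
    return v

# X1 = 1 maps to delta_2^1, X2 = 0 maps to delta_2^2
x = stp(canon(1, 2), canon(2, 2))
# x = delta_4^2, i.e. the component X1*not(X2) of the joint vector
print(np.argmax(x) + 1)   # -> 2
\end{lstlisting}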
A {\em Boolean Control Network} (BCN) is a logic state-space model taking the form:
\be
\begin{array}{rcl}
X(t+1) &=& f(X(t),U(t)), \cr
Y(t)&=& h(X(t),U(t)), \qquad t \in \mathbb{Z}_+,
\end{array}
\label{BCNL}
\ee
where $X(t)$, $U(t)$ and $Y(t)$ are
the $n$-dimensional state variable, the $m$-dimensional input variable and the $p$-dimensional output variable at time $t$, taking values in ${\mathcal B}^n$, ${\mathcal B}^m$ and ${\mathcal B}^p$, respectively.
$f$ and $h$ are logic functions, i.e. $f: {\mathcal B}^n \times {\mathcal B}^m \rightarrow {\mathcal B}^n$, while
$h: {\mathcal B}^n \times {\mathcal B}^m \rightarrow {\mathcal B}^p$.
By making use of the semi-tensor product $\ltimes$, the BCN (\ref{BCNL}) can be equivalently represented as \cite{BCNCheng}
\be
\begin{array}{rcl}
{\bf x}(t+1) &=& L \ltimes {\bf u}(t)\ltimes {\bf x}(t), \cr
{\bf y}(t) &=& H \ltimes {\bf u}(t) \ltimes {\bf x}(t), \qquad t \in \mathbb{Z}_+,
\end{array}
\label{BCNA}
\ee
where $L \in \mathcal{L}_{N \times NM}$ and $H \in \mathcal{L}_{P\times NM}$, $N := 2^n, M := 2^m$ and $P :=2^p$. This is called the {\em algebraic expression} of the BCN. The matrix $L$ can be partitioned into $M$ square blocks of size $N$, namely as
$$L = \begin{bmatrix} L_1 & L_2 & \dots & L_{M}\end{bmatrix}.$$
For every $i\in \{1,2,\dots, M\}$, the matrix $L_i\in {\mathcal L}_{N\times N}$ represents the logic matrix that relates ${\bf x}(t+1)$ to ${\bf x}(t)$, when ${\bf u}(t)=\delta^i_{M}$, namely
$${\bf u}(t)=\delta^i_{M} \ \Rightarrow \ {\bf x}(t+1)= L_i {\bf x}(t).$$
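The block-selection rule can be checked numerically. In the following sketch (our own illustration with arbitrary toy sizes $N=4$, $M=2$ and a random logical matrix), we use the fact that for canonical vectors the chain $L\ltimes {\bf u}\ltimes {\bf x}$ reduces to the ordinary product $L({\bf u}\otimes {\bf x})$:
\begin{lstlisting}[frame=single,caption={Input-dependent state update (illustrative sketch)}]
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 2
L = np.zeros((N, N * M))   # random logical matrix L = [L_1 L_2]
L[rng.integers(0, N, size=N * M), np.arange(N * M)] = 1.0

u = np.eye(M)[:, [1]]      # u = delta_2^2 selects the block L_2
x = np.eye(N)[:, [2]]      # x = delta_4^3
x_next = L @ np.kron(u, x) # equals L semi-tensor u semi-tensor x here
assert np.allclose(x_next, L[:, N:2 * N] @ x)  # i.e. x(t+1) = L_2 x(t)
\end{lstlisting}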
In the special case when the logic system has no input, its algebraic expression becomes
\be
\begin{array}{rcl}
{\bf x}(t+1) &=& L {\bf x}(t), \cr
{\bf y}(t) &=& H {\bf x}(t), \qquad t \in \mathbb{Z}_+,
\end{array}
\label{BNA}
\ee
and it is called {\em Boolean Network}.\\
It is easy to realize that the previous algebraic expressions \eqref{BCNA} and \eqref{BNA} can be adopted to represent any state-space model in which the state, input and output variables take values in finite sets, and hence the sizes of the state, input and output vectors $N, M$ and $P$ need not be powers of $2$. When so, oftentimes BCNs and BNs are called multi-valued Control Networks \cite{BCNCheng}. With an abuse of terminology, in this paper we will always refer to them as BCNs and BNs. Also, in the following capital letters will be used to denote the original vectors/variables, taking values in finite sets, and the same
lowercase letters will be used to denote the corresponding canonical vectors.\\
With these preliminary definitions and notations, we are now in a position to introduce the BCN models for our case study.
\subsection{The Patient Context Model}
Let us first consider the ``Patient Context'' model. According to Table 1,
we assume as input vector
the 3-dimensional vector $U(t)$, where
\begin{itemize}
\item $U_1(t)$ denotes the (\textit{low, medium or high}) value of the body temperature (\textit{bt}) at time $t$;
\item $U_2(t)$ denotes the (\textit{low, medium or high}) value of the blood pressure (\textit{bp}) at time $t$;
\item $U_3(t)$ denotes the (\textit{low, medium or high}) value of the heartbeat frequency (\textit{hf}) at time $t$.
\end{itemize}
The corresponding canonical vector, $u(t)$, therefore belongs to ${\mathcal L}_{27}$,
since each of the variables $U_i(\cdot), i=1,2,3,$ can take three distinct values (see Table 1).
\medskip
The state variable $X(t)$ is a 4-dimensional vector, where
\begin{itemize}
\item $X_1(t)$ denotes the Estimated Patient Status (in other words, the Diagnosis) at time $t$ with respect to a specific form of illness: it takes values in the set $\{H,C,UO,I,$ $LC\}$;
\item $X_2(t)$ represents a counter variable that keeps track of how many consecutive times up to time $t$ the Estimated Patient Status has remained invariant. In other words, $X_2(t)=m$ if $X_1(t)=X_1(t-1)= \dots = X_1(t-m+1)$, but $X_1(t-m+1)\ne X_1(t-m)$. In order to ensure that $X_2$ takes values in a finite set, and for the sake of simplicity\footnote{All the numbers used in this context are, of course, arbitrary and meant to purely exemplify how to design the algorithm and to convert it into a BCN.},
we assume that we keep track until $X_2(\cdot)$ reaches the value $3$, and then we stop. This amounts to saying that $X_2(t)$ belongs to $\{1,2,\ge 3\}$;
\item $X_3(t)$ is
the prescribed therapy at time $t$, belonging to a finite set, say $\{Th0, Th1,$ $ \dots,Th5\}$, where $Th0$ means that the patient does not receive any drug;
\item $X_4(t)$ is the prescribed location (\textit{home, ward, icu}) where the patient will get the therapy at time $t$.
\end{itemize}
The corresponding canonical representation, $x(t)$, under the previous assumptions will belong to ${\mathcal L}_{270}$, since $270=5\times 3\times 6\times 3$.
\medskip
Finally, we assume as output of the Patient Context the 2-dimensional vector
$Y(t)$, where
\begin{itemize}
\item $Y_1(t)$ is
the prescribed therapy at time $t$;
\item $Y_2(t)$ is the prescribed location (\textit{home, ward, icu}) where the patient will get the therapy at time $t$.
\end{itemize}
Clearly, $Y_1(t)=X_3(t)$ and $Y_2(t)=X_4(t)$. Moreover, the canonical representation of $Y(t)$, $y(t)$, belongs to ${\mathcal L}_{18},$ since 18 is the number of possible combinations of therapies and locations. Note, however, that the set of possible outputs can be significantly reduced: for instance, the location \textit{home} is compatible only with the choice to dismiss the patient, after considering his/her health status, and with prescribed
therapies such as \textit{Th0} (no drugs) or a light therapy (say, \textit{Th1}). At the same time certain therapies can be administered only when the patient is in the \textit{icu}. So, one may reasonably assume that a good number of the 18 output values are not realistic and hence can be removed, thus reducing the size of $y(\cdot)$. \\
It is worthwhile to introduce a few comments about the initial state $X(0)$ (or its canonical representation $x(0)$)
and about the update of the state variables $X_i(t),i=1,2,3,4$.
The initial state can be regarded as the result of the triage process: when patients are admitted to the \textit{Emergency Room (ER)}, a preliminary diagnosis is made based on the three measures $U_1(0), U_2(0)$ and $U_3(0)$, since there may be no previous history of the patient and the hospital admission requires a fast evaluation of the medical conditions of the patient. So, $X_1(0)$ may be a static function of $U(0)$. $X_3(0)$ is automatically set to $Th0$, while $X_2(0)$ is set to $1$ and $X_4(0)$ to \textit{home}.
\medskip
We note that $X_1(t+1)$ is naturally expressed as a logic function of $X_1(t), X_2(t),$ $X_3(t), X_4(t)$ and $U(t)$, say $X_1(t+1)=f_1(X(t),U(t))$.
On the other hand, $X_2(t+1)$ naturally depends on $X_2(t), X_1(t)$ and $X_1(t+1)$, and, since we have just pointed out that $X_1(t+1)=f_1(X(t),U(t))$, we can in turn express $X_2(t+1)$ as $X_2(t+1)=f_2(X(t),U(t))$.
Similarly, $X_3(t+1)$ and $X_4(t+1)$ are functions of $X_1(t+1), X_2(t+1),$ $X_3(t), X_4(t)$ and $U(t)$, and hence can be expressed, in turn, as functions of $X_1(t), X_2(t),$ $X_3(t), X_4(t)$ and $U(t)$.\\On the other hand, as we previously remarked,
$Y_1(t)=X_3(t)$ and $Y_2(t)=X_4(t)$.
This implies that
$$X(t+1)=f(X(t),U(t)),$$
while
$$Y(t) = \begin{bmatrix} X_3(t)\cr X_4(t)\end{bmatrix},$$
and hence
\begin{eqnarray*}
x(t+1) &=& L \ltimes u(t) \ltimes x(t), \\
y(t) &=& M x(t), \qquad t\in {\mathbb Z}_+,
\end{eqnarray*}
for suitable choices of the logical matrices $L\in {\mathcal L}_{270\times (27 \cdot 270)}$ and $M\in {\mathcal L}_{18\times 270}.$
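A quick dimension check of this model can be coded directly; in the sketch below (our own illustration) the logical matrices are random placeholders, whereas in the actual system their columns would encode the diagnostic and prescription protocols:
\begin{lstlisting}[frame=single,caption={Dimension check for the Patient Context model (illustrative sketch)}]
import numpy as np

rng = np.random.default_rng(1)
def rand_logical(rows, cols):   # each column a canonical vector
    Lm = np.zeros((rows, cols))
    Lm[rng.integers(0, rows, size=cols), np.arange(cols)] = 1.0
    return Lm

Nx, Nu, Ny = 270, 27, 18        # 270 = 5*3*6*3, as in the text
L = rand_logical(Nx, Nu * Nx)   # L in L_{270 x (27*270)}
Mmat = rand_logical(Ny, Nx)     # M in L_{18 x 270}

x = np.eye(Nx)[:, [0]]          # initial state delta_270^1
u = np.eye(Nu)[:, [13]]         # input delta_27^14
x = L @ np.kron(u, x)           # x(t+1) = L semi-tensor u(t) semi-tensor x(t)
y = Mmat @ x                    # y(t) = M x(t)
print(x.shape, y.shape)         # (270, 1) (18, 1)
\end{lstlisting}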
\medskip
\subsection{The Patient Model}
At this point we consider the Patient Model. A reasonable choice of the Patient state variables is the following one:
\begin{itemize}
\item
$S_1(t)$ represents the actual Patient Status that takes values in the set $\{H,C,I,LC\}$. Note that this is a proper subset of the set where the Estimated Patient Status takes values, since of course the value \textit{UO} in this case does not make sense.
\item
$S_2(t)$ represents the therapy that has been prescribed at time $t-1$, and hence it coincides with $Y_1(t-1)$.\\
\item $S_3(t)$ is a counter variable that keeps track of how many consecutive times up to time $t$ the therapy has remained invariant. In other words, $S_3(t)=m$ if $S_2(t)=S_2(t-1)= \dots = S_2(t-m+1)$, but $S_2(t-m+1)\ne S_2(t-m)$.
Also in this case we put a bound on $m$ and assume that
$S_3(t)$ belongs to $\{1,2,\ge 3\}$.\\
\item Finally, $S_4(t)$ is the vector collecting the measures of the vital parameters at time $t-1$, namely $S_4(t)=U(t-1)$.
\end{itemize}
For the Patient Model, the natural input is $Y(t)$ (in fact, $Y_1(t)$ could be regarded as enough), while the output is $U(t)$.
Since $U(t)$ is the patient's vital parameters at time $t$, it is reasonable to assume that these measures depend on their own values at time $t-1$ (and hence on $S_4(t)$), on the Patient Status $S_1(t)$, the given therapy at time $t-1$, $S_2(t)$, (indeed it is not realistic to assume that the effect of the therapy is instantaneous) and on the duration of the therapy, namely on $S_3(t)$.
\\
With reasonings similar to the ones adopted for the Patient Context model, we can claim that the Patient Model is described by the logic equations
\begin{eqnarray*}
S(t+1)&=&f_p(S(t),Y(t),U(t)),\\
U(t) &=& h_p(S(t)),
\end{eqnarray*}
and hence by the BCN
\begin{eqnarray*}
s(t+1) &=& F \ltimes y(t)\ltimes u(t) \ltimes s(t), \\
u(t) &=& H s(t), \qquad t\in {\mathbb Z}_+,
\end{eqnarray*}
for suitable choices of the logical matrices $F\in {\mathcal L}_{1944\times (18\cdot 27 \cdot 1944)}$ and $H\in {\mathcal L}_{27\times 1944},$ since $1944= 4\cdot 6\cdot 3\cdot 27$.
\\
So, to summarize, we have the following two models:
\begin{eqnarray}
x(t+1) &=& L \ltimes u(t) \ltimes x(t), \label{pc1} \\
y(t) &=& M x(t), \qquad t\in {\mathbb Z}_+, \label{pc2}
\end{eqnarray}
and
\begin{eqnarray}
s(t+1) &=& F \ltimes y(t)\ltimes u(t) \ltimes s(t), \label{p1} \\
u(t) &=& H s(t), \qquad t\in {\mathbb Z}_+. \label{p2}
\end{eqnarray}
In the following, for the sake of simplicity, we will use the following notation:
$270= {\rm dim}\ x =: N_x$, $1944= {\rm dim}\ s =: N_s$, $27={\rm dim}\ u =: N_u$ and $18= {\rm dim}\ y =: N_y$.\\
If we replace \eqref{p2} and \eqref{pc2} in \eqref{p1}, and keep into account that
$$s(t)\ltimes s(t) = \Phi s(t),$$
where $\Phi\in {\mathcal L}_{N_s^2\times N_s}$ is a logical matrix known as {\em power-reducing matrix} \cite{BCNCheng},
then \eqref{p1}
becomes
\begin{equation}
s(t+1) = F \ltimes M \ltimes x(t) \ltimes H\ltimes \Phi \ltimes s(t). \label{p1a}
\end{equation}
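The power-reducing matrix admits a simple explicit construction: its $i$-th column is $\delta^i_{N_s}\otimes \delta^i_{N_s}$. A small numerical sketch (our own illustration, with a toy size $N=4$):
\begin{lstlisting}[frame=single,caption={Power-reducing matrix (illustrative sketch)}]
import numpy as np

N = 4
I = np.eye(N)
Phi = np.zeros((N * N, N))
for i in range(N):              # column i is e_i (x) e_i
    Phi[:, i] = np.kron(I[:, i], I[:, i])

s = I[:, [2]]                   # s = delta_4^3
# for canonical s, the product s semi-tensor s equals kron(s, s)
assert np.allclose(np.kron(s, s), Phi @ s)
\end{lstlisting}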
At the same time, we can swap, namely reverse the order of, the vector $x(t)$ and the vector $H\ltimes \Phi \ltimes s(t)$ by resorting to the {\em swap matrix} $W$ of suitable size \cite{BCNCheng},
thus obtaining
\begin{eqnarray}
s(t+1) &=& F \ltimes M \ltimes W \ltimes H\ltimes \Phi \ltimes s(t) \ltimes x(t) \nonumber \\
&=& A (s(t) \ltimes x(t)), \label{P}
\end{eqnarray}
where
$$A := F \ltimes M \ltimes W \ltimes H\ltimes \Phi \in {\mathcal L}_{N_s \times N_s N_x}.$$
Similarly,
if we replace \eqref{p2} in \eqref{pc1} we get:
\begin{eqnarray}
x(t+1) &=& L \ltimes H \ltimes s(t) \ltimes x(t)\nonumber \\
&=& B (s(t) \ltimes x(t)),\label{PC} \end{eqnarray}
where
$$B := L \ltimes H\in {\mathcal L}_{N_x \times N_s N_x}.$$
\\
Now, the overall model, keeping into account both the Patient Context and the Patient Model,
becomes:
\begin{eqnarray}
s(t+1) &=& A \ltimes s(t) \ltimes x(t),\\
x(t+1) &=& B \ltimes s(t)\ltimes x(t).
\end{eqnarray}
If we introduce the status of the overall system
$$v(t) := s(t)\ltimes x(t) \in {\mathcal L}_{N_sN_x},$$
we get
$$
v(t+1) = (A \ltimes v(t)) \ltimes (B \ltimes v(t)).$$
It is a matter of elementary calculations to verify that
once we denote by $a_i$ the $i$-th column of $A$ and by $b_j$ the $j$-th column of $B$, the previous equation can be equivalently rewritten as
\begin{equation}
v(t+1) = W \ltimes v(t),
\label{stato_v}
\end{equation}
where
$$W := \begin{bmatrix} a_1 \ltimes b_1 & a_2\ltimes b_2 & \dots & a_{N_sN_x}\ltimes b_{N_sN_x}\end{bmatrix}
\in {\mathcal L}_{(N_sN_x)\times (N_sN_x)}.$$
In addition, one can assume as system output
$$y(t)= M x(t)$$
that can be rewritten as
\be
y(t)= \Psi v(t),
\label{output_v}
\ee
where
$$\Psi := \begin{bmatrix} M & M & \dots & M\end{bmatrix} \in {\mathcal L}_{N_y\times N_sN_x}.$$
So, equations \eqref{stato_v} and \eqref{output_v} together describe a BN that models the overall closed-loop system.
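The construction of $W$ and $\Psi$, and the consistency of the closed-loop update, can be verified on toy sizes as follows (our own sketch; the sizes and matrices are arbitrary placeholders):
\begin{lstlisting}[frame=single,caption={Closed-loop BN construction (illustrative sketch)}]
import numpy as np

rng = np.random.default_rng(0)
def rand_logical(rows, cols):
    Lm = np.zeros((rows, cols))
    Lm[rng.integers(0, rows, size=cols), np.arange(cols)] = 1.0
    return Lm

Ns, Nx, Ny = 4, 3, 2
A = rand_logical(Ns, Ns * Nx)    # s(t+1) = A (s semi-tensor x)
B = rand_logical(Nx, Ns * Nx)    # x(t+1) = B (s semi-tensor x)
Mmat = rand_logical(Ny, Nx)      # y = M x

# column j of W is a_j semi-tensor b_j = kron(a_j, b_j)
W = np.stack([np.kron(A[:, j], B[:, j]) for j in range(Ns * Nx)], axis=1)
Psi = np.tile(Mmat, (1, Ns))     # Psi = [M M ... M]

s, x = np.eye(Ns)[:, [1]], np.eye(Nx)[:, [2]]
v = np.kron(s, x)                # joint state v = s semi-tensor x
assert np.allclose(W @ v, np.kron(A @ v, B @ v))  # closed-loop consistency
assert np.allclose(Psi @ v, Mmat @ x)             # output y = M x
\end{lstlisting}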
\medskip
\section{Real life requirements and their mathematical formalization}\label{reallife}
In this section we investigate the properties of the overall system, obtained by the feedback connection of the Patient Context and of the Patient Model, namely the BN \eqref{stato_v}-\eqref{output_v}.
\\
As stated in Section \ref{Intro}, we aim to provide general ideas about the mathematical properties of the system that have a clear practical relevance in this context, rather than to check those properties for a specific choice of the logical matrices involved in the system description. Thus, we shall not provide numerical values for the quadruple of logical matrices $(L,M,F,H)$, but we shall show how to reduce our specific feedback system (or parts of it) to standard set-ups for which these properties have already been investigated.
\subsection{Identifiability of the Patient Status}
A first question that is meaningful to pose is whether the Patient Model is a good one, namely it will lead to a correct functioning of the overall system.
In order to clarify what we mean when posing this question, we first need to better explain the perspective we have taken in modelling the patient.
We have assumed that the patient is in a certain medical condition with respect to a specific medical problem.
So, the diagnosis pertains only to the level/seriousness of the patient's health condition, and not to the specific cause of the illness.
Such a medical condition is unknown to the nurse, but of course it is the reason why the patient's vital parameters (\textit{bt, bp, hf}), namely the patient's output $U(t)$, take certain values. The medical status is also affected by the therapy $Y$ and can be associated with different values of $U$, so the output measure $U(t)$ at time $t$ together with the therapy $Y(t)$ (or $Y_1(t)$) does not allow one to uniquely determine $S(t)$. In addition, some therapies may need some time to become effective (which is the reason why we introduced the state variable $S_2(t)$).\\
On the other hand, a good (deterministic) model of the patient\footnote{As previously mentioned, we have adopted a deterministic model and assumed that everything works according to statistics and well settled procedures: therapies are designed according to specific protocols and statistically lead to the full recovery of the patient. This is the reason why the possibility that the patient dies is not contemplated.}
necessarily imposes that the measured vital parameters are significant and hence allow physicians to determine the actual Patient's Status after a finite number of observations.
\\
From a mathematical point of view, this amounts to assuming that \textit{the Patient Model \eqref{p1}-\eqref{p2} is reconstructible}, namely that there exists $T\in {\mathbb Z}_+$ such that the knowledge of the signals
$u(\cdot)$ and $y(\cdot)$ in $[0,T]$ allows one to uniquely determine $s(T).$
Specifically, we have the following definition:\\
\begin{definition}
The BCN \eqref{p1}-\eqref{p2}, with $s(t)\in {\mathcal L}_{N_s}, u(t)\in {\mathcal L}_{N_u}$ and $y(t)\in {\mathcal L}_{N_y}$, is said to be {\em reconstructible} if there exists $T\in {\mathbb Z}_+$ such that the knowledge of the input and output vectors in the discrete interval $\{0,1,\dots, T\}$ allows one to uniquely determine the final state $s(T)$.
\end{definition}
\medskip
It is worth noticing that the BCN \eqref{p1}-\eqref{p2} is different from the standard ones for which the observability and reconstructibility problems have been addressed in the literature (see \cite{EF_MEV_BCN_obs2012,MM_obs,Zhang_obs}), since this BCN is intrinsically in a closed-loop condition, as the BCN output $u(t)$ affects the state update at time $t+1$.
However, by replacing \eqref{p2} in \eqref{p1}, and by using again the power reducing matrix, we can obtain:
\begin{eqnarray*}
s(t+1) &=& F \ltimes y(t)\ltimes H \ltimes \Phi \ltimes s(t), \\
u(t) &=& H s(t), \qquad t\in {\mathbb Z}_+,
\end{eqnarray*}
which, in turn, can be rewritten as
\begin{eqnarray}
s(t+1) &=& {\mathbb F} \ltimes y(t) \ltimes s(t), \label{p1_rec} \\
u(t) &=& H s(t), \qquad t\in {\mathbb Z}_+, \label{p2_rec}
\end{eqnarray}
where
$${\mathbb F} := \begin{bmatrix} {\mathbb F}_ 1 & {\mathbb F}_2 & \dots & {\mathbb F}_{N_y}\end{bmatrix}
$$
and $$
{\mathbb F}_i := \begin{bmatrix} f_i\ltimes (H \ltimes \Phi)\delta^1_{N_s} & \dots & f_i \ltimes (H \ltimes \Phi)\delta^{N_s}_{N_s}\end{bmatrix},$$
where we have denoted by $f_i$ the $i$-th column of the matrix $F$.\\
This allows to reduce the reconstructibility problem for this specific BCN to a standard one, for which there are lots of results and algorithms (see \cite{EF_MEV_BCN_obs2012,Zhang_obs,ZhangJohansson2020,ZhangLeifeldZhang2019}). \\
Clearly, the matrices $F$ and $H$ must be properly selected in order to guarantee the reconstructibility of the patients' status. This means, in particular, that the vital parameters to measure must be chosen in such a way that they are significant enough to allow to identify the actual medical conditions of the patient.
\medskip
From a less formal viewpoint, it is worth underlining that the reconstructibility problem reduces to the problem of correctly identifying the state variable $s_1(t)$, since the definition of $s_i(t), i=2,3,4,$ allows one to immediately deduce that such values can be uniquely determined from the variables $y_1(t)$ and $u(t)$.
So, one could focus on a lower-dimensional model expressing $s_1(t+1)$ in terms of $s_i(t), i=1,2,3,4$, $u(t)$ and $y_1(t)$, where $s_i(t), i=2,3,4, u(t)$ and $y_1(t)$ are known, and address the reconstructibility of $s_1(t)$ from $u(t)$, assuming $s_i(t), i=2,3,4$, and $y_1(t)$ as inputs.
\medskip
\subsection{Correct diagnosis}
Of course, once we have ensured that the Patient Model \eqref{p1}-\eqref{p2} is reconstructible, and hence we have properly chosen the vital parameters to measure in order to identify the Patient Status, the natural question arises: Is the Patient Context correctly designed so that after a finite (and possibly small) number of steps $T$, the Patient's Status $s_1(t)$ and the Estimated Patient's Status $x_1(t)$ coincide for every $t\ge T$? This amounts to saying that the protocols to evaluate the Patient Status have been correctly designed.
\\
To formalize this problem, we need to introduce a comparison variable, say $z(t)$.
This variable takes the value $\delta^1_2$ (namely the unitary or YES value) if $s_1(t)=x_1(t)$ and the value
$\delta^2_2$ (namely the zero or NO value) otherwise.
Keeping in mind that $S_1(t)$ takes values in $\{H,C, I,LC\}$ (and hence $s_1(t)\in {\mathcal L}_4$), while
$X_1(t)$ takes values in $\{H,C,UO,I,$ $LC\}$ (and hence $x_1(t)\in {\mathcal L}_5$),
this leads to
$$z(t)= \begin{bmatrix} C_1 & C_2 & C_3 & C_4 \end{bmatrix} s_1(t)\ltimes x_1(t),$$
where\footnote{ To improve the notation one could sort the set of values of the Estimated Patient State as follows: $\{H,C, I,LC,UO\}$. In this way, each of the blocks $C_i$ would have the $i$th column equal to $\delta^1_2$ and all the remaining ones equal to $\delta^2_2$.}
$C_i \in {\mathcal L}_{2\times 5}$ for every $i\in [1,4].$ Moreover,\\
$C_1$ is the block whose first column is $\delta^1_2$ while all the others are $\delta^2_2$;\\
$C_2$ is the block whose second column is $\delta^1_2$ while all the others are $\delta^2_2$;\\
$C_3$ is the block whose fourth column is $\delta^1_2$ while all the others are $\delta^2_2$;\\
$C_4$ is the block whose fifth column is $\delta^1_2$ while all the others are $\delta^2_2$.
Clearly, $z(t)$ can also be expressed as a function of $s(t)$ and $x(t)$ and hence as a function of $v(t)$. This leads to
$$z(t) = {\mathbb C} v(t),$$
for a suitable ${\mathbb C}\in {\mathcal L}_{2\times N_sN_x}$.
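For concreteness, the blocks $C_1,\dots,C_4$ and the comparison variable can be assembled as follows (our own sketch; only the matching pattern described above is used):
\begin{lstlisting}[frame=single,caption={Comparison variable z(t) (illustrative sketch)}]
import numpy as np

YES, NO = np.eye(2)[:, 0], np.eye(2)[:, 1]  # delta_2^1, delta_2^2
match_col = {1: 1, 2: 2, 3: 4, 4: 5}        # s1 = i matches that column of x1
blocks = []
for i in range(1, 5):
    Ci = np.tile(NO.reshape(2, 1), (1, 5))  # default: all columns NO
    Ci[:, match_col[i] - 1] = YES           # matching column: YES
    blocks.append(Ci)
Cmat = np.hstack(blocks)                    # [C_1 C_2 C_3 C_4] in L_{2x20}

s1 = np.eye(4)[:, [2]]                      # s1 = I   (delta_4^3)
x1 = np.eye(5)[:, [3]]                      # x1 = I   (delta_5^4)
z = Cmat @ np.kron(s1, x1)                  # kron = semi-tensor product here
print(np.allclose(z.ravel(), YES))          # True: correct diagnosis
\end{lstlisting}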
So, the problem of understanding whether the system is designed to produce the correct diagnosis can be equivalently translated into the mathematical problem of determining whether for every initial condition, $v(0)$, the output trajectory of the system
\begin{eqnarray}
v(t+1) &=& W v(t) \label{BNlarga}\\
z(t) &=& {\mathbb C} v(t)
\end{eqnarray}
eventually takes the value $\delta^1_2$. In other words, we need to ensure that there exists $T\in \mathbb {Z}_+$ such that, for every $v(0)\in {\mathcal L}_{N_sN_x}$, the corresponding output trajectory $z(t), t\in {\mathbb Z}_+,$ satisfies
$z(t)=\delta^1_2$ for every $t\ge T$.\\
Note that the idea is that once the seriousness level of the patient's illness has been correctly diagnosed, this information will never be lost, even if the patient's health status changes.\\
Another way of looking at this problem is to define the set of states
$${\mathcal C}\!\!{\mathcal D} :=\{ v(t)\in {\mathcal L}_{N_sN_x} : s_1(t)\ltimes x_1(t)\in \{\delta^1_4\ltimes \delta^1_5, \delta^2_4\ltimes \delta^2_5, \delta^3_4\ltimes \delta^4_5, \delta^4_4\ltimes \delta^5_5\}\},$$
that represents all possible situations where the Estimated Patient Status $x_1(t)$ coincides with the
Patient Status $s_1(t)$ (in other words, ${\mathcal C}\!\!{\mathcal D}$ is the set of correct diagnoses), and to impose that such a set is a global attractor of the system.\\
From a formal point of view, the set
${\mathcal C}\!\!{\mathcal D}$ is a {\em global attractor of the BN} \eqref{BNlarga}
if there exists $T\ge 0$ such that for every $v(0)\in {\mathcal L}_{N_sN_x}$ the corresponding state evolution $v(t), t\in {\mathbb Z}_+$, of the BN \eqref{BNlarga} belongs to ${\mathcal C}\!\!{\mathcal D}$ for every $t\ge T$.\\
This property can be easily checked \cite{BCNCheng,EF_MEV_BCN_Aut} by simply verifying that all rows of $W^{N_sN_x}\in {\mathcal L}_{N_sN_x\times N_sN_x}$, the $N_sN_x$-th power of $W$, are zero except for those whose indexes correspond to the canonical vectors in ${\mathcal C}\!\!{\mathcal D}.$
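In code, the check amounts to one matrix power and a row test; the following sketch (our own, on a toy network where states 3 and 4 feed into the cycle between states 1 and 2) illustrates it:
\begin{lstlisting}[frame=single,caption={Global attractor test (illustrative sketch)}]
import numpy as np

W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)  # all columns canonical
n = W.shape[0]
attractor = {0, 1}                          # 0-based indices of delta_4^1, delta_4^2
Wp = np.linalg.matrix_power(W, n)
nonzero_rows = set(np.flatnonzero(Wp.sum(axis=1)))
print(nonzero_rows <= attractor)            # True: global attractor
\end{lstlisting}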
\subsection{Successful therapies}
As previously mentioned, when modeling the evolutions of the Patient Context and of the Patient Model in a deterministic way, we are describing the evolution of the average case of a patient affected by a specific form of illness. Accordingly, we are giving certain interpretations to the patient's symptoms, as captured by the values of his/her vital parameters, and based on them we are applying well-settled medical protocols to prescribe therapies and locations where such therapies need to be administered.
In this context it is clear that death is not contemplated, since this would correspond to assuming that a given medical protocol deterministically leads to the death of the patient and this does not make sense. Similarly, a protocol that deterministically leads to an equilibrium state where the Patient's Status is $C, I$ or $LC$ is not acceptable. In other words, the only reasonable solution is to have designed the Patient Context in such a way that 1) the Patient Status is eventually H; 2) the Estimated Patient Status is, in turn, H.\\
Note that 1) and 2) correspond to imposing that \textit{the global attractor of the system evolution is a proper subset, say ${\mathcal H}$, of the set
${\mathcal C}\!\!{\mathcal D}$} we previously defined. Specifically,
we define the set ${\mathcal H}$ as follows:
$${\mathcal H} :=\{ v(t)\in {\mathcal L}_{N_sN_x} : s_1(t)\ltimes x_1(t)= \delta^1_4\ltimes \delta^1_5\},$$
that represents all possible situations where the Estimated Patient Status $x_1(t)$ is healthy and coincides with the
Patient Status $s_1(t)$ (in other words, ${\mathcal H}$ is the set of states corresponding to a healthy patient whose health status has been correctly identified), and to impose that such a set is a global attractor of the system\footnote{Note that we are not introducing additional constraints, in particular we are assuming that the vital parameters $u$ of the patient can change within the set of values compatible with the healthy status. Of course, one could further constrain the set ${\mathcal H}$ by assuming that the prescribed therapy is Th0, the patient is at home, and all the counters have reached the saturation level. Even in this case, we may regard as acceptable the existence of a limit cycle, since this would only correspond to oscillations of the values of the state variable $s_4$ within a small set of values that do not raise any concern. Clearly, one may impose also for $s_4$ and hence for $u$ a prescribed desired value, and this would mean asking that the system has a single {\em equilibrium point} (the set ${\mathcal H}$ has cardinality one) which is a global attractor.}.\\
Also, in this case it is possible to verify whether such a requirement is met by checking that all rows of $W^{N_sN_x}\in {\mathcal L}_{N_sN_x\times N_sN_x}$, the $N_sN_x$-th power of $W$, are zero except for those whose indexes correspond to the canonical vectors in ${\mathcal H}.$
\section{Conclusions} \label{last}
In this paper we have used an interesting case study, related to the
evolution of the health status of a patient, to illustrate how
a feedback Context-aware system can be modeled by means of a BCN. Indeed, the patient is subjected to medical therapies and his/her vital parameters are
not only the outcome of the therapies, but also the input based on which therapies are prescribed.
By referring to a simplified deterministic model in terms of BCNs/BNs, we have been able to illustrate how the most natural practical goals that the overall closed-loop system needs to achieve may be formalized, and hence investigated, by resorting to well-known
System Theory concepts.
Clearly, the given model can be improved and tailored to the specific needs, to account for more complicated algorithms, and more exhaustive sets of data, but the core ideas have already been captured by the current model.
Also, we have addressed what seemed to be the most natural targets in the specific context, but different or additional properties
may be investigated, in case the same modeling technique is applied to describe closed-loop Context Aware systems of different nature.\\
The use of a deterministic model of the patient's health evolution, to plan therapies based on measured vital parameters, represents a first step toward the design of an accurate algorithm to employ in the
mobile device of a nurse.
A probabilistic model, together with some warning system that advises the nurse of when different
decisions are possible with different confidence levels, and hence there is the need for the immediate supervision
of a specialist, is the target of future research.
\section{Introduction}
In recent years, a series of studies showed that coherent low frequency electromagnetic cyclotron waves (ECWs) with a typical frequency of 0.1$-$0.5 Hz at 1 AU can be detected widely in the solar wind \citep{jia09p05,jia10p15,jia14p23,boa15p10,gar16p30,wic16p06}. These waves are characterized by narrow band and have frequencies near the proton cyclotron frequency. They are transverse waves and propagate mainly in the directions quasi-parallel (or antiparallel) to the ambient magnetic field. They can be sporadic in occurrence with a median duration of 51.5 s \citep{jia09p05}, or appear in clusters with durations exceeding 10 min \citep{jia14p23}. Their polarization senses can be left-handed (LH) or right-handed (RH) with respect to the magnetic field in the spacecraft frame, and more ECWs were found to be LH polarized waves with percentage of 64\% \citep{jia09p05} or 55\% \citep{jia14p23} at 1 AU.
Theoretically, many mechanisms can contribute to the generation of the ECWs. They are related to plasma instabilities driven by temperature anisotropies and/or differential velocities between ion populations \citep{gar93,lix00p83,luq06p01,ver13p63,omi14p42}. A plasma with perpendicular temperature ($T_{\perp}$) larger than parallel temperature ($T_\parallel$) may amplify ion cyclotron waves that are inherently LH polarized, while a plasma with converse temperature anisotropy ($T_{\perp} < T_\parallel$) can excite magnetosonic waves that are RH polarized in the plasma frame. When the ion beam/core relative flow speed is sufficiently large (typically exceeding the Alfv\'en speed; \citealt[p166]{gar93}), the plasma may generate ion cyclotron waves or magnetosonic waves depending on beam parameters. For details of the mechanism associated with the differential flow of protons, one can also refer to the literature \citep[e.g.,][]{abr79p53,dau98p13,dau99p57,gol00p53}. The combined effects of temperature anisotropies and proton differential flows have also been discussed in recent years, for the slow solar wind \citep{gar16p30} as well as the descending part or trailing edge of the fast solar wind \citep{wic16p06,jia16p07}. The results tend to suggest that the main driver of instabilities is temperature anisotropies, but proton differential flows provide additional free energy and amplify the wave growth in the solar wind \citep{wic16p06,jia16p07}.
In this Letter, we report our finding that the LH ECWs and RH ECWs have significantly different behaviors in their time-dependent occurrence rates as well as preferential plasma conditions, which may provide indication or important constraint on the mechanisms of generating ECWs in the solar wind. The data and analysis methods used in this Letter are described in Section 2. Section 3 presents the results, and Section 4 gives our discussion and summary.
\section{Data and analysis methods}
The data used in the present Letter are from the STEREO-A spacecraft, which has orbits near 1 AU in the ecliptic
plane and can provide continuous magnetic field data with resolution of 8 Hz as well as plasma data with resolution of 1 min \citep{kai08p05,luh08p17,gal08p37}. Based on the magnetic field, we perform a survey of ECWs and calculate their occurrence rates over the period between 2007 and 2013. An automatic wave detection procedure is employed to identify ECWs. The procedure is developed by \citet{zha17p79} and mainly consists of three steps for magnetic field data in some time interval of 100 s. The first step is to obtain the normalized reduced magnetic helicity that takes values in the range from $-1$ to 1 \citep{mat82p11,gar92p03,hej11p85}. The magnetic helicity is actually a spectrum with resolution of 0.01 Hz, and the spectrum values are examined in the frequency range from 0.05 to 1 Hz. If the spectrum has positive values $\geq$ 0.7 or negative values $\leq -$0.7 in some frequency band with minimum bandwidth of 0.05 Hz, the second step will be carried out to identify enhanced power spectrum requiring transverse wave power three times larger than the background power in the same frequency band; the background power is obtained via fitting the entire transverse power spectrum with a power law. A wave amplitude criterion of 0.1 nT is also set, which completes the third step. During the process a Hamming window and a band-pass filter are used to reduce edge effects and determine a wave amplitude \citep{bor07p04,wil09p06}. The procedure can give the time intervals of ECW occurrence, and their time occurrence rates accordingly. It can also directly give the polarization senses of the waves based on the sign of spectrum values of magnetic helicity.
Note that the polarization is described in the spacecraft frame throughout the paper except where we point out the plasma frame. The polarization reverses between the two reference frames for ECWs propagating toward the Sun \citep[e.g.][]{jia09p05,gar16p30}. This is due to the large Doppler shift resulting from the fast motion of the solar wind relative to the (approximately stationary) spacecraft. The speed of the solar wind is typically 5$-$8 times the Alfv\'en speed, which is also much greater than the phase velocity of the ECWs, i.e., typically the Alfv\'en speed \citep{jia09p05,jia10p15}. The Doppler shift can be estimated via the relation $f_{sc} = f_{sw}(1+\frac{V_{sw}}{V_A}\hat{\textbf{k}}\cdot\hat{\textbf{V}}_{sw})$ introduced by \citet{jia09p05}, where $f_{sc}$ and $f_{sw}$ are the wave frequencies in the spacecraft frame and in the plasma frame, respectively, $\textbf{k}$ denotes the wave propagation vector, $\textbf{V}_{sw}$ is the solar wind velocity, and $V_A$ is the Alfv\'en speed. It is clear that the second term dominates, with $\hat{\textbf{k}}\cdot\hat{\textbf{V}}_{sw} \simeq 1$ (for ECWs propagating away from the Sun) or $\hat{\textbf{k}}\cdot\hat{\textbf{V}}_{sw} \simeq -1$ (for ECWs propagating toward the Sun).
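As an illustrative numerical example (the values here are ours, chosen only to indicate orders of magnitude), taking $V_{sw}/V_A = 6$ and $f_{sw} = 0.05$ Hz, an anti-sunward wave ($\hat{\textbf{k}}\cdot\hat{\textbf{V}}_{sw} \simeq 1$) would be observed at $f_{sc} \simeq 0.05\times(1+6) = 0.35$ Hz, within the 0.1$-$0.5 Hz band quoted above, while a sunward wave ($\hat{\textbf{k}}\cdot\hat{\textbf{V}}_{sw} \simeq -1$) gives $f_{sc} \simeq 0.05\times(1-6) = -0.25$ Hz, the negative sign corresponding to the reversal of the apparent polarization sense in the spacecraft frame.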
\section{Results}
Figure 1 presents the most important finding of this Letter, in which the occurrence rates for each month are plotted. In the figure, the red line shows the occurrence rates of ECWs with LH polarization and the blue line displays those with RH polarization. One may first find that the occurrence rate of LH ECWs is larger than that of RH ECWs for most months, which is compatible with the previous result that more ECWs are LH polarized waves in the solar wind \citep{jia09p05,jia10p15,jia14p23,boa15p10}. In particular, the occurrence rate of LH ECWs fluctuates considerably over a wide range, from about 0.5\% to 2\%, and even exceeds 2.5\% in some months. The occurrence rate of RH ECWs, however, shows only weak fluctuation around 0.46\% (mean value, with a standard deviation of 0.22\%), and may be regarded approximately as a constant relative to the occurrence rate of LH ECWs. In addition, it should be noted that the minimum of the occurrence rate of LH ECWs is comparable to that of RH ECWs.
In order to understand the implication of the result presented in Figure 1, we investigate the local plasma characteristics associated with the occurrence of ECWs, as well as the dependence of their occurrence rates on the ambient plasma properties. The results reveal the preferential plasma conditions favoring the LH ECWs and a considerable dependence of their occurrence rate on these conditions. Figure 2 displays plasma parameters with respect to months, where the panels from top to bottom correspond to the proton temperature ($T_p$), proton density ($N_p$), and proton velocity ($V_p$). (The plasma data in the first month of 2007 are not available.) In each panel the red line is the median of a plasma parameter associated with LH ECWs, while the black line is the median of the plasma parameter for all plasmas (referred to as the ``ambient median'' for convenience). The plasma parameter associated with ECWs refers to the plasma data recorded nearly simultaneously with the ECWs; an averaging operation is performed on the plasma data over each time interval of 100 s in which an ECW is found. In contrast, the plasma parameter for all plasmas refers to all the plasma data, irrespective of whether ECWs are present or not. One can find that the medians associated with the waves vary with trends similar to those of the ambient medians. In particular, the median of the proton temperature (density) as well as velocity is predominantly larger (smaller) than the corresponding ambient median. This result tends to imply that high temperature, low density, and large velocity are preferential plasma conditions for the generation or survival of LH ECWs in the solar wind.
Furthermore, a plasma with higher temperature, lower density, and larger velocity appears to favor a higher occurrence rate of LH ECWs, since these quantities show positive or negative correlations when the time series in Figures 1 and 2 are compared. The correlations are particularly strong for the proton temperature and velocity.
Figure 3 presents scatter plots of the occurrence rate of LH ECWs against the ambient medians of proton temperature (left panel) and velocity (right panel). The line with positive slope in each panel of Figure 3 is the best linear fit to the scattered data. As shown, the correlations are considerable, with correlation coefficients ($C$) approaching 0.8.
As for the case of RH ECWs (not shown), the preferential plasma conditions for wave generation and the dependence of the wave occurrence rate on the ambient plasma properties described above are not evident; the medians of the plasma parameters for the RH ECWs vary around the ambient medians, and the correlation between the wave occurrence rate and the ambient temperature as well as velocity is negligibly small, with $C < 0.2$.
\section{Discussion and summary}
ECWs are common wave activities in the solar wind \citep{jia09p05,jia14p23}. Using the data from the STEREO-A spacecraft and the method developed by \citet{zha17p79}, this Letter first carries out a survey of the occurrence rates of ECWs in the solar wind for each month between 2007 and 2013. The results show that the occurrence rate of LH ECWs is larger than that of RH ECWs for most months. Moreover, the occurrence rate of LH ECWs fluctuates considerably over a wide range, while the occurrence rate of RH ECWs tends to be approximately constant. In addition, the minima of the occurrence rates are comparable for LH and RH ECWs. Preferential plasma conditions favoring LH ECWs and a considerable dependence of the occurrence rate of LH ECWs on the ambient plasma properties are revealed, which may provide indications on the mechanisms generating ECWs in the solar wind.
On the basis of the results in Figures 2 and 3, one may speculate that high-speed solar wind streams are relevant to answering the question of what factor causes the difference in occurrence rates between LH and RH ECWs presented in Figure 1. Many studies have shown that the plasmas in high-speed streams are characterized by higher temperature, lower density, and larger velocity than those in slow-speed flows \citep{bur74p17,gos78p01,cra02p29}, which first provides preferential conditions contributing to a higher occurrence rate of LH ECWs according to the present research. Moreover, minor ions such as alpha particles in high-speed streams generally flow faster than protons, forming a differential flow with a velocity on the order of the Alfv\'en speed \citep{mar82p35,mar91p52,kas08p03}. We believe that the presence of such a differential flow could offer a specific mechanism for generating LH ECWs in the solar wind. This idea is also supported by observations of alpha particles. Figure 4 (left panel) is a scatter plot of the occurrence rate of LH ECWs versus the ambient median of the alpha$-$proton drift velocity ($V_d$) in each month. The alpha data are available intermittently from February 2007 to December 2010, and currently have a low resolution of 10 minutes for STEREO. Nevertheless, a positive correlation with a coefficient exceeding 0.7 is shown. For the sake of comparison, the result for RH ECWs is also plotted in Figure 4 (right panel). The correlation for RH ECWs is small, with $C = 0.19$.
Theoretically, the significance of the effect of alpha$-$proton differential flow on proton temperature anisotropy instabilities has been demonstrated via hybrid simulations and linear Vlasov$-$Maxwell theory \citep{hel06p07,pod11p41}. In particular, the results of \citet{pod11p41} show that the alpha$-$proton differential flow causes the instability with $T_{\perp} > T_\parallel$ to preferentially generate ion cyclotron waves propagating away from the Sun, and causes the instability with $T_{\perp} < T_\parallel$ to preferentially generate magnetosonic waves propagating toward the Sun. Note that although magnetosonic waves are RH waves in the plasma frame, these waves will also appear as LH waves in the spacecraft frame due to the large Doppler shift described in Section 2. In short, proton temperature anisotropies generate the observed LH ECWs when one takes into account the differential flow of alpha particles relative to the protons \citep{pod11p41}.
The present discussion tends to imply local sources of ECWs driven mainly by proton temperature anisotropies, which are common in the solar wind \citep{mar82p52,mar04p02,mat12p73}. The temperature anisotropies can generate ion cyclotron waves or magnetosonic waves via instabilities. These waves may propagate toward or away from the Sun with comparable probability. An alternative scenario is that almost all of the waves propagate away from the Sun, in which case the waves are composed of ion cyclotron waves and, in principle, an equal amount of magnetosonic waves \citep{gar16p30,jia16p07}. Both cases lead to similar occurrence rates for LH and RH ECWs, which may be the reason why LH and RH ECWs have comparable minima of their occurrence rates, as shown in Figure 1. However, the presence of high-speed streams and therefore of differential flow of alpha particles would change this situation; the differential flows have an important effect on the proton temperature anisotropy instabilities and cause the instabilities to preferentially generate LH ECWs \citep{pod11p41}. In this regard, it becomes easy to understand why the occurrence rates for LH and RH ECWs differ and why more ECWs are LH waves in the solar wind.
In summary, this Letter finds significant differences in the behavior of the occurrence rates of LH and RH ECWs. The occurrence rate for each month is nearly constant for the RH ECWs over the period of 7 years, but it varies significantly for the LH ECWs over the same period. Plasma with higher temperature, lower density, and larger velocity favors the LH ECWs, but there seem to be no preferential conditions for the RH ECWs. Further analysis indicates that the present finding is well consistent with the theory of the effect of the differential flow of alpha particles on the generation of ECWs. This finding is hence probably evidence for the effect concerning alpha particles. Further studies based on instability simulations are needed, and the parameters found here could constrain the initial conditions of such simulations to confirm the speculations in the present Letter.
\acknowledgments
This research was supported by NSFC under grant Nos. 41504131, 41674170, 41531071, 11373070, and the Key Laboratory of Solar
Activity at CAS NAO (KLSA201703). This research was also sponsored partly by the Plan For Scientific Innovation Talent of Henan Province. The authors thank NASA/GSFC for the use of data from STEREO, which are available freely via the Coordinated Data Analysis Web (http://cdaweb.gsfc.nasa.gov/cdaweb/istp\_public/).
\section{Introduction}
For over a decade several variants of the so-called least squares method of American option pricing have been widely used by financial practitioners and at the same time studied by researchers. The origins of the method can be found in the work of Carriere \cite{Car1996}, Tsitsiklis, Van Roy \cite{Tsi2001} (see also \cite{Tsi1999}), Longstaff, Schwartz \cite{LS} and Cl{\'e}ment, Lamberton, Protter \cite{Cle}. Basically, the method seeks a way of approximating conditional expectations needed in the valuation process either directly as in \cite{LS} and \cite{Cle}, or indirectly through the value function as in \cite{Tsi2001}.
A modification of the algorithm from \cite{LS} was studied in \cite{Cle} from the point of view of the convergence of the method. Subsequently, several papers on this subject have been published --- we will mention just a few of them related to the present article.
Glasserman and Yu \cite{Gla2004a} investigated in 2004 the convergence of the least squares like methods, where --- basically --- the necessary conditional expectations are approximated by finite linear combinations of approximating functions. More specifically, they looked into the problem of the accuracy of estimation when the number of approximating functions and the number of simulated trajectories increase. They assumed that the underlying is a multidimensional Markov process. The rather pessimistic outcome, from the practical point of view, is that for polynomials as the approximating functions and for the conventional (resp. geometric) Brownian motion as the underlying, the number of required paths may grow exponentially in the degree (resp. the square of the degree) of the polynomials. Glasserman and Yu remarked that a similar property may hold also for more general approximating functions (with the number of approximating functions replacing the maximal degree).
Also in 2004, Stentoft \cite{Ste2004} analyzed and extended the convergence results presented in \cite{Cle}. In particular, he has considered the problem of choosing the optimal number of regressors in relation to the number of simulated trajectories.
In 2005, Egloff \cite{Egl2005} proposed an extension to the original Longstaff-Schwartz \cite{LS} as well as Tsitsiklis-Van Roy (\cite{Tsi1999}, \cite{Tsi2001}) algorithms by treating the optimal stopping problem for multidimensional discrete time Markov processes as a generalized statistical learning problem. His results also improve those from \cite{Cle}. Egloff comments that despite very good performance of least squares algorithms in some practical calculations, precise estimates of the statistical quantities involved in these procedures may be difficult, leading to some less impressive performance in other cases.
Zanger \cite{Zan2009} proposed in 2009 another extension to the least squares method by considering fairly arbitrary subsets of information spaces as the approximating sets. He has also produced some new and interesting convergence results showing in particular that sometimes the exponential dependence on the number of time steps can be avoided. It should be mentioned that the least squares approach can be also seen as part of the stochastic mesh framework proposed by Broadie and Glasserman (\cite{BroGla1997}, \cite{BroGla2002}; see also \cite{Liu2009} and \cite{Gla2010}). It should also be observed that two features seem to be common to the articles mentioned above. Firstly, the underlying is assumed to be Markovian. Secondly, the convergence rates of the method, in all its incarnations, are not encouraging from the computational point of view.
In the present paper, we extend the Cl{\'e}ment, Lamberton, Protter approach \cite{Cle} to
a fairly general setting for the regression approximating conditional expectations. Also, in a natural way,
the underlying does not have to be Markovian and the pay-offs are allowed to be path-dependent.
While the lack of the Markov property can easily be circumvented in other ways, this always implies an additional computational cost.
Obviously, by aiming at a better approximation of conditional expectations, the potential computational complexity increases considerably. However, the main advantage of relaxing the assumptions is the increased freedom to customize the method. Moreover, we would like to argue that the least squares methods should be seen as a general framework leading to a variety of specific implementations. The main reason is essentially the fact that the information space for conditional expectation, or in other words its range, is in many interesting cases infinite dimensional. Inevitably, in these cases any approximation of conditional expectations, or of value functions depending on conditional expectations, has to involve significantly restrictive extrinsic assumptions to make practical computations possible. While general convergence results are necessary to motivate the overall approach and some computational complexity may be addressed along the lines of \cite{Rus1997}, it is most likely that future developments will evolve in the direction of simplified time-series models. It is quite conceivable that an alternative source of realism and numerical efficiency could exploit the advances in both time-series analysis and frame theory (see e.g. \cite{K-M-O}). The empirical basis for such speculations comes from the fact that in many real problems, even by taking only a few non-linear regressors, and sometimes ignoring the lack of the Markov property, one might arrive at satisfactory results from the practical point of view. There seems to be much anecdotal evidence coming from the financial industry supporting the last statement, and in this paper we provide further corroborating evidence in the form of three empirical examples.
The material is organized as follows. The introduction is followed by a short review of consequences of the classic Dobrushin-Minlos theorem, which can lead to viable numerical approximations of conditional expectations. After recalling briefly how Snell envelopes are used in the pricing of American-style options, we show that the methods proposed by Cl\'ement, Lamberton and Protter \cite{Cle} can be extended to cover the case of American style options with a very general approach to regression. The setting includes path dependent pay-offs and a non-Markovian multidimensional underlying. This is followed by three computational examples illustrating the viability of the method under rather restrictive assumptions. First, we present the pricing of one year Eurodollar American put and call options with different strike prices. Then, we use the least squares approach to price 1.5 month American put options whose payoff functions depend on two market indices, namely DAX and EUROSTOXX50. Finally, we use the least squares algorithm to price two 1.5 month American put options whose payoff functions are based on a single market index, under the assumption that the underlyings can be described by the Heston-Nandi GARCH(1,1) model~\cite{HesNan2000}. Again, we will use the EUROSTOXX50 and DAX indices as the respective underlying instruments.
\section{Approximation of conditional expectation}
In this section we will introduce some basic notation and recall a classic result of Dobrushin and Minlos \cite{Dob}, which provides motivation, as well as a choice of practical recipes, for approximation of conditional expectations via the so called {\it admissible projection systems}.
The Dobrushin-Minlos theorem shows a specific example of such an approximation, but of course there exist infinitely many non-polynomial constructions that would have the same property.
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space. Since we will be dealing only with random variables of finite variance, we can rely on the Hilbert space geometry in addressing the issues of interest (see \cite{Sma}).
A closed subspace $S\subset L^2(\Omega,\mathcal{F},\mathbb{P})$ is said to be
\emph{probabilistic} if it contains constants and is closed with
respect to taking the maximum of two of its elements, i.e. if $X,Y\in S$, then $X\vee Y\in S$.
For any non-empty set $\mathbf{X}\subset L^2(\Omega,\mathcal{F},\mathbb{P})$, its
\emph{lattice envelope} $\mathrm{Latt}(\mathbf{X})$ is defined as the smallest probabilistic subspace of $L^2(\Omega,\mathcal{F},\mathbb{P})$ containing $\mathbf{X}$. Moreover, if $\mathbf{X}=\{X_1,\ldots,X_n\}$ and $\mathcal{B}_n$ denotes the $\sigma$-algebra of Borel sets in $\mathbb{R}^n$, then it is not difficult to prove that
$
\mathrm{Latt}(\mathbf{X})=
L^2(\Omega,\sigma(\mathbf{X}),\mathbb{P})=
L^2(\Omega,(X_1,\ldots,X_n)^{-1}(\mathcal{B}_n),\mathbb{P}).
$
The latter is sometimes referred to as the \emph{information space generated by} $X_1,\ldots,X_n$.
Even if $\mathbf{X}$ consists of just one scalar random variable, $\mathrm{Latt}(\mathbf{X})$ is typically infinite-dimensional.
Since it is also the range of the orthogonal projection $\mathrm{E}[\cdot\,|\,X_1,\ldots,X_n]$, it would be desirable from the numerical standpoint to be able to approximate such projections with projections onto smaller finite-dimensional vector spaces using available least squares algorithms. However, approximating an orthogonal projection with infinite-dimensional range by projections onto finite dimensional subspaces makes most error estimates useless, unless the nature of the projected objects is somehow known beforehand.
In order to construct finite-dimensional approximation of conditional expectation one could use the following theorem, which is a slight reformulation of a result of Dobrushin and Minlos
\cite{Dob}.
\bigskip
\begin{thm}\label{thm:DopMin} Let
$(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and let
$\alpha>0$.
Let $\mathcal{P}_n$ denote the space of all polynomials of $n$ real variables.
If $X_1,\ldots,X_n$ are random variables such that
$e^{|X_j|}\in L^\alpha(\Omega,\mathcal{F},\mathbb{P})$ for $j=1,\ldots,n$, then:
\begin{description}
\item[(a)] $P(X_1,\ldots,X_n)\in L^p(\Omega,\mathcal{F},\mathbb{P})$ for any
polynomial $P\in\mathcal{P}_n$ and $p\in[1,\infty)$;
\item[(b)] the vector space $\{P(X_1,\ldots,X_n)\,:\,P\in\mathcal{P}_n\}$
is dense in $L^p(\Omega,\sigma(X_1,\ldots,X_n),\mathbb{P})$ for
every $p\in[1,\infty)$.
\end{description}
\end{thm}
It should be noted that the converse to part (a) is false as shown in the following example.
\bigskip
\begin{ex}\emph{
Let $n=1$ and let $X_{1}$ be a discrete valued random variable with probability mass function
\[
\mathbb{P}[X_{1}=m]=\frac{\frac{1}{m^{\ln m}}}{\sum_{m=1}^\infty\frac{1}{m^{\ln m}}},\quad m\in\mathbb{N}.
\]
Since for any $q\geq 1$ and $\alpha>0$
\[
\sum_{m=1}^\infty\frac{m^q}{m^{\ln m}}<\infty
\textrm{ and }
\sum_{m=1}^\infty\frac{e^{\alpha m}}{m^{\ln m}}=\infty,
\]
the property {\bf (a)} from Theorem~\ref{thm:DopMin} is satisfied but $e^{|X_{1}|}\not\in L^{\alpha}(\Omega,\mathcal{F},\mathbb{P})$. $\blacksquare$}
\end{ex}
If the probability measure $\mathbb{P}$ has bounded support in $\mathbb{R}^n$,
then the assumption of the Dobrushin-Minlos theorem is trivially satisfied. In fact, in this special case the conclusion of the theorem follows directly from the Stone-Weierstrass Theorem. It is also easy to see that
if $X$ is Gaussian, then $e^{|X|}\in L^1$. However, if $X$ is lognormal, then its moment generating function does not exist on the interval $(0,\infty)$ and hence $e^{|X|}\not\in L^\alpha$ for any $\alpha>0$.
In concrete applications, the condition $e^{|X|}\in L^\alpha$ can sometimes
be achieved by changing the probability distribution of {\lq\lq}very large{\rq\rq} values of $|X|$. For instance, this can be accomplished by truncation of the probability distribution or some direct attenuation of the random variable $X$. Another possibility is the use of suitable weight functions. In this context, the Dobrushin-Minlos theorem can be used to justify the density part in the construction of several classic polynomial bases in spaces of square integrable functions associated with the names of Jacobi, Gegenbauer, Legendre, Chebyshev, Laguerre and Hermite (see e.g. \cite{Chi1978}).
Let $V$ be an information space generated by random variables $X_1,\ldots,X_n$. Suppose that one can furnish a sequence of Borel functions $q_m:\mathbb{R}^n\longrightarrow\mathbb{R}$, with $m\in\mathbb{N}$, such that the set $\{q_m(X_1,\ldots,X_n)\,:\,m\in\mathbb{N}\}$ is linearly dense in $V$ (e.g. with the help of the Dobrushin-Minlos theorem). Then the conditional expectation operator $\mathrm{E}[\cdot\,|\,X_1,\ldots,X_n]$ is the pointwise limit of the sequence of projections onto the linear spaces $V^m=\mathrm{Lin}\{q_k(X_1,\ldots,X_n)\,:\,1\leq k\leq m\}$ as $m\nearrow\infty$. This observation leads to an auxiliary concept of admissible projection systems.
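To make this concrete, the following minimal R sketch (our illustration, not part of the development above) approximates a conditional expectation by a least squares projection onto the span of finitely many basis functions; the monomial basis and the toy target $Y=\sin(X)+\varepsilon$ are chosen purely for illustration.
\begin{verbatim}
# Minimal R sketch: approximate E[Y | X] by the projection of Y
# onto V^m = Lin{ q_1(X), ..., q_m(X) } with monomials q_k(x) = x^(k-1).
set.seed(1)
N <- 10000                        # sample size
X <- rnorm(N)                     # conditioning variable
Y <- sin(X) + rnorm(N, sd = 0.2)  # variable to be projected (toy choice)
m <- 4                            # number of basis functions
Q <- sapply(1:m, function(k) X^(k - 1))  # design matrix [q_k(X)]
alpha <- qr.solve(Q, Y)           # least squares coefficients
PmY <- Q %*% alpha                # P^m(Y), an approximation of E[Y | X]
\end{verbatim}
Increasing \texttt{m} corresponds to moving along the nested spaces $V^1\subset V^2\subset\ldots$, and the fitted values \texttt{PmY} approximate $\mathrm{E}[Y\,|\,X]$ increasingly well.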
Given a discrete time filtration $\{\emptyset,\Omega\}=\mathcal{F}_0\subset\mathcal{F}_1\subset\ldots
\subset\mathcal{F}_T\subset\mathcal{F}$ in the probability space $(\Omega,\mathcal{F},\mathbb{P})$, we define an \emph{admissible projection system} as a family
of orthogonal projections $P_t^m\,:\,L^2(\Omega,\mathcal{F},\mathbb{P})\longrightarrow L^2(\Omega,\mathcal{F},\mathbb{P})$, where
$t=1,\ldots,T$ and $m\in\mathbb{N}$,
with ranges $V_t^1\subset V_t^2\subset V_t^3\subset\ldots\ $, whose union is dense in $L^2(\Omega,\mathcal{F}_t,\mathbb{P})$ for each value of $t$.
Note that for any such system and for any fixed $t$, we get pointwise convergence of the projections $P_t^m$ to $\mathrm{E}[\cdot\,|\,\mathcal{F}_t]$. However, this is not a norm convergence unless the underlying sequence of subspaces becomes constant after finitely many steps.
It is well known that Snell envelopes are useful in valuation of American put options in discrete time models (see e.g. \cite{Pli}, p.127). They also furnish the main theoretical ingredient of the least squares option pricing algorithm which is the main topic of this paper. The standard use of Snell envelopes can be easily extended to provide pricing algorithms for more general American style options, that is options that allow execution at any time prior to maturity, but with a wide variety of pay-off patterns.
For a given probability space $(\Omega,\mathcal{F},\mathbb{P})$, let
$(\mathcal{F}_t)_{t=0}^T$ be a filtration, where
$\mathcal{F}_0=\{\emptyset,\Omega\}$ and $\mathcal{F}_T=\mathcal{F}$.
Assume that an adapted stochastic process $(Z_t)_{t=0}^T$ is integrable. One could look at $(Z_{t})$ as the intrinsic value process, that is the (discounted) value of executing some American-style option at time $t$. The \emph{Snell envelope of} $(Z_t)$ is defined as the
adapted process $U_t$ such that $U_T=Z_T$ and $U_t=\max\left(Z_t,\mathrm{E}[U_{t+1}|\mathcal{F}_t]\right)$ for $t\in\{0,\ldots,T-1\}$.
The value $U_{0}$ corresponds to the price of the option associated with the pay-offs given by $(Z_{t})$. Indeed, $(U_{t})$ can be seen as the application of the dynamic programming principle to the optimal stopping problem $\sup\{\mathrm{E}Z_\nu:\nu\in\mathcal{C}_0^T\}$,
where $\mathcal{C}_0^T$ denotes the set of all stopping times with values in the set $\{0,1,\ldots,T\}$ (cf. \cite{LamLap2007}, and references therein, for basic properties of Snell envelopes and applications to pricing American-style options). The dynamic programming principle can also be rewritten in terms of the sequence of stopping times $(\tau_{t})$, defined recursively by putting $\tau_T=T$ and
\[
\tau_t=
t\mathbf{1}_{\{ Z_t\geq E[Z_{\tau_{t+1}}\,|\,\mathcal{F}_t]\}}
+
\tau_{t+1}\mathbf{1}_{\{ Z_t<E[Z_{\tau_{t+1}}\,|\,\mathcal{F}_t]\}},
\qquad t=1,\ldots,T-1.
\]
In particular, we get $U_{t}=E[Z_{\tau_{t}}|\mathcal{F}_{t}]$ and consequently, $\tau_{0}$ is optimal for $(Z_{t})$.
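As a toy illustration of this recursion (our addition, with made-up numbers), the Snell envelope of an American put on a two-step binomial tree can be computed in a few lines of R; we take a zero interest rate so that the discounted intrinsic values coincide with the raw payoffs.
\begin{verbatim}
# Snell envelope of an American put on a two-step binomial tree:
# U_T = Z_T,  U_t = max( Z_t, E[ U_{t+1} | F_t ] ).
S0 <- 100; u <- 1.1; d <- 0.9; p <- 0.5; K <- 100   # r = 0 for simplicity
S <- list(S0, S0 * c(u, d), S0 * c(u^2, u*d, d^2))  # price tree, times 0,1,2
Z <- lapply(S, function(s) pmax(K - s, 0))          # intrinsic values Z_t
U <- Z[[3]]                                         # U_T = Z_T
for (t in 2:1) {                                    # backward induction
  cond <- p * U[-length(U)] + (1 - p) * U[-1]       # E[ U_{t+1} | F_t ]
  U <- pmax(Z[[t]], cond)                           # Snell recursion
}
U                                                   # U_0 = 5.25 here
\end{verbatim}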
The key element in any numerical implementation of Snell envelopes is the ability to approximate the conditional expectation operator. Except for the finite case, one has to deal with infinite-dimensional spaces of random variables. Some elucidation seems to be in order here.
Given an admissible projection system $(P_{t}^{m})$, for a fixed $m\in\mathbb{N}$ we define the stopping times
$\tau_t^{[m]}$ by recursion, putting $\tau_T^{[m]}=T$ and
\[
\tau_t^{[m]}=
t\mathbf{1}_{\{ Z_t\geq P_t^m(Z_{\tau_{t+1}^{[m]}})\}}
+
\tau_{t+1}^{[m]}\mathbf{1}_{\{ Z_t<P_t^m(Z_{\tau_{t+1}^{[m]}})\}},
\qquad t=1,\ldots,T-1.
\]
Then the following theorem generalizes a result due to Cl\'ement, Lamberton and Protter (see Theorem 3.1 in \cite{Cle}):
\medskip
\begin{thm}\label{thm:1}
If $(P_t^m)$ is an admissible
projection system, then
$
\lim_{m\to\infty}\mathrm{E}\left[Z_{\tau_t^{[m]}}\ |\ \mathcal{F}_t\right]
=\mathrm{E}[Z_{\tau_t}\ |\ \mathcal{F}_t]
$
for $t=1,\ldots,T$, where the convergence is in $L^2$.
\end{thm}
\emph{Proof:} Despite a much more general setting we have adopted here, we can use standard properties of projections in Hilbert spaces and proceed as in \cite{Cle}. $\blacksquare$
Obviously, the above considerations remain valid for vector valued stochastic processes.
\section{The least squares method of option pricing}
Assuming that the filtration is generated by a discrete time multivariate stochastic process, we will show how to use Monte Carlo methods to approximate numerically the value of the optimal stopping problem for a given adapted process $(Z_{t})$, i.e. how to approximate the Snell envelope $(U_{t})$ of that process. To do so, given an admissible projection system, we basically need to approximate numerically $\mathrm{E}\left[Z_{\tau_t^{[m]}}\right]$ for $m\in\mathbb{N}$, due to Theorem~\ref{thm:1} and the fact that $U_0=\max(Z_0,\mathrm{E}[Z_{\tau_1}])$.
In what follows we will denote the set of all real $(m\times n)$-matrices by $\mathbb{R}^{m\times n}$ with the convention that $\mathbb{R}^m=\mathbb{R}^{1\times m}$. Throughout the section we will use notation and methods similar to those introduced in \cite{Cle} but adapted to our less restrictive assumptions.
Suppose that $(X_t)_{t=0}^T$ is a discrete time $d$-dimensional stochastic process on the
probability space $(\Omega,\mathcal{F},\mathbb{P})$, with $X_0$ being a constant. This process is meant to represent the prices of the underlying assets for an American style option we wish to valuate.
Let
$
X=(X_1,\ldots,X_T):\Omega\longrightarrow\mathbb{R}^{d\times T}
$
and let
$\mathcal{F}_t=\sigma\left(X_0,\ldots,X_t\right)=
\sigma\left(X_1,\ldots,X_t\right)$ for $t=1,\ldots,T$. Given a family of Borel functions
$
f_t:\mathbb{R}^{d\times (t+1)}\longrightarrow\mathbb{R}_+,$ where $t=0,\ldots,T,
$
we define $Z_t=f_t(X_0,\ldots,X_t)$ for $t=0,\ldots,T$.
This sequence represents suitably discounted intrinsic prices of the option we want to consider. Such a general choice of functions $f_t$ expands the potential applicability well beyond American put options.
Next, we need to choose an admissible projection system for the filtration associated with $X$. This is equivalent to choosing for each $t\in\{1,\ldots,T\}$ a suitable sequence of Borel functions
$
q_t^k:\mathbb{R}^{d\times T}\longrightarrow\mathbb{R}$,
where $k\in\mathbb{N},$
which depend only on the first $t$ column variables, and are such that the sequence $\{q_t^k(X)\}_{k\in\mathbb{N}}$ is linearly dense and linearly independent in the space $L^2(\Omega,\sigma(X_1,\ldots,X_t),\mathbb{P})$. Then, we can select an increasing sequence of integers $(k_m)_{m\in\mathbb{N}}$, such that the spaces
$
V_t^m=\mathrm{Lin}\{q_t^k(X)\,:\,k=1,\ldots,k_m\}
$
and the orthogonal projections $P_t^m:L^2(\Omega,\sigma(X),\mathbb{P})\longrightarrow V_t^m$ have all the right properties. The symbol {\lq\lq}$\mathrm{Lin}${\rq\rq} denotes the linear envelope of the given set of vectors.
If the stopping times $\tau_t^{[m]}$ are defined as in the previous section, then for some
$\alpha_t^m\in\mathbb{R}^{k_m\times 1}$
we have
\[
P_t^m\left(Z_{\tau_{t+1}^{[m]}}\right)=e_t^m(X)\,\alpha_t^m,
\]
where the mapping $e_t^m$ is given by the formula
$
e_t^m=(q_t^1,\ldots,q_t^{k_m}):\mathbb{R}^{d\times T}\longrightarrow\mathbb{R}^{k_m}.
$
In view of our assumptions, the Gram matrix of the components of $e_t^m(X)$ (with respect to the inner product $(Y_1,Y_2)\mapsto\mathrm{E}[Y_1Y_2]$), that is the matrix
$
A_t^m=\Big[
\mathrm{E}\left[q_t^i(X)q_t^j(X)\right]
\Big]_{1\leq i,j\leq k_m}
\in\mathbb{R}^{k_m\times k_m},
$
is invertible and hence
\[
\alpha_t^m=(A_t^m)^{-1}
\left[
\begin{array}{c}
\mathrm{E}\left[Z_{\tau_{t+1}^{[m]}}\, q_t^1(X)\right]\\
\vdots\\
\mathrm{E}\left[Z_{\tau_{t+1}^{[m]}}\, q_t^{k_m}(X)\right]
\end{array}
\right].
\]
Given a number $N$, the next step is to use Monte-Carlo simulation to generate independent trajectories
$
X^{(n)}=\left(X^{(n)}_1,\ldots,X^{(n)}_T\right)\in\mathbb{R}^{d\times T}
$
of the
process $X$, for $n=1,2,\ldots,N$. Each simulation has the fixed starting point $X^{(n)}_0=X_0\in\mathbb{R}^{d\times 1}$.
Define
$
Z_t^{(n)}:=f_t\left(X^{(n)}_0,\ldots,X^{(n)}_t\right)
$
and let
$
\widehat{Z}_t=\left[
Z^{(1)}_t,\ldots,Z^{(N)}_t
\right]^* \in\mathbb{R}^{N\times 1}.
$
This column vector consists simply of the values at time $t$ of all simulated trajectories of the process $Z$.
Define also
\[
V_t^{(m,N)} = \textrm{Lin}
\left\{
\left[
\begin{array}{c}
q_t^{k}(X^{(1)})\\
\vdots\\
q_t^{k}(X^{(N)})
\end{array}
\right]\,:\,k=1,\ldots,k_m
\right\} \subset\mathbb{R}^{N\times 1}
\]
and
\[
P_t^{(m,N)}=\mathrm{Proj}_{V_t^{(m,N)}}:\mathbb{R}^{N\times 1}\longrightarrow\mathbb{R}^{N\times 1}
\]
with respect to the inner product $\frac{\langle x,y\rangle}{N},$ where $\langle x,y\rangle$ denotes the standard scalar product.
Note that
\[
V_t^{(m,N)}=\mathrm{Lin}\left\{
\textrm{the columns of the matrix }
\left[
\begin{array}{c}
e_t^m(X^{(1)})\\
\vdots\\
e_t^m(X^{(N)})
\end{array}
\right]\in\mathbb{R}^{N\times k_m}
\right\}\subset\mathbb{R}^{N\times 1}.
\]
Similarly, if we define the approximative stopping times $\tau_t^{n,m,N}$ by requiring that
$\tau_T^{n,m,N}=T$ and by putting
\begin{eqnarray*}
\tau_t^{n,m,N}&=&t \mathbf{1}_{ \left\{Z^{(n)}_{t}\geq
\pi_n\left[P_t^{(m,N)}(\widehat{Z}_{\tau_{t+1}^{n,m,N}})\right]\right\}
}+ \tau_{t+1}^{n,m,N} \mathbf{1}_{ \left\{Z^{(n)}_{t}<
\pi_n\left[P_t^{(m,N)}(\widehat{Z}_{\tau_{t+1}^{n,m,N}})\right]\right\}
},\\
&&\textrm{for }t=1,\ldots,T-1,
\end{eqnarray*}
where
$
\pi_n:\mathbb{R}^{N\times 1}\longrightarrow\mathbb{R}
$ is the projection on the $n$-th coordinate,
then for some $\alpha_t^{(m,N)}\in\mathbb{R}^{k_m\times 1}$ we have
\begin{eqnarray*}
P_t^{(m,N)}\left( \left[
\begin{array}{c}
Z_{\tau_{t+1}^{1,m,N}}^{(1)}\\
\vdots\\
Z_{\tau_{t+1}^{N,m,N}}^{(N)}
\end{array}
\right] \right)&=& \left[
\begin{array}{c}
e_t^m(X^{(1)})\\
\vdots\\
e_t^m(X^{(N)})
\end{array}
\right] \alpha_t^{(m,N)}.
\end{eqnarray*}
Let $A_t^{(m,N)}$ denote the $(k_m\times k_m)$-Gram matrix associated with the columns
of the matrix
\[
\left[
\begin{array}{c}
e_t^m(X^{(1)})\\
\vdots\\
e_t^m(X^{(N)})
\end{array}
\right],
\]
(with respect to the inner product $\frac{\langle x,y\rangle}{N}$).
This is simply the sample estimator of the Gram matrix $A_t^m$.
The vector $\alpha_t^{(m,N)}$ is then a solution of the equation
\[
A_t^{(m,N)}\alpha_t^{(m,N)}=
\frac{1}{N}
\left[
\begin{array}{c}
e_t^m(X^{(1)})\\
\vdots\\
e_t^m(X^{(N)})
\end{array}
\right]^*
\left[
\begin{array}{c}
Z^{(1)}_{\tau_{t+1}^{1,m,N}}\\
\vdots\\
Z^{(N)}_{\tau_{t+1}^{N,m,N}}
\end{array}
\right].
\]
By the Law of Large Numbers
$A_t^{(m,N)}\stackrel{a.s.}{\longrightarrow}A_t^m$ as $N\to\infty$,
and
hence for sufficiently large $N$ the matrix $A_t^{(m,N)}$ is
invertible (almost surely). In this case
\[
\alpha_t^{(m,N)}=
\frac{1}{N}
\left(
A_t^{(m,N)}
\right)^{-1}
\left[
\begin{array}{c}
e_t^m(X^{(1)})\\
\vdots\\
e_t^m(X^{(N)})
\end{array}
\right]^*
\left[
\begin{array}{c}
Z^{(1)}_{\tau_{t+1}^{1,m,N}}\\
\vdots\\
Z^{(N)}_{\tau_{t+1}^{N,m,N}}
\end{array}
\right].
\]
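In code, this amounts to a single linear solve. The sketch below (our notation: \texttt{E} is the stacked $N\times k_m$ matrix of basis values $e_t^m(X^{(n)})$ and \texttt{Ztau} the vector of realized payoffs at the current stopping times) is a direct transcription of the last formula.
\begin{verbatim}
# One regression step of the least squares method in R.
regression_step <- function(E, Ztau) {   # E: N x k_m,  Ztau: length N
  N <- nrow(E)
  A <- crossprod(E) / N                  # sample Gram matrix A_t^(m,N)
  b <- crossprod(E, Ztau) / N            # right-hand side of the equation
  solve(A, b)                            # alpha_t^(m,N)
}
\end{verbatim}
In practice the same coefficients are better computed via a QR decomposition, e.g. \texttt{qr.solve(E, Ztau)}, which avoids forming the Gram matrix explicitly.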
For convenience we will write
$
\alpha^{m}=\left(\alpha_1^{m},\ldots,\alpha_{T-1}^{m}\right)
$ and
$
\alpha^{(m,N)}=\left(\alpha_1^{(m,N)},\ldots,\alpha_{T-1}^{(m,N)}\right).
$
Both objects are $k_m\times(T-1)$-matrices.
The next theorem is a direct extension of Theorem 3.2 and Lemma 3.2 from \cite{Cle}.
\medskip\begin{thm} With the above notation, as $N\to\infty$,
\[
\frac{1}{N}\sum_{n=1}^N
Z^{(n)}_{\tau_{t}^{n,m,N}}\stackrel{a.s.}{\longrightarrow}
\mathrm{E}\left[Z_{\tau_t^{[m]}}\right],\qquad t=1,\ldots,T.
\]
\label{thm:2}
\end{thm}
\emph{Proof:} Define
$
B_t=\{
(a^m,z,x)\,: z_t<e_t^m(x)a_t^m
\}\subset
\mathbb{R}^{k_m\times(T-1)}\times\mathbb{R}^{T}\times\mathbb{R}^{d\times T}
$
for $t=1,\ldots,T-1,$ where $a^m=(a_1^m,\ldots,a_{T-1}^m)$, $z=(z_1,\ldots,z_T)$,
and $x=(x_1,\ldots,x_T)$. By $B_t^c$ we will denote the complement of $B_t$.
We define an auxiliary function
$
F_t:
\mathbb{R}^{k_m\times(T-1)}\times\mathbb{R}^{T}\times\mathbb{R}^{d\times T}
\longrightarrow\mathbb{R},
$
by recursion, putting $F_T(a^m,z,x)=z_T$ and $F_t(a^m,z,x)=
z_t\mathbf{1}_{B_t^c}+F_{t+1}(a^m,z,x)\mathbf{1}_{B_t}$ for $t=1,\ldots,T-1.$
It is easy to see that
\[
F_t(a^m,z,x)=
z_t\mathbf{1}_{B_t^c}+
\sum_{s=t+1}^{T-1}z_s\mathbf{1}_{B_t\cap\ldots\cap B_{s-1}\cap B_s^c}+
z_T\mathbf{1}_{B_t\cap\ldots\cap B_{T-1}}
\]
for $ t=1,\ldots,T-1$.
Moreover,
$F_t(a^m,z,x)$ is independent of $a_1^m,\ldots,a_{t-1}^m$, $F_t(\alpha^m,Z,X)=Z_{\tau_t^{[m]}}$ and $F_t(\alpha^{(m,N)},Z^{(n)},X^{(n)})=Z^{(n)}_{\tau_t^{n,m,N}}.$
For $t=2,\ldots,T$ define also two other auxiliary functions $
G_t(a^m,z,x)=F_t(a^m,z,x)e^m_{t-1}(x)$ and $\psi_t(a^m)=\mathrm{E}[G_t(a^m,Z,X)]$.
Using this notation, one can see that for $t=1,\ldots,T-1$:
\begin{align}
\alpha_t^m &= (A_t^m)^{-1}\psi_{t+1}(\alpha^m); \label{alpha}\\
\alpha_t^{(m,N)} &= (A_t^{(m,N)})^{-1}
\frac{1}{N}\sum_{n=1}^NG_{t+1}(\alpha^{(m,N)},Z^{(n)},X^{(n)}).
\label{alphaN}
\end{align}
The following estimate is a higher-dimensional counterpart of Lemma 3.1 in \cite{Cle}
and can be derived along the same lines as that lemma:
\begin{equation}
|F_t(a,z,x)-F_t(\tilde{a},z,x)|\leq\sum_{s=t}^T|z_s|
\left[
\sum_{s=t}^{T-1}
\mathbf{1}_{\{|z_s-e^m_s(x)\tilde{a}_s|\leq|e^m_s(x)|\|\tilde{a}_s-a_s\|\}}
\right],
\label{F-est}
\end{equation}
where $1\leq t\leq T-1$,
$a=(a_1,\ldots,a_{T-1})\in\mathbb{R}^{k_m\times(T-1)}$, $\tilde{a}=(\tilde{a}_1,\ldots,\tilde{a}_{T-1})\in\mathbb{R}^{k_m\times(T-1)}$, $z\in\mathbb{R}^{T}$ and $x\in\mathbb{R}^{d\times T}$.
Using (\ref{alpha}), (\ref{alphaN}) and (\ref{F-est}), and under the technical assumption that $\mathbb{P}(e_t^m(X)\alpha_t^m=Z_t)=0$,
the reasoning from \cite{Cle} can be easily modified to work within our more general setup. In general, this additional technical requirement can be fulfilled by approximating the contract functions $f_t$ by functions with probabilistically {\lq\lq}negligible{\rq\rq} fibers and by introducing a small amount of random noise perturbing the probability distribution of $X_t$. $\blacksquare$
\bigskip
Theorems \ref{thm:1} and \ref{thm:2} provide a recipe for approximation of $\mathrm{E}[Z_{\tau_1}]$ and hence also $U_0=\max\left(Z_0,\mathrm{E}[Z_{\tau_1}]\right)$, as required.
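For concreteness, the entire procedure is summarized in the following self-contained R sketch (our illustration with hypothetical inputs, not the exact implementation used in the examples of the next section): a single-asset American put under geometric Brownian motion, a monomial basis, and the regression restricted to in-the-money paths.
\begin{verbatim}
# Least squares pricing of an American put in R (illustrative sketch).
set.seed(1)
N <- 10000; T <- 49; K <- 70; S0 <- 68
r <- 0.015 / 252; sigma <- 0.13 / sqrt(252)         # daily rate, volatility
incr  <- matrix(rnorm(N * T, r - sigma^2 / 2, sigma), N, T)
paths <- S0 * cbind(1, exp(t(apply(incr, 1, cumsum))))  # N x (T+1) paths
disc  <- exp(-r * (0:T))                            # discount factors
payoff <- function(s) pmax(K - s, 0)
V <- payoff(paths[, T + 1]) * disc[T + 1]           # cash flow if held to T
for (t in T:2) {                                    # column t = time t-1
  idx <- which(payoff(paths[, t]) > 0)              # in-the-money paths only
  if (length(idx) > 3) {
    s <- paths[idx, t]
    E <- cbind(1, s, s^2, s^3)                      # basis e_t^m on the paths
    cont <- drop(E %*% qr.solve(E, V[idx]))         # continuation value
    ex <- payoff(s) * disc[t] > cont                # exercise where better
    V[idx[ex]] <- payoff(s[ex]) * disc[t]
  }
}
max(payoff(S0), mean(V))                            # U_0 = max(Z_0, E[Z_tau_1])
\end{verbatim}
The inputs roughly mimic the EUR option with strike 70 considered below, so the output should be comparable to the corresponding least squares price in Table~\ref{T:OPTIONprices}.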
\section{Examples}
In this section we show three examples of applications of the above least squares algorithm. The first example covers American call and put options written on Eurodollar futures, which are assumed to conform to the Brace-Gatarek-Musiela model~\cite{BraGatMus1997}. Next we price basket and dual-strike American put options for EUROSTOXX50 and DAX indices, under the standard bivariate Brownian dynamics. Finally, we show how to price univariate American put options, both for EUROSTOXX50 and DAX indices, assuming that the dynamics of the underlyings could be expressed using the Heston-Nandi GARCH(1,1) model~\cite{HesNan2000}.
We have decided not to include convergence speed analysis, as it would make the presented examples much more complicated (e.g. a proper variance reduction technique is a crucial step for any market implementation), without adding much to the conclusions drawn in this paper. We refer to~\cite{Cle,BevJos2009}, and references therein, for a detailed analysis of the convergence speed in the univariate Markovian case. For transparency, we use only the standard models for parameter estimation and Monte Carlo simulation. In particular, only the prices of the underlyings are used for calibration purposes and no Monte Carlo variance reduction technique is implemented. Nevertheless, we present the (smoothed) density function of the simulated prices for every example (see Figures~\ref{F:MCprice},~\ref{F:optionBASKET}~and~\ref{F:optionHN}) to give some insight into the accuracy of our implementation. It should be noted that while our examples are rather straightforward, the accuracy seems to be satisfactory. This allows us to be optimistic about the least squares approach to option pricing, even when the dynamics of the underlyings is complicated and no theoretical price is known.
Our implementation of the least squares algorithm is based on the in-the-money realizations to speed up the convergence and reduce the number of polynomials needed to achieve sufficient level of accuracy. It is worth mentioning that in real-world models, to improve the convergence rate and the speed of the algorithm one might use additional information available from the market (e.g. prices of various derivatives, based on the same underlying instruments) as well as various modifications of the standard Monte Carlo algorithm (for more advanced models cf. \cite{BevJos2009} or \cite{Gla2010} and references therein).
All computations were done using {\bf R 2.15.2} (64-bit). In particular we have used the libraries {\bf fOptions} (for Heston-Nandi parameter calibration, CRR prices and Monte Carlo simulation), {\bf orthopolynom} (for different base functions in L-S algorithm), {\bf timeSeries} (for market data handling) and {\bf Rsge} (for parallel computations).
\subsection{Eurodollar options}
In this subsection we use the least squares algorithm to price one year Eurodollar American put and call options with different strike prices, given the real-market daily prices of the Eurodollar futures. It should be noted that the standard Black-Scholes model cannot be used when the option price is based on more than one LIBOR rate (e.g. when the option's lifetime is longer than 3 months). This is due to the fact that forward rates over consecutive time intervals are related to each other and cannot all be log-normal under the same spot risk-neutral measure. Consequently, models of such instruments in the standard risk-neutral setting are based on non-Markovian dynamics. A.~Brace, D.~Gatarek and M.~Musiela~\cite{BraGatMus1997} proposed a model which can overcome this inconvenience (BGM Model) by utilizing a forward arbitrage-free risk-neutral measure. In the literature, it is also referred to as the LIBOR Market Model (LMM). It is worth mentioning that the dynamics of interest rates described in BGM model is very closely related to the Heath-Jarrow-Morton (HJM) Model. Next we will present a brief overview of the BGM model, followed by some basic information concerning the setup of the least squares algorithm.
\subsubsection{The Brace-Gatarek-Musiela Model.}
The Brace-Gatarek-Musiela model is a stochastic model of time-evolution of interest rates. It will be used here to simulate the (Monte Carlo) paths of LIBOR futures. We will now outline a simplified version of the model that suits our framework, and we will make some comments on the estimation procedure. Let $T_{0}=0$ and $T_{i}=T_{i-1}+\frac{3}{12}$ for $i=1,2,3,4$. In reality the dates of expiration for the consecutive Eurodollar futures differ slightly from 90 days. This might potentially have an impact on the results, especially when we consider short term options. Nevertheless, we will use the theoretical values for simplicity. Let $L_{0}$ be a spot LIBOR rate and let $L_{i}:[0,T_{i}]\times\Omega\rightarrow \mathbb{R}$ be the $i$-th forward LIBOR rate. Assuming $d$ sources of randomness, the dynamics of the $i$-th LIBOR rate can be described by the equation
$$d\log L_{i}(t)=\left(\sum_{j=i(t)}^{i}\frac{\delta_{j}L_{j}(t)}{1+\delta_{j}L_{j}(t)}\sigma_{j}(t)-\frac{\sigma_{i}(t)}{2}\right)\sigma_{i}(t)dt +\sigma_{i}(t)dW^{\mathbb{Q}_{\textrm{Spot}}}(t),$$
where $t\in[0,T_{i}]$, $\delta_{i}=T_{i+1}-T_{i}=3/12$ is the length of the accrual period of the $i$-th LIBOR forward rate, $\sigma_{i}(t):[0,T_{i}]\times \Omega\rightarrow \mathbb{R}^{d}$ is the instantaneous volatility of the $i$-th LIBOR forward rate, $i(t)$ denotes the index of the bond (corresponding to the appropriate Eurodollar future) which is first to expire at time $t$, and finally, $W^{\mathbb{Q}_{\textrm{Spot}}}(t)$ is a standard ($d$-dimensional) Brownian motion under the spot LIBOR measure $\mathbb{Q}_{\textrm{Spot}}$ (see~\cite{Jam1997} for more details). We are assuming here that the sources of randomness are independent of each other and that the proper dependency structure is modelled with $\sigma_{i}$. For the Monte Carlo simulation we will use a standard Euler discretization of the above SDE, with the time step $\Delta t =\frac{1}{360}$, i.e.
\begin{equation}\label{BGMdynamics}
\Delta\log L_{i}(t)=\left( \sum_{j=i(t)}^{i}\frac{\delta_{j}L_{j}(t)}{1+\delta_{j}L_{j}(t)}\sigma_{j}(t)-\frac{\sigma_{i}(t)}{2}\right)\sigma_{i}(t)\Delta t +\sigma_{i}(t)\epsilon_{t}\sqrt{\Delta t},
\end{equation}
where $\epsilon_{t}\sim \mathcal{N}(0,\mathbf{I})$ is a $d$-dimensional standard normally distributed random vector. In our implementation we will use $d=3$. To calibrate the model we need to define the functions $\sigma_{i}(t)$, for $i=1,2,3,4$. We will assume that $\sigma_{i}(t)$ (for $i=1,2,3,4$) is time homogeneous, i.e., that there exists a function $\lambda=(\lambda_1,\lambda_2,\lambda_3): [0,T]\rightarrow\mathbb{R}^{3}$ such that
$\sigma_{i}(t)=\lambda(T_{i}-t)$ for $t\in[0,T_{i}]$ and $i=1,2,3,4$. We will provide values of $\lambda(T_i)$ for $i=1,2,3,4$ and assume that $\lambda(t)=\lambda(T_{i})$ for $t\in [T_{i-1},T_{i}]$.
We will apply Principal Components Analysis (PCA) to the Eurodollar futures data to approximate the values of the $3\times 4$ matrix $\Lambda=[\lambda_j(T_i)]$ (with $j$ indexing rows and $i$ columns).
In other words, we base our estimation process on the correlation between the Eurodollar futures. The difficulty with calibration of PCA is that Eurodollar futures have fixed maturity dates, and so for a given $T$ we can monitor a contract with volatility $\lambda(T)$ only once per three months. To overcome this, we will use linear interpolation of the quoted prices of Eurodollar futures (which is in fact a common market practice). It should be noted that we need the $L_{5}$ prices to perform such interpolation. Using that approach to Eurodollar futures prices, we obtain the values of contracts with volatility $\lambda(T_i)$ (i=1,2,3,4) for every trading day $t$. We also use linear interpolation of forward LIBOR rates for the days when the market is not operating (i.e.
we interpolate the contract prices using known quotes from the last trading day before and the next trading day after the date in question).
Because of that assumption, in order to conduct the PCA and to estimate $\sigma_{i}$ (for $i=1,2,3,4$), we will need (for each day) the prices of the five Eurodollar futures closest to delivery. Let us now comment on the PCA estimation process. We assume that
$$\lambda_j(T_i)=\frac{\Theta_{i}s_{j}\alpha_{i,j}}{\sqrt{\sum_{k=1}^{d}s_{k}^{2}\alpha_{i,k}^{2}}},$$
for $i=1,2,3,4$ and $j=1,2,3$. Here, $s_{j}^{2}$ denotes the variance of the $j$-th factor computed by PCA (with $s_{1}^{2}\geq s_{2}^{2}\geq s_{3}^{2}$), $\alpha_{i,j}$ measures the influence of the $j$-th factor when the time to maturity is in the period $[T_{i-1},T_{i}]$ and $\Theta_{i}:=\sum_{j=1}^{3}s_{j}\alpha_{i,j}$ is the total volatility in the $i$-th period. We also assume that the factors are uncorrelated and that the relative influence of every factor is 1 (i.e. for $j_{1},j_{2}\in\{1,2,3\}$ we have $\sum_{i=1}^{4}\alpha_{i,j_1}\alpha_{i,j_2}=0$ if $j_{1}\neq j_{2}$ and $\sum_{i=1}^{4}\alpha_{i,j_1}\alpha_{i,j_2}=1$ if $j_{1}=j_{2}$). Combining (\ref{BGMdynamics}) with the parameters from the PCA we will be able to simulate Eurodollar futures paths.
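A schematic R implementation of one Euler step of~\eqref{BGMdynamics} may look as follows (our sketch: \texttt{Lambda} stands for the $3\times 4$ PCA matrix $\Lambda$, the volatility $\sigma_i(t)$ is frozen at the $i$-th column of $\Lambda$ for brevity, and \texttt{it} is the index $i(t)$ of the first live forward rate).
\begin{verbatim}
# One Euler step for the forward LIBOR rates L_1, ..., L_4 in R.
bgm_step <- function(L, Lambda, it, delta = 0.25, dt = 1/360) {
  eps  <- rnorm(nrow(Lambda))              # independent Gaussian factors
  Lnew <- L
  for (i in it:length(L)) {                # update each live forward rate
    sig_i <- Lambda[, i]                   # sigma_i, a 3-vector
    drift <- -0.5 * sum(sig_i^2)
    for (j in it:i)                        # sum over j = i(t), ..., i
      drift <- drift +
        delta * L[j] / (1 + delta * L[j]) * sum(Lambda[, j] * sig_i)
    Lnew[i] <- L[i] * exp(drift * dt + sum(sig_i * eps) * sqrt(dt))
  }
  Lnew
}
\end{verbatim}
The matrix $\Lambda$ itself can be estimated, as described above, by applying a standard PCA routine (e.g. \texttt{prcomp}) to the daily log-changes of the interpolated forward rates.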
\subsubsection{Setup, data details and the least squares method parameters.}
We wish to price the quarterly Eurodollar American call and put options EDZ2 (GEZ2 in Globex notation; it means that the underlying instrument is the December 2012 Eurodollar future). The first trade day for EDZ2 is December 13, 2010, and the expiration date is December 17, 2012. We will estimate the value of several such put and call options during the period from December 20, 2011 to January 20, 2012, with strike prices ranging from 98.00 to 99.75. While the values of the American call options could be computed without the use of the least squares algorithm, because they coincide with the European calls, we will calculate them anyway to provide more insight into how the parameters are fitted to the market data. In other words, we wish to check empirically whether the differences between the market prices and the computed prices are the result of badly fitted model parameters or are due to a problem with the accuracy of the least squares algorithm.
For calibration purposes we will use the daily closing prices of Eurodollar futures and the spot LIBOR rate. Given a date $t$, we will use a period of the same length as the time to maturity of the option (i.e. if the option lifetime is 300 days, then we take the last 300 days of data before time $t$ to calibrate our model).
The least squares algorithm needs several inputs. As the functions generating the {\lq\lq}information about the past{\rq\rq} we use the standard exponentially weighted Laguerre polynomials of degree not greater than 3.
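For reference, these basis functions (the standard Laguerre polynomials with the usual exponential weight) can be coded directly:
\begin{verbatim}
# Exponentially weighted Laguerre polynomials of degree k = 0, ..., 3 in R.
wLaguerre <- function(x, k) {
  L <- switch(k + 1,
    rep(1, length(x)),                 # L_0(x) = 1
    1 - x,                             # L_1(x)
    1 - 2*x + x^2/2,                   # L_2(x)
    1 - 3*x + 3*x^2/2 - x^3/6)         # L_3(x)
  exp(-x/2) * L
}
basis <- function(x) sapply(0:3, function(k) wLaguerre(x, k))
\end{verbatim}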
Our implementation is based on the Monte Carlo simulation of the $L_{4}$ values obtained using \eqref{BGMdynamics}. The algorithm also needs formulas for the interest rate (for the purpose of discounting) in two instances: firstly, to discount the values of options from one period to another (in the recursive step-by-step part); secondly, to compute the final price of the option (i.e. to discount the optimal prices from every simulation to time $T_0=0$). While the second interest rate can be associated with the standard spot LIBOR rate, the first one must be based on the evolution of assets (i.e. for every path in the Monte Carlo run, one must separately estimate the spot rate at time $t$ using the prices of Eurodollar contracts).
\subsubsection{Estimation and numerical results.}
In this subsection we present detailed estimation results for the date December 20, 2011. A similar procedure has also been conducted for all remaining days under consideration. Assuming the Brace-Gatarek-Musiela dynamics and taking into account the Eurodollar futures closing prices during the period from December 26, 2010 to December 20, 2011, we have conducted PCA and obtained
$$
\left[ \begin{array}{crrr}
0.024063776 & 0.033758193 & 0.040538115 & 0.043033555\\
0.024267981 & 0.018222734 & 0.007111945 & -0.004846372\\
0.007801289 & -0.001039692 & -0.006052515 & -0.004629562
\end{array} \right]$$
as the estimate of $\Lambda$.
To price several put and call options with different strike prices and the closing date falling on December 20, 2011, we have generated 1000 Monte Carlo simulations of size 10,000 and, using the least squares algorithm, obtained the estimated option prices. The results are presented in Table~\ref{T:EDprices2}. The Monte Carlo distributions of the prices of the Eurodollar put and call options with the strike price $99.50$ can be seen in Figure~\ref{F:MCprice}. Figure~\ref{F:MCexample} shows examples of 100 Monte Carlo paths, together with the actual realization of the process.
A similar analysis has been performed for all days from December 21, 2011 to January 20, 2012. During that period, EDZ2 was the fourth closest to delivery Eurodollar future. In Figure~\ref{F:MCprice2} we can see the dynamics of the original put and call option prices, the sample means from 1000 simulations (each of size 10,000) and the lower and upper 5\% quantiles for the put and call options with strike price $99.50$. The values of the mean and standard deviation of the simulated prices of the options, as well as the corresponding market prices of the options, can be seen in Table~\ref{T:EDprices}. We have chosen the strike price 99.50 because the mean volume of transactions was highest in the considered period. It is interesting to note that the estimated prices corresponding to this strike price stay consistently higher than the market price (see Table~\ref{T:EDprices}), which might be the result of the fact that the option was particularly actively traded (see Table~\ref{T:EDprices2}, where the market price is in most cases higher than the estimated price). It should also be noted that the value of $\sigma$ in Table~\ref{T:EDprices2} is highest for the strike price equal to 99.50, which may explain the interest in the option with this particular strike price.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=6cm]{ED_path100.eps}
\includegraphics[width=6cm]{ED_path100v2.eps}
\end{center}
\vspace{-20pt}
\caption{Examples of 100 Monte Carlo paths for the $L_{4}$ contract (for December 20, 2011) and the realized path (red) during the first 100 and 300 days.} \label{F:MCexample}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=6cm]{ED_single_price1.eps}
\includegraphics[width=6cm]{ED_single_price2.eps}
\end{center}
\vspace{-20pt}
\caption{The smoothed densities of the simulated prices of the put (left) and the call (right) option on December 20, 2011, with strike price 99.50. The distribution is based on 1000 Monte Carlo runs, each of size 10,000.} \label{F:MCprice}
\end{figure}
\begin{table}[!ht]
\caption{The estimated prices of the Eurodollar options on December 20, 2011, based on 1000 simulations (each of size 10,000). Here $\mu$ denotes the sample mean of the 1000 prices obtained with MC simulation, while $\sigma$ denotes the sample standard deviation.}
\bigskip
\begin{center}
\begin{tabular}{|c|ccc|ccc|}
\hline
Date: Dec. 20, 2011& \multicolumn{3}{|c|}{Put} & \multicolumn{3}{|c|}{Call}\\
\hline
Strike price & Market price &$\mu$ & $\sigma$ & Market price & $\mu$ & $\sigma$\\
\hline
98.00 & 0.070 & 0.045 & 0.0038 & 1.295 & 1.267 & 0.0038 \\
98.12 & 0.078 & 0.052 & 0.0043 & 1.178 & 1.154 & 0.0043 \\
98.25 & 0.085 & 0.061 & 0.0044 & 1.060 & 1.032 & 0.0044 \\
98.37 & 0.095 & 0.070 & 0.0048 & 0.945 & 0.922 & 0.0048 \\
98.50 & 0.105 & 0.082 & 0.0050 & 0.833 & 0.804 & 0.0050 \\
98.62 & 0.120 & 0.096 & 0.0055 & 0.723 & 0.698 & 0.0055 \\
98.75 & 0.138 & 0.114 & 0.0058 & 0.615 & 0.587 & 0.0058 \\
98.87 & 0.155 & 0.134 & 0.0061 & 0.508 & 0.488 & 0.0061 \\
99.00 & 0.175 & 0.160 & 0.0067 & 0.403 & 0.386 & 0.0067 \\
99.12 & 0.203 & 0.191 & 0.0078 & 0.308 & 0.298 & 0.0078 \\
99.25 & 0.238 & 0.232 & 0.0088 & 0.218 & 0.211 & 0.0088 \\
99.37 & 0.280 & 0.280 & 0.0097 & 0.135 & 0.141 & 0.0097 \\
99.50 & 0.340 & 0.345 & 0.0110 & 0.073 & 0.079 & 0.0110 \\
99.62 & 0.425 & 0.421 & 0.0094 & 0.033 & 0.036 & 0.0094 \\
99.75 & 0.525 & 0.528 & 0.0044 & 0.008 & 0.009 & 0.0044 \\
\hline
\end{tabular}
\end{center}
\label{T:EDprices2}
\end{table}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=6cm]{MCprice2.eps}
\includegraphics[width=6cm]{MCprice2v2.eps}
\end{center}
\vspace{-20pt}
\caption{The estimated prices (with confidence level 90\%) and historical prices of the put (left) and call (right) option prices with strike price 99.50, during the period from December 20, 2011 to January 20, 2012.} \label{F:MCprice2}
\end{figure}
\begin{table}[!ht]
\caption{Simulated and historical prices of the Eurodollar options with strike price 99.50 based on 1000 Monte Carlo simulations (each of size 10,000). As before, $\mu$ denotes the sample mean, whereas $\sigma$ denotes the sample standard deviation of the prices calculated from the simulations.}
\bigskip
\begin{center}
\begin{tabular}{|c|ccc|ccc|}
\hline
\multirow{2}{*}{Date} & \multicolumn{3}{|c|}{Put} & \multicolumn{3}{|c|}{Call}\\
\cline{2-7}
& Market price & $\mu$ & $\sigma$ & Market price & $\mu$ & $\sigma$\\
\hline
Dec.20, 2011 & 0.340 & 0.345 & 0.0110 & 0.073 & 0.079 & 0.0011 \\
Dec.21, 2011 & 0.342 & 0.349 & 0.0110 & 0.070 & 0.078 & 0.0011 \\
Dec.22, 2011 & 0.358 & 0.366 & 0.0112 & 0.065 & 0.074 & 0.0011 \\
Dec.23, 2011 & 0.370 & 0.382 & 0.0111 & 0.058 & 0.071 & 0.0011 \\
Dec.27, 2011 & 0.375 & 0.382 & 0.0103 & 0.058 & 0.066 & 0.0010 \\
Dec.28, 2011 & 0.378 & 0.385 & 0.0102 & 0.055 & 0.064 & 0.0010 \\
Dec.29, 2011 & 0.352 & 0.364 & 0.0104 & 0.055 & 0.068 & 0.0010 \\
Dec.30, 2011 & 0.318 & 0.327 & 0.0093 & 0.062 & 0.076 & 0.0011 \\
Jan.03, 2012 & 0.325 & 0.340 & 0.0088 & 0.058 & 0.072 & 0.0011 \\
Jan.04, 2012 & 0.315 & 0.326 & 0.0086 & 0.060 & 0.074 & 0.0011 \\
Jan.05, 2012 & 0.300 & 0.318 & 0.0084 & 0.055 & 0.076 & 0.0011 \\
Jan.06, 2012 & 0.258 & 0.279 & 0.0079 & 0.062 & 0.086 & 0.0012 \\
Jan.09, 2012 & 0.222 & 0.244 & 0.0072 & 0.072 & 0.095 & 0.0012 \\
Jan.10, 2012 & 0.218 & 0.240 & 0.0071 & 0.072 & 0.096 & 0.0012 \\
Jan.11, 2012 & 0.195 & 0.214 & 0.0069 & 0.085 & 0.105 & 0.0012 \\
Jan.12, 2012 & 0.175 & 0.190 & 0.0054 & 0.100 & 0.115 & 0.0013 \\
Jan.13, 2012 & 0.180 & 0.189 & 0.0054 & 0.105 & 0.115 & 0.0013 \\
Jan.17, 2012 & 0.168 & 0.171 & 0.0046 & 0.118 & 0.121 & 0.0012 \\
Jan.18, 2012 & 0.180 & 0.188 & 0.0053 & 0.105 & 0.113 & 0.0013 \\
Jan.19, 2012 & 0.175 & 0.184 & 0.0053 & 0.105 & 0.115 & 0.0013 \\
Jan.20, 2012 & 0.182 & 0.191 & 0.0054 & 0.102 & 0.112 & 0.0012 \\
\hline
\end{tabular}
\end{center}
\label{T:EDprices}
\end{table}
\subsection{Basket and dual-strike options}
In this subsection we will use the least squares algorithm to price 1.5 month basket and dual-strike American put options whose payoff functions are based on two market indices, namely DAX and EUROSTOXX50. The latter will be denoted by the symbol EUR for brevity. We will assume that the underlying instruments follow the standard bivariate Brownian dynamics. Unfortunately, bivariate options are usually over-the-counter (OTC) instruments, so it is difficult to find market data for such options. Nevertheless, we can make a partial comparison with the relevant one-dimensional standard American put options based on DAX and EUR. As was the case with the previous example, we start with some background information.
\subsubsection{DAX and EUROSTOXX50 Indices.}
The univariate standard American put options based on DAX and EUR are traded on the Eurex Exchange. In fact, the underlyings are not the indices themselves but exchange-traded funds (ETFs), which are actively traded on the German stock market (Deutsche B{\"o}rse Group). The DAX and EUR indices are highly correlated, chiefly due to the inclusion of some common stocks in their baskets. The estimated value of Pearson's linear correlation coefficient for the period from October 23, 2012 to January 08, 2013 is equal to 0.920. Some contagion between these indices might potentially occur, but over such a short period of time this aspect is negligible. In general, the issue of contagion could be addressed by adopting models with different dynamics (e.g. of the multivariate GARCH variety). Such an approach would be very closely related to the Heston and Nandi option pricing model~\cite{HesNan2000}, which is the methodology we will adopt in the last example.
\subsubsection{Basket and dual-strike options.}
As has already been stated, basket and dual-strike options are mainly OTC derivatives. In this example we will consider bivariate American put options. The payoff functions at time $t$ for a bivariate dual-strike American put option (1) and a basket American put option (2) are given by
$$p^{(1)}(t)=\max\big(K_{1}-S_{1}(t),K_{2}-S_{2}(t),0\big),\quad p^{(2)}(t)=\max\left(\frac{K_{1}+K_{2}}{2}-\frac{S_{1}(t)+S_{2}(t)}{2},0\right),$$
where $S_{1}(t)$ and $S_{2}(t)$ are the prices of the first and the second underlying at time $t$, respectively, and $K_{1}$, $K_{2}$ are the strike prices.
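In R, both payoffs are one-line functions (a direct transcription of the formulas above):
\begin{verbatim}
# Payoff (1): dual-strike put;  payoff (2): basket put.
p1 <- function(s1, s2, K1, K2) pmax(K1 - s1, K2 - s2, 0)
p2 <- function(s1, s2, K1, K2) pmax((K1 + K2)/2 - (s1 + s2)/2, 0)
\end{verbatim}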
\subsubsection{Model setup, data details and implementation parameters.}
We will be assuming that the price process $(S_{1}(t),S_{2}(t))$ is modeled by a 2-dimensional geometric Brownian motion, with the instantaneous correlation coefficient and instantaneous standard deviations for the processes $\log S_1$ and $\log S_2$ denoted by $\boldsymbol{\rho},\boldsymbol{\sigma}_1$ and $\boldsymbol{\sigma}_2$, respectively.
We will construct bivariate basket and dual-strike American put options based on 1 DAX ETF share and 2.5 EUR ETF shares (to have similar strike prices in both cases). We will price the basket and dual-strike put options on January 08, 2013 with the expiration date March 16, 2013 (to make them comparable to existing univariate options). The option lifetime will be 49 business days. The strike prices will range from 65 to 75 and from 66 to 76, for the first and the second strike price, respectively.
To estimate $\boldsymbol{\rho},\boldsymbol{\sigma}_1$ and $\boldsymbol{\sigma}_2$ we will use the last 50 observations of the price of ETF (DE) DAX and ETF (DE) EUROSTOXX50. Choosing a relatively short time interval for calibration purposes is quite common in practice (e.g. this is the case with the estimation of the VIX volatility index).
As in the previous case, we will need two inputs for the least squares algorithm: an interest rate (for discounting) and appropriate basis functions. Because the option lifetime is short, we will assume that the interest rate is constant and equal to $r=1.50\%$ (the ECB interest rate on January 08, 2013). Moreover, we will use the following exponentially weighted polynomials of two variables to perform the regression:
$$e^{\frac{1}{2}},\quad e^{\frac{-x}{2}}x,\quad e^{\frac{-y}{2}}y,\quad e^{\frac{-(x+y)}{4}}xy,
\quad e^{\frac{-(x+y)}{4}}xy^{2},\quad e^{\frac{-(x+y)}{4}}x^{2}y,\quad e^{\frac{-(x+y)}{4}}x^{2}y^{2}.
$$
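Generating the correlated paths themselves is straightforward. The following sketch (ours) produces $N$ bivariate geometric Brownian motion paths via a Cholesky factorization of the correlation matrix; the trailing call uses the parameter values estimated in the next subsection.
\begin{verbatim}
# N paths of a bivariate GBM with correlation rho, in R.
sim_biv_gbm <- function(N, T, S0, sigma, rho, r, dt = 1/252) {
  R <- chol(matrix(c(1, rho, rho, 1), 2, 2))     # upper Cholesky factor
  paths <- array(0, c(N, T + 1, 2))
  paths[, 1, ] <- rep(S0, each = N)              # initial prices
  for (t in 1:T) {
    eps <- matrix(rnorm(2 * N), N, 2) %*% R      # correlated N(0,1) pairs
    for (k in 1:2)
      paths[, t + 1, k] <- paths[, t, k] *
        exp((r - sigma[k]^2/2) * dt + sigma[k] * sqrt(dt) * eps[, k])
  }
  paths
}
p <- sim_biv_gbm(100000, 49, S0 = c(68.05, 69.72),
                 sigma = c(0.133, 0.119), rho = 0.920, r = 0.015)
\end{verbatim}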
\subsubsection{Estimation and numerical results.}
The estimated (annualized) covariance matrix gives us the values $\boldsymbol{\rho}=0.920$, $\boldsymbol{\sigma}_{1}= 0.133$ and $\boldsymbol{\sigma}_{2}=0.119$. Using these numbers, we run 100,000 Monte Carlo simulations (each of size 49). Next, using the least squares algorithm we compute the prices of the basket and dual-strike American put options for different strike prices. We also compute the least squares prices for the univariate American put options based on 1 DAX ETF share and 2.5 EUR ETF shares. Apart from the market data, we also present the theoretical price according to the Cox-Ross-Rubinstein model (CRR), as it is used by the Eurex Exchange to quote option prices when no trading takes place. It should be noted that the transaction volume for these American put options is very low, so unfortunately the market prices are given for comparison purposes only. Also, the least squares price should be compared with the CRR price rather than the market price, as it is computed under compatible assumptions about the asset dynamics.
The prices (obtained using a single Monte Carlo run of size 100,000) can be seen in Table~\ref{T:OPTIONprices}. The columns named DAX and EUR denote the standard univariate put options (i.e. with 1 DAX ETF share and 2.5 EUR ETF shares as the underlying, respectively). We have also performed 1000 Monte Carlo runs, each of size 10,000, for the basket and dual-strike options with the strike prices $K_1=K_2=70$. The corresponding Monte Carlo density functions can be seen in Figure~\ref{F:optionBASKET} (this can provide some information about the model and/or the Monte Carlo bias).
\begin{table}[!ht]
\caption{Prices of the options according to historical stock market data, the CRR model and the
least squares algorithm. Here $S_0=(68.05,69.72)$, $r=1.50\%$, $T=49/252$, $\boldsymbol{\sigma}_{1}=0.133$, $\boldsymbol{\sigma}_{2}=0.119$, $\boldsymbol{\rho}=0.920$.}
\bigskip
\begin{center}
\begin{tabular}{|cc|cc|cc|cccc|}
\hline
\multicolumn{2}{|c|}{Strike price} & \multicolumn{2}{|c|}{Market price} & \multicolumn{2}{|c|}{CRR price} & \multicolumn{4}{|c|}{least squares price}\\
\hline
EUR & DAX & EUR & DAX & EUR & DAX & EUR & DAX & Basket & Dual-Strike\\
\hline
65.0 & 66 & 0.58 & 0.34 & 0.45 & 0.25 & 0.44 & 0.24 & 0.31 & 0.46 \\
67.5 & 66 & 1.50 & 0.34 & 1.25 & 0.25 & 1.23 & 0.24 & 0.59 & 1.23 \\
70.0 & 66 & 3.10 & 0.34 & 2.67 & 0.25 & 2.63 & 0.24 & 1.00 & 2.65 \\
72.5 & 66 & 5.15 & 0.34 & 4.63 & 0.25 & 4.62 & 0.24 & 1.58 & 4.62 \\
75.0 & 66 & 7.53 & 0.34 & 6.95 & 0.25 & 6.95 & 0.24 & 2.33 & 6.95 \\
\hline
65.0 & 68 & 0.58 & 0.85 & 0.45 & 0.69 & 0.44 & 0.67 & 0.52 & 0.72 \\
67.5 & 68 & 1.50 & 0.85 & 1.25 & 0.69 & 1.23 & 0.67 & 0.91 & 1.27 \\
70.0 & 68 & 3.10 & 0.85 & 2.67 & 0.69 & 2.63 & 0.67 & 1.45 & 2.65 \\
72.5 & 68 & 5.15 & 0.85 & 4.63 & 0.69 & 4.62 & 0.67 & 2.17 & 4.62 \\
75.0 & 68 & 7.53 & 0.85 & 6.95 & 0.69 & 6.95 & 0.67 & 3.04 & 6.95 \\
\hline
65.0 & 70 & 0.58 & 1.74 & 0.45 & 1.52 & 0.44 & 1.49 & 0.82 & 1.50 \\
67.5 & 70 & 1.50 & 1.74 & 1.25 & 1.52 & 1.23 & 1.49 & 1.33 & 1.64 \\
70.0 & 70 & 3.10 & 1.74 & 2.67 & 1.52 & 2.63 & 1.49 & 2.01 & 2.67 \\
72.5 & 70 & 5.15 & 1.74 & 4.63 & 1.52 & 4.62 & 1.49 & 2.86 & 4.62 \\
75.0 & 70 & 7.53 & 1.74 & 6.95 & 1.52 & 6.95 & 1.49 & 3.85 & 6.95 \\
\hline
65.0 & 72 & 0.58 & 3.00 & 0.45 & 2.79 & 0.44 & 2.75 & 1.22 & 2.76 \\
67.5 & 72 & 1.50 & 3.00 & 1.25 & 2.79 & 1.23 & 2.75 & 1.86 & 2.77 \\
70.0 & 72 & 3.10 & 3.00 & 2.67 & 2.79 & 2.63 & 2.75 & 2.67 & 3.09 \\
72.5 & 72 & 5.15 & 3.00 & 4.63 & 2.79 & 4.62 & 2.75 & 3.64 & 4.63 \\
75.0 & 72 & 7.53 & 3.00 & 6.95 & 2.79 & 6.95 & 2.75 & 4.72 & 6.95 \\
\hline
65.0 & 76 & 0.58 & 6.43 & 0.45 & 6.29 & 0.44 & 6.28 & 2.33 & 6.28 \\
67.5 & 76 & 1.50 & 6.43 & 1.25 & 6.29 & 1.23 & 6.28 & 3.24 & 6.28 \\
70.0 & 76 & 3.10 & 6.43 & 2.67 & 6.29 & 2.63 & 6.28 & 4.28 & 6.28 \\
72.5 & 76 & 5.15 & 6.43 & 4.63 & 6.29 & 4.62 & 6.28 & 5.41 & 6.30 \\
75.0 & 76 & 7.53 & 6.43 & 6.95 & 6.29 & 6.95 & 6.28 & 6.62 & 7.16 \\
\hline
\end{tabular}
\end{center}
\label{T:OPTIONprices}
\end{table}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=6cm]{MCprice_basket.eps}
\includegraphics[width=6cm]{MCprice_dual.eps}
\end{center}
\caption{The smoothed densities of the least squares prices of the basket (left) and the dual-strike (right) American put options for the strike prices $K_1=K_2=70$. The vertical lines depict the sample mean of the least squares prices.} \label{F:optionBASKET}
\end{figure}
\subsection{The Heston-Nandi model}
In this last example we will use the least squares algorithm to price two 1.5-month American put options whose payoffs are each based on a single market instrument. We will use the data from the previous example, i.e., we will price options written on the DAX and EUR underlyings. We will assume that the dynamics of the underlying instruments can be described by the Heston-Nandi GARCH model~\cite{HesNan2000}.
Let $S_t$ denote the price of the underlying. Using the Heston-Nandi GARCH dynamics, we assume that the log-returns of the random process $S_t$ can be described by the formula
$$\Delta\log S_t=r_{\textrm{daily}}+\lambda\sigma_{t}^{2}+\sigma_{t}\epsilon_{t},$$
with
$$\sigma_{t}^{2}=\omega+\beta\sigma_{t-1}^{2}+\alpha(\epsilon_{t-1}-\gamma\sigma_{t-1})^{2},$$
where $\Delta$ denotes the daily backward difference, $r_{\textrm{daily}}$ denotes the daily risk-free interest rate, $(\lambda, \omega, \beta, \alpha, \gamma)$ are model parameters and $\epsilon_{t}$ is standard Gaussian white noise. In addition, we will assume that there is no asymmetry in the model, i.e. $\gamma=0$.
If we use the standard Heston-Nandi dynamics (under the objective probability measure) then the discounting part of the least squares algorithm will be path dependent. In order to avoid this complication we will switch to the risk-neutral measure and use the risk-neutral dynamics of the underlying return. The risk-neutral process is obtained simply by replacing the (previously estimated) parameters $\lambda$ and $\gamma$ with $(-0.5)$ and $(\gamma+\lambda+0.5)$, respectively (see~\cite{HesNan2000} for details). Moreover, for comparison purposes we will use the long-run expected variance of the Heston-Nandi model (see~\cite{HesNan2000}):
\begin{equation}\label{hn.sd}
\sigma_{HN}^{2}=\frac{\omega+\alpha}{1-\beta-\alpha\gamma^{2}}.
\end{equation}
The EUR and DAX data will be used again as the underlyings. As before, the options' expiration date will be March 16, 2013 and we will price them on January 08, 2013 (thus the option lifetime $T$ will be 49 business days). The weighted Laguerre polynomials of degree not greater than 3 will serve
as the basis polynomials for the regression procedure.
As before, we will assume that the (annualized) risk-free rate is equal to $r=1.50\%$ and put $r_{\textrm{daily}}=r/252$ (as there are 252 trading days each year).
Using the last 50 prices of 2.5 EUR ETF and 1 DAX ETF shares, we have obtained two sets of parameters:
\begin{center}
\def1.2{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
&$\lambda$&$\omega$&$\alpha$&$\beta$\\
\hline
EUR&7.280&2.738$\times10^{-5}$&5.238$\times10^{-5}$&0.086\\
\hline
DAX&16.971&1.954$\times10^{-5}$&5.404$\times10^{-5}$&4.758$\times10^{-28}$\\
\hline
\end{tabular}
\end{center}
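For illustration, the risk-neutral Heston-Nandi dynamics can be simulated as in the following Python sketch (for brevity it records only terminal prices; the pricing itself retains the whole paths). The variable names are ours and the stationary initialization of the variance is a simplifying assumption. The first printed value reproduces the annualized long-run volatility $\sqrt{252\,\sigma_{HN}^{2}}\approx 0.149$ for the EUR parameters, and the second checks the martingale property of the discounted price.
\begin{verbatim}
import numpy as np

def hn_risk_neutral_terminal(s0, r_daily, omega, alpha, beta, lam, gamma,
                             steps, n_paths, rng):
    # risk-neutral Heston-Nandi: lambda* = -0.5, gamma* = gamma + lambda + 0.5
    lam_star, gam_star = -0.5, gamma + lam + 0.5
    # start from the long-run variance (a simplifying assumption)
    h = np.full(n_paths, (omega + alpha) / (1.0 - beta - alpha*gam_star**2))
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(steps):
        eps = rng.standard_normal(n_paths)
        log_s += r_daily + lam_star*h + np.sqrt(h)*eps
        h = omega + beta*h + alpha*(eps - gam_star*np.sqrt(h))**2
    return np.exp(log_s)

# EUR parameters from the table above (gamma = 0, no asymmetry)
omega, alpha, beta, lam = 2.738e-5, 5.238e-5, 0.086, 7.280
gam_star = lam + 0.5
sigma_hn_sq = (omega + alpha) / (1 - beta - alpha*gam_star**2)  # daily variance
print("annualized long-run volatility:", np.sqrt(252*sigma_hn_sq))  # ~0.149
s_T = hn_risk_neutral_terminal(68.05, 0.015/252, omega, alpha, beta, lam, 0.0,
                               49, 100_000, np.random.default_rng(1))
print("discounted mean terminal price:", np.exp(-0.015*49/252)*s_T.mean())
\end{verbatim}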
The initial values of the underlying are 68.05 and 69.72, respectively, in the EUR and DAX case. The annualized volatilities obtained from~\eqref{hn.sd} (i.e. $\sqrt{252\,\sigma_{HN}^{2}}$, evaluated with the risk-neutral parameters) are equal to $0.149$ and $0.137$, respectively. The mean sample prices of the American put options obtained from ten simulations (each consisting of 100,000 Monte Carlo paths) can be seen in Table~\ref{T:EUROSTOXX_HN}. We also present the theoretical European put option prices according to the Heston-Nandi model~\cite{HesNan2000}, as well as the American put option prices and early exercise premiums (i.e. the differences between the prices of the American and European put options) according to the Cox-Ross-Rubinstein (CRR) model, with volatilities obtained from~\eqref{hn.sd}. Both models are presented for comparison purposes. Moreover, we perform multiple Monte Carlo runs (1000), each of size 10,000, to calculate the prices of the American put options with the strike price 70 (both for EUR and DAX). Smoothed simulated probability density functions are plotted in Figure~\ref{F:optionHN}.
\begin{table}[!ht]
\caption{Prices of the EUR and DAX American put options according to the least squares algorithm (L-S), compared with the actual market prices, CRR model prices and the Heston-Nandi European put option prices. EA denotes the early exercise premium.}
\bigskip
\begin{center}
\begin{tabular}{|c|c|cc|cc|}
\hline
\multicolumn{6}{|c|}{EUR American put options}\\
\hline
Strike price & Market price & CRR price & CRR EA & H-N price & L-S price\\
\hline
65.0 & 0.58 & 0.57 & 0.00 & 0.57 & 0.57\\
67.5 & 1.50 & 1.41 & 0.01 & 1.40 & 1.40\\
70.0 & 3.10 & 2.79 & 0.03 & 2.78 & 2.79\\
72.5 & 5.15 & 4.67 & 0.06 & 4.66 & 4.71\\
75.0 & 7.53 & 6.88 & 0.11 & 6.88 & 6.98\\
\hline
\multicolumn{6}{|c|}{DAX American put options}\\
\hline
Strike price & Market price & CRR price & CRR EA & H-N price & L-S price\\
\hline
66 & 0.34 & 0.38 & 0.00 & 0.38 & 0.38\\
68 & 0.85 & 0.88 & 0.01 & 0.87 & 0.87\\
70 & 1.74 & 1.74 & 0.01 & 1.70 & 1.70\\
72 & 3.00 & 2.98 & 0.03 & 2.91 & 2.92\\
76 & 6.43 & 6.33 & 0.10 & 6.22 & 6.31\\
\hline
\end{tabular}
\end{center}
\label{T:EUROSTOXX_HN}
\end{table}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=6cm]{MCprice_hn1.eps}
\includegraphics[width=6cm]{MCprice_hn2.eps}
\end{center}
\caption{The smoothed distributions of the least squares prices of the EUR (left) and DAX (right) American put options. The vertical lines correspond to the sample mean of the least squares prices.}\label{F:optionHN}
\end{figure}
\section{Concluding remarks}
We have shown that the widely used least squares approach to Monte Carlo based pricing of American options remains valid under a very general and flexible choice of assumptions. In particular, convergence to the theoretical price obtained via Snell envelopes continues to hold with a highly adaptable setup for the approximation of conditional expectations. Of course, one should be aware that the computational cost of relaxing the assumptions may be very high. However, a growing body of empirical evidence indicates that in many practical applications even relatively limited non-linear extensions of standard regression may produce satisfactory results, as illustrated by our three examples. The relaxation of the assumptions of the method should be seen primarily as an increase in the freedom of choice of settings for a specific implementation of the algorithm, which, with careful choices, may nevertheless retain computational viability.
\subsubsection*{Acknowledgments:}
The second author acknowledges the support by the Project operated within the Foundation for Polish Science IPP Programme ``Geometry and Topology in Physical Models'' co-financed by the EU European Regional Development Fund, Operational Program Innovative Economy 2007--2013.
\bibliographystyle{plainnat}
\section{Introduction}\label{introduction}
This paper deals with minimal isometric immersions of a Kähler manifold into a product of two real space forms. More specifically, we will be interested first in obstructions to the existence of pluriharmonic isometric immersions, and secondly in restrictions on the Ricci curvature and scalar curvature of minimal isometric immersions of Kähler manifolds into those spaces.
Over the years, pluriharmonic isometric immersions have been studied by several authors. In the literature, they are also called $(1,1)$-geodesic immersions and circular immersions (cf. \cite{Dajczer-Gromoll-85,FerreiraRigoliTribuzy93}). Those immersions appear as a natural extension of minimal immersions of Riemann surfaces into a target space, and they are therefore minimal in the classical sense. The simplest examples of pluriharmonic isometric immersions are orientable minimal surfaces in arbitrary Riemannian manifolds and holomorphic isometric immersions between Kähler manifolds, and it is important to notice that those immersions also have associated families of pluriharmonic isometric immersions when the ambient manifold is a Riemannian symmetric space (cf. \cite{Esch-Tribuzy98}).
In real space forms, the study of pluriharmonic isometric immersions and their associated families is due to Dajczer and Gromoll (cf. \cite{Dajczer-Gromoll-85}). They proved that for each non-holomorphic pluriharmonic isometric immersion into a real space form there exists a one-parameter family of noncongruent pluriharmonic submanifolds, just as for minimal surfaces in three-dimensional space forms.
Dajczer and Rodríguez proved another interesting fact about pluriharmonic isometric immersions of Kähler manifolds into Euclidean spaces. They showed that pluriharmonic and minimal Kähler submanifolds mean the same in Euclidean spaces, although this is not obvious (cf. \cite{DajczerRodriguez86}). In addition, they proved that the only minimal isometric immersions of a Kähler manifold $M^{2n}$ into $\mathbb H^m$ are the minimal isometric immersions of Riemannian surfaces.
More generally, for locally symmetric Riemannian manifolds of non-compact type, Ferreira, Rigoli and Tribuzy showed that pluriharmonic isometric immersions and minimal isometric immersions of a Kähler manifold $M^{2n}$ into those spaces are also the same objects (cf. \cite{FerreiraRigoliTribuzy93}). Under some assumptions on the Ricci and scalar curvatures of the target spaces, they proved additionally that the only pluriharmonic isometric immersions of a Kähler manifold $M^{2n}$ into conformally flat Riemannian manifolds are the minimal isometric immersions of Riemannian surfaces.
In a seminal work, Takahashi established a necessary condition on the Ricci curvature $\mathrm{Ric}_M$ for a given Riemannian manifold $M^n$ to admit a minimal isometric immersion into a real space form of constant sectional curvature $c$ (cf. \cite{Takahashi}). This geometric restriction appears naturally as a consequence of the Gauss equation for minimal isometric immersions and, up to normalization, the Ricci curvature must satisfy $\mathrm{Ric}_M\leq c(n-1)$, with $n\geq 2$. In another direction of \cite{DajczerRodriguez86}, Dajczer and Rodríguez proved that if a Kähler manifold $M^{2n}$ is to be minimally immersed in $\mathbb S^m$, this necessary condition becomes more restrictive. In this case, the Ricci curvature must satisfy $\mathrm{Ric}_M\leq nc$, with $n\geq 1$. In both works, under the assumption of the existence of a minimal isometric immersion, the authors characterise the equality case as a totally geodesic isometric immersion (Takahashi theorem) and as an isometric immersion with parallel second fundamental form (Dajczer-Rodríguez theorem).
The aim of this work is to generalize these results to some products of real space forms.
First, we show that the only pluriharmonic isometric immersions of a Kähler manifold $M^{2n}$ into $\mathbb S^{m-1}\times \mathbb R$ and $\mathbb H^{m-1}\times \mathbb R$ are the minimal isometric immersions of Riemannian surfaces. We remark that minimal and pluriharmonic isometric immersions of a Kähler manifold into $\mathbb H^{m-1}\times \mathbb R$ are the same objects, by the results of Ferreira-Rigoli-Tribuzy. Dual results are obtained for maps into $\mathbb S^{m-k}\times\mathbb H^{k}$ and into warped product manifolds $I\times_{\rho}\mathbb{R}^{m-1}$, $I\times_{\rho}\mathbb{S}^{m-1}$ and $I\times_{\rho}\mathbb{H}^{m-1}$, where $I\subset\mathbb R$ is an interval, under some additional hypotheses.
Furthermore, we discuss how the existence of a minimal isometric immersion of a Kähler manifold $M^{2n}$ into $\mathbb{S}^{m-1}\times\mathbb R$ and $\mathbb{S}^{m-k}\times \mathbb{H}^k$ can impose strong restrictions on the Ricci curvature and the scalar curvature of $M^{2n}$. In this direction, thanks to the complex structure of $M^{2n}$, we obtain a better upper bound on the Ricci curvature of minimal isometric immersions of Kähler manifolds into those manifolds, and we characterise the equality case as isometric immersions with parallel second fundamental form. We also obtain an improvement to the upper bound on the scalar curvature, and we characterise the equality case as anti-pluriharmonic isometric immersions. Moreover, we observe that our technique generalizes those results to isometric immersions of a Kähler manifold into conformally flat Riemannian manifolds. This case was studied by Ferreira, Rigoli and Tribuzy under certain bound assumptions on the Ricci curvature of the ambient space.
\section{Preliminaries}\label{preliminaries}
Let $c_1,c_2\in\mathbb R$ and $n_1,n_2\in\mathbb N$. We denote by $\mathbb{Q}^{n_i}_{c_i}$ the $n_i$-dimensional simply connected Riemannian manifold of constant sectional curvature $c_i$, for $i=1,2$. As usual, $\mathbb Q^{n}_{c}=\mathbb{S}^{n}_{c}$ is the $n$-sphere for $c>0$, $\mathbb Q^{n}_{c}=\mathbb{R}^{n}$ is the Euclidean $n$-space for $c=0$ and $\mathbb Q^{n}_{c}=\mathbb{H}^{n}_{c}$ is the hyperbolic $n$-space for $c<0$. Finally, we let $\mathbb{Q}^m = \mathbb{Q}^{n_1}_{c_1}\times \mathbb{Q}^{n_2}_{c_2}$ denote the Riemannian product manifold endowed with the product metric, denoted by $\dotprod{\cdot}{\cdot}$, where $m=n_1+n_2$, and let $\pi_i:\mathbb Q^m\to\mathbb Q^{n_i}_{c_i}$ be the projection onto the factor $\mathbb{Q}^{n_i}_{c_i}$, for $i=1,2$.
Throughout this work, we consider $(M^{2n},\dif s^2)$ a $2n$-dimensional simply connected \emph{Kähler} manifold with almost complex structure $J$, with $n\in\mathbb N$. This means that $M$ is a $2n$-dimensional simply connected smooth manifold, endowed with a Riemannian metric $\dif s^2$ (also denoted by $\dotprod{\cdot}{\cdot}$), such that the almost complex structure $J$ is a parallel orthogonal tensor on the tangent bundle of $M$, i.e.,
$J^2 = -\mathrm{Id}_{\mathrm T M}$,
\[\langle JX,JY\rangle = \langle X, Y \rangle\]
and
\[(\nabla_X J)Y = \nabla_X JY - J\nabla_X Y = 0,\]
for all $X, Y \in \mathfrak{X}(M)$, where $\mathrm{Id}_{\mathrm T M}$ is the identity tensor on $\mathrm{T}M$, $\nabla$ denotes the Riemannian connection of $M$ and $\mathfrak{X}(M)=\Gamma(\mathrm T M)$ denotes the space of sections of $\mathrm T M$.
We fix the Riemann curvature tensor $\mathcal{R}$ of $M$, given by
\[\mathcal{R}(X,Y)Z = \nabla_X \nabla_Y Z- \nabla_Y \nabla_X Z -\nabla_{[X,Y]}Z,\]
and the Ricci tensor $\mathrm{Ric}$ of $M$ is given by
\[\mathrm{Ric}(X,Y) = \text{ trace of the mapping } Z\mapsto \mathcal{R}(Z,X)Y,\]
for all $X,Y,Z\in\mathfrak{X}(M)$. In our convention, we consider the Ricci curvature, in the direction of a unit vector $X\in\mathfrak{X}(M)$, by the contraction of the Ricci tensor, i.e.,
\[\mathrm{Ric}(X)=\mathrm{Ric}(X,X),\]
and the scalar curvature function of $M$ by the trace of the Ricci curvature, i.e.,
\[\mathrm{Scal} = \sum_{i=1}^{2n} \mathrm{Ric}(X_i),\]
where $\{X_1,\ldots,X_{2n}\}$ is an orthonormal frame of $\mathrm{T} M$. In particular, when $n=1$, we have that
\[\mathrm{Ric} (X_i) = K_M \ \text{and} \ \mathrm{Scal} = 2K_M,\]
for $i=1,2$, where $K_M$ denotes the Gaussian curvature of $\dif s^2$.
We remark that the almost complex structure $J$ and the Riemann curvature tensor $\mathcal{R}$ satisfy
\[\mathcal{R}(X,Y)\circ J = J\circ\mathcal{R}(X,Y) \text{ \ and \ } \mathcal{R}(JX,JY)=\mathcal{R}(X,Y),\]
and the almost complex structure $J$ and the Ricci tensor $\mathrm{Ric}$ satisfy
\[\mathrm{Ric}(JX,JY)=\mathrm{Ric}(X,Y),\]
for all $X, Y \in \mathfrak{X}(M)$ (for more details, the reader may refer to \cite[Chapter 9]{MR1393941}).
Given an isometric immersion $f: M^{2n} \to \mathbb{Q}^m$, $2n<m$, we denote by $\mathcal{R}^{\perp}$ the curvature tensor of the normal bundle $\mathrm T^{\perp}M$, by $\alpha$, seen as a section of the bundle $\mathrm T^{\ast}M\otimes \mathrm T^{\ast}M\otimes \mathrm T^{\perp}M$, the second fundamental form of $f$,
and by $A_{\xi}$ its Weingarten operator in the normal direction $\xi\in\mathfrak{X}(M)^\perp$, given by
$$\dotprod{A_\xi X}{Y} = \dotprod{\alpha(X, Y)}{\xi},$$
for all $X, Y \in \mathfrak{X}(M)$, where $\mathfrak{X}(M)^\perp=\Gamma(\mathrm T^\perp M)$ denotes the space of sections of $\mathrm T^\perp M$.
In order to obtain a Bonnet-type theorem for isometric immersions into $\mathbb{Q}^m$ (and, more generally, considering the signature cases), Lira, Tojeiro and Vitório introduced in \cite{LiraTojeiroVitorio10} the tensors $R$, $S$ and $T$ defined by
\[R = L^t L, \ S = K^t L \text{ \ and \ } T = K^tK,\]
where
\[L = \dif\pi_2\circ f_{\ast} \in \Gamma(\mathrm T^{\ast}M \otimes \mathrm T\mathbb{Q}^{n_2}_{c_2}) \text{ \ and \ } K = \dif\pi_2|_{\mathrm T^{\perp}M}\in \Gamma((\mathrm T^{\perp}M)^{\ast}\otimes \mathrm T\mathbb{Q}^{n_2}_{c_2}).\]
The tensors $R$ and $T$ are non-negative symmetric operators whose eigenvalues lie in $[0,1]$. In particular, $\mathrm{tr} \,{R}\in [0,2n]$, as noted in \cite{MendoncaTorjeiro}. In a similar way, we can define
\[\widetilde{L} = \dif\pi_1\circ f_{\ast} \in \Gamma(\mathrm T^{\ast}M \otimes \mathrm T\mathbb{Q}^{n_1}_{c_1}), \ \widetilde{K} = \dif\pi_1|_{\mathrm T^{\perp}M}\in \Gamma((\mathrm T^{\perp}M)^{\ast}\otimes \mathrm T\mathbb{Q}^{n_1}_{c_1}),\]
\[\widetilde{R} = \widetilde{L}^t \widetilde{L}, \ \widetilde{S} = \widetilde{K}^t \widetilde{L} \text{ \ and \ } \widetilde{T} = \widetilde{K}^t\widetilde{K},\]
where $\widetilde{R}$ and $\widetilde{T}$ are non-negative symmetric operators whose eigenvalues lie in $[0,1]$ and $\mathrm{tr} \,{\widetilde{R}}\in [0,2n]$; moreover,
\[R+\widetilde{R}=\mathrm{Id}_{\mathrm T M}.\]
Under these notations, in \cite{LiraTojeiroVitorio10} the authors write the Gauss, Codazzi and Ricci equations as
\begin{multline}\label{Gauss-equation}
\mathcal{R}(X,Y)Z=\Big(c_1(X\wedge Y-X\wedge RY-RX\wedge Y)\\
+(c_1+c_2)RX\wedge RY\Big)Z+A_{\alpha(Y,Z)}X-A_{\alpha(X,Z)}Y,
\end{multline}
\begin{multline}\label{Codazzi-equation}
(\nabla_X^\perp \alpha)(Y,Z)-(\nabla_Y^\perp \alpha)(X,Z)=
c_1(\dotprod{X}{Z}SY-\dotprod{Y}{Z}SX)\\
+(c_1+c_2)(\dotprod{RY}{Z}SX-\dotprod{RX}{Z}SY),
\end{multline}
\begin{equation}\label{Ricci-equation}
\mathcal{R}^\perp(X,Y)\eta = \alpha(X,A_\eta Y)-\alpha(A_\eta X,Y)+(c_1+c_2)(SX\wedge SY)\eta,
\end{equation}
where $(X\wedge Y)Z= \dotprod{Y}{Z}X-\dotprod{X}{Z}Y$, for all $ X, Y,Z \in \mathfrak{X}(M)$.
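As a concrete illustration, used repeatedly in what follows, consider $\mathbb Q^m=\mathbb Q^{m-1}_c\times\mathbb R$, so that $c_2=0$ and the second factor is spanned by the unit vertical vector $\partial_t$. Writing $\partial_t^\top$ for the projection of $\partial_t$ onto $\mathrm T M$, we have $LX=\dotprod{f_{\ast}X}{\partial_t}\partial_t=\dotprod{X}{\partial_t^\top}\partial_t$, and hence
\[RX = L^tLX=\dotprod{X}{\partial_t^\top}\partial_t^\top \quad\text{and}\quad \mathrm{tr} \,{R}=\|\partial_t^\top\|^2\leq 1.\]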
In \cite{Dajczer-Gromoll-85}, Dajczer and Gromoll introduced the \emph{circular} isometric immersions of a Kähler manifold. Those isometric immersions are also known in the literature as \emph{$(1,1)$-geodesic} immersions and as \emph{pluriharmonic} immersions \cite{FerreiraRigoliTribuzy93,Udagawa}. In this work, we adopt the pluriharmonic terminology, and we recall that an isometric immersion $f:M^{2n}\to\mathbb Q^m$ of a Kähler manifold is said to be \emph{pluriharmonic} if the second fundamental form of $f$ satisfies
\[\alpha(X,JY) = \alpha(JX,Y), \text{ \ for all \ } X, Y \in \mathfrak{X}(M),\]
or equivalently, if the Weingarten operator $A_{\xi}$ of $f$ anticommutes with the almost complex structure $J$:
\[A_{\xi}J+JA_{\xi} = 0, \text{ \ for any \ }\xi\in \mathfrak{X}(M)^{\perp}.\] In particular, pluriharmonic immersions are minimal.
It is important to point out that for $n=1$, any orientable Riemannian surface $(\Sigma,\dif s^2)$ has a natural almost complex structure $J$. This structure is given by the rotation of angle $\pi/2$ on the tangent bundle $\mathrm T\Sigma$ of $\Sigma$. Since for any $\xi\in\mathfrak{X}(\Sigma)^{\perp}$ we have $A_\xi J+JA_\xi = (\mathrm{tr} \, A_\xi) J$, then $f:\Sigma\to\mathbb Q^m$ is pluriharmonic if and only if $\mathrm{tr} \,{A_\xi}=0$ for any $\xi\in\mathfrak{X}(\Sigma)^{\perp}$, that is, if $f$ is a minimal immersion.
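For completeness, this identity can be checked directly: in the orthonormal basis $\{X_1, X_2=JX_1\}$, the symmetric operator $A_\xi$ and the rotation $J$ are represented by
\[A_\xi=\begin{pmatrix} a & b\\ b & d\end{pmatrix} \quad\text{and}\quad J=\begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix},\]
so that
\[A_\xi J+JA_\xi=\begin{pmatrix} b & -a\\ d & -b\end{pmatrix}+\begin{pmatrix} -b & -d\\ a & b\end{pmatrix}=(a+d)\begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix}=(\mathrm{tr} \,{A_\xi})\,J.\]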
\section{Obstruction results}\label{obstruction-results}
In this section, we study conditions for a minimal isometric immersion of a Kähler manifold into $\mathbb Q^m$ to be a pluriharmonic immersion. As a consequence, we analyse the obstruction conditions to the existence of pluriharmonic immersions of Kähler manifolds into $\mathbb Q^m$.
In \cite{DajczerRodriguez86}, Dajczer and Rodríguez studied those kinds of problems for the space forms $\mathbb Q^m_c$. They concluded that pluriharmonic submanifolds and minimal submanifolds are the same objects in Euclidean space. On the other hand, in the hyperbolic case, they showed that only surfaces can be immersed under the assumption of minimality; and, in the spherical case, only surfaces can be immersed under the assumption of pluriharmonicity. More generally, in \cite{FerreiraRigoliTribuzy93}, Ferreira, Rigoli and Tribuzy showed that pluriharmonic submanifolds are also equivalent to minimal submanifolds in locally symmetric Riemannian manifolds of non-compact type.
For a given minimal isometric immersion $f:M^{2n}\to\mathbb Q^m$ of a Kähler manifold, our results are based on a \emph{pluriharmonicity property}: an equation that must be satisfied by the tensor $R$. Before deriving this equation, we recall a general characterization of isometric immersions into slices of products of space forms proved by Mendonça and Tojeiro, \cite[Proposition 8]{MendoncaTorjeiro}. In terms of the trace of $R$, these submanifolds are those on which either $\mathrm{tr} \, R = 0$ or $\mathrm{tr} \, R = \dim M$. It is important to notice that, for their result, no assumption about an almost complex structure on $M$ is required.
\begin{proposition}\label{prop-slice}
Let $f: M^n \to \mathbb Q^m$ be an isometric immersion. Then $f(M^n) \subset \mathbb Q^{n_1}_{c_1}\times\{p\}$ for some $p\in \mathbb Q^{n_2}_{c_2}$ $($resp. $f(M^n) \subset \{p\}\times\mathbb Q^{n_2}_{c_2}$ for some $p\in \mathbb Q^{n_1}_{c_1})$, if and only if $\mathrm{tr} \,{R} = 0$ $($resp. $\mathrm{tr} \, R=n)$.
\end{proposition}
\begin{proof}
Since $R+\widetilde{R}=\mathrm{Id}_{\mathrm T M}$, then $\mathrm{tr} \, R+\mathrm{tr} \,\widetilde{R}=n$. Moreover, the eigenvalues of $R$ and $\widetilde{R}$ lie in $[0,1]$, $\|L\|^2 = \mathrm{tr} \,{R}$ and $\|\widetilde{L}\|^2 = \mathrm{tr} \,{\widetilde{R}}$. Therefore, $\mathrm{tr} \, R=0$ if, and only if, $L=0$; and $\mathrm{tr} \, R=n$ if, and only if, $\mathrm{tr} \, \widetilde{R}=0$, i.e., $\widetilde{L}=0$. By the definition of $L$, $\dif\pi_2\circ f_{\ast}=0$ holds if, and only if, $f(M^n) \subset \mathbb Q^{n_1}_{c_1}\times\{p\}$ for some $p\in \mathbb Q^{n_2}_{c_2}$; and by the definition of $\widetilde{L}$, $\dif\pi_1\circ f_{\ast}=0$ holds if, and only if, $f(M^n) \subset \{p\}\times\mathbb Q^{n_2}_{c_2}$ for some $p\in \mathbb Q^{n_1}_{c_1}$.
\end{proof}
In the next result, we discuss a necessary and sufficient condition for a minimal isometric immersion of a Kähler manifold into $\mathbb Q^m$ to be a pluriharmonic immersion.
\begin{lemma}[Pluriharmonicity property]\label{lemma-pluri-prop} Let $f:M^{2n}\to\mathbb Q^m$ be a minimal isometric immersion of a
Kähler manifold. Then $f$ is pluriharmonic if and only if the tensor $R$ satisfies the following equation
\begin{equation}\label{eq-pluri-property}
4c_1(n-1)(n-\mathrm{tr} \,{R})+(c_1+c_2)\Big((\mathrm{tr} \,{R})^2-\|R\|^2-\dotprod{RJ}{JR}\Big)=0.
\end{equation}
\end{lemma}
\begin{proof}
At a point $p\in M$, we consider an orthonormal basis $\{X_1,\ldots,X_{2n}\}$ of $\mathrm{T}_p M$ such that $X_{2j}=JX_{2j-1}$, for $1\leq j\leq n$. Then, at this point, by the Gauss equation \eqref{Gauss-equation} we get
\begin{multline*}
\dotprod{\mathcal{R}(X_j,X)Y}{X_j} =\dotprod{\alpha(X,Y)}{\alpha(X_j,X_j)}-\dotprod{\alpha(X,X_j)}{\alpha(Y,X_j)}\\
+c_1\Big(\dotprod{X}{Y}-\dotprod{X}{X_j}\dotprod{Y}{X_j}
-\dotprod{X}{Y}\dotprod{RX_j}{X_j}\\
-\dotprod{RX}{Y}
+\dotprod{RX}{X_j}\dotprod{Y}{X_j}+\dotprod{Y}{RX_j}\dotprod{X}{X_j}\Big)\\
+(c_1+c_2)\Big(\dotprod{RX_j}{X_j}\dotprod{RX}{Y}-\dotprod{RX}{X_j}\dotprod{RX_j}{Y}\Big).
\end{multline*}
Since $f$ is minimal, for $X=Y=X_i$, summing in $j$ from $1$ to $2n$, we have
\begin{multline}\label{exp-ricci-1}
\mathrm{Ric}(X_i) =
-\sum_{j=1}^{2n}\|\alpha(X_i,X_j)\|^2\\
+c_1\Big(2n-1-\mathrm{tr} \,{R}-2(n-1)\dotprod{RX_i}{X_i}\Big)\\
+(c_1+c_2)\Big(\dotprod{RX_i}{X_i}\mathrm{tr} \,{R}-\|RX_i\|^2\Big),
\end{multline}
and, similarly for $X=Y=JX_i$,
\begin{multline}\label{exp-ricci-2}
\mathrm{Ric}(JX_i) =
-\sum_{j=1}^{2n}\|\alpha(JX_i,X_j)\|^2\\
+c_1\Big(2n-1-\mathrm{tr} \,{R}-2(n-1)\dotprod{RJX_i}{JX_i}\Big)\\
+(c_1+c_2)\Big(\dotprod{RJX_i}{JX_i}\mathrm{tr} \,{R}-\|RJX_i\|^2\Big).
\end{multline}
On the other hand, the Kähler structure on $M$ implies that
\[\dotprod{\mathcal{R}(X_j,X_i)X_i}{X_j} = \dotprod{\mathcal{R}(X_j,X_i)JX_i}{JX_j},\]
and therefore, by the Gauss equation, summing in $j$ from $1$ to $2n$, we get
\begin{multline*}
\mathrm{Ric}(X_i) =
\sum_{j=1}^{2n} \dotprod{\alpha(X_i,JX_i)}{\alpha(X_j,JX_j)}
-\sum_{j=1}^{2n} \dotprod{\alpha(X_i,JX_j)}{\alpha(X_j,JX_i)}\\
+c_1\Big(1-\dotprod{RX_i}{X_i}-\dotprod{RJX_i}{JX_i}\Big)\\
+(c_1+c_2)\Big(\dotprod{RJX_i}{JRX_i} - \dotprod{RX_i}{JX_i}\mathrm{tr} \,{JR}\Big).
\end{multline*}
However, we notice that the first term of the expression above is equal to zero, because $X_{2j}=JX_{2j-1}$, for $1\leq j\leq n$, implies $\sum_{j=1}^{2n}\alpha(X_j,JX_j)=0$. Moreover, since $R$ is symmetric and $J$ is anti-symmetric, we have $\mathrm{tr} \,{JR} =0$. Thus,
\begin{multline}\label{exp-ricci-3}
\mathrm{Ric}(X_i) =
-\sum_{j=1}^{2n} \dotprod{\alpha(X_i,JX_j)}{\alpha(X_j,JX_i)}\\
+c_1\Big(1-\dotprod{RX_i}{X_i}-\dotprod{RJX_i}{JX_i}\Big)
+(c_1+c_2)\dotprod{RJX_i}{JRX_i}.
\end{multline}
Consider $E =\bigoplus_{j=1}^{2n} \mathrm T_p^{\perp}M$ endowed with the standard inner product. We set
\begin{align*}
u_i &= (\alpha(X_i ,JX_1),\alpha(X_i ,JX_2),\ldots, \alpha(X_i ,JX_{2n})),\\
v_i &= (\alpha(X_1,JX_i),\alpha(X_2,JX_i),\ldots, \alpha(X_{2n},JX_i)).
\end{align*}
Since $\mathrm{Ric}(JX,JY)=\mathrm{Ric}(X,Y)$, for all $X, Y \in \mathfrak{X}(M)$ and $|u_i-v_i|^2 = |u_i|^2+|v_i|^2-2\dotprod{u_i}{v_i}$, by equations \eqref{exp-ricci-1}, \eqref{exp-ricci-2} and \eqref{exp-ricci-3} we have
\begin{multline*}
|u_i-v_i|^2 = 2c_1\Big(2(n-1)-\mathrm{tr} \,{R}-(n-2)\big(\dotprod{RX_i}{X_i}+\dotprod{RJX_i}{JX_i}\big)\Big)\\
+(c_1+c_2)\Big(\big(\dotprod{RX_i}{X_i}+\dotprod{RJX_i}{JX_i}\big)\mathrm{tr} \,{R}\\
-\|RX_i\|^2-\|RJX_i\|^2-2\dotprod{RJX_i}{JRX_i}\Big),
\end{multline*}
for $1\leq i\leq 2n$. Then,
\begin{equation*
\frac{1}{2}\sum_{i=1}^{2n} |u_i-v_i|^2 = 4c_1(n-1)(n-\mathrm{tr} \, R)+(c_1+c_2)\Big((\mathrm{tr} \, R)^2-\|R\|^2-\dotprod{RJ}{JR}\Big),
\end{equation*}
that is, $|u_i-v_i|^2=0$ for all $1\leq i\leq 2n$ if, and only if, equation \eqref{eq-pluri-property} holds. Therefore, observing that $|u_i-v_i|^2=0$ for all $1\leq i\leq 2n$ if, and only if, $f$ is a pluriharmonic immersion, we conclude our assertion.
\end{proof}
\begin{remark} We notice that since $R+\widetilde{R}=\mathrm{Id}_{\mathrm T M}$, we obtain an analogous pluriharmonicity property for the tensor $\widetilde{R}$:
\begin{equation*}
4c_2(n-1)(n-\mathrm{tr} \,{\widetilde{R}})+(c_1+c_2)\Big((\mathrm{tr} \,{\widetilde{R}})^2-\|\widetilde{R}\|^2-\dotprod{\widetilde{R}J}{J\widetilde{R}}\Big)=0.
\end{equation*}
\end{remark}
\begin{remark}\label{lawn-remark} Following the approach used in \cite{LawnRoth}, one can show that the pluriharmonicity property can be written as
\[c_1\Big((\mathrm{tr} \,{R})^2-\|R\|^2-\dotprod{RJ}{JR}\Big)+c_2\Big((\mathrm{tr} \,{\widetilde{R}})^2-\|\widetilde{R}\|^2-\dotprod{\widetilde{R}J}{J\widetilde{R}}\Big)=0,\]
which is equivalent to the one provided by Lemma \ref{lemma-pluri-prop}, by the relation $R+\widetilde{R}=\mathrm{Id}_{\mathrm T M}$. Moreover, the Pluriharmonicity property's Lemma can be generalised to minimal isometric immersions of Kähler manifolds into multiproducts of space forms $\mathbb Q_{c_1}^{n_1}\times\cdots\times\mathbb Q_{c_k}^{n_k}$. In this case, the pluriharmonicity property is given by
\[\sum_{j=1}^{k}c_j\Big((\mathrm{tr} \,{R_j})^2-\|R_j\|^2-\dotprod{R_jJ}{JR_j}\Big)=0,\]
where $R_j$ is the symmetric tensor associated with the $j$-th factor, which appears in \cite{LawnRoth}. In particular, in the case $\mathbb Q^{n_1}_{c_1}\times\mathbb Q^{n_2}_{c_2}$, we have $R_1=\widetilde{R}$ and $R_2=R$.
\end{remark}
As a consequence of the Pluriharmonicity property's Lemma, we analyse the obstruction conditions to the existence of pluriharmonic immersions of Kähler manifolds into $\mathbb Q^{m-1}_c\times\mathbb R$.
\begin{theorem}\label{thm-obstruction-QmxR}
Let $f:M^{2n}\to\mathbb Q^{m-1}_c\times\mathbb R$ be an isometric immersion of a Kähler manifold, with $c \neq 0$. Assume that
\begin{itemize}
\item[$\mathrm{i)}$] either $c < 0$ and $f$ is minimal;
\item[$\mathrm{ii)}$] or $c > 0$ and $f$ is pluriharmonic.
\end{itemize}
Then $n = 1$.
\end{theorem}
\begin{proof}
Firstly, in case $\mathrm{i)}$, since $\mathbb{H}_c^{m-1}\times \mathbb R $ is a locally symmetric Riemannian manifold of non-compact type, it follows from \cite[Proposition 1]{FerreiraRigoliTribuzy93} that $f$ is also a pluriharmonic immersion; hence in both cases $f$ is pluriharmonic. Moreover, for $\mathbb Q^m = \mathbb Q^{m-1}_c\times\mathbb R$ we have that
$RX= \dotprod{X}{\partial_t^\top}\partial_t^\top$, where $\partial_t^\top$ is the projection of the unit vertical vector $\partial_t$ (corresponding to the factor $\mathbb R$) onto $\mathrm{T}M$. By the Pluriharmonicity property's Lemma, we have
\[4(n-1)\big(n-\|\partial_t^\top\|^2\big)=0,\]
that is, either $n=1$ or $\|\partial_t^\top\|^2=n$. Since $\|\partial_t^\top\|^2\leq 1$, in both cases we get $n=1$.
\end{proof}
\begin{remark}
We point out that Theorem \ref{thm-obstruction-QmxR} was also obtained by de Almeida in her thesis \cite[Theorem 3.1]{Kelly}, using similar methods.
\end{remark}
\begin{remark}
We notice that Theorem \ref{thm-obstruction-QmxR} can be extended to isometric immersions of a Kähler manifold $M^{2n}$ into a warped product manifold $I\times_{\rho}\mathbb{Q}^{m-1}_c$ endowed with the metric $\dif s^2 = \dif t^2 +\rho(t)^2 \dif\theta^2$, where $I\subset\mathbb R$ is an interval, $\rho: I\to \mathbb R$ is a non-constant positive smooth function and $\dif\theta^2$ denotes the metric of $\mathbb{Q}^{m-1}_c$. Indeed, by the Gauss equation (cf. \cite{ChenXiang,ribeiro2019}), we compute the pluriharmonicity property to be
\begin{equation}\label{casowarped}
4(n-1)\big(n\lambda(t)-\|\partial_t^\top\|^2\mu(t)\big)=0,
\end{equation}
where $\lambda(t) =\dfrac{c-\rho'(t)^2}{\rho(t)^2} \,\,\, \mbox{and}\,\,\, \mu(t)=\dfrac{c-\rho'(t)^2}{\rho(t)^2}+\dfrac{\rho''(t)}{\rho(t)}.$ However, we observe that when either $\rho''(t)\geq 0$ and $c\leq 0$, or $\rho''(t)\leq 0$ and $c>\rho'(t)^2$, then
\[n\lambda(t)-\|\partial_t^\top\|^2\mu(t)=\big(n-\|\partial_t^\top\|^2\big)\dfrac{c-\rho'(t)^2}{\rho(t)^2}-\|\partial_t^\top\|^2\dfrac{\rho''(t)}{\rho(t)}= 0\]
if, and only if, $\rho''(t)= 0$ and $\|\partial_t^\top\|^2=n$, that is, if $\{(t,\rho(t)) : t\in I\}\subset\mathbb R^2$ is a line and $n=1$, since $\|\partial_t^\top\|^2\leq 1$. Therefore, if $f:M^{2n}\to I\times_{\rho}\mathbb{Q}^{m-1}_c$ is a pluriharmonic immersion of a Kähler manifold such that either $\rho''(t)\geq 0$ and $c\leq 0$, or $\rho''(t)\leq 0$ and $c>\rho'(t)^2$, the pluriharmonicity property \eqref{casowarped} implies that $n=1$.
\end{remark}
\begin{corollary}\label{3.8}
Let $f:M^{2n}\to\mathbb Q^{n_1}_{c}\times\mathbb Q^{n_2}_{-c}$ be a pluriharmonic immersion of a Kähler manifold, with $c\neq 0$. Then either $\mathrm{tr} \, R = n$ or $n=1$.
\end{corollary}
\begin{proof}
Since $c_1+c_2 =0$, it follows directly from the Pluriharmonicity property's Lemma that $4c(n-1)(n-\mathrm{tr} \, R)=0$.
\end{proof}
\begin{corollary}\label{NonExistence2}
Let $f:M^{2n}\to\mathbb Q^{m}$ be a pluriharmonic immersion of a Kähler manifold. Assume that $RJ+JR =0$ $($resp. $RJ+JR =2J)$. Then either $c_1=0$ or $n=1$ $($resp. $c_2=0$ or $n=1)$.
\end{corollary}
\begin{proof} Since $RJ+JR =0$ if, and only if, $J^{-1}RJ+R =0$, taking traces gives $\mathrm{tr} \,{R} =0$. Moreover, $\dotprod{RJ}{JR}=-\dotprod{RJ}{RJ}=-\|R\|^2$. By the Pluriharmonicity property's Lemma, we have $4c_1(n-1)n =0$, i.e., either $c_1 = 0$ or $n=1$.
Analogously, if $RJ+JR =2J$ then $\mathrm{tr} \,{R} =2n$ and $\mathrm{tr} \,{\widetilde{R}} =0$. Thus, by the Pluriharmonicity property's Lemma for $\widetilde{R}$, we have $4c_2(n-1)n =0$, i.e., either $c_2 = 0$ or $n=1$.
\end{proof}
\begin{remark} We observe that if $RJ+JR=0$ then $\mathrm{tr} \,{R}=0$, and by Proposition \ref{prop-slice}, we have $f(M^{2n})\subset\mathbb Q_{c_1}^{n_1}\times \{p\}$, for some $p\in \mathbb Q^{n_2}_{c_2}$. By Corollary \ref{NonExistence2}, either $c_1=0$ and $f(M^{2n})\subset \mathbb R^{n_1}\times \{p\}$, or $n=1$ and $f(M^2)\subset\mathbb Q_{c_1}^{n_1}\times \{p\}$; that is, $f$ can be seen as an isometric immersion into $\mathbb Q^{n_1}_{c_1}$, and then we recover a Dajczer-Rodríguez result presented in \cite{DajczerRodriguez86}.
\end{remark}
\section{Curvature estimates}\label{curvature-estimates}
The goal of this section is to study upper bounds on the Ricci and scalar curvatures of Kähler manifolds, when we suppose the existence of minimal isometric immersions of these manifolds into certain products of space forms.
In a classical work about minimal isometric immersions, Takahashi proved that the existence of a minimal isometric immersion $f$ of an arbitrary Riemannian manifold $M^n$ into a space form $\mathbb Q^m_c$, $n\geq 2$, imposes an upper bound on the Ricci curvature of $M$, namely,
\begin{equation*}
\frac{n-1}{n}\big(cn-\|\alpha\|^2\big)\leq
\mathrm{Ric} \leq c(n-1),
\end{equation*}
where $\|\alpha\|^2$ denotes the squared norm of the second fundamental form \cite[Theorem 1]{Takahashi}. The equality case holds if and only if $f(M)$ is a totally geodesic submanifold of $\mathbb Q^m_c$. Dajczer and Rodríguez proved that the upper bound on the Ricci curvature of $M$ is more restrictive for a minimal isometric immersion $f$ of a Kähler manifold $M^{2n}$ into $\mathbb S^m_c$. In this case, they showed that $\mathrm{Ric} \leq cn$, with equality implying that $f$ has parallel second fundamental form \cite[Theorem 1.2]{DajczerRodriguez86}.
In order to improve the upper bounds on the Ricci and scalar curvatures of minimal Kähler submanifolds of some products of space forms, we first study a general upper bound on the scalar curvature of those submanifolds in $\mathbb Q^{n_1}_{c_1}\times\mathbb Q^{n_2}_{c_2}$, given in terms of the tensor $R$. For this purpose, we say that an isometric immersion $f$ of a Kähler manifold $M^{2n}$ is an \emph{anti-pluriharmonic} immersion if the second fundamental form of $f$ satisfies
\[\alpha(X,JY) = -\alpha(JX,Y), \text{ \ for all \ } X, Y \in \mathfrak{X}(M),\]
or equivalently, if the Weingarten operator $A_{\xi}$ of $f$ commutes with the almost complex structure $J$:
\[A_{\xi}J=JA_{\xi}, \text{ \ for any \ }\xi\in \mathfrak{X}(M)^{\perp}.\]
Anti-pluriharmonic immersions into Euclidean space were first studied by Rettberg in \cite{Rettberg} and by Ferus in \cite{Ferus1980}, where it was proved that anti-pluriharmonic immersions into $\mathbb Q^m_c$ have parallel second fundamental forms. We notice that, in analogy with holomorphic immersions, if the target space is also a Kähler manifold, then anti-holomorphic isometric immersions are anti-pluriharmonic immersions.
\begin{lemma}\label{lemma-ricci}
Let $f:M^{2n}\to\mathbb{Q}^m$ be a minimal isometric immersion of a Kähler manifold. Then the scalar curvature of $M$ satisfies
\[\mathrm{Scal} \leq 2nc_1(n-\mathrm{tr} \,{R})+\dfrac{(c_1+c_2)}{2}\Big((\mathrm{tr} \,{R})^2-\|R\|^2+\dotprod{RJ}{JR} \Big).\]
Moreover, the equality holds if, and only if, $f$ is an anti-pluriharmonic immersion.
\end{lemma}
\begin{proof}
At a point $p\in M$, we consider an orthonormal basis $\{X_1,\cdots,X_{2n}\}$ of $\mathrm{T}_p M$ such that $X_{2j}=JX_{2j-1}$, for $1\leq j\leq n$. Consider $E =\bigoplus_{j=1}^{2n} \mathrm T_p^{\perp}M$ endowed with the standard inner product. We set
\begin{align*}
u_i &= (\alpha(X_i ,JX_1),\alpha(X_i ,JX_2),\ldots, \alpha(X_i ,JX_{2n})),\\
v_i &= (\alpha(X_1,JX_i),\alpha(X_2,JX_i),\ldots, \alpha(X_{2n},JX_i)).
\end{align*}
Since $\mathrm{Ric}(JX,JY)=\mathrm{Ric}(X,Y)$, for all $X, Y \in \mathfrak{X}(M)$, by equations \eqref{exp-ricci-1}, \eqref{exp-ricci-2} and \eqref{exp-ricci-3}, and the parallelogram identity, we get that
\begin{equation}\label{lemma-eq-ricci}
|u_i+v_i|^2 = -4\mathrm{Ric}(X_i)+2A_i+B_i+C_i,
\end{equation}
for $1\leq i\leq 2n$, where the coefficients $A_i$, $B_i$ and $C_i$ are given by
\begin{align*}
A_i &= c_1\Big(1-\dotprod{RX_i}{X_i}-\dotprod{RJX_i}{JX_i}\Big)
+(c_1+c_2)\dotprod{RJX_i}{JRX_i},\\
B_i &= c_1\Big((2n-1)-\mathrm{tr} \,{R}-2(n-1)\dotprod{RX_i}{X_i}\Big)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
+(c_1+c_2)\Big(\dotprod{RX_i}{X_i}\mathrm{tr} \,{R}-\|RX_i\|^2\Big),\\
C_i &=c_1\Big((2n-1)-\mathrm{tr} \,{R}-2(n-1)\dotprod{RJX_i}{JX_i}\Big)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
+(c_1+c_2)\Big(\dotprod{RJX_i}{JX_i}\mathrm{tr} \,{R}-\|RJX_i\|^2\Big).
\end{align*}
Thus, by equation \eqref{lemma-eq-ricci}, we obtain
\begin{equation}\label{ineq-ricci}
\mathrm{Ric}(X_i) \leq \frac{1}{4}(2A_i+B_i+C_i).
\end{equation}
On the other hand, we compute $2A_i+B_i+C_i$ by
\begin{multline*}
2A_i+B_i+C_i=2c_1\Big(2n-\mathrm{tr} \,{R}-n\big(\dotprod{RX_i}{X_i}+\dotprod{RJX_i}{JX_i}\big)\Big)\\
+(c_1+c_2)\Big(\big(\dotprod{RX_i}{X_i}+\dotprod{RJX_i}{JX_i}\big)\mathrm{tr} \,{R}\\
-\|RX_i\|^2-\|RJX_i\|^2+\dotprod{RJX_i}{JRX_i}\Big).
\end{multline*}
Therefore, summing in $i$ from $1$ to $2n$, we obtain
\begin{align*}
4\mathrm{Scal} &\leq \sum_{i=1}^{2n}(2A_i+B_i+C_i) \\
&= 8nc_1(n-\mathrm{tr} \,{R})+ (c_1+c_2)\Big(2(\mathrm{tr} \,{R})^2-2\|R\|^2-2\dotprod{RJ}{JR}\Big),
\end{align*}
which concludes our assertion.
Note that the equality holds if, and only if
\begin{equation*}
\sum_{i=1}^{2n} |u_i+v_i|^2 = 0,
\end{equation*}
that is, $u_i = -v_i$, for $1\leq i\leq 2n$, i.e.,
\begin{equation*}
\alpha(X,JY)+\alpha(JX,Y)=0
\end{equation*}
for all $X,Y\in \mathfrak{X}(M)$; that is, if, and only if, $f$ is anti-pluriharmonic.
\end{proof}
In our next results, we show that the existence of a minimal isometric immersion of a Kähler manifold $M^{2n}$ into either $\mathbb{S}_c^{m-1}\times\mathbb R$ or $\mathbb{S}_c^{m-k}\times \mathbb{H}_{-c}^k$ imposes strong restrictions on the Ricci curvature and the scalar curvature of $M^{2n}$.
\begin{theorem}\label{thm4.1}
Let $f:M^{2n}\to\mathbb{S}_c^{m-1}\times\mathbb R$ be a minimal isometric immersion of a
Kähler manifold. Then the Ricci curvature of $M$ satisfies $\mathrm{Ric} \leq c(2n-\|\partial_t^\top\|^2)/2$, with equality implying that $f(M^{2n})\subset \mathbb{S}_c^{m-1}\times\{t\}$ for some $t\in\mathbb R$, and that $f$ has parallel second fundamental form.
\end{theorem}
\begin{proof}
In the case of $\mathbb{S}_c^{m-1}\times\mathbb R$, we have that
$RX= \dotprod{X}{\partial_t^\top}\partial_t^\top$, where $\partial_t^\top$ is the projection of the unit vertical vector $\partial_t$ (corresponding to the factor $\mathbb R$) onto $\mathrm{T}M$. Then, the coefficients $A_i$, $B_i$ and $C_i$ are given by
\begin{align*}
A_i &=c\Big(1-\dotprod{\partial_t^\top}{X_i}^2-\dotprod{\partial_t^\top}{JX_i}^2\Big),\\
B_i &= c\Big(2n-1-\|\partial_t^\top\|^2-2(n-1)\dotprod{\partial_t^\top}{X_i}^2\Big),\\
C_i &= c\Big(2n-1-\|\partial_t^\top\|^2-2(n-1)\dotprod{\partial_t^\top}{JX_i}^2\Big),
\end{align*}
for $1\leq i\leq 2n$. We compute $2A_i+B_i+C_i$ by
\begin{align*}
2A_i+B_i+C_i&=2c\Big(2n-\|\partial_t^\top\|^2-n(\dotprod{\partial_t^\top}{X_i}^2+\dotprod{\partial_t^\top}{JX_i}^2)\Big).
\end{align*}
Thus, by equation \eqref{ineq-ricci}, we obtain
\begin{equation}\label{ricci-inequality-2}
\mathrm{Ric}(X_i) \leq \frac{c}{2}\Big(2n-\|\partial_t^\top\|^2-n(\dotprod{\partial_t^\top}{X_i}^2+\dotprod{\partial_t^\top}{JX_i}^2)\Big),
\end{equation}
and therefore,
\begin{equation}\label{ricci-inequality-3}
\mathrm{Ric}(X_i) \leq \frac{c}{2}(2n-\|\partial_t^\top\|^2),
\end{equation}
for $1\leq i \leq 2n$ and $n\geq 1$.
If the equality holds in \eqref{ricci-inequality-3}, then by inequality \eqref{ricci-inequality-2}, we have that $\dotprod{\partial_t^\top}{X_i}^2+\dotprod{\partial_t^\top}{JX_i}^2 =0,$ for $1\leq i \leq 2n$, i.e., $\partial_t^\top=0$. Thus, $f(M^{2n})$ lies in a slice $\mathbb{S}_c^{m-1}\times \{t\}$, for some $t\in \mathbb R$, and satisfies $\mathrm{Ric} = nc$. Therefore, by \cite[Theorem 1.2]{DajczerRodriguez86}, $f$ has parallel second fundamental form.
\end{proof}
\begin{remark} Given a minimal isometric immersion $f:M^n\to \mathbb Q^{m-1}_c\times\mathbb R$ of an arbitrary manifold $M^n$, $n\geq 2$, the Gauss equation provides a natural bound for the Ricci curvature, controlled by $\|\partial_t^\top\|^2$, where $\|\partial_t^\top\|^2\leq 1$, in the following sense:
\begin{align*}
\mathrm{Ric}&\leq c\big(n-1-\|\partial_t^\top\|^2\big), \text{ \ for $c>0$,} \\
\mathrm{Ric}&\leq c(n-1)\big(1-\|\partial_t^\top\|^2\big), \text{ \ for $c<0$.}
\end{align*}
In both cases, the equality case holds if and only if either $f(M^2)$ is a totally geodesic surface in $\mathbb Q^{m-1}_c\times\mathbb R$, or $f(M^n)$ is a totally geodesic submanifold that lies into a slice of $\mathbb Q^{m-1}_c\times\mathbb R$.
When we assume that $M^{2n}$ is a Kähler manifold, with $n>1$, and $c>0$, this upper bound is less restrictive than the one provided by Theorem \ref{thm4.1}. However, for $n=1$, the upper bound provided by the Gauss equation is more restrictive than the one provided by our Theorem \ref{thm4.1}.
\end{remark}
\begin{remark}
We point out that the upper bound provided by Theorem \ref{thm4.1} also holds in $\mathbb{Q}_c^{m-1}\times\mathbb R$, with $c\in \mathbb R$. Moreover, when $c=0$, this upper bound coincides with the one obtained from the Gauss equation. For $c<0$, by the previous remark, we can check that this upper bound is less restrictive than the one provided by the Gauss equation; and we recall that surfaces are the only minimal Kähler submanifolds in $\mathbb{H}_c^{m-1}\times\mathbb R$ (Theorem \ref{thm-obstruction-QmxR}).
\end{remark}
\begin{corollary}\label{thm4.2}
Let $f:M^{2n}\to\mathbb{S}_c^{m-1}\times\mathbb R$ be a minimal isometric immersion of a Kähler manifold. Then the scalar curvature of $M$ satisfies $\mathrm{Scal}\leq 2nc(n-\|\partial_t^\top\|^2)$. The equality holds if, and only if, $f$ is an anti-pluriharmonic immersion.
\end{corollary}
\begin{proof}
Since $RX= \dotprod{X}{\partial_t^\top}\partial_t^\top$, a direct computation gives $\dotprod{RJ}{JR} =0$ and $\|R\|^2 = \|\partial_t^\top\|^4 =( \mathrm{tr} \,{R})^2$; the assertion then follows from Lemma \ref{lemma-ricci}.
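Indeed, writing $v=\partial_t^\top$, we have $RJv=\dotprod{Jv}{v}v=0$, so
\[(JRJR)X=\dotprod{X}{v}\,J(RJv)=0 \quad\text{and}\quad \dotprod{RJ}{JR}=\mathrm{tr} \,\big((RJ)^tJR\big)=-\mathrm{tr} \,(JRJR)=0,\]
while $R^2X=\|v\|^2\dotprod{X}{v}v$ gives $\|R\|^2=\mathrm{tr} \,(R^2)=\|v\|^4$.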
\end{proof}
\begin{remark}
Theorem \ref{thm4.1} gives an upper bound on the scalar curvature of $M^{2n}$, namely $\mathrm{Scal} \leq nc(2n-\|\partial_t^\top\|^2)$. However, this upper bound is less restrictive than the one provided by Corollary \ref{thm4.2}.
\end{remark}
\begin{corollary}\label{ricci-SxQ}
Let $f:M^{2n}\to \mathbb{S}_c^{m-k}\times \mathbb{H}_{-c}^k$ be a minimal isometric immersion of a Kähler manifold. Then $\mathrm{Ric} \leq c(2n-\mathrm{tr} \,{R})/2,$ with equality implying $f(M)\subset \mathbb{S}_c^{m-k}\times\{p\}$ for some $p\in\mathbb{H}_{-c}^k$, $\mathrm{Ric} = cn$ and that $f$ has parallel second fundamental form.
\end{corollary}
\begin{proof}
Since $c_1=-c_2=c$, the coefficients $A_i$, $B_i$ and $C_i$ are given by
\begin{align*}
A_i &= c\Big(1-\dotprod{RX_i}{X_i}-\dotprod{RJX_i}{JX_i}\Big),\\
B_i &= c\Big((2n-1)-\mathrm{tr} \,{R}-2(n-1)\dotprod{RX_i}{X_i}\Big),\\
C_i &= c\Big((2n-1)-\mathrm{tr} \,{R}-2(n-1)\dotprod{RJX_i}{JX_i}\Big),
\end{align*}
for $1\leq i\leq 2n$. We compute $2A_i+B_i+C_i$ by
\begin{align*}
2A_i+B_i+C_i&=2c\Big(2n-\mathrm{tr} \,{R}-n(\dotprod{RX_i}{X_i}+\dotprod{RJX_i}{JX_i})\Big).
\end{align*}
Thus, by equation \eqref{ineq-ricci}, we obtain
\begin{equation*}
\mathrm{Ric}(X_i) \leq \frac{c}{2}\Big(2n-\mathrm{tr} \,{R}-n(\dotprod{RX_i}{X_i}+\dotprod{RJX_i}{JX_i})\Big),
\end{equation*}
and therefore,
\begin{equation*}
\mathrm{Ric}(X_i) \leq \frac{c}{2}(2n-\mathrm{tr} \,{R}),
\end{equation*}
for $1\leq i \leq 2n$, since $R$ is a non-negative operator.
If the equality holds, we have that $\dotprod{RX_i}{X_i}+\dotprod{RJX_i}{JX_i} =0,$ for $1\leq i \leq 2n$, i.e., $\mathrm{tr} \,{R}=0$, and thus $M$ satisfies $\mathrm{Ric} = nc$. By Proposition \ref{prop-slice}, $f(M^{2n})$ lies in a slice $\mathbb{S}_c^{m-k}\times \{p\}$ for some $p\in \mathbb{H}_{-c}^{k}$ and, therefore, by \cite[Theorem 1.2]{DajczerRodriguez86}, $f$ has parallel second fundamental form.
\end{proof}
\begin{remark} Given a minimal isometric immersion $f:M^n\to \mathbb{Q}_c^{m-k}\times \mathbb{Q}_{-c}^k$ of an arbitrary manifold $M^n$, $n\geq 2$ and $c\neq 0$, the Gauss equation provides a natural bound for the Ricci curvature, controlled by $\mathrm{tr} \, R$, where $0\leq \mathrm{tr} \, R\leq n$, in the following sense:
\begin{align*}
\mathrm{Ric}&\leq c\big(n-1-\mathrm{tr} \, R\big), \text{ \ for $c>0$,} \\
\mathrm{Ric}&\leq c(n-1)\big(1-\mathrm{tr} \, R\big), \text{ \ for $c<0$ \ and \ $\mathrm{tr} \, R \leq 1$,}\\
\mathrm{Ric}&\leq c\big(1-\mathrm{tr} \, R\big), \text{ \ for $c<0$ \ and \ $\mathrm{tr} \, R>1$.}
\end{align*}
For either $c>0$, or $c<0$ and $\mathrm{tr} \, R < 1$, the equality case holds if and only if either $f(M^2)$ is a totally geodesic surface in $\mathbb{Q}_c^{m-k}\times \mathbb{Q}_{-c}^k$, or $f(M^n)$ is a totally geodesic submanifold that lies in a slice $\mathbb{Q}_c^{m-k}\times\{p\}$, for some $p\in\mathbb{Q}_{-c}^k$. For $c<0$ and $\mathrm{tr} \, R > 1$, the equality case holds if and only if either $f(M^2)$ is a totally geodesic surface in $\mathbb H^{m-k}_c\times\mathbb{S}_{-c}^k$, or $f(M^n)$ is a totally geodesic submanifold that lies in a slice $\{p\}\times\mathbb{S}_{-c}^k$, for some $p\in\mathbb{H}_c^{m-k}$. Finally, for $c<0$ and $\mathrm{tr} \, R = 1$, the equality case holds if and only if $f(M^2)$ is a totally geodesic surface in $\mathbb{H}_c^{m-k}\times \mathbb{S}_{-c}^k$.
When we assume that $M^{2n}$ is a Kähler manifold, with $n>1$ and $c>0$, this upper bound is less restrictive than the one provided by Corollary \ref{ricci-SxQ}. However, for $n=1$, the upper bound provided by the Gauss equation is more restrictive than the one provided by our Corollary \ref{ricci-SxQ}.
\end{remark}
\begin{remark}
We point out that the upper bound provided by Corollary \ref{ricci-SxQ} also holds in $\mathbb{H}_c^{m-k}\times \mathbb{S}_{-c}^k$. However, by the previous remark, we can check that this upper bound is less restrictive than the one provided by the Gauss equation.
\end{remark}
\begin{corollary}
Let $f:M^{2n}\to \mathbb{Q}_c^{m-k}\times \mathbb{Q}_{-c}^k$ be a minimal isometric immersion of a Kähler manifold, with $c\neq 0$. Then the scalar curvature of $M$ satisfies $\mathrm{Scal} \leq 2nc(n-\mathrm{tr} \,{R}).$ The equality holds if, and only if, $f$ is an anti-pluriharmonic immersion.
\end{corollary}
\begin{remark} In the previous section, we saw that minimal isometric immersions into $\mathbb{Q}_c^{m-k}\times \mathbb{Q}_{-c}^k$ with $\mathrm{tr} \,{R}=n$ are not necessarily surfaces (Pluriharmonicity property's Lemma and Corollary \ref{3.8}). However, the last corollaries give us some information about what occurs in this case: we obtain that $\mathrm{Ric}\leq nc/2$ and $\mathrm{Scal}\leq 0$.
\end{remark}
\begin{remark} Let $f:M^{2n}\to \widetilde{M}^m$ be a minimal isometric immersion of a Kähler manifold into an arbitrary Riemannian manifold $\widetilde{M}^m$, and denote by $\widetilde{R}$ the Riemann curvature tensor of $\widetilde{M}$. In the general case, with the conventions used in the proofs of Lemma \ref{lemma-pluri-prop} and Lemma \ref{lemma-ricci}, our results are obtained by studying the quantities $\omega_{i,-}=|u_i-v_i|^2$ and $\omega_{i,+}=|u_i+v_i|^2$, given by
\begin{multline*}
\omega_{i,\pm} = -2\big(\mathrm{Ric}(X_i)\pm\mathrm{Ric}(X_i)\big)
+\sum_{j=1}^{2n}\Big(\dotprod{\widetilde{R}(X_j,X_i)X_i}{X_j}\\
\pm2\dotprod{\widetilde{R}(X_j,X_i)JX_i}{JX_j}
+\dotprod{\widetilde{R}(X_j,JX_i)JX_i}{X_j}\Big),
\end{multline*}
where $\{X_1,\cdots,X_{2n}\}$ is an orthonormal basis of $\mathrm{T}_p M$ such that $X_{2j}=JX_{2j-1}$, for $1\leq j\leq n$. Then $\sum_{i=1}^{2n}\omega_{i,-}=0$ is the pluriharmonicity property, and $\omega_{i,+}\geq 0$ provides the Ricci and scalar curvature estimates for $M^{2n}$. When $\widetilde{M}^m$ is a conformally flat Riemannian manifold, its Riemann curvature tensor is given by
\begin{multline*}\dotprod{\widetilde{R}(X, Y)Z}{W} = \mathcal{S}(X,W)\dotprod{Y}{Z}+\mathcal{S}(Y,Z)\dotprod{X}{W}\\
-\mathcal{S}(X,Z)\dotprod{Y}{W}- \mathcal{S}(Y,W)\dotprod{X}{Z},
\end{multline*}
where $\mathcal{S}$ is the \emph{Schouten tensor} of $\widetilde{M}^m$, defined by
\begin{equation*}
\mathcal{S}(X,Y) = \frac{1}{m-2}\Bigg(\widetilde{\mathrm{Ric}} (X,Y) - \frac{\widetilde{\mathrm{Scal}}}{2(m-1)}\dotprod{X}{Y}\Bigg),
\end{equation*}
for $X, Y,Z,W \in \mathfrak{X}(\widetilde{M}).$ If $\mathcal{S}|_{\mathrm T M}$ denotes the restriction of $\mathcal{S}$ to $\mathrm T M\times \mathrm T M$, then
\begin{align*}
\sum_{i=1}^{2n} \omega_{i,-} &= 8(n-1)\mathrm{tr} \,{\mathcal{S}|_{\mathrm T M}},\\
\sum_{i=1}^{2n} \omega_{i,+} &=4\Big(2n\mathrm{tr} \,{\mathcal{S}|_{\mathrm T M}}-\mathrm{Scal}\Big).
\end{align*}
Therefore, if $f:M^{2n}\to \widetilde{M}^m$ is a minimal isometric immersion of a Kähler manifold into a conformally flat Riemannian manifold $\widetilde{M}^m$, then $\mathrm{Scal} \leq 2n\,\mathrm{tr} \,{\mathcal{S}|_{\mathrm T M}},$ where the equality holds if, and only if, $f$ is an anti-pluriharmonic immersion. Moreover, if $f$ satisfies $\mathrm{tr} \,{\mathcal{S}|_{\mathrm T M}}\neq 0$, then it is pluriharmonic if, and only if, $n=1$. We observe that special cases satisfying this trace assumption were studied by Ferreira, Rigoli and Tribuzy in \cite{FerreiraRigoliTribuzy93}.
\end{remark}
\bibliographystyle{amsplain}
\section{Introduction}
Spin exchange (SE) is among the most elementary two-body interactions in quantum many-body systems. Between two neutral atoms, this exchange can occur within valence electron spins, within nuclear spins, or between the electron and nuclear spins. Its coherent teeterboard-like coupling facilitates excitation exchange between two spinor particles and plays an important role in interesting quantum phenomena, ranging from versatile magnetically ordered states such as ferromagnetic or antiferromagnetic phases \cite{Ho1998,Ohmi1998} to collective atomic spin-mixing dynamics in both bosonic~\cite{Pechkis2013,Kuwamoto2004,Schmaljohann2004,Chang2004,Chang2005,Widera2005, Kronjager2006,Black2007,Klempt2009,He2015} and fermionic~\cite{Krauser2012,Krauser2014,PhysRevLett.110.250402,PhysRevA.87.043610} quantum gases. SE can also be employed for spin squeezing and entangled state generation and preparation in atomic spinor systems \cite{Luo620,Lucke773,Gross2011,PhysRevLett.107.210406}, and for coherence and quantum state transfer in quantum information studies using color centers or NMR techniques \cite{Chen2015,Neumann542,PhysRevLett.102.057403,PhysRevLett.93.130501,Plenio2013,Cai2013}.
The SE interaction between heteronuclear atoms is typically small or even minute in magnitude compared to other energy scales, such as the density dependent mean field and the linear or even quadratic Zeeman shifts. Controlled SE is thus difficult unless a resonance is encountered. Between atoms of the same species, this exchange resonance naturally appears due to their identical pseudo-spin construct, i.e., the same level spacing, as has already been studied extensively for spin mixing in $^{87}$Rb atomic Bose-Einstein condensates (BECs) \cite{Kronjager2006,Widera2005}.
If two atoms in the $F=1$ ground state are initially prepared in the $m_F=0$ state, SE flips one atom spin up into the $m_F=+1$ state while the other one gets flipped down into the $m_F=-1$ state, or {\it vice versa}. For $^{87}$Rb atoms, this interaction is calibrated by a spin-dependent scattering length $c_2\sim -0.3\,a_B<0$, which denotes a ferromagnetic interaction (with $a_B$ the Bohr radius). It is much smaller in magnitude than the spin-independent scattering length $c_0\sim 100\,a_B>0$. At realized condensate densities, the interaction energy associated with $|c_2|$ is typically not more than a few Hz.
The quadratic Zeeman shift, which differentially detunes the level spacings between the up ($|m_F=0\rangle\to |m_F=1\rangle$) and down ($|m_F=0\rangle\to |m_F=-1\rangle$) flips, causes the SE to be off resonant. Thus, even though the linear Zeeman shifts cancel out between the up and down spin flips, the observation of coherent spin mixing requires the background bias $B$ field to be limited to around $1$ Gauss. Further tuning around the resonance can be accomplished via the ac-Stark shifts from a dressing microwave coupled to the $F=2$ manifold \cite{Luo620,Gerbier2006,Zhao2014}. In NMR physics, spin exchange between electronic and nuclear spins can be tuned by Hartmann-Hahn double resonance (HHDR) \cite{HHDR1962,Plenio2013,Cai2013}, since the nuclear spin is not sensitive to the external field.
In addition to spin mixing dynamics,
recent studies in SE also concern the physics associated with interspecies SE interactions in mixtures
of heteronuclear atoms and their properties such as the
ground state phases and entanglement~\cite{Shi2006,Luo2007,Xu2009,Xu2010,Shi2010,Zhang2010,Xu2010b,Xu2011,Shi2011,Xu2012,Li2015}.
The first SE driven coherent heteronuclear spin dynamics were observed in an ultracold bosonic mixture of ($F=1$) $^{87}$Rb and $^{23}$Na atoms~\cite{Li2015}, and they are nicely described by mean field based theories, as in single atomic species~\cite{Xu2009,Xu2012}.
The dynamical effect of the SE interaction $\propto (s_+^{(a)}s_-^{(b)}+s_-^{(a)}s_+^{(b)})$ between two unlike ($\eta={a,b}$) spin-$1/2$ atoms ($\vec s^{(\eta)}$) depends heavily on their differential Zeeman shifts.
For the case of $^{87}$Rb and $^{23}$Na atoms in the $F=1$ ground states mentioned above,
their Land{\'e} g-factors are essentially the same because of their equal nuclear and electron spins.
Hence, an accidental interspecies SE resonance occurs at $B_c\sim 1.69\,\rm G$, a small but non-zero $B$ field. More generally, the Land{\'e} g-factors for unlike atoms can be very different, leading to a large Zeeman level spacing mismatch ($\sim$ 1 MHz) even at a moderately low magnetic field ($\sim$ 1 Gauss). Such a large detuning can completely overwhelm the typical rate $|c_2|$ of SE.
The other option of working at a near zero bias $B$ field is difficult due to the
experimental challenge of controlling the (fluctuating) ambient magnetic field.
This paper presents a general scheme for promoting resonant SE
between heteronuclear atoms by compensating for their energy level mismatch
using an appropriately modulated $B$ field or rf-field. The basic idea is
illustrated in Fig.~\ref{fig1}, with the modulation frequency resonant with the
level spacing mismatch. Such a scheme is of course limited to realizable frequency
ranges of available technologies.
The different Land{\'e} g-factors for the heteronuclear atoms result in different couplings to the modulated $B$ field. As we will show in the following, tuning the amplitude and/or the frequency of the driving field controls the interspecies SE dynamics.
We will first illustrate the basic operation of our scheme for a simple model of
two unlike atoms. The result obtained is then applied to a
realistic experiment of $^{87}$Rb and $^{23}$Na mixture,
accompanied with detailed numerical simulations.
Perspective applications to more general cases are then discussed together with
a realistic assessment of the potential restrictions.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{fig1.pdf}
\caption{(color online). (a) A schematic illustration for interspecies SE assisted by periodic driving.
(b) The time-dependent detuning $\delta(t)$ (black solid line)
in the presence of the drive with a period $T=2\pi/\omega$.
Effective interspecies SE occurs when $|\delta(t)|\le c$ (blue shaded region) within the (red) highlighted time windows,
for various driving amplitudes $\Omega\lesssim \delta$, $\Omega=\delta$, $\Omega\gtrsim \delta$, and $\Omega\gg \delta$.
\label{fig1}
\end{figure}
\section{Two atom physics}
Without loss of generality,
we assume an isotropic interspecies spin-spin interaction (SSI) of strength $c$ between
the two heteronuclear atoms. The model Hamiltonian thus becomes
\begin{eqnarray}
H &=&\hbar\omega_a s_z^{(a)}+\hbar\omega_b s_z^{(b)}+c\,\bold{s}^{(a)}\cdot\bold{s}^{(b)}+H_D(t), \label{sec1:model_Hamiltonian}\\
H_D(t) &=& \hbar\Omega_a\, s_z^{(a)}\cos{\omega t}+\hbar\Omega_b\, s_z^{(b)}\cos{\omega t},
\label{sec1:effective_Hamiltonian}
\end{eqnarray}
where $s_\mu^{(\eta)}$ ($\mu=x,y,z$, and $\eta=a,b$) denotes the spin-1/2 matrix for atom $\eta$
with level spacing $\hbar\omega_\eta$ between spin up $|e_\eta\rangle$
and down $|g_\eta\rangle$ states.
$H_D(t)$ describes the couplings between atoms and an external periodic driving ($B$) field
along the $z$-axis direction. Other forms of coupling
such as $\propto s_x^{(\eta)}$ or $\propto s_y^{(\eta)}$ give similar results
and will not be discussed here explicitly.
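To make the two-atom dynamics concrete, the following minimal sketch (our illustration, not code from the original work; the parameter values, tolerances, and time span are assumptions chosen for readability) integrates the Schr\"odinger equation generated by Eqs.~(\ref{sec1:model_Hamiltonian}) and (\ref{sec1:effective_Hamiltonian}) with $\hbar=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# spin-1/2 matrices; two-atom operators are built with np.kron
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

w_a, w_b = 2 * np.pi * 3.0e3, 0.0  # level spacings (rad/s), delta = w_a
c = 2 * np.pi * 10.0               # SSI strength (rad/s)
delta = w_a - w_b
Om_a, Om_b = delta, 0.0            # differential coupling Omega = delta
w_dr = delta                       # resonant modulation, Delta = 0

H0 = (w_a * np.kron(sz, I2) + w_b * np.kron(I2, sz)
      + c * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)))
HD = Om_a * np.kron(sz, I2) + Om_b * np.kron(I2, sz)

def rhs(t, psi):                   # i d|psi>/dt = H(t)|psi>
    return -1j * (H0 + np.cos(w_dr * t) * HD) @ psi

# basis order: |e_a e_b>, |e_a g_b>, |g_a e_b>, |g_a g_b>
psi0 = np.array([0, 0, 1, 0], dtype=complex)   # start in |g_a, e_b>
t = np.linspace(0, 0.5, 4000)
sol = solve_ivp(rhs, (0, 0.5), psi0, t_eval=t, rtol=1e-8, atol=1e-10)
p_eg = np.abs(sol.y[1])**2         # population of |e_a, g_b>
\end{verbatim}
The slow envelope of \texttt{p\_eg}, with angular frequency roughly $cJ_1(\Omega/\omega)$, is the drive-assisted SE oscillation analyzed below.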
Even at a small $B$ field, the mismatch between the
pseudo-spin level spacings of two unlike atoms
can be much larger than their SE interaction, i.e., $\delta=\omega_a-\omega_b\gg |c|/\hbar$, assuming $\omega_a>\omega_b$.
Efficient SE dynamics thus calls for suitable level shifts to compensate for this mismatch.
The ac-Stark shift from a microwave field is often employed for this purpose,
although it can compensate only a small $\delta$ \cite{Gerbier2006,Zhao2014}.
Our idea is instead to apply an external $\pi$-polarized oscillating rf or microwave field
with frequency $\omega\sim \delta$.
As illustrated in Fig.~\ref{fig1}(a), when the above condition is satisfied,
the interspecies SE $|g_a,e_b\rangle \leftrightarrow |e_a,g_b\rangle$
can hit a resonance assisted by the absorption or emission of an oscillation quantum (or photon)
of energy $\hbar\omega$.
The instantaneous level mismatch between the two-atom states $|g_a,e_b\rangle$ and $|e_a,g_b\rangle$
reduces to $\delta(t)=(\omega_a+\Omega_a\cos{\omega t})-(\omega_b+\Omega_b\cos{\omega t})=\delta+\Omega \cos{\omega t}$.
The differential coupling $\Omega\equiv\Omega_a-\Omega_b$ tunes SE into resonance, $\delta(t)\sim c$,
analogous to the way differential Zeeman shifts tune a magnetic Feshbach resonance,
albeit at selected instants due to the explicit time dependence here.
At a fixed $\omega$, the windows for near-resonant SE within one driving period
are highlighted (red) in Fig.~\ref{fig1}(b) for various driving amplitudes.
The largest time window appears for $\Omega\gtrsim \delta$,
which is confirmed more rigorously by Floquet theory.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.96\linewidth]{fig2.pdf}
\caption{(color online). Numerical results compared to analytical ones for $\delta=\omega_a-\omega_b=3\,\rm kHz$ and $c/\hbar=10\,\rm Hz$
with detuning $\Delta=\omega-\delta$. (a) Time evolution of fractional populations for $\Delta=5$ Hz and $\Omega=\omega=\delta$.
Spin oscillation periods (b) and amplitudes (c) from numerical evolutions with the original Hamiltonian Eq.~(\ref{sec1:model_Hamiltonian}) (black solid lines) and the effective Hamiltonian (red dashed lines). (d) The dependence of $c_{\text{eff}}$ on $\Omega$ at $\omega=\delta$.
The red dashed line denotes the analytic formula $c_{\rm eff}=cJ_1(\Omega/\omega)$ while the black solid line is
based on the oscillation periods computed from the dynamics of the original Hamiltonian.}
\label{fig2}
\end{figure}
In the high frequency limit $\omega\sim\delta\gg c/\hbar$,
an effective time-independent Hamiltonian emerges
\begin{eqnarray}
H_{\rm eff}&=& \hbar(\omega_a-{\omega}/{2})s_z^{(a)}+\hbar(\omega_b+{\omega}/{2})s_z^{(b)} \nonumber \\
&&-c_{\text{eff}}\,\bold{s}^{(a)}\cdot\bold{s}^{(b)} +\tilde{c}\,s_z^{(a)}s_z^{(b)},
\label{eqn13}
\end{eqnarray}
as detailed in the appendix below
with $c_{\rm eff}=cJ_1(\Omega/\omega)$ and $\tilde{c}=c[1-J_1(\Omega/\omega)]$.
The minus sign in front of $c_{\text{eff}}$ does not imply that
the SSI has changed its sign entirely, owing to the accompanying term $\propto s_z^{(a)} s_z^{(b)}$.
For our idea to work, the coupling amplitudes for the two atoms
must be different, i.e., $\Omega_a\neq\Omega_b$, or $\Omega\neq 0$ as otherwise $c_{\text{eff}}=0$.
Our proposal can thus be applied whenever the two atoms couple to a driving
field with different strengths, a condition that is almost always satisfied for heteronuclear
atoms whose pseudo-spin states exhibit different Land{\'e} g-factors.
The analytical results above are confirmed by numerical simulations
for the full dynamics including the periodic drive at
$\delta=\omega_a-\omega_b=3\,\rm kHz$ and $c/\hbar=10\,\rm Hz$ (satisfying $\delta\gg c/\hbar$).
The simulation starts with the two atoms initially in the state $|g_a,e_b\rangle$.
Figure~\ref{fig2} shows the nice agreement between analytical and numerical results.
The peaks for both the period and amplitude are located at $\Delta=\omega-\delta=0$ as expected.
The numerical result for the effective SE interaction strength, shown in Fig.~\ref{fig2}(d) (black solid line),
is derived by matching the frequency of the spin population oscillation (from Fourier analysis)
to the analytical result $\sqrt{4c_{\text{eff}}^2+\Delta^2}/2$ given by the effective Hamiltonian (\ref{eqn13}).
We fix $\Delta=0$ and vary $\Omega$, such that $c_{\text{eff}}$
reduces simply to the frequency of the spin oscillation.
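A minimal sketch of this frequency-matching step (our illustration; it assumes a uniformly sampled population trace \texttt{pop} at times \texttt{t}, e.g. from a simulation as in the sketch above):
\begin{verbatim}
import numpy as np
from scipy.special import j1

def dominant_angular_freq(t, pop):
    # location of the peak of the Fourier spectrum of pop(t)
    spec = np.abs(np.fft.rfft(pop - pop.mean()))
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    return 2 * np.pi * freqs[np.argmax(spec)]

# at Delta = 0 this frequency should equal c_eff = c * J1(Omega/omega)
c, omega = 2 * np.pi * 10.0, 2 * np.pi * 3.0e3   # illustrative values
for ratio in (0.5, 1.0, 1.5, 1.84, 2.5):         # ratio = Omega/omega
    print(ratio, c * j1(ratio))  # c_eff peaks near Omega ~ 1.84 omega
\end{verbatim}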
\section{Spinor mixture of ${}^{87}$$\text{Rb}$ and ${}^{23}$$\text{Na}$}
We next extend the above discussion
for two atoms to a mixture of bosonic spinor $^{23}$Na ($\eta=a$) and $^{87}$Rb
($\eta=b$) atoms in the ground $F=1$ states~\cite{Li2015}.
This represents a special case, as their level spacing
mismatch is smaller because the nuclear and electron spins of both atoms are the same.
Their near-resonant interspecies spin dynamics were
recently observed around $B_c\sim 1.69\,\rm G$.
In the off-resonant case, when their energy level mismatch
is much larger than the interspecies SE strength, this combination still represents a nice
example with which to test our idea of periodic-driving-assisted resonant SE.
\begin{figure*}[!htp]
\centering
\includegraphics[scale=0.65]{fig3.pdf}
\caption{The dependence of SE dynamics on $\omega$ for Rb (red line) and Na (blue line) atoms
at $B_0=2.2\,\rm G$ where the Zeeman energy level spacing mismatch between the two spin states $|-1,0\rangle$ and $|0,-1\rangle$
is $\delta\simeq 2\pi\times 227$ Hz and $\Omega=\delta$. (a1-a2) Coherent spin oscillations of balanced (a1) and unbalanced (a2)
atomic populations at different detunings. The black dashed lines denote populations of state $|1\rangle$.
(b1-b2) The dependence of oscillation amplitude on $\Delta$ for balanced (b1) and unbalanced (b2) mixtures.
(c1-c2) The same as above but for the oscillation period in balanced (c1) and unbalanced (c2) mixtures. }
\label{fig3}
\end{figure*}
The model Hamiltonian is detailed in the appendix with
$m_\eta$ the atomic mass, and $\mu=m_am_b/(m_a+m_b)$
the interspecies reduced mass. $V_\eta$ denotes the trap potential, and
$p_\eta$ and $q_\eta$ are respectively the linear and quadratic Zeeman shifts, while
$c_0^{(\eta)}$ and $c_2^{(\eta)}$ label the intra-atomic density-density and SE interaction strengths.
The interspecies spin-independent, spin-exchange, and spin-singlet pairing interaction strengths
are denoted by $\alpha$, $\beta$, and $\gamma$ as before in studies of binary mixture SE dynamics \cite{Xu2009}
and their values are known to be $(\alpha,\beta,\gamma)=2\pi\hbar^2a_B/\mu\times(78.9,-2.5,0.06)$
for this mixture.
The experiments of Ref. \cite{Li2015} are carried out for a $^{23}$Na atomic BEC with
a cold thermal $^{87}$Rb atomic gas in an optical dipole trap.
Their dynamics are governed by the following coupled equations
\begin{widetext}
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\phi&=&
\left[ -\frac{\hbar^2}{2m_a}\nabla^2-p_aF_z +q_aF_z^2+V_a
+c_0^{(a)}\text{Tr}(n_{a})+c_2^{(a)} (\phi^{\dagger}\mathbf{F}\phi)\cdot\mathbf{F} \right]\phi\nonumber\\
&&+[\alpha\text{Tr}(n_b)
+\beta\text{Tr}(\mathbf{F}n_b) \cdot\mathbf{F}+\gamma\mathcal{U}_{b}]\phi,\\
\frac{\partial}{\partial t}f&=&-\frac{\bold{p}}{m_b}\cdot \nabla_{\bold{r}}f+\nabla_{\mathbf{r}}V_b\cdot\nabla_{\mathbf{p}}f
+\frac{1}{i\hbar}[U,f]+\frac{1}{2}\{\nabla_{\mathbf{r}}U,\nabla_{\mathbf{r}}f\},
\end{eqnarray}
with
\begin{eqnarray}
U&=&-p_bF_z+q_bF_{z}^2+c_0^{(b)}\text{Tr}(n_b) +c_0^{(b)}n_b+c_2^{(b)}\text{Tr}(\mathbf{F}n_b)\cdot\mathbf{F}+c_2^{(b)}\mathbf{F}n_b\cdot\mathbf{F}\nonumber\\
&&+\alpha\text{Tr}(n_a)+\beta\text{Tr}(\mathbf{F}n_a)\cdot\mathbf{F}+\gamma\mathcal{U}_a,
\end{eqnarray}
\end{widetext}
where the Na condensate is described by its mean field
$\phi=\langle\hat{\phi}_a \rangle=(\phi_{1},\phi_0,\phi_{-1})^{T}$
and $(n_{a})_{ij}\equiv\phi^{*}_j\phi_i$,
the Rb gas is described by the collisionless Boltzmann equation
in terms of the Wigner function
$f_{ij}(\bold{r},\bold{p},t)=\langle e^{iHt/\hbar}\hat{f}_{ij}(\bold{r},\bold{p})e^{-iHt/\hbar}\rangle$
and $\hat{f}_{ij}\equiv\int d\bold{r}' e^{-i\bold{p}\cdot\bold{r}'/\hbar}\hat{\psi}_j^{\dagger}(\bold{r}-\bold{r}'/2) \hat{\psi}_i(\bold{r}+\bold{r}'/2)$. We define $(n_b(\mathbf{r},t))_{ij}=\int d\mathbf{p} f_{ij}(\mathbf{r},\mathbf{p},t)/(2\pi\hbar)^3$,
$(\mathcal{U}_b)_{ij}=(-1)^{i-j}(n_b)_{\bar{j}\bar{i}}/3$, and
$(\mathcal{U}_a)_{ij}=(-1)^{i-j}(n_a)_{\bar{j}\bar{i}}/3$ with $\bar{i}=-i$.
When one atomic species is non-condensed, the single-mode approximation (SMA)~\cite{Li2015}
is well satisfied for both atomic species. The resulting simplified equations form the basis of our numerical study.
The accidental resonance reported in Ref.~\cite{Li2015} at $B_c\sim 1.69\,\rm G$
is between the two atom states
$|m_F^{(a)}=0,m_F^{(b)}=-1\rangle \leftrightarrow |-1,0\rangle$.
Away from this resonance with either increasing or decreasing $B$ field,
the interspecies SE dynamics are suppressed.
Our scheme comes with a $\pi$-polarized periodic rf or microwave field coupled to the atoms
\begin{eqnarray}
H_D(t)=\cos{\omega t}\int d\bold{r}\left\lbrace \hbar\Omega_a\hat{\phi}^{\dagger}F_{z}^{(a)}\hat{\phi} +
\hbar\Omega_b\hat{\psi}^{\dagger}F_{z}^{(b)}\hat{\psi} \right\rbrace . \hskip 12pt
\end{eqnarray}
At $B=2.2\,\rm G$, for instance, the level spacing mismatch between the two atom spin states $|0,-1\rangle$ and $|-1,0\rangle$
is $\delta\simeq 2\pi\times 227~\rm Hz$, which is much larger than the typical SE strength $\beta$.
The intra-species spin dynamics are also suppressed due to the large quadratic Zeeman shifts at this $B$ field.
We numerically explored this case for both balanced and imbalanced populations of $^{87}$Rb and $^{23}$Na atoms,
starting with a coherent superposition of internal states for both species.
To promote strong effective interspecies SE, $\Omega=\delta$ is taken,
and $\omega$ is varied in the vicinity of the two atom resonance $\sim\delta$.
For the balanced case with $N_a=N_b=6\times 10^4$ atoms,
we consider an initial configuration with $50\%$ population of Rb (Na) atoms
in the state $|-1\rangle$ ($|0\rangle$), $40\%$ in $|0\rangle$ ($|-1\rangle$), and $10\%$ in $|+1\rangle$ ($|+1\rangle$).
For the unbalanced case of $N_b=6.33\times 10^4$ and $N_a=10.40\times 10^4$, the initial states
of both atoms are prepared with approximately $36\%$ population in state $|0\rangle$, $57\%$ in $|-1\rangle$, and $7\%$ in $|+1\rangle$.
The resulting near-resonant interspecies SE dynamics are shown in Fig.~\ref{fig3}.
Both the amplitude and the period of the spin oscillations are found to vary with $\omega$.
The resonance peak is seen to be shifted from the two-atom value $\Delta=0$
due to mean-field interactions, while the width of the resonance remains of the same order
as that induced by the bare SE interaction strength at weak $B$ field, as shown in Ref.~\cite{Li2015}.
It is interesting to point out that, for the controlled SE dynamics,
the periodic external drive
does not seem to affect other SSI channels,
since it does not induce single-particle excitations, as shown in Figs.~\ref{fig3}(a1,a2) (black dashed lines).
Finally, we note that our idea for controlled SE differs from both the recently demonstrated
scenario~\cite{Li2015} and the widely known Hartmann-Hahn double resonance (HHDR) applied to NV centers \cite{Plenio2013,Cai2013}.
The first scenario is based on shifting of the resonance field $B_c$
with an optically induced species-dependent (time-independent) static synthetic $B$ field.
Complications to balance the amount of species- and spin-dependent vector light shifts
do not arise in our scheme.
In the second scenario, at least one of the atomic systems is in the strong-driving limit and is dressed by the external field. Resonant spin exchange occurs when the dressed-state splitting matches the level spacing of the other atom. In our case, by contrast, the spin state is neither dressed nor flipped by the driving field, and the collective spin dynamics occur due to the inherent SSI between the atoms. Our idea thus belongs more generally to the class of Floquet engineering and can be applied to tune the effective interspecies SE for various types of spinor atomic mixtures.
\section{Conclusion}
In conclusion, we have presented a general scheme to engineer resonant heteronuclear atomic
spin dynamics by applying a periodic coupling field. It applies to
interspecies spin dynamics when the Zeeman energy level spacing
mismatch between the two species is much larger than their SSI strength.
Our method is applicable to several ongoing mixture experiments,
and is illustrated for the mixture of $^{23}$Na and $^{87}$Rb atoms
where spin dynamics were previously observed in the $F=1$ ground states at near zero field.
A simple calculation using Fermi's golden rule shows that the inelastic decay rate
associated with SE collisions is about $10^{-14}$ $\text{cm}^{3}\cdot \text{s}^{-1}$
for the $^{23}\text{Na}$--$^{87}\text{Rb}$ mixture, which should provide a sufficiently long
lifetime to carry out the proposed periodic modulation experiment.
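A rough back-of-the-envelope estimate (ours; the atomic density below is an assumed typical value, not taken from the calculation above) illustrates the point:
\begin{verbatim}
K2 = 1e-14    # inelastic SE rate coefficient from above, cm^3/s
n = 1e13      # assumed typical atomic density, cm^-3
print(1.0 / (K2 * n))   # two-body lifetime ~ 10 s, long compared
                        # to the sub-second spin oscillations
\end{verbatim}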
Another promising candidate system for applying our idea
is the $^6$Li-$^{23}$Na (Fermi-Bose) mixture which exhibits two zero crossings
for the Zeeman level mismatch at $B=0$ G and $B=70.2$ G
between the $|-1/2,1\rangle\leftrightarrow |1/2,0\rangle$ states \cite{ArnoTrautmann2016}.
\section*{Acknowledgement}
This work is supported by the National Basic Research Program of China (973 program) (No. 2013CB922004), NSFC (No. 91421305, No. 11574100, No. 11654001, and No. 11374176) and the National Thousand-Young-Talents Program.
\section{Introduction and main results}
One of the first questions addressed in elementary physics is how much time does it take an object to reach its destination.
In non-equilibrium statistical physics this is the first-arrival problem which is fundamental for diffusion controlled chemical reactions and other search problems \cite{Redner2007-0,Benichou2005-0,Benichou2011-0,Godec2016-0,Godec2016-1}.
Seemingly similar to the motion of diffusing particles, quantum systems appear to evolve randomly.
One might therefore ask for the probability that a quantum system initially localized at $\V{x}_\text{in}$ arrives at a target position $\V{x}_\text{d}$ at time $t$ for the first time.
In a more general setup, the target $\V{x}_\text{d}$ and initial position $\V{x}_\text{in}$ can be replaced with any valid states from the Hilbert space, $\sKet{\ensuremath{\psi_\text{d}}}$ or $\sKet{\ensuremath{\psi_\text{in}}}$, respectively.
This question leads to complications.
Soon after the establishment of quantum theory, it became clear that there can be no self-adjoint operator that represents the arrival time \cite{Allcock1969-0}.
Still, since the concept of arrival times is so very basic and intuitive, efforts have been made to define and then calculate the arrival time by a plethora of other methods:
imposing special boundary conditions on the Schr\"odinger equation \cite{Kumar1985-0}, introducing stochastic forces \cite{Lumpkin1995-0}, or via imaginary potentials \cite{Krapivsky2014-0}.
It was also suggested to define the arrival time in terms of positive-operator-valued measures \cite{Kijowski1974-0,Sombillo2016-0}.
In the operative approach, one tries to incorporate the detector directly into the model \cite{Aharonov1998-0,Damborenea2002-0}.
Moreover, some authors used the concept of decoherent histories to describe the path of the system until it reaches the target state \cite{Anastopoulos2006-0,Halliwell2009-0,Halliwell2009-1}.
One of the major conceptual problems is that a quantum particle does not possess a trajectory -- it can not be tracked in the classical sense.
To infer the position or state of a quantum system, the observer must perform a measurement that will collapse the system's wave function.
The correct moment to perform the measurement to achieve success is of course unknown a priori and too frequent measurement will lock the system's dynamics via the quantum Zeno effect \cite{Misra1977-0,Itano1990-0}.
A very pragmatic solution to the dilemma is the introduction of a detection protocol \cite{Gruenbaum2013-0,Dhar2015-0}.
Here, the observer decides before the experiment when he will attempt detection.
This allows the theoretician to weave the backfire of the initial unsuccessful measurements into the remaining (unitary) dynamics of the quantum system.
Periodically measured systems have also been considered in different contexts, e.g. using the measurements as a heat bath \cite{Yi2011-0}.
A popular choice is to attempt detection at fixed intervals of duration $\tau$, the so-called stroboscopic detection protocol.
Such a setup was considered in \cite{Krovi2006-0,Gruenbaum2013-0,Montero2013-0,Bourgain2014-0,Dhar2015-0,Dhar2015-1,Sinkovicz2015-0,Sinkovicz2016-0,Lahiri2017-0} and by the authors in \cite{Friedman2017-0,Friedman2017-1,Thiel2018-0}.
This is also the approach of the present work.
The detection protocol shifts the emphasis of the question: One does not ask for the first arrival of the system, but rather for its {\em first detection} in the target state.
Particularly, we ask what is the probability $F_n$ that the system is first detected in the target at the $n$-th attempt.
The first detection problem is particularly relevant from the perspective of quantum computing.
It is closely related to the quantum search problem \cite{Grover1997-0,Ambainis2001-0,Aaronson2003-0,Childs2004-0,Li2017-0} and translates to the question of when a computation result becomes available.
The popular search algorithms of Refs.~\cite{Grover1997-0,Childs2004-0} focus on tuning a quantum system such that it most effectively transforms some fixed initial state into some a priori unknown oracle state.
Our approach is different in that we fix a target state and ask for the time when it is reached.
Hence, our focus is on the investigation of $F_n$, which can later be optimized.
The canonical tight-binding model will serve as an example throughout the discussion.
It is most conveniently realized in wave-guide-lattice experiments \cite{Perets2008-0}, which could easily be modified to our setup.
The first detection of a quantum walker in the one-dimensional tight-binding model was discussed in Refs.~\cite{Dhar2015-0,Dhar2015-1,Friedman2017-0,Friedman2017-1,Thiel2018-0}.
It was found that for large times the first detection probability decays like a power law with exponent $-3$ upon which strong oscillations are superimposed.
Two dimensional systems have also been considered numerically and within a perturbation approach where different exponents were reported \cite{Dhar2015-1}.
In a more general setting, e.g. in higher dimensions including fractal systems, it is still an open question what controls the large $n$ behavior of $F_n$.
In this paper, we will focus on systems with a continuous energy spectrum, which is shown to give rise to the power-law decay of $F_n$.
In the classical theory of random walks, the first passage probability $F_n^\text{(cl)}$ also decays as a power law, with an exponent that depends on the {\em spectral dimension}, sometimes also called fracton, or harmonic, dimension.
This important quantity first appeared in the discussion of transport in fractal systems \cite{Alexander1982-0,Hughes1995-0}, where it was found that many traditionally equivalent definitions of dimensionality surprisingly give non-integer values and do not coincide.
The spectral dimension is defined as the power law exponent found in the density of energy states (DOS) $\rho\sof{E}$, which behaves like $E^{d_S^\text{DOS}/2-1}$ for small energies $E>0$.
(Throughout our work, we shift the minimal possible energy to $E=0$.)
The DOS is a property of the Hamiltonian and, due to the ubiquity of the Laplacian, also appears in countless other physical problems, e.g. lattice vibrations \cite{Montroll1947-0,vanHove1953-0,Alexander1981-0,Hughes1995-0}.
In non-fractal systems with Euclidean dimension $d$, one finds $d_S^\text{DOS}=d$; the Sierpinski gasket -- a popular fractal -- has $d_S^\text{DOS} = \ln 9 / \ln 5 \approx 1.365$ \cite{Alexander1982-0}.
For a classical random walk, the probability $P_n^\text{(cl)}\sof{\V{x}|\V{x}}$ to return to its initial position at the $n$-th step can be expressed as the Laplace transform of the DOS \cite{Alexander1981-0}.
Consequently this quantity decays as a power law with exponent $-d_S^\text{DOS}/2$.
The first passage probability is then computed from the return probability \cite{Redner2007-0}, and also decays as a power law with exponent $-\Max{d_S^\text{DOS}/2}{2-d_S^\text{DOS}/2}$.
Logarithmic corrections to the power law appear in the critical dimension $d_S^\text{DOS} = 2$.
The DOS, the first passage probability and the return probability $P_n^\text{(cl)}\sof{\V{x}|\V{x}}$ are in a triangle relation to each other.
\begin{figure}
\includegraphics[width=0.5\columnwidth]{Triangle.pdf}
\caption{
A sketch symbolizing the relations between the energy spectrum (represented by the measurement spectral density of states $f\sof{E}$, see section~\ref{sec:SpecMeas}), the return amplitudes $u_n$ [see Eq.~\eqref{eq:DefRA}], and the first detection probabilities $F_n$.
We present two ways to calculate $F_n$: Directly from the spectrum or via the return amplitudes.
\label{fig:Triangle}
}
\end{figure}
The very same program is carried out in this article in the quantum case.
We relate the Hamiltonian's spectral properties to the first detection probability $F_n$ as well as to the {\em amplitude} of return.
A subtle difference between classical first passage theory and the quantum first detection problem is that the DOS is not the relevant quantity, but rather the so-called {\em measurement spectral density of states} (MSDOS) $f\sof{E}$ of the Hamiltonian (defined below).
This key quantity is well known in the mathematical literature \cite{Marchetti2012-0}.
It is closely related -- and sometimes equal -- to the DOS $\rho\sof{E}$.
The difference is that the MSDOS not only summarizes the properties of the Hamiltonian, but {\em also those of the initial and detection states.}
In layman's terms, it combines information about available energy states with information about the initial or detection states' overlap with these energies.
Hence it allows one to concentrate only on the relevant components of the energy spectrum.
From the power-law behavior $\sAbs{E-E^*}^{d_S/2-1}$ of $f\sof{E}$ around its singularities $E^*$, a spectral dimension $d_S$ can be defined.
In this article we discriminate between two classes of quantum states.
For ``ordinary'' states, $f\sof{E}$ and $\rho\sof{E}$ share the positions and exponents of their singularities, that means $E^*$ can be identified with a van Hove singularity and $d_S = d_S^\text{DOS}$.
However, there are out-of-the-ordinary states for which this identification is not possible; $d_S^\text{DOS}$ and $d_S$ may assume different values.
It is the latter exponent that determines the behavior of the system's transition amplitudes \cite{Marchetti2012-0} -- which decay as $\text{time}^{-d_S/2}$ -- as well as the first-detection properties.
Just as in the classical case, the MSDOS $f\sof{E}$, the first detection probabilities $F_n$, and the (later precisely defined) return amplitudes $u_n$, are cast into a triangle relationship; see Fig.~\ref{fig:Triangle} for an illustration.
The triangle enables us to compute $F_n$ from the MSDOS $f\sof{E}$, or alternatively from the return amplitudes $u_n$, depending on analytical convenience or on the theoretician's taste.
Invoking the MSDOS, we show that the power-law decay of $F_n$ is generic for systems with continuous energy spectrum and that its exponent depends only on the spectral dimension $d_S$ found in $f\sof{E}$.
Our main result is
\begin{equation}
F_n
\sim
\Abs{
\Sum{l=0}{L'-1}
F_{l,d_S} e^{-in\frac{\tau E^*_l}{\hbar}}
}^2
\times
\left\{ \begin{aligned}
\frac{1}{n^{4 - d_S}}, & \; d_S < 2 \\
\frac{1}{n^2 \ln^4 n}, & \; d_S = 2 \\
\frac{1}{n^{d_S}}, & \; d_S > 2 \\
\end{aligned} \right.
.
\label{eq:AsymFDP}
\end{equation}
The quantum power law exponent is exactly double the classical exponent \cite{Redner2007-0}.
This can be understood in a hand-waving fashion by invoking the fact that one deals with amplitudes in the quantum problem.
Although both theories are developed along the same lines, the final squaring operation in going from amplitudes to probabilities doubles the resulting exponents.
Furthermore, we again find the critical dimension to be $d_S = 2$.
Beside the power law decay, oscillations are typically found and they are described by the $\sAbs{\cdots}^2$ term.
These oscillations do not occur in every system, because their frequency can be tuned with the detection period $\tau$ and because it depends on the number $L'$ of non-analytic points $E^*_l$ of the MSDOS.
The oscillations, manifest in the complex exponentials in Eq.~\eqref{eq:AsymFDP}, are a surprising addition from the classical point of view.
From the quantum perspective, they are easily understood as interference phenomena.
In the ``ordinary'' case, the spectral dimension $d_S$ found in $f\sof{E}$ is equal to $d_S^\text{DOS}$, the spectral dimension found in the DOS.
Furthermore, the singularities $E^*_l$ of $f\sof{E}$ can be identified with the {\em van Hove singularities} of the DOS \cite{vanHove1953-0}.
Since $d_S^\text{DOS}$, as well as the van Hove singularities, are properties of the DOS and thus of the Hamiltonian, our result, Eq.~\eqref{eq:AsymFDP}, is ``robust'' in the sense that a different choice of initial and detection state only changes the amplitudes $F_{l,d_S}$, but neither the power law nor the frequencies of the oscillations.
In the classical theory, this is where the story ends, but our quantum problem features an epilogue.
Due to the possibility of superposition of quantum states, the MSDOS $f\sof{E}$ can be wildly different from the DOS $\rho\sof{E}$.
This can go so far that the singularities $E^*_l$ can not be identified with the van Hove singularities or that even the spectral dimensions do not coincide, i.e. $d_S \ne d_S^\text{DOS}$.
Consequently, the quantum first detection probability may depend sensitively on the particular choice of initial and detection states, which is a considerable departure from the classical point of view.
Eq.~\eqref{eq:AsymFDP} is still valid in this out-of-the-ordinary case, but the involved quantities can no longer be inferred from the DOS.
Throughout the article we will demonstrate our reasoning in the tight-binding model in $d$ dimensions with Hamiltonian:
\begin{equation}
\ensuremath{\hat{H}}
:=
-\gamma
\Sum{\V{x}\in\Omega}{} \Ket{\V{x}}
\Sum{j=1}{d} \brr{
\Bra{\V{x}+a\V{e}_j}
+
\Bra{\V{x}-a\V{e}_j}
- 2\Bra{\V{x}}
}
,
\label{eq:TBHamiltonian}
\end{equation}
where the energy constant $\gamma$ determines the strength of the nearest neighbor hopping, $\sKet{\V{x}}$ is a position (lattice-site) eigenstate and $\V{e}_j$ is the unit vector in the $j$-th coordinate.
This model describes a particle moving coherently on an infinite simple cubic lattice with lattice constant $a$.
We stress though that our result, Eq.~\eqref{eq:AsymFDP}, is fairly generic and holds for any system with a continuous energy spectrum, although with different amplitudes $F_{l,d_S}$.
(Of course, as we briefly mentioned, terms and conditions apply.
These are made explicit later in the text.)
To strengthen this claim, we will also consider a free particle in continuous space.
Furthermore, Eq.~\eqref{eq:AsymFDP} is also applicable for {\em fractional} spectral dimensions, and we demonstrate this in a free particle with an anomalous dispersion relation.
The rest of the paper is organized as follows:
In section \ref{sec:Strobo} we explain the stroboscopic detection protocol, i.e. how the detection process is added to the system's natural dynamics.
We review how to formally obtain the first detection probability using generating functions \cite{Friedman2017-0}.
Then in section \ref{sec:SpecMeas} we present the main conceptual tool in our investigations, the MSDOS, which is closely related to the DOS.
Using these, we show that the first detection probability can be represented as a Fourier transform.
Section \ref{sec:LargeN} follows with an asymptotic formula for Fourier transforms used to derive Eq.~\eqref{eq:AsymFDP}.
The same formula can be applied to the system's free evolution unperturbed by measurement.
This opens up an alternative way to compute $F_n$, which is also done in this section.
Throughout these derivations, we illustrate our reasoning using the tight-binding model.
In the last two sections, \ref{sec:FreePart} and \ref{sec:Levy}, we compute the first detection probability for two other example models: the free particle in continuous space and a L\'evy particle.
We close the article with discussion in section \ref{sec:Disc}.
Derivations that interrupt the flow of presentation have been relegated to the appendices.
Appendices \ref{app:Plemelj} and \ref{app:Tauber} are concerned with an analogue to the Sokhotski-Plemelj formula on the unit circle and an asymptotic formula for Fourier transforms, respectively.
In Appendix \ref{app:Arrival}, we discuss the problem of first detected arrival.
Finally, Appendix \ref{app:Even} treats the case of even dimensions, where logarithmic corrections appear.
\section{The stroboscopic detection protocol}
\label{sec:Strobo}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{DetectionAndProtocol.pdf}
\caption{
Sketch of the detection protocol.
The system evolves unitarily for $\tau$ time units with the evolution operator $\ensuremath{\hat{U}}$ in between the detection attempts that are performed with the projection operator $\ensuremath{\hat{D}}$.
The detection is a strong measurement that erases the wave function in the target site, when it was unsuccessful.
\label{fig:DetProt}
}
\end{figure}
In this section, we will review the derivation of the first detection probabilities using the renewal equation approach \cite{Friedman2017-0,Friedman2017-1}.
This sets the ground for a reformulation in terms of the system's energy spectrum, which is found in the next section.
The system is initially prepared in the state $\sKet{\ensuremath{\psi_\text{in}}}$.
In the tight-binding model of Eq.~\eqref{eq:TBHamiltonian}, we can identify $\sKet{\ensuremath{\psi_\text{in}}} = \sKet{\V{x}_\text{in}}$ with a position eigenstate on the lattice, but in continuous-space systems this must be avoided due to the uncertainty principle.
In a general setup $\sKet{\ensuremath{\psi_\text{in}}}$ could be any state from the Hilbert space of the system.
We will consider both situations (lattice and continuous-space) in the examples.
The main idea of the first detection problem is to fix the times of attempted detection, $\tau < 2\tau < 3\tau < \hdots$, before the experiment.
Here, we choose to measure every $\tau$ units of time.
In between the measurement times, the system evolves unitarily with the operator $\ensuremath{\hat{U}}\sof{\tau} = e^{-i\tau\ensuremath{\hat{H}}/\hbar}$, where $\ensuremath{\hat{H}}$ is the Hamiltonian of the system.
The detection attempt is modeled as a strong measurement using the projector $\ensuremath{\hat{D}} = \KB{\ensuremath{\psi_\text{d}}}{\ensuremath{\psi_\text{d}}}$.
The detection leads to a collapse of the wave function \cite{Cohen-Tannoudji2009-0}.
The detection state $\sKet{\ensuremath{\psi_\text{d}}}$ is subject to the same restrictions as $\sKet{\ensuremath{\psi_\text{in}}}$.
In a lattice system like the tight-binding model, $\sKet{\ensuremath{\psi_\text{d}}}$ can also be chosen to be a lattice site eigenstate $\sKet{\V{x}_\text{d}}$.
Directly before the first measurement at time $\tau^-$, the system is in the state
\begin{equation}
\sKet{\psi\sof{\tau^-}} = \ensuremath{\hat{U}}\sof{\tau}\sKet{\ensuremath{\psi_\text{in}}}.
\label{eq:}
\end{equation}
Throughout the manuscript, the superscript $-$($+$) denotes a limit from below(above).
Now, measurement is attempted, i.e. $\ensuremath{\hat{D}}$ is applied.
The probability to detect the system in $\sKet{\ensuremath{\psi_\text{d}}}$ in the first attempt is therefore:
\begin{equation}
p_1
=
\sBAK{\psi\sof{\tau^-}}{\ensuremath{\hat{D}}}{\psi\sof{\tau^-}}
=
\sNorm{\ensuremath{\hat{D}}\ensuremath{\hat{U}}\sof{\tau}\sKet{\ensuremath{\psi_\text{in}}}}^2
,
\label{eq:}
\end{equation}
where $\sNorm{\sKet{\psi}} = \sqrt{\sBK{\psi}{\psi}}$ denotes the usual Hilbert-space norm of a state.
If the detection was successful, the experiment is finished, and the first detection time is $\tau$.
In the other case, the wave function collapses (under the orthogonal projection $\mathds{1}-\ensuremath{\hat{D}}$, $\mathds{1}$ being the identity operator), and is renormalized.
Directly after the first detection attempt, assumed unsuccessful, the wave function is equal to:
\begin{equation}
\sKet{\psi\sof{\tau^+}}
=
\frac{
\sbr{\mathds{1}-\ensuremath{\hat{D}}} \sKet{\psi\sof{\tau^-}}
}{
\sqrt{\sBAK{\psi\sof{\tau^-}}{\mathds{1}-\ensuremath{\hat{D}}}{\psi\sof{\tau^-}}}
}
=
\frac{\sbr{\mathds{1}-\ensuremath{\hat{D}}} \ensuremath{\hat{U}}\sof{\tau} \sKet{\ensuremath{\psi_\text{in}}}}{\sqrt{1 - p_1}}
.
\label{eq:}
\end{equation}
For example in a discrete lattice system with $\sKet{\ensuremath{\psi_\text{d}}} = \sKet{\V{x}_\text{d}}$, the amplitude of the particle at site $\V{x}_\text{d}$ is zero directly after the measurement at time $\tau^+$.
The paso-doble of unitary evolution and strong measurement is repeated until the first successful detection is registered.
This ``detection protocol'' combines the collapse of the wave function with the unitary dynamics generated by the Hamiltonian.
In our setup, detection is attempted stroboscopically, every $\tau$ time units.
Different choices of the detection times are also possible.
For example, in Ref.~\cite{Varbanov2008-0} the authors sampled the detection times from a Poisson process.
Let us assume that the first success occurred at the $n$-th trial.
Then the wave function directly before this attempt is:
\begin{equation}
\sKet{\psi\sof{n\tau^-}}
=
\frac{
\ensuremath{\hat{U}}\sof{\tau}\sbrr{\sbr{\mathds{1}-\ensuremath{\hat{D}}}\ensuremath{\hat{U}}\sof{\tau}}^{n-1}\sKet{\ensuremath{\psi_\text{in}}}
}{
\sProd{j=1}{n-1}\sqrt{1-p_j}
}
.
\label{eq:}
\end{equation}
The probability to detect the system in this trial {\em under the condition} that it has not been detected before is:
\begin{equation}
p_n
=
\sBAK{\psi\sof{n\tau^-}}{\ensuremath{\hat{D}}}{\psi\sof{n\tau^-}}
=
\frac{
\sNorm{
\ensuremath{\hat{D}}\ensuremath{\hat{U}}\sof{\tau}
\sbrr{\sbr{\mathds{1}-\ensuremath{\hat{D}}}\ensuremath{\hat{U}}\sof{\tau}}^{n-1}
\sKet{\ensuremath{\psi_\text{in}}}
}^2
}{
\sProd{j=1}{n-1}\br{1-p_j}
}
.
\label{eq:}
\end{equation}
Using the conditional probabilities $p_n$, we can write the unconditioned probability of first detection at the $n$-th attempt as the square norm of some state \cite{Friedman2017-0}:
\begin{equation}
F_n
=
p_n \Prod{j=1}{n-1} \sbr{1-p_j}
=
\sNorm{\ensuremath{\hat{D}}\ensuremath{\hat{U}}\sof{\tau}\sbrr{\sbr{\mathds{1}-\ensuremath{\hat{D}}}\ensuremath{\hat{U}}\sof{\tau}}^{n-1}\sKet{\ensuremath{\psi_\text{in}}}}^2
.
\label{eq:FnFormulaRefLater}
\end{equation}
The non-normalized state on the right hand side is called the detection amplitude \cite{Gruenbaum2013-0,Dhar2015-0}.
As it is parallel to $\sKet{\ensuremath{\psi_\text{d}}}$, we can write:
\begin{equation}
\varphi_n
:=
\sBAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{U}}\sof{\tau}\sbrr{\sbr{\mathds{1}-\ensuremath{\hat{D}}}\ensuremath{\hat{U}}\sof{\tau}}^{n-1}}{\ensuremath{\psi_\text{in}}}
.
\label{eq:DefDetAmp}
\end{equation}
The first detection probability is the square norm of this quantity:
\begin{equation}
F_n = \sAbs{\varphi_n}^2
\label{eq:}
\end{equation}
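The detection protocol translates directly into a numerical procedure. The following minimal sketch (our illustration; $\hbar=\gamma=a=1$, and a finite ring stands in for the infinite lattice) computes $F_n$ for the one-dimensional tight-binding model of Eq.~\eqref{eq:TBHamiltonian} by alternating unitary evolution and projective collapse:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, tau, n_max = 501, 0.25, 200  # ring size, detection period, attempts
H = np.zeros((N, N))
for x in range(N):
    H[x, (x + 1) % N] = H[x, (x - 1) % N] = -1.0
    H[x, x] = 2.0
U = expm(-1j * tau * H)         # evolution over one detection period

xd = 0                          # return problem: psi_in = psi_d = |xd>
psi = np.zeros(N, dtype=complex)
psi[xd] = 1.0

F = []
for n in range(1, n_max + 1):
    psi = U @ psi               # free evolution for time tau
    F.append(abs(psi[xd])**2)   # F_n = |phi_n|^2
    psi[xd] = 0.0               # failed detection: apply (1 - D)
print(sum(F))                   # cumulative detection probability
\end{verbatim}
The state is deliberately not renormalized after each collapse, so that the amplitude at the detection site directly yields the unconditioned $\varphi_n$ of Eq.~\eqref{eq:DefDetAmp}.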
It was demonstrated in \cite{Friedman2017-0} that the detection amplitudes defined by Eq.~\eqref{eq:DefDetAmp} obey the following renewal equation:
\begin{equation}
\varphi_n
=
\sBAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{U}}\sof{n\tau}}{\ensuremath{\psi_\text{in}}}
- \Sum{m=1}{n-1} \sBAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{U}}\sof{\sbr{n-m}\tau}}{\ensuremath{\psi_\text{d}}} \varphi_m
.
\label{eq:QuantumRenewal}
\end{equation}
This equation relates the first detection amplitudes with the free evolution of the wave function unperturbed from any measurement.
The first term is the direct transition from initial to detection state.
The sum, on the other hand, describes the interference that takes place after the system ``first passed'' the detection state.
As has been noted in Refs.~\cite{Gruenbaum2013-0,Friedman2017-0}, this equation is formally equivalent to the renewal equation from the first passage theory of random walks, see \cite{Redner2007-0}.
To obtain the classical equation, one replaces $\varphi_n$ with the first passage probability $F_n^\text{(cl)}$ after $n$ steps.
Furthermore, $\sBAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{U}}\sof{n\tau}}{\ensuremath{\psi_\text{d}}}$ is replaced with the probability to return to $\V{x}_\text{d}$ after $n$ steps, $P^\text{(cl)}_n\sof{\V{x}_\text{d}|\V{x}_\text{d}}$ and $\sBAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{U}}\sof{n\tau}}{\ensuremath{\psi_\text{in}}}$ with the probability to move from $\V{x}_\text{in}$ to $\V{x}_\text{d}$ in $n$ steps, $P^\text{(cl)}_n\sof{\V{x}_\text{d}|\V{x}_\text{in}}$ \cite{Redner2007-0}:
\begin{equation}
F_n^\text{(cl)}
=
P_n^\text{(cl)}\sof{\V{x}_\text{d}|\V{x}_\text{in}}
-
\Sum{m=1}{n-1}
P_{n-m}^\text{(cl)}\sof{\V{x}_\text{d}|\V{x}_\text{d}}
F_m^\text{(cl)}
.
\label{eq:ClassicalRenewal}
\end{equation}
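Before turning to generating functions, we note that Eq.~\eqref{eq:QuantumRenewal} can also be iterated numerically. A minimal sketch for the return problem of the one-dimensional tight-binding model (our illustration; $\hbar=\gamma=a=1$, with the free return amplitudes obtained by Brillouin-zone quadrature):
\begin{verbatim}
import numpy as np

tau, n_max = 0.25, 200
k = np.linspace(-np.pi, np.pi, 20001)
E = 4.0 * np.sin(k / 2)**2                 # dispersion relation
u = [np.trapz(np.exp(-1j * n * tau * E), k) / (2 * np.pi)
     for n in range(n_max + 1)]            # free return amplitudes

phi = np.zeros(n_max + 1, dtype=complex)
for n in range(1, n_max + 1):              # renewal recursion
    phi[n] = u[n] - sum(u[n - m] * phi[m] for m in range(1, n))
F = np.abs(phi)**2
# for large n, F[n] decays like n**(-3), the d_S = 1 case of the
# main result quoted in the introduction
\end{verbatim}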
Just like in random walk theory, we solve the equation with generating functions:
\begin{align}
\label{eq:DefZDetAmp}
\varphi\of{z} := & \Sum{n=1}{\infty} z^n \varphi_n \\
\ensuremath{\hat{\mathcal{U}}}\of{z} := & \sSum{n=0}{\infty} z^n \ensuremath{\hat{U}}\sof{n\tau}
.
\label{eq:DefResolvOp}
\end{align}
Observe that we put $\varphi_0 := 0$, since the first detection attempt happens at $1\times\tau$.
The generating function of the evolution operator is closely related to its resolvent $\ensuremath{\hat{\mathcal{U}}}\sof{z} = z^{-1} \sbrr{ z^{-1} - \ensuremath{\hat{U}}\sof{\tau}}^{-1}$.
Eq.~\eqref{eq:QuantumRenewal} is now multiplied with $z^n$ and summed from $n=1$ to infinity.
One obtains \cite{Friedman2017-0,Friedman2017-1}:
\begin{equation}
\varphi\of{z}
=
\frac{
\BAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{\mathcal{U}}}\of{z}}{\ensuremath{\psi_\text{in}}}
- \BK{\ensuremath{\psi_\text{d}}}{\ensuremath{\psi_\text{in}}}
}{
\BAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{\mathcal{U}}}\of{z}}{\ensuremath{\psi_\text{d}}}
}
.
\label{eq:QuantumRenewalSol2}
\end{equation}
For notational simplicity, we will henceforth consider the {\em return} problem only.
That means we only consider $\sKet{\ensuremath{\psi_\text{d}}} = \sKet{\ensuremath{\psi_\text{in}}}$ in the main text.
We stress that the derivation of the arrival problem (i.e. $\sKet{\ensuremath{\psi_\text{d}}} \ne \sKet{\ensuremath{\psi_\text{in}}}$) follows exactly along the same lines.
This is demonstrated in Appendix~\ref{app:Arrival}, where we explain the necessary modifications.
Asymptotically, Eq.~\eqref{eq:AsymFDP} is valid for the return as well as the arrival problem, although both have different amplitudes, $F_{l,d_S}$.
Consequently, any relation between asymptotic first detection probabilities and the distance between starting and final position appears in these amplitudes, and not in the exponents or frequencies \cite{Thiel2018-0}.
Let us fix our language and notation.
We abbreviate:
\begin{align}
\label{eq:DefRA}
u_n := & \sBAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{U}}\sof{n\tau}}{\ensuremath{\psi_\text{d}}}, \\
u\sof{z} := & \Sum{n=0}{\infty} u_n z^n = \sBAK{\ensuremath{\psi_\text{d}}}{\ensuremath{\hat{\mathcal{U}}}\sof{z}}{\ensuremath{\psi_\text{d}}}
.
\label{eq:DefResolv}
\end{align}
We refer to $u_n$ as the return amplitude and to $u\sof{z}$ as the resolvent.
``The'' generating function refers to $\varphi\sof{z}$.
With these symbols and with the normalization $\sBK{\ensuremath{\psi_\text{d}}}{\ensuremath{\psi_\text{d}}} = 1$, we can rewrite Eq.~\eqref{eq:QuantumRenewalSol2} as \cite{Friedman2017-1}:
\begin{equation}
\varphi\sof{z}
=
1 - \frac{1}{u\sof{z}}
.
\label{eq:GenFunc}
\end{equation}
The $z$-transform can be inverted using Cauchy's integral formula:
\begin{equation}
\varphi_n
=
\frac{1}{2\pi i} \ointctrclockwise\limits_{\Abs{z}=r} \frac{\mathrm{d} z}{z^{n+1}}
\brr{
1
-
\frac{1}{
u\sof{z}
}
}
.
\label{eq:CauchyInt}
\end{equation}
$r<1$ is the radius of the circular contour that contains only the pole at the origin.
(There is no other pole inside the unit circle, because $\varphi\sof{z}$ is by definition analytic inside the unit disk.)
Now, the integration contour is parametrized as $z=re^{i\lambda}$ and the limit $r\to1^-$ is taken:
\begin{equation}
\varphi_n
=
\Int{0}{2\pi}{\lambda} \frac{e^{-in\lambda}}{2\pi}
\brr{ 1 - \frac{1}{u\sof{e^{i\lambda}}} }
.
\label{eq:FourierInt}
\end{equation}
This identifies $\varphi_n$ as a Fourier transform, for which asymptotic formulae are readily available \cite{Gamkrelidze1989-0,Erdelyi1956-0,Cline1991-0}.
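This inversion is straightforward to carry out numerically. A minimal sketch (ours; $\hbar=\gamma=a=1$, truncation and radius are illustrative) evaluates $u(z)$ on a circle of radius $r<1$ and recovers $\varphi_n$ by a discrete Fourier transform:
\begin{verbatim}
import numpy as np

tau, N, r = 0.25, 4096, 0.995
k = np.linspace(-np.pi, np.pi, 4001)
E = 4.0 * np.sin(k / 2)**2
n = np.arange(N)
u_n = np.array([np.trapz(np.exp(-1j * m * tau * E), k)
                for m in n]) / (2 * np.pi)

u_z = np.fft.fft(u_n * r**n)          # u(r e^{i lambda}) on a grid
phi_n = np.fft.ifft(1.0 - 1.0 / u_z) / r**n
F = np.abs(phi_n)**2                  # reliable for n well below N
\end{verbatim}
Keeping $r$ slightly below one regularizes the contour, while the truncation at $N$ terms limits the range of trustworthy coefficients.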
The large $n$ asymptotics of $\varphi_n$ are related to the singularities $\Lambda^*_l$, $l\in\sbrrr{0,\hdots,L-1}$, at which $\varphi\sof{e^{i\lambda}}$ is non-analytic as a function of $\lambda$.
The resolvent of an operator is a standard tool to infer an operator's spectrum.
Since $u\sof{z}$ is equal to the resolvent of the evolution operator up to some factor, its properties are determined by the {\em energy spectrum}.
Consequently, the detection amplitude's properties are determined by the energy spectrum as well.
We will restrict ourselves to systems with a continuous energy spectrum.
This allows us to express the integrand $\varphi\sof{e^{i\lambda}}$ in terms of the so-called MSDOS, which is itself related to the DOS.
This is the subject of the next section.
\section{The density of energy states and the measurement spectral density of states}
\label{sec:SpecMeas}
\subsection{The density of states and the spectral dimension}
Before treating the continuous spectrum case, let us first consider a finite system of Hilbert space dimension $N$ with time independent Hamiltonian $\ensuremath{\hat{H}}$.
The (possibly degenerate) eigen-energies are $E_n$ and the corresponding eigenstates are $\sKet{\chi_{n,j}}$, where $j$ enumerates the degeneracy.
The DOS is defined by:
\begin{equation}
\rho\sof{E}
:=
\frac{1}{N}
\Sum{n,j}{} \delta\sof{E-E_n}
=
\frac{1}{N} \Trace{\delta\sof{E - \ensuremath{\hat{H}}}}
.
\label{eq:DOSFinite}
\end{equation}
In a system with finite-dimensional Hilbert space, $\rho\sof{E}$ is always a sum of delta functions.
When the thermodynamic limit $N\to\infty$ is taken, $\rho\sof{E}$ can become a mixture of delta functions and a density function.
(Instead of $N$, any extensive quantity can be used to normalize the thermodynamic limit.)
In this manuscript, we deal with systems that have no discrete energy states so that $\rho\sof{E}$ contains no delta functions.
An example is the tight-binding Hamiltonian of Eq.~\eqref{eq:TBHamiltonian}, which is diagonalized by free wave states $\sKet{\V{k}} := \sbr{a/(2\pi)}^{d/2} \sSum{\V{x}\in a \ensuremath{\mathbb{Z}}^d }{} e^{i \V{k}\cdot\V{x} } \sKet{\V{x}}$, where $\V{k}$ is taken from the cube-shaped Brillouin zone $\ensuremath{\mathbb{B}}=[-\pi/a,\pi/a]^d$.
The dispersion relation is the relation between energy and wave-vector:
\begin{equation}
E\of{\V{k}}
:=
4 \gamma
\Sum{j=1}{d}
\sin^2\sof{\tfrac{ak_j}{2}}
,
\label{eq:TBDispersion}
\end{equation}
where $k_j$ is the $j$-th component of the vector $\V{k}$.
The corresponding DOS can be obtained by an integral over the Brillouin zone.
In one dimension, the DOS is given by the arcsin law:
\begin{equation}
\rho\sof{E}
=
\frac{a}{2\pi}
\Int{-\frac{\pi}{a}}{\frac{\pi}{a}}{k}\delta\sof{E - E\sof{k}}
=
\frac{1}{\pi}
\frac{1}{\sqrt{\sbr{4\gamma - E} E}}
.
\label{eq:TBDOS}
\end{equation}
This is seen from plugging Eq.~\eqref{eq:TBDispersion} into the middle expression of Eq.~\eqref{eq:TBDOS} and changing the integration variable.
The allowed energies lie in the interval $[0,4\gamma]$; outside this interval, we have $\rho\sof{E} = 0$.
The DOS in higher dimensions can be interpreted as the PDF of the sum of random variables $E\sof{\V{k}}$ given by Eq.~\eqref{eq:TBDispersion}, when the $k_j$'s are thought of as i.i.d. random variables that are uniformly distributed in $[-\pi/a,\pi/a]$.
Consequently, $\rho\sof{E}$ for higher dimensions can be represented as a convolution integral:
\begin{equation}
\rho^{(d)}\sof{E}
=
\Int{}{}{E'}
\frac{1}{\pi}
\frac{1}{\sqrt{\sbr{4\gamma - E'} E'}}
\rho^{(d-1)}\sof{E-E'}
,
\label{eq:}
\end{equation}
where $\rho^{(1)}\sof{E}$ is given by Eq.~\eqref{eq:TBDOS}.
In two dimensions $\rho\sof{E}$ can be expressed in terms of complete elliptic integrals \cite{Montroll1947-0}.
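The random-variable reading suggests a simple Monte Carlo check (our illustration; $\gamma=a=1$): histogramming $E\sof{\V{k}}$ for uniformly drawn wave vectors reproduces $\rho\sof{E}$ in any dimension.
\begin{verbatim}
import numpy as np

d, M = 2, 10**6                    # dimension and sample size
k = np.random.uniform(-np.pi, np.pi, size=(M, d))
E = (4.0 * np.sin(k / 2)**2).sum(axis=1)    # dispersion relation
rho, edges = np.histogram(E, bins=400, density=True)
# d = 1 reproduces the arcsin law; d = 2 shows the logarithmic
# van Hove peak at E = 4*gamma
\end{verbatim}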
It is apparent from Eq.~\eqref{eq:TBDOS} that the DOS is not analytic everywhere.
At the singular energies $E^*_0 = 0$ and $E^*_1 = 4\gamma$, it exhibits a transition from one-over-square-root to vanishing behavior.
These singular points are called the van Hove singularities \cite{vanHove1953-0} and are a consequence of differential geometric considerations \cite{Arnold2012-1}.
One of these points is always located at the lowest possible energy (which we fix at zero).
The total number of non-analytic points in $\rho\sof{E}$ depends on the system at hand.
In higher dimensions there may be more than two of those points and the singularity may be present in some derivative of $\rho\sof{E}$.
This behavior is generic and defines the so-called {\em spectral dimension}, $d_S^\text{DOS}$, of the system:
The non-analytic term in $\rho\sof{E}$ behaves as $\sAbs{E-E^*}^{d_S^\text{DOS}/2-1}$ around the singular point $E^*$ \cite{Alexander1982-0}.
Logarithmic factors appear in even dimensions \cite{vanHove1953-0,Maradudin1958-0,Arnold2012-0}.
In certain systems with fractal characteristics, the spectral dimension can differ from the Euclidean one \cite{Hughes1995-0}.
The DOS for the two and three dimensional tight-binding model is plotted in Fig.~\ref{fig:SM}(a-b).
Our plots can be compared with the sketches from \cite{Maradudin1958-0}.
We marked the position of the van Hove singularities by vertical lines in Fig.~\ref{fig:SM}.
\begin{figure*}
\includegraphics[width=0.99\textwidth]{SpecMeas.pdf}
\caption{
MSDOS and WMSDOS for the tight-binding model with detection at the origin.
Left: The MSDOS $f\sof{E}$ of the tight-binding model.
For our special choice of detection and initial states, $f\sof{E}$ is equal to the DOS $\rho\sof{E}$.
(a): $f\sof{E}$ for $d=2$.
There are three singular energies at $0$, $4\gamma$, and $8\gamma$, indicated by the vertical lines.
At the outer singularities, $f\sof{E}$ is discontinuous; it vanishes outside $[0,8\gamma]$.
In the middle there is a logarithmic divergence.
(b): $f\sof{E}$ for $d=3$.
There are four singularities in the derivative of $f\sof{E}$ at multiples of $4\gamma$ indicated by the vertical lines.
Around the singularities $f\sof{E}$ behaves like Eq.~\eqref{eq:SpecMeasAssump}.
Compare these figures with the DOS sketches of \cite{Maradudin1958-0}.
Right: $\mu\sof{\lambda}$ for the one dimensional tight-binding model, Eq.~\eqref{eq:TBWMSDOS}.
(c): For the detection period smaller than the critical value $\tau < \tau_c = (\pi/2) \hbar/\gamma$.
The two singularities are clearly visible.
The right singularity moves further to the right as $\tau$ increases and merges with the left one at the critical value.
(Note that $\mu\sof{\lambda}$ is $2\pi$ periodic.)
Also $\mu\sof{\lambda}$ vanishes outside the singularities because the whole support of $f\sof{E}$ is mapped into the interval $[0,2\pi]$.
(d): $\mu\sof{\lambda}$ for $\tau>\tau_c$.
The second singularity reappears on the left hand side of the plot for $\tau >\tau_c$.
It moves to the right for growing $\tau$ until $\tau$ is equal to another multiple of the critical value.
Notice that $\mu\sof{\lambda}$ does not vanish outside the singularities, because $f\sof{E}$ is wrapped more than once around the unit circle.
The non-vanishing area corresponds to additional terms in the sum of Eq.~\eqref{eq:TBWMSDOS} that appear when $\tau$ is larger than $\tau_c$.
\label{fig:SM}
}
\end{figure*}
For all translational invariant systems, the DOS can be obtained from an integral over the Brillouin zone using the system's dispersion relation.
The latter defines a surface in $\sbr{\V{k},E}$-space.
The van Hove singularities are related to the critical points of the surface \cite{Arnold2012-1}.
Their maximal number is a topological property of this surface and is related to its Betti number \cite{vanHove1953-0}.
For a lattice system, the dispersion relation is always periodic over a Brillouin-zone; hence the energy surface is a torus that has multiple critical points.
For a particle in continuous space, the dispersion relation is a parabola, which has only one critical point.
\subsection{The measurement spectral density of states}
The DOS is a property of the Hamiltonian only, and encodes no information about the detection state.
A more precise tool is needed in our situation: the measurement spectral density of states (MSDOS) associated with the state $\sKet{\ensuremath{\psi_\text{d}}}$.
Instead of taking the trace of the operator $\delta\sof{E-\ensuremath{\hat{H}}}$, as we did in Eq.~\eqref{eq:DOSFinite}, we will now only consider one matrix element.
\begin{equation}
f\sof{E}
:=
\BAK{\ensuremath{\psi_\text{d}}}{\delta\sof{E-\ensuremath{\hat{H}}}}{\ensuremath{\psi_\text{d}}}
=
\Sum{n,j}{} \sAbs{\sBK{\chi_{n,j}}{\ensuremath{\psi_\text{d}}}}^2
\delta\sof{E-E_n}
.
\label{eq:SpecMeasFinite}
\end{equation}
$f\sof{E}$ can be thought of as the squared modulus of $\sKet{\ensuremath{\psi_\text{d}}}$'s ``energy representation''.
Just as before, $f\of{E}$ will approach a function of $E$ without any delta-function contributions in the thermodynamic limit.
It is normalized to unity, $\sInt{0}{\infty}{E} f\sof{E} = \sBK{\ensuremath{\psi_\text{d}}}{\ensuremath{\psi_\text{d}}} = 1$.
(Remember that the lowest possible energy is fixed at zero.)
In the mathematical literature \cite{Marchetti2012-0}, it is known as the Hamiltonian's spectral measure associated with the state $\sKet{\ensuremath{\psi_\text{d}}}$.
If $\sKet{\ensuremath{\psi_\text{d}}}$ is an atomic orbital state or some other spatially localized wave function, $f\sof{E}$ can also be identified with the local DOS \cite{Busch1998-0,Li2001-0,Yeganegi2014-0}.
The advantage of the MSDOS is that matrix elements of the Hamiltonian or related operators can be represented as integrals over all energies, in particular the return amplitudes:
\begin{equation}
u_n
=
\BAK{\ensuremath{\psi_\text{d}}}{e^{-in\frac{\tau\ensuremath{\hat{H}}}{\hbar}}}{\ensuremath{\psi_\text{d}}}
=
\Int{0}{\infty}{E} e^{-in\frac{\tau E}{\hbar}} f\sof{E}
.
\label{eq:RA}
\end{equation}
The return amplitude and the MSDOS are a Fourier transform pair.
Consider the case when the system at hand is invariant under a certain symmetry transformation.
Then one may be able to find a set of basis states $\sKet{\tilde{\chi}_n}$, such that $\sBAK{\tilde{\chi}_n}{\ensuremath{\hat{H}}}{\tilde{\chi}_n} = \sBAK{\tilde{\chi}_{n'}}{\ensuremath{\hat{H}}}{\tilde{\chi}_{n'}}$, for any pair $n$,$n'$.
In this case $N^{-1} \sTrace{\delta\sof{E - \ensuremath{\hat{H}}}} = \sBAK{\tilde{\chi}_n}{\delta\sof{E-\ensuremath{\hat{H}}}}{\tilde{\chi}_n}$ for any $n$.
This means that the MSDOS associated with $\sKet{\tilde{\chi}_n}$ {\em can be identified with the DOS.}
This is exactly the situation in the tight-binding model, for the special case when $\sKet{\ensuremath{\psi_\text{d}}}$ is a lattice site eigenstate $\sKet{\V{x}_\text{d}}$:
By translational invariance, the matrix elements $\sBAK{\V{x}}{\ensuremath{\hat{H}}}{\V{y}}$ only depend on the distance $\V{y}-\V{x}$.
In one dimension, one obtains with Eq.~\eqref{eq:TBDOS}:
\begin{equation}
f\sof{E} = \rho\sof{E}
=
\frac{1}{\pi}
\frac{1}{\sqrt{E\sof{4\gamma-E}}}
\label{eq:TBSM}
\end{equation}
Consequently, the van Hove singularities and the spectral dimension can be found in $f\sof{E}$ as well.
In general, $f\sof{E}$ and $\rho\sof{E}$ are not equal!
As a counterexample, consider the detection state $\sKet{\ensuremath{\psi_\text{d}}} = \sbr{\sKet{a} + \sKet{-a}}/\sqrt{2}$ and compute its MSDOS from the Brillouin zone integral with $\sBK{\pm a}{k} = \sqrt{a/(2\pi)} e^{\pm i a k}$:
\begin{align}
f\sof{E}
= & \nonumber
\frac{1}{2}
\Int{-\frac{\pi}{a}}{\frac{\pi}{a}}{k}
\delta\sof{E - E\sof{k}}
\br{\Bra{a} + \Bra{-a}}
\KB{k}{k}
\br{\Ket{a} + \Ket{-a}}
\\ = &
\frac{a}{4\pi} \Int{-\frac{\pi}{a}}{\frac{\pi}{a}}{k}
\delta\sof{E - E\sof{k}}
4 \cos^2\sof{ak}
=
\frac{2}{\pi} \frac{\br{1-\frac{E}{2\gamma}}^2}{\sqrt{E\sbr{4\gamma-E}}}
.
\label{eq:TBSMCounterEx1}
\end{align}
Thus, here the MSDOS has an additional factor relative to the DOS.
Although the MSDOS and the DOS are different for this detection state, we note that they both feature a one-over-square-root divergence at $E=0$ and at $E=4\gamma$.
That is, the non-analytic points of $f\sof{E}$ are the van Hove singularities and the spectral dimension $d_S$ is the same as the one found in the DOS $d_S^\text{DOS}$.
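A quick numerical sanity check (ours; $\gamma=1$) confirms that the counterexample MSDOS of Eq.~\eqref{eq:TBSMCounterEx1} is normalized to unity:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f = lambda E: (2 / np.pi) * (1 - E / 2)**2 / np.sqrt(E * (4 - E))
print(quad(f, 0.0, 4.0)[0])  # ~ 1.0, as required by <psi_d|psi_d> = 1
\end{verbatim}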
As we have shown, $f\sof{E}$ and $\rho\sof{E}$ are not equal for a general choice of detection state.
Nevertheless, $f\sof{E}$ may have $L'$ non-analytic points $E^*_l$, just like $\rho\sof{E}$ has the van Hove singularities.
We assume that $f\sof{E}$ admits the following asymptotic expansion around these points:
\begin{equation}
f\sof{E^*_l\pm\epsilon}
\sim
\tilde{f}_l\sof{\pm\epsilon}
+
A^\pm_l \epsilon^{\frac{d_S}{2}-1}
,
\label{eq:SpecMeasAssump}
\end{equation}
where the coefficients $A^\pm_l$ depend on the particular point $E^*_l$ and on the direction of the approach.
Since the singularity may be present only in a derivative, we introduce the ``analytic remainder'', which is nothing else but the Taylor expansion up to order $d_S/2-1$:
\begin{equation}
\tilde{f}_l\sof{\pm\epsilon}
=
\Sum{0\le n < \tfrac{d_S}{2}-1}{}
\frac{f^{(n)}\sof{E^*_l}}{n!} \sbr{\pm\epsilon}^n
.
\label{eq:DefAnalRemain}
\end{equation}
Clearly, an expansion like Eq.~\eqref{eq:SpecMeasAssump} is always possible.
However, the identification of $f\sof{E}$'s singularities with those of $\rho\sof{E}$ is not always possible.
We consider a detection state as ``ordinary'' when the corresponding MSDOS's singularities are located where the van Hove singularities are, and when its spectral dimension coincides with $d_S^\text{DOS}$.
For the 1d tight-binding model with $\sKet{\ensuremath{\psi_\text{d}}} = \sKet{x_\text{d}}$, we see from Eq.~\eqref{eq:TBSM} that
\begin{align}
f\sof{0+\epsilon} = & f\sof{4\gamma - \epsilon} = \frac{1}{\pi\sqrt{4\gamma}} \epsilon^{\frac{1}{2}-1} \\
f\sof{0-\epsilon} = & f\sof{4\gamma + \epsilon} = 0.
\end{align}
This identifies the spectral dimension as unity and $A_0^- = A_1^+ = 0$ as well as $A_0^+ = A_1^- = (4 \pi^2 \gamma)^{-1/2}$ and $\tilde{f}_l\sof{\pm\epsilon} = 0$.
Table~\ref{tab:Constants} lists all the constants used throughout the manuscript.
As mentioned, when the detection state is chosen as a lattice site eigenstate, $f\sof{E}$ and $\rho\sof{E}$ will coincide in the tight-binding model in any dimension.
\begin{table}
\begin{tabular}{c||c|c|c|c}
& TB & FP & L\'evy & Def. \\
\hline \hline
$L$ & $d+1$ & $1$ & $1$ & Sec.~\ref{sec:SpecMeas} \\
$E^*_l$ & $4 l \gamma$ & $0$ & $0$ & Eq.~\eqref{eq:SpecMeasAssump} \\
$d_S$ & $d$ & $d$ & $2\frac{d}{\alpha}$ & Eq.~\eqref{eq:SpecMeasAssump} \\
$A^+_l$ & $\frac{\sbr{-1}^{\frac{l}{2}}\binom{d}{l}}{\sGma{\frac{d}{2}}\sbr{4\pi\gamma}^\frac{d}{2}}$ & $\frac{E_0^{-\frac{d}{2}}}{\Gma{\frac{d}{2}}} $ & $\frac{2}{\alpha}\frac{E^{-\frac{d}{\alpha}}_0}{\Gma{\frac{d}{2}}} $ & Eq.~\eqref{eq:SpecMeasAssump} \\
$A^-_l$ & $\frac{\sbr{-1}^{\frac{d-l}{2}}\binom{d}{l}}{\sGma{\frac{d}{2}}\sbr{4\pi\gamma}^\frac{d}{2}}$ & $0$ & $0$ & Eq.~\eqref{eq:SpecMeasAssump} \\
$C_l$ & $i^l \binom{d}{l} \br{\frac{-i\hbar}{4\pi\gamma\tau}}^{\frac{d}{2}}$ & $\br{\frac{-i\hbar}{E_0\tau}}^{\frac{d}{2}}$ & $\frac{\Gma{1+\frac{d}{\alpha}}}{\Gma{1+\frac{d}{2}}} \br{\frac{-i\hbar}{E_0\tau}}^{\frac{d}{\alpha}}$ & Eq.~\eqref{eq:DefC} \\
\hline
Eq. & \eqref{eq:DefCTB} & \eqref{eq:NRA} & \eqref{eq:LevyA} &
\end{tabular}
\caption{
Table of the different coefficients in the tight-binding model (TB), the free particle (FP) and the L\'evy particle (L\'evy).
In the tight-binding model the detection state is a lattice site eigenstate $\sKet{\ensuremath{\psi_\text{d}}} = \sKet{\V{x}_\text{d}}$.
For the other two models the detection state is a Heisenberg state given by Eq.~\eqref{eq:NRDefPsi}.
$0 < \alpha < 2$ is the L\'evy parameter, see Eq.~\eqref{eq:LevyDisp}.
The last column lists where to find the definition of the quantity.
The last row lists where the specific result is found in the main text.
In the continuous space models, we used Eq.~\eqref{eq:DefC} to compute the constants $C_l$ from the $A^\pm$'s.
In the tight-binding model, we also used Eq.~\eqref{eq:DefC} to obtain the $A^\pm$'s.\footnote{
Eq.~\eqref{eq:DefC} alone is not sufficient to determine both $A^+_l$ and $A^-_l$ from $C_l$, because it is one equation for two variables.
We employed the additional condition that $A^\pm_l$ must be real, from which we inferred that $A^\pm_l$ vanishes for some $l$.
A rigorous computation involves the Mellin transform of the MSDOS around one of its singular points and uses its representation as an integral over the Brillouin zone \cite{Arnold2012-1}:
\begin{equation*}
\mathcal{M}_l^\pm\sofff{f;s}
:=
\Int{0}{\infty}{\epsilon}
\epsilon^{s-1}
\Int{\ensuremath{\mathbb{B}}}{}{\V{k}}
\delta\sof{E^*_l \pm \epsilon - E\sof{\V{k}}}
\sAbs{\ensuremath{\psi_\text{d}}\sof{\V{k}}}^2
,
\end{equation*}
where $\ensuremath{\psi_\text{d}}\sof{\V{k}}$ is the momentum representation of $\sKet{\ensuremath{\psi_\text{d}}}$.
The delta function is easily resolved, and the remaining integrand is expanded up to second order around the critical points $\V{k}^*$ {\em of the energy surface} $E\sof{\V{k}}$ that correspond to the singular energy $E^*_l$, i.e. $E\sof{\V{k}^*} = E^*_l$.
The Mellin transform has several poles in the complex $s$-plane.
The pole with the largest real part lies at $s=d_S/2-1$ and determines the small $\epsilon$ behavior of the MSDOS.
The coefficients $A^\pm_l$, as well as possible logarithmic factors can be extracted from this pole.
This is possible for any translationally invariant system.
A full derivation will be carried out in another publication.
}
The tight-binding entry for $A^+_l$ applies for even $l$; for odd $l$, $A^+_l$ vanishes.
Likewise, the entry for $A^-_l$ applies for even $d-l$ and vanishes otherwise.
In even dimensions $A^-_0$ and $A^+_d$ vanish and furthermore logarithmic factors appear.
For the free particle and the L\'evy particle there is only one singular point with $l=0$.
\label{tab:Constants}
}
\end{table}
In this special case of a lattice site detection state, the return amplitudes are Bessel functions of the first kind.
This can again be seen from an integral over the Brillouin zone using the dispersion relation Eq.~\eqref{eq:TBDispersion}:
\begin{equation}
u_n
=
\frac{a}{2\pi}
\Int{-\frac{\pi}{a}}{\frac{\pi}{a}}{k}
e^{-i 2n\tfrac{\gamma\tau}{\hbar}\br{1 - \cos\sof{ak}}}
=
e^{-i2n\tfrac{\gamma\tau}{\hbar}}
\BesselJ{0}{2n\tfrac{\gamma\tau}{\hbar}}
,
\label{eq:RATB1D}
\end{equation}
where we first used the replacement $2\sin^2\sof{x/2} = 1 - \cos x$, and then the integral representation of the Bessel function.
In higher dimensions of the tight-binding model, the integral over the Brillouin zone factorizes and we find:
\begin{equation}
u_n
=
\Prod{j=1}{d}
\frac{a}{2\pi}
\Int{-\frac{\pi}{a}}{\frac{\pi}{a}}{k_j}
e^{-i n\tfrac{4\gamma\tau}{\hbar}\sin^2\tfrac{ak_j}{2}}
=
\brr{
e^{-in\tfrac{2\gamma\tau}{\hbar}}
\BesselJ{0}{\tfrac{2\gamma\tau}{\hbar}n}}^d
\label{eq:RATB}
\end{equation}
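Both representations are easily checked numerically; the following minimal Python sketch (all parameter values are merely illustrative, with $a = \hbar = 1$) compares the closed form of Eq.~\eqref{eq:RATB1D} with a direct quadrature over the Brillouin zone:
\begin{verbatim}
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

gamma, tau, hbar = 1.0, 0.25, 1.0           # hopping energy, detection period

def u_closed(n):
    # closed form of Eq. (RATB1D)
    z = 2.0 * n * gamma * tau / hbar
    return np.exp(-1j * z) * j0(z)

def u_bz(n):
    # direct integral over the 1d Brillouin zone, lattice constant a = 1
    w = 2.0 * n * gamma * tau / hbar
    re = quad(lambda k: np.cos(w * (1.0 - np.cos(k))), -np.pi, np.pi)[0]
    im = quad(lambda k: -np.sin(w * (1.0 - np.cos(k))), -np.pi, np.pi)[0]
    return (re + 1j * im) / (2.0 * np.pi)

print(u_closed(5), u_bz(5))                 # the two values agree
\end{verbatim}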
An alternative integral representation of the Bessel function [Eq.~8.411(10) of \cite{Gradshteyn2007-0}] reveals the Bessel function as the Fourier transform of the arcsin law.
This allows us to use Eq.~\eqref{eq:TBSM} directly in Eq.~\eqref{eq:RA} in the 1d case:
\begin{equation}
u_n
=
\frac{1}{\pi}
\Int{0}{4\gamma}{E}
\frac{e^{-in\frac{\tau E}{\hbar}}}{\sqrt{\sbr{4\gamma - E}E}}
=
e^{-i\tfrac{2\gamma\tau}{\hbar}n}
\BesselJ{0}{\tfrac{2\gamma\tau}{\hbar}n}
,
\end{equation}
where the variable change $E = 2\gamma(1+x)$ has to be used to recover the reference's formula.
We plot the MSDOS of the tight-binding model for two and three dimensions in Fig.~\ref{fig:SM}(a-b).
\subsection{The wrapped MSDOS}
In Eq.~\eqref{eq:RA}, we expressed the return amplitudes in terms of $f\sof{E}$.
The same can be done to the resolvent [using Eq.~\eqref{eq:RA}, Eq.~\eqref{eq:DefResolv} and the geometric series]:
\begin{equation}
u\sof{z}
=
\Int{0}{\infty}{E}
\frac{f\sof{E}}{1 - z e^{-i \frac{\tau E}{\hbar}}}
=
\frac{1}{2\pi} \Int{0}{2\pi}{\lambda'}
\frac{\mu\sof{\lambda'}}{1 - ze^{-i\lambda'}}
.
\label{eq:DefResolvWMSDOS}
\end{equation}
Since the complex exponential in the denominator is periodic, it makes sense to gather all contributions of $f\sof{E}$ with the same phase.
The result is the ``wrapped MSDOS'' (WMSDOS):
\begin{equation}
\mu\of{\lambda}
:=
\frac{2\pi \hbar}{\tau}
\Sum{m=-\infty}{\infty}
f\of{\tfrac{\hbar}{\tau}\sbrr{\lambda+2\pi m}}
.
\label{eq:DefWMSDOS}
\end{equation}
$\mu\sof{\lambda}$ can be understood as ``$f\sof{E}$ wrapped around the unit circle''.
It is actually the spectral measure of the evolution operator associated with $\sKet{\ensuremath{\psi_\text{d}}}$.
It is normalized according to $(2\pi)^{-1}\sInt{0}{2\pi}{\lambda}\mu\sof{\lambda} = \sBK{\ensuremath{\psi_\text{d}}}{\ensuremath{\psi_\text{d}}} = 1$.
Example plots of $\mu\sof{\lambda}$ can be found in the insets of Fig.~\ref{fig:SM}.
In the mathematical literature, Eq.~\eqref{eq:DefResolvWMSDOS} is called the Cauchy transform of the measure $\mu\of{\lambda}\mathrm{d} \lambda$ \cite{Cima2006-0}.
In contrast to the series definition, Eq.~\eqref{eq:DefResolv}, the integral representation is also valid for $\sAbs{z} > 1$.
A system with an infinite energy band, like a free particle in continuous space, will always have infinitely many terms in the sum of Eq.~\eqref{eq:DefWMSDOS}.
For a system with a finite energy band, like the tight-binding model, most of the terms in Eq.~\eqref{eq:DefWMSDOS} will be zero, because they are outside the support of $f\sof{E}$.
The support of $f\sof{E}$ gets stretched or compressed by a factor $\tau/\hbar$ before it is wrapped onto the interval $[0,2\pi]$.
At certain critical values of $\tau$ a new term will appear in the sum of Eq.~\eqref{eq:DefWMSDOS}.
To better understand Eq.~\eqref{eq:DefWMSDOS}, consider the one dimensional tight-binding model again.
For very small values of $\tau$, smaller than the critical value:
\begin{equation}
\tau_c
:=
\frac{2\pi\hbar}{4\gamma}
=
\frac{\pi\hbar}{2\gamma}
,
\label{eq:TBDefCritTau}
\end{equation}
which is set by the width of the energy band, $4\gamma$ [see Eq.~\eqref{eq:TBDispersion}], the support of $\mu\sof{\lambda}$ is actually smaller than $2\pi$ and $\mu\sof{\lambda}$ is just a rescaled version of $f\sof{E}$.
Additional terms appear in $\mu\sof{\lambda}$, as soon as $\tau$ surpasses a multiple of the critical value $\tau_c$.
Assume that $\sbr{n-1}\tau_c < \tau < n\tau_c$ for some positive integer $n$; combining Eq.~\eqref{eq:DefWMSDOS} and Eq.~\eqref{eq:TBSM}, and understanding each summand to vanish when its argument lies outside the support of $f\sof{E}$, we obtain:
\begin{equation}
\mu\sof{\lambda}
=
\Sum{m=0}{n-1}
\frac{2}{\sqrt{
\sbr{\lambda+2\pi m}
\br{\tfrac{4 \gamma\tau}{\hbar} - \lambda - 2\pi m}
}}
.
\label{eq:TBWMSDOS}
\end{equation}
We see that $f\sof{E}$'s singularities are inherited by $\mu\sof{\lambda}$.
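Numerically, $\mu\sof{\lambda}$ is conveniently obtained by truncating the wrapping sum of Eq.~\eqref{eq:DefWMSDOS}; a minimal Python sketch for the 1d tight-binding model (arbitrary units, $\hbar = 1$) reads:
\begin{verbatim}
import numpy as np

gamma, hbar = 1.0, 1.0

def f(E):
    # 1d tight-binding MSDOS, Eq. (TBSM); zero outside the band [0, 4*gamma]
    E = np.atleast_1d(np.asarray(E, dtype=float))
    out = np.zeros_like(E)
    band = (E > 0.0) & (E < 4.0 * gamma)
    out[band] = 1.0 / (np.pi * np.sqrt(E[band] * (4.0 * gamma - E[band])))
    return out

def mu(lam, tau, mmax=20):
    # wrapped MSDOS, Eq. (DefWMSDOS); only finitely many terms are nonzero
    s = sum(f(hbar / tau * (lam + 2.0 * np.pi * m))
            for m in range(-mmax, mmax + 1))
    return 2.0 * np.pi * hbar / tau * s
\end{verbatim}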
The critical behavior of $\mu\sof{\lambda}$ around its singular points translates to the behavior of the resolvent $u\sof{re^{i\lambda}}$ close to the unit circle, i.e. in the limit $r\to1^-$.
As we show in Appendix~\ref{app:Plemelj}, the chain of definitions for $f\sof{E}$ from Eq.~\eqref{eq:SpecMeasAssump} can be traced forward to $u\sof{e^{i\lambda}}$, in order to find the singularities in the resolvent.
We summarize this behavior in the following equation:
\begin{equation}
u\sof{e^{i\sbr{\Lambda^*_l \pm\epsilon}}}
\sim
\tilde{u}_l\sof{\pm\epsilon}
+
B^\pm_l\epsilon^{\frac{d_S}{2}-1}
,
\label{eq:AsymResolv}
\end{equation}
where the notation is similar to Eq.~\eqref{eq:SpecMeasAssump}.
The constants $B^\pm_l$ and $A^\pm_l$ are related, as we will show later in Eqs.~\eqref{eq:DefC} and \eqref{eq:DefCTB}, where both are computed from information about $u_n$.
The particular way how one obtains these constants -- from $f\sof{E}$, or from $u_n$ -- is a matter of convenience.
The wrapping procedure Eq.~\eqref{eq:DefWMSDOS} shifts the positions of the singularities from $E^*_l$ to:
\begin{equation}
\Lambda^*_l
:=
\frac{E^*_l \tau}{\hbar}
\mod 2\pi
.
\label{eq:DefSingLambdas}
\end{equation}
For the 1d tight-binding model with localized detection state, these are the points
\begin{equation}
\Lambda^*_0 = 0, \qquad \Lambda^*_1 = \frac{4\gamma\tau}{\hbar} \mod 2\pi
.
\end{equation}
For special choices of $\tau$, two singular energies $E^*_l$ and $E^*_{l'}$ become equivalent:
\begin{equation}
\tau_c^{(l,l')}
:=
\frac{2 \pi \hbar}{\sAbs{E^*_l - E^*_{l'}}}
.
\label{eq:DefCritTau}
\end{equation}
These are the critical sampling periods \cite{Thiel2018-0}, and we will later show that such choices of $\tau$ yield special behavior of the first detection probabilities.
In the tight-binding model these are:
\begin{equation}
\tau_c^{(l)}
=
\frac{\pi\hbar}{2l\gamma}
,
\label{eq:TBDefCritTaus}
\end{equation}
for $l \in \sbrrr{1,\hdots,d}$.
At these critical detection periods, two or more singularities of $f\sof{E}$ get mapped to {\em one} singularity of $\mu\sof{\lambda}$.
The number $L$ of singularities of $\mu\sof{\lambda}$ is then smaller than $L'$, which is the number of singularities in $f\sof{E}$.
In the one dimensional tight-binding model, this is the already encountered critical value from Eq.~\eqref{eq:TBDefCritTau}.
However, $\mu\sof{\lambda}$'s singularities exhibit exactly the same power laws as $f\sof{E}$.
\subsection{Singularities in the generating function}
We conclude this section with identifying the singularities in the generating function of the detection amplitudes evaluated on the unit circle.
This is done by taking the limit $r\to1^-$ in Eq.~\eqref{eq:GenFunc} and using Eq.~\eqref{eq:AsymResolv}.
The singularities of $\varphi\sof{e^{i\lambda}}$ are the points $\Lambda^*_l$ defined by Eq.~\eqref{eq:DefSingLambdas}.
Close to these points, we have:
\begin{equation}
\varphi\sof{e^{i\sbr{\Lambda^*_l\pm\epsilon}}}
\sim
1 -
\frac{1}{
\tilde{u}_l\sof{\pm\epsilon}
+
B^\pm_l\epsilon^{\frac{d_S}{2}-1}
}
.
\end{equation}
We find a competition of terms in the denominator.
Depending on the value of the spectral dimension, either the power term or the analytic remainder dominates.
For $d_S\le2$ the analytic remainder $\tilde{u}_l$ is zero, while for $d_S >2$ it constitutes the leading order.
Performing the small-$\epsilon$ expansion, we find that $\varphi\sof{e^{i\lambda}}$'s singularity is always in one of its derivatives.
There is a crossover at the critical dimension $d_S = 2$:
\begin{equation}
\varphi\sof{e^{i\sbr{\Lambda^*_l\pm\epsilon}}}
\sim
\left\{ \begin{aligned}
1 - \frac{
\epsilon^{2-\frac{d_S}{2}-1}
}{
B^\pm_l
}, & \qquad d_S < 2, \\
\tilde{\varphi}_l\sof{\pm\epsilon}
+
\frac{
B^\pm_l
}{
\brr{ u\sof{e^{i\Lambda^*_l}} }^2
}
\epsilon^{\frac{d_S}{2}-1}, & \qquad d_S > 2
\end{aligned} \right.
.
\label{eq:AsymIntegrand}
\end{equation}
When $d_S$ is an even integer, logarithmic corrections appear.
This case is discussed in Appendix \ref{app:Even}.
An analytical remainder of $\varphi\sof{e^{i\lambda}}$ is always present (although it is trivial for small dimensions).
Interestingly, we see that, in higher dimensions, additional constants appear.
They are derived from the return amplitudes:
\begin{equation}
u\sof{e^{i\Lambda^*_l}}
=
\Sum{n=0}{\infty}
e^{in\Lambda^*_l}
u_n
.
\label{eq:DefAddConstants}
\end{equation}
These series converge for $d_S>2$.
As we have mentioned, $u_n$ and $f\sof{E}$ as well as $\varphi_n$ and $\varphi\sof{e^{i\lambda}}$ are Fourier pairs.
In the next section, we apply an asymptotic formula for Fourier transforms to relate Eq.~\eqref{eq:AsymIntegrand} with the large $n$ behavior of $\varphi_n$.
After that, we apply the same formula to Eq.~\eqref{eq:SpecMeasAssump}.
\section{Using a Fourier-Tauber formula}
\label{sec:LargeN}
According to Refs.~\cite{Erdelyi1956-0,Gamkrelidze1989-0}, the singular points of $\varphi\sof{e^{i\lambda}}$ and the power-law behavior around these points determine the large $n$ asymptotics of its Fourier coefficients, which are the first detection amplitudes $\varphi_n$.
This is basically the Fourier analogue to the Tauberian theorems for the Laplace transform well-known in the theory of random walks \cite{Feller1971-0,Klafter2011-0}.
The notable difference is that in the classical setup, there usually is only one singular point at vanishing Laplace variable with the consequence that the first passage probability decays monotonically in the long-time limit.
This condition is violated in our case, as there are in general multiple singularities in $\varphi\sof{e^{i\lambda}}$.
This fact is clearly related to the presence of quantum interference.
Nevertheless, each of them can be isolated and the Tauberian theorem can be applied from both sides of the singular points.
This leads to a sum of different power-law terms, accompanied by complex exponential factors in $n$.
A derivation of the formula is given in Appendix~\ref{app:Tauber}, while rigorous proofs are found in the above cited references.
The main statement is the following:
Let $h\sof{x}$ be a function with $L$ singularities at $x_l^*$, each admitting an expansion like Eq.~\eqref{eq:SpecMeasAssump}:
\begin{equation}
h\sof{x^*_l\pm\epsilon}
\sim
\tilde{h}_l\sof{\pm\epsilon}
+
H^\pm_l \epsilon^{\nu-1}
,
\label{eq:HAssump}
\end{equation}
for some $l$-independent $\nu>0$.
Then, its Fourier transform behaves for large $n$ like:
\begin{equation}
\frac{1}{2\pi}\Int{0}{2\pi}{x} e^{-inx} h\sof{x}
\sim
\frac{\Gma{\nu}}{2\pi n^{\nu}} \Sum{l=0}{L-1} e^{-inx^*_l} \brr{
\frac{H^+_l}{i^{\nu}}
+
\frac{H^-_l}{\sbr{-i}^{\nu}}
}
.
\label{eq:HResult}
\end{equation}
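As a simple illustration of Eq.~\eqref{eq:HResult}, consider a single singularity at $x^*_0 = 0$ with $\tilde{h}_0 = 0$, $H^+_0 = 1$ and $H^-_0 = 0$, i.e. $h\sof{x} \sim x^{\nu-1}$ for small positive $x$ (take $0<\nu<1$ for simplicity).
Heuristically extending the upper integration limit to infinity, which only modifies subleading terms, one finds for large $n$:
\begin{equation*}
\frac{1}{2\pi}\Int{0}{2\pi}{x} e^{-inx} x^{\nu-1}
\sim
\frac{1}{2\pi}\Int{0}{\infty}{x} e^{-inx} x^{\nu-1}
=
\frac{\Gma{\nu}}{2\pi \sbr{in}^{\nu}}
,
\end{equation*}
which is precisely the $l=0$ term of Eq.~\eqref{eq:HResult}.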
In our prior publications \cite{Friedman2017-0,Friedman2017-1,Thiel2018-0} the large $n$ behavior was inferred from integrals along branch cuts in the complex plane.
The same procedure would be viable here.
In fact, each singular point $e^{i\Lambda^*_l}$ corresponds to a branch point of $\varphi\sof{z}$.
Instead of writing $\varphi_n$ as a Fourier transform of $\varphi\sof{e^{i\lambda}}$ one could put the branch cuts of $\varphi\sof{z}$ along rays to complex infinity and integrate around them.
However, the connection to the energy properties is clearer using the Fourier-Tauber theorem.
Since $\varphi\sof{e^{i\lambda}}$ admits the expansion Eq.~\eqref{eq:AsymIntegrand} around each of the singular points $\Lambda^*_l$, one finds that $\varphi_n$ behaves for large $n$ like:
\begin{equation}
\varphi_n
\sim
\Sum{l=0}{L-1}
\frac{e^{-i n \Lambda^*_l}}{2\pi}
\times
\left\{ \begin{aligned}
\tfrac{\sGma{2-\frac{d_S}{2}}}{n^{2-\frac{d_S}{2}}}
\brr{
\tfrac{i^{\frac{d_S}{2}}}{B^+_l}
+
\tfrac{\sbr{-i}^{\frac{d_S}{2}}}{B^-_l}
}
, & \; d_S < 2 \\
\tfrac{\sGma{\frac{d_S}{2}}}{\sbrr{u\sof{e^{i\Lambda^*_l}}}^2 n^{\frac{d_S}{2}}}
\brr{
\tfrac{B^+_l}{i^{\frac{d_S}{2}}}
+
\tfrac{B^-_l}{\sbr{-i}^{\frac{d_S}{2}}}
}
, & \; d_S > 2
\end{aligned} \right.
\label{eq:AsymFDA1}
\end{equation}
The squared absolute value of this expression is the desired first detection probability and reproduces Eq.~\eqref{eq:AsymFDP}.
There, we hid most of the constants in the complex numbers $F_{l,d_S}$ which are now made explicit.
Eq.~\eqref{eq:AsymFDP} [and Eq.~\eqref{eq:AsymFDA1}] is our main result and conveys the following qualitative properties:
The first detection probability decays like a power law that only depends on the spectral dimension.
The decay exponent exhibits a crossover at the critical dimension two and it is exactly double the exponent from the first passage probability of classical random walks \cite{Redner2007-0}.
The frequencies of the oscillations are determined by the positions $E^*_l$ of the singularities of the MSDOS via Eq.~\eqref{eq:DefSingLambdas}.
Eq.~\eqref{eq:AsymFDP} is valid for systems with a continuous energy spectrum.
In the classical theory, the spectral dimension can always be identified with the exponent $d_S^\text{DOS}$ in $\rho\sof{E}$.
It is hence a property of the Hamiltonian alone, independent of the initial or detection state.
Such an identification is possible for ordinary detection states whose MSDOS $f\sof{E}$ behaves sufficiently smoothly around the van Hove singularities and also does not vanish at these points.
This is basically a condition on the overlap of $\sKet{\ensuremath{\psi_\text{d}}}$ with certain energy eigenstates, and has been used in a modified form also in \cite{Li2017-0}.
A notable class of exceptions are those states that have no overlap with the eigenstates of the singular energies.
We refer to them as ``insufficiently populated'' states.
Consider the 1d tight-binding model with the detection state $\sKet{\ensuremath{\psi_\text{d}}} = \sbr{\sKet{a} - \sKet{-a}}/\sqrt{2}$.
Repeating the computations that led to Eq.~\eqref{eq:TBSMCounterEx1}, we find the MSDOS for this state to be:
\begin{equation}
f\sof{E}
=
\frac{1}{2\pi\gamma^2}\sqrt{E\sbr{4\gamma-E}}
.
\label{eq:TBSMCounterEx2}
\end{equation}
Although the density of states is an arcsin law, $f\sof{E}$ is a semicircle law.
The spectral dimension in this case is $d_S = 3$ whereas $d_S^\text{DOS} = 1$!
The reason is that $\sKet{\ensuremath{\psi_\text{d}}}$ was chosen to have no momentum components with $k=0$ and $k=\pi/a$, which correspond to the singular energies.
Hence, this $\sKet{\ensuremath{\psi_\text{d}}}$ is insufficiently populated around the singular energies, leading to a discrepancy between $\rho\of{E}$'s and $f\sof{E}$'s spectral dimension.
Another choice would be a state with wave vector representation supported only in the interval $[\pi/(2a),3\pi/(4a)]$.
This example would even have different singular energies, because the support of $f\sof{E}$ lies in $[2\gamma,4\gamma\sin^2\sof{3\pi/8}]$.
Yet another exception is a heavy-tailed detection state that decays like $\sBK{x}{\ensuremath{\psi_\text{d}}} \sim \sAbs{x}^{-1-\nu}$, for some $\nu \in [0,2]$.
In momentum space, such a detection state will behave like $\sAbs{k}^\nu$ around the origin.
This leads to a spectral dimension $d_S = 1+\nu$ as opposed to $d_S^\text{DOS} = 1$.
All these cases show that the spectral dimension and the singular points, that we use in this article, are strictly speaking properties of $f\sof{E}$ and not of $\rho\sof{E}$.
In many important cases, however, the positions of the singularities coincide in both, as do the power-law exponents.
(Although the prefactors $A^\pm_l$ found in $f\sof{E}$ may be different from those found in $\rho\sof{E}$.)
This is what we called ordinary.
The classical theory does not know of insufficiently populated states.
All eigenstates of the Hamiltonian/Laplacian are extended, due to the continuous nature of the spectrum, and have support in every lattice site.
Hence, each lattice site has overlap with every energy, in particular with the singular ones.
The only possible exception is a non-ergodic system that splits up into separate pieces.
Since a superposition of lattice states with destructive interference at some energy is out of the question, due to the positivity of probabilities, it is not possible to construct an insufficiently populated state in classical first passage theory.
The coefficients $B^\pm_l$ are often not easy to obtain.
Knowledge about the return amplitudes $u_n$ opens up an alternate way to compute $\varphi_n$.
We remember that $f\sof{E}$ and $u_n$ are also a Fourier pair and that we have access to $f\sof{E}$'s singularities in Eq.~\eqref{eq:SpecMeasAssump}.
Therefore, the Fourier-Tauber theorem is now applied to Eq.~\eqref{eq:RA} using Eq.~\eqref{eq:SpecMeasAssump}.
The result is:
\begin{equation}
u_n
\sim
\frac{1}{n^{\frac{d_S}{2}}}
\Sum{l=0}{L-1}
C_l
e^{-in\Lambda^*_l}
,
\label{eq:RAAsym}
\end{equation}
where
\begin{equation}
C_l
:=
\Gma{\frac{d_S}{2}}
\br{\frac{\hbar}{\tau}}^{\tfrac{d_S}{2}}
\Sum{l'\sim l}{}
\brr{
\frac{
A^+_{l'}
}{
i^{\frac{d_S}{2}}
}
+
\frac{
A^-_{l'}
}{
\sbr{-i}^{\frac{d_S}{2}}
}
}
.
\label{eq:DefC}
\end{equation}
In Eq.~\eqref{eq:RAAsym} we have already identified the critical angles $\Lambda^*_l = \tau E^*_l/\hbar$ from Eq.~\eqref{eq:DefSingLambdas}.
The sum in Eq.~\eqref{eq:DefC} runs over all equivalent energies, i.e. $e^{i\tau E^*_{l'}/\hbar} = e^{i\Lambda^*_l}$.
Remarkably, the return amplitudes oscillate with the same frequencies as do the detection amplitudes, which is also true in the above discussed exceptions, because $u_n$ and $\varphi_n$ are related to $f\sof{E}$ (and not to $\rho\sof{E}$).
The relation between the MSDOS and the time decay of the return amplitudes is known in the literature \cite{Marchetti2012-0}.
In Refs.~\cite{Muelken2006-0,Muelken2011-0} a similar argument was used to relate the spectral dimension to the decay of the return amplitudes.
However, in these references the DOS was used instead of the MSDOS.
Also the role of the van Hove singularities was under-appreciated.
Therefore the authors were only able to predict the decay of the envelope of $u_n$ but not the oscillations.
Often the matrix elements of the evolution operator are much more accessible than the MSDOS.
Consequently the coefficients $C_l$ may be more easily available than the $B^\pm_l$'s.
Therefore, we provide this alternative approach to derive $\varphi_n$.
For example: In Eq.~\eqref{eq:RATB}, we found the return amplitudes for the tight-binding model in arbitrary dimension.
Application of the asymptotic formula $\sBesselJ{0}{x} \sim \cos\sof{x - \pi/4} \sqrt{2/(\pi x)}$, yields:
\begin{align}
u_n
\sim & \nonumber
e^{-i\frac{2d\gamma\tau}{\hbar}n}
\br{\frac{\hbar}{\pi \gamma\tau n}}^{\frac{d}{2}}
\cos^d\of{\frac{2\gamma\tau}{\hbar}n - \frac{\pi}{4}}
\\ = &
\br{\frac{\hbar}{4\pi \gamma\tau n}}^{\frac{d}{2}}
\Sum{l=0}{d}
\binom{d}{l}
e^{-i\frac{4l\gamma\tau}{\hbar}n + i \sbr{2l-d}\frac{\pi}{4}}
.
\end{align}
From here, one identifies:
\begin{equation}
C_l
=
\br{\frac{\hbar}{4\pi\gamma\tau}}^{\frac{d}{2}}
\binom{d}{l}
e^{i\sbr{2l-d}\frac{\pi}{4}}
, \quad
\Lambda^*_l
=
\frac{4\gamma\tau}{\hbar} l
,
\label{eq:DefCTB}
\end{equation}
with $l\in\sbrrr{0,1,\hdots,d}$ for the tight-binding model.
The above equation is correct, provided that $\tau$ does not assume a critical value.
In that case, two singular points merge and the corresponding $C_l$'s have to be added.
The points $4\gamma l$ are exactly the van Hove singularities of the density of states, see Fig.~\ref{fig:SM}(a,b).
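As a numerical cross-check, the asymptotic form of Eq.~\eqref{eq:RAAsym} with the coefficients of Eq.~\eqref{eq:DefCTB} can be compared to the exact Bessel product of Eq.~\eqref{eq:RATB}; a short Python sketch (illustrative parameters, $\hbar = 1$):
\begin{verbatim}
import numpy as np
from scipy.special import j0, binom

gamma, tau, hbar, d = 1.0, 0.25, 1.0, 3

def u_exact(n):
    # Eq. (RATB)
    z = 2.0 * gamma * tau / hbar * n
    return (np.exp(-1j * z) * j0(z)) ** d

def u_asym(n):
    # Eq. (RAAsym) with C_l and Lambda_l from Eq. (DefCTB)
    l = np.arange(d + 1)
    C = ((hbar / (4.0 * np.pi * gamma * tau)) ** (d / 2)
         * binom(d, l) * np.exp(1j * (2 * l - d) * np.pi / 4))
    Lam = 4.0 * gamma * tau / hbar * l
    return np.sum(C * np.exp(-1j * n * Lam)) / n ** (d / 2)

print(u_exact(200), u_asym(200))   # agree to leading order in 1/n
\end{verbatim}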
Multiplication of Eq.~\eqref{eq:RAAsym} with $r^n e^{i\lambda n}$, summing over $n$, and taking the limit $r\to1^-$, gives us the resolvent $u\sof{e^{i\lambda}}$.
Close to the critical points $\Lambda^*_l$ it behaves as:
\begin{align}
u\of{e^{i\sbr{\Lambda^*_l\pm\epsilon}}}
\sim & \nonumber
\tilde{u}_l\sof{\pm\epsilon}
+
\Gma{1-\tfrac{d_S}{2}}
C_l
\br{1-e^{\pm i\epsilon}}^{\frac{d_S}{2}-1}
\\ \sim &
\tilde{u}_l\sof{\pm\epsilon}
+
\Gma{1-\tfrac{d_S}{2}}
C_l
\sbr{\mp i\epsilon}^{\frac{d_S}{2}-1}
.
\end{align}
The last line was obtained by taking $\sbr{1-e^{ix}}^\nu \sim \sbr{-ix}^\nu$.
Comparing this equation with \eqref{eq:AsymResolv} allows us to relate the different coefficients:
\begin{equation}
B^\pm_l
=
\Gma{1-\tfrac{d_S}{2}}
e^{\mp i \tfrac{\pi\sbr{d_S-2}}{4}}
C_l
.
\label{eq:Coeffs1}
\end{equation}
$B^\pm_l$ and $A^\pm_l$ can be related via Eq.~\eqref{eq:DefC}.
Plugging the result into Eq.~\eqref{eq:AsymFDA1}, we can express the detection amplitudes in terms of the decay behavior of the return amplitudes (i.e. in terms of $C_l$):
\begin{equation}
\varphi_n
\sim
\left\{ \begin{aligned}
\frac{\br{1-\frac{d_S}{2}}\sin\of{\frac{\pi d_S}{2}}}{\pi n^{2-\frac{d_S}{2}}}
\Sum{l=0}{L-1}
\frac{e^{-i n \Lambda^*_l}}{C_l}
, & \quad d_S < 2 \\
\frac{1}{n^{\frac{d_S}{2}}}
\Sum{l=0}{L-1}
\frac{C_l}{\sbrr{u\sof{e^{i\Lambda^*_l}}}^2}
e^{-i n \Lambda^*_l}
, & \quad d_S > 2
\end{aligned} \right.
.
\label{eq:AsymFDA2}
\end{equation}
We used $\Gma{x}\Gma{1-x} = \pi/\sin\sof{x\pi}$.
Alternatively, one could express the detection amplitudes in terms of the MSDOS $f\sof{E}$ (that is in terms of $A^\pm_l$) using Eq.~\eqref{eq:DefC}.
For $d_S < 2$ one obtains:
\begin{equation}
\varphi_n
\sim
-\frac{1}{\pi^2n^2} \br{\frac{n\tau}{\hbar}}^{\frac{d_S}{2}}
\Sum{l=0}{L-1}
\frac{
\Gma{2-\tfrac{d_S}{2}}
\sin\of{\tfrac{\pi d_S}{2}}
e^{-i n \Lambda^*_l}
}{
\sSum{l'\sim l}{}
A^+_{l'} \sbr{-i}^{\frac{d_S}{2}}
+ A^-_{l'} i^{\frac{d_S}{2}}
}
.
\label{eq:AsymFDA3a}
\end{equation}
For $d_S > 2$ one obtains:
\begin{equation}
\varphi_n
\sim
\frac{\Gma{\frac{d_S}{2}}}{\br{\frac{n\tau}{\hbar}}^{\frac{d_S}{2}}}
\Sum{l'=0}{L'-1} \frac{e^{-i n\frac{\tau E^*_{l'}}{\hbar}}}{u^2\sof{e^{i\frac{\tau E^*_{l'}}{\hbar}}}}
\brr{
A^+_{l'}\sbr{-i}^{\frac{d_S}{2}}
+
A^-_{l'} i^{\frac{d_S}{2}}
}
.
\label{eq:AsymFDA3b}
\end{equation}
Eqs.~(\ref{eq:AsymFDA2}-\ref{eq:AsymFDA3b}) complement Eq.~\eqref{eq:AsymFDA1}.
They are useful when the resolvent coefficients $B^\pm_l$ are not directly available, but either the MSDOS $f\sof{E}$ or an asymptotic expansion of the return amplitudes is.
Using Eq.~\eqref{eq:DefCTB} in Eq.~\eqref{eq:AsymFDA2} one obtains the detection amplitudes for the tight-binding model for dimensions larger than $2$:
\begin{equation}
\varphi_n
\sim
\br{\frac{\hbar}{4\pi \gamma\tau n}}^{\frac{d}{2}}
\Sum{l=0}{d}
\binom{d}{l}
\frac{
e^{-i\frac{4l\gamma\tau}{\hbar} n + i \sbr{2l-d}\tfrac{\pi}{4}}
}{
\sbrr{u\sof{e^{i\frac{4l\gamma\tau}{\hbar}}}}^2
}
.
\label{eq:TBFDA}
\end{equation}
The modulus squared of this expression is the first detection probability:
\begin{equation}
F_n
\sim
\br{\frac{\hbar}{4\pi \gamma\tau n}}^{d}
\Abs{
\Sum{l=0}{d}
\binom{d}{l}
\frac{
e^{i\sbr{d-2l}\br{\frac{2\gamma\tau}{\hbar} n -\tfrac{\pi}{4}} - 2i\arg{u\sof{e^{i\frac{4l\gamma\tau}{\hbar}}}}}
}{
\sAbs{u\sof{e^{i\frac{4l\gamma\tau}{\hbar}}}}^2
}
}^2
.
\label{eq:TBFDPHighD}
\end{equation}
From the expression of $u_n$, Eq.~\eqref{eq:RATB}, and the definition of $u\sof{z}$, one can infer that $u\sof{e^{i4l\gamma\tau/\hbar}}$ is the complex conjugate of $u\sof{e^{i4\sbr{d-l}\gamma\tau/\hbar}}$.
Hence, for each term in the sum of Eq.~\eqref{eq:TBFDPHighD}, its complex conjugate appears as well.
The complex exponentials can actually be replaced by cosines and half of the terms in the sum can be dropped.
The constants $u\sof{e^{i4l\gamma\tau/\hbar}}$ are defined by Eq.~\eqref{eq:DefAddConstants}; no closed form is known to us, so they have to be computed numerically.
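In practice, we evaluate them by direct partial summation of the series, which converges for $d_S > 2$, albeit slowly; a minimal sketch:
\begin{verbatim}
import numpy as np
from scipy.special import j0

def u_on_circle(Lam, d=3, gamma=1.0, tau=0.25, hbar=1.0, N=10**6):
    # partial sum of Eq. (DefAddConstants); the terms decay like n**(-d/2)
    n = np.arange(N)
    z = 2.0 * gamma * tau / hbar * n
    u = (np.exp(-1j * z) * j0(z)) ** d
    return np.sum(np.exp(1j * Lam * n) * u)
\end{verbatim}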
Eq.~\eqref{eq:TBFDPHighD} holds for all dimensions larger than two.
The one dimensional tight-binding model has already been discussed extensively in Refs.~\cite{Friedman2017-0,Friedman2017-1,Thiel2018-0}.
There, the following formula was reported:
\begin{equation}
F_n
\sim
\frac{4\gamma\tau}{\hbar \pi n^3}
\cos^2\of{\frac{2\gamma\tau}{\hbar} n + \frac{\pi}{4}}
,
\label{eq:TBFDP1D}
\end{equation}
which is in perfect accordance with Eq.~\eqref{eq:AsymFDA2}.
Curiously, in both the one- and the three-dimensional case, one finds a power law with exponent $-3$.
In the two dimensional case logarithmic corrections to the power law appear.
This case is discussed in Appendix~\ref{app:Even}.
We simply state here the result for the 2d tight-binding model:
\begin{equation}
F_n
\sim
\br{ \frac{ 4\pi\gamma\tau}{\hbar n \ln^2 n}}^2
\Abs{
\frac{1}{2}
-
2\sin\of{\frac{4\gamma\tau}{\hbar} n}
}^2
.
\label{eq:TBFDP2D}
\end{equation}
In Eq.~\eqref{eq:TBFDP1D} and Eq.~\eqref{eq:TBFDP2D} the Zeno effect is visible.
When the limit $\tau\to0$ is taken, our asymptotic result vanishes.
From Eq.~\eqref{eq:QuantumRenewal} and Eq.~\eqref{eq:RATB}, one finds that actually $F_n \to \delta_{n,1}$.
The physical meaning is that the particle is found at the detection site, immediately after the experiment commences.
Hence the long-time asymptotics of $F_n$ vanish.
The expression in Eq.~\eqref{eq:TBFDPHighD}, however, diverges for fixed $n$ in the limit $\tau\to0$!
(This can be seen from numerical evaluation of $u\sof{e^{i4l\gamma\tau/\hbar}}$.)
In numerical simulations, an intermediate regime appears in $F_n$ that is not described by our asymptotic formula.
As $\tau$ decreases, this intermediate regime grows in size, while the $F_n$ values within it go to zero.
At the same time, $F_1$ goes to unity, so that in total $F_n\to \delta_{n,1}$, and the Zeno effect is restored.
It cannot be observed in our asymptotic formula, though.
Numerical simulations were performed by using Eq.~\eqref{eq:RATB} in Eq.~\eqref{eq:QuantumRenewal} and solving the resulting system of linear equations for $\varphi_n$.
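For concreteness, a minimal Python implementation of this procedure is sketched below; here we assume that Eq.~\eqref{eq:QuantumRenewal} can be brought into the convolution form $\varphi_n = u_n - \sum_{m=1}^{n-1}\varphi_m u_{n-m}$, and all parameter values are illustrative:
\begin{verbatim}
import numpy as np
from scipy.special import j0

def first_detection_probs(N, d=3, gamma=1.0, tau=0.25, hbar=1.0):
    # return amplitudes of Eq. (RATB)
    n = np.arange(N + 1)
    z = 2.0 * gamma * tau / hbar * n
    u = (np.exp(-1j * z) * j0(z)) ** d
    # solve the renewal recursion for the detection amplitudes
    phi = np.zeros(N + 1, dtype=complex)    # phi[0] is unused
    for k in range(1, N + 1):
        phi[k] = u[k] - np.dot(phi[1:k], u[k-1:0:-1])
    return np.abs(phi[1:]) ** 2             # F_n = |phi_n|^2
\end{verbatim}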
In Fig.~\ref{fig:TB_FDP}, we present numerical simulations for the tight-binding model in dimensions two and three for $\tau = 0.25\hbar/\gamma$.
In the two-dimensional case, we fitted the envelope of the asymptotic result \eqref{eq:TBFDP2D}.
Due to the slow logarithmic convergence it is necessary to replace $\ln^2 n$ by $\sbr{\ln n + x}^2$, where $x$ is determined from a fit; both expressions are equivalent in the asymptotic limit.
The figure depicts the fit.
One clearly sees the expected dimension dependent power-law decay of Eq.~\eqref{eq:AsymFDP} in the envelope of $F_n$ as well as the oscillations, and an overall good agreement with our prediction.
\begin{figure*}
\includegraphics[width=0.99\textwidth]{TBFDP.pdf}
\caption{
Probability of first detected return for the tight-binding model in a semi-logarithmic plot.
Blue squares are numerical results, orange circles are the predictions of Eq.~\eqref{eq:TBFDP2D} and Eq.~\eqref{eq:TBFDPHighD}.
(a): Two dimensional case with $\tau = 0.25\hbar/\gamma$.
The $n^{-2}$ power-law and the oscillations are clearly visible.
The logarithmic factor was adjusted to $(\pi + \ln n)^{-4}$, see Appendix~\ref{app:Even}.
Inset: Two dimensional case for the critical detection period $\tau = \tau_c = (\pi/2)\hbar/\gamma$.
No oscillations are present here, because $\mu\sof{\lambda}$ has only one singular point.
We find a $n^{-2}$ power law with logarithmic factor $(x + \ln n)^{-4}$, where $x \approx 10.531$, see Appendix~\ref{app:Even}.
(b): Three dimensional case for $\tau = 0.25 \hbar/\gamma$.
Inset: $d=3$ with $\tau = \tau_c = \sbr{\pi/2}\hbar/\gamma$.
Oscillations are only present for non-critical $\tau$.
No fitting was applied to the right-hand side plots.
\label{fig:TB_FDP}
}
\end{figure*}
As discussed, the frequency of the oscillations is given by Eq.~\eqref{eq:DefSingLambdas} and can be controlled via the detection period $\tau$.
This is demonstrated in Fig.~\ref{fig:TB_FFT}, where we plotted the power spectrum of $n^3 F_n$ for $d=3$ and $\tau =0.15\hbar/\gamma$.
(The power spectrum is the modulus squared of the discrete Fourier transform of $n^3 F_n$.)
This makes it possible to visualize the characteristic frequencies.
The frequencies $\Lambda^*_l = 0.6 l$, $l\in\sbrrr{0,1,2,3}$ are visible as peaks in the power spectrum of $n^3 F_n$.
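The power spectrum itself requires only a few lines (reusing the sketch above):
\begin{verbatim}
import numpy as np

F = first_detection_probs(2**14, d=3, tau=0.15)
n = np.arange(1, F.size + 1)
power = np.abs(np.fft.rfft(n**3 * F))**2
omega = 2.0 * np.pi * np.fft.rfftfreq(F.size)  # angular frequencies in [0, pi]
# peaks are expected at omega = 0.6 * l, with l = 0, 1, 2, 3
\end{verbatim}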
\begin{figure}
\includegraphics[width=0.99\columnwidth]{TBFFT.pdf}
\caption{
Power spectrum of $n^3 F_n$ for $d=3$ and $\tau=0.15\hbar/\gamma$.
Stripping the power law off of $F_n$ by multiplication with $n^3$ exposes the oscillatory terms.
The frequencies of the oscillations correspond to peaks in the Fourier transform.
Compare the position of the peaks with the prediction of equation Eq.~\eqref{eq:DefSingLambdas}, $\Lambda^*_l = 0.6 l$, $l=0, 1, 2, 3$.
\label{fig:TB_FFT}
}
\end{figure}
As another demonstration, we present the case of a critical detection period, $\tau_c = (\pi/2) \hbar/\gamma$ from Eq.~\eqref{eq:TBDefCritTau}.
For this value of $\tau$, all critical energies become equivalent and get mapped to the same critical point $\Lambda^*$.
As a consequence the oscillations in $F_n$ disappear and only the power law remains.
This is shown in the insets of Fig.~\ref{fig:TB_FDP}.
Finally, we want to illustrate our discussion of insufficiently populated states and showcase the dependence of the spectral dimension on the detection state.
Therefore, we consider the two dimensional tight-binding model with the detection state:
\begin{equation}
\sKet{\ensuremath{\psi_\text{d}}}
=
\frac{1}{2} \brr{
\sKet{\sbr{a,a}}
+
\sKet{\sbr{-a,-a}}
-
\sKet{\sbr{a,-a}}
-
\sKet{\sbr{-a,a}}
}
.
\label{eq:PsiDetSpecial}
\end{equation}
By construction it has no overlap with the singular energies $E^*_l = 0, 4\gamma, 8\gamma$.
The corresponding MSDOS is a convolution of Eq.~\eqref{eq:TBSMCounterEx2} with itself.
This is a convolution of two MSDOS's with $d_S = 3$ each, resulting in $d_S = 6$.
$f\sof{E}$ is presented in Fig.~\ref{fig:TBSpecial}(a) and does not resemble Fig.~\ref{fig:SM}(a) at all.
For this choice, the return amplitudes are given by:
\begin{equation}
u_n
=
e^{-i\frac{4\gamma\tau}{\hbar}n}
\brr{
\BesselJ{0}{\frac{2\gamma\tau}{\hbar}n}
+
\BesselJ{2}{\frac{2\gamma\tau}{\hbar}n}
}^2
\end{equation}
and decay like $n^{-3}$.
The spectral dimension in $f\sof{E}$ is $d_S = 6$ instead of $d_S^\text{DOS}=2$ found in the DOS!
The simulations depicted in Fig.~\ref{fig:TBSpecial}(b) reflect this fact nicely.
\begin{figure*}
\includegraphics[width=0.99\textwidth]{TBSpecial.pdf}
\caption{
Two dimensional tight-binding model with the special detection state of Eq.~\eqref{eq:PsiDetSpecial}.
(a) The MSDOS $f\sof{E}$.
It is wildly different from the DOS $\rho\sof{E}$ depicted in the inset or in detail in Fig.~\ref{fig:SM}(a).
The spectral dimension found in $f\sof{E}$ is six rather than two, which is obtained from the DOS.
The singularities of $f\sof{E}$ reside in its second derivative, but they are located at the same positions as the singularities of $\rho\sof{E}$.
(b) The first detection probabilities $F_n$ for this case.
In contrast to an ordinary choice of the detection state, we do not find a $n^{-2}\ln^{-4} n$ decay of $F_n$ but rather a $n^{-6}$ decay in accordance with Eq.~\eqref{eq:AsymFDP} for large spectral dimensions.
\label{fig:TBSpecial}
}
\end{figure*}
In the next two sections we discuss two off-lattice models.
The first is the free particle in continuous space and the second is the L\'evy particle.
\section{The free particle in continuous space}
\label{sec:FreePart}
\begin{figure*}
\includegraphics[width=0.99\textwidth]{FPFDP.pdf}
\caption{
First detection probabilities for the free particle and the L\'evy particle.
Blue squares: $d=1$, orange circles: $d=3$, black lines: theoretical predictions of Eq.~\eqref{eq:NRFDP} and Eq.~\eqref{eq:LevyFDP}.
(a): Free particle. $\tau$ is measured in units of $M\sigma^2/\hbar$.
As $\tau$ decreases, $F_1$ moves closer to unity and a plateau appears for small $n$.
The plateau extends to the right as $\tau$ becomes smaller and the plateau value converges to zero.
This is the Zeno effect $F_n\to\delta_{n,1}$.
Our asymptotic solution only describes the non-plateau regime of $F_n$.
(b) L\'evy particle with $\tau = 1 \hbar/E_0$, $\alpha=0.8$ and different $d$.
Prediction and simulations agree nicely.
By tuning $\alpha$ arbitrary power law exponents can be observed.
\label{fig:FreePart}
}
\end{figure*}
We now consider a free non-relativistic particle with mass $M$ in continuous $d$-dimensional space.
The Hamiltonian is given by the kinetic energy
\begin{equation}
\ensuremath{\hat{H}} := \frac{\hbar^2}{2M} \hat{\V{k}}^2
,
\label{eq:FPHamiltonian}
\end{equation}
where we write its momentum in terms of the wave vector $\hbar \V{k}$.
The first difficulty one faces here is the definition of the detection state.
In contrast to the lattice system, position eigenstates have zero width, and projection onto such states can be tricky.
We assume instead that our detector has a finite accuracy $\sigma$ and projects the wave function to a Heisenberg state with minimum uncertainty.
In momentum representation this state is defined as:
\begin{equation}
\psi_{\V{0}}\of{\V{k}}
:=
\br{\frac{2\sigma^2}{\pi}}^{\frac{d}{4}}
\exp\off{-\sigma^2\V{k}^2}
,
\label{eq:NRDefPsi}
\end{equation}
where the wave vector/momentum $\V{k} = \V{p}/\hbar$ is a real $d$-dimensional vector.
Such a wave function has $\sEA{\V{r}} = \V{0}$, $\sEA{\V{p}}=\V{0}$, furthermore: $\sEA{\V{r}^2} = d \sigma^2$, and $\sEA{\V{p}^2} = d \hbar^2/(4 \sigma^2)$.
In particular it has minimum uncertainty between the momentum and position coordinates.
We choose this state as initial and detection state: $\sKet{\ensuremath{\psi_\text{d}}} = \sKet{\ensuremath{\psi_\text{in}}} = \sKet{\psi_{\V{0}}}$.
For the free particle the dispersion relation is the usual kinetic energy:
\begin{equation}
E\of{\V{k}}
=
\frac{\sbr{\hbar\V{k}}^2}{2M}
.
\label{eq:NRDispersion}
\end{equation}
As an abbreviation, we define the energy scale of the detection state, $E_0 := (2/d)\EA{\V{p}^2/(2M)} = \hbar^2/(4M\sigma^2)$, i.e. twice its kinetic energy per degree of freedom.
Using the momentum representation of the evolution operator, we can obtain the return amplitudes from an integral over all wave vectors:
\begin{align}
\sBAK{\psi_{\V{0}}}{\ensuremath{\hat{U}}\of{n\tau}}{\psi_{\V{0}}}
= &
\Int{\ensuremath{\mathbb{R}} ^d}{}{\V{k}}
\sAbs{\psi_{\V{0}}\sof{\V{k}}}^2 e^{-i \frac{n\tau E\sof{\V{k}}}{\hbar}}
=
\frac{1}{\br{1 + i \frac{E_0 \tau}{\hbar}n}^{\frac{d}{2}}}
\label{eq:NRTEO}
.
\end{align}
The expression has two regimes for $n$:
When $1 \gg n E_0\tau/\hbar = (n \hbar \tau) /(4M\sigma^2) $, the return amplitude is approximately unity.
For fixed $n$, this is the case when either $\tau$ is very small or $\sigma$ is very large.
In the opposite case, the return amplitude decays like a power law, which reveals the spectral dimension as equal to the Euclidean one, $d_S = d$.
The crossover between the two regimes appears at $\tilde{n} \approx \hbar/(E_0\tau) = 4M\sigma^2/(\hbar\tau)$ and marks the time at which the momentum-induced dispersion of the wave packet becomes comparable to the initial width of the wave packet.
In the limit $\sigma\to0$, the return amplitudes vanish.
This signals to us that an infinitely precise position measurement is not physically meaningful.
Since the dispersion relation \eqref{eq:NRDispersion} is a parabola, there is only one critical point on the energy surface, which is located at zero momentum.
This shows there will be no oscillations in the first detection probabilities in accord with the simulations depicted in Fig.~\ref{fig:FreePart}(a).
The resolvent at the critical value on the unit circle can be expressed via the Hurwitz zeta function $\zeta\sof{s;a} := \sSum{n=0}{\infty} \sbr{n+a}^{-s}$:
\begin{equation}
u\sof{z=1}
=
\br{\frac{-i\hbar}{E_0\tau}}^{\frac{d}{2}}
\zeta\sof{\tfrac{d}{2};-i\tfrac{\hbar}{E_0\tau}}
.
\label{eq:NRResolv}
\end{equation}
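Numerically, the Hurwitz zeta function at complex second argument can be evaluated by truncating the defining series and adding the leading integral tail correction; a sketch (valid for $\mathrm{Re}\, s > 1$):
\begin{verbatim}
import numpy as np

def hurwitz_zeta(s, a, N=10**5):
    # sum_{n=0}^{N-1} (n+a)**(-s) plus the tail integral over [N, infinity)
    n = np.arange(N)
    head = np.sum((n + a) ** (-s))
    tail = (N + a) ** (1.0 - s) / (s - 1.0)
    return head + tail

# this constant enters Eq. (NRFDP) below for d > 2, e.g. for d = 3:
d, tau, E0, hbar = 3, 1.0, 1.0, 1.0
prefac = abs(hurwitz_zeta(d / 2, -1j * hbar / (E0 * tau))) ** (-4)
\end{verbatim}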
From the momentum representation and the dispersion relation it is also easy to find the MSDOS:
\begin{equation}
f\sof{E}
=
\Int{\ensuremath{\mathbb{R}} ^d}{}{\V{k}} \sAbs{\psi_{\V{0}}\sof{\V{k}}}^2
\delta\sof{E - \tfrac{\sbr{\hbar \V{k}}^2}{2M}}
=
\frac{e^{-\frac{E}{E_0}}}{\Gma{\tfrac{d}{2}} E_0}
\br{\tfrac{E}{E_0}}^{\frac{d}{2}-1}
.
\label{eq:NRSpecMeas}
\end{equation}
Of course, $f\sof{E} = 0$ for $E<0$.
Therefore, one can easily identify the constants $A^\pm$ of the MSDOS's singularity at $E^*=0$:
\begin{equation}
A^+ := \frac{1}{\Gma{\tfrac{d}{2}} E_0^{\frac{d}{2}} }, \quad A^- = 0, \quad d_S = d
\label{eq:NRA}
\end{equation}
Using Eq.~\eqref{eq:NRResolv} and \eqref{eq:NRA} in Eqs.~\eqref{eq:AsymFDA3a} and \eqref{eq:AsymFDA3b}, and squaring the result, one obtains:
\begin{equation}
F_n
\sim
\left\{ \begin{aligned}
\frac{E_0\tau}{4 \pi^2 \hbar n^3}
, & \quad d = 1 \\
\br{\frac{E_0\tau}{\hbar n \ln^2 n}}^2
, & \quad d = 2 \\
\Abs{\zeta\sof{\tfrac{d}{2};-i\tfrac{\hbar}{E_0\tau}}}^{-4}
\br{\frac{E_0 \tau}{\hbar n}}^{d}
, & \quad d > 2
\end{aligned} \right.
.
\label{eq:NRFDP}
\end{equation}
(For the 2d case see Appendix~\ref{app:Even}.)
Fig.~\ref{fig:FreePart} shows excellent agreement between simulations and Eq.~\eqref{eq:NRFDP}.
To our surprise, the Zeno effect is not visible in our equation for $d>3$!
Using the formula $\sAbs{\zeta\sof{s;-i/x}} \sim x^{s-1}$, as $x\to0$, we find that $F_n \propto \tau^{4-d}$.
In our simulations, however, we still find $F_n\to\delta_{n,1}$ as $\tau\to0$.
As $\tau$ decreases a plateau forms in $F_n$ for small $n$ that increases in size and decreases in height.
At the same time, $F_1$ moves closer to unity.
The asymptotic formula is valid only in the non-plateau region.
This is similar to the tight-binding case.
\section{Free L\'evy-particle}
\label{sec:Levy}
Instead of the regular dispersion relation \eqref{eq:NRDispersion}, one can also impose an anomalous energy-momentum relation:
\begin{equation}
E\sof{\V{k}}
=
C \sAbs{\V{k}}^\alpha
,
\label{eq:LevyDisp}
\end{equation}
with $0<\alpha<2$, and some constant $C$ with suitable units.
This can be viewed as a continuous interpolation between a non-relativistic and a relativistic dispersion relation, the latter being attained by putting $\alpha=1$.
Such a dispersion relation can be obtained when the Laplacian in the Hamiltonian is replaced by a fractional Laplacian (a Riesz-Feller derivative) of order $\alpha/2$, which is obviously very different from the canonical approach that leads to the Dirac equation.
Detection and preparation state are again taken to be the Gaussian, Eq.~\eqref{eq:NRDefPsi}.
We define the energy constant $E_0 := C (2\sigma^2)^{-\alpha/2}$ which is related to the energy of the state $\sKet{\psi_{\V{0}}}$.
Using Eq.~\eqref{eq:NRDefPsi} and the dispersion relation, we can write the return amplitude in terms of an integral.
We use spherical coordinates and change variables to $y = 2\sigma^2 k^2$, to obtain:
\begin{equation}
\sBAK{\psi_{\V{0}}}{\ensuremath{\hat{U}}\sof{t}}{\psi_{\V{0}}}
=
\frac{1}{\Gma{\frac{d}{2}}}
\Int{0}{\infty}{y}
y^{\frac{d}{2}-1}
e^{-y - i \frac{E_0 t}{\hbar} y^{\frac{\alpha}{2}}}
.
\label{eq:LevyTEO}
\end{equation}
For large $t$, expanding the $e^{-y}$ term leads to an asymptotic series in inverse powers of $t$.
The leading order is $t^{-d/\alpha}$.
Therefore the spectral dimension is $d_S = 2d/\alpha$ and can be tuned to any real number larger than $d$ by adjusting $\alpha$.
The same result is obtained when computing $f\sof{E}$ from the integral over momentum space:
\begin{equation}
f\sof{E}
=
\frac{2}{\alpha}\frac{e^{-\br{\frac{E}{E_0}}^{\frac{2}{\alpha}}}}{\Gma{\tfrac{d}{2}}E_0}
\br{\frac{E}{E_0}}^{\frac{d}{\alpha}-1}
.
\label{eq:LevySpecMeas}
\end{equation}
From here we identify the coefficients $A^\pm$:
\begin{equation}
A^+
:=
\frac{2}{\alpha}\frac{1}{\Gma{\tfrac{d}{2}}}
E_0^{-\frac{d}{\alpha}}
, \quad A^- = 0
, \quad d_S = \frac{2}{\alpha} d,
\label{eq:LevyA}
\end{equation}
and write down the first detection probabilities:
\begin{equation}
F_n
\sim
\left\{ \begin{aligned}
\frac{\br{1-\frac{d}{\alpha}}^2}{\pi^2 n^{4 - \frac{2d}{\alpha}}}
\br{\frac{\Gma{\frac{d}{2}+1}}{\Gma{\frac{d}{\alpha}+1}}}^2
\br{ \frac{\tau E_0}{\hbar}}^{\frac{2d}{\alpha}}
, & \quad d < \alpha \\
\br{\frac{\tau E_0\Gma{1+\tfrac{d}{2}}}{\hbar n \ln^2 n} }^2
, & \quad d = \alpha \\
\br{\frac{\Gma{\frac{d}{\alpha}+1}}{\Gma{\frac{d}{2}+1}}}^2
\frac{1}{\sAbs{u\sof{z=1}}^4}
\br{ \frac{\hbar}{n \tau E_0} }^{\frac{2d}{\alpha}}
, & \quad d > \alpha
\end{aligned} \right.
.
\label{eq:LevyFDP}
\end{equation}
The remaining constant can be computed via a numerical integral:
\begin{equation}
u\sof{z=1}
=
\frac{1}{\Gma{\frac{d}{2}}}
\Int{0}{\infty}{y}
\frac{y^{\frac{d}{2}-1} e^{-y}}{
1 - e^{-i\frac{E_0\tau}{\hbar} y^{\frac{\alpha}{2}}}
}
\end{equation}
Simulations of such a process are possible by numerically evaluating the integral of Eq.~\eqref{eq:LevyTEO}.
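A sketch of such an evaluation in Python (the oscillatory tail makes high accuracy at very large $n$ costly, so this is illustrative only):
\begin{verbatim}
import numpy as np
from math import gamma as Gamma
from scipy.integrate import quad

def u_levy(n, d=1, alpha=0.8, tau=1.0, E0=1.0, hbar=1.0):
    # return amplitude of Eq. (LevyTEO) at t = n*tau
    w = n * tau * E0 / hbar
    re = quad(lambda y: y**(d/2 - 1) * np.exp(-y) * np.cos(w * y**(alpha/2)),
              0.0, np.inf, limit=500)[0]
    im = quad(lambda y: -y**(d/2 - 1) * np.exp(-y) * np.sin(w * y**(alpha/2)),
              0.0, np.inf, limit=500)[0]
    return (re + 1j * im) / Gamma(d / 2)
\end{verbatim}
The detection amplitudes then follow from the same renewal recursion as in the tight-binding sketch above.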
Results are depicted in Fig.~\ref{fig:FreePart}(b).
This simple but important example shows that Eq.~\eqref{eq:AsymFDP} also holds for {\em fractional} values of the spectral dimension.
\section{Discussion}
\label{sec:Disc}
This article exposed the relation between the quantum first detection probability and the MSDOS $f\sof{E}$.
Qualitative features like the power law decay and the frequencies of the oscillations have been obtained from $f\sof{E}$, in particular from its singular points and its behavior in their vicinity.
In the ordinary case these properties can also be inferred from the DOS $\rho\sof{E}$.
We stress that our main results from Eq.~\eqref{eq:AsymFDP} also hold for the problem of the first detected {\em arrival}, i.e. when $\sKet{\ensuremath{\psi_\text{in}}} \ne \sKet{\ensuremath{\psi_\text{d}}}$.
The additional steps are carried out in Appendix~\ref{app:Arrival}.
In the main text, we restricted ourselves to the problem of first detected return only for notational economy.
The frequencies of the oscillations in $F_n$ can be found in the asymptotic decay of the return amplitudes.
This is clear from comparison of Eq.~\eqref{eq:RAAsym} and Eq.~\eqref{eq:AsymFDA1}.
The reason is that they both are related via $f\sof{E}$.
The $\tau$-dependence of the frequencies gives rise to the existence of critical detection periods, when the number of different frequencies changes abruptly.
In the ordinary case, the frequencies are determined by the van Hove singularities.
The power law exponents of Eq.~\eqref{eq:AsymFDP} are the exact double of the classical exponents of the first passage problem for random walks.
The hand-waving argument is that Eq.~\eqref{eq:QuantumRenewal} is an equation for amplitudes, whereas its classical analogue Eq.~\eqref{eq:ClassicalRenewal} is an equation for probabilities.
The necessary squaring operation brings the additional factor of two in the exponent.
This exponent doubling when going from the classical to the quantum problem was also reported in Refs.~\cite{Muelken2006-0,Muelken2011-0} for the return probability, and also in Ref.~\cite{Boettcher2015-0} where it manifested in a halving of the walk dimension for certain discrete-time quantum walks.
However, only for ordinary states the relevant spectral dimension $d_S$ agrees with the one found in the DOS, $d_S^\text{DOS}$.
As a consequence of the large decay exponents of $F_n$, it assumes substantial values only for small $n$.
The larger exponent comes with the price that the quantum system is not {\em almost surely detectable}, in the sense that the total probability of detection, $P_\text{det} = \sSum{n=1}{\infty} F_n$, is always smaller than unity.
This holds unless $\mu\sof{\lambda}$ is a sum of delta functions \cite{Gruenbaum2013-0}.
Hence, in our infinite space models, featuring a continuous energy spectrum, there is always a non-zero probability that the particle escapes the detector.
Such a deficit in the total return probability is also known in the classical problem, where it marks the dichotomy between recurrent and transient random walks \cite{Polya1921-0}.
Random walks with a spectral dimension smaller than the critical dimension two will eventually return to their initial position with probability one and therefore are considered recurrent.
For transient random walks with a spectral dimension larger than two there is a finite probability that they never return to their initial site.
Although the total detection probability is smaller than unity, one can compute the average first detection time: $\sEA{n} = \sSum{n=1}{\infty} F_n n / P_\text{det}$ {\em under the condition} that the system was detected at all.
As a consequence of the larger exponents we found (the slowest decay is $n^{-2}\ln^{-4}n$), this expectation is always finite, contrasting the classical situation.
The conditional variance of the first detection time is finite only in dimensions larger than three.
We found that $d_S = 2$ is a critical dimension, which also plays an important role for quantum search algorithms.
For a coined quantum search algorithm in dimensions larger than two, the Grover efficiency $\sLandau{\sqrt{N}}$ can be attained \cite{Aaronson2003-0}.
In coinless, oracle quantum searches, the critical dimension is four \cite{Childs2004-0,Li2017-0}.
However, the last reference shows that the decisive quantity is exactly the {\em spectral} dimension, just as in our problem.
Finally, our main assumption for the identification of the spectral dimensions found in the DOS and in the MSDOS is that neither $\sKet{\ensuremath{\psi_\text{d}}}$ nor $\sKet{\ensuremath{\psi_\text{in}}}$ is insufficiently populated, i.e. both overlap with the critical van Hove energies.
Exceptions have been discussed in the main text and show that the first detection probabilities subtly depend on the initial and detection state.
This is in sharp contrast to the classical problem, where the particular choice of initial state often only changes the transient, but not the long-time behavior of the first passage probability.
This new dependence on the detection state is encoded in the MSDOS $f\sof{E}$ associated with the detection state.
As we have shown in the main text this important quantity is related to, but ultimately different from the DOS.
It requires a high degree of symmetry in the system and a special choice of states for the DOS and the MSDOS to coincide.
We felt that $f\sof{E}$ is rarely known outside the mathematical community, hence we focused on ``ordinary'' situations where the spectral dimension and the singularities of $f\sof{E}$ can also be found in the well-known density of states.
Still, this state-dependence can lead to very counter-intuitive results, if one does not carefully confirm the overlap condition.
Many open questions remain for the future.
These include the details of the arrival problem, for instance the dependence of the amplitudes $F_{l,d_S}$ from Eq.~\eqref{eq:AsymFDP} on the distance between the initial and the detector position.
We do not have a clear intuition on the situation when the initial state is insufficiently populated, but the detection state is not.
Also unclear is what happens when the energy spectrum is neither completely discrete nor completely continuous, that is when it is singular continuous as in the Aubry-Harper model \cite{Lahiri2017-0}.
It is possible to extend our arguments to coined quantum walks, by discussing the WMSDOS $\mu\sof{\lambda}$ of their evolution operator.
Our most surprising finding is the sensitivity to the initial and detection states for out-of-the-ordinary states, which is a purely quantum phenomenon related to interference.
This necessitates an adjustment of the concept of spectral dimension from $d_S^\text{DOS}$, defined by the DOS, to $d_S$ defined by the MSDOS.
We are confident that this work will lift quantum first detection theory closer to the level of its classical brother, the first passage theory of random walks.
\acknowledgements
The authors acknowledge support from the Israeli Science Foundation under Grant No. 1898/17.
FT is funded by the Deutsche Forschungsgemeinschaft under grant TH-2192/1-1.
In this paper we are concerned with the calculation of amplitudes for
processes that take place in a constant background magnetic field $B$.
There are many papers in the literature in which this kind of process
is considered \cite{Esposito:1995db, Elmfors:1996gy,
Ioannisian:1996pn, Erdas:1998uu, Ganguly:1999ts, Bhattacharya:2001nm,
Bhattacharya:2002aj, Tinsley:2001kb, Nieves:2003kw,
Bhattacharya:2003hq, Nieves:2004qp}, a significant fraction of which
have to do with neutrino processes that may take place in a variety of
astrophysical environments. It is useful to keep those particular
situations in mind, but for our purposes it is convenient to setup a
more general framework.
Thus, let us suppose that we want to calculate the amplitude
for the transition
\begin{eqnarray}
\ket i \stackrel B\longrightarrow \ket f \,,
\label{i->f}
\end{eqnarray}
where $\ket i$ and $\ket f$ denote two states, and the
letter $B$ above the arrow indicates that the
transition takes place in the presence of the external magnetic field.
For reasons that will become clear below, we restrict the initial and
final states to contain no charged particles, only neutral ones,
and we consider the calculation of the amplitude up to terms linear in $B$.
There are at least two ways to proceed with such a calculation.
One way is to start by computing in standard Feynman diagram
perturbation theory the Green's function for
\begin{eqnarray}
\ket {i + \gamma} \to \ket f \,,
\label{igf}
\end{eqnarray}
in the absence of $B$, and in particular with the photon off-shell,
which corresponds to the matrix element $\langle f|j_\mu|i\rangle$ of
the electromagnetic current. Since the external particles are assumed
to be neutral, then in the context of the perturbative calculation the
off-shell photon is attached only to the internal lines of any diagram
that contributes to the Green's function. The amplitude for the
transition in \Eq{i->f} is then obtained at the end by inserting the
appropriate photon field that corresponds to the background magnetic
field, together with the wavefunctions of the particles in the initial
and final states involved in the transition. This procedure yields
the amplitude to first order in $B$. This was in fact the approach
employed in the original calculation of the neutrino index of
refraction in the presence of a magnetic field \cite{D'Olivo:1989cr}.
We will refer to this method as the \emph{Perturbative Method}, or P
method for short.
An alternative approach is to calculate the Green's function
for the transition
\begin{eqnarray}
\ket i \rightarrow \ket f \,,
\end{eqnarray}
but employing the Schwinger \cite{Schwinger:1951nm}
propagator in place of the Feynman propagator for all the
charged internal lines that appear in any diagram that contributes
to the amplitude. Then, if the result is expanded in powers of $B$,
the conventional expectation is that the linear term in $B$ should
coincide with the result of the P method as specified above.
We will refer to the procedure that we have just described
as the \emph{Linear Schwinger Method}, or S method for short.
Indeed, the familiar calculations already mentioned involving
neutrino processes, confirm this expectation that both methods
yield the same result. The question we address here is
whether this is a special feature of the processes that have been
considered, and whether we can expect the result to hold
for any other process of the type we are considering.
The purpose of this paper is to show that
there are processes for which this equivalence does not hold.
To be specific, in those cases, the diagrams that contribute to the
amplitude in the P method can be classified in two topologically distinct
groups that we call type-1 and type-2 diagrams, which are distinguished
according to whether the electromagnetic vertex has only internal
lines attached to it, or whether it has some external lines attached as well.
As we show, the S method yields a result that is equivalent to the
result of the type-1 diagrams.
For processes for which the type-2 diagrams do not exist,
both the P and S methods yield the same result, which is the case
of the neutrino processes that we have mentioned. But in the
more general case in which the type-2 diagrams exist, the S
method does not yield the complete amplitude. The
total amplitude is obtained by taking the result of the type-1 diagrams,
which can be calculated by either method, and then adding
the result of the type-2 diagrams using the P method.
The paper is organized as follows. In Sec.~\ref{s:linsch}, we derive
the linearized forms of the Schwinger propagators for fermions,
scalars and vector particles in order to set up the stage for
calculations in the S method. In Sec.~\ref{s:ptb}, we outline the P
approach, introducing the classifications of all relevant diagrams
into two types, which we have called type-1 and type-2 above. In
Sec.~\ref{s:equiv}, we show the example of a process in which the S
and the P methods yield identical results because only type-1 diagrams
are present. In Sec.~\ref{s:noneq}, we discuss processes where type-2
diagrams are also present, so that the S method does not give the
total amplitude. Finally, in Sec.~\ref{s:conclu}, we present our
conclusions.
\section{Linear Schwinger approach}\label{s:linsch}
The Schwinger formula \cite{Schwinger:1951nm} gives the fermion
propagator to all orders in the $B$ field, and the analogous formulas
for the scalar and vector propagators are also known
\cite{Erdas:gy}. Since we will be looking at the amplitudes calculated
to first order in $B$, a simpler formula, which is correct to that
order, is sufficient for our purpose. Below we give a short derivation
of the linear formulas for the propagators, in a manner that will be
useful in what follows.
\subsection{Fermion propagator in a magnetic field}
We follow the derivation given in \Ref{Nieves:2004qp}
for the fermion propagator,
and subsequently extend it to obtain the analogous result
for the scalar and vector propagator.
The propagator of a Dirac particle of mass $m$ and charge $eQ$
in an external electromagnetic field satisfies
\begin{eqnarray}
\Big[ i \gamma^\mu {\cal D}_\mu - m \Big] S(x,x')
= \delta^4 (x - x') \,,
\label{coordeq}
\end{eqnarray}
where the electromagnetic gauge covariant (EGC) derivative ${\cal
D}_\mu$ is defined by
\begin{eqnarray}
{\cal D}_\mu = {\partial \over \partial x^\mu} + ieQ A_\mu(x)\,.
\label{Dmu}
\end{eqnarray}
The Schwinger solution is of the form
\begin{eqnarray}
S(x,x') = \Phi(x,x')\int\,\frac{d^4 p}{(2\pi)^4}
e^{-ip\cdot(x - x')} S_F(p) \,,
\label{ansatz}
\end{eqnarray}
where the overall phase $\Phi(x,x')$ is chosen such that
\begin{eqnarray}
\label{Phieq}
i{\cal D}_\mu \Phi(x,x^\prime) = \frac{eQ}{2}F_{\mu\nu}(x - x^\prime)^\nu
\Phi(x,x^\prime) \,,
\end{eqnarray}
and it depends on the gauge choice for the background field.
For a constant field $F_{\mu\nu}$ corresponding to the
background magnetic field, we can choose the gauge for the
vector potential such that
\begin{eqnarray}
A_\mu(x) = -\frac{1}{2}F_{\mu\nu}x^\nu \,.
\label{Amu}
\end{eqnarray}
With this choice,
\begin{eqnarray}
\Phi(x,x') = \exp \Big( \frac{i}{2}eQ x^\mu F_{\mu\nu}x'^\nu \Big) \,,
\label{defphi}
\end{eqnarray}
and in a different gauge, the expression for $\Phi(x,x')$ will be
different \cite{Bhattacharya:2004nj, Bhattacharya:2005zu}. We will
always employ the gauge dictated by \Eq{Amu}.
By virtue of \Eq{Phieq}, $\Phi$ has the property that, for an arbitrary
function $f$,
\begin{eqnarray}
{\cal D}_\mu (\Phi f) = \Phi\left[\partial_\mu -\frac{ieQ}{2}F_{\mu\nu}
(x - x^\prime)^\nu\right]f \,.
\end{eqnarray}
Moreover, if $f$ is a translationally invariant function of two
co-ordinates, $f(x,x')$, so that its Fourier transform is given by
\begin{eqnarray}
f(x,x') = \int {d^4p \over (2\pi)^4} e^{-ip\cdot(x - x')} \widetilde
f(p) \,,
\end{eqnarray}
then it follows that
\begin{eqnarray}
{\cal D}_\mu (\Phi f) = -i \Phi(x,x') \int {d^4p \over (2\pi)^4}
e^{-ip\cdot(x - x')} \widetilde{\cal D}_\mu \widetilde f(p) \,,
\label{Dphif}
\end{eqnarray}
where
\begin{eqnarray}
\widetilde{\cal D}_\mu = p_\mu - \frac{ieQ}{2}F_{\mu\nu}
\frac{\partial}{\partial p_\nu} \,.
\label{Dp}
\end{eqnarray}
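\Eq{Dphif} follows by an integration by parts in the momentum variable
(the boundary terms being dropped), after using the elementary identity
\begin{eqnarray}
(x - x^\prime)^\nu \, e^{-ip\cdot(x - x^\prime)} =
i\,\frac{\partial}{\partial p_\nu}\, e^{-ip\cdot(x - x^\prime)} \,,
\end{eqnarray}
which trades the factor $(x - x^\prime)^\nu$ for a derivative acting on
$\widetilde f(p)$.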
Substituting \Eq{ansatz} into \Eq{coordeq} and using \Eq{Dphif},
the equation for the momentum space propagator $S_F(p)$ is
\begin{eqnarray}
\Phi(x,x')\int\,\frac{d^4 p}{(2\pi)^4}
e^{-ip\cdot(x - x')} \left[\slash{p} - \frac{ieQ}{2}F^{\mu\nu}
\gamma_\mu\frac{\partial}{\partial p^\nu} - m\right]S_F(p) = {\delta^4
(x - x')} \,.
\end{eqnarray}
Using $\Phi(x,x)=1$ this equation can be solved by setting
\begin{eqnarray}
\left[\slash{p} - m - \frac{ieQ}{2}F^{\mu\nu}
\gamma_\mu\frac{\partial}{\partial p^\nu} \right]S_F(p) = 1 \,.
\label{SFeq}
\end{eqnarray}
The exact solution of this equation gives the Schwinger formula.
As already mentioned, for our purpose it is enough to obtain the
solution only to the linear order in $B$, and therefore we write
it as
\begin{eqnarray}
S_F(p) = S_0(p) + S_B(p) \,,
\label{0+B}
\end{eqnarray}
where $S_0(p)$ is the propagator in the vacuum,
\begin{eqnarray}
S_0(p) = {1 \over \slash p - m + i\epsilon} \,,
\label{S0}
\end{eqnarray}
and $S_B(p)$ is the correction due to the $B$ field.
Substituting this form in \Eq{SFeq} and solving perturbatively,
we then obtain
\begin{eqnarray}
S_B(p) = S_0(p) \left[ \frac{ieQ}{2}F^{\mu\nu}
\gamma_\mu\frac{\partial}{\partial p^\nu} \right] S_0(p) \,.
\label{SB}
\end{eqnarray}
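Since $S_0(p)$ depends on $p$ only through $\slash p$, we have
$\partial S_0(p)/\partial p^\nu = -S_0(p)\,\gamma_\nu\,S_0(p)$, and
therefore \Eq{SB} can also be written in the fully explicit form
\begin{eqnarray}
S_B(p) = -\,\frac{ieQ}{2}\,F^{\mu\nu}\,
S_0(p)\,\gamma_\mu\,S_0(p)\,\gamma_\nu\,S_0(p) \,,
\end{eqnarray}
which is sometimes more convenient for explicit evaluations.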
\subsection{Propagator of scalars and charged gauge bosons in a
magnetic field}
Following the approach outlined above, we now find the analogous
expressions for the propagators of charged gauge bosons and
scalars, up to the linear order in $B$.
The scalar field propagator satisfies the equation
\begin{eqnarray}
\left[ {\cal D^\mu} {\cal D_\mu} + m^2 \right]
\Delta(x,x') = \null - \delta^4(x-x') \,.
\end{eqnarray}
Taking the ansatz
\begin{eqnarray}
\label{scalaransatz}
\Delta(x,x') = \Phi(x,x')\int\,\frac{d^4 p}{(2\pi)^4}
e^{-ip\cdot(x - x')} \Delta_F(p) \,,
\end{eqnarray}
the equation for $\Delta_F(p)$ is
\begin{eqnarray}
\Big[ \widetilde{\cal D}^\mu \widetilde{\cal D}_\mu -
m^2 \Big]\Delta_F(p) = 1 \,.
\end{eqnarray}
Retaining only up to linear terms in $B$, the equation becomes
\begin{eqnarray}
\label{scalareq}
\left[ p^2 - m^2 - ieQ F_{\mu\nu} p^\mu {\partial \over \partial
p_\nu} \right] \Delta_F(p) = 1 \,,
\end{eqnarray}
which we solve by writing
\begin{eqnarray}
\Delta_F(p) = \Delta_0(p) + \Delta_B(p) \,,
\end{eqnarray}
with
\begin{eqnarray}
\Delta_0(p) = {1 \over p^2 - m^2 + i\epsilon} \,.
\label{Delta0}
\end{eqnarray}
Substituting this form in \Eq{scalareq} we then obtain
for the $B$-dependent term
\begin{eqnarray}
\Delta_B(p) = ieQ F_{\mu\nu} p^\mu \Delta_0(p) {\partial \over \partial
p_\nu} \Delta_0(p) \,.
\label{DeltaB}
\end{eqnarray}
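It is worth noting that, since
$\partial\Delta_0(p)/\partial p_\nu = -2p^\nu\,\Delta_0^2(p)$,
the expression in \Eq{DeltaB} reduces to
\begin{eqnarray}
\Delta_B(p) = -\,2ieQ\,F_{\mu\nu}\,p^\mu p^\nu\,\Delta_0^3(p) \,,
\end{eqnarray}
which vanishes identically by the antisymmetry of $F_{\mu\nu}$. Thus,
to linear order, the $B$ dependence of the scalar propagator resides
entirely in the phase factor $\Phi(x,x^\prime)$. We nevertheless keep
the symbolic form of \Eq{DeltaB}, which exhibits the structure common
to all the propagators; the analogous expression for the $W$ boson
obtained below does not vanish.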
We consider now the charged gauge bosons. The $W$-bosons do not couple
to the photon through minimal coupling alone. In other words, in addition to the
couplings obtained by replacing all partial derivatives in their free
Lagrangian by the EGC derivative defined in \Eq{Dmu}, they have an
anomalous coupling of the form $ W^\dagger_\mu F^{\mu\nu} W_\nu$.
Thus, the terms in the pure gauge Lagrangian involving the quadratic
terms in $W$ and their couplings to the photon can be written in this
suggestive form used by Erdas and Feldman \cite{Erdas:gy}:
\begin{eqnarray}
\mathscr L_{WA} = - \Big({\cal D}^\mu W^\nu \Big)^\dagger
\Big({\cal D}_\mu W_\nu - {\cal D}_\nu W_\mu \Big) + ie
W^\dagger_\mu W_\nu F^{\mu\nu} \,,
\label{LWA}
\end{eqnarray}
where $W_\mu$ is the field operator which annihilates the $W^+$
boson. In addition, we need to introduce the gauge fixing terms which
are necessary for quantizing the gauge fields. These terms have the
generic form
\begin{eqnarray}
\mathscr L_{\rm gf} = - {1\over \xi} |f_W|^2 \,,
\label{Lfix}
\end{eqnarray}
where $f_W$ contains the $W$-field as well as the unphysical Higgs
boson fields. In the commonly used $R_\xi$ gauges, one takes
\begin{eqnarray}
f_W = \partial^\mu W_\mu + i\xi M_W \phi^+
\label{Rxi}
\end{eqnarray}
so that the gauge fixing Lagrangian contains no interaction term.
In the presence of a background electromagnetic field, however, the pure
gauge Lagrangian contains EGC derivatives of $W$, not simple
derivatives. Erdas and Feldman \cite{Erdas:gy} pointed out that it is
therefore more convenient that we use a gauge condition that involved
${\cal D}_\mu$ and not just $\partial_\mu$ acting on the $W$-boson
field. Indeed, such a gauge condition was discussed in a different
context much earlier \cite{Fujikawa:1973qs} where,
instead of \Eq{Rxi}, the following choice was made,
\begin{eqnarray}
f_W = {\cal D}^\mu W_\mu + i\xi M_W \phi^+ \,.
\label{nonlinf}
\end{eqnarray}
The resulting gauge fixing term, defined
according to \Eq{Lfix}, must be added to the gauge-invariant part of
the Lagrangian before obtaining Feynman rules. An important
consequence of \Eq{nonlinf} is that the resulting Lagrangian has no
cubic coupling involving the $W$-boson, the unphysical Higgs and the
photon. On the other hand, the Feynman rule for the cubic $WW$-photon
coupling contains the gauge parameter $\xi$, and is given by
\begin{eqnarray}
\begin{picture}(180,70)(-60,-30)
\Photon(-30,30)(0,0){2}{5}
\Text(-17,15)[r]{\rotatebox{-45}{$W^+_\mu(l-k)$}}
\Photon(-30,-30)(0,0){2}{5}
\Text(-17,-15)[r]{\rotatebox{45}{$A_\alpha(k)$}}
\Photon(0,0)(50,0){2}{5}
\Text(25,5)[b]{$W^+_\nu(l)$}
\Text(70,0)[l]{= \quad $ieO_{\alpha\mu\nu}(k,l-k)$}
\SetWidth{2}
\ArrowLine(-16,16)(-14,14)
\ArrowLine(-16,-16)(-14,-14)
\ArrowLine(24,0)(26,0)
\end{picture}
\end{eqnarray}
with
\begin{eqnarray}
O_{\alpha\mu\nu}(k,l-k) = \eta_{\mu\nu} (2l-k)_\alpha -
\eta_{\nu\alpha} \Big( (2-\zeta) k + \zeta l \Big)_\mu +
\eta_{\alpha\mu} (2k - \zeta l)_\nu \,,
\label{cubic}
\end{eqnarray}
where $\eta_{\mu\nu}$ is the metric tensor and the shorthand
\begin{eqnarray}
\zeta = 1 - {1 \over \xi}
\label{zeta}
\end{eqnarray}
has been used.
The equation of motion of the $W$-bosons in the background field, which
is derived from the Lagrangian that follows from
\Eqs{LWA}{Lfix}, together with the gauge fixing function defined by
\Eq{nonlinf}, is given by
\begin{eqnarray}
\left[ - \eta_{\alpha\beta} ({\cal D}^2 + M_W^2) + {\cal D}_\alpha
{\cal D}_\beta - {1 \over \xi} {\cal D}_\beta {\cal D}_\alpha - ie
F_{\beta\alpha} \right] W^\alpha = 0 \,,
\end{eqnarray}
where the last term comes from the anomalous electromagnetic coupling
of the $W$-bosons that appears in \Eq{LWA}. Using the
commutation relation
\begin{eqnarray}
\left[ {\cal D}_\alpha, {\cal D}_\beta \right] W^\gamma = ie
F_{\alpha\beta} W^\gamma \,,
\end{eqnarray}
the equation can be rewritten in the form
\begin{eqnarray}
\left[ - \eta_{\alpha\beta} ({\cal D}^2 + M_W^2) + \zeta {\cal
D}_\beta {\cal D}_\alpha - 2ie F_{\beta\alpha} \right] W^\alpha = 0
\,,
\end{eqnarray}
and the propagator in the co-ordinate space will then satisfy
the equation
\begin{eqnarray}
\left[ \eta_{\lambda\mu} \left( {\cal D}^\alpha
{\cal D}_\alpha + M_W^2 \right) - \zeta
{\cal D}_\lambda {\cal D}_\mu + 2ieF_{\lambda\mu} \right]
D^{\mu\nu} (x,x') = \delta_\lambda^\nu \delta^4(x-x') \,.
\end{eqnarray}
Following the same procedure used above for the fermion and scalar fields,
we obtain the equation for the momentum-space propagator,
\begin{eqnarray}
\left[ \eta_{\lambda\mu} \left( - \widetilde{\cal D}_\alpha
\widetilde{\cal D}^\alpha + M_W^2 \right) + \zeta
\widetilde{\cal D}_\lambda \widetilde{\cal D}_\mu +
2ieF_{\lambda\mu} \right]
D_F^{\mu\nu} (p) = \delta_\lambda^\nu \,,
\end{eqnarray}
where $\widetilde{\cal D}$ has been defined in \Eq{Dp}.
Linearization of this equation gives
\begin{eqnarray}
\left[
\eta_{\lambda\mu} (-p^2+M_W^2)
+ \zeta p_\lambda p_\mu
- {ie \over 2} F^{\alpha\beta} R_{\alpha\beta\lambda\mu}
\right] D_F^{\mu\nu} (p) = \delta_\lambda^\nu \,,
\label{DFeq}
\end{eqnarray}
where $R_{\alpha\beta\lambda\mu}$ is defined by
\begin{eqnarray}
R_{\alpha\beta\lambda\mu} = \Big[ -2\eta_{\lambda\mu} p_\alpha + \zeta
p_\lambda \eta_{\alpha\mu} + \zeta p_\mu \eta_{\alpha\lambda} \Big]
{\partial \over
\partial p^\beta} + (\zeta-4) \eta_{\lambda\alpha}
\eta_{\mu\beta} \,,
\end{eqnarray}
and it can be expressed in the form
\begin{eqnarray}
R_{\alpha\beta\lambda\mu} = - O_{\alpha\lambda\mu} (0,p) {\partial \over
\partial p^\beta}
+ (\zeta-4) \eta_{\lambda\alpha} \eta_{\mu\beta} \,,
\label{R}
\end{eqnarray}
with $O_{\alpha\lambda\mu}$ being the tensor defined in \Eq{cubic}.
As before, we solve \Eq{DFeq} by decomposing the propagator in the form
\begin{eqnarray}
\label{DFmunu}
D_F^{\mu\nu} (p) = D_0^{\mu\nu} (p) + D_B^{\mu\nu} (p)\,,
\end{eqnarray}
where
\begin{eqnarray}
\label{D0}
D^{\alpha\beta}_0(p) = {1 \over p^2 - M_W^2 + i\epsilon} \left( -
\eta^{\alpha\beta} + {(1-\xi) p^\alpha p^\beta \over p^2 - \xi M_W^2}
\right)\,.
\end{eqnarray}
Then, substituting \Eq{DFmunu} into \Eq{DFeq} and
solving for the linear term in $B$ we obtain
\begin{eqnarray}
D_B^{\mu\nu} (p) = {ie \over 2} F^{\alpha\beta} D_0^{\lambda\mu} (p)
R_{\alpha\beta\lambda\rho} D_0^{\rho\nu} (p) \,.
\end{eqnarray}
In the gauge introduced in \Eq{nonlinf}, the propagator of the
unphysical charged Higgs field in the background magnetic field,
which we denote by $\Delta^{(W)}_F(p)$, can be obtained
by making the substitution
\begin{eqnarray}
m^2 \longrightarrow \xi M_W^2
\end{eqnarray}
in the formulas given in \Eqs{Delta0}{DeltaB} for the scalar propagator.
For later reference, we quote the result,
\begin{eqnarray}
\label{DeltaHiggs}
\Delta^{(W)}_F (p) = \Delta^{(W)}_0 (p) + \Delta^{(W)}_B (p)\,,
\end{eqnarray}
where
\begin{eqnarray}
\label{DeltaHiggs0B}
\Delta^{(W)}_0(p) &=& {1 \over p^2 - \xi M_W^2} \,,\nonumber\\
\Delta^{(W)}_B(p) & = & ieQ F_{\mu\nu} p^\mu \Delta^{(W)}_0(p)
{\partial \over \partial p_\nu} \Delta^{(W)}_0(p) \,.
\end{eqnarray}
The Faddeev-Popov ghost propagator should also be modified in a
magnetic field, but we will not need it in our subsequent discussion.
\section{The perturbative approach}\label{s:ptb}
Here we consider the calculation of the $B$-dependent contribution to
the off-shell amplitude for the process $i\stackrel{B}{\rightarrow}f$
using the P method, which we denote by
$\mathbb M^{(P)}_{i\to f}$. For clarity, we consider first the case
in which $i$ and $f$ are single particle states (e.g., a neutrino) and
extend the result afterwards to the general case.
\subsection{One-particle amplitude}
\label{subsec:onepartamp}
In the P method the amplitude is expressed in terms of the
off-shell electromagnetic vertex function $\Gamma_\mu(p_i,p_f)$,
which is defined such that the on-shell matrix element
of the electromagnetic current operator $j_\mu(x)$ is given by
\begin{eqnarray}
\label{defgamma}
\langle f(p_f)|j_\mu(0)|i(p_i)\rangle = \overline w_f
\Gamma_\mu(p_i,p_f) w_i \,,
\end{eqnarray}
where $w_{i,f}$ denote the momentum space wavefunctions appropriate
for the particle (e.g., Dirac spinors for fermions, polarization
vectors for spin-1 particles). The off-shell amplitude for the
transition $i\to f$ in an external electromagnetic field is then given by
\begin{eqnarray}
\mathbb M^{(P)}_{i \to f}(p_i,p_f) = -i
\int d^4x \; e^{i(p_f - p_i)\cdot x}A^\mu(x) \Gamma_\mu(p_i,p_f) \,,
\end{eqnarray}
which can be written as
\begin{eqnarray}
\label{MPonedef}
\mathbb M^{(P)}_{i \to f}(p_i,p_f) = -i
\left[\int d^4x \; e^{ik\cdot x}A^\mu(x) \Gamma_\mu(p_i,p_i + k)
\right]_{k = p_f - p_i} \,.
\end{eqnarray}
For $A^\mu(x)$ we now substitute the vector potential given in \Eq{Amu}.
Then using the representation
\begin{eqnarray}
\delta'(t) = {i \over 2\pi} \int_{-\infty}^{+\infty} dz\; ze^{izt}
\end{eqnarray}
as well as the relation
\begin{eqnarray}
\delta' (t - a)f(t) = \null - \delta(t - a) f' (a)
\end{eqnarray}
for the derivative of the delta function, the amplitude is given by
\begin{eqnarray}
\mathbb M^{(P)}_{i \to f}(p_i,p_f)
&=& - {1\over 2} (2\pi)^4 \delta^4 (p_f - p_i) F^{\mu\nu}
\left[{\partial \over \partial k^\nu} \Gamma_\mu(p_i,p_i+k)
\right]_{k = 0} \,,
\label{ampwithgamma}
\end{eqnarray}
where in the last factor we have set $k = 0$
by virtue of the delta function. The $B$-dependent contribution
to the self-energy, which is identified by writing
\begin{eqnarray}
\mathbb M^{(P)}_{i\to f}(p_i,p_f) = -i (2\pi)^4 \delta^4(p_i-p_f)
\Sigma^{(P)}(p_i) \,,
\end{eqnarray}
is therefore given by
\begin{eqnarray}
\Sigma^{(P)}(p) = \null - {i \over 2} F^{\mu\nu}
\left[ {\partial \over \partial k^\nu} \Gamma_\mu(p,p+k)\right]_{k=0} \,.
\label{derivreln}
\end{eqnarray}
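For the reader's convenience, we record the Fourier integral used in
passing from \Eq{MPonedef} to \Eq{ampwithgamma}. With the gauge choice
of \Eq{Amu},
\begin{eqnarray}
\int d^4x \; e^{ik\cdot x} A^\mu(x) =
-\frac{1}{2}\,F^{\mu\nu} \int d^4x \; x_\nu \, e^{ik\cdot x} =
\frac{i}{2}\,(2\pi)^4\, F^{\mu\nu}\,
\frac{\partial}{\partial k^\nu}\,\delta^4(k) \,,
\end{eqnarray}
and the derivative of the delta function is then transferred to the
vertex function by means of the relations quoted above.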
\subsection{General amplitude}
\label{subsec:generalamp}
We now consider the general case. Suppose the state $\ket i$
contains $n$ particles with momenta $p_{1,2,...,n}$, and the state
$\ket f$ contains $n'$ particles with momenta $p'_{1,2,...,n'}$. We
denote the total momenta of each state by
\begin{eqnarray}
P & = & \sum_{i = 1}^{n} p_i \,,\nonumber\\
P' & = & \sum_{f = 1}^{n'} p'_f \,.
\end{eqnarray}
In analogy with the one-particle case discussed already, we now define
the off-shell vertex function involving the photon in such a way that
the on-shell matrix element of the electromagnetic current operator
$j_\mu(x)$ between the states $|i\rangle$ and $|f\rangle$ is given by
\begin{eqnarray}
\langle f|j_\mu(0)|i\rangle = \Big( \overline w_f \Big)^{\alpha'_1\cdots
\alpha'_{n'}}
\Gamma_{\mu\alpha_1\cdots\alpha_n \alpha'_1\cdots \alpha'_{n'}}
(p_1,p_2,...,p_n,p'_1,p'_2,...,p'_{n'}) \Big( w_i \Big)^{\alpha_1\cdots
\alpha_n} \,,
\end{eqnarray}
where the middle factor is the vertex function, and the other two
factors symbolically denote the collection of momentum space
wavefunctions from the final and initial state particles. We have put
a general kind of index for these external particles. For a scalar in
the external states, the corresponding $w$-factor will be unity, and
the index should be absent in the vertex function. For a fermion
field, we usually suppress the index in favor of a matrix notation.
For vector and tensor fields, the indices appear explicitly. For the
moment, we will omit all indices on the vertex function except the
photon index, and use the compact notation for the momenta to write
just $\Gamma_\mu(p_i,p'_f)$ for the vertex function, but it should be
understood to be the full quantity defined above, with all indices and
all momenta.
We classify the diagrams that contribute to the vertex function into
two types. As stated in the Introduction, we deal with processes
where all particles in external states are electrically neutral.
Thus, in the diagrams, the electromagnetic current operator $j_\mu(0)$
is necessarily attached either to an internal line, representing an
electrically charged particle, or to a vertex. We refer to these
types of diagram as \emph{type-1} and \emph{type-2}, respectively.
Schematic examples of both kinds of diagrams have been shown in
\fig{f:example}.
Let us consider type-1 diagrams first, and denote their contribution
to the vertex function as $\Gamma^{(1)}_\mu$. In
each such diagram, we can always label the loop-integration momentum
$l$ in such a way that $l$ is the momentum carried by the charged
particle line outgoing from the electromagnetic vertex. A little
thought reveals that, as a consequence, the line coming into the same
vertex carries the momentum $l + P - P^\prime$. The convention is
represented in \fig{f:example}a. Similarly, we consider a generic
type-2 diagram, as shown in \fig{f:example}b. Here, some of the external
lines are attached to the photon vertex, and we denote by $q$
the net momentum flowing into the diagram due to all these other external
lines. In addition, we denote by $\tilde P$ and $\tilde P^\prime$ the
partial total momenta of the remaining incoming and outgoing lines,
so that $\tilde P - \tilde P^\prime + q = P - P^\prime$.
Then, choosing the integration variable $l$ such that the
momentum carried by the internal line going into the
vertex is again $l + P - P^\prime$, it follows that
the outgoing internal line has momentum $l + q$,
as indicated in \fig{f:example}b.
\begin{figure}
\begin{center}
\begin{picture}(70,60)(-35,-40)
\ArrowLine(0,0)(-16,-12)
\ArrowLine(0,0)(-12,-16)
\Text(0,-20)[]{\bf \ldots}
\Text(0,-50)[bc]{(a)}
\Text(20,-20)[]{$P$}
\Text(-20,-20)[]{$P^\prime$}
\ArrowLine(16,-12)(0,0)
\ArrowLine(12,-16)(0,0)
\ArrowArcn(0,10)(10,90,-90)
\Text(12,10)[l]{$l$}
\ArrowArcn(0,10)(10,260,90)
\Text(-14,10)[r]{$l + P - P^\prime$}
\Photon(0,20)(0,40){2}{4}
\GCirc(0,0){5}{.3}
\end{picture}
\hspace{64pt}
\begin{picture}(70,60)(-35,-40)
\ArrowLine(0,0)(-16,-12)
\ArrowLine(0,0)(-12,-16)
\Text(0,-20)[]{\bf \ldots}
\Text(0,-50)[bc]{(b)}
\Text(20,-20)[]{$\tilde P$}
\Text(-20,-20)[]{$\tilde P^\prime$}
\ArrowLine(16,-12)(0,0)
\ArrowLine(12,-16)(0,0)
\ArrowArcn(0,10)(10,90,-90)
\Text(12,10)[l]{$l + q$}
\ArrowArcn(0,10)(10,260,90)
\Text(-14,10)[r]{$l + P - P^\prime$}
\Photon(0,20)(-20,40){1}{5.5}
\Line(0,20)(15,40)
\Line(0,20)(15,35)
\Line(0,20)(15,30)
\Text(20,35)[l]{$\Big\} \Longleftarrow q$}
\GCirc(0,0){5}{.3}
\end{picture}
\end{center}
\caption{Schematic examples of (a) a type-1 diagram, and (b) a type-2
diagram, for the vertex function $\Gamma_\mu(p_i,p_f)$.
The momentum variables are defined in the text.
\label{f:example}
}
\end{figure}
We now imagine constructing the amplitudes corresponding to each type
of diagram. For each such amplitude, we define an auxiliary
function by making the replacement
\begin{eqnarray}
\label{introducek}
l + P - P^\prime \rightarrow l - k\,,
\end{eqnarray}
in the propagator (and the tree-level vertex function) of the internal
charged line that goes into the electromagnetic vertex, where $k$ is
an arbitrary vector. We denote by $\overline\Gamma^{(1)}_\mu(p_i,p_f,k)$
the sum of the auxiliary functions corresponding to the type-1 diagrams,
and similarly by $\overline\Gamma^{(2)}_\mu(p_i,p_f,k)$ the corresponding
sum obtained in the same way for the type-2 diagrams. Furthermore, we define
the total auxiliary vertex function as the sum
\begin{eqnarray}
\overline\Gamma_\mu(p_i,p_f,k) = \overline\Gamma^{(1)}_\mu(p_i,p_f,k) +
\overline\Gamma^{(2)}_\mu(p_i,p_f,k)\,.
\end{eqnarray}
The function $\overline\Gamma_\mu(p_i,p_f,k)$ does not have a direct physical
meaning, but by construction it is such that
\begin{eqnarray}
\overline\Gamma_\mu(p_i,p_f,k)\bigg|_{k = P' - P} =
\Gamma_\mu(p_i,p_f) \,,
\label{Gammakrel}
\end{eqnarray}
which is the important relation for us in what follows.
We want to consider the process of \Eq{i->f}. The off-shell amplitude
in the presence of the external magnetic field, which is given by
\begin{eqnarray}
\mathbb M^{(P)}_{i \to f}(p_i,p_f) = -i
\int d^4x \; e^{i(P^\prime - P)\cdot x}A^\mu(x)\Gamma_\mu(p_i,p_f)\,,
\end{eqnarray}
can be expressed in terms of the auxiliary vertex function
$\overline\Gamma_\mu(p_i,p_f,k)$ as
\begin{eqnarray}
\mathbb M^{(P)}_{i \to f}(p_i,p_f) = -i
\left[\int d^4x \; e^{ik\cdot x}A^\mu(x)\overline\Gamma_\mu(p_i,p_f,k)
\right]_{k = P^\prime - P}\,.
\label{amplB}
\end{eqnarray}
The same manipulations that we applied to \Eq{MPonedef} then
lead us here to an equation that resembles \Eq{ampwithgamma},
\begin{eqnarray}
\mathbb M^{(P)}_{i \to f}(p_i,p_f)
= - {1\over 2} (2\pi)^4 \delta^4 (P^\prime - P) F^{\mu\nu}\left\{
{\partial \over \partial k^\nu} \overline\Gamma^{(1)}_\mu(p_i,p_f,k) +
{\partial \over \partial k^\nu} \overline\Gamma^{(2)}_\mu(p_i,p_f,k)
\right\}_{k = 0} \,.
\label{ptbresult}
\end{eqnarray}
It should be remembered that we have been omitting all but the
photon index in writing the $\overline\Gamma^{(1,2)}_\mu$; the remaining
indices are in fact assumed to be present on both sides of this equation.
In spite of the similarity between the type-1 and type-2 contributions
to this equation, there is an important difference between the two types
which a closer look at \fig{f:example} reveals clearly.
If we denote generically by $\tilde S(p)$ the propagator of the
internal line where the electromagnetic vertex is attached (whether
it is a scalar, fermion or vector particle), then in type-1
diagrams that particular propagator appears in a combination
that schematically looks like
\begin{eqnarray}
\label{type1}
\left[\frac{\partial}{\partial k}
\tilde S(l) V\tilde S(l - k)\right]_{k = 0}\,,
\end{eqnarray}
where $V$ is a vertex factor. On the other hand,
for type-2 diagrams the corresponding combination is
\begin{eqnarray}
\label{type2}
\left[\frac{\partial}{\partial k}
\tilde S(l + q)V^\prime \tilde S(l - k)\right]_{k = 0}\,.
\end{eqnarray}
As we will show in the next section, in all cases, whether scalar,
fermion or vector particles, the combination given
in \Eq{type1} can be expressed in terms of the Schwinger
propagator for a particle with momentum $l$,
and that will allow us to prove the equivalence between
the P-method calculation of the type-1 diagrams and the
S-method. On the other hand, no such relation exists
for the combination given in \Eq{type2}, which among
other things depends not just on $l$ but also on $q$.
Therefore, in processes for which there are no type-2 diagrams,
the P and the S methods are equivalent.
But otherwise, the S-method does not yield the total
amplitude, and the contribution of the type-2 diagrams must
be calculated separately, with the P-method, and added to it.
\section{Example of equivalence of the two approaches}\label{s:equiv}
We consider the P-method calculation of the
neutrino self-energy in a background magnetic field.
For the sake of simplicity, we will consider neutrino
interactions with electrons only, which in the 4-Fermi approximation
are given by
\begin{eqnarray}
\mathscr L = \sqrt{2} G_F [\overline e\gamma_\mu(X + Y\gamma_5)e]
[\overline\nu \gamma^\mu L \nu] \,.
\end{eqnarray}
Here $L$ is the projection operator for the left-chiral components of
fermion fields, while $X$ and $Y$ stand for the weak coupling
constants of the electron.
\begin{figure}
\begin{center}
\begin{picture}(120,100)(-60,-50)
\ArrowLine(50,-20)(0,-20)
\ArrowLine(0,-20)(-50,-20)
\Photon(0,20)(0,60){2}{5}
\ArrowArc(0,0)(20,-90,90)
\ArrowArc(0,0)(20,90,270)
\Text(55,-20)[b]{$\nu(p)$}
\Text(-55,-20)[b]{$\nu(p')$}
\Text(-28,0)[br]{$e$}
\Text(28,0)[bl]{$e$}
\Text(6,40)[l]{$\gamma(k)$}
\SetWidth{2}
\ArrowLine(0,41)(0,39)
\end{picture}
\hspace{4cm}
\begin{picture}(120,100)(-60,-50)
\ArrowLine(50,-20)(0,-20)
\ArrowLine(0,-20)(-50,-20)
\Text(55,-20)[b]{$\nu(p)$}
\Text(-55,-20)[b]{$\nu(p)$}
\Text(-28,0)[b]{$e$}
\SetWidth{2}
\ArrowArc(0,0)(20,-90,270)
\end{picture}
\end{center}
\begin{minipage}[t]{0.47\textwidth}
\caption{Diagram for the off-shell vertex function of the neutrino.
This is required for computing the background field-dependent
contributions to neutrino self-energy in the perturbative
approach.}\label{f:nuPer}
\end{minipage}
\hfill
\begin{minipage}[t]{0.47\textwidth}
\caption{Diagram for computing the neutrino self-energy
in a magnetic field. The thick line indicates that the Schwinger
propagator has to be used.}\label{f:nuSch}
\end{minipage}
\end{figure}
As explained in Section\ \ref{subsec:onepartamp}, in the P-method
we start from the neutrino electromagnetic vertex function
and then determine the $B$-dependent part of the
self-energy by means of \Eq{derivreln}.
At one loop, the relevant diagram is depicted in \fig{f:nuPer},
which in straightforward fashion yields
\begin{eqnarray}
i\Gamma_\mu (p,p+k) = - \surd 2 G_F \gamma^\alpha L
\int {d^4l \over (2\pi)^4} \mathop{\rm Tr}
\Big[ i\gamma_\alpha (X + Y\gamma_5) iS_0(l) ie\gamma_\mu iS_0(l-k)
\Big] \,,
\end{eqnarray}
where, for the electron, we have used $Q=-1$. The $k$ dependence
of the vertex function comes only from the factor $S_0(l - k)$,
and therefore using the relation
\begin{eqnarray}
{\partial \over \partial k^\nu} S_0(l-k) = - {\partial \over
\partial l^\nu} S_0(l-k) \,,
\label{lkderiv}
\end{eqnarray}
and taking the limit $k \rightarrow 0$, \Eq{derivreln} yields
\begin{eqnarray}
i\Sigma^{(P)}
&=& \surd 2 G_F \gamma^\alpha L
{i \over 2} F^{\mu\nu} \int {d^4l \over (2\pi)^4} \mathop{\rm Tr}
\Big[ i\gamma_\alpha (X + Y\gamma_5) iS_0(l) ie\gamma_\mu {\partial
\over \partial l^\nu} iS_0(l) \Big] \,.
\label{sigmaPtb}
\end{eqnarray}
In the S-method, the self-energy is determined to one-loop
from the diagram shown in \fig{f:nuSch}, using the Schwinger propagator
for the internal electron line. In the linear approximation
that we are considering, we use the linear formula for the
propagator given in \Eq{0+B}. Thus, denoting by $\Sigma^{(S)}$
the $B$-dependent part of the self-energy calculated in this way,
we obtain
\begin{eqnarray}
i\Sigma^{(S)} = - \surd 2 G_F \gamma^\alpha L
\int {d^4l \over (2\pi)^4} \mathop{\rm Tr} \Big[ i\gamma_\alpha(X + Y\gamma_5)
iS_B(l) \Big] \,.
\label{sigmaSch}
\end{eqnarray}
Remembering \Eq{SB}, it follows that $\Sigma^{(S)}$ is identical
to $\Sigma^{(P)}$ given in \Eq{sigmaPtb}.
Looking closer at the two methods, we can see why they are equivalent
in this case. In the P-method, the photon vertex appears
between the two electron propagators in the combination
\begin{eqnarray}
C_\mu (k,l) \equiv iS_0(l) \Big( -ieQ\gamma_\mu \Big) iS_0(l - k) \,,
\label{Cmu}
\end{eqnarray}
which using \Eq{lkderiv}, is seen to satisfy
\begin{eqnarray}
\null - {i\over 2} F^{\mu\nu}
\left[ {\partial \over \partial k^\nu} C_\mu (k,l) \right]_{k=0} =
iS_B(l) \,.
\label{crucial}
\end{eqnarray}
This Ward-like identity
is the crucial relation that guarantees the equivalence of the
two approaches. Diagrammatically, it can be represented in the form
\begin{eqnarray}
\null - {i\over 2} F^{\mu\nu} \left[ {\partial \over \partial k^\nu}
\left(
\begin{picture}(70,40)(-35,0)
\Line(0,0)(-16,-12)
\Line(0,0)(-12,-16)
\Text(0,-20)[]{\bf \ldots}
\Line(0,0)(16,-12)
\Line(0,0)(12,-16)
\ArrowArc(0,10)(10,-90,90)
\Text(12,10)[l]{$l-k$}
\ArrowArc(0,10)(10,90,270)
\Text(-12,10)[r]{$l$}
\Photon(0,20)(0,30)22
\ArrowLine(0,31)(0,30)
\Text(0,36)[b]{$k$}
\GCirc(0,0){10}{.3}
\end{picture}
\right) \right]_{k=0}
=
\begin{picture}(60,50)(-30,0)
\Line(0,0)(-16,-12)
\Line(0,0)(-12,-16)
\Text(0,-20)[]{\bf \ldots}
\Line(0,0)(16,-12)
\Line(0,0)(12,-16)
\CArc(0,10)(10,0,360)
\CArc(0,10)(12,0,360)
\GCirc(0,0){10}{.3}
\end{picture}
\label{diageqn}
\end{eqnarray}
where the lines at the bottom are external lines, the double line
represents only the magnetic part of the propagator,
and the blob denotes everything else in the diagram.
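Although \Eq{crucial} is easily established analytically, it can also
be checked numerically for arbitrary off-shell momenta. The following
sketch, which is only an illustration and not part of the derivation,
does so using explicit Dirac matrices; the values of $m$, $eQ$, $l$
and $F^{\mu\nu}$ are arbitrary test values, and the $k$ derivative is
approximated by central finite differences.
\begin{verbatim}
# Numerical check of Eq. (crucial): -(i/2) F^{mu nu} d/dk^nu C_mu(k,l)
# at k = 0 equals i S_B(l).  Metric (+,-,-,-), Dirac representation.
import numpy as np

np.random.seed(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g = [np.block([[I2, Z2], [Z2, -I2]])]            # gamma^0
g += [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g_lo = [sum(eta[m, n] * g[n] for n in range(4)) for m in range(4)]

m, eQ = 1.3, -0.7                      # arbitrary mass and charge
l = np.array([2.1, 0.4, -0.8, 1.5])    # generic off-shell momentum
F = np.random.randn(4, 4)
F = F - F.T                            # constant antisymmetric F^{mu nu}

def S0(p):                             # free propagator (slash p - m)^{-1}
    return np.linalg.inv(sum(p[n] * g_lo[n] for n in range(4))
                         - m * np.eye(4))

# S_B(l) from Eq. (SB), using dS0/dp^nu = -S0 gamma_nu S0
SB = sum((1j * eQ / 2) * F[mu, nu]
         * S0(l) @ g_lo[mu] @ (-S0(l) @ g_lo[nu] @ S0(l))
         for mu in range(4) for nu in range(4))

# -(i/2) F^{mu nu} d/dk^nu [iS0(l) (-ieQ gamma_mu) iS0(l-k)] at k = 0
h = 1e-6
lhs = np.zeros((4, 4), dtype=complex)
for mu in range(4):
    for nu in range(4):
        dk = np.zeros(4)
        dk[nu] = h
        dC = (1j * S0(l) @ (-1j * eQ * g_lo[mu]) @ (1j * S0(l - dk))
              - 1j * S0(l) @ (-1j * eQ * g_lo[mu]) @ (1j * S0(l + dk))
              ) / (2 * h)
        lhs += -(1j / 2) * F[mu, nu] * dC

print(np.allclose(lhs, 1j * SB, atol=1e-5))      # expect: True
\end{verbatim}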
Let us now consider a more general amplitude that may involve
diagrams in which
the photon line, in the P-method, attaches to an internal scalar line.
Denoting the charge of the scalar by $eQ$,
the factor analogous to the one quoted in \Eq{Cmu} would be in this case
\begin{eqnarray}
C_\mu^\prime (k,l) \equiv i\Delta_0(l) \Big( -ieQ (2l_\mu-k_\mu) \Big)
i\Delta_0(l - k) \,.
\end{eqnarray}
Although the electromagnetic coupling of the scalar
is momentum-dependent, it still follows that
\begin{eqnarray}
\null - {i\over 2} F^{\mu\nu}
\left[ {\partial \over \partial k^\nu} C_\mu^\prime (k,l) \right]_{k=0} =
i\Delta_B(l) \,,
\label{crucialS}
\end{eqnarray}
as can be simply verified. Thus, the diagrammatic equation of
\Eq{diageqn} applies to this case as well. Furthermore, this conclusion is
unchanged if the scalar mode is unphysical.
If the photon is attached to an internal $W$-boson line, the factor
analogous to the one quoted in \Eq{Cmu} is given by
\begin{eqnarray}
{C^{\lambda\rho}}_\mu (k,l) \equiv iD^{\alpha\lambda}_0(l) \Big( ie
O_{\mu\alpha\beta} (k,l-k) \Big) iD^{\beta\rho}_0(l - k) \,,
\end{eqnarray}
where $O_{\mu\alpha\beta}$ is the tensor defined in \Eq{cubic}.
Using the analog of \Eq{lkderiv} for the $W$ propagator, we can write
\begin{eqnarray}
\lim_{k\to0} {\partial \over \partial k^\nu} {C^{\lambda\rho}}_\mu &=&
-ieD^{\alpha\lambda}_0(l)\lim_{k\to0} {\partial \over \partial k^\nu}
\left[ O_{\mu\alpha\beta} (k,l-k) D^{\beta\rho}_0(l - k)
\right] \nonumber \\
&=& -ieD^{\alpha\lambda}_0(l) \left[ - O_{\mu\alpha\beta} (0,l)
{\partial \over \partial l^\nu} + \left(
{\partial \over \partial k^\nu} O_{\mu\alpha\beta} (k,l-k)
\right)_{k=0} \right] D^{\beta\rho}_0(l)\,,\nonumber\\
\end{eqnarray}
and by direct computation using \Eq{cubic} it follows that
\begin{eqnarray}
{\partial \over \partial k^\nu} O_{\mu\alpha\beta} (k,l-k) =
- \eta_{\mu\nu} \eta_{\alpha\beta} - (2-\zeta)
\eta_{\nu\alpha} \eta_{\mu\beta} + 2 \eta_{\alpha\mu} \eta_{\nu\beta} \,.
\end{eqnarray}
Therefore, contracting with the antisymmetric tensor $F^{\mu\nu}$, we obtain
\begin{eqnarray}
\null - {i\over 2} F^{\mu\nu} \left[ {\partial \over \partial k^\nu}
{C^{\lambda\rho}}_\mu \right]_{k=0} = i D^{\lambda\rho}_B \,,
\label{crucialW}
\end{eqnarray}
which proves the equivalence also when the photon is attached to a
$W$-boson line.
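As an aside, the derivative formula for $O_{\mu\alpha\beta}$ used in
this argument can be verified symbolically. The short sketch below,
included only as an illustration, assumes the index conventions of
\Eq{cubic} (the first index being the one attached to the photon).
\begin{verbatim}
# Symbolic check of d/dk^nu O_{mu alpha beta}(k, l-k), Eq. (cubic).
import sympy as sp

zeta = sp.symbols('zeta')
eta = sp.diag(1, -1, -1, -1)            # metric tensor
k = sp.symbols('k0:4')                  # contravariant components
l = sp.symbols('l0:4')

def lower(v):                           # v_mu = eta_{mu nu} v^nu
    return [sum(eta[m, n] * v[n] for n in range(4)) for m in range(4)]

kl, ll = lower(k), lower(l)

def O(a, b, c):                 # O_{a b c}(k, l-k), photon index first
    return (eta[b, c] * (2 * ll[a] - kl[a])
            - eta[c, a] * ((2 - zeta) * kl[b] + zeta * ll[b])
            + eta[a, b] * (2 * kl[c] - zeta * ll[c]))

ok = True
for mu in range(4):
    for al in range(4):
        for be in range(4):
            for nu in range(4):
                lhs = sp.diff(O(mu, al, be), k[nu])
                rhs = (- eta[mu, nu] * eta[al, be]
                       - (2 - zeta) * eta[nu, al] * eta[mu, be]
                       + 2 * eta[al, mu] * eta[nu, be])
                ok = ok and sp.simplify(lhs - rhs) == 0
print(ok)                               # expect: True
\end{verbatim}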
Notice that we have exhausted all the possible ways in which
the photon, in the P-method, can be attached to an internal line
in a diagram since, as emphasized earlier,
there is no $W\phi$-photon trilinear coupling in
the gauge chosen for the $W$'s.
This completes the proof that there is a one-to-one correspondence
between the diagrams in the S-method, and the type-1 diagrams in the
P-method, in which the photon appears attached to the internal
lines of the diagram. Therefore, for transition amplitudes
for which there are no contributions from type-2 diagrams,
both methods give equivalent results.
However, as we will see next, there are
amplitudes for which the P-method involves type-2 diagrams, which have
no counterpart in the S-method.
\section{Examples of non-equivalence of the two approaches}\label{s:noneq}
\subsection{Processes involving charged gauge bosons}\label{chgb}
Let us consider the amplitude for the process
\begin{eqnarray}
Z(p) \stackrel{B}{\rightarrow} \nu(p_1) \overline\nu(p_2) \,,
\label{Zdk}
\end{eqnarray}
and focus our attention on the modification to the tree-level,
$B$-independent, term due to the background magnetic field.
Without losing any of the essential features that are important
for us, we can simplify the discussion by
assuming that the neutrinos are
massless, so that there is no neutrino mixing, and the final state
contains the electron neutrino and its antiparticle. The one-loop
diagrams which contribute to the amplitude are shown in
\fig{f:Zdk}, where the internal fermion lines represent the electron.
\begin{figure}[t]
\begin{center}
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)25
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\SetWidth{1.5}
\ArrowLine(30,0)(60,30)
\ArrowLine(60,-30)(30,0)
\Photon(60,30)(60,-30)2{7.5}
\Text(50,-50)[]{\large (a)}
\end{picture}
\quad \quad
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)25
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\SetWidth{1.5}
\ArrowLine(30,0)(60,30)
\ArrowLine(60,-30)(30,0)
\DashLine(60,30)(60,-30)4
\Text(50,-50)[]{\large (a$'$)}
\end{picture}
\\
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)2{5.5}
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\SetWidth{1.5}
\Photon(30,0)(60,30)2{5.5}
\Photon(60,-30)(30,0)2{5.5}
\ArrowLine(60,-30)(60,30)
\Text(50,-50)[]{\large (b)}
\end{picture}
\quad \quad
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)2{5.5}
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\SetWidth{1.5}
\DashLine(30,0)(60,30)4
\DashLine(60,-30)(30,0)4
\ArrowLine(60,-30)(60,30)
\Text(50,-50)[]{\large (b$'$)}
\end{picture}
\end{center}
\caption{One-loop diagrams for the process $Z\to \nu\overline\nu$. The
internal solid, wavy and dashed lines represent the electron, the
$W$-boson and the unphysical charged Higgs, respectively. In the
S-method, we should take Schwinger propagators for the
thick lines.}\label{f:Zdk}
\end{figure}
In the S-method, the Schwinger propagators must be used
for the internal lines in these diagrams.
In the linear approximation that we are using,
we need to consider the $B$-dependent part of
the propagator of one internal line at a time.
\begin{figure}[t]
\begin{center}
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)25
\ArrowLine(30,0)(60,30)
\ArrowLine(60,-30)(30,0)
\Photon(60,30)(60,-30)27
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\Photon(45,15)(15,15)25
\Text(50,-50)[]{\large (a1)}
\end{picture}
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)25
\ArrowLine(30,0)(60,30)
\ArrowLine(60,-30)(30,0)
\Photon(60,30)(60,-30)27
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\Photon(45,-15)(15,-15)25
\Text(50,-50)[]{\large (a2)}
\end{picture}
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)25
\ArrowLine(30,0)(60,30)
\ArrowLine(60,-30)(30,0)
\Photon(60,30)(60,-30)27
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\Photon(60,0)(90,0)25
\Text(50,-50)[]{\large (a3)}
\end{picture}
\end{center}
\caption{1-loop diagrams for the process $Z + \gamma\to
\nu\overline\nu$ obtained by attaching a photon line to the diagram of
\fig{f:Zdk}a.}\label{f:Zdk+ph}
\end{figure}
On the other hand, in the P-method, the diagrams are all those
that can be obtained by attaching a
photon to each diagram of \fig{f:Zdk}, in all possible ways.
For example, \fig{f:Zdk+ph} shows all the possible ways
in which a photon line can be
attached to the diagram of \fig{f:Zdk}a. From the discussion of
Sec.~\ref{s:equiv}, and in particular by means of the identity
in \Eq{crucialW}, it follows that the P-method evaluation
of diagram (a3) of \fig{f:Zdk+ph},
yields a result that is identical to
the contribution that comes from the $B$-dependent part of the
$W$-boson propagator in the S-method evaluation of \fig{f:Zdk}a.
Similarly, \Eq{crucial} implies that the P-method calculation
of the diagrams (a1) and (a2) of \fig{f:Zdk+ph} is identical
to the contribution that comes from the $B$-dependent part of the
electron propagator in the S-method evaluation of \fig{f:Zdk}a.
In summary, the S-method evaluation of \fig{f:Zdk}a,
and the P-method evaluation of the
diagrams (a1), (a2) and (a3) of \fig{f:Zdk+ph}, yield the same result.
The same conclusion holds for \fig{f:Zdk}a$'$ and the corresponding
diagrams of the P-method, since in the latter set of diagrams
the photon is attached to an internal line,
and the identities in \Eqss{crucial}{crucialS}{crucialW}
can be invoked once again.
In the terminology that we have used, the P-method diagrams
that we had to consider so far are type-1 diagrams.
The situation with diagrams \fig{f:Zdk}b and \fig{f:Zdk}b$'$
is different because attaching the photon line
to each of the internal lines
does not exhaust all the possibilities.
For example, in
\fig{f:Zdk}b, an extra photon line can be added to the $WWZ$ vertex,
turning it into a $WWZ\gamma$ vertex. The same can be done to the
diagram in \fig{f:Zdk}b$'$ as well.
The resulting diagrams, shown in
\fig{f:Zdk+phex}, have no counterpart in the Schwinger method.
In our terminology, they are type-2 diagrams.
We are not showing the type-1 diagrams that correspond to the S-method
evaluation of \fig{f:Zdk}b and \fig{f:Zdk}b$'$,
since they are similar to those
shown in \fig{f:Zdk+ph} and, from the arguments of
Sec.~\ref{s:equiv}, it follows that the evaluation of the
corresponding diagrams in the two methods gives the same result once again.
\begin{figure}[b]
\begin{center}
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)25
\Photon(30,0)(60,30)25
\Photon(60,-30)(30,0)25
\ArrowLine(60,-30)(60,30)
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\Photon(30,30)(30,0)25
\Text(50,-50)[]{\large (bx)}
\end{picture}
\begin{picture}(100,100)(0,-50)
\Photon(0,0)(30,0)25
\DashLine(30,0)(60,30)4
\DashLine(60,-30)(30,0)4
\ArrowLine(60,-30)(60,30)
\ArrowLine(90,-30)(60,-30)
\ArrowLine(60,30)(90,30)
\Photon(30,30)(30,0)25
\Text(50,-50)[]{\large (b$'$x)}
\end{picture}
\end{center}
\caption{1-loop diagrams for the process $Z + \gamma\to \nu\overline\nu$,
obtained by attaching a photon line to the $Z$ vertices of the
diagrams of \fig{f:Zdk}b and \fig{f:Zdk}b$'$.}\label{f:Zdk+phex}
\end{figure}
To complete the demonstration of non-equivalence of the two methods,
we now need to show that the contribution of the type-2 diagrams shown
in \fig{f:Zdk+phex} is non-vanishing. Following the
prescription for constructing the auxiliary amplitudes
that correspond to the diagrams, we can write them in the form
\begin{eqnarray}
-i \overline\Gamma_{\mu\nu}^{\rm (bx)} &=&
\left({ig \over \sqrt{2}} \right)^2 \int {d^4l \over (2\pi)^4} \Big[
\gamma_\alpha L iS_0(p-p_2-l) \gamma_\beta L \Big] iD^{\alpha\sigma}_0(l+k)
iD^{\beta\tau}_0(l-p) iQ_{\sigma\tau\mu\nu} \,, \\
-i \overline\Gamma_{\mu\nu}^{\rm (b'x)} &=&
\left({igm_e \over \sqrt{2} M_W} \right)^2 \int {d^4l \over (2\pi)^4}
\Big[ L iS_0(p-p_2-l) R \Big] i\Delta^{(W)}_0(l+k) i\Delta^{(W)}_0(l-p)
iQ_{\mu\nu} \,,
\end{eqnarray}
where the propagators of the $W$-boson and the unphysical charged Higgs
boson are given in \Eqs{D0}{DeltaHiggs}. In addition, we have denoted
by $Q_{\sigma\tau\mu\nu}$ and $Q_{\mu\nu}$ the quartic
couplings of the $Z\gamma$ pair with the $WW$ pair and with the unphysical
Higgs pair which, in the gauge introduced in \Eq{nonlinf},
are given by
\begin{eqnarray}
Q_{\sigma\tau\mu\nu} & = & -eg\cos\theta_W \Big( 2 \eta_{\sigma\tau}
\eta_{\mu\nu} - \eta_{\sigma\mu} \eta_{\tau\nu}
- \eta_{\sigma\nu} \eta_{\tau\mu} \Big)\,,\nonumber\\
Q_{\mu\nu} & = & {eg \cos 2\theta_W \over \cos \theta_W} \eta_{\mu\nu}\,,
\end{eqnarray}
respectively. From \Eq{ptbresult}, it then follows that
these diagrams give the following contribution to the
$Z\to \nu\overline\nu$ amplitude:
\begin{eqnarray}
\Gamma_\mu^{\rm (x)} &=& - {i\over 2} F^{\nu\lambda} \left[ {\partial
\over \partial k^\lambda} \overline\Gamma_{\mu\nu}^{\rm (bx+b'x)}
\right]_{k=0} \nonumber \\
&=& - {g^2\over 4} F^{\nu\lambda} \int {d^4l \over (2\pi)^4} \bigg[
\Big[ \gamma_\alpha L S_0(p_1-l) \gamma_\beta L \Big]
{\partial D^{\alpha\sigma}_0(l) \over \partial l^\lambda}
D^{\beta\tau}_0(l-p) Q_{\sigma\tau\mu\nu} \nonumber\\*
&& +
\left({m_e \over M_W} \right)^2
\Big[ L S_0(p_1-l) R \Big]
{\partial \Delta^{(W)}_0(l) \over \partial l^\lambda} \Delta^{(W)}_0(l-p)
Q_{\mu\nu} \bigg] \,,
\end{eqnarray}
where we have made use of relations analogous to \Eq{lkderiv}.
Certainly, this contribution is non-vanishing. It also shows the
general structure displayed in \Eq{type2}, where the two
propagators appear with different momenta despite the fact that the external
photon momentum has been set to zero.
\newcounter{subd}
\renewcommand{\thesubd}{\alph{subd}}
\begin{figure}
\def\subdiag{\stepcounter{subd}
\SetWidth{.5}
\ArrowLine(50,0)(0,0)
\ArrowLine(0,0)(-50,0)
\SetWidth{2}
\ArrowArc(0,20)(20,0,360)
\SetWidth{.5}
\Text(0,-20)[]{(\thesubd)}
}
\begin{center}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Graviton(35,0)(35,30)25
\end{picture}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Graviton(-35,0)(-35,30)25
\end{picture}
\\
\begin{picture}(120,100)(-60,-20)
\subdiag
\Graviton(0,40)(0,70)25
\end{picture}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Graviton(0,0)(40,-10)25
\end{picture}
\end{center}
\caption{One-loop diagrams for the process $\nu_1\rightarrow \nu_2 +
{\cal G}$ in the four-Fermi interaction approximation.
\label{f:nunu'g}
}
\end{figure}
\subsection{Theory with linearized gravity}\label{s:lingrav}
In the linear theory of gravity, the couplings of the graviton
with the matter and gauge fields can be determined by
writing the space-time metric in the form
\begin{eqnarray}
g_{\lambda\rho} = \eta_{\lambda\rho} + 2\kappa h_{\lambda\rho} \,,
\label{eta+h}
\end{eqnarray}
where $h_{\lambda\rho}$ is identified with the graviton field,
and then expanding the couplings in the Lagrangian
up to the linear order in $\kappa$. The constant $\kappa$
is defined in terms of Newton's gravitational constant by
\begin{eqnarray}
\kappa = \sqrt{8\pi G} \,,
\end{eqnarray}
which is such that the field $h_{\lambda\rho}$ has the properly
normalized kinetic energy term in the Lagrangian. This point of view
for treating processes involving the gravitational and Standard Model
interactions is the same as that employed in some recent works for the
calculation of quantum gravity amplitudes \cite{Bjerrum-Bohr:2002kt,
Bjerrum-Bohr:2002ks, Nieves:2005ti}, in which General Relativity is
treated as an effective field theory for energies below the Planck
scale \cite{Donoghue:1994dn, Burgess:2003jk}.
The interaction Lagrangian so obtained contains vertices
involving both the photon and graviton that give rise to
type-2 diagrams when we consider processes involving gravitons
in the presence of a $B$ field. In order to make the
discussion concrete, we consider the process
\begin{eqnarray}
\nu_1 \stackrel{B}{\rightarrow} \nu_2 + {\cal G} \,,
\label{nunu'g}
\end{eqnarray}
where ${\cal G}$ is the graviton, and $\nu_1$, $\nu_2$ are two different
neutrino eigenstates. The couplings involving gravitons that are
relevant to this process have been deduced in the literature
\cite{Choi:1994ax, Shim:1995ap, Nieves:1998xz, Nieves:1999rt,
Nieves:2000dc}. We do not reproduce them here since we will not carry
out the calculation of the amplitude for this process. We will limit
ourselves to indicating the diagrams that are relevant for such
a calculation using the two methods that we have considered.
In the absence of the $B$ field, the amplitude is determined from the
diagrams shown in \fig{f:nunu'g}, the tree-level contribution being
zero due to the fact that the gravitational couplings are flavor
diagonal and universal. The process cannot occur in the vacuum
because of angular momentum conservation. But in the background $B$
field, this obstruction is lifted. We can then try to calculate the
amplitude of the process by interpreting the charged fermion
propagators of \fig{f:nunu'g} as Schwinger propagators. Further,
although the internal loop can contain all charged leptons, we can
imagine considering only the contributions for the electron in the
loop, with the understanding that restoring the contributions of the
muon and the tau can be done in an analogous fashion. Carrying out the
calculation in this way is the prescription of the S-method.
\begin{figure}
\def\subdiag{\stepcounter{subd}
\SetWidth{.5}
\ArrowLine(50,0)(0,0)
\ArrowLine(0,0)(-50,0)
\ArrowArc(0,20)(20,0,360)
}
\begin{center}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Photon(0,40)(0,70)25
\Graviton(35,0)(35,30)25
\Text(0,-20)[]{(a)}
\end{picture}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Photon(0,40)(0,70)25
\Graviton(-35,0)(-35,30)25
\Text(0,-20)[]{(b)}
\end{picture}
\\
\begin{picture}(120,100)(-60,-20)
\subdiag
\Photon(-35,50)(-16,32)25
\Graviton(35,50)(16,32)25
\Text(0,-20)[]{(c1)}
\end{picture}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Photon(35,50)(16,32)25
\Graviton(-35,50)(-16,32)25
\Text(0,-20)[]{(c2)}
\end{picture}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Photon(0,40)(0,70)25
\Graviton(0,0)(40,-10)25
\Text(0,-20)[]{(d)}
\end{picture}
\\
\begin{picture}(120,100)(-60,-20)
\subdiag
\Photon(0,40)(-20,70)25
\Graviton(0,40)(20,70)25
\Text(0,-20)[]{(e)}
\end{picture}
\begin{picture}(120,100)(-60,-20)
\subdiag
\Photon(0,40)(0,70)25
\Graviton(0,55)(35,55)25
\Text(0,-20)[]{(f)}
\end{picture}
\end{center}
\caption{One-loop diagrams for the $\nu_1 + \gamma \rightarrow \nu_2 +
{\cal G}$ process in the 4-Fermi approximation. In our terminology,
the diagrams (a)-(d) are type-1 diagrams while (e) and (f) are type-2.}
\label{f:nupnu'g}
\end{figure}
Let us now shift our attention to the P-method and look at the process
\begin{eqnarray}
\nu_1 + \gamma \to \nu_2 + {\cal G} \,,
\label{nupnu'g}
\end{eqnarray}
for which the relevant diagrams are shown in \fig{f:nupnu'g}.
Clearly, diagrams (a)-(d) contain the usual QED vertex for the
electron and therefore the factor $C_\mu$ given in \Eq{Cmu}. The
calculation of the amplitudes of these diagrams yields the same result
as that obtained from the S-method calculation of the diagrams of
\fig{f:nunu'g}.
However, the contributions of the type-2 diagrams
(e) and (f) of \fig{f:nupnu'g} have no counterpart in the S-method.
If their contribution is left out, and the process of \Eq{nupnu'g} is
considered as a physical process
involving a real photon, the amplitude does not satisfy
the transversality requirements that follow from the gravitational and
electromagnetic gauge invariance.
On the other hand, the total amplitude, taking those two diagrams
into account, does satisfy the aforementioned conditions.
We have verified this explicitly.
This reinforces our conclusion that the S-method does not yield
the total amplitude in those cases in which the P-method involves
type-2 diagrams.
\section{Conclusions}\label{s:conclu}
In this work we have considered the calculation of amplitudes
for processes that take place in a constant background magnetic field $B$
with the purpose of comparing two methods.
One, to which we refer as the P-method, uses the standard method
for the calculation of an amplitude
in an external field. The other, the S-method, utilizes the
Schwinger propagator for charged particles in a magnetic field.
We showed that there are processes for which the two methods
of calculating the amplitude yield equivalent results. We illustrated
this with the specific example of the neutrino forward scattering
amplitude in a magnetic field, and we indicated specifically the
propagator identities that operate to guarantee the equivalence
in that case.
However, we pointed out that there are processes
for which the Schwinger propagator method does not yield the total
amplitude. For illustrative purposes, we considered the amplitude for $Z$ decay
into a neutrino-antineutrino pair in a $B$ field, in the context
of the standard model.
In that case, the diagrams that must be included in the P-method of calculation
can be divided into two groups, which we called type-1 and type-2 diagrams.
In the type-1 diagrams the electromagnetic vertex is attached to an internal
propagator line, while in the type-2 diagrams some external lines are
attached to that vertex as well. We showed that there is a one-to-one correspondence
between the diagrams of the S method and the type-1 diagrams of the P-method
and that their calculations yield the same result.
Therefore, for processes for which the type-2 diagrams do not exist,
both the P and S methods yield the same result, which is the case
of the neutrino processes that we have mentioned.
However, the type-2 diagrams have no counterpart in the S-method and therefore
this method does not yield the complete amplitude. The
total amplitude is obtained by taking the result of the type-1 diagrams,
which can be calculated by either method, and then adding
the result of the type-2 diagrams using the P method.
Moreover, we indicated that leaving out the type-2 diagrams in the
calculation of the amplitude does not yield a gauge-invariant result.
We have verified this explicitly by considering the neutrino
decay into another neutrino and a graviton in a $B$ field, which is another
process for which there are type-2 diagrams.
In that particular case, we also verified that by including the type-2
diagrams the gauge invariance of the amplitude is restored,
which reinforces our conclusion that the S-method does not yield
the total amplitude in the general case in which the P-method
contains type-2 diagrams.
Needless to say, in the original context of the Schwinger propagator,
namely QED, there are no type-2 diagrams, so that both methods
are equivalent. However, the situation is different when the
other particles and interactions of the standard model are taken
into account, and/or possibly the gravitational interactions
as well. Our remarks should be taken within these broader contexts.
\subsection*{Acknowledgements}
The work of JFN was supported by the U.S. National Science
Foundation under Grant 0139538.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{I}n many applications, it is essential to have high resolution images for precise analysis and inferences from them. However, a certain amount of degradation (blur and noise) is unavoidable in many imaging systems due to several factors, such as the limited aperture size, the involved medium (both atmosphere and optics), and the imaging sensors. With advances in computational technologies, digital image restoration techniques, such as image deblurring, have proven to be an economical way to enhance the resolution, signal-to-noise ratio, and contrast of the acquired images. In this paper we consider the situation where the information about the blur and the statistics of the noise is known a priori; the restoration is then referred to as nonblind deblurring. In general, image deblurring is an ill-posed inverse problem \cite{titterington1985general,demoment1989image}. In a Bayesian setting it can be expressed as a \emph{maximum-a-posteriori} estimation problem \cite{richardson1972bayesian}, which boils down to the following numerical optimization problem:
\begin{align}
\label{Eq:generic_img_deblurring_problem}
\V{x}^{\ast} := \operatorname*{arg\,min}_{\V{x}} \{ \varPsi_{\text{data}} (\V{y}, \V{x}) + \lambda \ \varPsi_{\text{prior}} (\V{x}) \}
\end{align}
where the vectors $\V{y} \in \Set{R}^{m}$ and $\V{x} \in \Set{R}^{n}$ represent the observed (acquired blurry and noisy) image, and the unknown crisp image to be estimated, respectively. Two dimensional (2D) images are represented as column vectors by lexicographically ordering their pixel values. The first term $\varPsi_{\text{data}}$ in (\ref{Eq:generic_img_deblurring_problem}) is called the \emph{likelihood} or data-fidelity term, and depends upon the noise and the image formation model. The second term $\varPsi_{\text{prior}}$ is called the \emph{a priori} term or regularizer, and imposes any prior knowledge on the unknown image $\V{x}$. The scalar parameter $\lambda$ controls the trade-off between the \emph{likelihood} and \emph{a priori} terms, and generally depends upon the noise level in the observed image.
\IEEEpubidadjcol
Blur in an acquired image is characterized by the impulse response of the imaging system, commonly known as the point-spread-function (PSF). As far
as a narrow field-of-view is concerned, the PSF can be considered constant throughout the field-of-view, leading to a shift-invariant blur. The
blurring operation in this case is modeled as a simple convolution between the sharp image and the PSF, and can be performed efficiently in the Fourier
domain. In many cases, however, the blur varies throughout the field-of-view due to several causes: relative motion between the camera and the scene; moving objects with respect to the background; variable defocussing of non-planar scenes with some objects located in front of or behind the in-focus plane; optical aberrations such as space-variant distortions, vignetting or phase aberrations. Here we consider shift-variant blur across the field-of-view of a planar scene, i.e., all the objects in the scene lie in the in-focus plane; see \cite{porter1984compositing} for a discussion of shift-variant blur in the case of a non-planar scene. Two subtly different situations can be considered for shift-variant blur in a planar scene: i) a large field-of-view is captured in several pieces by multiple imaging systems, each capturing a small portion of the whole scene, and ii) the same scene is captured by a single imaging system having a wide field-of-view.
In the former case, each imaging system can have a slightly different blur from the others, operating under different settings. Thus, the captured images can have slightly different blurs from each other, resulting in a piece-wise constant blur in the whole scene. We will refer to this situation as piece-wise constant shift-variant blur. In the latter case, the blur in the whole field-of-view can vary smoothly, e.g., blur due to optical aberrations. We will refer to this situation as smooth shift-variant blur. The shift-invariant and piece-wise constant shift-variant blurs can be considered as special cases of the smooth shift-variant blur. The blurring operation in the smooth shift-variant case cannot be modeled by a simple convolution, and there does not exist any efficient and straightforward way to perform the operation exactly. However, fast approximations of smooth shift-variant blur operators have been proposed in \cite{Nagy1998,Hirsch2010,Denis2011}; see \cite{Denis2015} for a detailed comparison between the different approximations.
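As an illustration of the flavor of these approximations, the
following sketch implements a toy windowed-convolution approximation
of a smoothly shift-variant blur: the image is blurred with each of a
few local PSFs, and the results are blended with smooth interpolation
windows. It is only meant as a simplified version of the schemes of
\cite{Nagy1998,Hirsch2010,Denis2011}; the tile centers, the Gaussian
windows, and the local PSFs below are illustrative choices, not part
of any of the cited methods.
\begin{verbatim}
# Toy windowed-convolution approximation of a smoothly shift-variant
# blur: blur the image with each local PSF, then blend the results
# with smooth (here Gaussian) interpolation windows.
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(x, psfs, centers, sigma_w):
    """x: 2D image; psfs[i]: PSF valid near centers[i] = (row, col);
    sigma_w: width of the Gaussian interpolation windows."""
    rows, cols = np.indices(x.shape)
    acc = np.zeros(x.shape)
    wsum = np.zeros(x.shape)
    for psf, (r0, c0) in zip(psfs, centers):
        w = np.exp(-((rows - r0) ** 2 + (cols - c0) ** 2)
                   / (2.0 * sigma_w ** 2))
        acc += w * fftconvolve(x, psf, mode='same')
        wsum += w
    return acc / np.maximum(wsum, 1e-12)   # normalized blend

def gaussian_psf(sigma, size=15):          # illustrative local PSF
    r = np.arange(size) - size // 2
    g = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

x = np.random.rand(256, 256)               # stand-in sharp image
centers = [(64, 64), (64, 192), (192, 64), (192, 192)]
psfs = [gaussian_psf(s) for s in (1.0, 1.5, 2.0, 2.5)]
y = shift_variant_blur(x, psfs, centers, sigma_w=80.0)
\end{verbatim}
Whether the windows are applied to the image before the convolutions
or to the blurred images afterwards leads to slightly different
approximations; see \cite{Denis2015} for a discussion of such
trade-offs.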
If we approximate the noise in the observed image by non-stationary white Gaussian noise, as considered in \cite{mugnier2004mistral} (see \cite{Lanteri2005, Benvenuto2008, Chouzenoux2015} for more refined noise models), then a discrete image formation model can
be written as:
\begin{equation}
\label{Eq:image_formation}
\V{y} = \M{H} \ \V{x} + \V{\varepsilon}
\end{equation}
where the matrix $\M{H} \in \Set{R}^{m\times n}, \ m < n$, denotes the blur operator\footnote{The blur operator is a rectangular matrix since the size of the observed image is restricted by the physical size of the image sensor, i.e., the sensor captures a little less than what the optics can see.}, and $\V{\varepsilon}$ is the zero-mean non-stationary white Gaussian noise, i.e., $\V{\varepsilon}(\ell) \sim \mathcal{N}(0, \V{\sigma}^{2}(\ell))$ with $\V{\sigma}^{2}(\ell)$ denoting the noise variance at the $\ell$th component of the vector $\V{y}$. We assume that the raw captured image is preprocessed to yield an image that closely follows the above image formation model (\ref{Eq:image_formation}). The preprocessing may include the correction of the background and of the flat field, the correction of defective pixels and possibly of their correlated noise, and the scaling of the image in photons.
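For concreteness, the following short sketch simulates an observation
according to the model (\ref{Eq:image_formation}); the variance map
used here (a photon-like term plus a constant readout floor) is only
an illustrative choice, in the spirit of the nonstationary Gaussian
approximation of \cite{mugnier2004mistral}.
\begin{verbatim}
# Simulate y = H x + eps with zero-mean, non-stationary white
# Gaussian noise of known variance map sigma2.
import numpy as np

def simulate_observation(Hx, gain=1.0, ron2=4.0, seed=0):
    """Hx: noiseless blurred image H x; returns (y, sigma2)."""
    rng = np.random.default_rng(seed)
    sigma2 = gain * np.maximum(Hx, 0.0) + ron2   # sigma^2 at each pixel
    y = Hx + rng.standard_normal(Hx.shape) * np.sqrt(sigma2)
    return y, sigma2
\end{verbatim}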
Before we proceed further, let us introduce some more notations. Hereafter, we will use upper-case bold and lower-case bold letters for denoting matrices (and linear operators), and column vectors, respectively. For every set $\Mcal{T}$, we denote by $\arrowvert \Mcal{T}\arrowvert$ the cardinality of the set, and by $\Set{R}^{\Mcal{T}}$ the set of functions on $\Mcal{T}\to\Set{R}$. Let $\Mcal{X}$ represent a Euclidean space; we then denote by $\ps{\,.\,,\,.\,}$ the standard inner product on $\Mcal{X}$, and by $\|\,.\,\|$ the Euclidean norm. Letting $\M{V}$ denote a positive definite linear operator of $\Mcal{X}$ onto itself, we use the notation $\ps{\V{x},\V{y}}_\M{V}=\ps{\V{x},\M{V}\V{y}}$, and we denote by $\|\,.\,\|_\M{V}$ the corresponding norm. When $\M{V}$ is a diagonal operator of the form $(\M{V}\V{x}):\ell \mapsto {\V{\alpha}}(\ell) \ {\V{x}}(\ell)$, we equivalently denote $\|\V{x}\|_\M{V}^2$ by $\|\V{x}\|_{\V{\alpha}}^{2}=\sum_\ell \V{\alpha}(\ell)\V{x}(\ell)^2$, where $\V{\alpha} = (\alpha_1, \alpha_2,\cdots)$. We also denote by $\| \V{x}\|_{1,\M{V}}=\sum_\ell \V{\alpha}(\ell) \ |\V{x}(\ell)|$ the weighted $L_1$-norm, simply noted $\|\V{x}\|_1$ when $\M{V}$ is the identity. Moreover, if $\M{L}$ is a linear operator, we denote by $\M{L}^{^{\TransposeLetter}}$ its adjoint operator.
Considering the above image formation model (\ref{Eq:image_formation}) with a known PSF, the (nonblind) image deblurring problem (\ref{Eq:generic_img_deblurring_problem}) can be explicitly expressed as:
\begin{align}
\label{Eq:specific_img_deblurring_problem}
\V{x}^{\ast} := \operatorname*{arg\,min}_{\V{x} \geq \V 0} \{ \frac{1}{2} \| \V{y} - \M{H} \ \V{x} \|^{2}_{\M{W}} + \lambda \ \phi(\M{D} \ \V{x}) \}
\end{align}
where the matrix $\M{W}$ is diagonal with its components given by $\M{W}(\ell,\ell) = 1 / \V{\sigma}^{2}(\ell)$ for observed pixels and $\M{W}(\ell,\ell) = 0$ for unmeasured pixels. The function $\phi$ represents some regularizer (e.g., Tikhonov's $L_{2}$-norm, the sparsity-promoting $L_{1}$-norm, or Huber's mixed $L_{1}$-$L_{2}$-norm), and $\M{D}$ is some linear operator, e.g., a finite forward difference or some discrete wavelet transform. The formalism (\ref{Eq:specific_img_deblurring_problem}) of the image deblurring problem is referred to as the analysis-based approach, where the unknown variable is expressed in the image domain itself. An alternative formalism, referred to as the synthesis-based approach, is also considered in the literature, where the unknown variable lives in some transformed domain, e.g., the coefficients of some discrete wavelet transform; see \cite{Elad2007} for a detailed discussion of the two approaches and their comparison. Without loss of generality, in this paper we consider only the analysis-based approach for expressing the image deblurring problem.
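To make the role of $\M{W}$ explicit, note that the data-fidelity term in (\ref{Eq:specific_img_deblurring_problem}) is simply the noise-weighted sum of squared residuals over the measured pixels,
$$
\frac{1}{2} \| \V{y} - \M{H} \ \V{x} \|^{2}_{\M{W}} = \frac{1}{2} \sum_{\ell \,:\, \M{W}(\ell,\ell) \neq 0} \frac{\left(\V{y}(\ell) - (\M{H}\V{x})(\ell)\right)^{2}}{\V{\sigma}^{2}(\ell)}\,,
$$
so that noisier pixels contribute less, and unmeasured pixels do not contribute at all.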
Depending upon the structures of the two terms in the optimization problem (\ref{Eq:specific_img_deblurring_problem}), the solution can be obtained in a single step, i.e., as a closed-form solution (e.g., Wiener deconvolution), or one has to rely on iterative solvers, which may require, at each iteration, a computational expense equal to or even larger than that of the closed-form solution\footnote{Applying the blur operator $\M{H}$ or its adjoint $\M{H}^{^{\TransposeLetter}}$ is an expensive operation, and one may need to apply them several times per iteration of the solver.}. The latter is the more frequent case in many applications, e.g., when the regularization term is not differentiable (nonsmooth). Although there exists a vast literature on numerical optimization, we refer the readers to \cite{Beck2009,Afonso2010,Matakos2013,Mourya2015} and the references therein for some of the recent and fast optimization algorithms specially developed for inverse problems in imaging.
\subsection{Motivation: Deblurring Extremely Large Image}
\label{subsec:problem}
With the advances in imaging technologies, applications in astronomy, satellite imagery and other fields are able to capture extremely large images of a wide field-of-view. The size of such images can vary from a few tens of megapixels up to gigapixels. As discussed previously, we consider two imaging situations where such large images are acquired: piece-wise constant and smooth shift-variant blur. In the former situation, the images from multiple narrow field-of-view imaging systems are supposed to be mosaicked together into a single image, provided that their relative positions within the whole field-of-view are known. For further simplification, we assume that all the narrow field-of-view imaging systems have the same noise performance. In both imaging situations, it is highly desirable to obtain a single crisp image from the acquired image(s) for any further applications.
Image deblurring is a well-studied topic, and a vast literature exists \cite{richardson1972bayesian, titterington1985general, demoment1989image, Thiébaut2005, hansen2006deblurring, Benvenuto2008, Figueiredo2009, Matakos2013} for moderate size (few megapixels) images with shift-invariant blur.
These methods differ from each other in the way they formulate the data-fidelity term or the regularization term, and in the optimization algorithms they propose to solve the resulting problem. The image deblurring problem becomes more complicated when one considers images suffering from shift-variant blur. Some recent works \cite{Nagy1998,Hirsch2010,Denis2011,Denis2015} proposed efficient methods for deblurring images with shift-variant blur. However, all the methods listed above are designed and implemented to work efficiently on a centralized computing system, possibly with multiple processor cores sharing a single memory space. Such a computing system is commonly referred to as a multi-threaded shared memory system based on the single instruction multiple data (SIMD) architecture. Hereafter, we will refer to all such existing methods as \emph{centralized} deblurring techniques. In contrast to deblurring a moderate size image, deblurring an extremely large image is a challenging problem, since the largest image that can be handled is limited by the capacity of the shared memory available on the centralized system. It is not cost effective to build a centralized system with several processor cores and a huge shared memory when modern distributed computing systems (consisting of several connected processing nodes, each having a reasonable amount of computational resources) are proving to be a far more economical way of solving huge-scale problems. Many domains such as machine learning and data mining \cite{bekkerman2011scaling} are already benefiting from the distributed computing approach for discovering patterns in huge datasets distributed over different nodes. To the best of our knowledge, there are very few works, e.g., \cite{cui2013distributed, wang2013distributed, ferrari2014distributed, meillier2016two}, considering a distributed computing approach for image restoration/reconstruction problems. Within this context, we propose a distributed image deblurring algorithm for large images acquired in either of the imaging situations mentioned above.
\subsection{Our contributions}
\label{subsec:contributions}
In this paper, we propose a distributed algorithm for deblurring large images that needs infrequent inter-node communication. Our algorithm can handle the acquired image(s) from both imaging situations: smooth shift-variant and piece-wise constant shift-variant blur. For the first imaging situation, we consider splitting the large image into sufficiently small overlapping blocks and then deblurring them simultaneously on several processing nodes while maintaining certain coherencies among them, so as to obtain a single crisp image without any transition artifacts among the deblurred blocks. For the second imaging situation, we assume that the relative positions of the narrow images in the whole field-of-view are known, and that there are certain overlaps among them. If a narrow image is not sufficiently small to fit in the memory of a node, then we can split it further into smaller overlapping blocks. Similar to the first situation, we deblur the small images simultaneously to obtain a single crisp image. To do so, we reformulate the image deblurring problem (\ref{Eq:generic_img_deblurring_problem}) into a distributed optimization problem with consensus, and then present an efficient optimization method to solve it. Our distributed deblurring algorithm is rather generic in the sense that it can handle different situations such as shift-invariant or shift-variant blur with any possible combination of the data-fidelity and regularization terms. Depending on the structures of the data-fidelity term and the regularizer, we can select any fast optimization algorithm for solving the local deblurring problems at the processing nodes. By several numerical experiments, we show that our algorithm is cost effective in terms of computational resources for large images, and is able to obtain a similar quality of deblurred images as the existing \emph{centralized} deblurring techniques that are applicable only to images of moderate size.
The remainder of the paper is organized as follows. In Section \ref{sec:distributed_image_deblurring}, we discuss the difficulties associated with distributed formulations of the image deblurring problem, and the possible approaches to overcome them. In Section \ref{sec:proposed_approach}, we present the distributed formulation and an efficient distributed algorithm to solve it. We discuss the criteria for selecting the different parameters associated with our approach. In Section \ref{sec:numerical_experiments_results}, we discuss the implementation details and present several numerical experiments in which we compare the results obtained by the different approaches. Finally, in Section \ref{sec:conclusion}, we conclude our work with possible future enhancements.
\section{Distributed Computing Approach for Image Deblurring}
\label{sec:distributed_image_deblurring}
A generic approach for dealing with a large-scale problem is the ``divide and conquer'' strategy, in which the large-scale problem is decomposed into smaller subproblems in such a way that solving them and assembling their results produces the expected final result or, at least, something reasonably close to it. The distributed computing approach has emerged as a framework for such a strategy. Distributed computing systems consist of several processing nodes, each having a reasonable amount of computational resources (in terms of memory and processor cores), connected together via some communication network so that the nodes can exchange messages to achieve a certain common goal. They are built on the multiple instructions multiple data (MIMD) architecture. Nodes in a distributed system are not necessarily located at the same physical location, so high-speed communication links may not always be feasible among them. Thus, an efficient distributed algorithm is one that is computation intensive rather than communication demanding. In many applications such as machine learning and data mining, distributed approaches have become the de-facto standard for efficiently estimating extremely large numbers of parameters from huge datasets distributed over different nodes. Taking such an inspiration, one can devise a distributed strategy for deblurring extremely large images. A possible approach is to use the distributed array abstraction available on modern distributed systems, and reimplement the standard \emph{centralized} deblurring techniques using distributed arrays instead of a shared memory array. However, the bottleneck of such an approach would be the extensive data communication among the nodes at each iteration of the optimization algorithm. To overcome this, a straightforward approach would be to split the given observed image $\V{y}$ into $N$ smaller contiguous blocks $\V{y}_{i} \in \Set{R}^{m_{i}}, i=1,2,\cdots,N $, and deblur them independently to obtain the deblurred blocks $\hat{\V{x}}_{i} \in \Set{R}^{n_{i}}, i=1,\cdots,N$, which can be merged to obtain the single deblurred image $\hat{\V{x}} \in \Set{R}^{n}$. However, several issues arise from both the theoretical and practical points of view, as discussed below.
\subsection{Issues with Distributed Approach for Image Deblurring}
\label{subsec:the_difficulties}
For the sake of simplicity, let us first consider the shift-invariant image deblurring problem, and point out some of the major issues:
\begin{enumerate}
\item Practically, it is infeasible to explicitly create the blur operator matrix $\M{H}$. For this reason, the blur operation is efficiently performed in the Fourier domain using the Fast Fourier Transform (FFT) algorithm. Thus, there is no straightforward way to split the matrix $\M{H}$ into block matrices $\subM{H}_{i} \in \Set{R}^{m_{i}\times n_{i}}$ such that they could be used to perform independent deblurring of the observed blocks $\V{y}_{i}$ with a final result equivalent to using the original matrix $\M{H}$ on the whole image $\V{y}$. An inappropriately chosen approximation of the block matrices $\subM{H}_{i}$ may create artifacts such as nonsmooth transitions among the deblurred blocks.
\item The blur operation performed in the Fourier domain assumes a circular boundary condition. The circulant blur assumption in the image deblurring process can lead to severe artifacts due to the discontinuity at the boundaries caused by the periodic extension of the image \cite{Matakos2013}. Thus, deblurring the individual blocks $\V{y}_{i}$ can introduce boundary artifacts into them that will eventually worsen the quality of the final deblurred image.
\item Splitting the image into blocks can also raise issues with the regularization term. Many frequently used regularizers are not separable, i.e., their value on the whole image is not equivalent to the sum of their values on the blocks. Such an approximation can result in structural incoherencies among the deblurred blocks.
\end{enumerate}
The issues mentioned above further complicate the shift-variant image deblurring problem, since there is no efficient and straightforward way to formulate the shift-variant blur operator. Considering the above issues associated with the na\"{\i}ve ``split, deblur independently, and merge'' approach, it is obvious that one needs to approximate the problem (\ref{Eq:specific_img_deblurring_problem}) in such a way that it can be solved efficiently in a distributed manner, yet yields a solution that remains reasonably close to the solution of the original problem (\ref{Eq:specific_img_deblurring_problem}). In the next section, we describe how to tackle the above issues, and then we present our distributed image deblurring algorithm.
\subsection{Tackling the Issues in Distributed Image Deblurring}
\label{subsec:tackling_the_difficulties}
As discussed above, it is a nontrivial problem to explicitly create a blur operator and then decompose it into block matrices that can be operated independently for distributed image deblurring. However, we can formulate an approximation of the block matrices $\subM{H}_{i}$ such that, when they are used in the deblurring process, certain homogeneities among the deblurred blocks are imposed. Let us consider the smooth shift-variant blur case. Provided that we are able to sample $N$ local PSFs $\V{h}_{i}, i=1,\cdots, N$ at regular grid points within the field-of-view, the approximation of the shift-variant blur operator proposed in \cite{Hirsch2010,Denis2011} suggests an interesting idea to formulate the block matrices $\subM{H}_{i}$. Their approximation of the shift-variant blur operator is based upon the idea that the PSF at any point within the field-of-view can be well approximated by a linear combination (interpolation) of the PSFs sampled at the neighboring grid points. With this, the shift-variant blurring operation can be written as:
$\V{y} = \M{R} \sum_{i=1}^{N} \subM{Z}_{i} \subM{H}_{i} \subM{W}_{i} \subM{C}_{i} \V{x}$, where $\subM{C}_{i}$ are chopping operators that select overlapping blocks from the image $\V{x}$, $\subM{W}_{i} = \Diag(\V{\omega}_{i})$ are interpolation weights corresponding to each block, $\subM{H}_{i}$ are blur operators corresponding to the sampled local PSFs $\V{h}_{i}$, $\subM{Z}_{i}$ are operators that zero-pad the corresponding blurred blocks, keeping them in the same relative position with respect to the whole image, and $\M{R}$ is a chopping operator that restricts the final blurred image to the sensor size. The interpolation weights $\V{\omega}_{i}$ are non-zero only within a certain locality, depending upon the order of the interpolation. This approximation allows performing shift-variant blurring by combining several local shift-invariant blurrings, which can be carried out efficiently in the Fourier domain, and it renders a smooth variation of the blur throughout the whole image. Higher accuracy of the blurring operation can be achieved by a denser sampling of PSFs within the field-of-view. The image deblurring results presented in \cite{Hirsch2010, Denis2015} suggest that first-order interpolation (e.g., $\V{\omega}_{i}$ is a ramp within the range $[0, 1]$ between two adjacent grid points for a 1D signal, as illustrated in Fig.~\ref{fig:demonstration1}) is sufficient to achieve a reasonably good quality of deblurred images. Using first-order interpolation, the shift-variant blur operation is only four times as expensive as the shift-invariant blur operation on an image of the same size. However, this fast approximation of the shift-variant blur operator is efficient only for a centralized multi-threaded shared memory implementation. It is not directly applicable to a distributed setting, as it would lead to intensive data communication among the nodes each time the blur operator or its adjoint is applied by an iterative solver. Nevertheless, this approximation suggests the following idea for distributed image deblurring: i) split the observed image $\V{y}$ into $N$ overlapping blocks $\V{y}_{i}$, ii) generate 2D first-order interpolation weights $\V{\omega}_{i}$ corresponding to each block, iii) provided with $N$ PSFs $\V{h}_{i}$ sampled locally within each block, distribute the observed blocks, the PSFs, and the interpolation weights among the $N$ nodes, and iv) then on each node perform local image deblurring while maintaining a certain consensus among the overlapping pixels of the adjacent blocks. The consensus operation can be a weighted averaging among the overlapping pixels. Using the interpolation weights $\V{\omega}_{i}$ for the averaging operation indirectly imposes the smooth variation of blur across the whole observed image. Without loss of generality, the aforementioned strategy is also applicable to the cases where an image suffers from shift-invariant or piece-wise constant shift-variant blur, since these two operations can be well approximated by the above fast shift-variant blur operator for a smooth blur variation.
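For illustration, the following 1D Julia sketch (our own illustration of the interpolation idea of \cite{Hirsch2010,Denis2011}, not their code; the names \texttt{conv1} and \texttt{shift\_variant\_blur} are hypothetical) realizes $\V{y} = \M{R} \sum_i \subM{Z}_{i} \subM{H}_{i} \subM{W}_{i} \subM{C}_{i} \V{x}$, with the chopping and zero-padding operators absorbed by defining each weight map over the whole support of $\V{x}$:
\begin{verbatim}
# Minimal 1D sketch of the PSF-interpolation blur approximation.
# Each weight map w[i] is nonzero only near the i-th grid point and
# the maps sum to one pixelwise; conv1 is a plain "same"-size
# convolution (an FFT would be used in practice).
function conv1(h::Vector{Float64}, x::Vector{Float64})
    n, p = length(x), length(h)
    c = div(p, 2) + 1                    # center of the PSF
    y = zeros(n)
    for i in 1:n, a in 1:p
        j = i + a - c
        if 1 <= j <= n
            y[i] += h[a] * x[j]
        end
    end
    return y
end

function shift_variant_blur(x::Vector{Float64},
                            psfs::Vector{Vector{Float64}},
                            w::Vector{Vector{Float64}})
    y = zeros(length(x))
    for i in eachindex(psfs)
        y .+= conv1(psfs[i], w[i] .* x)  # blur weighted block, accumulate
    end
    return y
end
\end{verbatim}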
As pointed out above, performing the local deblurring in the Fourier domain may lead to boundary artifacts; to avoid such artifacts in the deblurred blocks, we borrow an idea from \cite{Matakos2013}. We express the local blur operator as $\subM{H}_{i} = \boldsymbol{\mathcal{C}}_{i} \boldsymbol{\mathcal{H}}_{i}, i=1,\cdots, N$, where $\boldsymbol{\Mcal{H}}_{i} \in \Set{R}^{n_{i} \times n_{i}}$ are circular convolution matrices formed from the PSFs $\V{h}_{i}$, and $\boldsymbol{\mathcal{C}}_{i} \in \Set{R}^{m_{i} \times n_{i}}$ are chopping operators, which restrict the convolution results to the valid regions. Expressing the local blur operators in this form allows an efficient non-circulant blur operation in the Fourier domain while suppressing boundary artifacts in the deblurred patches.
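A minimal Julia sketch of this decomposition, under the assumption that the PSF has been zero-padded to the block size and circularly shifted so that its center lies at index $(1,1)$, could read as follows (the function name and the \texttt{valid} index ranges are hypothetical):
\begin{verbatim}
using FFTW

# Sketch of the local operator H_i = C_i * Hcal_i: a circular
# convolution computed by FFT, followed by a chopping C_i that keeps
# only the valid region, so that periodic boundary artifacts never
# enter the data-fidelity term.
function local_blur(x::Matrix{Float64}, h::Matrix{Float64}, valid)
    y = real(ifft(fft(h) .* fft(x)))  # circular convolution Hcal_i * x
    return y[valid...]                # chopping C_i, e.g. valid = (101:924, 101:924)
end
\end{verbatim}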
Concerning the last issue, about approximating the regularizer on the whole image by the sum of the regularizers on the blocks, it depends upon the structure of the regularizer. If the regularizer is separable in the image domain, e.g., $\phi(\M{D} \V{x}) = \| \V{x} \|_{1} \ \text{or} \ \| \V{x} \|^{2}_{2}$, then the aforesaid approximation holds exactly. However, for many of the frequently used regularizers, e.g., $\|\M{D}\V{x} \|^{2}_{2}$ or $\| \M{D}\V{x} \|_{1}$ or $\| \M{D}\V{x} \|_{2}$, where $\M{D}$ represents a finite forward difference operator or some discrete wavelet transform, the aforementioned approximation does not hold. This can lead to incoherencies among the deblurred blocks, which can render nonsmooth transitions among them. However, as shown hereafter, a sufficient amount of overlap among the adjacent blocks, with the aforementioned consensus imposed on the overlapping pixels during the deblurring process, limits the incoherencies and suppresses any nonsmooth transition in the final deblurred image.
\section{The Proposed Approach}
\label{sec:proposed_approach}
Considering all the ideas developed in Section \ref{sec:distributed_image_deblurring}, we propose a generic framework for distributed image deblurring applicable to images suffering from both shift-invariant and shift-variant blur. We make some reasonable approximations, and reformulate the original problem (\ref{Eq:specific_img_deblurring_problem}) into the distributed optimization problem presented below. The resulting optimization problem is solved in a distributed manner by the Douglas-Rachford (D-R) splitting algorithm, which first appeared in \cite{Lions1979}.
\subsection{General Setting}
Consider a distributed computing system with a set of $N$ nodes having peer-to-peer bidirectional connections among them, or at least the connection topology shown in Fig.~\ref{fig:demonstration2}. Given an observed image $\V{y} \in \Set{R}^{m}$, we split it into $N$ blocks $\V{y}_{i} \in \Set{R}^{m_{i}}, \forall i=1,2,\cdots,N$, with certain overlaps among them. We generate 2D first-order interpolation weights $\V{\omega}_{i}$ corresponding to the observed blocks as shown in Fig.~\ref{fig:shiftinvariant_simulation}(e). Provided that we are able to sample $N$ PSFs $\V{h}_{i}$ within the blocks, with $\subM{H}_{i} \in \Set{R}^{m_{i} \times n_{i}}$ the corresponding blur operators, we distribute the observed blocks, the corresponding PSFs, and the interpolation weights among the nodes. We then seek to distributively estimate the whole unknown crisp image $\V{x} \in \Set{R}^{n}$. Let us denote by $\Mcal{P}_1,\dots,\Mcal{P}_N$ a collection of $N$ subsets of $\{1,\dots,n\}$. For every $i=1,\dots, N$, we assume that the $i$th compute node is in charge of estimating the components of $\V{x}$ corresponding to the indices in $\Mcal{P}_{i}$. The subsets $\Mcal{P}_1,\dots, \Mcal{P}_N$ are overlapping. Hence, different nodes handling a common component of $\V{x}$ must eventually agree on the value of the latter. Formally, we introduce the product space $\Mcal{X}:=\Set{R}^{\Mcal{P}_1}\times\cdots\times\Set{R}^{\Mcal{P}_N}$, and we denote by $\Mcal{C}$ the set of vectors $(\V{x}_1,\dots,\V{x}_N)\in \Mcal{X}$ satisfying the restricted consensus condition
$$
\forall (i,j)\in\{1,\dots,N\}^2, \, \forall \ell\in \Mcal{P}_i\cap \Mcal{P}_j,\, \V{x}_i(\ell)= \V{x}_j(\ell)\,.
$$
Moreover, we assume that each node $i$ is provided with a local convex, proper and lower semicontinuous function $f_i:\Set{R}^{\Mcal{P}_i}\to (-\infty,+\infty]$. We consider the following constrained minimization problem on $\Set{R}^{\Mcal{P}_1}\times\cdots\times\Set{R}^{\Mcal{P}_N}$:
\begin{equation}
\label{Eq:genericpb}
\operatorname*{arg\,min}_{\V{x}_1\cdots \V{x}_N} \sum_{i=1}^Nf_i(\V{x}_i)\ \text{s.t. } {(\V{x}_1,\dots,\V{x}_N)\in \Mcal{C}}\,.
\end{equation}
For our image deblurring problem, the local cost function $f_{i}$ is composed of the local data-fidelity term $$\V{x}_{i}\mapsto \frac 12 \|\V{y}_i - \subM{H}_{i} \ \V{x}_{i} \|^2_{\M{W}_i},$$ for some positive definite diagonal $\M{W}_i$ with $\M{W}_i(\ell, \ell) = 1 / {\V{\sigma}_{i}^{2}(\ell)}$ as in (\ref{Eq:specific_img_deblurring_problem}), and a regularizer $\phi_{i}( \M{D}_{i} \ \V{x}_{i})$ with a positivity constraint on $\V{x}_{i}$, i.e., $\V{x}_{i} \in \Set{R}^{\Mcal{P}_i}_{+}$. If $\Mcal{A}$ is a set, the notation $\iota_{\Mcal{A}}$ stands for the indicator function of the set $\Mcal{A}$, equal to zero on that set and to $+\infty$ elsewhere. Thus, the local cost function to be minimized at each node has the form:
$$
f_i(\V{x}_{i}) = \frac 12\|\V{y}_i - \subM{H}_{i} \ \V{x}_{i} \|^2_{\M{W}_i} + \lambda_{i} \ \phi_{i}(\M{D}_{i} \ \V{x}_{i}) + \iota_{\Set{R}^{\Mcal{P}_i}_{+}}\left(\V{x}_{i}\right)
$$
where the $\phi_{i}$ are convex, proper and lower semicontinuous functions and the $\M{D}_{i}$ are linear operators on $\Set{R}^{\Mcal{P}_i}$.
\subsection{Optimization Algorithm}
\label{subsec:optimization_algorithm}
Before we present our distributed optimization algorithm for solving problem (\ref{Eq:genericpb}), let us introduce one more piece of notation. For any convex, proper and lower semicontinuous function $h:\Mcal{X}\to(-\infty,+\infty]$, we introduce the \emph{proximity operator}
$$
\Prox_{\M{V}^{-1},h}(\V{v}) = \operatorname*{arg\,min}_{\V{w}\in \Mcal{X}} h(\V{w})+\frac{\|\V{w}- \V{v}\|_\M{V}^2}{2}
$$
for every $\V{v}\in \Mcal{X}$.
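As a simple example relevant to our setting, for the indicator $h = \iota_{\Set{R}^{\Mcal{P}_i}_{+}}$ of the positive orthant and a diagonal metric $\M{V}$, the proximity operator reduces, componentwise and independently of the weights $\V{\alpha}$, to the projection
$$
\Prox_{\M{V}^{-1},h}(\V{v})(\ell) = \max\{0, \V{v}(\ell)\}\,.
$$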
For solving (\ref{Eq:genericpb}), we consider the D-R splitting algorithm; thus we reformulate (\ref{Eq:genericpb})
as
$$
\operatorname*{arg\,min}_{\V{x}\in \Mcal{X}} f(\V{x})+g(\V{x})
$$
where $g=\iota_{\Mcal{C}}$ is the indicator function of $\Mcal{C}$ and $f(\V{x})=\sum_if_i(\V{x}_i)$ for every $\V{x}=(\V{x}_1,\dots,\V{x}_N)$ in ${\mathcal X}$.
Let us equip the Euclidean space $\Mcal{X}$ with the inner product $\ps{\,.\,,\,.\,}_\M{V}$ for some positive definite linear operator $\M{V}:\Mcal{X}\to\Mcal{X}$.
Let $\rho^{(k)}$ be a sequence in $]0, 2[$, and $\V{\epsilon}^{(k)}_{1}$ and $\V{\epsilon}^{(k)}_{2}$ be sequences in $\Mcal{X}$; then the D-R splitting algorithm reads:
\begin{align*}
\V{x}^{(k+1)} &=\Prox_{\M{V}^{-1},f}(\V{u}^{(k)}) + \V{\epsilon}^{(k)}_{1} \\
\V{z}^{(k+1)} &=\Prox_{\M{V}^{-1},g}(2\V{x}^{(k+1)}- \V{u}^{(k)}) + \V{\epsilon}^{(k)}_{2} \\
\V{u}^{(k+1)} &= \V{u}^{(k)} + \rho^{(k)} \left( \V{z}^{(k+1)}-\V{x}^{(k+1)} \right) \;.
\end{align*}
If the following holds:
\begin{enumerate}
\item the set of minimizers of (\ref{Eq:genericpb}) is non-empty,
\item $ \V 0 \in \mathrm{ri}(\mathrm{dom}(f)-{\mathcal C})$, where $\mathrm{ri}$ represents relative interior,
\item $\sum_{k} \rho^{(k)}(2 - \rho^{(k)}) = +\infty$,
\item and $\sum_{k} \| \V{\epsilon}^{(k)}_{1} \|_{2} + \| \V{\epsilon}^{(k)}_{2} \|_{2} < +\infty$,
\end{enumerate}
then the iterates $\V{u}^{(k)}$ converge weakly to some point in $\Mcal{X}$ and $\V{x}^{(k)}$ converges to a minimizer of (\ref{Eq:genericpb}) as $k\to\infty$; see \cite[Corollary 5.2]{combettes2004solving} for the proof. The parameter $\rho^{(k)}$ is referred to as a relaxation factor, which can be tuned to improve the convergence. The sequences $\V{\epsilon}^{(k)}_{1}$ and $\V{\epsilon}^{(k)}_{2}$ allow some perturbations in the two $\Prox$ operations, which is very useful in cases where the $\Prox$ operations do not have closed-form solutions and one has to rely on iterative solvers.
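In its exact, error-free form, the iteration above is only a few lines of code. The following Julia sketch (our illustration, with hypothetical callables \texttt{prox\_f} and \texttt{prox\_g} standing for the two proximity operators in the chosen metric) makes the structure explicit:
\begin{verbatim}
# Minimal sketch of the (exact) relaxed D-R iteration for
# min f(x) + g(x); prox_f and prox_g are user-supplied proximity
# operators, rho a constant relaxation factor in (0, 2).
function douglas_rachford(prox_f, prox_g, u0; rho = 1.0, iters = 100)
    u = copy(u0)
    for k in 1:iters
        x = prox_f(u)              # x^{k+1} = prox_f(u^k)
        z = prox_g(2 .* x .- u)    # z^{k+1} = prox_g(2 x^{k+1} - u^k)
        u = u .+ rho .* (z .- x)   # relaxed update of u
    end
    return prox_f(u)               # final primal iterate
end
\end{verbatim}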
From now onward, we assume that $\M{V}$ is a diagonal operator of the form $\M{V}\V{x} =(\M{V}_1\V{x}_1,\dots,\M{V}_N \V{x}_N)$, where for every $i$,
\begin{eqnarray*}
\M{V}_{i} \V{x}_{i}\,:\, \Mcal{P}_{i} &\to&\Set{R} \\
\ell&\mapsto& \V{\alpha}_{i}(\ell)\,\V{x}_{i}(\ell)\,,
\end{eqnarray*}
where $\V{\alpha}_i(\ell)$ is a positive coefficient to be specified later.
For every $\ell\in\{1,\dots,n\}$, we introduce the set $\Mcal{P}_\ell^-=\{i\,:\,\ell\in \Mcal{P}_i\}$.
\begin{lemma}
For every $\V{x}\in\Mcal{X}$, the quantity $\V{z}=\Prox_{\M{V}^{-1}, \iota_{{\mathcal C}} }(\V{x})$ is such that for every
$i\in\{1,\dots,N\}$ and every $\ell\in \Mcal{P}_i$,
$$
\V{z}_i(\ell) = \frac{\sum_{j\in \Mcal{P}^-_\ell}\V{\alpha}_j(\ell) \ \V{x}_j(\ell)}{\sum_{j\in \Mcal{P}^-_\ell}\V{\alpha}_j(\ell)}\,.
$$
\end{lemma}
\begin{proof}
For every $\V{x}\in\Mcal{X}$ of the form $\V{x}=(\V{x}_1,\dots,\V{x}_N)$, and every $\ell\in \{1,\dots,n\}$, we use the notation
$\V{x}_{\Mcal{P}^-_\ell}(\ell) = (\V{x}_i(\ell)\,:\,i\in \Mcal{P}^-_\ell)$. We denote by ${\mathcal C}_\ell$ the linear span of the vector
$(1,\dots,1)$ in ${\mathbb R}^{|\Mcal{P}^-_\ell|}$, where $|\Mcal{P}^-_\ell|$ is the cardinality of $\Mcal{P}^-_\ell$.
The function $g=\iota_{\mathcal C}$ can be written as
$$
g(\V{x}) = \sum_{\ell=1}^n \iota_{{\mathcal C}_\ell}(\V{x}_{\Mcal{P}^-_\ell}(\ell))\,.
$$
For an arbitrary $\V{x}\in \Mcal{X}$, the quantity $\V{z}=\Prox_{\M{V}^{-1},g}(\V{x})$ is given by
\begin{align*}
\V{z} &= \arg\min_{\V{w}\in \Mcal{X}} \sum_{\ell=1}^{n} \Bigl( \iota_{{\mathcal C}_\ell}(\V{w}_{\Mcal{P}^-_\ell}(\ell)) \Bigr. \\
& \qquad \qquad \qquad \qquad \Bigl. + \frac{1}{2}\sum_{i\in \Mcal{P}^-_\ell}\V{\alpha}_i(\ell)(\V{w}_i(\ell)-\V{x}_i(\ell))^2 \Bigr)
\end{align*}
Clearly, for every $\ell\in \{1,\dots,n\}$, the components $\V{z}_i(\ell)$ are equal for all $i\in \Mcal{P}^-_\ell$ to some constant w.r.t.~$i$, say $\bar{\V{z}}(\ell)$.
The latter quantity is given by
\begin{align*}
\bar{\V{z}} (\ell) &= \arg\min_{\V{w}\in {\mathbb R}} \sum_{i\in \Mcal{P}^-_\ell}\V{\alpha}_i(\ell)(\V{w}-\V{x}_i(\ell))^2 \\
&= \frac{\sum_{i\in \Mcal{P}^-_\ell}\V{\alpha}_i(\ell)\V{x}_i(\ell)}{\sum_{i\in \Mcal{P}^-_\ell}\V{\alpha}_i(\ell)}\,.
\end{align*}
\end{proof}
By the above lemma, the D-R splitting algorithm can be explicitly written as follows. For every $ i\in \{1,\dots,N\}$ and every $\ell\in
\{1,\dots,n\}$,
\begin{align*}
\V{x}^{(k+1)}_i &= \Prox_{\M{V}_i^{-1},f_i}(\V{u}_i^{(k)}) \\
\bar{\V{z}}^{(k+1)}(\ell) &= \frac{\sum_{i\in \Mcal{P}^-_\ell}\V{\alpha}_{i}(\ell)(2\V{x}_i^{(k+1)}(\ell)-\V{u}_i^{(k)}(\ell))}{\sum_{i\in \Mcal{P}^-_\ell}\V{\alpha}_i(\ell)} \\
\V{u}^{(k+1)}_i(\ell) &= \V{u}^{(k)}_i(\ell) + \rho^{(k)} \left( \bar{\V{z}}^{(k+1)}(\ell)-\V{x}^{(k+1)}_i(\ell) \right).
\end{align*}
The resulting algorithm is a synchronous distributed algorithm without any explicit master node. It is depicted in Algorithm~\ref{algo:proposed_algo}. Hereafter, we will refer to it as the \emph{proposed} deblurring method. Given some initial guess of the local solutions at each node, the first step of Algorithm~\ref{algo:proposed_algo} executes the \textsc{Local-Solver} in parallel on all the nodes to obtain local deblurred blocks. All the nodes then synchronize at the start of the second step, and exchange the overlapping pixels with their adjacent nodes to distributively perform the weighted averaging of those pixels. Finally, the last step of the algorithm is executed in parallel on all the nodes.
The convergence speed of the \emph{proposed} deblurring method depends upon the parameters $\V{\alpha}_{i}$, and, as in other optimization algorithms, e.g., ADMM \cite{Boyd2011}, selecting the optimal values of $\V{\alpha}_{i}$ for fast convergence is a tedious task. We select $\V{\alpha}_{i} = \gamma \; \V{\omega}_{i}$ for $\gamma > 0$, so that we can tune $\gamma$ for fast convergence while at the same time imposing the smooth variation of the blur among adjacent blocks.
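For concreteness, the consensus step of Algorithm~\ref{algo:proposed_algo} amounts to the weighted averaging below. The following 1D Julia sketch (our illustration with hypothetical names, where \texttt{P[i]} gives the global pixel indices handled by node $i$, \texttt{w[i]} the interpolation weights, and \texttt{v[i]} the local values $2\V{x}_i - \V{u}_i$) computes it serially, whereas in practice each node only exchanges its overlapping pixels with its neighbors:
\begin{verbatim}
# Minimal serial sketch of the consensus (weighted averaging) step.
function consensus(v, P, w, n)
    num = zeros(n); den = zeros(n)
    for i in eachindex(P), (k, l) in enumerate(P[i])
        num[l] += w[i][k] * v[i][k]   # accumulate weighted local values
        den[l] += w[i][k]             # accumulate weights
    end
    # each node keeps the averaged values of its own pixels
    return [[num[l] / den[l] for l in P[i]] for i in eachindex(P)]
end
\end{verbatim}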
\begin{algorithm}[t]
\begin{algorithmic}
\Procedure{Distributed-Solver}{}
\State{Initialize: $\V{u}_i\gets \V{u}^{(0)}_{i}, \forall i =1,2,\cdots,N$}
\While{not converged}
\For {$i =1\dots N$}
\State $\V{x}_i\gets$ \textsc{Local-Solver}$(\V{u}_i\,;\V{\alpha}_i,f_i)$
\EndFor
\For {$\ell=1\dots n$}
\State Compute distributively at nodes $i\in \Mcal{P}^-_\ell$:
\State $\bar{\V{z}}_{i}(\ell)\gets \sum_{j\in \Mcal{P}^-_{\ell}}\V{\omega}_j(\ell) (2\V{x}_j(\ell)-\V{u}_j(\ell)), \forall \ell \in \Mcal{P}_i$
\EndFor
\For {$i =1\dots N$}
\State $\V{u}_i(\ell)\gets \V{u}_i(\ell) + \rho (\bar{\V{z}}_{i}(\ell)-\V{x}_i(\ell)), \forall \ell \in \Mcal{P}_i$
\EndFor
\EndWhile\label{euclidendwhile}
\State \textbf{return} $\V{x}_1,\dots,\V{x}_N$
\EndProcedure
\Procedure{Local-Solver}{$\V{u}\,;\V{\alpha},f$}
\State $\V{w}\gets \Prox_{\V{\alpha}^{-1},f}(\V{u}) = \arg\min_{\V{w}} \{ f(\V{w}) + \frac 12 \|\V{w}-\V{u}\|_{\V{\alpha}}^2 \}$
\State \textbf{return} $\V{w}$
\EndProcedure
\caption{Distributed Image Deblurring}
\label{algo:proposed_algo}
\end{algorithmic}
\end{algorithm}
\begin{figure*}
\centering
{\includegraphics[width=0.95\linewidth]{demonstration1.pdf}}
\caption{A 1D illustration showing the splitting of the observed image, the extent of the overlap, and the shape of the interpolation weights for the shift-invariant and smooth shift-variant blur cases. Given an observed image $\V{y}$, the crossbars are equidistant reference grid points where the PSFs $\V{h}_{i}$ are sampled. Depending upon the case, shift-invariant or shift-variant blur, the extent of the overlap is selected. The observed image is split into $3$ overlapping patches $\V{y}_{i}$. The $\V{x}_{i}$ are the patches deblurred locally at the $i$th node from the corresponding observed patches $\V{y}_{i}$. The dashed parts at the ends of each $\V{x}_{i}$ are the extra pixels estimated at the boundaries, assuming no measurements were available for them. The $\V{\omega}_{i}$ are the interpolation weights corresponding to the support of $\V{x}_{i}$; their values lie within the range $[0, 1]$ such that $\sum_{i=1}^{3} \V{\omega}_{i}(\ell) = 1, \forall \ell=1,\cdots,n$.
}
\label{fig:demonstration1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{demonstration2.pdf}
\caption{The communication network topology for our distributed image deblurring algorithm. Each square represents a node connected to its neighbors by a bi-directional communication channel. Provided with the blurred patches and the corresponding local PSFs and interpolation weights, the nodes simultaneously run the \textsc{Local-Solver}s, and exchange their local estimates of the overlapping pixels to perform the consensus operation.}
\label{fig:demonstration2}
\end{figure}
\subsection{Size of Image Blocks and the Extent of the Overlaps}
\label{subsec:patch_size_overlap_weights}
The size of the observed image blocks and the extent of the overlaps among them have an impact upon the computational cost and the quality of the final deblurred image. Thus, they must be chosen appropriately depending upon the situation: shift-invariant, piece-wise constant shift-variant, or smooth shift-variant blur. Without loss of generality, we assume that all local PSFs have the same size.
In the case of smooth shift-variant blur, the size of the image blocks and the extent of the overlaps depend upon how densely the grid points, and thus the local PSFs, can be sampled within the field-of-view. The denser the grid points, the better the approximation of the shift-variant blur in the image. If we consider $N$ equidistant grid points within the field-of-view, then the support of each image block extends (in both horizontal and vertical directions) over three consecutive grid points, except for the image blocks at the boundaries of the image. Thus, each block overlaps half of its size at top, bottom, left and right with its adjacent blocks, i.e., a pixel in the observed image appears in four adjacent blocks. As discussed previously, this extent of overlap is necessary to impose the smooth variation of blur among the image blocks. See the illustration in Fig.~\ref{fig:demonstration1}, which demonstrates, for a 1D signal, the scheme of splitting the observed image into blocks and the extent of the overlaps among them. Similarly, Fig.~\ref{fig:shiftvariant_simulation_2}(a) shows $5\times 5$ grid points overlaid upon the observed image where the local PSFs are sampled, and Fig.~\ref{fig:shiftvariant_simulation_2}(c) shows the resulting overlapping image blocks.
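The first-order weights of Fig.~\ref{fig:demonstration1} can be generated as simple ``tent'' functions. A minimal 1D Julia sketch (our illustration; the name \texttt{tent\_weights} and the boundary normalization are our assumptions) is:
\begin{verbatim}
# Minimal 1D sketch of first-order interpolation weights for N >= 2
# equidistant grid points: each block's weight is a tent peaking at
# its own grid point, so adjacent tents sum to one at every pixel.
function tent_weights(n::Int, N::Int)
    grid = range(1, n, length = N)   # equidistant grid points
    step = (n - 1) / (N - 1)
    w = [zeros(n) for _ in 1:N]
    for i in 1:N, l in 1:n
        w[i][l] = max(0.0, 1.0 - abs(l - grid[i]) / step)
    end
    s = sum(w)                       # pixelwise sum of all weights
    for i in 1:N
        w[i] ./= s                   # normalize (a no-op in the interior)
    end
    return w
end
\end{verbatim}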
In the case of shift-invariant blur, we can again consider $N$ equidistant grid points within the field-of-view, and split the observed image into $N$ overlapping blocks. However, in this case, we do not need the same extent of overlap among the blocks as in the case of smooth shift-variant blur, but a smaller one. In fact, it is sufficient to have an extent of overlap slightly larger than half the size of the local PSF. In this case, the overlaps among the blocks are required not to impose a smooth variation of blur (as the PSF is the same over the whole field-of-view), but to minimize any discrepancies in the deblurred image due to the approximation introduced in the regularization term. Similarly, in the case of piece-wise constant shift-variant blur, we can consider the same splitting scheme as in the case of shift-invariant blur, for the same reason.
In the case of the \emph{proposed} deblurring method, the maximum number of blocks into which we can split the image depends upon the number of nodes available to process them in parallel. The larger the number of available nodes, the larger the number of blocks we can choose, and thus the smaller the size of the blocks. From a computational point of view, the smaller the size of the blocks, the smaller the memory and computation time required by the nodes to execute the \textsc{Local-Solver}\footnote{The size of the vectors affects the efficiency of the Fast Fourier Transform algorithm and other operations. Smaller vectors fit well into the different levels of data caches in the processor, so that fewer clock cycles are consumed executing the operations, compared to the situation when the vectors are too large to fit into the data caches, leading to many cache misses and eventually requiring several cycles to fetch them from main memory.}. Moreover, the smaller the size of the blocks, the smaller the extent of the overlaps among them, and thus the smaller the amount of data to be exchanged among the nodes. With all these advantages from a computational point of view, we may be tempted to split the observed image into as many blocks as the number of available nodes, given that we are provided with as many local PSFs within the field-of-view. However, the size of the local PSF suggests a lower limit on the size of the blocks. We should not select a block size smaller than the size of the PSF; otherwise, the local deblurring problem at each node will be highly underdetermined (a larger number of unknown pixels at the boundaries of the blocks needs to be estimated), which would eventually deteriorate the quality of the final deblurred image. Thus, to achieve a better quality deblurred image, we should select the size of the blocks to be at least twice the size of the PSF.
\section{Numerical Experiments and Results}
\label{sec:numerical_experiments_results}
Our algorithm depicted in Algorithm~\ref{algo:proposed_algo} can be seen as a general framework for distributed image deblurring. Depending upon the situation, we can derive a particular instance from it, e.g., shift-invariant or shift-variant deblurring, and depending upon the structures of the data-fidelity term and regularizer, we can select any efficient optimization method as the \textsc{Local-Solver}. In order to validate the \emph{proposed} deblurring method, we performed two numerical experiments, first considering the simpler case of shift-invariant deblurring, and then the more difficult case of shift-variant deblurring.
Moreover, to evaluate the performance of the \emph{proposed} deblurring method in terms of the quality of the deblurred image, we compared it with two other possible approaches. The first obvious choice was \emph{centralized} deblurring as a reference method that solves the original problem (\ref{Eq:specific_img_deblurring_problem}). Since the \emph{centralized} deblurring method would require huge memory for a large image, we selected images of reasonable sizes for our experiments. We chose the na\"{\i}ve ``split, deblur independently, and merge'' approach as the second method. In order to minimize incoherencies among the deblurred blocks obtained by the second method, we split the observed image into overlapping blocks, and blended the locally deblurred blocks into a single image by weighted averaging of the overlapping regions, using the same 2D first-order interpolation weights as in the \emph{proposed} method. Hereafter, we will refer to this approach as the \emph{independent} deblurring method. The \emph{independent} deblurring method is not intended to solve correctly either of the optimization problems (\ref{Eq:specific_img_deblurring_problem}) or (\ref{Eq:genericpb}), but it is a straightforward way to deblur large images with minimal computational resources. Similarly, in the case of shift-variant image deblurring, we compare our algorithm with the \emph{centralized} and \emph{independent} deblurring methods. For the \emph{centralized} deblurring method, we used the fast shift-variant blur operator based on first-order PSF interpolation as described in \cite{Denis2015}.
Let us first present the general settings of our experiments, and the more specific details later. For the case of shift-invariant deblurring, we considered the ``Lena'' image, which we resized to $1024\times 1024$ pixels and whose dynamic range we extended linearly to a maximum intensity of 6000 photons/pixel. We refer to this image as the reference image. The reference image was blurred with an Airy disk PSF (of size $201\times201$ pixels) formed by a circular aperture of radius 6 pixels, and was corrupted with white Gaussian noise of variance $\sigma^2=400$ to obtain the observed image. To study the impact of the factors discussed in Section \ref{subsec:patch_size_overlap_weights}, we considered different cases by varying the number of blocks and the extent of the overlaps among them. Figure~\ref{fig:shiftinvariant_simulation} shows an instance where the observed image is split into $3 \times 3$ blocks with overlaps of $100 \times 100$ pixels among the adjacent blocks.
For the shift-variant deblurring case, we considered the ``Barbara'' image, which we resized to $1151 \times 1407$ pixels and whose dynamic range we extended linearly to a maximum intensity of 6000 photons/pixel. As above, we call this image the reference image. We generated $9 \times 9$ normalized Gaussian PSFs, each of size $201 \times 201$ pixels, with the central PSF having a full-width-half-maximum (FWHM) of $3.5 \times 3.5$ pixels and a linearly increasing FWHM in the radial direction, up to $16.5 \times 10.5$ pixels for the PSF at the extreme corner of the reference image, as depicted in Fig.~\ref{fig:shiftvariant_simulation_1}. These PSFs mimic, to some extent, the shift-variant blur due to optical aberrations (coma) \cite{Mahajan2011}. We blurred the reference image with these shift-variant PSFs using the shift-variant blur operator based on PSF interpolation described in \cite{Denis2015}. We obtained the final observed image by adding white Gaussian noise of variance $\sigma^2=400$ to the blurred image. As in the shift-invariant deblurring case, we considered different scenarios by varying the number of blocks. Figure~\ref{fig:shiftvariant_simulation_2} shows an experimental setup using only a $5 \times 5$ grid of PSFs, i.e., the observed image is split into $5 \times 5$ overlapping blocks.
In both cases, we deliberately selected a low noise level in the observed image so that any incoherency or artifact arising in the deblurred image due to the approximations we made was not masked by the strong regularization required for a low signal-to-noise ratio in the observed image. Also, we selected a sufficiently large PSF size so that the PSFs are not band-limited, which is the case in many real imaging systems.
\subsection{Choice of Regularization Term}
For our experiments, we selected the regularization function $\phi$ to be the Huber loss and $\M{D}$ to be the circular forward finite difference operator, so the regularizer is written as
\[
\phi(\M{D}_{i} \V{x}_{i}) =
\begin{cases}
\frac{1}{2} \| \M{D}_{i} \V{x}_{i} \|^{2}_{2} & \quad \| \M{D}_{i} \V{x}_{i} \|_{2} \leq \delta \\
\delta (\| \M{D}_{i} \V{x}_{i} \|_{2} - \frac{\delta}{2}) & \text{otherwise} \\
\end{cases}
\]
We chose this regularizer for two reasons: i) it is smooth (differentiable), which makes the functions $f_{i}$ in (\ref{Eq:genericpb}) smooth, so that we can choose any fast optimization algorithm such as quasi-Newton methods (e.g., BFGS class methods \cite{Nocedal2006}) as the \textsc{Local-Solver}, and ii) it behaves in between Tikhonov and total-variation regularization, depending upon the value of $\delta$. Thus, it is able to preserve the sharp structures in the images while avoiding the staircase artifacts in smoothly varying regions usually rendered by the total-variation regularizer \cite{mugnier2004mistral}. However, our \emph{proposed} deblurring method is not restricted to any particular choice of regularizer; depending upon the application, one can choose any regularizer and any fast optimization algorithm for the local deblurring problem.
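For completeness, the Huber function and its derivative, applied to a (nonnegative) gradient magnitude $t = \| \M{D}_{i} \V{x}_{i} \|_{2}$, are one-liners; the following Julia sketch (our illustration, with hypothetical names) shows them:
\begin{verbatim}
# Minimal sketch of the Huber loss and its derivative, applied to a
# nonnegative gradient magnitude t with threshold delta.
huber(t, delta)  = t <= delta ? 0.5 * t^2 : delta * (t - delta / 2)
dhuber(t, delta) = t <= delta ? t : delta
\end{verbatim}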
\subsection{Implementation Details}
Since the optimization problems arising in all the deblurring methods were smooth, we chose a quasi-Newton method called limited-memory variable metric with bound-constraint (VMLM-B)\footnote{An open source implementation of VMLM-B in the C programming language is available at \url{https://github.com/emmt/OptimPack}.} \cite{Thiebaut2002}. VMLM-B is a variant of the standard limited-memory BFGS with bound-constraint (LBFGS-B) \cite{Zhu1995}, but with some subtle differences. VMLM-B does not require any manual tuning of parameters to achieve fast convergence (its only parameter, the step length at each iteration, is estimated by a line search satisfying Wolfe's conditions). This left us with only a single parameter $\gamma$ to be tuned to achieve fast convergence of our algorithm. Moreover, using the same algorithm (VMLM-B) for solving the optimization problems in all three deblurring methods ensured a fair comparison among them in terms of the quality of the deblurred images and the computational expense.
In all our experiments, we set $\rho^{(k)} = 1$. After a few trials, we found that $\gamma = 0.001$ results in fast convergence of our algorithm. Since the D-R splitting algorithm converges even when the $\Prox$ operations are carried out inexactly, to speed up our algorithm we allowed the \textsc{Local-Solver} to be less accurate during the initial iterations, and more accurate as the iterations progress \cite[Section 3.4.4]{Boyd2011}. To do so, we allowed VMLM-B to perform 10 inner iterations at the beginning, and increased this by 10 at every subsequent iteration of the main loop. To speed up further, we warm-started VMLM-B at every iteration of our algorithm by supplying $\V{x}^{(k)}$ as an initial guess for the next estimate $\V{x}^{(k+1)}$. All the results presented below were obtained after 25 iterations of the proposed algorithm. It was observed that 25 iterations were generally sufficient, and more than 25 iterations did not bring any noticeable difference in the results. For the \emph{centralized} and \emph{independent} deblurring methods, we allowed VMLM-B to perform a maximum of 1000 iterations, or until it satisfied its own stopping criterion based on gradient convergence or progress in the line search. It was noticed that, generally, 500 to 600 iterations were sufficient, and further iterations did not bring any noticeable difference.
All three deblurring methods, namely the \emph{centralized}, the \emph{independent} and the \emph{proposed} one, were implemented in the high-level dynamic programming language ``Julia''\footnote{\url{http://julialang.org/}} \cite{bezanson2012julia}. The \emph{proposed} distributed deblurring was implemented using the Message Passing Interface (MPI), based on the Open MPI library\footnote{\url{https://www.open-mpi.org/}}. The source code for all the demonstrations shown in this paper will be freely available at \url{https://github.com/mouryarahul/ImageReconstruction}.
\subsection{Results and Discussion}
\label{subsec:results}
As mentioned above, we conducted two different experiments, one for a shift-invariant and another for a smooth shift-variant blurred image. To compare the quality of the deblurred images obtained by the three methods, we considered two image quality metrics: the signal-to-noise ratio (SNR) and the Structural Similarity Index (SSIM) \cite{Wang2004}. In all cases, we heuristically fixed the regularization parameter $\delta = 100$, and performed deblurring for different values of $\lambda$ in a sufficiently large range to see whether one of the methods produces a better quality image than the others for a certain range of $\lambda$. The plots (SNR vs $\lambda$ and SSIM vs $\lambda$) in Fig.~\ref{fig:shiftinvariant_simulation}(i\textendash j) show the results obtained from shift-invariant deblurring when the observed image was split into $3 \times 3$ blocks. To study the influence of the size of the blocks and the extent of the overlaps on the quality of the deblurred image, we conducted several trials with different settings. The plots in Fig.~\ref{fig:shiftinvariant_simulation}(k\textendash l) show the influence of the extent of the overlaps on the image quality. The plots show that our algorithm performed slightly better than the \emph{centralized} deblurring in terms of both SNR and SSIM. This could be due to the approximation introduced in the regularization term in the \emph{proposed} deblurring method; e.g., the sum of the regularizers on the blocks may be more favorable than the regularizer on the whole image, depending upon the contents of the image. Moreover, the \emph{proposed} deblurring may also benefit from the explicit overlaps among the adjacent blocks: each node can produce slightly different values for the overlapping pixels, which eventually leads to a better estimate of those pixels after averaging the estimates from the different nodes. This is indicated by the fact that there is a slight increase in both SNR and SSIM with an increase in the extent of the overlaps, as seen in the plots of Fig.~\ref{fig:shiftinvariant_simulation}(k\textendash l). It also indicates that an overlap equal to half the size of the PSF is sufficient; a larger overlap did not produce much improvement, but, of course, it did increase the computational and communication cost. We also noticed that the SNR and SSIM of the deblurred image obtained by the \emph{proposed} deblurring are slightly less dependent upon the extent of the overlaps than those obtained by the \emph{independent} deblurring.
The plots in Fig.~\ref{fig:shiftvariant_SNRvsLambda}(a\textendash b) compare the results obtained by the three deblurring methods for the case of a smooth shift-variant blurred image. First, we noticed that the image quality obtained by all three methods improves drastically with an increase in the density of the grid of PSFs sampled within the field-of-view. The SNR and SSIM of the deblurred image obtained using only $3 \times 3$ PSFs are significantly lower than in the cases where $5 \times 5$, $6 \times 6$ and $8 \times 8$ grids of PSFs are used. This is due to the fact that the observed image was simulated using a finer $9 \times 9$ grid of PSFs, and the coarser grids of PSFs are less accurate in capturing the smooth variation of blur than a finer grid. We also noticed that, unlike in the shift-invariant deblurring case, the values of SNR and SSIM obtained by the \emph{proposed} deblurring are slightly lower than those obtained by the \emph{centralized} deblurring. This could be due to the fact that both the \emph{centralized} and \emph{proposed} deblurring benefit from the explicit overlaps among the blocks, but some information is always lost at the boundaries of the deblurred blocks in the case of the \emph{proposed} deblurring, which is not the case for the \emph{centralized} deblurring. There must also be some influence of the approximation introduced into the regularization term in the case of the \emph{proposed} deblurring. We observed from the plots in Fig.~\ref{fig:shiftinvariant_simulation}(i\textendash j) and Fig.~\ref{fig:shiftvariant_SNRvsLambda}(a\textendash b) that the three methods do not attain the highest SNR or SSIM at the same value of the regularization parameter $\lambda$.
As expected, we observed that the na\"{\i}ve \emph{independent} deblurring method performed significantly worse than the other two methods. As pointed out above, this is due to the fact that the method is not intended to solve correctly either the original problem (\ref{Eq:specific_img_deblurring_problem}) or the distributed formulation (\ref{Eq:genericpb}); it is the crudest and computationally cheapest way to perform image deblurring by splitting the image into pieces.
In our experiments, we observed that the first step, i.e., the \textsc{Local-Solver}, of the \emph{proposed} deblurring algorithm is the one that consumed the most computation time among the three steps; the \textsc{Local-Solver} took 600 to 800 times more computation time than the consensus step (including the communication time among the nodes). This should be true, in general, for many other local deblurring algorithms devised using different combinations of data-fidelity and regularization terms, depending upon the application. Thus, the \emph{proposed} deblurring algorithm is efficient in the sense that it is computation intensive rather than communication intensive.
For small or moderate size images such as those considered in our experiments, the \emph{proposed} deblurring algorithm is computationally more expensive than the \emph{centralized} and \emph{independent} deblurring methods; it takes at least 10 to 15 times more computation time than the latter ones. However, in the case of extremely large images, for which \emph{centralized} deblurring is practically not feasible, the computational expense of the \emph{proposed} deblurring is justified by the better quality of the deblurred image, which cannot be achieved by the computationally cheaper \emph{independent} deblurring. Some more simulations performed with different sets of images and PSFs are presented in Appendix \ref{sec:appendix}, and the results suggest similar conclusions.
\begin{figure*}
\centering
\subfloat[{Reference image (size = $1024 \times 1024$ pixels)}]{\includegraphics[width=0.24\linewidth]{sioriginal.png}} \;
\subfloat[Airy disk PSFs (size = $201 \times 201$ pixels) due to circular aperture of radius $6$ pixels]{\includegraphics[width=0.24\linewidth]{sipsfsnew.png}} \;
\subfloat[{Observed image (SNR= 9.0364 dB, SSIM = 0.7594)}]{\includegraphics[width=0.24\linewidth]{siobserved.png}} \;
\subfloat[{Overlapping observed patches}]{\includegraphics[width=0.24\linewidth]{siobsimgpatches.png}} \\
\subfloat[{$3 \times 3$ blocks of interpolation weights with brightest pixel equal to 1 and the darkest pixel equal to 0.}]{\includegraphics[width=0.24\linewidth]{siweights.png}} \;
\subfloat[{ Image obtained by \emph{centralized} deblurring method (SNR = 14.3818 dB, SSIM = 0.8166 at $\lambda = 0.001$)}] {\includegraphics[width=0.24\linewidth]{sicentrallyestimated.png}} \;
\subfloat[{Image obtained by \emph{independent} deblurring method (SNR = 14.3402 dB, SSIM = 0.8162 at $\lambda = 0.001$)}] {\includegraphics[width=0.24\linewidth]{siindependentlyestimated.png}} \;
\subfloat[{Image obtained by \emph{proposed} deblurring method (SNR = \textbf{14.5931} dB, SSIM = \textbf{0.8188} at $\lambda = 0.002$)}] {\includegraphics[width=0.24\linewidth]{siestimated.png}} \\
\subfloat[{SNR vs $\lambda$}] {\includegraphics[trim = 0mm 0mm 20mm 16mm, clip,width=0.32\linewidth]{siselectedSNRvsLambda.pdf}} \;
\subfloat[{SSIM vs $\lambda$}] {\includegraphics[trim = 0mm 0mm 20mm 16mm, clip,width=0.32\linewidth]{siselectedSSIMvsLambda.pdf}}\\
\subfloat[{SNR vs Overlap}] {\includegraphics[trim = 0mm 0mm 20mm 16mm, clip,width=0.32\linewidth]{siSNRvsOverlap.pdf}} \;
\subfloat[{SSIM vs Overlap}] {\includegraphics[trim = 0mm 0mm 20mm 16mm, clip,width=0.32\linewidth]{siSSIMvsOverlap.pdf}}
\caption{Experiment 1: experimental setup and results from shift-invariant deblurring of the ``Lena'' image. The observed image (c) is obtained by blurring the reference image (a) with the shift-invariant PSF (b), and then corrupting it with white Gaussian noise of variance $\sigma^2 = 400$. The image blocks (d) are obtained by splitting the observed image into $3 \times 3$ blocks with overlaps of $100 \times 100$ pixels among them. The 2D first-order interpolation weights (e) are of the same size as the observed blocks. Plots (i\textendash j) show the image quality of the deblurred images obtained for different strengths of regularization. The legends ``\textcolor{red}{Central}'', ``\textcolor{green}{Proposed}'', and ``\textcolor{blue}{Indpndt}'' represent the results from the \emph{centralized}, the \emph{proposed} and the \emph{independent} deblurring methods, respectively. Plots (k\textendash l) show the impact of the extent of overlap on the image quality of the deblurred images. The legends ``\textcolor{green}{Proposed-2x2}'' and ``\textcolor{blue}{Indpndt-2x2}'' represent the \emph{proposed} and \emph{independent} deblurring methods for the case when the image is split into $2\times 2$ blocks. Similarly, the other two legends represent the case when the image is split into $3 \times 3$ blocks.}
\label{fig:shiftinvariant_simulation}
\end{figure*}
\begin{figure}
\centering
\subfloat[{Reference image ($1151 \times 1407$ pixels) with $9 \times 9$ grid points (overlaid in green) where PSFs are sampled.}]{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.85\linewidth]{sv13trueimage.png}} \\
\subfloat[Shift-variant PSFs (each of size $201 \times 201$ pixels) generated at the $9 \times 9$ grid points in (a).]{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.85\linewidth]{sv13psfs.png}}\\
\subfloat[{Blurred and Noisy (observed) image}]{\includegraphics[trim = 30mm 15mm 25mm 15mm, clip,width=0.85\linewidth]{sv13observedimage.png}}
\caption{Experiment 2: experimental setup for shift-variant deblurring of the ``Barbara'' image. The grid of PSFs (b) contains normalized Gaussian PSFs with the central PSF having FWHM = $3.5 \times 3.5$ pixels, linearly increasing up to FWHM = $16.5 \times 10.5$ pixels for the extreme corner PSF. The observed image (c) is obtained by blurring the reference image (a) with the shift-variant PSFs, and then corrupting it with white Gaussian noise of variance $\sigma^2 = 400$.}
\label{fig:shiftvariant_simulation_1}
\end{figure}
\begin{figure}
\centering
\subfloat[$5 \times 5$ grid points overlaid upon the observed image Fig. \ref{fig:shiftvariant_simulation_1}(c) where the PSFs are sampled.] {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.85\linewidth]{sv12observed3.png}} \\
\subfloat[Shift-variant PSFs (each of size $201 \times 201$ pixels) sampled at $5\times 5$ grid points shown in (a).] {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.85\linewidth]{sv12psfs.png}} \\
\subfloat[$5 \times 5$ overlapping observed blocks obtained after splitting image in (a).] {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.85\linewidth]{sv12obspatches.png}}
\caption{Experiment 2: experimental setup for shift-variant deblurring when using a $5 \times 5$ grid of PSFs, i.e., the observed image is split into $5\times 5$ overlapping blocks.}
\label{fig:shiftvariant_simulation_2}
\end{figure}
\begin{figure}
\centering
\subfloat[SNR vs $\lambda$]{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.95\linewidth]{sv13SNRvsLambdaNew.pdf}} \\
\subfloat[SSIM vs $\lambda$]{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.95\linewidth]{sv13SSIMvsLambdaNew.pdf}}
\caption{Experiment 2: results from shift-variant image deblurring comparing the image quality (in terms of SNR and SSIM) obtained by the three different deblurring methods for different strengths of regularization. The legends ``\textcolor{red}{Central3x3}'', ``\textcolor{green}{Proposed-3x3}'', and ``\textcolor{blue}{Indpndt-3x3}'' denote the results from the \emph{centralized}, the \emph{proposed}, and the \emph{independent} deblurring methods, respectively, when using only a $3\times 3$ grid of PSFs. Similarly, the other legends denote the results obtained when using $5\times 5$, $6 \times 6$, and $8 \times 8$ grids of PSFs sampled in the field-of-view.}
\label{fig:shiftvariant_SNRvsLambda}
\end{figure}
\begin{figure}
\centering
\subfloat[Estimated by \emph{centralized} deblurring (SNR = \textbf{12.3278} dB, SSIM = \textbf{0.7767} at $\lambda = 0.002$).] {\includegraphics[trim = 30mm 10mm 20mm 15mm, clip,width=0.85\linewidth]{sv13Central6x6.png}} \\
\subfloat[Estimated by \emph{independent} deblurring (SNR = 11.6696 dB, SSIM = 0.7736 at $\lambda = 0.004$)] {\includegraphics[trim = 30mm 10mm 20mm 15mm, clip,width=0.85\linewidth]{sv13Indpndt6x6.png}} \\
\subfloat[Estimated by \emph{proposed} deblurring (SNR = {12.0239} dB, SSIM = {0.7764} at $\lambda = 0.002$)] {\includegraphics[trim = 30mm 10mm 20mm 15mm, clip,width=0.85\linewidth]{sv13proposed6x6.png}}
\caption{Experiment 2: deblurred images obtained by the three methods when using $6\times 6$ grid of PSFs.}
\label{fig:shiftvariant_deblur_result_6x6}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper we have proposed a distributed image deblurring algorithm for large images for which the existing \emph{centralized} deblurring methods are practically inapplicable due to the requirement of huge physical memory on a single system. The proposed algorithm is a generic framework for distributed image deblurring, as it can handle different imaging situations: deblurring a single large image suffering from shift-invariant blur, a wide field-of-view captured as multiple narrow images by different imaging systems with slightly different PSFs, and a large image of a wide field-of-view suffering from smoothly varying blur captured by a single imaging system. Depending upon the application, one can easily adapt it to include different data-fidelity and regularization terms, and then select any fast optimization algorithm to solve the local deblurring problem at the different nodes of a distributed computing system. Our algorithm is efficient in the sense that it is computation intensive rather than communication intensive. We showed with experimental results on simulated observed images that the \emph{proposed} deblurring algorithm produces similar or slightly lower quality (measured in terms of SNR and SSIM) of deblurred images than that obtained by \emph{centralized} deblurring. This small compromise in the quality of the deblurred image is traded off against the cost-effectiveness of our distributed approach for large images, for which the \emph{centralized} deblurring methods are practically infeasible. Moreover, we compared the \emph{proposed} deblurring to a na\"{\i}ve and computationally cheaper \emph{independent} deblurring, and showed that the latter always performed significantly worse than the former. Thus, when high accuracy is desirable, e.g., in astronomical applications, the \emph{proposed} deblurring should be preferred over the \emph{independent} deblurring method, at the expense of extra computational cost.
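The block decomposition and weighted recombination at the core of this framework can be sketched in a few lines of Python. This is a minimal sketch under simplifying assumptions: the block size and step are illustrative, all blocks share one size, the step tiles the image exactly, and the first-order weights are the separable triangular profile shown in Fig.~\ref{fig:shiftinvariant_simulation}(e).
\begin{verbatim}
import numpy as np

def first_order_weights(shape):
    # Separable first-order (triangular/bilinear) weights for one block.
    wy = 1.0 - np.abs(np.linspace(-1, 1, shape[0]))
    wx = 1.0 - np.abs(np.linspace(-1, 1, shape[1]))
    return np.outer(wy, wx) + 1e-8   # avoid exact zeros at block edges

def split_blocks(img, block, step):
    # Split the image into overlapping blocks; overlap = block - step.
    blocks, tops = [], []
    for y in range(0, img.shape[0] - block[0] + 1, step[0]):
        for x in range(0, img.shape[1] - block[1] + 1, step[1]):
            blocks.append(img[y:y + block[0], x:x + block[1]].copy())
            tops.append((y, x))
    return blocks, tops

def merge_blocks(blocks, tops, shape):
    # Recombine (locally deblurred) blocks with interpolation weights.
    acc, norm = np.zeros(shape), np.zeros(shape)
    w = first_order_weights(blocks[0].shape)
    for b, (y, x) in zip(blocks, tops):
        acc[y:y + b.shape[0], x:x + b.shape[1]] += w * b
        norm[y:y + b.shape[0], x:x + b.shape[1]] += w
    return acc / norm
\end{verbatim}
In the distributed setting, each block returned by \texttt{split\_blocks} would be deblurred on its own compute node (with the inter-node consensus updates of the proposed method) before \texttt{merge\_blocks} assembles the final estimate.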
In this paper, we considered nonblind image deblurring, i.e., the PSFs were known a priori; however, in real imaging scenarios, calibrating PSFs accurately is a tedious and challenging task. A more practical approach would be blind image deblurring, which estimates the PSFs from the observed image(s) and recovers a single crisp image. Thus, a natural next step would be to extend the \emph{proposed} algorithm toward a distributed blind image deblurring method that estimates the PSFs in a distributed fashion, imposing certain regularity among them, and eventually estimates the unknown crisp image.
\bibliographystyle{IEEEtran}
PG\,0043+039 is a bright (m$_v\sim15.5$) and
luminous quasar (M$_B=-26.11$) at a redshift of
z=0.38512.
A luminosity of $\nu L_{\nu}$~=~2.21 $\times
10^{44}$~erg~s$^{-1}$ at 3000~\AA\
has been determined before for this quasar
(Baskin \& Laor\citealt{baskin04}). This corresponds to
an Eddington luminosity log L/$L_{edd}$~=~-0.648 for a black hole mass of
$M = 8.9 \times 10^{9} M_{\odot}$
(Baskin \& Laor\citealt{baskin05}).
A first optical spectrum of PG\,0043+039 taken in Sept. 1990
has been published by Boroson \& Green\cite{boroson92}.
PG\,0043+039 has been identified by Bahcall et al.\cite{bahcall93}
and Turnshek et al.\cite{turnshek94,turnshek97}
as a weak broad absorption line (BAL) quasar
based on CIV BAL detected with the Hubble Space Telescope (HST).
Bechtold et al.\cite{bechtold02} describe the CIV absorber
in a reanalysis of the HST Faint Object Spectrograph (FOS) spectra
as a very narrow associated absorber.
PG\,0043+039 shows strong Fe II blends in the optical.
No narrow [OIII] or [OII] lines had been detected before by Turnshek et
al.\cite{turnshek94}. The observed continuum in the UV is atypical
in the sense that it is much weaker
relative to the optical continuum compared to normal quasars.
There is no evidence for a BAL caused by low-ionization
transitions of, for example, AlII or CII.
A ROSAT nondetection established PG\,0043+039 as an X-ray weak quasar
(Brandt et al.\citealt{brandt00}).
It was not detected in pointed observations
with the ASCA satellite in the year 1996
(Gallagher et al.\citealt{gallagher99}). Furthermore,
PG\,0043+039 is the only quasar in the PG sample
(Schmidt \& Green,\citealt{schmidt83}),
which was not detected in a dedicated XMM-Newton pointing
(Czerny et al.\citealt{czerny08}).
PG\,0043+039 is the most extreme X-ray weak quasar known to date, but
surprisingly only shows a weak BAL system.
The majority of BAL quasars are X-ray weak
because of a shielding gas (following Murray et al. \citealt{murray95}) or
an intrinsic X-ray weakness that produces more favorable conditions for
wind launching and driving (e.g., Baskin et al. \citealt{baskin13}).
A conclusive interpretation about the X-ray weakness in PG\,0043+039
was hampered by the absence of simultaneous
measurements, which is mandatory as both the X-ray flux and the BAL
system are known to be variable.
There is the possibility that the X-ray quietness of PG\,0043+039
was only a temporary event
or that it was a false conclusion driven by spectra that were not taken simultaneously in the optical/UV
and X-ray bands. Fluctuations in the accretion process
may have caused a short interruption or huge drop in the X-ray emission as seen
in several low state observations. Examples of the latter are
PG2112+059 (Schartel et al.\citealt{schartel10}) and 1H0707-495
(Fabian et al.\citealt{fabian11}). The source spectroscopic type can
change with time. This was, for example, observed for Fairall\,9 (Sey 1 to
Sey 2; Kollatschny et al.\citealt{kollatschny85}) or NGC\,2617
(Sey 1.8 to Sey 1; Shappee et al.\citealt{shappee14}).
The X-ray spectral characteristics can change with time because of
changes in the column density and/or the ionization state of the
X-ray absorbing material. Examples are NGC\,1365
(Risaliti et al.\citealt{risaliti05}) and NGC\,7583 (Bianchi et al.\citealt{bianchi09}).
BAL systems may develop with time as reported for
WPVS\,007 (Leighly et al.\citealt{leighly09}), where the change in the broad
UV absorption system was reflected in changes of the X-ray brightness
interpreted as due to both X-ray absorption and X-ray weakness.
Here we want to verify that the X-ray weakness
in PG\,0043+039 was not only a temporary event and was not merely caused by strong
X-ray and/or optical/UV variations
in this BAL quasar.
In a first paper (Kollatschny et al.\citealt{kollatschny15}, hereafter called
Paper I) we briefly presented our X-ray detection. Furthermore,
we discussed newly detected strong,
broad humps seen in the UV spectrum of PG\,0043+039
taken with the HST.
We attributed these humps to
cyclotron lines.
\section{Observations}
We took simultaneous X-ray, UV, and optical spectra of PG\,0043+039 in July
2013:
\subsection{XMM-Newton observations}
PG\,0043+039 was observed twice with XMM-Newton (Jansen et al.\citealt{Jansen2001}).
The first observation (Obs1 in the following) was performed on 15 June 2005 under ObsId 0300890101.
This observation was free of any periods of high background radiation implying exposure times of 26.6~ks for pn (Str{\"u}der et al.\citealt{Strueder2001}),
31.2~ks for MOS~1 (Turner et al.\citealt{Turner2001}), and 31.1~ks for MOS~2.
The second observation (Obs2) was performed on 18 July 2013 under ObsId 0690830201.
This observation was affected by high background radiation periods.
The data were processed with SAS 14.0.0 in January 2015, using the latest calibration and following the methods described in Schartel et al.\cite{Schartel2007}
and Piconcelli et al.\cite{Piconcelli2005}.
All effective areas were calculated applying the method {\it corrarea,} which empirically corrects the EPIC effective areas for the instrumental differences.
We screened for low background periods:
For the energy range from 0.2~keV to 12~keV, we extracted the counts that were registered within an annulus centered at the optical position of PG\,0043+039 with an inner radius of 1~arcmin and an outer radius of 11~arcmin for pn (14~arcmin for the MOSs).
To generate a light curve we binned the counts with 100~s.
We defined low background times as time intervals with a count rate below
6 c/s for pn and 4 c/s for MOSs.
We obtained a clean exposure time of 14.5~ks for pn, 29.0~ks for MOS1,
and 31.3~ks for MOS2.
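The screening step can be reproduced with a short sketch; the event-time array is a stand-in for the filtered EPIC event lists, and the thresholds are those quoted above.
\begin{verbatim}
import numpy as np

def clean_exposure(event_times, bin_s=100.0, max_rate=6.0):
    # Bin event arrival times into 100-s light-curve bins and keep only
    # bins below the count-rate threshold (6 c/s for pn, 4 c/s for MOS).
    edges = np.arange(event_times.min(),
                      event_times.max() + bin_s, bin_s)
    counts, _ = np.histogram(event_times, bins=edges)
    good = (counts / bin_s) < max_rate
    return good.sum() * bin_s   # clean exposure time in seconds
\end{verbatim}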
For each modeling of the X-ray spectra, we assumed Galactic foreground
absorption with an column density of
N$_H$~$=$~3.0~$\times$ 10$^{20}$~cm$^{-2}$ (Savage et al.\citealt{Savage1993}).
During Obs1 several observations
were performed with the OM (Mason et al.\citealt{Mason2001}).
We took the source fluxes and their errors for the image observations with different filters from the second release of the XMM OM Serendipitous Ultraviolet Source Survey catalog (XMM-SUSS2).
The description of the first release of the catalog can be found in Page et al.\cite{Page2012}.
The spectrum from OM V-grism
was manually extracted
using {\it omgsource}, to avoid the nearby spectrum of a star that contaminates
the background of the target spectrum in the automatic extraction by SAS.
During Obs2 OM could not operate as PG~0043+039 was too close to Uranus
to enable safe operation of the instrument.
\subsection{HST-COS FUV spectroscopy}
We observed the BAL PG\,0043+039 over one full HST orbit
at RA, Dec (J2000) = 00:45:47.230, +04:10:23.40
with an exposure time
of 1552 seconds on July 18 2013.
We used the far-ultraviolet (FUV) detector of the Cosmic Origins Spectrograph (COS) with the G140L grating and a 2.5 arcsec
aperture (circular diameter).
This spectral setup covers the wavelength range from $\sim$ 1140\,\AA\
to $\sim$ 2150~\AA\
with a resolving power of 2000 at 1500\,\AA{}.
To fill up the wavelength hole produced by the chip gap and to reduce
the fixed pattern noise, we split our observation into four separate segments
of 388 s duration at two different FP-POS offset positions and four different
central wavelengths.
The observed spectrum corresponds to $\sim$ 800\,\AA{} to $\sim$ 1550~\AA\
in the rest frame of the galaxy.
The original data were processed using the
standard CALCOS calibration pipeline.
We corrected this UV spectrum, as well as our optical spectra of PG\,0043+039,
for Galactic extinction.
We used the reddening value E(B-V) = 0.02087 deduced from
the Schlafly \& Finkbeiner\cite{schlafly11} recalibration of the
Schlegel et al.\cite{schlegel98} infrared-based dust map. The
reddening law of Fitzpatrick\cite{fitzpatrick99} with R$_{V}$\,=\,3.1
was applied to our UV/optical spectra.
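The dereddening step can be sketched with the Python {\it extinction} package (assumed to be available); the wavelength grid and flux array are placeholders for the calibrated COS products.
\begin{verbatim}
import numpy as np
import extinction   # provides the Fitzpatrick (1999) reddening law

wave = np.linspace(1140.0, 2150.0, 2048)  # observed wavelengths (Angstrom)
flux = np.ones_like(wave)                 # placeholder flux array

r_v, ebv = 3.1, 0.02087
a_lam = extinction.fitzpatrick99(wave, r_v * ebv, r_v)  # A(lambda) in mag
flux_dered = extinction.remove(a_lam, flux)  # flux * 10**(0.4 * A(lambda))
\end{verbatim}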
\subsection{Ground-based optical spectroscopy with the SALT
and HET telescopes}
We took one optical spectrum of PG\,0043+039 with the 10m Southern African
Large Telescope (SALT)
nearly simultaneously with the XMM/HST observations on July 21, 2013
under photometric conditions.
However, the moon was bright during this observation.
The spectrum was taken with the Robert Stobie Spectrograph
(RSS; see Burgh et al.\citealt{burgh03}) attached to the telescope
using the pg0900
grating with a 1.5 arcsec wide slit.
With a grating angle of 21.125 degrees, we covered the wavelength range from
6445 to 9400~\AA\ at a spectral resolution of 4.8~\AA\ (FWHM) and a reciprocal
dispersion of 0.97~\AA pixel$^{-1}$. The observed wavelength range corresponds
to a wavelength range from 4653 to 6786~\AA\ in the rest frame of the galaxy.
There are two gaps in the spectrum caused by the gaps between the three CCDs:
one between the blue and the central CCD chip as well as one between the
central and red CCD chip, covering the wavelength ranges
7425-7480~\AA\ and 8438-8491~\AA\ (5360-5400~\AA\ and 6092-6130~\AA\
in the rest frame).
The exposure time of our spectrum was 2200 seconds (37 minutes),
which yielded an S/N of 118 at 7020~$\pm$10~\AA{} in the observed frame.
In addition to the galaxy spectrum, necessary flat-field and
Xe arc frames were observed, as well
as spectrophotometric standard stars for flux calibration (Hiltner~600, LTT4363).
The spectrophotometric standard stars
were used to correct the measured counts for the combined
transmission of the instrument, telescope, and atmosphere
as a function of wavelength.
Flat-field frames
were used to correct for differences in sensitivity both between
detector pixels and across the field.
The bright moon caused fringes in the red CCD at around 9000~\AA{}.
We took a second optical spectrum of PG\,0043+039
with the 9.2m Hobby-Eberly Telescope (HET) at McDonald Observatory
on August 1, 2013 under nearly photometric conditions.
The spectrum was taken with the
Marcario Low Resolution Spectrograph (LRS)
mounted at the prime focus of HET. The detector was
a $3072\times1024$ 15 $\mu$m pixel Ford Aerospace CCD with $2\times2$ binning.
This spectrum covers the wavelength range from 4390\,\AA\
to 7275~\AA\ (LRS grism 2 configuration)
with a resolving power of 650 at 5000\,\AA\ (8.2\,\AA\ FWHM).
This wavelength range corresponds to 3170 to 5250~\AA{}
in the rest frame of the galaxy.
The spectrum was taken with an exposure time of
1500 seconds (25 minutes), which yielded an
S/N of 102 at 5120~$\pm$10~\AA{} and of 83 at 7020~$\pm$10~\AA{}.
The slit width was
2\arcsec\hspace*{-1ex}.\hspace*{0.3ex}0 projected on the
sky. We took
Xe spectra to enable the
wavelength calibration. A spectrum of the standard star BD40 was
observed for flux calibration as well.
Our SALT spectrum has a spatial
resolution of 0\arcsec\hspace*{-1ex}.\hspace*{0.3ex}2534 per binned pixel.
We extracted eleven columns from each of our object spectra
corresponding to 2\arcsec\hspace*{-1ex}.\hspace*{0.3ex}8.
Our HET spectrum has a spatial
resolution of 0\arcsec\hspace*{-1ex}.\hspace*{0.3ex}472 per binned pixel.
Here we extracted seven columns for our object spectrum
corresponding to 3\arcsec\hspace*{-1ex}.\hspace*{0.3ex}3.
The reduction of the spectra (bias subtraction, cosmic ray correction,
flat-field correction, 2D-wavelength calibration, night sky subtraction, and
flux calibration) was performed in a homogeneous way with IRAF reduction
packages (e.g., Kollatschny et al.\citealt{kollatschny01}).
We corrected the optical spectra for atmospheric absorption bands as well.
All wavelengths were converted to the rest frame of the galaxy (z=0.38512).
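As a consistency check of the quoted rest-frame ranges, the conversion follows directly from the redshift:
\[
\lambda_\mathrm{rest} = \frac{\lambda_\mathrm{obs}}{1+z}\,, \qquad
\frac{6445\,\mbox{\AA}}{1.38512} \simeq 4653\,\mbox{\AA}\,, \qquad
\frac{9400\,\mbox{\AA}}{1.38512} \simeq 6786\,\mbox{\AA}\,,
\]
in agreement with the SALT wavelength ranges given above.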
\section{Results}
Here we present the results of our observing campaign based on the obtained
spectral data in the
X-ray, UV, and optical frequency bands.
\subsection{X-ray flux in PG\,0043+039}
Visual inspection of the EPIC images of Obs1 does not reveal an X-ray counterpart for PG\,0043+039 and Czerny et al.\cite{czerny08} derived an upper limit for the source flux of $<$8.6~$\times$~10$^{-16}$~ergs~s$^{-1}$~cm$^{-2}$
for the 0.1 to 2.4 keV energy range.
The nearest X-ray source to PG\,0043+039 is 3XMM J004548.8+041018
(Watson et al.\citealt{Watson2009}) at a distance of 24.06~arcsec.
In 3XMM-DR5 the source has an EPIC (CR(pn) \& CR(MOS1) \& CR(MOS2))
count rate of 1.92~$\pm$0.10~$\times$~10$^{-2}$counts~s$^{-1}$
in the 0.2 to 12 keV energy range
and is most likely the counterpart to SDSS-DR8 J004548.80+041019.0.
In order to check for the weakest X-ray signal of PG\,0043+039, we reanalyzed
all three EPIC exposures together.
We extracted the possible source counts in a circle centered at the optical position of PG\,0043+039 with a radius of 10~arcsec for pn and 12~arcsec of MOS and we extracted the background counts from a circle centered at 0:45:43.62 +4:10:00.95 with a radius of 34~arcsec.
The background area is source free and located at the same CCD.
The combined analysis leads to a weak signal with
a total EPIC pn+MOS count rate of
3.7~$\pm$~1.1~$\times$~10$^{-4}$counts~s$^{-1}$ for the 0.3 to 12.0~keV energy range.
We performed the same analysis for three control positions
(0:45:48.15 +4:10:41.79, 0:45:49.73 +4:10:40.53, and 0:45:50.53 +4:10:20.74).
Each position has a distance of $\sim$24~arcsec to 3XMM J004548.8+041018
and is located at the same CCD as PG\,0043+039.
We obtained the following count rates: 0.43~$\pm$~9.42~$\times$~10$^{-5}$~counts~s$^{-1}$,
0.70~$\pm$~9.00~$\times$~10$^{-5}$~counts~s$^{-1}$, and
-0.92~$\pm$9.19~$\times$~10$^{-5}$~counts~s$^{-1}$.
The obtained count rates of the control positions are significantly lower than the signal obtained for the optical position of PG\,0043+039.
The obtained signal cannot be explained with the short distance to 3XMM J004548.8+041018
and we conclude that the combined analysis of all three EPIC cameras reveals a weak X-ray signal from PG\,0043+039.
3XMM J004548.8+041018 is present in Obs2, too, but the source shows
a significantly decreased flux with an EPIC count rate of
4.19~$\pm$0.73~$\times$~10$^{-5}$~counts~s$^{-1}$ for the
0.2 to 12 keV region.
PG\,0043+039 is clearly visible as a point source in the images of all three exposures.
We extracted source counts exactly as described for Obs1 above except that we centered the circle at the eye-determined center of the X-ray emission (0:45:47.07 +4:10:23.50) and obtained a count rate of 1.42~$\pm$0.17~$\times$~10$^{-3}$~counts~s$^{-1}$ for all EPIC cameras together, in the 0.3 to 12 keV energy band.
We extracted the background counts from a circle centered at 0:45:43.41 +4:10:04.25
with a radius of 34~arcsec.
We repeated the analysis for three control positions (0:45:48.03 +4:10:42.07, 0:45:49.66 +4:10:40.22, 0:45:50.43 +4:10:16.87) selected as described above and obtained count rates
that are in agreement with no source flux -0.63~$\pm$~0.93~$\times$~10$^{-4}$~counts~s$^{-1}$, -1.17~$\pm$~0.88~$\times$~10$^{-4}$~counts~s$^{-1}$, and
-1.40~$\pm$0.87~$\times$~10$^{-4}$~counts~s$^{-1}$.
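The background-subtracted rates above follow the standard aperture recipe; a minimal sketch with Poisson errors (all inputs are placeholders for the extracted counts and region areas):
\begin{verbatim}
import numpy as np

def aperture_rate(n_src, n_bkg, area_src, area_bkg, t_exp):
    # Background-subtracted source count rate and its Poisson error.
    # n_src, n_bkg: counts in the source and background regions;
    # area_src, area_bkg: geometric region areas; t_exp: exposure (s).
    scale = area_src / area_bkg
    rate = (n_src - scale * n_bkg) / t_exp
    err = np.sqrt(n_src + scale**2 * n_bkg) / t_exp
    return rate, err
\end{verbatim}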
\subsection{X-ray spectra of PG\,0043+039}
Given the low number of accumulated counts we added the pn, MOS1, and MOS2 spectra for each observation and calculated the corresponding auxiliary files.
For the calculations, each input file was weighted with the corresponding exposure time of the camera.
All modeling was done with {\it Xspec 12.8.2} (Arnaud, \citealt{Arnaud1996}).
We used C-statistics and a spectrum that was slightly binned, such that each bin contains, at the minimum, two counts.
In addition, for Obs. 2, we analyzed a spectrum that was binned such that each bin contained 15 counts.
For this spectrum, we applied the $\chi^2$-statistics and F-test.
\begin{figure*}
\centering
\includegraphics[width=8.1cm,angle=270]{xspec_nice_plot_30358.ps2}
\caption{EPIC spectrum of PG\,0043+039 from 2013 is shown in comparison to the best-fit power law absorbed by Galactic column density.
Data are slightly binned, such that each bin contains, at the minimum, two source counts.
}
\vspace*{-3mm}
\label{xspec_nice_plot_30358.ps2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8.1cm,angle=270]{xspec_nice_plot_31439.ps2}
\caption{EPIC spectrum of PG\,0043+039 from 2013 is shown in comparison to the best-fit power law absorbed by Galactic column density and intrinsic neutral absorption. Data are slightly binned, such that each bin contains, at the minimum, two source counts.
}
\vspace*{-3mm}
\label{xspec_nice_plot_31439.ps2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8.1cm,angle=270]{xspec_nice_plot_32282.ps2}
\caption{EPIC spectrum of PG\,0043+039 from 2013 is shown in comparison to
the best-fit intrinsic absorbed power law plus neutral iron line.
Data are slightly binned, such that each bin contains, at the minimum, two
source counts.
}
\vspace*{-3mm}
\label{xspec_nice_plot_32282.ps2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=8.1cm,angle=270]{xspec_nice_plot_13288.ps2}
\caption{EPIC spectrum of PG\,0043+039 from 2013 is shown in comparison to
a model assuming an absorbed primary power-law continuum and reflection on
distant, optically thick material.
Data are slightly binned, such that each bin contains, at the minimum, two
source counts.
}
\vspace*{-3mm}
\label{xspec_nice_plot_13288.ps2}
\end{figure*}
The errors of the best-fit parameters are provided at the 90\% confidence level unless stated otherwise.
For the description of absorption, we used the xspec model {\it tbnew,} which is based on Wilms et al.\cite{Wilms2000}, solar abundances from Wilms et al.\cite{Wilms2000}, and photoelectric cross-sections from Verner et al.\cite{Verner1996}.
\paragraph{Observation 2:}
\noindent
Figure~\ref{xspec_nice_plot_30358.ps2} shows the best-fit power law on the EPIC spectrum
of PG\,0043+039 from 2013 compared to the data.
The fit (C~$=$~61.5, d.o.f.~$=$~58) reveals a very hard spectrum with $\Gamma$~$=$~1.09$\pm$0.24.
We therefore modeled the data with a power law absorbed by neutral material at the redshift of the quasar (Fig.~\ref{xspec_nice_plot_31439.ps2}), which allowed us to decrease C by $\Delta$C~$=$~6.0.
The residuals show enhanced emission at about 4.8~keV, which could correspond to neutral iron K$_{\alpha}$ in the rest frame of the quasar.
We therefore added a neutral iron line to the model (E~$=$~6.4~keV, $\sigma$~$=$~10~eV),
which allowed us to further decrease C by $\Delta$C~$=$~5.8
(Fig.~\ref{xspec_nice_plot_32282.ps2}).
For the fit, we obtained C~$=$~49.7 for d.o.f.~$=$~56
with the following parameters:
N$_H$~$=$~$5.5_{-3.9}^{+6.9}$~$\times$10$^{21}$~cm$^{-2}$,
$\Gamma$~$=$~1.70$_{-0.45}^{+0.57}$,
N(power law)~$=$~$6.6_{-2.9}^{+6.8}$~$\times$10$^{-6}$~keV$^{-1}$~cm$^{-2}$~s$^{-1}$ at 1 keV and
N(Gauss)~$=$~$3.8_{-2.8}^{+4.1}$~$\times$10$^{-7}$~photons~cm$^{-2}$~s$^{-1}$.
For the total model, we obtained the following fluxes:
F(2.0-10.0~keV)~$=$~1.80$_{-0.29}^{+0.24}$~$\times$10$^{-14}$~ergs~cm$^{2}$~s$^{-1}$,
F(0.2-2.0~keV)~$=$~5.22$_{-0.95}^{+0.42}$~$\times$10$^{-15}$~ergs~cm$^{2}$~s$^{-1}$, and
F(0.2-12.0~keV)~$=$~2.48$_{-0.38}^{+0.18}$~$\times$10$^{-14}$~ergs~cm$^{2}$~s$^{-1}$, where the flux errors are provided for the 68\% confidence.
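For orientation, the absorbed power law plus iron line model can be set up in PyXspec along the following lines. This is a minimal sketch, not our exact fitting script: the input file name is a placeholder, and {\it tbabs}/{\it ztbabs} stand in for the {\it tbnew} absorption model used above.
\begin{verbatim}
from xspec import AllData, Model, Fit

AllData("epic_combined.pha")   # placeholder for the added EPIC spectrum
Fit.statMethod = "cstat"       # C-statistics for the low-count data

# Galactic absorption * intrinsic absorption * (power law + Fe line)
m = Model("tbabs*ztbabs*(powerlaw + zgauss)")
m.TBabs.nH = 0.03              # Galactic N_H = 3.0e20 cm^-2 (1e22 units)
m.TBabs.nH.frozen = True
m.zTBabs.Redshift = 0.38512
m.powerlaw.PhoIndex = 1.7      # starting value
m.zgauss.LineE = 6.4           # neutral Fe K_alpha, rest frame (keV)
m.zgauss.Sigma = 0.01          # sigma = 10 eV, effectively unresolved
m.zgauss.Redshift = 0.38512
m.zgauss.LineE.frozen = True
m.zgauss.Sigma.frozen = True

Fit.perform()
\end{verbatim}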
In addition, we modeled the EPIC spectrum of Obs.~2, binned such that each bin contains 15 counts, using $\chi^2$-statistics.
We obtained $\chi^2$~$=$~12.9 at d.o.f.~$=$~8 for a simple power-law fit
and $\chi^2$~$=$~4.6 at d.o.f.~$=$~6
for an intrinsically absorbed power law plus an iron line.
An F-test shows that the probability of finding the improvement by random chance
is below 5\%.
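The quoted probability can be verified with the standard F-test for additional model components (a sketch; scipy is assumed):
\begin{verbatim}
from scipy.stats import f

chi2_pl,  dof_pl  = 12.9, 8    # simple power law
chi2_abs, dof_abs = 4.6,  6    # absorbed power law + iron line

f_stat = ((chi2_pl - chi2_abs) / (dof_pl - dof_abs)) \
         / (chi2_abs / dof_abs)
p = f.sf(f_stat, dof_pl - dof_abs, dof_abs)
print(f_stat, p)   # F ~ 5.4, p ~ 0.045, i.e., below 5%
\end{verbatim}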
The fits applying the C-statistics show that allowing for intrinsic
absorption and then adding an iron line both improve the description
of the data, although the achieved decrease in C does not allow us to
claim a significant detection.
The F-test confirms that modeling the data with intrinsic absorption
and an iron line improves the description, but likewise does not
allow us to claim a significant detection.
X-ray weakness of quasars might be explained with a completely absorbed X-ray
continuum in combination with a weak so-called escaping reflection component.
Therefore, we modeled the EPIC spectrum of Obs.~2, assuming an absorbed primary
power-law continuum and an unabsorbed reflection component.
We described the reflection with the model {\it pexmon,} which is provided in
{\it Xspec} (Nandra et al.\citealt{Nandra2007}).
This model assumes neutral Compton reflection at distant, optically thick
material and considers line emission consistently
(Nandra et al.\citealt{Nandra2007}).
Figure~\ref{xspec_nice_plot_13288.ps2} shows the best-fit model
assuming an intrinsic absorbed power law plus
an unabsorbed reflection compared to the data.
For the fit, we obtained C~$=$~49.6 for d.o.f.~$=$~56.
The primary power-law continuum is absorbed with
N$_H$~$=$~6.3$_{-3.2}^{+5.0}$~$\times$10$^{21}$~cm$^{-2}$.
As the power-law index is poorly constrained, we fixed the index to $\Gamma$~$=$~1.9, which corresponds to the mean value obtained for radio-quiet quasars of the Palomar-Green (PG) Bright Quasar Survey sample for the 2keV to 10 keV energy range (Piconcelli et al.\citealt{Piconcelli2005}) and obtained
N(power law)~$=$~7.4$_{-2.2}^{+2.8}$~$\times$10$^{-6}$~keV$^{-1}$~cm$^{-1}$~s$^{-1}$.
For the reflection component, we fixed the photon index to $\Gamma$~$=$~1.9
(Piconcelli et al.\citealt{Piconcelli2005}), the cut-off energy to 100~keV,
the metals to solar abundance, and the inclination angle to 45 degrees.
We determined the iron abundance Fe$_{abund.}$~$=$~6.8$_{-5.9}^{+94.}$ and
N({\it pexmon})~$=$~$2.0_{-1.5}^{+2.0}$~$\times$10$^{-5}$~photons~cm$^{-2}$~s$^{-1}$ at 1~keV; see Nandra et al.\cite{Nandra2007} and Arnaud\cite{Arnaud1996} for further details.
\begin{figure*}
\centering
\includegraphics[width=10cm,angle=270]{pg0043_hst.ps}
\caption{HST-COS FUV spectrum of PG\,0043+039 corrected for Galactic
reddening. The brown line (see Paper I)
shows an approximation of the continuum. Below the spectrum
the locations of possible absorption lines are indicated.
}
\label{pg0043_hst.ps}
\end{figure*}
Finally, we tested whether an absorbed, reflection-only model could describe the data.
The idea is to assume a completely absorbed primary continuum so that only the reflected component can reach an observer in our direction.
The obtained C values were significantly larger with $\Delta$~C~$=$~26.1 for d.o.f.~$=$~55.
In addition, we tried to fit the spectrum, which is binned with 15 counts per bin, with this model by applying the $\chi^2$-statistics.
We were unable to obtain a statistically valid description of the data and therefore we exclude this scenario.
\paragraph{Observation 1:}
\noindent
Given the low number of counts collected during the XMM-Newton observation of
PG\,0043+039 in 2005, we analyzed these data in comparison with our findings described
in the previous paragraphs.
This analysis was guided by two questions: (1) Can we detect the components that
we found in Obs~2? and (2) Are the components absorbed by additional material that was not present in Obs~2?
We fitted the EPIC spectra of Obs.1 with a model consisting of an absorbed
power law plus iron emission line.
With exception of the normalizations, all parameters were fixed to the values
obtained for Obs.~2, e.g.,
N$_H$~$=$~5.5$\times$10$^{21}$~cm$^{-2}$ and $\Gamma$~$=$~1.70.
We
obtained C~$=$~66.3 for d.o.f.~$=$~40 with
N(power law)~$=$~1.33$_{-0.58}^{+0.69}$~$\times$~10$^{-6}$~keV$^{-1}$~cm$^{-2}$~s$^{-1}$ at 1 keV and
N(Gauss)~$=$~1.9$_{-1.9}^{+1.9}$~$\times$~10$^{-8}$~photons~cm$^{-2}$~s$^{-1}$.
Given these numbers we are not able to detect the iron line in Obs~1.
We then took the power-law continuum as determined for Obs~2 and derived
the formal absorbing column density required to model the data of Obs~1.
We obtained
N$_H$(power law)~$=$~5.4$_{-3.3}^{+6.9}$~$\times$~10$^{23}$~cm$^{-2}$
(C~$=$~75.5 for d.o.f.~$=$~41).
We applied the same procedure for the Gaussian line
fit and obtained:
N$_H$(Gauss)~$=$~1.7$_{-1.4}^{+1.5}$~$\times$~10$^{24}$~cm$^{-2}$
(C~$=$~83.7 for d.o.f.~$=$~41).
We followed the same strategy with the physical model assuming an absorbed
primary power-law continuum and Compton reflection on distant, optically
thick material (Nandra et al.\citealt{Nandra2007}).
Again we fixed all model parameters to the best-fit values obtained for Obs.~2
with the exception of the normalizations, which were free to vary.
We obtained
C~$=$~66.7 for d.o.f.~$=$~40 with
N(power law)~$=$~1.50$_{-0.72}^{+0.85}$~$\times$~10$^{-6}$~keV$^{-1}$~cm$^{-2}$~s$^{-1}$ at 1 keV and
N(reflection)~$=$~2.1$_{-2.1}^{+9.4}$~$\times$~10$^{-6}$~photons~cm$^{-2}$~s$^{-1}$.
In agreement with the analysis above, we conclude that we are unable
to detect the reflection component in the data of Obs.~1.
Similarly, we determined formal column densities for the physical model.
For each component (continuum and reflection) individually we froze the model parameters to the values of the best fit of Obs.~2 and determined
a formal column density required to describe the data of Obs.~1.
We obtained
N$_H$(power law)~$=$~4.3$_{-3.6}^{+7.0}$~$\times$~10$^{23}$~cm$^{-2}$
(C~$=$~76.3 for d.o.f.~$=$~41) and
N$_H$(reflection)~$=$~1.1$_{-1.1}^{+5.0}$~$\times$~10$^{24}$~cm$^{-2}$
(C~$=$~82.4 for d.o.f.~$=$~41).
\subsection{HST UV/FUV spectra of PG\,0043+039}
The UV spectrum of PG\,0043+039 that we took with the
HST in 2013 is shown in
Fig.~\ref{pg0043_hst.ps}.
The observed wavelength range from $\sim$1140\,\AA{}
to $\sim$2150~\AA{} corresponds to an intrinsic wavelength range of
$\sim$820\,\AA{} to $\sim$1550~\AA{}.
\begin{figure*}
\centering
\includegraphics[width=10cm,angle=270]{pg0043_hst_2013_1991.ps}
\caption{UV spectral variability: HST UV spectra of PG\,0043+039
taken in the
years 2013 (red)
and in 1991, with flux multiplied by a factor 1.8 (black), as
well as their difference spectrum (blue).
}
\label{pg0043_hst_2013_1991.ps}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=10cm,angle=270]{pg0043_het_salt.ps}
\caption{Combined optical spectrum of PG\,0043+039 taken with the
HET and SALT telescopes in 2013.
}
\label{pg0043_het_salt.ps}
\end{figure*}
\begin{table*}
\tabcolsep+2.8mm
\caption{Emission line intensities in rest frame and corrected for Galactic extinction}
\begin{tabular}{lcccccl}
\hline
\noalign{\smallskip}
Emission line &Flux & Rel.Flux & Wavelength range & \multicolumn{2}{c}{Pseudo-continuum} &Telescope \\
& & & [\AA{}] & blue side [\AA{}] & red side [\AA{}] & \\
\noalign{\smallskip}
(1) & (2) & (3) & (4) & (5) & (6) & (7)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
?? & 53.3$\pm{}$4. & .682 & 812 -- 869 & & & HST\\
cycB5,A7 & 50.0$\pm{}$4. & .640 & 927 -- 981 & & & HST\\
?? & 24.2$\pm{}$3. & .310 & 984 -- 1011 & & & HST\\
$\ion{O}{vi}\,\lambda 1038$ & 98.9$\pm{}$7. & 1.27 & 1011 -- 1054 & & & HST\\
cycA6 & 144$\pm{}$11. & 1.84 & 1054 -- 1102 & & & HST\\
cycB4 & 137$\pm{}$11. & 1.75 & 1118 -- 1161 & & & HST\\
$\ion{C}{iii}\,\lambda 1175$ & 33.2$\pm{}$11. & .425 & 1162 -- 1181 & & & HST\\
Ly$\alpha$ & 339$\pm{}$22. & 4.34 & 1181 -- 1234 & & & HST\\
$\ion{N}{v}\,\lambda 1243$ & 10.8$\pm{}$7. & .138 & 1234 -- 1248 & 1709 & & HST\\
cycA5 & 202$\pm{}$22. & 2.59 & 1243 -- 1287 & & & HST\\
$\ion{Si}{ii}\,\lambda 1306 + \ion{O}{i}\,\lambda 1303$ & 103$\pm{}$14 & 1.32 & 1287 -- 1318 & & & HST\\
$\ion{Si}{iv}\,\lambda 1403$ + cycB3 &93.9$\pm{}$22.&1.20 & 1375 -- 1460 & & & HST\\
$[\ion{O}{ii}]\,\lambda 3727$ & 0.33$\pm{}$ .04 & .004 & 3720 -- 3732 & 3719 -- 3721 & 3731 -- 3733 & HET\\
H$\delta$ & 3.6$\pm{}$ 1.0 & .046 & 4060 -- 4155 & 4040 -- 4060 & 4690 -- 4745 & HET\\
H$\gamma$ & 22.9$\pm{}$ 4.0 & 0.29 & 4265 -- 4435 & 4255 -- 4265 & 4435 -- 4445 & HET\\
\ion{Fe}{ii}\,$\lambda\lambda 4500$ & 98.1$\pm{}$ 4.0 & 1.26 & 4155 -- 4690 & 4040 -- 4060 & 4690 -- 4745 & HET\\
H$\beta$ & 78.1$\pm{}$ 4.0 & 1.00 & 4745 -- 4965 & 4690 -- 4745 & 5635 -- 5755 & HET/SALT\\
\ion{Fe}{ii}\,$\lambda\lambda 5020$ & 13.9$\pm{}$ 1.0 & 0.18 & 4965 -- 5070 & 4690 -- 4745 & 5635 -- 5755 & SALT\\
\ion{Fe}{ii}\,$\lambda\lambda 5320$ & 83.2$\pm{}$ 4.0 & 1.07 & 5070 -- 5635 & 4690 -- 4745 & 5635 -- 5755 & SALT\\
\ion{He}{i}\,$\lambda 5876$ & 5.3$\pm{}$ 1.0 & .068& 5790 -- 6005 & 5745 -- 5790 & 6005 -- 6045 & SALT\\
H$\alpha$ & 186.$\pm{}$ 15. & 2.38 & 6360 -- 6715 & 6310 -- 6360 & 6715 -- 6750 & SALT\\
\noalign{\smallskip}
\hline
\end{tabular}\\
Line fluxes (2) in units of 10$^{-15}$\,erg\,s$^{-1}$\,cm$^{-2}$.
\end{table*}
The HST-COS spectrum
has been smoothed by means of
two different running mean widths
($\Delta \lambda = 1.8$ and $5.7$\,\AA{} in the rest frame) to highlight
weaker spectral structures.
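The smoothing is a plain boxcar (running mean); a minimal sketch:
\begin{verbatim}
import numpy as np

def running_mean(wave, flux, width_aa):
    # Boxcar-smooth a spectrum with a running mean of the given
    # width (in Angstrom); wave must be (nearly) uniformly sampled.
    dlam = np.median(np.diff(wave))
    n = max(1, int(round(width_aa / dlam)))
    return np.convolve(flux, np.ones(n) / n, mode="same")
\end{verbatim}
It is applied here with widths of 1.8 and 5.7~\AA{} in the rest frame.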
We indicate the identifications of the strongest UV emission lines,
of the geo-coronal lines Ly$\alpha$ and $\ion{O}{i}\,\lambda 1302$,
as well as of other emission lines, which we attribute
to two
cyclotron systems A and B with
their 3rd, 4th, 5th, 6th, and 7th harmonics. The integer numbers identify the
emission humps with multiples of the cyclotron fundamental
(see Paper I). This means, for example, that cycA5 is the fifth harmonic of the
cyclotron system A. The second number gives the wavelength.
We discuss the relative intensities and the equivalent widths
of the strongest emission lines $\ion{O}{vi}\,\lambda 1038$, Ly$\alpha,$
and $\ion{N}{v}\,\lambda 1243$ in PG\,0043+039
in comparison to non-BAL quasars in Sect. 4.2.
We show a power-law continuum N$_\nu\sim\nu^{\alpha}$ with
$\alpha = 0.69 \pm 0.02$ based on the near- and far-UV spectra
(see Fig.~\ref{pg0043_sed_dered_log.ps}
in the discussion section).
The locations of possible absorption lines are indicated
below the spectrum.
We show the positions of the strongest absorption lines of an average
BAL quasar (taken from Baskin et al.\citealt{baskin13}, their Fig.~3).
The distribution of these possible absorption lines blueward of Ly$\alpha$
cannot produce artificial emission features that are comparable with
the observed spectral humps.
During the first inspection of our only FUV spectrum
(Fig.~\ref{pg0043_hst.ps}) it was difficult
to derive the UV continuum. However, regions in the long-wavelength ultraviolet
spectrum at about
1330--1350, 1700--1720, and 1975--2000~\AA{} are generally free
from strong absorption or emission features in AGN
(e.g., Gibson et al.\citealt{gibson09}).
Therefore, we combined our short-wavelength UV spectrum with the long-wavelength
UV spectrum taken in 1991 and tried to fit a continuum in the
long-wavelength region first.
Afterward, we extrapolated this continuum (a simple power law) to our
short-wavelength UV spectrum. This power law perfectly fits the
continuum in the 900--1000~\AA{} range.
Finally,
we used the spectral ranges at 895, 983, 1360, 1610, 1690, and 1980~\AA{}
with typical widths of 10\AA{} (see Paper I, Fig.~1) to fit the
UV continuum.
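The continuum fit itself reduces to a linear regression in log-log space over these line-free windows; a sketch (the half-width of 5~\AA{} reflects the typical 10~\AA{} window widths):
\begin{verbatim}
import numpy as np

WINDOWS = [895, 983, 1360, 1610, 1690, 1980]   # rest frame, Angstrom

def powerlaw_alpha(wave, n_nu, windows=WINDOWS, half_width=5.0):
    # Fit log N_nu = alpha * log nu + const over continuum windows,
    # corresponding to N_nu ~ nu**alpha.
    sel = np.zeros(wave.size, dtype=bool)
    for w0 in windows:
        sel |= np.abs(wave - w0) < half_width
    log_nu = np.log10(2.998e18 / wave[sel])    # nu in Hz, wave in AA
    alpha, _ = np.polyfit(log_nu, np.log10(n_nu[sel]), 1)
    return alpha
\end{verbatim}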
The obtained UV/FUV spectrum of PG\,0043+039 is exceptional in comparison to
other quasars and/or BAL quasars
(Hall et al.\citealt{hall02}; Baskin et al.\citealt{baskin13};
Saez et al.\citealt{saez12}) regarding the emission lines.
The intensities and the wavelengths of identified optical/UV lines
are given in Table 1. The relative line fluxes with respect to
H$\beta$~=~1.0 are given in column~3.
There is no detectable Lyman edge associated with the BAL absorbing gas.
Figure~\ref{pg0043_hst_2013_1991.ps}
presents the common wavelength range of our HST-COS UV spectrum
taken in 2013 together with the
HST-FOS spectrum of PG\,0043+039 taken in 1991
(Turnshek et al.\citealt{turnshek94};
multiplied by a factor of 1.8).
At the bottom of this figure the difference between the two spectra is shown
(i.e., the spectrum taken in 1991 subtracted from the spectrum taken in 2013).
Here some additional weak broad emission lines
(e.g., $\ion{C}{iii}\,\lambda 1175$,
$\ion{N}{v}\,\lambda 1243$, $\ion{Si}{iv}\,\lambda 1403$)
sitting on top of broad bumps can be identified by
comparing our recent UV spectrum with the HST-FOS spectrum taken in 1991
(see the discussion section). It is remarkable that these lines
were not identifiable in the UV spectrum taken in 1991.
\subsection{Optical spectra of PG\,0043+039}
A combined optical spectrum of PG\,0043+039 is shown in
Fig.~\ref{pg0043_het_salt.ps}
composed of the two spectra taken with the
HET and SALT telescopes in 2013.
\begin{figure*}
\centering
\includegraphics[width=10cm,angle=270]{pg0043_het_salt_kpno.ps}
\caption{Optical variability: combined optical spectra of PG\,0043+039
taken with the
HET and SALT telescopes in 2013, an optical spectrum taken with the KPNO
in 1990, as well as the difference
between the spectra taken in 2013 and 1990 (additionally shifted by
0.6). A horizontal line is added to the difference spectrum
to guide the eye.
}
\label{pg0043_het_salt_kpno.ps}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=13cm,angle=0]{pg0043_velo3x2.ps}
\caption{Possible absorption profiles belonging to the OVI, NV,
SiIV, CIV, MgII, and Ly$\alpha$
lines. The profiles are divided by the assumed continuum. The
velocity scale is calculated
relative to the reddest line of each multiplet. The location of possible
absorption lines, based on local minima, is
indicated with the vertical tick marks at different velocities.}
\vspace*{-3mm}
\label{pg0043_velo3x2.ps}
\end{figure*}
This spectrum has been corrected for Galactic extinction.
The blue spectrum taken with the
HET telescope and the red spectrum taken with the SALT telescope
perfectly overlap in the common H$\beta$ region. The observed total spectral
range from
$\sim$4390\,\AA{} to $\sim$9400~\AA{} corresponds to a rest-frame spectrum from
$\sim$3170\,\AA{} to $\sim$6790~\AA{}.
PG\,0043+039 is a strong optical FeII emitter, as noted by Turnshek et al.\cite{turnshek94}. In addition, Balmer lines
dominate the optical spectrum, and there are no indications of the presence of narrow
$[\ion{O}{iii}]\,\lambda 5007, \lambda 4959$ lines.
In contrast
to Turnshek et al.\cite{turnshek94}, we clearly detect
in our recent spectra
a broad \ion{He}{i}\,$\lambda 5876$ line and
a weak narrow
$[\ion{O}{ii}]\,\lambda 3727$ line.
The underlying continuum of the
optical spectrum shows a strong blue gradient.
A power-law continuum N$_\nu\sim\nu^{\alpha}$ with
$\alpha = -2.55 \pm 0.02$ is indicated based on
continuum points at 3325, 3550, 3805, and 4010~\AA{}
with typical widths of 10\AA{}.
We present in Table~1 the measured UV and optical emission line intensities
for PG\,0043+039 in rest frame and corrected for Galactic extinction.
We integrated the emission line intensities between the wavelength boundaries
given in this table. First we subtracted a linear pseudo-continuum
defined by the wavelength ranges (Col. 5,6), then we integrated the emission
line flux. In some cases, we only extrapolated the continuum from one side.
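The integration scheme amounts to the following sketch (window tuples are (start, end) in~\AA{}; the one-sided extrapolation mentioned above is omitted for brevity):
\begin{verbatim}
import numpy as np

def line_flux(wave, flux, line_win, blue_win, red_win):
    # Integrate the line flux above a linear pseudo-continuum anchored
    # on the mean fluxes in the blue- and red-side windows (Table 1).
    def anchor(win):
        m = (wave >= win[0]) & (wave <= win[1])
        return wave[m].mean(), flux[m].mean()
    (x1, y1), (x2, y2) = anchor(blue_win), anchor(red_win)
    m = (wave >= line_win[0]) & (wave <= line_win[1])
    cont = y1 + (y2 - y1) * (wave[m] - x1) / (x2 - x1)
    return np.trapz(flux[m] - cont, wave[m])
\end{verbatim}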
Figure~\ref{pg0043_het_salt_kpno.ps}
shows the combined optical spectra of PG\,0043+039 taken with the
HET and SALT telescopes in 2013 together with
an optical spectrum taken before with the KPNO 2.1m telescope in Sept. 1990
(Boroson \& Green\citealt{boroson92}).
The spectrum taken in 2013 is brighter
by a constant factor of 1.8 compared to the spectrum taken in 1990.
This variability factor is similar to that of the UV spectra.
In addition, the difference spectrum
between these two epochs
(additionally shifted by 0.6) is given in Figure~\ref{pg0043_het_salt_kpno.ps}.
We added a horizontal line to guide the eye.
There are no clear spectral differences to be seen for these
two epochs. However, an extended blue wing in H$\beta$
that can be recognized in the FeII-subtracted
subtracted spectrum of Boroson \& Green\cite{boroson92} might have varied.
\subsection{UV/opt. emission line profiles in PG\,0043+039}
\begin{table}
\tabcolsep+1.3mm
\caption{Line widths (FWHM) and shifts (uppermost 10 percent) of the strongest
broad emission lines.}
\begin{tabular}{llcl}
\hline
\noalign{\smallskip}
Emission Lines &Width &Shift &Telescope\\
&[\kms] &[\kms] &\\
\noalign{\smallskip}
(1) & (2) & (3) & (4)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\ion{O}{vi}\,$\lambda 1038$ & 4150 $\pm{}$ 200 & +390 $\pm{}$ 150 & HST\\
Ly$\alpha$ & 6300 $\pm{}$ 500 & -860 $\pm{}$ 150 & HST\\
H$\beta$ & 4750 $\pm{}$ 200 & 0 & SALT/HET\\
H$\alpha$ & 4010 $\pm{}$ 200 & 0 & SALT\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
cycA6\,$\lambda 1082$ & 10\,200 $\pm{}$ 500 & 0 & HST\\
cycB4\,$\lambda 1136$ & 8600 $\pm{}$ 500 & 0 & HST\\
cycA5\,$\lambda 1253$ & 12\,700 $\pm{}$ 500 & 0 & HST\\
\noalign{\smallskip}
\hline
\end{tabular}\\
\end{table}
The emission line profiles contain information about the kinematics
of their line emitting regions. The strongest single emission lines
in the optical
spectra of PG\,0043+039 are the Balmer lines H$\alpha$ and H$\beta$
(see Fig.~\ref{pg0043_het_salt.ps}). Besides that, the optical spectrum
is dominated by FeII blends.
The strongest single emission lines in the UV are the
Ly$\alpha$ and \ion{O}{vi}\,$\lambda 1038$ lines
(see Fig.~\ref{pg0043_hst.ps}). There are further line identifications in our
FUV HST-COS spectrum, which we attribute to cyclotron lines. These lines
have different profiles.
We showed the optical and UV profiles in velocity space in
Paper I.
We present in Table~2 the line widths, i.e., the full width at half maximum
(FWHM) and the shifts of their emission line centers (centroid of the
uppermost 10 percent).
All the broad emission lines exhibit very similar profiles and
have nearly identical line
widths of 4000 to 4800 \kms{} (FWHM) except Ly$\alpha$, which shows
a slightly broader line
width of 6300\,\kms. This difference might be caused by a varying continuum
or by an
additional underlying component in the Ly$\alpha$ line.
The width (FWHM) of our recent H$\beta$ line measurement
is narrower by 500 \kms with respect
to the spectrum taken by Boroson \& Green \cite{boroson92} in 1990. However, our
value agrees with that of Ho \& Kim \cite{ho09} in 2004.
The differences in line width might have been caused by line variations in
the meantime.
All the
cyclotron lines show very similar line widths of about 10\,000 \kms,
but their line shapes differ entirely from those
of the normal emission lines (FWHM$\sim5000~$\kms).
The [\ion{O}{ii}]$\lambda$3727 line is the only
narrow forbidden emission line detected in PG\,0043+039.
This [\ion{O}{ii}]$\lambda$3727 line exhibits a line width (FWHM) of 6.6\,\AA{},
corresponding to a velocity of 530 \kms (see Fig.~\ref{pg0043_CaII.ps}).
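The conversion from the measured width to a velocity is simply
\[
\Delta v = c\,\frac{\Delta\lambda}{\lambda_0}
\simeq 3\times10^{5}\,\mathrm{km\,s^{-1}}\times\frac{6.6}{3727}
\simeq 530\ \mathrm{km\,s^{-1}}\,.
\]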
The Balmer lines exhibit the same systemic redshift as this narrow
emission line $[\ion{O}{ii}]\,\lambda 3727$.
The \ion{O}{vi}\,$\lambda 1038$ shows an internal redshift
of 390 $\pm{}$ 150 \kms. However, this might be spurious because of
absorption of the blue \ion{O}{vi} doublet component
(see Fig.~\ref{pg0043_velo3x2.ps}).
The Ly$\alpha$ line is definitely blueshifted
by 860 $\pm{}$ 150 \kms.
\subsection{UV/Opt. absorption line systems in PG\,0043+039}
PG\,0043+039 has been classified
as a BAL quasar (Turnshek et al.\citealt{turnshek94})
based on a broad CIV absorption at a blueshift of $\sim$ 10\,000 \kms.
We searched in our far-UV spectrum for additional absorption lines
belonging to commonly expected absorption line systems of, for example,
the $\ion{O}{vi}\,\lambda 1038$,
$\ion{N}{v}\,\lambda 1243$, $\ion{Si}{iv}\,\lambda 1403$,
and Ly$\alpha$ lines.
The $\ion{C}{iv}\,\lambda 1550$ line profile seen in our spectrum taken in 2013
is very noisy
and only covers the blue wing
at the outermost edge of our HST spectrum. The $\ion{C}{iv}\,\lambda 1550$ line profile,
taken with the HST
in 1991, is overlaid in Fig.~\ref{pg0043_velo3x2.ps}.
We show in Fig.~\ref{pg0043_velo3x2.ps}
possible absorption profiles belonging to the OVI, NV, SiIV,
CIV, MgII, and Ly$\alpha$ lines.
The profiles are divided by the assumed continuum.
The velocity scale is calculated relative to the reddest
line of each multiplet.
The stronger blue component at about -1\,000 \kms is missing in the
$\ion{O}{vi}$\,doublet ($\lambda 1032,1038$). This points to an absorption
component at v $\sim$ 0 \kms. The same behavior is indicated in the other
high-ionization lines of the $\ion{C}{iv}\,\lambda 1550$ and
$\ion{N}{v}\,\lambda 1243$ doublets.
Additional possible positions of
absorption lines belonging to prominent
UV lines are indicated with vertical tick marks at
-20\,000, -16\,000, -13\,700, -11\,000, -9\,000 \kms.
Here we labeled local minima in the spectrum shortward
of the emission lines.
However, no absorption troughs could be unambiguously connected to
any of the emission lines except for the CIV absorption.
\begin{figure}
\includegraphics[width=8cm,angle=0]{pg0043_CaII.ps}
\caption{Enlargement of the optical rest-frame spectrum
around the narrow [\ion{O}{ii}]$\lambda$3727 emission and
CaII H and K absorption lines. The BAL~I system (blue) consists of
the CaH\,$\lambda 3968$, CaK\,$\lambda 3934$ lines
(blueshifted by $\sim$ 4900 \kms) and the
\ion{He}{i}\,$\lambda 3889$ line (blueshifted by $\sim$ 5600 \kms).
}
\vspace*{-3mm}
\label{pg0043_CaII.ps}
\includegraphics[width=6.5cm,angle=270]{pg0043_NaD.ps}
\caption{Enlargement of the optical rest-frame spectrum
around the intrinsic NaD absorption lines as well as around
a possible blueshifted NaD absorption.
}
\vspace*{-3mm}
\label{pg0043_NaD.ps}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm,angle=0]{pg0043_velo_bal1.ps}
\caption{Absorption lines of the BAL~I system (CaH\,$\lambda
3968$, CaK\,$\lambda 3934$, and \ion{He}{i}\,$\lambda 3889$ lines) in
velocity space.}
\label{pg0043_velo_bal1.ps}
\end{figure}
A further aspect of our present study is the detection
of additional blueshifted BALs in the optical spectrum
of PG\,0043+039.
In
Fig.~\ref{pg0043_CaII.ps} we present an extract of our optical spectrum
in the range of the narrow [\ion{O}{ii}]$\lambda$3727 emission line as well as
CaII H and K absorption lines.
Figure~\ref{pg0043_NaD.ps}
shows an enlarged section of our optical spectrum of PG\,0043+039
around the NaD absorption region.
We are able to identify
weak absorption lines of Na\,D\,$\lambda 5890/96$, and the
CaH\,$\lambda 3968$ and CaK\,$\lambda 3934$ lines at the
systemic velocity of the galaxy. Their absorption line equivalent widths
are listed in Table~3.
\begin{table*}
\tabcolsep+5.5mm
\caption{Optical/UV absorption line equivalent widths and their shifts.}
\begin{tabular}{llllcc}
\hline
\noalign{\smallskip}
Absorption Lines &$W_{\lambda}$ & \multicolumn{2}{c}{Pseudo-continuum} & Shift &comments\\
&[\AA{}] & blue side [\AA{}] &red side [\AA{}] & [\kms] &\\
\noalign{\smallskip}
(1) & (2) & (3) & (4) &(5) &(6)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Na\,D\,$\lambda 5890/96$ & .34$\pm{}$ .04 &5888 &5900 & 0 &\\
CaH\,$\lambda 3968$ & .097$\pm{}$ .01 &3962 &3977 & 0 &\\
CaK\,$\lambda 3934$ & .19 $\pm{}$ .05 &3929 &3944 & 0 &\\
\noalign{\smallskip}
CaH\,$\lambda 3968$ & .11$\pm{}$ .03 &3899 &3912 & -4800 $\pm{}$ 300 & BAL\,1\\
CaK\,$\lambda 3934$ & .10$\pm{}$ .03 &3865 &3877 & -5000 $\pm{}$ 200 & BAL\,1\\
\ion{He}{i}\,$\lambda 3889$ & .25$\pm{}$ .05 &3808 &3828 & -5600 $\pm{}$ 200 & BAL\,1\\
\noalign{\smallskip}
MgII\,$\lambda 2798$: & 34.0$\pm{}$6. &2531 &2779 & -19000 $\pm{}$ 1000 & \\
CIV\,$\lambda 1550$ (broad) & 19.2$\pm{}$1. &1456 &1537 & -11100 $\pm{}$ 1000 & \\
CIV\,$\lambda 1550$ (narrow) & .48$\pm{}$.1 &1542 &1552 & -800 $\pm{}$ 400 & \\
\noalign{\smallskip}
\hline
\end{tabular}\\
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=10cm,angle=270]{pg0043_sed_dered_log.ps}
\caption{Combined optical-UV spectra of PG\,0043+039, corrected
for Galactic reddening, taken
in the years 2013 (red lines) and 1990/1991 (blue lines).
In addition, the OM spectral data taken with the XMM-Newton satellite
in 2005 are shown (green). Furthermore, the power-law continua fitted
to the UV and optical spectral ranges
are shown.}
\label{pg0043_sed_dered_log.ps}
\end{figure*}
In addition to the absorption system at the systemic velocity,
we can verify an absorption system (BAL~I)
consisting of the CaH\,$\lambda 3968$, CaK\,$\lambda 3934$,
and \ion{He}{i}\,$\lambda 3889$ lines
(see Fig.~\ref{pg0043_CaII.ps}).
Figure~\ref{pg0043_velo_bal1.ps} shows the lines of the BAL~I system
in velocity space. The CaH\,$\lambda 3968$ and CaK\,$\lambda 3934$ lines
are blueshifted by 4900 \kms and the
\ion{He}{i}\,$\lambda 3889$ line is blueshifted by
$\sim$ 5600 \kms.
A similar absorption line system at a comparable
blueshift has been found before
in the peculiar BAL galaxy Mrk\,231 (Boksenberg et al.\citealt{boksenberg77}).
In this galaxy
even three BAL systems have been identified at different velocities
and they vary independently
(e.g., Kollatschny et al.\citealt{kollatschny92},
Lipari et al.\citealt{lipari09}).
However, in contrast to Mrk\,231 we see
no indication of a Na\,D
absorption at a blueshift of $\sim$ 5100 \kms{} within the error limits.
\subsection{Overall spectral slope in PG\,0043+039}
We present
a combination of our optical and UV spectra of PG\,0043+039 taken
in the summer of 2013 in Fig.~\ref{pg0043_sed_dered_log.ps}.
In addition, a spectrum covering the optical--UV transition region is overplotted, taken
with the coaligned 30-cm optical/UV telescope (OM) on board
the XMM-Newton satellite in 2005.
We multiplied the observed OM spectrum by a factor of 1.7 to align
it with the optical and UV spectra taken in 2013.
In addition, we present the power-law continua adapted
to the UV as well as optical spectral ranges.
There is a clear maximum in the overall continuum flux
at around 2500 \AA{} (rest frame).
The flux decreases toward shorter and longer wavelengths.
We compute a very steep UV power-law slope parameter
$\alpha_{ox}$=$-$2.55 $\pm{}$ 0.3 based on the near- and far-UV data.
A maximum in the transition region between the optical and UV spectral
range has been noted before by
Turnshek et al.\cite{turnshek94}.
They assigned this slope to intrinsic
reddening of E(B-V) = 0.11 mag from dust similar to that found in the Small
Magellanic Cloud (SMC).
\begin{figure}
\includegraphics[width=9.0cm,angle=0]{pg0043_sed_15_3_3.ps}
\caption{Combined overall spectrum (radio to hard X-ray) of PG\,0043+039.
}
\vspace*{-3mm}
\label{pg0043_sed_15_3_3.ps}
\end{figure}
The multiwavelength spectral energy distribution of PG\,0043+039
from the 6cm radio (VLA) to the hard X-ray (NuStar) frequencies
is shown in Fig.~\ref{pg0043_sed_15_3_3.ps}. The
radio observations with the VLA taken in 1982
were published by Kellermann et al.\cite{kellermann89}.
In addition to its faintness at X-ray wavelengths, PG\,0043+039 is a weak radio source.
This behavior is consistent with the general radio/X-ray luminosity
relation for radio-quiet quasars (Laor \& Behar\citealt{laor08}).
Serjeant \& Hatziminaoglou\cite{serjeant09}
presented the infrared IRAS observations at 60 and 100 $\mu$m taken in 1983.
The Spitzer infrared and GALEX UV data were taken from the respective archives.
\section{Discussion}
\subsection{Continuum and line intensity variations}
We observed variations in the optical, UV, and X-ray continuum flux
of PG\,0043+039.
Our observations show an increase in the optical and
UV continuum fluxes by a factor of 1.8 for 2013
compared to observations taken in 1990/1991.
Turnshek et al.\cite{turnshek94} reported declining optical and UV
fluxes of PG\,0043+039 from earlier epochs
toward their spectra taken in 1991. This means that PG\,0043+039
was in a low state in 1991.
PG\,0043+039 was by a factor of $\sim$1.6 brighter in the optical in
1981 in comparison to their observations in 1990.
The same is true for the UV continuum flux in 1986.
PG\,0043+039 has varied in the X-ray continuum as well.
While this object was not detected in X-rays in 2005
in the first analysis (Czerny et al.\citealt{czerny08}),
it has now been detected in 2013.
The flux increased
by a factor of 3.8~$\pm$0.9.
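This factor follows from the combined EPIC count rates given in Sect.~3.1 for the common 0.3 to 12~keV band:
\[
\frac{\mathrm{CR}(2013)}{\mathrm{CR}(2005)}
= \frac{1.42\times10^{-3}}{3.7\times10^{-4}} \simeq 3.8\,.
\]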
The X-ray variations of PG\,0043+039 were on the same order or even stronger
in comparison with long-term X-ray variations
of other BAL quasars
(see Saez et al.\citealt{saez12}).
For example, Mrk\,231 varied in the X-ray by a factor of 1.3 $\pm{}$ 0.2 over
a period of 20 years.
Details of the spectroscopic variations in PG\,0043+039 seem to be even more
complex in comparison to the general continuum variations.
In the UV difference spectrum, based on HST spectra taken in 1991 and 2013
(see Fig. 6), individual emission lines stick out
after correcting for the general intensity increase by a factor of 1.8.
These lines have not been seen in the 1991 UV spectrum.
In addition, there are evident variations in
some broader UV structures (in intensity and wavelength).
This variability behavior is different with respect
to what is known from normal spectral variations of AGN ( e.g.,
Kollatschny et al.\citealt{kollatschny14}).
A detailed analysis of these variations in PG\,0043+039 is beyond the scope of this paper.
\subsection{Emission lines in PG\,0043+039}
The Balmer lines H$\alpha$, H$\beta$, and H$\gamma$
(see Fig.~\ref{pg0043_het_salt.ps}) are the strongest emission lines
in the optical
spectrum of PG\,0043+039 besides strong FeII blends.
Again the optical spectrum of PG\,0043+039 shows similarities
to that of Mrk\,231; however, the
relative strength of the optical FeII blends is even stronger in Mrk\,231
(Boksenberg et al.\citealt{boksenberg77};
Lipari et al.\citealt{lipari09}) when compared to PG\,0043+039.
The line intensity ratio of the broad H$\alpha$ to H$\beta$ lines
has a value of 2.38
(Table~1) and cannot be understood with simple photoionization models
in which values of 2.8 or more are expected.
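This Balmer decrement follows directly from the line fluxes in Table~1,
\[
\frac{F(\mathrm{H}\alpha)}{F(\mathrm{H}\beta)}
= \frac{186\times10^{-15}}{78.1\times10^{-15}} \simeq 2.38\,,
\]
clearly below the canonical photoionization value of $\sim$2.8.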
The [\ion{O}{ii}]$\lambda$3727 line is the only verified
narrow forbidden emission line in PG\,0043+039.
We can verify this line in our optical spectrum in contrast to
Turnshek et al.\cite{turnshek94}, who reported its absence.
The [\ion{O}{ii}]$\lambda$3727 line is the only detected
forbidden line in Mrk\,231 as well
(Boksenberg et al.\citealt{boksenberg77};
Lipari et al.\citealt{lipari09}).
Based on the fact that there are only upper limits
for the [\ion{O}{iii}]$\lambda$5007 line intensities in both galaxies,
we find that
the intensity ratio [\ion{O}{ii}]$\lambda$3727/[\ion{O}{iii}]$\lambda$5007
is higher in PG\,0043+039 and Mrk\,231 in comparison to normal AGN.
It is known that there is an anticorrelation between the strengths
of the [\ion{O}{iii}]$\lambda$5007 line and the
FeII blends (Eigenvector 1; e.g., Sulentic et al.\citealt{sulentic00}).
However, an analogous anticorrelation of the
[\ion{O}{ii}]$\lambda$3727/[\ion{O}{iii}]$\lambda$5007 ratio
has not been studied yet.
Typically, the
Ly$\alpha$, \ion{O}{vi}\,$\lambda 1038$, and \ion{N}{v}\,$\lambda 1243$ lines
are the strongest emission lines in the UV range
from 800 to 1500\,\AA{} in AGN.
We derived line intensity ratios for these emission lines in PG\,0043+039
(see Table 1 and Fig.~\ref{pg0043_shull_hst_cos.ps}),
which are very similar to those seen in mean composite AGN spectra
(Shull et al.\citealt{shull12}). However, PG\,0043+039 shows additional
strong broad lines in its FUV spectrum, which
could not be attributed to known emission lines.
We present in Fig.~\ref{pg0043_shull_hst_cos2x1.ps} the combined UV-spectrum
of PG\,0043+039 and a composite HST/COS spectrum of 22 non-BAL quasars
taken from Shull et al.\cite{shull12}. The two spectra differ in their
UV continuum gradients
and emission line spectra.
\begin{figure*}
\centering
\includegraphics[width=9.8cm,angle=270]{pg0043_shull_hst_cos2x1.ps}
\caption{Comparison of the UV emission line intensities in
PG\,0043+039 (upper panel) with respect to a composite HST/COS spectrum of 22 AGN
(Shull et al.\citealt{shull12}, lower panel). We combined the two
ultraviolet spectra of PG\,0043+039 taken with the HST in
the years 2013 and 1991. The UV line and continuum intensities increased
by a factor of 1.8 between these two observations. We multiplied the UV
spectral flux taken in 1991 with this factor to match the UV observations taken
in 2013. The continua of both spectra are indicated as
brown lines. }
\vspace*{-3mm}
\label{pg0043_shull_hst_cos2x1.ps}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=9.8cm,angle=270]{pg0043_shull_hst_cos.ps}
\caption{Comparison of the relative UV emission line intensities in
PG\,0043+039 with respect to a composite HST/COS spectrum of 22 AGN
(Shull et al.\citealt{shull12}). The continua have been subtracted first and
both spectra are scaled to the same $\ion{O}{vi}\,\lambda 1038$
and $\ion{N}{v}\,\lambda 1243$ line intensities.
}
\vspace*{-3mm}
\label{pg0043_shull_hst_cos.ps}
\end{figure*}
We compare in Fig.~\ref{pg0043_shull_hst_cos.ps} the relative UV emission
line intensities in
PG\,0043+039 with respect to the composite HST/COS AGN spectrum.
We subtracted the continua first and
scaled both spectra to match the $\ion{O}{vi}\,\lambda 1038$
and $\ion{N}{v}\,\lambda 1243$ line intensities by means of
a single scaling factor.
Both spectra
are similar with respect to the relative intensities of the
strongest emission lines, $\ion{O}{vi}\,\lambda 1038$, Ly$\alpha$
and $\ion{N}{v}\,\lambda 1243,$ as well as their equivalent widths
(see Table~4). However, they differ with respect to
the \ion{C}{iv}\,$\lambda 1550$ line absorption
and with respect to the strong broad humps.
In Paper I we presented a modeling of these observed strong humps in
PG\,0043+039 by means of cyclotron lines.
They are not seen in the composite AGN spectrum.
We derived plasma
temperatures of T $\sim$ 3~keV
and magnetic field strengths of B $\sim$ 2 $\times10^{8}$ G
for the cyclotron line-emitting regions close to the black hole.
With this modeling, we could explain the
wavelength positions of the broad humps and their relative intensities.
Figure~\ref{pg0043_cyc_nu.ps} shows the UV spectrum of PG\,0043+039
in frequency space with the identifications of the cyclotron systems A and B
with their second to seventh harmonics in addition to the normal emission lines
(see Paper I for more details).
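For orientation, the hump positions follow from the cyclotron fundamental. Neglecting relativistic and thermal corrections, the energy of the $n$th harmonic in a field $B$ is
\begin{equation}
E_n \,=\, n\,\frac{\hbar e B}{m_\mathrm{e} c} \,\approx\, 11.6\,n\,\left(\frac{B}{10^{12}\,\mathrm{G}}\right)\,\mathrm{keV},
\end{equation}
so that for $B \approx 2\times10^{8}$\,G the fundamental lies at $E_1 \approx 2.3$\,eV ($\lambda_1 \approx 5400$\,\AA{}), and the harmonics $n=3$ to 7 fall at roughly 1800 to 770\,\AA{}, i.e., in the FUV range of the observed humps.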
\begin{figure*}
\centering
\includegraphics[width=10cm,angle=270]{pg0043_cyc_nu.ps}
\caption{Combined ultraviolet spectra of PG\,0043+039 taken with the HST in 2013 and 1991. We indicate the identifications of the strongest
UV emission lines, the geo-coronal lines, and both cyclotron
systems B and A with
their 3rd to 7th harmonics. The integer numbers identify the
emission humps with multiples of the cyclotron fundamental.
We indicate the power-law continuum N$_\nu\sim\nu^{\alpha}$ with
$\alpha = 0.69 \pm 0.02.$
The modeling of both cyclotron line systems B and A is given at the bottom.
}
\vspace*{-3mm}
\label{pg0043_cyc_nu.ps}
\end{figure*}
Independent indications for magnetic fields of at least tens of Gauss (and
possibly considerably higher) on scales on the order of light days from
a central black hole have recently been reported by Marti-Vidal et
al.\cite{marti-vidal15}, who detected Faraday rotation at sub-mm wavelengths
in the nearby AGN 3C84.
PG\,0043+039 is a very luminous AGN (M$_B$=$-26.11$) compared to Mrk\,231
(M$_B$=$-21.3$).
However, the blueshift of the Ly$\alpha$ line is stronger
in Mrk\,231 ($-$3500 \kms; Veilleux et al. \citealt{veilleux13})
than in PG\,0043+039 ($-$860 \kms).
Typical AGN show blueshifts of $-$400 \kms.
Some additional information is contained in the line profiles as well.
Normally, the broad AGN emission lines show different line widths
(e.g., Mrk\,110; Kollatschny\citealt{kollatschny03a,kollatschny03b})
caused by their formation at different distances
from the central ionizing source.
Besides that, the AGN Mrk\,231, for example, shows a peculiar Ly$\alpha$ profile
compared to the Balmer lines (Veilleux et al.\citealt{veilleux13}).
In contrast to that, all the broad emission lines in PG\,0043+039 show similar
line profiles and similar line widths
(see Fig.~2 in Paper I)
independent of their ionization state.
This is an indication that the broad emission line region
in PG\,0043+039 is not strongly
stratified and that
all these lines originate at similar distances
from the central ionizing source.
\subsection{Absorption line systems in PG\,0043+039}
Bahcall et al.\cite{bahcall93}
and Turnshek et al.\cite{turnshek94,turnshek97} classified PG\,0043+039
as a BAL quasar
based on a broad CIV absorption line at a blueshift of $\sim$ 10,000 \kms
detected with the HST in 1991.
This study did not find convincing evidence for BALs in low-ionization
transitions such as AlII or CII, apart from MgII.
On the other hand, only about 10 percent of the BAL quasars show
these lines (e.g., Trump et al.\citealt{trump06};
Allen et al.\citealt{allen11}).
Bechtold et al.\cite{bechtold02} reanalyzed
the absorption line spectra of all quasars
taken with the high-resolution gratings of the FOS
on board the HST. They could not confirm the existence
of broad absorption lines in PG\,0043+039 as reported before by
Turnshek et al.\cite{turnshek94}. They only verified narrow
absorption in the Ly$\alpha$ and \ion{C}{iv}\,$\lambda 1550$ lines at the same
redshift as the emission lines.
Based on the UV absorption lines profiles in Fig.~\ref{pg0043_velo3x2.ps},
as well as on the UV spectral distribution in Fig.~\ref{pg0043_sed_dered_log.ps},
we confirm a central absorption component at v $\sim$ $-$800 \kms
in the high-ionization doublet lines $\ion{O}{vi}$, $\ion{C}{iv}$,
$\ion{N}{v}$,
and, in addition, a strong, broad
absorption component at $-$11\,000 \kms in the \ion{C}{iv}\,$\lambda
1550$ line (see Table 3).
The possible identification of a strong, broad absorption component belonging
to the
MgII\,$\lambda 2798$ line at $-$19\,000 \kms cannot
be unambiguously demonstrated.
There is a strong absorption structure in the UV/optical
spectral distribution at 2620~\AA{} rest frame
(see Fig.~\ref{pg0043_sed_dered_log.ps}).
However, this structure might not be a real absorption line and it might
only be simulated by broad emission line humps
at shorter and longer wavelengths.
The single clear identification of the broad
\ion{C}{iv}\,$\lambda 1550$ absorption at $-$11\,000 \kms
in PG\,0043+039 is atypical compared to the UV absorption
properties in other BAL
quasars (see, e.g., Baskin et al.\citealt{baskin13};
Hamann et al.\citealt{hamann13}).
In general, BAL
quasars also show absorption features
from other high- and low-ionization lines,
such as OVI and Ly$\alpha$.
On the other hand, the UV spectrum of PG\,0043+039
shows similarities with the UV spectrum of Mrk\,231.
Veilleux et al.\cite{veilleux13} demonstrated the surprising
absence of UV absorption in the FUV spectrum in Mrk\,231.
Only Ly$\alpha$ and \ion{C}{iv}\,$\lambda 1550$ absorption have been
unambiguously identified
in Mrk\,231. In PG\,0043+039,
not even an indication of Ly$\alpha$
absorption has been seen.
In the optical spectral range, we verified a BAL~I system at a blueshift
of $\sim$ 4900 \kms in the CaH\,$\lambda 3968$ and CaK\,$\lambda 3934$
lines and at a blueshift of $\sim$ 5600 \kms in the
\ion{He}{i}\,$\lambda 3889$ line.
A similar system with similar shifts has been identified before
in the peculiar BAL galaxy Mrk\,231 (Boksenberg et al.\citealt{boksenberg77}).
The velocity of the high-ionization line \ion{He}{i} is slightly
higher than that of the low-ionization \ion{Ca}{ii} lines. It has been proposed
by Voit et al.\cite{voit93} that the low-ionization lines could be produced
in dense cores with relatively lower velocity, while the high-ionization lines
are produced in a thinner wind, possibly ablated from the cores and accelerated
(see also the discussion in Leighly et al.\citealt{leighly14}).
In Mrk\,231
even three BAL systems at different velocities have been found
and these systems varied independently
(e.g., Kollatschny et al.\citealt{kollatschny92};
Lipari et al.\citealt{lipari09}).
In PG\,0043+039 there is no indication for a Na\,D
BAL absorption at a blueshift of $\sim$ 5100 \kms
within the error limits. This
nondetection of Na\,D BAL absorption is an important clue since
Na\,D BAL absorption requires
dust (e.g., Veilleux et al.\citealt{veilleux13}). This strengthens the model
that dust absorption is not responsible for the weak FUV flux in PG\,0043+039.
\subsection{Opt/UV/X-ray slope}
Based on our simultaneous observations in different frequency ranges,
we could show that the X-ray faintness of PG\,0043+039 is intrinsic
and not simulated by variations.
The $\alpha_{ox}$ gradient is shown in
Fig.~\ref{luvVSstra_weak_v2.ps} for a sample of AGN
as a function of the monochromatic
luminosity at rest-frame $2500\,$\AA{} (upper panel) and the monochromatic
luminosity at 2 keV versus the monochromatic
luminosity at $2500\,$\AA{} (lower panel).
Black-filled circles indicate the SDSS objects with 0.1$\leq$z$\leq$4.5 from
Strateva et al. \cite{strateva05}, while gray-filled circles label the
data from Steffen et al.\cite{steffen06}.
The long-dashed lines are the best-fit linear relations for the samples
by Strateva et al. \cite{strateva05}.
The dotted line represents the best fit to the sample of Steffen
et al.\cite{steffen06}.
Further measurements of extremely X-ray weak quasars
(Saez et al.\citealt{saez12}) are indicated by cyan circles.
Additional observations are given for the quasars PG~2112+059
(blue open triangle; Schartel et al. \citealt{schartel10},
\citealt{Schartel2007}),
PG~1535+547 (green star; Ballo et al. \citealt{ballo08}),
and PG~1700+518 (yellow open circle; Ballo et al. \citealt{ballo11}).
Among all these galaxies, PG\,0043+039 shows the most extreme $\alpha_{ox}$ gradient
($\alpha_{ox}$=$-$2.55) based on the data taken in 2005.
The X-ray flux of this AGN increased by a factor of 3.8 in 2013.
However, it still shows a very extreme
$\alpha_{ox}$ gradient ($\alpha_{ox}$=$-$2.37) at that epoch.
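The quoted values assume the standard definition of the optical-to-X-ray spectral index used in these samples (e.g., Strateva et al.\citealt{strateva05}),
\begin{equation}
\alpha_{ox} \,=\, \frac{\log \left(L_{2\,\mathrm{keV}}/L_{2500}\right)}
{\log\left(\nu_{2\,\mathrm{keV}}/\nu_{2500}\right)}
\,=\, 0.384\, \log\left(\frac{L_{2\,\mathrm{keV}}}{L_{2500}}\right),
\end{equation}
where $L_{2500}$ is the monochromatic luminosity at rest-frame $2500\,$\AA{}; $\alpha_{ox}=-2.37$ then corresponds to an X-ray-to-UV luminosity ratio of about $10^{-6.2}$.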
\begin{figure}
\includegraphics[width=10.cm,angle=0]{luvVSstra_weak_v2.ps}
\caption{The $\alpha_{ox}$ gradient as a function of the monochromatic
luminosity at rest-frame $2500\,$\AA{} (upper panel) and the monochromatic
luminosity at 2 keV versus the monochromatic
luminosity at $2500\,$\AA{} (lower panel).
Black-filled circles indicate the SDSS objects with 0.1$\leq$z$\leq$4.5 from
Strateva et al. \cite{strateva05}, while gray-filled circles label the
data from Steffen et al.\cite{steffen06}.
The dotted line represents the best fit to the sample of Steffen et
al.\cite{steffen06}.
Further measurements of extremely X-ray weak quasars
are indicated with cyan circles (Saez et al.\citealt{saez12}).
The positions of PG\,0043+039 are highlighted for
the years 2005 (red square)
and 2013 (black circle).
}
\vspace*{-3mm}
\label{luvVSstra_weak_v2.ps}
\end{figure}
We presented the extreme UV/X-ray weakness of PG\,0043+039 compared to a mean
spectral distribution of QSOs (Richards et al.\citealt{richards06}) in Fig.~3 of Paper I.
There is the question as to what causes the extreme X-ray faintness of
PG\,0043+039. We observed PG\,0043+039 simultaneously in the optical, UV, and
X-ray bands to exclude the possibility that an apparent X-ray weakness is
simulated by variations. Another possible explanation of the extreme X-ray faintness
could be the absorption of the X-ray flux due to gas.
However,
the X-ray faintness of PG\,0043+039 is consistent with the extrapolation
of its faint UV-flux (Fig.~3 of Paper I). Furthermore, the upper
limit seen in the hard X-ray flux, and confirmed by the simultaneous
NuSTAR observations (Luo et al.\citealt{luo14}),
is a further indication of intrinsic X-ray faintness
(Fig.~\ref{pg0043_sed_15_3_3.ps}).
Other arguments against X-ray absorption
come from the results of X-ray modeling;
the X-ray spectra show no sign of extreme absorption.
Intrinsically absorbed power-law fits give a power-law index
of 1.7, which is compatible with the standard value of 1.9,
and N$_H$~$=$ 5.5~$\times$ 10$^{21}$~cm$^{-2}$,
which is not enough to explain
the extreme weakness of the quasar. Moreover, the attempt to explain the X-ray spectrum with a completely absorbed primary
continuum and an absorbed reflection failed (Sect.~3.2).
A possible explanation for the
weakness of the X-ray flux might be the
suppression of a hot inner accretion disk in PG\,0043+039.
It is generally accepted that the main contribution to the X-ray flux of AGN
comes from a hot corona surrounding the inner accretion disk, via
Comptonization of UV photons from the disk
(Haardt \& Maraschi\citealt{haardt91}, Luo et al.\citealt{luo14}).
If there is no inner accretion disk, then
a strong X-ray emitting corona cannot evolve.
The link between the disk and the corona has also been
independently demonstrated
by Gliozzi et al.\cite{gliozzi13}, based on an
AGN long-term monitoring campaign with Swift.
There are further indications for the nonexistence
of a hot inner accretion disk in
PG\,0043+039 (besides the X-ray weakness): the UV/FUV flux is suppressed in
this galaxy. There is a maximum in the UV continuum flux at around
$\lambda \approx 2500$\AA{}.
The flux decreases toward shorter wavelengths
in contrast to most other AGN in which a maximum is found at around
$\lambda \approx 1000$\AA{} (e.g., Vanden Berk et
al.\citealt{vandenberk01}).
A turnover at 1000\,\AA{} corresponds to a
maximal accretion disk temperature of $T_{\text{max}} \approx 50\,000$\,K
(Laor \& Davis\citealt{laor14} and
references therein). Therefore, the turnover at 2500\,\AA{}
seen in PG\,0043+039 corresponds
to a temperature of only about 20\,000\,K.
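This estimate simply uses the roughly Wien-like inverse scaling of the turnover wavelength (in \AA{}) with the maximal disk temperature:
\begin{equation}
T_{\text{max}} \propto \lambda_{\text{turn}}^{-1}: \quad
T_{\text{max}} \approx 50\,000\,\mathrm{K} \times \frac{1000}{2500} \approx 20\,000\,\mathrm{K}.
\end{equation}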
It is generally accepted that the observed
FUV continuum emission in AGN is produced by an accretion disk.
The weak UV flux in PG\,0043+039
has been interpreted by Turnshek et al.\cite{turnshek94}
as intrinsic reddening by SMC-like dust. However, their spectral fit
with this kind of reddening correction
did not match the observations. Furthermore,
that assumption leads to a surprisingly low BAL region column density.
In the end, they conceded that intrinsic dust extinction might not be the only
plausible explanation for the turndown in the continuum flux shortward
of 2200\,\AA{}.
Similarly, Veilleux et al.\cite{veilleux13} tried to model the
turndown in the UV
continuum flux in Mrk\,231. However, another plausible explanation for
the weak FUV continuum
flux might be the hypothesis that there is simply no FUV flux emitted because
of a nonexisting hot inner accretion disk.
In the same spirit,
some quasars have been identified that show unusually weak blue continua
(e.g., Hall et al.\citealt{hall02}; Meusinger et al.\citealt{meusinger12}).
We see no Lyman edge ($\lambda < 912$\,\AA{})
in our spectrum of PG\,0043+039, in accordance with the ultraviolet composite
spectrum of some AGN (Shull et al.\citealt{shull12}).
Baskin et al.\cite{baskin13} also
saw no detectable Lyman
edge associated with the BAL absorbing gas.
It is known that cataclysmic variables such as AM\,Her stars, so-called polars,
as well as intermediate polars host strong magnetic
fields. Their magnetic fields, on
the order of $10^{8}$\,G, are responsible for the prevention of the
formation of an (inner) accretion disk in these objects.
Analogously,
the expansion of magnetic bridges between the ergosphere and the disk around
rapidly rotating black holes could be responsible for an outward shift
of the inner accretion disk (Koide et al.\citealt{koide06}).
Another indication of a less extended accretion disk comes from the
widths of the Balmer, Ly$\alpha$, and
\ion{O}{vi}\,$\lambda 1038$ lines (see Paper I).
Typically, the broad emission lines have different widths,
as they originate at different distances from the central ionizing
source (e.g., in Mrk\,110, Kollatschny\citealt{kollatschny03a,kollatschny03b}).
However, all the low- and high-ionization lines in PG\,0043+039 show
the same widths, indicating
that they originate at the same distances from the central ionizing region.
\begin{table}
\tabcolsep+5.8mm
\caption{Comparison of the
composite spectrum of ``normal'' AGN (Shull et al.\citealt{shull12}) with the
spectrum of PG\,0043+039. Equivalent widths of selected UV lines.}
\begin{tabular}{lrr}
\hline
\noalign{\smallskip}
Emission Lines &\multicolumn{2}{c}{Equivalent width [\AA{}]}\\
&Shull+12 & PG\,0043+039 \\
\noalign{\smallskip}
(1) & (2) & (3)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\ion{O}{vi}\,$\lambda 1038$ & 23 $\pm{}$ 3 & 26 $\pm{}$ 3\\
Ly$\alpha$ & 115 $\pm{}$ 5 & 120 $\pm{}$ 5 \\
\ion{N}{v}\,$\lambda 1243$ & 23 $\pm{}$ 2 & 34 $\pm{}$ 3 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\section{Summary}
We took deep X-ray spectra with the XMM-Newton satellite,
FUV spectra with the HST,
and optical spectra of PG\,0043+039 with the
HET and SALT telescopes in July 2013.
PG\,0043+039 is one of the weakest quasars in X-rays. We barely
detected this object in our new deep X-ray exposure.
It has
an extreme $\alpha_{ox}$ continuum gradient of $\alpha_{ox}$=$-$2.37.
However, the X-ray spectra show no sign of extreme absorption.
Moreover, an attempt to
explain the X-ray spectrum with a completely absorbed primary
continuum and an absorbed reflection has failed.
PG\,0043+039 shows a maximum in the overall continuum flux at around
$\lambda \approx 2500$\AA{}
in contrast to most other AGN where a maximum is found at
shorter wavelengths. In combination
with its intrinsic X-ray weakness,
this is an indication of an accretion disk that is cooler
than in most other AGN.
PG\,0043+039 has been classified as a BAL
quasar before, based on a broad
CIV absorption. We detected no further absorption lines in our
FUV spectra. However, in the optical we found a narrow
BAL system in the CaH\,$\lambda 3968$, CaK\,$\lambda 3934$ lines
(blueshifted by 4900 \kms), and in the
\ion{He}{i}\,$\lambda 3889$ line (blueshifted by 5600 \kms).
The UV/optical flux of PG\,0043+039 has increased by a factor of 1.8
compared to spectra taken in 1990/1991. Some UV emission lines
appeared in the new UV spectrum taken in 2013, which were not
present in the spectrum from 1991.
In addition, the UV spectrum is highly peculiar, showing no
Lyman edge. Furthermore, strong broad humps are seen
in the FUV that have not been
identified before in other AGN.
We modeled the observed strong humps in the
UV spectrum of PG\,0043+039
by means of cyclotron lines.
We derived plasma
temperatures of T $\sim$ 3~keV
and magnetic field strengths of B $\sim$ 2 $\times10^{8}$ G
for the cyclotron line-emitting regions close to the black hole (see Paper I).
\begin{acknowledgements}
This work has been supported by the DFG grant Ko 857/32-2.
Some of the observations reported in this paper were obtained with
the Southern African Large Telescope (SALT).
The Hobby-Eberly Telescope (HET) is a joint project of the University of
Texas at Austin, the Pennsylvania State University, Stanford University,
Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at
G\"ottingen.
This research has made use of the NASA/ IPAC Infrared Science Archive,
which is operated by the Jet Propulsion Laboratory, California Institute
of Technology, under contract with the National Aeronautics and Space
Administration.
Some GALEX data presented in this paper were obtained from the
Mikulski Archive for Space Telescopes (MAST). STScI is operated by the
Association of Universities for Research in Astronomy, Inc., under NASA
contract NAS5-26555. Support for MAST for non-HST data is provided by
the NASA Office of Space Science via grant NNX09AF08G and by other
grants and contracts.
\end{acknowledgements}
\begin{document}
\definecolor{MyDarkBlue}{rgb}{1, 0.9, 1}
\lstset{language=Matlab,
basicstyle=\footnotesize,
commentstyle=\itshape,
stringstyle=\ttfamily,
showstringspaces=false,
tabsize=2}
\lstdefinestyle{commentstyle}{color=\color{green}}
\theoremstyle{remark}
\newtheorem{thm}{Theorem}[section]
\newtheorem{rmk}[thm]{Remark}
\definecolor{red}{gray}{0}
\definecolor{blue}{gray}{0}
\begin{frontmatter}
\title{Nitsche's method for two and three dimensional NURBS patch coupling}
\author[cardiff]{Vinh Phu Nguyen \fnref{fn1}}
\author[cardiff]{Pierre Kerfriden \fnref{fn2}}
\author[torino]{Marco Brino \fnref{fn4}}
\author[cardiff]{St\'{e}phane P.A. Bordas \corref{cor1}\fnref{fn3}}
\author[torino]{Elvio Bonisoli \fnref{fn5}}
\cortext[cor1]{Corresponding author}
\address[cardiff]{School of Engineering, Institute of Mechanics and Advanced
Materials, Cardiff University, Queen's Buildings, The Parade, Cardiff \\
CF24 3AA}
\address[torino]{Politecnico di Torino - DIGEP, corso Duca degli Abruzzi 24, 10129 Torino}
\fntext[fn1]{\url [email protected], ORCID: 0000-0003-1212-8311}
\fntext[fn2]{\url [email protected]}
\fntext[fn3]{\url [email protected], ORCID: 0000-0001-7622-2193}
\fntext[fn4]{\url [email protected]}
\fntext[fn5]{\url [email protected]}
\maketitle
\tableofcontents
\begin{abstract}
A Nitsche's method is presented to couple non-conforming two and three dimensional NURBS (Non-Uniform Rational B-splines)
patches in the context of isogeometric analysis (IGA). We present results for elastic stress analyses under static conditions
of two and three dimensional NURBS geometries. The contribution fills a gap in the literature and enlarges the applicability of
NURBS-based isogeometric analysis.
\end{abstract}
\begin{keyword}
Nitsche's method \sep isogeometric analysis (IGA) \sep multi-patch NURBS IGA \sep finite element method
\end{keyword}
\end{frontmatter}
\section{Introduction}
The predominant technology that is used by CAD to represent complex
geometries is the Non-Uniform Rational B-spline (NURBS). This allows certain
geometries to be represented exactly that are only approximated by
polynomial functions, including conic and circular sections. There is a
vast array of literature focused on NURBS
(e.g. \cite{piegl_book}, \cite{Rogers2001}) and as a result of several decades
of research, many efficient computer algorithms exist for their fast evaluation and
refinement. The key concept outlined by Hughes et al.
\cite{hughes_isogeometric_2005} was to employ NURBS not only as a geometry discretisation technology, but also as a discretisation tool for analysis, attributing such methods to the field of `Isogeometric Analysis'
(IGA). Since this seminal paper, a monograph
dedicated entirely to IGA has been
published \cite{cottrel_book_2009} and applications can now be found
in several fields including structural mechanics, solid
mechanics, fluid mechanics and contact mechanics.
It should be emphasized that the idea of using CAD technologies in finite elements dates back
at least to \cite{NME:NME292,Kagan2000539} where B-splines were used as shape functions in FEM. In addition, similar methods which adopt
subdivision surfaces have been used to model shells \cite{Cirak_2000}.
Structural mechanics is a field where IGA has demonstrated
compelling benefits over conventional approaches
\cite{benson_isogeometric_2010,kiendl_isogeometric_2009,benson_large_2011,
beirao_da_veiga_isogeometric_2012,uhm_tspline_2009,Echter2013170,Benson2013133}.
The smoothness of the NURBS basis functions allows for a straightforward
construction of plate/shell elements. Particularly for thin shells, rotation-free
formulations can be easily constructed \cite{kiendl_isogeometric_2009,kiendl_bending_2010}.
Furthermore, isogeometric plate/shell elements exhibit much less
pronounced shear-locking compared to standard FE plate/shell elements.
In contact formulations using conventional geometry discretisations, the presence of
faceted surfaces can lead to jumps and
oscillations in traction responses unless very fine meshes are used. The benefits of
using NURBS over such an approach are evident, since smooth contact
surfaces are obtained, leading to more physically accurate contact stresses. Recent work in this area includes
\cite{temizer_contact_2011,jia_isogeometric_2011,temizer_three-dimensional_2012,
de_lorenzis_large_2011,Matzen201327}.
IGA has also shown advantages over traditional approaches in the context of optimisation problems
\cite{wall_isogeometric_2008,manh_isogeometric_2011,qian_isogeometric_2011,xiaoping_full_2010} where the tight coupling with CAD
models offers an extremely attractive approach for industrial
applications. Another attractive class of methods include those that require only a boundary discretisation, creating a truly direct coupling with CAD. Isogeometric boundary element methods for elastostatic analysis were presented in
\cite{simpson_two-dimensional_2012,Scott2013197}, demonstrating that mesh generation can be completely circumvented by using CAD discretisations for analysis.
The smoothness of NURBS basis functions is attractive for analysis of
fluids \cite{gomez_isogeometric_2010,nielsen_discretizations_2011,Bazilevs:2010:LES:1749635.1750210} and for
fluid-structure interaction problems
\cite{bazilevs_isogeometric_2008,bazilevs_patient-specific_2009}.
In addition, due to the ease of constructing high order continuous basis functions, IGA has been
used with great success in solving PDEs that incorporate fourth order (or
higher)
derivatives of the field variable such as the Cahn-Hilliard equation
\cite{gomez_isogeometric_2008}, explicit gradient damage models \cite{verhoosel_isogeometric_2011-1} and gradient
elasticity \cite{fischer_isogeometric_2010}.
The high order NURBS basis has also found potential applications in the Kohn-Sham equation for electronic
structure modeling of semiconducting materials \cite{Masud2012112}.
NURBS provide advantageous properties for structural vibration problems
\cite{cottrell_isogeometric_2006,Hughes20084104,NME:NME4282,Wang2013}
where $k$-refinement (unique to IGA)
has been shown to provide more robust and accurate frequency spectra than
typical higher-order FE $p$-methods. Particularly, the optical branches of frequency spectra,
which have been identified as contributors to Gibbs phenomena in wave propagation problems
(and the cause of rapid degradation of higher modes in the $p$-version of FEM),
are eliminated. However, when lumped mass matrices are used, the accuracy is limited to second order
for any basis order. High order isogeometric lumped mass matrices are not yet available.
The mathematical properties of IGA were studied in detail by Evans et al.\cite{evans_n-widths_2009}.
IGA has been applied to cohesive fracture \cite{verhoosel_isogeometric_2011}, outlining a framework for
modeling debonding along material interfaces using NURBS and propagating cohesive
cracks using T-splines. The method relies upon the ability to specify the continuity of NURBS and T-splines
through a process known as knot insertion.
As a variation of the eXtended Finite Element Method (XFEM)
\cite{mos_finite_1999}, IGA was applied to Linear Elastic Fracture Mechanics (LEFM) using the partition
of unity method (PUM) to capture two dimensional strong discontinuities and
crack tip singularities efficiently \cite{de_luycker_xfem_2011,ghorashi_extended_2012}. The method is usually
referred to as XIGA (eXtended IGA).
In \cite{Tambat20121} an explicit isogeometric enrichment technique was proposed for modeling
material interfaces and cracks exactly. Note that this method stands
in contrast to PUM-based enrichment methods, which define cracks implicitly.
A phase field model for dynamic fracture was presented in \cite{Borden201277} using adaptive T-spline refinement to provide an effective method for simulating fracture in three dimensions.
In \cite{Nguyen2013} high order B-splines were adopted to efficiently
model delamination of composite specimens and in \cite{nguyen_cohesive_2013}, an isogeometric framework
for two and three dimensional delamination analysis of composite laminates was presented where
the authors showed that using IGA can significantly reduce the usually time consuming pre-processing
step in generating FE meshes (solid elements and cohesive interface elements) for delamination computations.
A continuum description of fracture using
explicit gradient damage models was also studied using NURBS \cite{verhoosel_isogeometric_2011-1}.
In computer aided geometric design, objects of complex topologies are usually represented as
multiple-patch NURBS. We refer to Fig.~\ref{fig:concepts} for such a multi-patch NURBS solid.
Since it is virtually impossible to have a conforming parametrisation at the patch interface,
an important research topic within the IGA context is the implementation of multi-patch methods
with high inter-patch continuity properties.
In this paper, a Nitsche's method is presented to couple non-conforming two and three dimensional NURBS patches
in a weak sense. An exact multipoint constraint method was reported in \cite{cottrel_book_2009} to glue
multiple NURBS patches with the restriction that, in the coarsest mesh, they have the same parametrisation.
Another solution to multi-patch IGA which has gathered momentum from both the computational geometry and analysis communities is the use of T-splines \cite{Sederberg:2003:TT:882262.882295}. T-splines correct the
deficiencies of NURBS by creating a single patch, watertight geometry which can be locally refined and coarsened.
Utilisation of T-splines in an IGA framework has been illustrated in
\cite{bazilevs_isogeometric_2010,doerfel_adaptive_2010,scott_isogeometric_2011}. However T-splines are not yet a standard in CAD
and therefore our contribution will certainly enlarge the application areas of NURBS based IGA. Moreover, the formulation presented in
this contribution lays the foundation for the solid-structure coupling method to be presented in a forthcoming paper \cite{nguyen-nitsche2}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{conrod-input}
\caption{A multi-patch NURBS solid.}
\label{fig:concepts}
\end{figure}
Nitsche's method \cite{nitsche} was originally proposed to weakly enforce Dirichlet boundary conditions
as an alternative to equivalent pointwise constraints.
The idea behind a Nitsche based approach is
to replace the Lagrange multipliers arising in a dual formulation through their physical representation,
namely the normal flux at the interface. Nitsche also added an extra penalty like term to restore
the coercivity of the bilinear form. The method can be seen to lie in between the Lagrange multiplier
method and the penalty method.
The method has seen a resurgence in recent years and was applied for interface problems
\cite{Hansbo20025537,Dolbow2009a}, for connecting overlapping meshes
\cite{MZA:8203296,MZA:8203286,Sanders2012a,Sanders2011a}, for imposing Dirichlet boundary conditions
in meshfree methods \cite{FernándezMéndez20041257}, in immersed boundary methods
\cite{ruess2013weakly,NME:NME3339,embar_imposing_2010}, in fluid mechanics
\cite{Bazilevs200712}, in the Finite Cell Method \cite{NME:NME4522}
and for contact mechanics \cite{nitsche-wriggers}. It has also been applied for stabilising constraints
in enriched finite elements \cite{Sanders2008a}.
The remainder of the paper is organised as follows. The problem description, governing equations
and weak formulation are presented in Section \ref{sec:problem}. Section \ref{sec:discretisation}
discusses the discretisation followed by implementation aspects given in Section \ref{sec:implementation}.
Several two and three dimensional examples are given in Section \ref{sec:examples}.
We denote $d_p$ and $d_s$ as the number of parametric directions and spatial directions respectively.
Both tensor and matrix notations are used.
In tensor notation, tensors of order one or greater are written in boldface. Lower case bold-face letters
are used for first-order tensor whereas upper case bold-face letters indicate high-order tensors.
The major exception to this rule are the physical second order stress tensor and the strain tensor
which are written in lower case. In matrix notation, the same symbols as for tensors are used to denote
the matrices but the connective operator symbols are skipped.
\section{Problem description, governing equations and weak form}\label{sec:problem}
\subsection{Governing equations}
We define the domain $\Omega \subset \mathbb{R}^{d_s}$ with boundary $\Gamma \equiv \partial \Omega$.
For the sake of simplicity, we assume
there is only one internal boundary denoted by $\Gamma_*$ that divides the domain into
two non-overlapping domains $\Omega^m, m=1,2$ such that $\Omega=\Omega^1 \cup \Omega^2$.
In the context of multi-patch NURBS IGA, each domain represents a NURBS patch.
Excluding $\Gamma_*$, the rest of $\Gamma$ can be divided into Dirichlet and Neumann parts on each
domain, $\Gamma_u^m$ and $\Gamma_t^m$ respectively.
A superscript, $m$, is used to denote a quantity that is valid over region $\Omega^m$, with $m = 1, 2$.
With the primary unknown displacement field $\vm{u}^m$, the governing equations of linear elastostatic problems
are
\begin{subequations}
\begin{alignat}{2}
-\nabla \cdot \bsym{\sigma}^m &= \vm{b}^m &\quad\text{in} \quad \Omega^m \\
\vm{u}^m &= \bar{\vm{u}}^m &\quad\text{on} \quad \Gamma_u^m \\
\bsym{\sigma}^m \cdot \vm{n}^m &= \bar{\vm{t}}^m &\quad\text{on} \quad \Gamma_t^m \label{eq:Neumann}\\
\vm{u}^1 &= \vm{u}^2 &\quad\text{on} \quad \Gamma_* \\
\bsym{\sigma}^1 \cdot \vm{n}^1 &= -\bsym{\sigma}^2 \cdot \vm{n}^2 &\quad\text{on} \quad \Gamma_* \label{eq:tr}
\end{alignat}
\end{subequations}
where $\bsym{\sigma}^m$ denotes the stress field;
the last two equations express the continuity of displacements and tractions across $\Gamma_*$.
The prescribed displacement and traction are denoted by $\bar{\vm{u}}^m$ and $\bar{\vm{t}}^m$, respectively.
The outward unit normals to $\Omega^1$ and $\Omega^2$ are $\vm{n}^1$ and $\vm{n}^2$, respectively.
Under the small strain condition, the infinitesimal strain tensor reads
$\bsym{\epsilon}^m=0.5(\nabla\vm{u}^m+\nabla^\mathrm{T}\vm{u}^m)$.
Constitutive equations are given by
\begin{equation}
\bsym{\sigma}^m = \vm{C}^m : \bsym{\epsilon}^m, \quad m=1,2
\end{equation}
where the constitutive tensors are denoted by $\vm{C}^1$ and $\vm{C}^2$. For linear
isotropic elastic materials, the constitutive tensor is written as
\begin{equation}
C_{ijkl} = \lambda \delta_{ij}\delta_{kl} + \mu (\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})
\end{equation}
where $\lambda=\frac{E\nu}{(1+\nu)(1-2\nu)}$ and $\mu=\frac{E}{2(1+\nu)}$ are the Lam{\'e} constants;
$E$ and $\nu$ are the Young's modulus and Poisson's ratio, respectively and $\delta_{ij}$ is the Kronecker
delta tensor.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{domain1}
\caption{Computational domain with an internal interface.}
\label{fig:domain}
\end{figure}
\subsection{Weak form}
We start by defining the spaces $\bsym{S}^m$ and $\vm{V}^m$ over domain $\Omega^m$ that contain
the trial (solution) and test (weight) functions, respectively:
\begin{equation}
\begin{split}
\bsym{S}^m&=\{\vm{u}^m(\vm{x})|\vm{u}^m(\vm{x}) \in \bsym{H}^1(\Omega^m), \vm{u}^m=\bar{\vm{u}}^m \;\;
\text{on $\Gamma_u^m$} \}\\
\bsym{V}^m&=\{\vm{w}^m(\vm{x})|\vm{w}^m(\vm{x}) \in \bsym{H}^1(\Omega^m), \vm{w}^m={\vm{0}} \;\;\text{on $\Gamma_u^m$} \}
\end{split}
\end{equation}
The standard application of Nitsche's method for the coupling is:
Find $(\vm{u}^1,\vm{u}^2) \in \bsym{S}^1 \times \bsym{S}^2$ such that
\begin{multline}
\sum_{m=1}^2\int_{\Omega^m} \bsym{\epsilon}(\vm{w}^m):\bsym{\sigma}^m \mathrm{d}\Omega
-\int_{\Gamma_*} \left(\jump{\vm{w}} \otimes \vm{n}^1\right) : \{\bsym{\sigma}\} \mathrm{d}\Gamma
-\int_{\Gamma_*} \left(\jump{\vm{u}} \otimes \vm{n}^1\right) : \{\bsym{\sigma}(\vm{w})\} \mathrm{d}\Gamma \\+
\int_{\Gamma_*} \alpha \jump{\vm{w}} \cdot \jump{\vm{u}} \mathrm{d}\Gamma
= \sum_{m=1}^2 \int_{\Omega^m} \vm{w}^m\cdot\vm{b}^m \mathrm{d}\Omega +
\sum_{m=1}^2 \int_{\Gamma_t^m} \vm{w}^m \cdot \bar{\vm{t}}^m \mathrm{d}\Gamma
\label{eq:weakform}
\end{multline}
for all $(\vm{w}^1,\vm{w}^2) \in \bsym{V}^1 \times \bsym{V}^2$.
Derivation of this weak form is standard and can be found in, for example, \cite{Sanders2011a}. Note that we have assumed that
essential boundary conditions are enforced point-wise if possible or by methods other than Nitsche's method, since we want to focus on the patch
coupling.
In Equation~\eqref{eq:weakform}, the jump and average operators on the interface $\Gamma_*$, $\jump{\cdot}$
and $\{\cdot\}$, are defined as
\begin{equation}
\jump{\vm{u}} = \vm{u}^1 - \vm{u}^2, \quad
\{\bsym{\sigma}\} = \frac{1}{2}(\bsym{\sigma}^1 + \bsym{\sigma}^2)
\label{eq:jump-average}
\end{equation}
For completeness, note that the average operator for the stress field can be
written generally as \cite{Sanders2012a}
\begin{equation}
\{\bsym{\sigma}\}=\gamma \bsym{\sigma}^1 + (1-\gamma)\bsym{\sigma}^2
\label{eq:general-average}
\end{equation}
where $0 \le \gamma \le 1$. The usual average operator is reproduced if $\gamma=0.5$ is used.
Equation~\eqref{eq:general-average} is often utilized to join a soft model and a stiff one \cite{Sanders2011a}.
Taking $\gamma=1$ (or $\gamma=0$) results in the one-sided mortaring method.
In this paper, the standard average operator is used unless otherwise stated.
Except for the second and third terms on the left-hand side, Equation~\eqref{eq:weakform} is the same as in the penalty
method. As in the penalty method, $\alpha$ is a free parameter in Nitsche's method.
However, rather than being a penalty parameter, it should be viewed as a stabilization parameter
in the context of this method.
It has been shown \cite{Griebel} that a minimum $\alpha$ exists that guarantees the positive definiteness
of the bilinear form associated with Nitsche's method and thus the stability of the method.
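As a sketch of one common recipe (see, e.g., \cite{embar_imposing_2010}; the precise constant depends on the element type and order): if $C$ denotes the constant in the inverse estimate
\begin{equation}
\int_{\Gamma_*} \left| \{\bsym{\sigma}(\vm{v})\} \cdot \vm{n} \right|^2 \mathrm{d}\Gamma
\;\le\; C \sum_{m=1}^2 \int_{\Omega^m} \bsym{\epsilon}(\vm{v}) : \vm{C}^m : \bsym{\epsilon}(\vm{v}) \,\mathrm{d}\Omega ,
\end{equation}
then taking, for example, $\alpha = 2C$ guarantees coercivity. The constant $C$ scales as $E/h$ with the mesh size $h$ and can be estimated numerically from a generalized eigenvalue problem relating the discrete interface flux terms to the bulk stiffness.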
For discretisation we rewrite Equation~\eqref{eq:weakform} in a matrix form as follows:
Find $(\vm{u}^1,\vm{u}^2) \in \bsym{S}^1 \times \bsym{S}^2$ such that
\begin{multline}
\sum_{m=1}^2\int_{\Omega^m} (\bsym{\epsilon}(\vm{w}^m))^\mathrm{T} \bsym{\sigma}^m \mathrm{d}\Omega -
\int_{\Gamma_*} \jump{\vm{w}}^\mathrm{T} \vm{n} \{\bsym{\sigma}\} \mathrm{d}\Gamma -
\int_{\Gamma_*} \{\bsym{\sigma}(\vm{w})\}^\mathrm{T} \vm{n}^\mathrm{T} \jump{\vm{u}} \mathrm{d}\Gamma \\+
\int_{\Gamma_*} \alpha \jump{\vm{w}}^\mathrm{T} \jump{\vm{u}} \mathrm{d}\Gamma
= \sum_{m=1}^2 \int_{\Gamma_t^m}(\vm{w}^m)^\mathrm{T} \bar{\vm{t}}^m \mathrm{d}\Gamma
+ \sum_{m=1}^2 \int_{\Omega^m} (\vm{w}^m)^\mathrm{T} \vm{b}^m \mathrm{d}\Omega
\label{eq:dg-weakform-matrix}
\end{multline}
for all $(\vm{w}^1,\vm{w}^2) \in \bsym{V}^1 \times \bsym{V}^2$.
Superscript T denotes the transpose operator.
Second order tensors ($\sigma_{ij}$ and $\epsilon_{ij}$) are written using the Voigt notation
as column vectors; $\bsym{\sigma}=[\sigma_{xx}, \sigma_{yy}, \sigma_{zz}, \sigma_{xy}, \sigma_{yz}, \sigma_{xz}]^\mathrm{T}$, $\bsym{\epsilon}=[\epsilon_{xx}, \epsilon_{yy}, \epsilon_{zz}, 2\epsilon_{xy}, 2\epsilon_{yz}, 2\epsilon_{xz}]^\mathrm{T}$,
and $\vm{n}$ (note that we removed the subscript 1 for subsequent derivations)
is a matrix that reads
\begin{equation}
\vm{n}_{2D} = \begin{bmatrix}
n_x & 0 & n_y \\ 0 & n_y & n_x
\end{bmatrix}, \quad
\vm{n}_{3D} = \begin{bmatrix}
n_x & 0 & 0 & n_y & 0 & n_z \\
0 & n_y & 0 & n_x & n_z & 0\\
0 & 0 & n_z & 0 & n_y & n_x
\end{bmatrix}\label{eq:n-matrix}
\end{equation}
for two dimensions and three dimensions, respectively.
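For implementation purposes, this matrix is straightforward to assemble from a unit normal. A minimal MATLAB sketch (the function name is ours) reads:
\begin{lstlisting}
function nmat = normal_matrix(n)
% Voigt-form normal matrix for a unit normal n (length 2 or 3),
% consistent with the stress ordering defined above.
if numel(n) == 2
    nmat = [n(1)  0    n(2);
            0     n(2) n(1)];
else
    nmat = [n(1)  0    0    n(2) 0    n(3);
            0     n(2) 0    n(1) n(3) 0   ;
            0     0    n(3) 0    n(2) n(1)];
end
end
\end{lstlisting}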
\section{Discretisation}\label{sec:discretisation}
\subsection{NURBS}\label{sec:nurbs}
In this section, NURBS are briefly reviewed.
We refer to the standard textbook \cite{piegl_book} for details.
A knot vector is a sequence in ascending order
of parameter values, written $\Xi=\{\xi_1,\xi_2,\ldots,\xi_{n+p+1}\}$
where $\xi_i$ is the \textit{i}th knot, $n$ is the number of basis functions and $p$ is
the order of the B-spline basis. Open knots in which the first and last knots appear $p+1$ times are
standard in the CAD literature and thus used in this manuscript i.e.,\xspace
$\Xi=\{\underbrace{\xi_1,\ldots,\xi_1}_{\text{$p+1$ times}},\xi_2,\ldots,
\underbrace{\xi_m,\ldots\xi_m}_{\text{$p+1$ times}}\}$.
Given a knot vector $\Xi$, the B-spline basis functions are
defined recursively starting with the zeroth order basis
function ($p=0$) given by
\begin{equation}
N_{i,0}(\xi) = \begin{cases}
1 & \textrm{if $ \xi_i \le \xi < \xi_{i+1}$}\\
0 & \textrm{otherwise}
\end{cases}
\label{eq:basis-p0}
\end{equation}
\noindent and for a polynomial order $p \ge 1$
\begin{equation}
N_{i,p}(\xi) = \frac{\xi-\xi_i}{\xi_{i+p}-\xi_i} N_{i,p-1}(\xi)
+ \frac{\xi_{i+p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}
N_{i+1,p-1}(\xi)
\label{eq:basis-p}
\end{equation}
\noindent This is referred to as the Cox-de Boor recursion formula.
Note that when evaluating these functions, ratios of the form $0/0$ are
defined as zero.
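As an illustration only, the recursion can be coded directly. The following MATLAB sketch (the function name is ours; no attempt is made at efficiency, for which the span-based algorithms of \cite{piegl_book} should be preferred) evaluates $N_{i,p}(\xi)$ and treats $0/0$ ratios as zero:
\begin{lstlisting}
function N = bspline_basis(i, p, knot, xi)
% Cox-de Boor recursion for N_{i,p}(xi); 0/0 ratios are treated as zero.
if p == 0
    N = double(knot(i) <= xi && xi < knot(i+1));
else
    N = 0;
    d1 = knot(i+p)   - knot(i);
    d2 = knot(i+p+1) - knot(i+1);
    if d1 > 0, N = N + (xi - knot(i))/d1 * bspline_basis(i,p-1,knot,xi); end
    if d2 > 0, N = N + (knot(i+p+1)-xi)/d2 * bspline_basis(i+1,p-1,knot,xi); end
end
end
\end{lstlisting}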
Some salient properties of B-spline basis functions are (1) they constitute
a partition of unity, (2) each basis function is nonnegative over the entire domain,
(3) they are linearly independent, (4) the support of a B-spline function of order $p$ is $p+1$ knot spans
i.e.,\xspace $N_{i,p}$ is non-zero over $[\xi_i,\xi_{i+p+1}]$, (5) basis functions of order $p$ have $p-m_i$ continuous derivatives across knot $\xi_i$ where $m_i$ is the multiplicity of knot $\xi_i$ and (6)
B-spline bases are generally only approximants (except at the ends of the parametric space interval,
$[\xi_1,\xi_{n+p+1}]$) and not interpolants.
Fig.~\ref{fig:bspline-quad-open} illustrates a corresponding set of basis functions for an open, non-uniform knot vector. Of particular note is the interpolatory nature of the basis function at each
end of the interval created through an open knot vector, and the reduced continuity at $\xi = 4$ due to the presence of a repeated knot, where $C^0$ continuity is attained. Elsewhere, the functions are $C^1$ continuous ($C^{p-1}$).
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{bspline2}
\caption{Quadratic B-spline basis functions defined for the open, non-uniform knot vector
$\Xi=\{0,0,0,1,2,3,4,4,5,5,5\}$. Note the flexibility in the construction of
basis functions with varying degrees of regularity.}
\label{fig:bspline-quad-open}
\end{figure}
NURBS basis functions are defined as
\begin{equation}
R_{i,p}(\xi) = \frac{N_{i,p}(\xi)w_i}{W(\xi)} =
\frac{N_{i,p}(\xi)w_i}{\sum_{j=1}^{n}N_{j,p}(\xi)w_j}
\label{eq:rational-basis}
\end{equation}
where $N_{i,p}(\xi)$ denotes the $i$th B-spline basis function of
order $p$ and $w_i$ are a set of $n$ positive weights.
Selecting appropriate values for the $w_i$ permits the description of many
different types of curves including polynomials and circular arcs.
For the special case in which $w_i=c, i=1,2,\ldots,n$ the
NURBS basis reduces to the B-spline basis. Note that for simple geometries,
the weights can be defined analytically see e.g.,\xspace \cite{piegl_book}. For complex
geometries, they are obtained from CAD packages such as Rhino \cite{rhino}.
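Continuing the sketch above, the rational basis of Equation~\eqref{eq:rational-basis} follows by weighting and normalising the B-spline basis (again purely illustrative, reusing the \texttt{bspline\_basis} helper defined earlier):
\begin{lstlisting}
function R = nurbs_basis(p, knot, w, xi)
% Rational basis functions R_{i,p}(xi): weight and normalise the
% B-spline basis; w is the vector of n positive weights.
n = length(knot) - p - 1;            % number of basis functions
N = zeros(n, 1);
for i = 1:n
    N(i) = bspline_basis(i, p, knot, xi);
end
W = dot(N, w(:));                    % weight function W(xi)
R = (N .* w(:)) / W;                 % R_{i,p}(xi), i = 1..n
end
\end{lstlisting}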
Let $\Xi^1=\{\xi_1,\xi_2,\ldots,\xi_{n+p+1}\}$,
$\Xi^2=\{\eta_1,\eta_2,\ldots,\eta_{m+q+1}\}$,
and $\Xi^3=\{\zeta_1,\zeta_2,\ldots,\zeta_{l+r+1}\}$ be the knot vectors
and $\vm{P}_{i,j,k} \in \mathds{R}^{d_s}$ a control net.
A tensor-product NURBS solid is defined as
\begin{equation}
\vm{V}(\xi,\eta,\zeta) = \sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{k=1}^l
\vm{P}_{i,j,k} R_{i,j,k}^{p,q,r}(\xi,\eta,\zeta)
\label{eq:NURBS-solid1}
\end{equation}
where the trivariate NURBS basis functions $R_{i,j,k}^{p,q,r}$ are given by
\begin{equation}
R_{i,j,k}^{p,q,r}(\xi,\eta,\zeta) = \frac{N_{i}(\xi) M_j(\eta) P_k(\zeta) w_{i,j,k}}{
\sum_{\hat{i}=1}^{n} \sum_{\hat{j}=1}^{m} \sum_{\hat{k}=1}^{l}N_{\hat{i}}(\xi) M_{\hat{j}}(\eta)
P_{\hat{k}}(\zeta) w_{\hat{i},\hat{j},\hat{k}}}.
\label{eq:rational-basis3}
\end{equation}
By defining a global index $A$ through
\begin{equation}
\label{eq:bspline_volume_mapping}
A = (n \times m) ( k - 1) + n( j - 1 ) + i
\end{equation}
a simplified form of Equation~\eqref{eq:NURBS-solid1} can be written as
\begin{equation}
\label{eq:bspline_vol_simple}
\vm{V}(\boldsymbol{\xi}) = \sum_{A=1}^{n \times m \times l} \vm{P}_A R_{A}^{p,q,r}(\boldsymbol{\xi} )
\end{equation}
\subsection{Isogeometric analysis}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{mapping}
\caption{Diagrammatic interpretation of mappings from parent space ($\tilde{\Omega}$)
through parametric space ($\hat{\Omega}$) to physical space ($\Omega$). The parent space is where
numerical quadrature rules are defined.}
\label{fig:iga_mappings}
\end{figure}
Isogeometric analysis also makes use of an isoparametric formulation, but a key difference over its Lagrangian counterpart is the use of basis functions generated by CAD to discretise both the geometry and unknown fields.
In IGA, regions bounded by knot lines with non-zero parametric area lead to a natural definition of
element domains.
The use of NURBS basis functions for discretisation introduces the concept of parametric space which is absent in conventional FE implementations. The consequence of this additional space is that an additional mapping must be performed to operate in parent element coordinates. As shown in Fig.~\ref{fig:iga_mappings}, two mappings are considered for IGA with NURBS: a mapping $\tilde{\phi}^e: \tilde{\Omega} \to \hat{\Omega}^e$ and $\vm{S}: \hat{\Omega} \to \Omega$. The mapping $\vm{x}^e: \tilde{\Omega} \to \Omega^e$ is given by the composition $\vm{S}\circ \tilde{\phi}^e$.
For a given element $e$, the geometry is expressed as
\begin{equation}
\label{eq:iga_geometry_discretisation}
\mathbf{x}^e(\tilde{\boldsymbol{\xi}}) = \sum_{a=1}^{n_{en}} \vm{P}_a^e R_a^e(\tilde{\boldsymbol{\xi}})
\end{equation}
where $a$ is a local basis function index, $n_{en} = (p+1)^{d_p}$ is the number of non-zero basis functions over element $e$ and $\vm{P}_a^e$,$R_a^e$ are the control point and NURBS basis function associated with index $a$ respectively. We employ the commonly used notation of an element connectivity mapping \cite{hughes-fem-book} which translates a local basis function index to a global index through
\begin{equation}
\label{eq:element_connectivity_array}
A = \textrm{IEN}( a, e )
\end{equation}
Global and local control points are therefore related through $\vm{P}_A \equiv \vm{P}_{\textrm{IEN}(a,e)} \equiv \vm{P}_a^e$ with similar expressions for $R_a^e$.
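As a concrete illustration (an example of our own, using the knot vector of Fig.~\ref{fig:bspline-quad-open}): for $\Xi=\{0,0,0,1,2,3,4,4,5,5,5\}$ with $p=2$ there are $n=8$ basis functions and five elements (the non-zero knot spans), and the connectivity array reads
\begin{equation*}
\textrm{IEN} = \begin{bmatrix}
1 & 2 & 3 & 4 & 6\\
2 & 3 & 4 & 5 & 7\\
3 & 4 & 5 & 6 & 8
\end{bmatrix}
\end{equation*}
with one column per element; note the jump between the fourth and fifth columns caused by the repeated knot at $\xi=4$.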
Taking the case $d_p = d_s = 2$, an element defined by $\hat{\Omega}^e = [\xi_i, \xi_{i+1}]\otimes [\eta_j, \eta_{j+1}]$ is mapped from parent space to parametric space through
\begin{align}
\label{eq:phi_mapping}
\tilde{\phi}^e(\tilde{\boldsymbol{\xi}}) =
\left\{
\begin{matrix}
\frac{1}{2}[(\xi_{i+1}-\xi_i)\tilde{\xi} + (\xi_{i+1}+\xi_i)]\\
\frac{1}{2}[(\eta_{j+1}-\eta_j)\tilde{\eta} + (\eta_{j+1}+\eta_j)]
\end{matrix}
\right\}
\end{align}
A field $\vm{u}(\mathbf{x})$ which governs our relevant PDE can also be discretised in a similar manner to Equation~\eqref{eq:iga_geometry_discretisation} as
\begin{equation}
\label{eq:iga_field_discretisation}
\vm{u}^e(\tilde{\boldsymbol{\xi}}) = \sum_{a=1}^{n_{en}} \vm{d}_a^e R_a^e(\tilde{\boldsymbol{\xi}})
\end{equation}
where $\vm{d}^e_a$ represents a control (nodal) variable. In contrast to conventional discretisations, these coefficients are not in general interpolatory at nodes. This is similar to the case of meshless
methods built on non-interpolatory shape functions such as the moving least squares
(MLS) \cite{efg-nayroles,NME:NME1620370205,nguyen_meshless_2008}. Using the Bubnov-Galerkin method, an analogous
expansion to Equation~\eqref{eq:iga_field_discretisation} is adopted for the weight function and, upon substituting
both into the weak form, a standard system of linear equations is obtained from which the nodal variables $\vm{d}$
are obtained.
\subsection{Discrete equations}
The two domains $\Omega^m$ are discretised independently using finite elements.
At the interface $\Gamma_*$ there is a mismatch between the two meshes, cf. Fig.~\ref{fig:domain-mesh}.
The approximation of the displacement field is given by
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{domain2}
\caption{Independent discretisations of the domains.}
\label{fig:domain-mesh}
\end{figure}
\begin{equation}
\vm{u}^m = N_A^m \vm{a}_A^m
\label{eq:approximation}
\end{equation}
where $N^m_A$ denotes the finite element shape functions associated to domain $\Omega^m$ (which can be
any Lagrange shape functions or the B-spline and NURBS basis functions presented in Section \ref{sec:nurbs}) and
$\vm{a}_A^m=[a_{xA}^m\; a_{yA}^m\; a_{zA}^m]^\mathrm{T}$ represents the nodal displacements of domain $\Omega^m$.
The stresses, strains and displacements are given by
\begin{equation}
\bsym{\sigma}^m = \vm{C}^m \vm{B}^m \vm{a}^m, \quad
\bsym{\epsilon}^m = \vm{B}^m \vm{a}^m, \quad
\vm{u}^m=\vm{N}^m \vm{a}^m
\label{eq:11}
\end{equation}
where $\vm{B}$ is the standard strain-displacement matrix and $\vm{N}$ represents the standard
shape function matrix. For a two dimensional element $e$, they are given by
\begin{equation}
\vm{B}_e^m = \begin{bmatrix}
N_{1,x}^m & 0 & N_{2,x}^m & 0 & \ldots\\
0 & N_{1,y}^m & 0 & N_{2,y}^m & \ldots \\
N_{1,y}^m & N_{1,x}^m & N_{2,y}^m & N_{2,x}^m & \ldots
\end{bmatrix},\quad \vm{N}_e^m = \begin{bmatrix}
N_1^m & 0 & N_2^m & 0 & \ldots\\
0 & N_1^m & 0 & N_2^m & \ldots
\end{bmatrix}
\end{equation}
Expressions for three dimensional elements can be found in many FEM textbooks e.g.,\xspace
\cite{hughes-fem-book}. The notation $N_{I,x}$ denotes
the derivative of shape function $N_I$ with respect to $x$. This notation for partial derivatives
will be used in subsequent sections.
The jump operator and the average operator are given by
\begin{equation}
\begin{split}
\jump{\vm{u}} &= \vm{N}^1\vm{a}^1 - \vm{N}^2 \vm{a}^2\\
\{\bsym{\sigma}\} &= \frac{1}{2}\left( \vm{C}^1 \vm{B}^1 \vm{a}^1 + \vm{C}^2\vm{B}^2 \vm{a}^2 \right)
\end{split}
\label{eq:12}
\end{equation}
and analogous expansions are used for $\jump{\vm{w}}$ and $\{\bsym{\sigma}(\vm{w})\} $
\begin{equation}
\begin{split}
\jump{\vm{w}} &= \vm{N}^1 \delta \vm{a}^1 - \vm{N}^2 \delta \vm{a}^2\\
\{\bsym{\sigma}(\vm{w})\} &= \frac{1}{2}\left( \vm{C}^1 \vm{B}^1 \delta \vm{a}^1 +
\vm{C}^2\vm{B}^2 \delta \vm{a}^2 \right)
\end{split}
\label{eq:13}
\end{equation}
Upon substituting Equations~\eqref{eq:11},\eqref{eq:12} and \eqref{eq:13} into
Equation~\eqref{eq:dg-weakform-matrix} and invoking the arbitrariness of $\delta \vm{a}^m$, we obtain
the discrete system, which can be written as
\begin{equation}
\left[\vm{K}^b + \vm{K}^n + (\vm{K}^n)^\mathrm{T} + \vm{K}^s\right] \vm{a} = \vm{f}_\text{ext}
\end{equation}
in which $\vm{K}^b$ denotes the bulk stiffness matrix; $\vm{K}^n$ and $\vm{K}^s$ are the interfacial
stiffness matrices, or coupling matrices. The external force vector is denoted by $\vm{f}_\text{ext}$
and its expression is standard and thus not presented here.
The bulk stiffness matrix is given by
\begin{equation}
\vm{K}^b = \sum_{m=1}^2 \int_{\Omega^m} (\vm{B}^m)^\mathrm{T} \vm{C}^m \vm{B}^m\mathrm{d} \Omega
\end{equation}
and the coupling matrices are given by
\begin{equation}
\vm{K}^n = \begin{bmatrix}
-\displaystyle\int_{\Gamma_*} \vm{N}^{1\text{T}} \vm{n} \frac{1}{2}\vm{C}^1\vm{B}^1 \mathrm{d}\Gamma &
-\displaystyle\int_{\Gamma_*} \vm{N}^{1\text{T}} \vm{n} \frac{1}{2}\vm{C}^2\vm{B}^2 \mathrm{d}\Gamma \\
\displaystyle\int_{\Gamma_*} \vm{N}^{2\text{T}} \vm{n} \frac{1}{2}\vm{C}^1\vm{B}^1 \mathrm{d}\Gamma &
\displaystyle\int_{\Gamma_*} \vm{N}^{2\text{T}} \vm{n} \frac{1}{2}\vm{C}^2\vm{B}^2 \mathrm{d}\Gamma
\end{bmatrix}\label{eq:nitsche-kdg}
\end{equation}
and by
\begin{equation}
\vm{K}^s = \begin{bmatrix}
\displaystyle\int_{\Gamma_*} \alpha \vm{N}^{1\text{T}} \vm{N}^1 \mathrm{d}\Gamma &
- \displaystyle\int_{\Gamma_*} \alpha \vm{N}^{1\text{T}} \vm{N}^2 \mathrm{d}\Gamma\\
- \displaystyle\int_{\Gamma_*} \alpha \vm{N}^{2\text{T}} \vm{N}^1 \mathrm{d}\Gamma&
\displaystyle\int_{\Gamma_*} \alpha \vm{N}^{2\text{T}} \vm{N}^2 \mathrm{d}\Gamma
\end{bmatrix}\label{eq:nitsche-kpe}
\end{equation}
If the average operator defined in Equation~\eqref{eq:general-average} is used,
we have
\begin{equation}
\vm{K}^n = \begin{bmatrix}
-\gamma\displaystyle\int_{\Gamma_*} \vm{N}^{1\text{T}} \vm{n} \vm{C}^1\vm{B}^1 \mathrm{d}\Gamma &
-(1-\gamma)\displaystyle\int_{\Gamma_*} \vm{N}^{1\text{T}} \vm{n} \vm{C}^2\vm{B}^2 \mathrm{d}\Gamma \\
\gamma\displaystyle\int_{\Gamma_*} \vm{N}^{2\text{T}} \vm{n} \vm{C}^1\vm{B}^1 \mathrm{d}\Gamma &
(1-\gamma)\displaystyle\int_{\Gamma_*} \vm{N}^{2\text{T}} \vm{n} \vm{C}^2\vm{B}^2 \mathrm{d}\Gamma
\end{bmatrix}\label{eq:nitsche-kdg-general}
\end{equation}
\section{Implementation}\label{sec:implementation}
As the computation of the bulk stiffness matrices is standard, in this section we focus on
the implementation of the coupling matrices for both two and three dimensional problems.
For the sake of presentation, Lagrange finite elements are discussed firstly and generalisation to
NURBS elements is given subsequently with minor modifications.
\subsection{Two dimensions}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{domain4}
\caption{Independent discretisations of the domains: hierarchical meshes. The interface $\Gamma_*$ is
discretised using the element edges of $\Omega^2$ that intersect $\Gamma_*$. For the grey element, the Gauss point is denoted by the red star which is mapped to the GP in element 1 (green star). }
\label{fig:domain-mesh-hier}
\end{figure}
\subsubsection{Hierarchical meshes}
First, we consider hierarchical meshes as shown in Fig.~\ref{fig:domain-mesh-hier}.
In this case, the interface integrals can be straightforwardly calculated as explained in what follows.
Let us assume that a fine mesh is adopted for $\Omega^2$ and a coarse mesh for $\Omega^1$, cf.
Fig.~\ref{fig:domain-mesh-hier}. We use the fine elements on $\Gamma_*$ to evaluate the interfacial integral
\begin{equation}
\int_{\Gamma_*} f(N^1,N^2) \mathrm{d} \Gamma = \sum_{e=1}^{nbe}
\int_{\Gamma_*^e} f(N^1,N^2) \mathrm{d} \Gamma
\end{equation}
where $\Gamma_*^e=\Gamma_* \cap \Omega_e^{2,b}$ and $\{\Omega_e^{2,b}\}_{1}^{nbe}$ denotes elements in $\Omega^2$
that intersect with $\Gamma_*$.
What makes hierarchical meshes attractive is that for a fine element on $\Gamma_*$ one knows
the element in the coarse mesh that lies on the other side of the interface.
For the elemental interface integral, a Gauss quadrature rule for line
elements is adopted. For example, two GPs are used for bilinear elements. Let the
GPs be denoted by $\{\xi_i\}_{i=1}^{ngp}$. These GPs have to be mapped to two parent elements--
one associated with $\Omega_e^{2,b}$ and one associated with $\Omega_e^{1,b}$.
That is, given $\xi_i$, one has to solve for $\bsym{\xi}_i^2$ and $\bsym{\xi}_i^1$
($\bsym{\xi}_i^2=(\xi_i^2,\eta_i^2)$)
\begin{equation}
\begin{split}
\vm{x}_i &= \vm{M}(\xi_i)\vm{x}_l \\
\vm{x}_i &= \vm{N}^2(\bsym{\xi}_i^2)\vm{x}_e^2 \rightarrow \bsym{\xi}_i^2\\
\vm{x}_i &= \vm{N}^1(\bsym{\xi}_i^1)\vm{x}_e^1 \rightarrow \bsym{\xi}_i^1
\end{split}
\end{equation}
where the first equation is used to compute the global coordinates of the GP ($\vm{x}_i=(x_i,y_i)$)
and the second and third equations are used to compute the natural coordinates
of the GP in the parent element associated with $\Omega_e^{k,b}$.
Usually a Newton-Raphson method is used for this.
In the above,
$\vm{M}$ denotes the row vector of shape functions of a two-noded line element; $\vm{x}_l$
are the nodal coordinates of two boundary nodes of $\Gamma_*^e$ (for the example given in
Fig.~\ref{fig:domain-mesh-hier}, they are nodes 7 and 9);
$\vm{x}_e^k$ ($k=1,2$) denotes the nodal coordinates of $\Omega_e^{k,b}$.
$\vm{N}^k$ denote the row vector of shape functions of element $\Omega_e^{k,b}$.
For the example given in Fig.~\ref{fig:domain-mesh-hier}, $\vm{x}_e^2$ stores the
coordinates of nodes 5, 7, 9, and 6, and $\vm{x}_e^1$ stores the coordinates of nodes 10, 22, 20, and 16.
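A minimal MATLAB sketch of this Newton-Raphson inverse mapping is given below; here \texttt{shapefun} is a hypothetical handle returning the element shape functions and their parent derivatives at $\bsym{\xi}$, and \texttt{xe} stores the element nodal coordinates:
\begin{lstlisting}
function xi = inverse_mapping(x, xe, shapefun)
% Solve x = N(xi)*xe for the parent coordinates xi by Newton-Raphson.
% shapefun(xi) returns N (1 x nn) and dNdxi (dim x nn); xe is nn x dim.
xi = zeros(1, size(xe,2));          % initial guess: element centre
for it = 1:20
    [N, dNdxi] = shapefun(xi);
    r = N*xe - x;                   % residual in physical space (1 x dim)
    if norm(r) < 1e-12, return; end
    J = dNdxi*xe;                   % J(k,j) = dx_j/dxi_k  (dim x dim)
    xi = xi - r/J;                  % Newton update: solve dxi*J = r
end
warning('inverse_mapping: no convergence');
end
\end{lstlisting}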
The interfacial integral can now be evaluated as
\begin{equation}
\int_{\Gamma_*^e} f(N^1,N^2) \mathrm{d} \Gamma = \sum_{i=1}^{ngp} f(N^1(\bsym{\xi}_i^1),N^2(
\bsym{\xi}_i^2) ) w_i
\end{equation}
where $w_i$ equals the weight multiplied by the Jacobian of the transformation from
the line parent element $[-1,1]$ to $\Gamma_*^e$.
Finally the coupling terms are assembled to the global stiffness matrix in a standard manner.
For example $\vm{K}^{n,11}$ is assembled using the connectivity of $\Omega_e^{1,b}$
and $\vm{K}^{n,22}$ is assembled using the connectivity of $\Omega_e^{2,b}$.
\subsubsection{Non-matching structured meshes}
Non-matching structured meshes are plotted in Fig.~\ref{fig:domain-mesh5}.
In those cases, the evaluation of the interfacial integrals is more complicated.
We use the trace mesh of $\Omega^1$ on the coupling interface $\Gamma_*$ to perform the
numerical integration. We use two data structures to store the Gauss points namely (for the
concrete example shown in Fig.~\ref{fig:domain-mesh5})
$gp1=\{(\bsym{\xi}^1_i,w_i,e^1_i)\}_{i=1}^4$ and $gp2=\{(\bsym{\xi}^2_i,e^2_i)\}_{i=1}^4$ where
$e^m_i$ indicates the index of the element of $\Omega^m$ that contains GP $\bsym{\xi}^m_i$.
After having these GPs, the assembly of the coupling matrices follows the procedure outlined in
Box \ref{box-k-example}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{domain5}
\caption{Independent discretisations of the domains: non-matching structured meshes.}
\label{fig:domain-mesh5}
\end{figure}
\begin{Fbox}
\caption{Assembly of coupling matrices}
\begin{enumerate}
\item Loop over Gauss points (GPs), $i$
\begin{enumerate}
\item Get $\bsym{\xi}^1_i$, $w_i$ and $e^1_i$ from $gp1$
\item Get $\bsym{\xi}^2_i$ and $e^2_i$ from $gp2$
\item Compute shape functions $\vm{N}^1(\bsym{\xi}^1_i)$
\item Compute shape functions $\vm{N}^2(\bsym{\xi}^2_i)$
\item Compute $\vm{K}^{s,12}= -\alpha \vm{N}^{1\text{T}} \vm{N}^2 w_i$
\item Assemble $\vm{K}^{s,12}$ to the global stiffness matrix using the connectivity array
of $e^1_i$ (rows) and $e^2_i$ (columns).
\end{enumerate}
\item End loop over GPs
\end{enumerate}
\label{box-k-example}
\end{Fbox}
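For concreteness, a minimal Python transcription of Box \ref{box-k-example} is sketched below (one scalar degree of freedom per node is assumed and all routine names are illustrative placeholders, not MIGFEM functions):
\begin{verbatim}
import numpy as np

def assemble_coupling(K, gp1, gp2, conn1, conn2, shape1, shape2, alpha):
    # Assembly of the coupling block K^{s,12} following Box 1.
    # gp1: list of (xi1, w, e1); gp2: list of (xi2, e2);
    # conn1/conn2: connectivity arrays; shape1/shape2 return the
    # shape function row vectors of the two meshes.
    for (xi1, w, e1), (xi2, e2) in zip(gp1, gp2):
        N1 = shape1(xi1)                    # row vector of mesh 1
        N2 = shape2(xi2)                    # row vector of mesh 2
        Ke = -alpha * np.outer(N1, N2) * w  # elemental coupling block
        rows = conn1[e1]                    # global dofs of element e1
        cols = conn2[e2]                    # global dofs of element e2
        K[np.ix_(rows, cols)] += Ke         # scatter into the global matrix
    return K
\end{verbatim}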
\subsection{Three dimensional formulations}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{3D-3D-coupling}
\caption{Coupling of two three dimensional continuum models. For evaluating the coupling terms,
we use the trace mesh of $\Omega^1$ on the coupling interface $\Gamma_*$. In this figure, there is
only one element of the trace mesh for sake of illustration.}
\label{fig:3d-3d-coupling}
\end{figure}
This section presents the implementation for 3D; we refer to Fig.~\ref{fig:3d-3d-coupling}.
The computation of GPs required for the coupling matrices is given in Box \ref{box-gp-3D}.
After having obtained $gp1$ and $gp2$ data structures, the assembly of the coupling matrices
follows Box \ref{box-k-example}.
\begin{Fbox}[thpb]
\caption{Determination of $gp1$ and $gp2$}
\begin{enumerate}
\item For each element $e^1$ of the trace mesh, do
\begin{enumerate}
\item Distribute GPs on the face, $\{(\xi_i,\eta_i,w_i)\}_{i=1}^{ngp}$
\item Loop over the GPs, $i$
\begin{enumerate}
\item Transform GP $i$ to physical space using
\begin{equation}
\label{eq:hang}
\vm{x}_i = \vm{M}(\xi_i,\eta_i)\vm{x}_l
\end{equation}
\item Compute tangent vectors, normal vector and the weight
\begin{equation}
\vm{a}_1 = \vm{M}_{,\xi} \vm{x}_l,\quad
\vm{a}_2 = \vm{M}_{,\eta} \vm{x}_l,
\quad \vm{n}=\frac{\vm{a}_1\times\vm{a}_2}{\norm{\vm{a}_1\times\vm{a}_2}}, \quad
\bar{w}_i = w_i \norm{\vm{a}_1\times\vm{a}_2}
\label{eq:tangents}
\end{equation}
\item Transform GP $i$ from physical space to parent space of $\Omega^1$ using
\begin{equation}
\vm{x}_i = \vm{N}^1(\xi_i^1,\eta_i^1,\zeta_i^1)\vm{x}_e^1
\rightarrow (\xi_i^1,\eta_i^1,\zeta_i^1) \label{eq:hang1}
\end{equation}
\item Find the index of the element in $\Omega^2$ that contains $\vm{x}_i$; name it $e^2$
\item Transform GP $i$ from physical space to parent space of $\Omega^2$ using
\begin{equation}
\vm{x}_i = \vm{N}^2 (\xi_i^2,\eta_i^2,\zeta_i^2) \vm{x}_e^2 \rightarrow
(\xi_i^2,\eta_i^2,\zeta_i^2) \label{eq:hang2}
\end{equation}
where $\vm{x}_e^2$ are the nodal coordinates of element $e^2$.
\end{enumerate}
\item End loop over GPs
\end{enumerate}
\item End for
\end{enumerate}
\label{box-gp-3D}
\end{Fbox}
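As an illustration of step (ii) in Box \ref{box-gp-3D}, the following Python sketch evaluates Equation~\eqref{eq:tangents} at one Gauss point of a face element (the array shapes are assumptions of this illustration, not MIGFEM conventions):
\begin{verbatim}
import numpy as np

def face_gp_weight(dM_dxi, dM_deta, x_l, w):
    # Tangents, unit normal and scaled weight of Equation (eq:tangents).
    # dM_dxi, dM_deta: shape function derivative row vectors of the face
    # element; x_l: its nodal coordinates (n_nodes x 3); w: Gauss weight.
    a1 = dM_dxi @ x_l             # first tangent vector
    a2 = dM_deta @ x_l            # second tangent vector
    cross = np.cross(a1, a2)
    jac = np.linalg.norm(cross)   # surface Jacobian
    n = cross / jac               # unit normal
    w_bar = w * jac               # weight including the Jacobian
    return a1, a2, n, w_bar
\end{verbatim}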
\subsection{Extension to NURBS elements}
Since NURBS basis functions are defined on the parameter space rather than on the parent space, a slight modification to the implementation is required. The GPs are now given by $\{(\tilde{\xi}_i,\tilde{\eta}_i,\tilde{w}_i)\}_{i=1}^{ngp}$.
They are firstly transformed to the parameter space using the mapping defined in Equation~\eqref{eq:phi_mapping}:
$\{(\xi_i,\eta_i,w_i)\}_{i=1}^{ngp}$, where $w_i = \tilde{w}_i J$ with $J$ the Jacobian of the parent-to-parameter mapping. After that one works in the parameter space; for example the inverse mapping
Equation~\eqref{eq:hang1} determines a point in the parameter space.
Steps (iv) and (v) in the algorithm given in Box \ref{box-gp-3D} require modification because one can exploit
the fact that the NURBS mapping, Equation~\eqref{eq:bspline_vol_simple}, is global. Hence, one writes Equation~\eqref{eq:hang2}
as follows
\begin{equation}
\vm{x}_i = \vm{N}^2 (\xi_i^2,\eta_i^2,\zeta_i^2) \vm{x}^2 \rightarrow
(\xi_i^2,\eta_i^2,\zeta_i^2) \label{eq:hang3}
\end{equation}
where $\vm{x}^2$ are the control points of patch 2. Note that in Equation~\eqref{eq:hang1}, $\vm{x}^1_e$ denotes the
control points of only the element under consideration. Using the output $(\xi_i^2,\eta_i^2,\zeta_i^2)$ and the standard $FindSpan$ algorithm, cf. \cite{piegl_book},
one can determine to which element $\vm{x}_i$ belongs, i.e.,\xspace $e^2$.
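For reference, a simplified zero-based Python transcription of the $FindSpan$ binary search of \cite{piegl_book} might read as follows (the index conventions should be adapted to the code at hand):
\begin{verbatim}
def find_span(u, p, knots):
    # Return the knot span index i such that knots[i] <= u < knots[i+1];
    # p is the degree, n the index of the last basis function.
    n = len(knots) - p - 2
    if u >= knots[n + 1]:        # special case: right end of the patch
        return n
    low, high = p, n + 1         # binary search over the valid spans
    mid = (low + high) // 2
    while u < knots[mid] or u >= knots[mid + 1]:
        if u < knots[mid]:
            high = mid
        else:
            low = mid
        mid = (low + high) // 2
    return mid
\end{verbatim}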
\begin{rmk}
Note also that if B{\'e}zier extraction is used to implement NURBS-based
IGA, see e.g.,\xspace \cite{borden_isogeometric_2011}, then this section can be skipped: with B{\'e}zier extraction
the basis functions are Bernstein polynomials, which are defined in the parent space, multiplied
by sparse extraction matrices. Moreover, B{\'e}zier extraction will facilitate the incorporation of the non-conforming multi-patch
NURBS IGA into existing FE codes, including commercially available FE packages.
\end{rmk}
\section{Numerical examples}\label{sec:examples}
In this section four numerical examples of increasing complexity are presented to assess the performance
of the proposed method. They are listed as follows
\begin{enumerate}
\item Timoshenko beam (2D/2D coupling)
\item Plate with a center inclusion (2D embedded mesh coupling)
\item Cantilever beam (3D/3D coupling)
\item Connecting rod (complex 3D/3D coupling)
\end{enumerate}
The beam examples are simple problems that serve to verify the implementation, and we provide a convergence analysis
for the first example.
Unless otherwise stated, we use
MIGFEM--an open source Matlab IGA code which is available at \url{https://sourceforge.net/projects/cmcodes/}
for our computations and the visualisation was performed in Paraview \cite{para}.
\subsection{Timoshenko beam}
Consider a beam of dimensions $L \times D$ (unit thickness), subjected to a
parabolic traction at the free end as shown in Fig.~\ref{fig:beam-geo}.
A plane stress state is assumed. The parabolic traction is given by
\begin{equation}
t_y(y) = -\frac{P}{2I} \biggl ( \frac{D^2}{4} - y^2 \biggr)\label{eq:ty}
\end{equation}
\noindent where $I = D^3/12$ is the moment of inertia. The exact displacement
field of this problem is, see e.g.,\xspace \cite{elasticity_book}
\begin{equation}
\begin{split}
u_x(x,y) &= \frac{Py}{6EI} \biggl [ (6L-3x)x + (2+\nu)\biggl(y^2-\frac{D^2}{4}\biggr) \biggr] \\
u_y(x,y) &= - \frac{P}{6EI} \biggl [ 3\nu y^2(L-x) + (4+5\nu)\frac{D^2x}{4} +(3L-x)x^2 \biggr] \\
\end{split}
\label{eq:tBeamExactDisp}
\end{equation}
\noindent and the exact stresses are
\begin{equation}
\sigma_{xx}(x,y) = \frac{P(L-x)y}{I};
\quad \sigma_{yy}(x,y) = 0, \quad
\sigma_{xy}(x,y) = -\frac{P}{2I} \biggl ( \frac{D^2}{4}-y^2\biggr)
\end{equation}
\noindent In the computations, material properties are taken as $E=
3.0 \times 10^7$, $\nu = 0.3$ and the beam dimensions are $D=6$ and
$L=48$. The shear force is $P = 1000$. In order to model the clamping condition,
the displacement defined by Equation~\eqref{eq:tBeamExactDisp} is prescribed as essential boundary
conditions at $x=0, -D/2 \le y \le D/2$. This problem is solved with bilinear Lagrange elements
and high-order B-spline elements. The former helps to verify the implementation in addition to
the ease of enforcement of Dirichlet boundary conditions (BCs). For the latter, care must be taken
in enforcing the Dirichlet BCs given in Equation~\eqref{eq:tBeamExactDisp} since the B-splines
are not interpolatory. The beam is divided into two domains by a vertical line at $x=L/2$ i.e.,\xspace
$\Gamma^*=\{x=L/2,-D/2 \le y \le D/2\}$. \\
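For verification purposes, the exact fields are easily transcribed; a short Python sketch with the parameter values used here:
\begin{verbatim}
import numpy as np

# Exact Timoshenko beam solution, Equation (eq:tBeamExactDisp),
# with the parameter values given in the text (plane stress).
E, nu, D, L, P = 3.0e7, 0.3, 6.0, 48.0, 1000.0
I = D**3 / 12.0

def exact_displacement(x, y):
    ux = P*y/(6*E*I) * ((6*L - 3*x)*x + (2 + nu)*(y**2 - D**2/4))
    uy = -P/(6*E*I) * (3*nu*y**2*(L - x) + (4 + 5*nu)*D**2*x/4
                       + (3*L - x)*x**2)
    return ux, uy

def exact_stress(x, y):
    sxx = P*(L - x)*y / I
    syy = 0.0
    sxy = -P/(2*I) * (D**2/4 - y**2)
    return sxx, syy, sxy

# e.g. the shear stress attains its maximum magnitude on the midline:
# abs(exact_stress(0.0, 0.0)[2]) = P*D**2/(8*I) = 250
\end{verbatim}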
\begin{figure}[htbp]
\centering
\psfrag{p}{P}\psfrag{l}{$L$}\psfrag{d}{$D$}\psfrag{x}{$x$}\psfrag{y}{$y$}
\includegraphics[width=0.5\textwidth]{beam}
\caption{Timoshenko beam: problem description.}
\label{fig:beam-geo}
\end{figure}
\noindent \textbf{Lagrange elements} Firstly, a conforming mesh (albeit with double nodes at $\Gamma^*$)
is considered and each domain is
discretised by a mesh of $20\times4$ elements as given in Fig.~\ref{fig:beam-meshes}a. Then, a non-conforming
mesh where the left domain is discretised by $20\times8$ elements and
the right domain is meshed by $20\times4$ is considered, cf. Fig.~\ref{fig:beam-meshes}b.
A value of $1\times10^8$ was used for $\alpha$.
The vertical displacements along the midline of the beam,
$u_y(0 \le x \le L, y=0)$, are plotted in Fig.~\ref{fig:beam-disp} together with the exact solution.
A good agreement can be observed.
The stresses are plotted in Fig.~\ref{fig:beam-stress}.\\
\begin{figure}[htbp]
\centering
\subfloat[Conforming mesh]{\includegraphics[width=0.45\textwidth]{tbeam-mesh1}}\\
\subfloat[Non conforming mesh]{\includegraphics[width=0.45\textwidth]{tbeam-mesh2}}
\caption{Timoshenko beam: conforming and non-conforming meshes. Note that even with the conforming mesh,
there are double nodes at the coupling interface $x=L/2, -D/2 \le y \le D/2$.}
\label{fig:beam-meshes}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[Conforming mesh]{\includegraphics[width=0.45\textwidth]{2D-2D-conforming-disp}}
\subfloat[Non-conforming mesh]{\includegraphics[width=0.45\textwidth]{2D-2D-nonconforming-disp}}
\caption{Timoshenko beam: comparison of $u_y(0 \le x \le L,y=0)$ with the exact solution.}
\label{fig:beam-disp}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[Stresses along the beam length]{\includegraphics[width=0.45\textwidth]{2D-2D-stress-length}}
\subfloat[Stresses over the beam height]{\includegraphics[width=0.45\textwidth]{2D-2D-stress-width}}
\caption{Timoshenko beam: stresses obtained with a conforming mesh ($20\times8$ for each domain).}
\label{fig:beam-stress}
\end{figure}
\noindent \textbf{B-spline elements} Next, we study the performance of the B-spline elements, one mesh of
which is given in Fig.~\ref{fig:beam-bspline-mesh}.
Dirichlet BCs are enforced using the least-squares projection method, see e.g.,\xspace \cite{nguyen_iga_review}.
Note that Nitsche's method can also be used to weakly enforce the Dirichlet BCs. However, we use
Nitsche's method only to couple the patch interfaces.
As detailed in \cite{hughes-fem-book} for Lagrangian basis functions,
a rule of $(p+1)\times(q+1)$ Gaussian quadrature can be applied for
two-dimensional elements in which $p$ and $q$ denote the orders of
the chosen basis functions in the $\xi$ and $\eta$ direction. The same
procedure is also used for NURBS basis functions in the present work,
although it should be emphasised that Gaussian quadrature is not optimal for IGA
\cite{hughes_efficient_2010,Auricchio201215}. The stresses are given in Fig.~\ref{fig:beam-bspline-stress}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.55\textwidth]{2D-2D-Bspline-mesh}
\caption{Timoshenko beam: B-spline bi-cubic ($p=q=3$) mesh with $4\times4$ elements for the left
domain and $2\times2$ elements for the right one. The filled circles denote the control points.}
\label{fig:beam-bspline-mesh}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{2D-2D-Bspline-stress-length}
\includegraphics[width=0.45\textwidth]{2D-2D-Bspline-stress}
\caption{Timoshenko beam: stresses with B-spline elements. The left domain
is meshed by $8\times8$ cubic elements and the right domain with $2\times2$ cubic elements.}
\label{fig:beam-bspline-stress}
\end{figure}
Finally we present results obtained with a non-hierarchical B-spline mesh as given in
Fig.~\ref{fig:beam-bspline-general}: an $8\times6$ bi-cubic mesh is used for the left domain
and a $4\times3$ bi-cubic mesh is used for the right domain. A quadratic shear stress profile was obtained
in which the theoretical maximum value of 250 along the midline of the beam can be observed.\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.55\textwidth]{tbeam-general}
\caption{Timoshenko beam: non-hierarchical B-spline mesh ($8\times6$ cubic elements
for the left domain and $4\times3$ cubic elements for the right domain).}
\label{fig:beam-bspline-general}
\end{figure}
\noindent \textbf{Convergence study} In order to assess the convergence of the method,
displacement and energy norms are evaluated with the energy norm given by
\begin{equation}
e_\text{energy} = \left[\frac{1}{2}\int_{\Omega} \left(\bsym{\varepsilon}_{\mathrm{num}}
- \bsym{\varepsilon}_{\mathrm{exact}}\right)\cdot \vm{D} \cdot
\left(\bsym{\varepsilon}_{\mathrm{num}} - \bsym{\varepsilon}_{\mathrm{exact}} \right)
\mathrm{d} \Omega \right]^{\frac{1}{2}},
\end{equation}
and the displacement norm defined as
\begin{equation}
e_\text{displacement} = \left\{{\displaystyle \int_{\Omega} \left[ \left( {\bf u}_\text{num} -
{\bf u}_\text{exact} \right) \cdot \left( {\bf u}_\text{num} - {\bf u}_\text{exact}
\right) \right] \mathrm{d} \Omega }\right\}^{1/2},
\end{equation}
\noindent where $\bsym{\varepsilon}_{\mathrm{num}}$, and
$\bsym{\varepsilon}_{\mathrm{exact}}$ are the numerical strain
vector and exact strain vector, respectively. The same
notation applies to the displacement vector ${\bf u}_\text{num}$ and
${\bf u}_\text{exact}$. In the post-processing step, the above norms
are calculated using the same Gauss-Legendre quadrature that has been adopted
for the stiffness matrix computation.
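The post-processing loop can be sketched as follows (the element interface and the field evaluation callbacks are hypothetical placeholders, not MIGFEM routines):
\begin{verbatim}
import numpy as np

def error_norms(elements, gauss_rule, D_mat,
                strain_num, strain_exact, disp_num, disp_exact):
    # Accumulate the energy and displacement error norms by quadrature,
    # using the same Gauss rule as the stiffness matrix computation.
    e_energy, e_disp = 0.0, 0.0
    for elem in elements:
        for xi, w in gauss_rule:
            x, jac = elem.map_to_physical(xi)   # hypothetical element API
            de = strain_num(x) - strain_exact(x)
            du = disp_num(x) - disp_exact(x)
            e_energy += 0.5 * de @ D_mat @ de * w * jac
            e_disp   += du @ du * w * jac
    return np.sqrt(e_energy), np.sqrt(e_disp)
\end{verbatim}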
\begin{figure}[htbp]
\centering
\includegraphics[width=0.55\textwidth]{tbeam-convergence-mesh}
\caption{Convergence study of the Timoshenko beam: initial mesh from which refined meshes are obtained
by dividing each knot span into two equal halves.}
\label{fig:beam-convergence-mesh}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[displacement norm]{\includegraphics[width=0.49\textwidth]{tbeam-convergence-disp}}
\subfloat[energy norm]{\includegraphics[width=0.49\textwidth]{tbeam-convergence-energy}}
\caption{Timoshenko beam: convergence plots.}
\label{fig:tbeam-norms}
\end{figure}
The initial mesh from which refined meshes were obtained is given in Fig.~\ref{fig:beam-convergence-mesh}.
It can be shown that for linear elasticity $\alpha$ depends on the element size $h_e$ and the material parameters, see for example \cite{fritz-nitsche-mortar,Bazilevs200712}
\begin{equation}
\alpha = \frac{\lambda + \mu}{2} \frac{\theta(p)}{h_e}
\label{eq:alpha-estimation}
\end{equation}
where $\theta(p)$ is a positive number that depends only on the polynomial order $p$ of the finite element
approximation. For bilinear basis functions, we set $\theta(p=1)=12$ and for bi-quadratic basis functions, we set
$\theta(p=2)=36$. These values were chosen so that the stiffness matrix is positive definite. Thus, for each mesh,
Equation~\eqref{eq:alpha-estimation} was used to compute the stabilisation parameter.
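In code, this amounts to a one-line evaluation per mesh; the following sketch assumes the plane-strain form of the Lam\'e constants and should be adapted to the stress state at hand:
\begin{verbatim}
def stabilisation_parameter(E, nu, h_e, p):
    # Equation (eq:alpha-estimation) with the empirical values
    # theta(1)=12 and theta(2)=36 used in this study.
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # plane-strain Lame constant
    mu = E / (2 * (1 + nu))
    theta = {1: 12.0, 2: 36.0}[p]
    return 0.5 * (lam + mu) * theta / h_e
\end{verbatim}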
The convergence plots are given in Fig.~\ref{fig:tbeam-norms} where optimal convergence rates for both displacement
and energy norms were obtained. Note that minimum values of $\alpha$ can also be computed based on a
numerical analysis of the discrete forms, which leads to the global \cite{Griebel}
and local \cite{embar_imposing_2010} generalized eigenvalue approaches.
\subsection{Plate with a center inclusion}
Consider a plate with a center inclusion as given in Fig.~\ref{fig:inclusion-geo}.
The matrix properties are denoted by $E_m$ and $\nu_m$ and the inclusion properties are
denoted by $E_i$ and $\nu_i$.
A traction along the vertical direction is applied on the top edge while nodes along the bottom edge
are constrained. This problem is solved with (1) the embedded Nitsche method and (2) the XFEM, both of which
do not require a mesh conforming to the inclusion. The XFEM mesh is given in Fig.~\ref{fig:inclusion-mesh}a
where $30\times60$ four-noded quadrilateral (Q4) elements are adopted. The material interface is modeled via the enrichment
function (the $abs$ enrichment function) proposed in \cite{Sukumar1}. The meshes for Nitsche's method, cf. Fig.~\ref{fig:inclusion-mesh}b, consist of
a background mesh for the plate ($32\times64$ Q4 elements) and another mesh for the inclusion which is embedded
in the background mesh ($16\times16$ bi-quadratic NURBS elements).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{inclusion-geom}
\caption{A plate with a center inclusion.}
\label{fig:inclusion-geo}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[XFEM]{\includegraphics[width=0.33\textwidth]{inclusion-xfem-mesh}}\;\;
\subfloat[Nitsche]{\includegraphics[width=0.4\textwidth]{inclusion-nitsche-mesh}}
\caption{Plate with a center inclusion: (a) XFEM mesh with enriched nodes and (b) Nitsche's method
with embedded mesh.}
\label{fig:inclusion-mesh}
\end{figure}
For details on the Nitsche based embedded mesh method, we refer to e.g.,\xspace \cite{Sanders2011a}. Here, we apply
this method in the context of IGA by using NURBS elements.
The implementation is briefly explained as follows. The assembly of inclusion elements is standard and
the assembly of background elements is similar to XFEM for voids: void elements (completely covered by
inclusion elements) do not contribute to the total stiffness matrix, whereas cut elements (elements cut by the inclusion)
require a special integration scheme in which the part that falls within the inclusion domain is not integrated.
This can be achieved using the standard sub-triangulation technique in the context of XFEM \cite{mos_finite_1999}
or the hierarchical element subdivision employed in the Finite Cell Method \cite{NME:NME4522} or the technique
used in the NEFEM (NURBS Enhanced FEM) \cite{sevillafern'andez-m'endez2008b}. Here, for simplicity, we used
the hierarchical element subdivision method.
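A minimal Python sketch of such a subdivision quadrature is given below; the corner-sign test used to classify cells is a simplification, and the level-set function phi (negative inside the inclusion) is an assumption of this illustration:
\begin{verbatim}
import numpy as np

GP = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2x2 Gauss rule on [-1,1]^2

def integrate_outside(f, phi, xmin, xmax, ymin, ymax, depth=3):
    # Integrate f over the part of the cell where phi > 0 (outside the
    # inclusion) by recursive subdivision of cut cells.
    corners = [phi(x, y) for x in (xmin, xmax) for y in (ymin, ymax)]
    hx, hy = 0.5 * (xmax - xmin), 0.5 * (ymax - ymin)
    if all(c > 0 for c in corners) or depth == 0:
        total = 0.0                        # leaf cell: Gauss quadrature,
        for gx in GP:                      # dropping GPs inside the inclusion
            for gy in GP:
                x, y = xmin + hx * (1 + gx), ymin + hy * (1 + gy)
                if phi(x, y) > 0:
                    total += f(x, y) * hx * hy   # unit Gauss weights
        return total
    if all(c < 0 for c in corners):
        return 0.0                         # cell lies inside the inclusion
    xm, ym = xmin + hx, ymin + hy          # subdivide the cut cell
    return sum(integrate_outside(f, phi, a, b, c, d, depth - 1)
               for (a, b, c, d) in [(xmin, xm, ymin, ym), (xm, xmax, ymin, ym),
                                    (xmin, xm, ym, ymax), (xm, xmax, ym, ymax)])
\end{verbatim}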
We refer to Fig.~\ref{fig:inclusion-nitsche-explain}.
The inclusion Young's modulus is $E_i=1$.
Due to the contrast in Young's moduli, the average operator given in Equation~\eqref{eq:general-average}
was used with $\gamma=E_m/(E_m+E_i)$ as proposed in \cite{Sanders2011a}. The stabilisation parameter
is chosen empirically as $\alpha=1\times10^6$.
Fig.~\ref{fig:inclusion-nitsche-res} shows the contour plots of the $u_y$ solutions obtained with both methods.
A good agreement between the Nitsche and XFEM solutions can be observed.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{inclusion-nitsche-explain}
\caption{A plate with a center inclusion: Nitsche based embedded mesh method.
The red filled squares denote Gauss points to evaluate the coupling matrices. Cyan squares denote void elements
and red squares represent cut elements.}
\label{fig:inclusion-nitsche-explain}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{inclusion-res.png}
\caption{A plate with a center inclusion: contour plot of $u_y$ solutions--XFEM (left) and Nitsche (right).}
\label{fig:inclusion-nitsche-res}
\end{figure}
\subsection{3D-3D coupling}
In order to test the implementation for 3D problems, we consider the 3D cantilever beam shown in
Fig.~\ref{fig:beam3D-geo}.
The data are: Young's modulus $E=1000$, Poisson's ratio $\nu=0.3$, $L=10$, $W=H=1$ and the imposed
displacement in the $z$-direction is $1$. The non-conforming
B-splines discretisation is given in Fig.~\ref{fig:beam3D-mesh} where the beam is divided into two
equal parts. A value of ... was used for the stabilisation parameter $\alpha$.
In Fig.~\ref{fig:beam3D-res} the contour plot of $\sigma_{xx}$ is given and a comparison was made with
a standard Galerkin discretisation of $32\times4\times4$ tri-cubic B-splines elements and a good agreement was obtained.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{3d-3d-geo}
\caption{A 3D cantilever beam subjected to an imposed vertical displacement.}
\label{fig:beam3D-geo}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{nitsche-3D3D-spline-mesh}
\caption{A 3D cantilever beam subjected to an imposed vertical displacement: $16\times4\times4$ tri-cubic
B-splines elements for the left domain and $16\times1\times2$ tri-cubic
B-splines elements for the right domain.}
\label{fig:beam3D-mesh}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[Nitsche]{\includegraphics[width=0.55\textwidth]{nitsche-3D3D-spline-stress}}\\
\subfloat[Galerkin]{\includegraphics[width=0.55\textwidth]{iga-3D3D-spline-stress}}
\caption{A 3D cantilever beam: contour plots of $\sigma_{xx}$ obtained with (a) the Nitsche coupling and (b) a standard Galerkin discretisation.}
\label{fig:beam3D-res}
\end{figure}
\subsection{Connecting rod}
The method is now applied to a more complicated geometry, taking into account more than one coupling interface, curved interfaces and interfaces of different dimensions.
This geometry is a simplified representation of a \emph{connecting rod}, which is a component of an internal combustion engine and represents a classic linear case in static stress-strain analysis.
The geometric input model is composed of three NURBS patches (see Fig.~\ref{fig:concepts}) with two coupling interfaces.
The dimensions are consistent with an actual component and the material properties are Young's modulus $E=2\times10^5$ MPa and Poisson's ratio $\nu=0.3$, corresponding to a standard steel.
Boundary conditions are represented in Fig.~\ref{fig:conrod3D-BC}: an ideal fixed boundary condition on the two vertical surfaces of the \emph{big-end}, and a vertical total force $F=1000$ N applied to the internal ring of the \emph{small-end}, according to the effect of the \textit{pin-piston} sub-assembly that transmits a bending moment to the connecting-rod stem.
For the simulation the model is refined with tri-cubic functions and $32\times4\times8$ elements for patch 1, $24\times12\times4$ elements for patch 2 and $64\times4\times8$ elements for patch 3, resulting in a total number of 4224 elements and 11305 control points.
For both coupling interfaces the smaller faces are the regions where the surface integration is performed and a stabilization parameter $\alpha=1\times10^8$ was chosen empirically.
The results are shown in Fig.~\ref{fig:conrod3D-results}, where displacement and stress fields are plotted.
The displacement distribution shows the typical cubic polynomial form of the analytical Saint-Venant model.
The pattern of the Von Mises equivalent stress (failure criterion) is used to compare the IGA simulation results with those of \textit{Siemens-NX} (a traditional FE model discretised with second-order tetrahedra; 6182 elements and 11332 nodes, Fig.~\ref{fig:NASTRAN-results}).
The typical combined compressive and bending stress state of the connecting-rod stem is reflected in Von Mises stresses close to zero in the mean plane; the superior fibres attain a maximum traction symmetrically equivalent to the compression of the inferior fibres, due to the strictly positive equivalent measure of the Von Mises yield criterion.
In both analyses interesting three-dimensional effects are detected: maximum stress values correspond to the free fibres of the stem in the superior and inferior surfaces that interact with the big-end; the interaction between the stem and both the big-end and the small-end produces an increased stress value in the azure region near the neutral axis, which is very well captured in both analyses, thus demonstrating the effectiveness of the patch coupling in the IGA model; the boundary conditions are typically hyperstatic and only the inner part of the big-end transmits traction/compression reactions (green regions); due to this particular load case, parts of the big-end (blue regions) are superfluous in both analyses and could be removed, reducing the mass of the component; the internal stress distribution in the inner ring of the small-end again shows very good agreement of the combined compressive and bending behaviour that reaches the pin region.
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{conrod-BC_1.png}
\caption{Connecting-rod: geometry and boundary conditions. The dimensions are in mm.}
\label{fig:conrod3D-BC}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[z-displacement field]{\includegraphics[width=.7\textwidth]{IGA-Conrod-Displ.png}}\\
\subfloat[Stress field]{\includegraphics[width=.7\textwidth]{IGA-Conrod-Stress.png}}
\caption{Results of the connecting rod.}
\label{fig:conrod3D-results}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{NASTRAN-conrod.png}
\caption{Stress plot from the commercial code \textit{NX-NASTRAN}.}
\label{fig:NASTRAN-results}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
We presented a Nitsche method to couple non-conforming NURBS patches.
A detailed implementation was provided and numerical examples demonstrated the good performance of the method.
The proposed method certainly enlarges the applicability of NURBS-based isogeometric analysis.
The contribution was limited to linear elastostatic problems; extensions of the method to (1) dynamic problems
and (2) nonlinear material problems are under investigation before one can claim whether Nitsche coupling is
a viable method for multi-patch NURBS-based isogeometric analysis.
As we were preparing the paper for submission, we became aware of contemporary work that had been presented the previous week at the US National Congress for Computational Mechanics \cite{Dominik_Nitsche3D} in the context of the finite cell method.
\section*{Acknowledgements}
The authors would like to acknowledge the partial financial support of the
Framework Programme 7 Initial Training Network Funding under grant number
289361 ``Integrating Numerical Simulation and Geometric Design Technology".
St\'{e}phane Bordas also thanks partial funding for his time provided by
1) the EPSRC under grant EP/G042705/1 Increased Reliability for Industrially
Relevant Automatic Crack Growth Simulation with the eXtended Finite Element
Method and 2) the European Research Council Starting Independent Research
Grant (ERC Stg grant agreement No. 279578) entitled ``Towards real time multiscale simulation of cutting in
non-linear materials with applications to surgical simulation and computer
guided surgery''. Marco Brino thanks Politecnico di Torino for the funding that supported his visit to
iMAM at Cardiff University.
\section[Introduction]{Introduction}
{\bf Exact convex symplectic manifolds and hypersurfaces.}
An {\em exact convex symplectic manifold} $(V,\lambda)$ is a connected
manifold $V$ of dimension $2n$ without boundary with a one-form
$\lambda$ such that the following conditions are satisfied.
\begin{description}
\item[(i)] The two-form $\omega=d\lambda$ is symplectic.
\item[(ii)] The symplectic manifold $(V,\omega)$ is convex at
infinity, i.e.\, there exists an exhaustion $V=\cup_k V_k$ of $V$ by
compact sets $V_k\subset V_{k+1}$ with smooth boundary such that
$\lambda|_{\partial V_k}$ is a contact form.
\end{description}
(cf.~\cite{eliashberg-gromov}). Define a vector field $Y_\lambda$ on
$V$ by $i_{Y_\lambda}\omega=\lambda$. Then the last condition is
equivalent to saying that $Y_\lambda$ points out of $V_k$ along $\partial
V_k$.
We say that an exact convex symplectic manifold $(V,\lambda)$ is {\em
complete} if the vector field $Y_\lambda$ is complete. We say that
$(V,\lambda)$ has {\em bounded topology} if $Y_\lambda\neq 0$ outside
a compact set. Note that $(V,\lambda)$ is complete and of bounded
topology iff there exists an embedding $\phi:M\times\mathbb{R}_+\to V$ such
that $\phi^*\lambda=e^r\alpha_M$ with contact form
$\alpha_M=\phi^*\lambda|_{M\times\{0\}}$, and such that
$V\setminus\phi(M\times\mathbb{R}_+)$ is compact. (To see this, simply apply
the flow of $Y_\lambda$ to $M:=\partial V_k$ for large $k$).
We say that a subset $A \subset V$ is {\em displaceable} if it can be
displaced from itself via a Hamiltonian isotopy, i.e.\,there exists
a smooth family of Hamiltonian functions
$H=H(A) \in C^\infty([0,1]\times V)$
with compact support such that the time one flow $\phi_H$
of the time dependent Hamiltonian vector field $X_{H_t}$ defined by
$dH_t=-\iota_{X_{H_t}}\omega$ for $H_t=H(t,\cdot) \in C^\infty(V)$
and $t \in [0,1]$ satisfies $\phi_H(A) \cap A=\emptyset$.
The main examples of exact convex symplectic manifolds we have in mind
are Stein manifolds. We briefly recall its
definition. A {\em Stein manifold} is a triple $(V,J,f)$ where $V$ is
a connected
manifold, $J$ is an integrable complex structure on $V$ and $f \in
C^\infty(V)$ is an exhausting plurisubharmonic
function, i.e. $f$ is proper and bounded from below,
and the exact two form $\omega=-d d^c f$ is symplectic.
Here the
one form $\lambda=-d^c f$ is defined by the condition
$d^c f(\xi)=d f(J \xi)$
for every vector field $\xi$.
We refer to \cite{cieliebak-eliashberg} for a detailed treatment
of Stein manifolds and Eliashberg's topological characterization of
them.
It is well known that if the plurisubharmonic function $f$ is
Morse, then all critical points of $f$ have Morse index less than
or equal to half the dimension of $V$, see for example
\cite{cieliebak-eliashberg}. The Stein manifold
$(V,J,f)$ is called \emph{subcritical} if this inequality is strict.
In a subcritical Stein manifold, every compact subset $A$ is
displaceable~\cite[Lemma 3.2]{biran-cieliebak}.
{\em Remark. }
Examples of exact convex symplectic manifolds which are not Stein can
be obtained using the following construction. Let $M$ be a
$(2n-1)$-dimensional closed manifold which admits a pair of contact forms
$(\alpha_0,\alpha_1)$ satisfying
$$\alpha_1 \wedge (d\alpha_1)^{n-1}=-\alpha_0 \wedge (d\alpha_0)^{n-1}>0$$
and
$$\alpha_i\wedge(d\alpha_i)^k \wedge(d\alpha_j)^{n-k-1}=0, \quad
0 \leq k \leq n-2$$
where $(i,j)$ is a permutation of $(0,1)$. Then a suitable interpolation
of $\alpha_0$ and $\alpha_1$ endows the manifold $V=M \times [0,1]$
with the structure of an exact convex symplectic manifold, where
the restriction of the one-form to $M \times \{0\}$ is given by
$\alpha_0$ and the restriction to $M \times \{1\}$ is given by
$\alpha_1$. Since $H_{2n-1}(V)=\mathbb{Z}$, the manifold $V$ does not admit a
Stein structure. Moreover, what
makes these examples particularly interesting is the fact that
they have two boundary components, whereas the boundary of a connected
Stein manifold is always connected. The first construction in
dimension four of an exact convex
symplectic manifold of the type above was carried out by D.\,McDuff
in \cite{mcduff}. H.\,Geiges generalized her method in \cite{geiges},
where he also obtained higher dimensional examples.
If $(V,\lambda)$ is an exact convex symplectic manifold then so is
its {\em stabilization} $(V \times \mathbb{C}, \lambda \oplus
\lambda_\mathbb{C})$ for the one form $\lambda_\mathbb{C}=\frac{1}{2}(x\,dy-y\,dx)$ on
$\mathbb{C}$. Moreover, in $(V \times \mathbb{C}, \lambda \oplus
\lambda_\mathbb{C})$ every compact subset $A$ is displaceable.
It is shown in \cite{cieliebak} that each subcritical Stein manifold
is Stein deformation equivalent to a split Stein manifold, i.e.\,a
Stein manifold of the form $(V \times \mathbb{C},J \times i,
f+|z|^2)$ for a Stein manifold $(V,J,f)$.
{\em Remark. }If $(V,\lambda)$ is an exact convex symplectic manifold,
then so is $(V,\lambda+dh)$ for any smooth function $h:V\to\mathbb{R}$ with compact
support. We call the 1-forms $\lambda$ and $\lambda+dh$ {\em
equivalent}. For all our considerations only the equivalence class of
$\lambda$ will be relevant.
\medskip
An {\em exact convex hypersurface} in an exact convex symplectic
manifold $(V,\lambda)$ is a compact hypersurface (without boundary)
$\Sigma\subset V$ such that
\begin{description}
\item [(i)]There exists a contact 1-form $\alpha$ on $\Sigma$ such that
$\alpha-\lambda|_\Sigma$ is exact.
\item[(ii)] $\Sigma$ is {\em bounding},
i.e. $V\setminus\Sigma$ consists of two connected components, one
compact and one noncompact.
\end{description}
{\em Remarks. }
(1) It follows that the volume form $\alpha\wedge(d\alpha)^{n-1}$
defines the orientation of $\Sigma$ as boundary of the bounded
component of $V\setminus\Sigma$.
(2) If $\Sigma$ is an exact convex hypersurface in $(V,\lambda)$ with
contact form $\alpha$, then there exists an equivalent 1-form
$\mu=\lambda+dh$ on $V$ such that $\alpha=\mu|_\Sigma$. To see this,
extend $\alpha$ to a 1-form $\beta$ on $V$. As
$(\beta-\lambda)|_\Sigma$ is exact, there exists a function $h$ on a
neighbourhood $U$ of $\Sigma$ such that $\beta-\lambda=dh$ on $U$. Now
simply extend $h$ to a function with compact support on $V$ and set
$\mu:=\lambda+dh$.
(3) If $H^1(\Sigma;\mathbb{R})=0$ condition (i) is equivalent to
$d\alpha=\omega|_\Sigma$.
(4) Condition (ii) is automatically satisfied if $H_{2n-1}(V;\mathbb{Z})=0$,
e.g.~if $V$ is a stabilization or a Stein manifold of dimension $>2$.
\medskip
{\bf Floer homology. }
In the following we assume that $(V,\lambda)$ is a complete exact
convex symplectic manifold of bounded topology, and $\Sigma\subset V$
is an exact convex hypersurface with contact form $\alpha$.
We will define an invariant $HF(\Sigma,V)$ as the Floer homology of an
action functional which was studied previously by Rabinowitz
\cite{rabinowitz}.
A {\em defining Hamiltonian for $\Sigma$} is a function $H \in C^\infty(V)$
which is constant outside of a compact set of $V$,
whose zero level set $H^{-1}(0)$ equals $\Sigma$,
and whose Hamiltonian vector field
$X_H$ defined by $dH=-\iota_{X_H}\omega$ agrees with the Reeb vector field
$R$ of $\alpha$ on $\Sigma$. Defining Hamiltonians exist since $\Sigma$ is
bounding, and they form a convex space.
Fix a defining Hamiltonian $H$ and
denote by $\mathscr{L}=C^\infty(\mathbb{R}/\mathbb{Z},V)$
the free loop space of $V$.
Rabinowitz' action functional
$$\mathcal{A}^H \colon \mathscr{L}\times \mathbb{R} \to \mathbb{R}$$
is defined as
$$\mathcal{A}^H(v,\eta) := \int_0^1v^*\lambda-\eta
\int_0^1 H(v(t))dt,\quad (v,\eta) \in \mathscr{L}\times \mathbb{R}.$$
One may think of $\mathcal{A}^H$ as the Lagrange multiplier functional,
with respect to a mean value constraint on the loop, of the unperturbed
action functional of classical mechanics also studied
in Floer theory. The critical
points of $\mathcal{A}^H$ satisfy
\begin{equation}\label{crit1}
\left. \begin{array}{cc}
\partial_t v(t)= \eta X_H(v(t)),
& t \in \mathbb{R}/\mathbb{Z}, \\
H(v(t))=0. & \\
\end{array}
\right\}
\end{equation}
Here we used the fact that $H$ is invariant under its Hamiltonian
flow. Since the restriction of the Hamiltonian vector
field $X_H$ to $\Sigma$ is the Reeb vector field,
the equations (\ref{crit1}) are equivalent to
\begin{equation}\label{crit}
\left. \begin{array}{cc}
\partial_t v(t)= \eta R(v(t)),
& t \in \mathbb{R}/\mathbb{Z}, \\
v(t)\in \Sigma, & t \in \mathbb{R}/\mathbb{Z}, \\
\end{array}
\right\}
\end{equation}
i.e. $v$ is a periodic orbit of the Reeb vector field on
$\Sigma$ with period $\eta$.
\footnote{The period $\eta$ may be negative or zero.
We refer in this paper
to Reeb orbits traversed backwards as Reeb orbits with negative period
and to constant orbits as Reeb orbits of period zero.}
\begin{thm}\label{well}
Under the above hypotheses, the Floer homology
$HF(\mathcal{A}^H)$ is well-defined. Moreover, if
$H_s$ for $0 \leq s \leq 1$ is a smooth family of defining functions
for exact convex hypersurfaces $\Sigma_s$, then
$HF(\mathcal{A}^{H_0})$ and $HF(\mathcal{A}^{H_1})$ are
canonically isomorphic.
\end{thm}
Hence the Floer homology $HF(\mathcal{A}^H)$ is independent of
the choice of the defining function $H$ for an exact convex
hypersurface $\Sigma$, and the resulting invariant
$$
HF(\Sigma,V) := HF(\mathcal{A}^H)
$$
does not change under homotopies of exact convex hypersurfaces.
The next result is a vanishing theorem for the Floer homology
$HF(\Sigma,V)$.
\begin{thm}\label{zero}
If $\Sigma$ is displaceable, then $HF(\Sigma,V)=0$.
\end{thm}
{\em Remark. }The action functional $\mathcal{A}^H$ is also defined if
$H^{-1}(0)$ is not exact convex. However, in this case the Floer homology
$HF(\mathcal{A}^H)$ cannot in general be defined because the
moduli spaces of flow lines will in general not be compact
up to breaking anymore. The problem is that the Lagrange multiplier $\eta$
may go to infinity. This phenomenon actually does happen as the
counterexamples to the Hamiltonian Seifert conjecture show, see
\cite{ginzburg-gurel} and the literature cited therein.
\medskip
Denote by $c_1$ the first Chern class of the tangent bundle of $V$
(with respect to an $\omega$-compatible almost complex structure and
independent of this choice, see~\cite{mcduff-salamon}). Evaluation of
$c_1$ on spheres gives rise to a
homomorphism $I_{c_1} \colon \pi_2(V) \to \mathbb{Z}$.
If $I_{c_1}$ vanishes then the Floer homology $HF_*(\Sigma,V)$ can be
$\mathbb{Z}$-graded with half integer degrees, i.e. $* \in 1/2+\mathbb{Z}$.
The third result is a computation of the Floer homology for the unit
cotangent bundle of a sphere.
\begin{thm}\label{compute}
Let $(V,\lambda)$ be a complete exact convex symplectic
manifold of bounded topology satisfying $I_{c_1}=0$. Suppose that
$\Sigma\subset V$ is an exact convex hypersurface with contact form
$\alpha$ such that $(\Sigma,\ker\alpha)$ is contactomorphic to the
unit cotangent bundle $S^*S^n$ of the sphere of dimension $n\geq 4$
with its standard contact structure. Then
$$
HF_k(\Sigma,V) = \left\{ \begin{array}{cc}
\mathbb{Z}_2,
& k \in \{-n+\frac{1}{2},-\frac{1}{2},\frac{1}{2},n-\frac{1}{2}\}+
\mathbb{Z}\cdot(2n-2), \\
0, & \mathrm{else.} \\
\end{array}
\right.
$$
\end{thm}
\medskip
{\bf Applications and discussion. }
The following well-known technical lemma will allow us to remove
completeness and bounded topology from the hypotheses of our
corollaries.
\begin{lemma}\label{bt}
Assume that $\Sigma$ is an exact convex hypersurface in the exact convex
symplectic manifold $(V,\lambda)$. Then $V$ can be modified outside of
$\Sigma$ to an exact convex symplectic manifold
$(\hat{V},\hat\lambda)$ which is complete and of
bounded topology. If $I_{c_1}=0$ for $V$ the same holds for
$\hat V$. If $\Sigma$ is displaceable in $V$, then we can
arrange that it is displaceable in $\hat V$ as well.
\end{lemma}
\textbf{Proof: }
Let $V_1\subset V_2\dots$ be the compact exhaustion
in the definition of an exact convex symplectic manifold.
Since $\Sigma$ is compact, it is contained in $V_k$ for some $k$. The
flow of $Y_\lambda$ for times $r\in(-1,0]$ defines an embedding $\phi:\partial
V_k\times(-1,0]\to V_k$ such that $\phi^*\lambda=e^r\lambda_0$,
where $\lambda_0=\lambda|_{\partial V_k}$. Now define
$$
(\hat V,\hat\lambda) := (V_k,\lambda)\cup_\phi\bigl(\partial
V_k\times(-1,\infty),e^r\lambda_0\bigr).
$$
This is clearly complete and of bounded topology. The statement about
$I_{c_1}$ is obvious. If $\Sigma$
is displaceable by a Hamiltonian isotopy generated by a compactly
supported Hamiltonian $H:[0,1]\times V\to\mathbb{R}$, we choose $k$ so large
that ${\rm supp}H\subset[0,1]\times V_k$ and apply the same
construction.
\hfill $\square$
As a first consequence of Theorem~\ref{zero}, we recover some known
cases of the Weinstein conjecture, see~\cite{viterbo},
\cite{frauenfelder-ginzburg-schlenk}.
\begin{cor}
Every displaceable exact convex hypersurface $\Sigma$ in an exact
convex symplectic manifold $(V,\lambda)$ carries a closed
characteristic. In particular,
this applies to all exact convex hypersurfaces in a subcritical Stein
manifold, or more generally in a stabilization $V\times\mathbb{C}$.
\end{cor}
\textbf{Proof: }
In view of Lemma~\ref{bt}, we may assume without loss of generality
that $(V,\lambda)$ is complete and of bounded topology. Then by
Theorem~\ref{zero} the Floer homology $HF(\mathcal{A}^H)$ vanishes,
where $H$ is a defining function for $\Sigma$. On the other hand,
the action functional $\mathcal{A}^H$ always has critical points
corresponding to the constant loops in $\Sigma$. So the vanishing of
the Floer homology implies that there must also exist nontrivial
solutions of (\ref{crit}), which are just closed characteristics,
connected to constant loops by gradient flow lines of $\mathcal{A}^H$.
\hfill $\square$
For further applications, the following notation will be convenient.
An {\em exact contact embedding} of a closed contact manifold
$(\Sigma,\xi)$ into an exact convex symplectic manifold $(V,\lambda)$
is an embedding $\iota \colon \Sigma \to V$ such that
\begin{description}
\item [(i)]There exists a 1-form $\alpha$ on $\Sigma$ such that
$\ker\alpha=\xi$ and $\alpha-\iota^*\lambda$ is exact.
\item[(ii)] The image $\iota(\Sigma)\subset V$ is bounding.
\end{description}
In other words, $\iota(\Sigma)\subset V$ is an exact convex
hypersurface with contact form $\iota_*\alpha$ which is
contactomorphic (via $\iota^{-1}$) to $(\Sigma,\xi)$.
Now Theorems~\ref{zero} and~\ref{compute} together with Lemma~\ref{bt}
immediately imply
\begin{cor}\label{cor:non-displaceable}
Assume that $n\geq 4$ and
there exists an exact contact embedding $\iota$ of
$S^*S^n$ into an exact convex symplectic manifold satisfying
$I_{c_1}=0$. Then $\iota(S^*S^n)$ is not displaceable.
\end{cor}
Since in a stabilization $V\times\mathbb{C}$ all compact subsets are
displaceable, we obtain in particular
\begin{cor}\label{cor:subcrit}
For $n\geq 4$ there does not exist an exact contact embedding of $S^*S^n$
into a subcritical Stein manifold, or more generally, into the
stabilization $(V\times\mathbb{C},\lambda\oplus\lambda_\mathbb{C})$ of an exact convex
symplectic manifold $(V,\lambda)$ satisfying $I_{c_1}=0$.
\end{cor}
{\em Remark. }
If $n$ is even then there are no smooth embeddings of
$S^*S^n$ into a subcritical Stein manifold for topological reasons, see
Appendix~\ref{app:top}. However, at least for $n=3$ and $n=7$ there
are no topological obstructions, see the discussion below.
\medskip
If $(V,J,f)$ is a Stein manifold with $f$ a Morse function,
P.~Biran~\cite{biran} defines the {\em critical coskeleton} as the union
of the unstable manifolds (w.r.~to $\nabla f$) of the critical points
of index $\dim V/2$. It is proved in~\cite{biran} that every compact
subset $A\subset V$ which does not intersect the critical coskeleton
is displaceable. For example, in a cotangent bundle the critical
coskeleton (after a small perturbation) is one given fibre. Thus
Corollary~\ref{cor:non-displaceable} implies
\begin{cor}\label{cor:crit}
Assume that $n\geq 4$ and there exists an exact contact
embedding $\iota$ of $S^*S^n$ into a Stein manifold $(V,J,f)$
satisfying $I_{c_1}=0$. Then
$\iota(\Sigma)$ must intersect the critical coskeleton. In particular,
the image of an exact contact embedding of $S^*S^n$ into a cotangent
bundle $T^*Q$ must intersect every fibre.
\end{cor}
{\em Remark. }
Let $\iota \colon L \to V$ be an {\em exact Lagrangian embedding} of
$L$ into $V$, i.e. such that
$\iota^*\lambda$ is exact. Since by
Weinstein's Lagrangian neighbourhood theorem
\cite[Theorem 3.33]{mcduff-salamon1} a tubular neighbourhood of
$\iota(L)$ can be symplectically identified with a tubular neighbourhood
of the zero section of the cotangent bundle of $L$, we obtain
an exact contact embedding of $S^*L$ into $V$. Thus the last 3
corollaries generalize
corresponding results about exact Lagrangian embeddings. For example,
Corollary~\ref{cor:subcrit} generalizes (for spheres) the well-known
result~\cite{gromov,audin-lalonde-polterovich,biran-cieliebak} that
there exist no exact Lagrangian embeddings into subcritical Stein
manifolds. Corollary~\ref{cor:crit} implies (cf.~\cite{biran}) that
an embedded Lagrangian sphere of dimension $\geq 4$ in a cotangent
bundle $T^*Q$ must intersect every fibre.
\medskip
{\em Remark. }
Let us discuss Corollary~\ref{cor:subcrit} in the cases $n\leq 3$ that
are not accessible by our method of proof. We always equip $\mathbb{C}^n$ with
the canonical 1-form
$\lambda=\frac{1}{2}\sum_{i=1}^n(x_idy_i-y_idx_i)$.
$n=1$: Any embedding of two disjoint circles into $\mathbb{C}$ is an exact
contact embedding of $S^*S^1$, so Corollary~\ref{cor:subcrit} fails in
this case.
$n=2$: In this case Corollary~\ref{cor:subcrit} is true for purely
topological reasons; we present various proofs in
Appendix~\ref{app:top}.
$n=3$: In this case Corollary~\ref{cor:subcrit} is true for
subcritical Stein manifolds and can be
proved using symplectic homology, see the last remark in this section.
\medskip
{\em Example. }
In this example we illustrate that the preceding results about exact
contact embeddings are sensitive to the contact structure. Let $n=3$
or $n=7$. Then $S^*S^n\cong S^n\times
S^{n-1}$ embeds into $\mathbb{R}^{n+1} \times S^{n-1}$.
On the other hand $\mathbb{R}^{n+1} \times S^{n-1}$ is
diffeomorphic to
the subcritical Stein manifold $T^*S^{n-1}\times\mathbb{C}$,
and identifying $S^*S^n$ with a level set in
$T^*S^{n-1}\times \mathbb{C}$ defines a contact structure $\xi$ on $S^*S^n$. Thus
$(S^*S^n,\xi)$ has an exact contact embedding into a subcritical Stein
manifold (in fact into $\mathbb{C}^n$) for $n=3,7$, whereas $(S^*S^7,\xi_{\rm
st})$ admits no such embedding by Corollary~\ref{cor:subcrit}. In
particular, we conclude
\begin{cor}\label{cor:non-diffeo}
The two contact structures $\xi$ and $\xi_{\rm st}$ on $S^*S^7\cong
S^7\times S^6$ described above are not diffeomorphic.
\end{cor}
{\em Remarks. }
(1) Corollary~\ref{cor:non-diffeo} also holds in the case $n=3$, although
our method does not apply there. Indeed, the contact structures $\xi$
and $\xi_{\rm st}$ on $S^*S^n$ for $n=3,7$ are distinguished by their
cylindrical contact homology (see~\cite{ustilovsky}, \cite{yau}).
(2) The contact structures $\xi$ and $\xi_{\rm st}$ on $S^3\times S^2$
are homotopic as almost contact structures, i.e.~as symplectic
hyperplane distributions. This follows simply from the fact
(see e.g.~\cite{geiges-book}) that on 5-manifolds almost contact
structures are classified up to homotopy by their first Chern classes
and $c_1(\xi)=c_1(\xi_{\rm st})=0$. It would be interesting to know
whether $\xi$ and $\xi_{\rm st}$ on $S^7\times S^6$ are also homotopic
as almost contact structures. Here the first obstruction to such a
homotopy vanishes because $c_3(\xi)=c_3(\xi_{\rm st})=0$, but there
are further obstructions in dimensions $7$ and $13$ which remain to be
analysed along the lines of~\cite{morita}.
{\em Remark }(obstructions from symplectic field theory).
Symplectic field theory~\cite{eliashberg-givental-hofer} also yields
obstructions to exact
contact embeddings. For example, by neck stretching along the image of
an exact contact embedding, the following result is proved
in~\cite{cieliebak-mohnke}: {\em Let $(\Sigma^{2n-1},\xi)$ be a closed
contact manifold with $H_1(\Sigma;\mathbb{Z})=0$ which admits
an exact contact embedding into $\mathbb{C}^n$. Then for every nondegenerate
contact form defining $\xi$ there exist closed Reeb orbits of
Conley-Zehnder indices $n+1+2k$ for all integers $k\geq 0$.}\\
Here Conley-Zehnder indices are defined with respect to
trivializations extending over spanning surfaces. This result applies
in particular to the unit cotangent bundle $\Sigma=S^*Q$ of a closed
Riemannian manifold $Q$ with $H_1(Q;\mathbb{Z})=0$. For example, if $Q$
carries a metric of nonpositive curvature then all indices are $\leq
n-1$ and hence $S^*Q$ admits no exact contact embedding into
$\mathbb{C}^n$. On the other hand, any nondegenerate metric on the sphere
$S^n$ has closed geodesics of all indices $n+1+2k$, $k\geq 0$, so this
result does {\em not} exclude exact contact embeddings
$S^*S^n\hookrightarrow\mathbb{C}^n$.
{\em Remark }(obstructions from symplectic homology).
Corollary~\ref{cor:subcrit} for subcritical Stein manifolds can be
proved for all $n\geq 3$ by combining the following five results on
symplectic homology. See~\cite{cieliebak-oancea} for details.
(1) The symplectic homology $SH(V)$ of a subcritical Stein manifold
$V$ vanishes~\cite{cieliebak-chord}.
(2) If $\Sigma\subset V$ is an exact convex hypersurface in an
exact convex symplectic manifold bounding the compact domain $W\subset
V$, then $SH(V)=0$ implies $SH(W)=0$. This follows from an argument by
M.~McLean~\cite{mclean}, based on Viterbo's transfer map~\cite{viterbo}
and the ring structure on symplectic homology.
(3) If $SH(W)=0$, then the positive action part $SH^+(W)$ of
symplectic homology is only nonzero in finitely many degres. This
follows from the long exact sequence induced by the action
filtration.
(4) $SH^+(W)$ equals the non-equivariant linearized contact homology
$NCH(W)$. This is implicit in~\cite{bourgeois-oancea}, see
also~\cite{cieliebak-oancea}.
(5) If $\partial W=S^*S^n$ and $n\geq 3$, then $NCH(W)$ is independent of
the exact filling $W$ and equals the homology of the free loop space
of $S^n$ (modulo the constant loops), which is nonzero in infinitely
many degrees.
\section{Exact contact embeddings}\label{sec:exact}
Let $\Sigma$ be a connected
closed $2n-1$ dimensional manifold. A {\em contact structure}
$\xi$ is a field of hyperplanes $\xi \subset T \Sigma$ such that
there exists a one-form $\alpha$ satisfying
$$\xi = \ker \alpha, \quad \alpha \wedge (d\alpha)^{n-1} >0.$$
The one form $\alpha$ is called a contact form. It is determined by
$\xi$ up to multiplication with a function $f>0$. Given a contact
form $\alpha$ the {\em Reeb vector field} $R$ on $\Sigma$ is defined by
the conditions
$$\iota_R d\alpha=0, \quad \alpha(R)=1.$$
Unit cotangent bundles have a natural contact structure as the following
example shows.
{\em Example. }
For a manifold $N$ we denote by $S^*N$ the oriented projectivization of
its cotangent bundle $T^*N$, i.e. elements of $S^*N$ are
equivalence classes $[v^*]$ of cotangent vectors $v^* \in T^* N$ under
the equivalence relation $v^* \cong w^*$ if there
exists $r>0$ such that $v^*=r w^*$. Denote by $\pi \colon S^* N \to N$
the canonical projection. A contact structure $\xi$ on $S^*N$ is
given by
$$\xi_{[v^*]}=\ker v^* \circ d\pi([v^*]).$$
If $g$ is a Riemannian metric on $N$ then
$S^*N$ can be identified with the space of tangent vectors
of $N$ of length one and the restriction of the Liouville one form
defines a contact form. Observe that the Reeb vector field generates
the geodesic flow on $N$.
If $\iota \colon \Sigma \to V$ is an exact contact embedding, then
$\alpha=\iota^* \lambda$ defines a contact form for the
contact structure $\xi$. One might ask which contact forms $\alpha$
can arise in this way. The following proposition shows that if one
contact form defining the contact structure $\xi$ arises from an exact
contact embedding, then every other contact form defining $\xi$ arises
as well.
\begin{prop} \label{embe}
Assume that $\iota:(\Sigma, \xi)\to (V,\lambda)$ is an exact contact
embedding with $\xi=\ker\iota^*\lambda$. Then for every contact form
$\alpha$ defining the contact structure $\xi$ on $\Sigma$ there exists
a constant $c>0$ and a bounding embedding $\iota_\alpha \colon \Sigma
\to V$ such that $\iota_\alpha^* \lambda=c\alpha$.
\end{prop}
\textbf{Proof of Proposition~\ref{embe}: }
The proof uses the fact that if there exists
an exact contact embedding of a contact manifold
into an exact convex symplectic manifold $(V,\lambda)$ then the
negative symplectization can be embedded. To see this we need two
facts. Recall that the vector field $Y_\lambda$ is defined by the
condition $\lambda=\iota_{Y_\lambda}d \lambda$.
{\em Fact 1: The flow $\phi_\lambda^t$ of $Y_\lambda$
exists for all negative times $t$.}
\\
Indeed, let $x\in V$. Then $x\in
V_k$ for some $k$. As $Y_\lambda$ points out of $V_k$ along $\partial V_k$,
$\phi_\lambda^t(x)\in V_k$ for all $t\leq 0$ and compactness of $V_k$
yields completeness for $t\leq 0$.
\\ \\
{\em Fact 2: The vector field $Y_\lambda$ satisfies
\begin{equation}\label{liouville}
\iota_{Y_\lambda}\lambda=0, \quad L_{Y_\lambda}\lambda=\lambda,
\end{equation}
where $L_{Y_\lambda}$ is the Lie derivative along the vector field
$Y_\lambda$. In particular, the flow $\phi^r_\lambda$ of $Y_\lambda$
satisfies $(\phi^r_\lambda)^*\lambda=e^r \lambda$.}
\\
Indeed, the first equation in (\ref{liouville}) follows directly
from the definition of $Y_\lambda$. To prove the second one
we compute using Cartan's formula
$$L_{Y_\lambda}\lambda=d\iota_{Y_\lambda}\lambda+\iota_{Y_\lambda}d\lambda
=\lambda.$$
\\
Now set $\alpha_0=\iota^* \lambda$ and
consider the symplectic manifold
$\big(\Sigma \times \mathbb{R}_-, d (e^r \alpha_0)\big)$ where $r$
denotes the coordinate on the $\mathbb{R}$-factor. By Fact 1, the flow
$\phi_\lambda^r$ exists for all $r\leq 0$. By Fact 2, the embedding
$$\hat{\iota} \colon \Sigma \times \mathbb{R}_- \to V, \quad
(x,r) \mapsto \phi_\lambda^r(\iota(x))$$
satisfies
$$(\hat{\iota})^* \lambda=e^r \alpha_0.$$
If $\alpha$ is another
contact form on $\Sigma$ which defines the contact structure
$\xi$ then there exists a smooth function
$\rho_\alpha \in C^\infty(\Sigma)$ such that
$$\alpha=e^{\rho_\alpha} \alpha_0.$$
Set $m:=\max_\Sigma\rho_\alpha$ and $c:=e^{-m}$. Then
$$\iota_\alpha \colon \Sigma \to V, \quad
x \mapsto \hat{\iota}(x,\rho_\alpha(x)-m)$$
satisfies $\iota_\alpha^* \lambda=e^{\rho_\alpha-m}\alpha_0=c\,e^{\rho_\alpha}\alpha_0=c\,\alpha$
and is thus the required contact embedding for $\alpha$.
This proves the proposition. \hfill $\square$
{\em Remark. }
If the vector field $Y_\lambda$ is complete, then the preceding proof
yields a symplectic embedding of the whole symplectization
$\bigl(\Sigma\times\mathbb{R},d(e^r\alpha_0)\bigr)$ into $(V,\omega)$.
\section{Floer homology for Rabinowitz's action
functional}\label{sec:floer}
In this section we construct the Floer homology for Rabinowitz's action
functional defined in the introduction and prove Theorem~\ref{well}
and Theorem~\ref{zero}. We assume that the reader is familiar
with the constructions in Floer theory which can be found in
Floer's original papers \cite{floer1,floer2, floer3, floer4, floer5} or
in Salamon's lectures \cite{salamon}. The finite dimensional case
of Morse theory is treated in the book of Schwarz \cite{schwarz}.
Throughout this section we maintain the following setup:
\begin{itemize}
\item $(V,\lambda)$ is a complete exact convex symplectic manifold of
bounded topology.
\item $\Sigma\subset V$ is an exact convex hypersurface with contact
form $\alpha$ and defining Hamiltonian $H$.
\end{itemize}
Our sign conventions for Floer homology are as follows:
\begin{itemize}
\item The {\em Hamiltonian vector field} $X_H$ is defined by
$dH=-i_{X_H}\omega$, where $\omega=d\lambda$.
\item An almost complex structure $J$ on $V$ is {$\omega$-compatible} if
$\omega(\cdot,J\cdot)$ defines a Riemannian metric. Thus
the gradient with respect to this metric is related to the
symplectic vector field by $X_H=J\nabla H$.
\item Floer homology is defined using the {\em positive} gradient flow
of the action functional $\mathcal{A}^H$.
\end{itemize}
The action functional $\mathcal{A}^H$ is invariant under
the $S^1$-action on $\mathscr{L}\times \mathbb{R}$ given
by $t_*(v(\cdot),\eta) = (v(t+\cdot),\eta)$. In particular,
the action functional $\mathcal{A}^H$ will never be Morse.
However, generically it is {\em Morse-Bott}, i.e.
its critical set is a manifold whose tangent space is the kernel of
the Hessian of the action functional.
We make the following nondegeneracy assumption on the Reeb flow
$\phi_t$ of the contact form $\alpha$ on $\Sigma$.
\begin{itemize}
\item[(A)] The closed Reeb orbits of $(\Sigma,\alpha)$ are of {\em
Morse-Bott type}, i.e.~for each $T\in\mathbb{R}$ the set
$\mathcal{N}_T\subset\Sigma$ formed by the $T$-periodic Reeb orbits is a
closed submanifold, the rank of $d\alpha|_{\mathcal{N}_T}$ is locally
constant, and $T_p\mathcal{N}_T=\ker(T_p\phi_T-{\rm id})$ for all $p\in\mathcal{N}_T$.
\end{itemize}
If the assumption (A) does not hold we consider a hypersurface
close by. Note that the contact condition is an open condition and
the assumption (A) is generically satisfied. Since we prove that
our homology is invariant under homotopies we can assume without
loss of generality that (A) holds.
If (A) is satisfied, then the action functional $\mathcal{A}^H$
is Morse-Bott.
{\em Remark. }Generically, we can even achieve that all $T$-periodic
Reeb orbits $\gamma$ with $T\neq 0$ are {\em nondegenerate},
i.e.~the linearization $T_p\phi_T:\xi_p\to\xi_p$ at $p=\gamma(0)$ does
not have 1 in its spectrum. In this case the critical manifold of
$\mathcal{A}^H$ consists of a union of
circles for each nonconstant Reeb orbit and a copy of
the hypersurface $\Sigma$ for the constant solutions, i.e.\,critical
points with $\eta=0$. Moreover, observe that a nonconstant
Reeb orbit gives rise to infinitely many of them because
it can be repeatedly passed and also be passed in the backward direction.
There are several ways to deal with Morse-Bott situations in Floer
homology. One possibility is to choose an additional small perturbation
to get a Morse situation. This was carried out by Pozniak \cite{pozniak},
where it was also shown that the local Floer homology near each
critical manifold coincides with the Morse homology of the critical
manifold. Another possibility is to choose an additional Morse function
on the critical manifold. The chain complex is then generated by
the critical points of this Morse function while the boundary operator
is defined by counting flow lines with cascades. This approach
was carried out by the second named author in \cite{frauenfelder}.
Cascades are finite energy gradient flow lines of the action functional
$\mathcal{A}^H$. In the Morse-Bott case the finite energy
assumption is equivalent to assuming that the gradient flow line
converges at both ends exponentially to a point on the critical manifold.
In order to prove that the Floer homology is well defined one
has to show that the moduli spaces of cascades are compact modulo breaking.
There are three difficulties one has to solve.
\begin{itemize}
\item An $L^\infty$-bound on the loop $v \in \mathscr{L}$.
\item An $L^\infty$-bound on the Lagrange multiplier $\eta \in \mathbb{R}$.
\item An $L^\infty$-bound on the derivatives of the loop $v$.
\end{itemize}
Although the first and the third point are nontrivial,
they are standard problems in Floer theory which one knows how to deal with. The
$L^\infty$-bound for the loop follows from the convexity assumption on
$V$ and the derivatives can be controlled since
our symplectic manifold is exact and hence
there is no bubbling of pseudo-holomorphic spheres.
The new feature is the bound on
the Lagrange multiplier $\eta$. We will explain in detail how this can be
achieved. It will be essential that our hypersurface is convex.
\\ \\
We first explain the metric on the space $\mathscr{L}\times \mathbb{R}$
and deduce from that the equation for the cascades.
The metric on $\mathscr{L}\times \mathbb{R}$ is the product metric
of the standard metric on $\mathbb{R}$ and a metric on
$\mathscr{L}$ coming from a family of $\omega$-compatible almost
complex structures $J_t$ on $V$.
For such a family of $\omega$-compatible almost complex structures
$J_t$ we define the metric $g_J$ on $\mathscr{L}\times \mathbb{R}$
for $(v,\eta) \in \mathscr{L}\times \mathbb{R}$ and
$(\hat{v}_1,\hat{\eta}_1), (\hat{v}_2,\hat{\eta}_2) \in
T_{(v,\eta)}(\mathscr{L}\times \mathbb{R})$ by
$$g_J\big((\hat{v}_1,\hat{\eta}_1),(\hat{v}_2,\hat{\eta}_2)\big)
=\int_0^1 \omega(\hat{v}_1,J_t(v)\hat{v}_2) dt +
\hat{\eta}_1 \cdot \hat{\eta}_2.$$
The gradient of $\mathcal{A}^H$ with respect to this metric is given by
$$
\nabla \mathcal{A}^H=\nabla_{g_J} \mathcal{A}^H =
\left(\begin{array}{c}
-J_t(v)\bigl(\partial_t v-\eta X_H(v)\bigr)\\
-\int_0^1 H(v(t)) dt
\end{array}\right)\;.
$$
Thus gradient flow lines of $\nabla \mathcal{A}^H$ are solutions
$(v,\eta) \in C^\infty(\mathbb{R}\times S^1,V \times \mathbb{R})$
of the following problem
\begin{equation}\label{flowline}
\left. \begin{array}{c}
\partial_s v+J_t(v)(\partial_t v-\eta X_H(v))=0\\
\partial_s \eta+\int_0^1 H(v(s,t))dt=0.
\end{array}
\right\}
\end{equation}
The following proposition is our main tool to bound the Lagrange
multiplier $\eta$.
\begin{prop}\label{mainest}
There exists $\epsilon>0$ such that for every $M > 0$
there exists a constant $c_M<\infty$ such that
$$
\left\{ \begin{array}{c}
||\nabla\mathcal{A}^H(v,\eta)||\leq\epsilon \\
|\mathcal{A}^H(v,\eta)|\leq M \\
\end{array}
\right\}
\quad \Longrightarrow \quad |\eta| \leq c_M.
$$
\end{prop}
We first prove a lemma which says that the action value of a critical
point of $\mathcal{A}^H$, i.e.\,a Reeb orbit,
is given by the period.
\begin{lemma}\label{period}
Let $(v,\eta) \in \mathrm{crit}(\mathcal{A}^H)$, then
$\mathcal{A}^H(v,\eta)=\eta$.
\end{lemma}
\textbf{Proof: } Recall from (\ref{crit}) that a critical point $(v,\eta)$
satisfies $\partial_t v=\eta R(v)$ with $v(t) \in \Sigma$ for all $t$, so that
in particular $H(v(t))=0$. Inserting this into $\mathcal{A}^H$ we
compute
$$\mathcal{A}^H(v,\eta)=\eta\int_0^1\lambda(v)R(v)dt=\eta
\int_0^1\alpha(v)R(v)dt=\eta.$$
This proves the lemma. \hfill $\square$
\\
\textbf{Proof of Proposition~\ref{mainest}: }
We prove the proposition in three steps. The first step is an
elaboration of the observation in Lemma~\ref{period}.
\textbf{Step 1: }\emph{There exists $\delta>0$ and a constant
$c_\delta<\infty$ with the following property. For every
$(v,\eta) \in \mathscr{L}\times \mathbb{R}$ such that
$v(t) \in U_\delta=H^{-1}\big((-\delta,\delta)\big)$ for every
$t \in \mathbb{R}/\mathbb{Z}$, the following estimate holds:
$$|\eta| \leq 2|\mathcal{A}^H(v,\eta)|+c_\delta||
\nabla\mathcal{A}^H(v,\eta)||.$$
}
Choose $\delta>0$ so small that
$$\lambda(x)X_H(x) \geq \frac{1}{2}+\delta,
\quad x \in U_\delta.$$
This is possible since $\lambda(X_H)=\alpha(R)=1$ on $\Sigma$.
Set
$$c_\delta=2||\lambda|_{U_\delta}||_\infty.$$
We estimate
\begin{eqnarray*}
|\mathcal{A}^H(v,\eta)|&=&
\Bigg|\int_0^1\lambda(v)\partial_t v\,dt-\eta\int_0^1 H(v(t))dt\Bigg|\\
&=&\Bigg| \eta \int_0^1\lambda(v)X_H(v)dt
+\int_0^1\lambda(v)\big(\partial_t v-\eta X_H(v)\big)dt
-\eta\int_0^1 H(v(t))dt\Bigg| \\
&\geq&
\Bigg| \eta \int_0^1\lambda(v)X_H(v)dt\Bigg|
-\Bigg|\int_0^1\lambda(v)\big(\partial_t v-\eta X_H(v)\big)dt\Bigg|\\
& &-\Bigg|\eta\int_0^1 H(v(t))dt\Bigg| \\
&\geq& |\eta|\bigg(\frac{1}{2}+\delta\bigg)-
\frac{c_\delta}{2}||\partial_t v-\eta X_H(v)||_1
-|\eta|\delta\\
&\geq& \frac{|\eta|}{2}-\frac{c_\delta}{2}||\partial_t v-\eta X_H(v)||_2\\
&\geq& \frac{|\eta|}{2}-\frac{c_\delta}{2}||\nabla \mathcal{A}^H(v,\eta)||.
\end{eqnarray*}
This proves Step 1.
\textbf{Step 2: }\emph{For each $\delta>0$ there exists
$\epsilon=\epsilon(\delta)>0$
such that if $||\nabla\mathcal{A}^H(v,\eta)||\leq\epsilon$, then
$v(t) \in U_\delta$ for every $t \in [0,1]$.}
Denote by $\Gamma_\delta$ the set of smooth paths
$\gamma \in C^\infty([0,1],U_\delta)$ such that
$|H(\gamma(0))|=\delta$ and $|H(\gamma(1))|=\delta/2$.
For each $x \in U_\delta$ there is a splitting
$T_x V=T_x H^{-1}(H(x))\oplus T_x^\perp H^{-1}(H(x))$.
We denote by $\pi_x$ the projection onto the second factor.
We introduce the number $\epsilon_0=\epsilon_0(\delta)$ by
$$\epsilon_0=\inf_{\gamma \in \Gamma_\delta}\bigg\{\int_0^1
||\pi_{\gamma(t)}(\dot{\gamma}(t))||dt\bigg\}>0.$$
Now assume that $v \in \mathscr{L}$ has the property that
there exist $t_0,t_1 \in \mathbb{R}/\mathbb{Z}$ such that
$|H(v(t_0))| \geq \delta$ and $|H(v(t_1))| \leq \delta/2$.
We claim that
\begin{equation}\label{pen}
||\nabla \mathcal{A}^H(v,\eta)|| \geq \epsilon_0
\end{equation}
for every $\eta \in \mathbb{R}$.
To see that we estimate
\begin{eqnarray*}
||\nabla \mathcal{A}^H(v,\eta)||
&\geq& \sqrt{\int_0^1||\partial_t v-\eta X_H(v)||^2 dt}\\
&\geq& \sqrt{\int_0^1||\pi_v(\partial_t v-\eta X_H(v))||^2 dt}\\
&=& \sqrt{\int_0^1||\pi_v(\partial_t v)||^2 dt}\\
&\geq& \int_0^1||\pi_v(\partial_t v)||dt\\
&\geq& \epsilon_0.
\end{eqnarray*}
This proves (\ref{pen}).
\\
Now assume that $v \in \mathscr{L}$ has the property that
$v(t) \in V \setminus U_{\delta/2}$ for every $t \in [0,1]$.
Then $|H(v(t))| \geq \delta/2$ for all $t$, and by continuity $H(v(t))$
cannot change sign, so we estimate
\begin{equation}\label{npen}
||\nabla \mathcal{A}^H(v,\eta)||
\geq \Bigg|\int_0^1 H(v(t))dt \Bigg| \geq \frac{\delta}{2}
\end{equation}
for every $\eta \in \mathbb{R}$.
From (\ref{pen}) and (\ref{npen}) Step 2 follows with
$\epsilon<\min\{\epsilon_0, \delta/2\}$.
\textbf{Step 3: }\emph{We prove the proposition.}
Combining Step 1 and Step 2 the proposition follows with
$c_M=2M+\epsilon c_\delta$. \hfill $\square$
\\ \\
Proposition~\ref{mainest} allows us to control the
size of the Lagrange multiplier $\eta$. Our first corollary
considers the case of gradient flow lines.
\begin{cor}\label{notime}
Assume that $(v,\eta) \in C^\infty(\mathbb{R} \times S^1,V)\times
C^\infty(\mathbb{R},\mathbb{R})$
is a gradient flow line of $\nabla \mathcal{A}^H$ which satisfies
$\lim_{s \to \pm \infty}(v,\eta)(s,\cdot)=(v^\pm,\eta^\pm)(\cdot)
\in \mathrm{crit}(\mathcal{A}^H)$,
where the limit is uniform in the $t$-variable. Then the
$L^\infty$-norm of $\eta$ is bounded uniformly in terms of
a constant $c<\infty$ which only depends on
$\mathcal{A}^H(v^-,\eta^-)$ and $\mathcal{A}^H(v^+,\eta^+)$.
\end{cor}
To prove invariance of our Floer homology under homotopies we
also have to consider the case of $s$-dependent action functionals.
Let $H^-, H^+ \in C^\infty(V)$ be defining Hamiltonians for two
exact convex hypersurfaces. Consider the
smooth family of $s$-dependent Hamiltonians $H_s$ defined
as
$$H_s=\beta(s)H^++(1-\beta(s))H^-$$
where $\beta \in C^\infty(\mathbb{R},[0,1])$ is a smooth
monotone increasing cutoff function such that
$\beta(s)=1$ for $s \geq 1$ and $\beta(s)=0$ for $s \leq 0$.
\begin{cor}\label{time}
If $\max_{x \in V}|H^+(x)-H^-(x)|$ is small enough, then for each
gradient flow line $(v,\eta)$ of the $s$-dependent action functional
$\mathcal{A}^{H_s}$ which converges at both ends the Lagrange multiplier
$\eta$ is uniformly bounded in terms of the action values at
the end points.
\end{cor}
\textbf{Proof of Corollary~\ref{notime}: }
Let $\epsilon$ be as in Proposition~\ref{mainest}.
For $\sigma \in \mathbb{R}$ let $\tau(\sigma) \geq 0$ be defined by
$$\tau(\sigma):=\inf\big\{ \tau \geq 0: ||\nabla\mathcal{A}^H\big
((v,\eta)(\sigma+\tau)\big)||
< \epsilon\big\}.$$
We abbreviate the energy of the flow line $(v,\eta)$ by
$$
E:=\mathcal{A}^H(v^+,\eta^+)-\mathcal{A}^H
(v^-,\eta^-).
$$
We claim that
\begin{equation}\label{away}
\tau(\sigma) \leq \frac{E}{\epsilon^2}.
\end{equation}
To see this we estimate
\begin{eqnarray*}
E&=&\mathcal{A}^H(v^+,\eta^+)-\mathcal{A}^H(v^-,\eta^-)\\
&=&\int_{-\infty}^\infty\frac{d}{ds}\mathcal{A}^H(v,\eta)ds\\
&=&\int_{-\infty}^\infty d\mathcal{A}^H(v,\eta)
\partial_s(v,\eta)ds\\
&=&\int_{-\infty}^\infty \langle \nabla \mathcal{A}^H(v,\eta),
\partial_s(v,\eta)\rangle ds\\
&=&\int_{-\infty}^\infty||\nabla \mathcal{A}^H(v,\eta)||^2 ds\\
&\geq&\int_{\sigma}^{\sigma+\tau(\sigma)}||\nabla \mathcal{A}^H(v,\eta)||^2 ds\\
&\geq&\tau(\sigma)\epsilon^2.
\end{eqnarray*}
This implies (\ref{away}).
\\
We set
$$M:=\max\{|\mathcal{A}^H(v^+,\eta^+)|,
|\mathcal{A}^H(v^-,\eta^-)|\}.$$
Note that since the action increases
along a gradient flow line we have
$$\big|\mathcal{A}^H\big((v,\eta)(\sigma+\tau(\sigma))\big)\big| \leq
M\text{ for all }\sigma\in\mathbb{R}.$$
We deduce from Proposition~\ref{mainest} and the definition of
$\tau(\sigma)$ that
\begin{equation}\label{eta}
\big|\eta\big(\sigma+\tau(\sigma)\big)\big| \leq c_M.
\end{equation}
We set
\begin{equation}\label{cH}
c_H:=\max_{x \in V}|H(x)|.
\end{equation}
We estimate using (\ref{away}), (\ref{eta}), and (\ref{cH})
\begin{eqnarray*}
|\eta(\sigma)|&\leq&\big|\eta\big(\sigma+\tau(\sigma)\big)\big|+
\int_{\sigma}^{\sigma+\tau(\sigma)}|\partial_s \eta(s)|ds\\
&=&\big|\eta\big(\sigma+\tau(\sigma)\big)\big|+
\int_{\sigma}^{\sigma+\tau(\sigma)}\bigg|\int_0^1H(v(s,t))dt\bigg|ds\\
&\leq&c_M+c_H \tau(\sigma)\\
&\leq&c_M+
\frac{c_H E}
{\epsilon^2}.
\end{eqnarray*}
The right hand side is independent of $\sigma$ and hence we get
\begin{equation}\label{bound}
||\eta||_\infty \leq c_M+\frac{c_H E}{\epsilon^2}.
\end{equation}
This proves the corollary. \hfill $\square$
\\ \\
\textbf{Proof of Corollary~\ref{time}: }In the $s$-dependent case we
define the energy as
$$
E:=\mathcal{A}^{H^+}(v^+,\eta^+)-\mathcal{A}^{H^-}(v^-,\eta^-)
-\int_{-\infty}^\infty \big(\partial_s \mathcal{A}^{H_s}\big)(v,\eta)
ds,
$$
where
$$
\big(\partial_s \mathcal{A}^{H_s}\big)(v,\eta) =
-\eta\int_0^1\frac{\partial H_s}{\partial s}(v)dt = -\eta\int_0^1\beta'(s)
(H^+-H^-)(v)dt.
$$
If we set $c_H:=\max\{c_{H^+},c_{H^-}\}$ and
$\epsilon:=\min\{\epsilon(H^+),\epsilon(H^-)\}$ then (\ref{bound})
can be deduced as in the time-independent case.
However, $E$ is a priori not bounded anymore because of the
term containing the $s$-derivatives of the action functional.
We use the abbreviations
$$\Delta:=\mathcal{A}^{H^+}(v^+,\eta^+)-\mathcal{A}^{H^-}(v^-,\eta^-)$$
and
$$\delta:=\max_{x \in V}|H^+(x)-H^-(x)|$$
and estimate
\begin{eqnarray*}
E&=&\Delta -\int_{-\infty}^\infty (\partial_s \mathcal{A}^{H_s})(v,\eta) ds\\
&=&\Delta +\int_{-\infty}^\infty \beta'(s)\eta(s)\Bigg(\int_0^1
(H^+-H^-)\bigl(v(s,t)\bigr)dt\Bigg)ds\\
&\leq&\Delta +\delta||\eta||_\infty.
\end{eqnarray*}
Inserting this estimate into (\ref{bound}) we obtain
$$||\eta||_\infty \leq c_M+\frac{c_H \Delta }{\epsilon^2}+
\frac{c_H \delta}{\epsilon^2}||\eta||_\infty.$$
Now if
$$\delta<\frac{\epsilon^2}{c_H}$$
we obtain the following uniform $L^\infty$-bound for $\eta$
$$||\eta||_\infty \leq \frac{\epsilon^2 c_M+c_H \Delta }
{\epsilon^2-c_H \delta}.$$
This proves Corollary~\ref{time}. \hfill $\square$
\\ \\
\textbf{Proof of Theorem~\ref{well}: }
As we pointed out at the beginning of this section, we may assume without
loss of generality that $\mathcal{A}^H$ is Morse-Bott. Choose
an additional Morse function $h$ on $\mathrm{crit}(\mathcal{A}^H)$.
The Floer chain complex is defined in the following
way. $CF(\mathcal{A}^H,h)$ is the
$\mathbb{Z}_2$-vector space consisting of formal sums
$$\xi=\sum_{c \in \mathrm{crit}(h)} \xi_c c$$
where the coefficients $\xi_c \in \mathbb{Z}_2$ satisfy the finiteness
condition
\begin{equation}\label{novikov}
\#\{c \in \mathrm{crit}(h): \xi_c \neq 0,\,\,
\mathcal{A}^H(c) \leq \kappa\}<\infty
\end{equation}
for every $\kappa \in \mathbb{R}$.
To define the boundary operator, we require
some compatibility condition of the family of $\omega$-compatible
almost complex structures $J_t$ with the convex structure of $V$
at infinity in order
to make sure that our cascades remain in a compact subset of $V$.
As we remarked in the introduction, completeness implies that there
exists a contact manifold $(M,\alpha_M)$
\footnote{Be careful! Do not confuse the contact manifolds
$(M,\alpha_M)$ and $(\Sigma, \alpha)$.}
such that a neighbourhood of infinity of
the symplectic manifold $(V,\omega)$ can be symplectically identified
with
$(M \times \mathbb{R}_+, d(e^r \alpha_M))$, where
$r$ refers to the coordinate on
$\mathbb{R}_+=\{r \in \mathbb{R}: r \geq 0\}$. We may assume
without loss of generality that $H$ is constant on
$M \times \mathbb{R}_+$.
We require the
following conditions on $J_t$ for every $t \in [0,1]$.
\begin{itemize}
\item For each $x \in M$ we have
$J_t(x)\frac{\partial}{\partial r}=R_M$, where $R_M$
is the Reeb vector field on $(M,\alpha_M)$.
\item $J_t$ leaves the kernel of $\alpha_M$ invariant for
every $x \in M$.
\item $J_t$ is invariant under the local
half flow $(x,0) \mapsto (x,r)$
for $(x,r) \in M \times \mathbb{R}_+$.
\end{itemize}
We choose further an additional Riemannian metric
$g_c$ on the critical manifold $\mathrm{crit}(\mathcal{A}^H)$. For
two critical points $c_-, c_+ \in \mathrm{crit}(h)$ we consider
the moduli space of {\em gradient flow lines with cascades}
$\mathcal{M}_{c_-,c_+}(\mathcal{A}^H,h,J,g_c)$ as defined in
Appendix~\ref{app:casc}.
For generic choice of $J$ and $g_c$ this moduli space is a smooth
manifold. We claim that its zero dimensional component
$\mathcal{M}^0_{c_-,c_+}(\mathcal{A}^H,h,J,g_c)$ is
actually compact and hence a finite set. To see that we have to prove
that cascades are compact modulo breaking. Since the support of
$X_H$ lies outside of $M \times \mathbb{R}_+$, the first component of
a gradient flow line which enters $M \times \mathbb{R}_+$
will just satisfy the pseudo-holomorphic
curve equation by (\ref{flowline}).
By our choice of the family of almost complex structures the
convexity condition guarantees that it cannot touch any level set $M
\times \{r\}$ from inside (see~\cite{mcduff}), and since its asymptotics
lie outside of $M \times \mathbb{R}_+$ it has to
remain in the compact set $V \setminus M \times \mathbb{R}_+$
all the time. This gives us
a uniform $L^\infty$-bound on the first component.
Corollary~\ref{notime} implies that the second component remains
bounded, too. Since the symplectic form $\omega$ is exact there are
no nonconstant $J$-holomorphic spheres. This excludes
bubbling and hence the derivatives
of solutions of (\ref{flowline}) can be controlled, see \cite{mcduff-salamon}.
This proves the claim.
\\
We now set
$$n(c_-,c_+)=\#\mathcal{M}^0_{c_-,c_+}(\mathcal{A}^H,h,J,g_c) \,\,
\mathrm{mod}\,\,2\in
\mathbb{Z}_2$$
and define the Floer boundary operator
$$\partial \colon CF(\mathcal{A}^H,h) \to CF(\mathcal{A}^H,h)$$
as the linear extension of
$$\partial c=\sum_{c' \in \mathrm{crit}(h)}n(c,c')c'$$
for $c \in \mathrm{crit}(h)$. Again using the fact that
the moduli spaces of cascades are compact modulo breaking, a well-known
argument in Floer theory shows that
$$\partial^2=0.$$
We define our Floer homology as usual by
$$HF(\mathcal{A}^H,h,J,g_c)=\frac{\mathrm{ker}\partial}
{\mathrm{im}\partial}.$$
Standard arguments show that $HF(\mathcal{A}^H,h,J,g_c)$ is independent
of the choices of $h$, $J$, and $g_c$ up to canonical isomorphism and
hence $HF(\mathcal{A}^H)$ is well defined. To prove that
it is invariant under homotopies of $H$ we use Corollary~\ref{notime}
to show that the Floer homotopies which are defined by counting solutions
of the $s$-dependent gradient equation are well defined. This finishes
the proof of Theorem~\ref{well}. \hfill $\square$
\\ \\
\textbf{Proof of Theorem~\ref{zero}: }
We consider the following perturbation of $\mathcal{A}^H$. Let
$F\in C^\infty(\mathbb{R}/\mathbb{Z}\times V)$ be a smooth map
such that $F|_{(0,1)\times V}$ has compact support.
We use the notation $F_t=F(t, \cdot)$ for $t \in \mathbb{R}/\mathbb{Z}$.
Denote by $\phi^t_H$ and $\phi_F^t$ the flows of the Hamiltonian vector
fields of $H$ and $F_t$, respectively. We define
$$\mathcal{A}^H_F \colon \mathscr{L} \times \mathbb{R} \to \mathbb{R}$$
by
$$\mathcal{A}^H_F(v,\eta):=\mathcal{A}^H(v,\eta)-
\int_0^1 F_t \big(\phi^{-t \eta}_H(v(t))\big)dt.$$
We further denote by
$$\mathfrak{S}(X_H):=\mathrm{cl}\{x \in V: X_H(x) \neq 0\}$$
the support of the Hamiltonian vector field of $H$.
Theorem~\ref{zero} follows from the following Proposition and a
standard Floer homotopy argument. \hfill $\square$
\begin{prop}\label{nocrit}
Assume that $\phi^1_F(\mathfrak{S}(X_H))\cap \mathfrak{S}(X_H)=\emptyset$.
Then there exists $\tilde{F} \in C^\infty(\mathbb{R}/\mathbb{Z}\times V)$
such that $\tilde{F}|_{(0,1)\times V}$ has compact support and there
are no critical points of $\mathcal{A}^H_{\tilde{F}}$.
\end{prop}
\textbf{Proof: }
Critical points of $\mathcal{A}^H_F$ are solutions
of the problem
$$
\left. \begin{array}{c}
\partial_t v(t)= \Big(\phi_H^{-\eta t}\Big)^*
X_{F_t}(v(t))+\eta X_H(v(t)),
\quad t \in \mathbb{R}/\mathbb{Z}, \\
\int_0^1 \big(t\{H,F_t\}(\phi^{-t\eta}_H(v(t)))+H(v(t))\big)dt=0
\end{array}
\right\}
$$
with the Poisson bracket given by
$\{F,H\}=dF(X_H)$.
Define $w \in C^\infty([0,1],V)$ by
$$w(t):=\phi_H^{-t \eta}(v(t)), \quad t \in [0,1].$$
Then $w$ satisfies
\begin{equation}\label{w}
\left. \begin{array}{c}
\partial_t w(t)=X_{F_t}(w(t)), \quad t \in [0,1],\\
w(1)=\phi^{-\eta}_H(w(0)), \\
\int_0^1 \big(t\{H,F_t\}(w(t))+H(w(t))\big)dt=0.
\end{array}
\right\}
\end{equation}
For a smooth map $\rho \in C^\infty([0,1],[0,1])$
satisfying $\rho(0)=0$ and for
$t \in [0,1]$ set
$$F^\rho_t:=\dot{\rho}(t)F_{\rho(t)} \in C^\infty(V).$$
Note that
$$\phi^t_{F^\rho}=\phi^{\rho(t)}_F.$$
Equations (\ref{w}) for $F$ replaced by $F^\rho$ become
\begin{equation}\label{wnew}
\left. \begin{array}{c}
w(t)=\phi_{F}^{\rho(t)}(w(0)), \\
w(1)=\phi^{-\eta}_H(w(0)), \\
\int_0^1 \big(t\dot{\rho}(t)\{H,F_{\rho(t)}\}(w(t))+H(w(t))\big)dt=0.
\end{array}
\right\}
\end{equation}
Since the Hamiltonian vector field of $H$ has compact support, there
exists a constant $c$ such that
$$\max_{\substack{x \in V,\\ t \in \mathbb{R}/\mathbb{Z}}}|\{H,F_t\}(x)|\leq c,
\quad \max_{x \in V}|H(x)| \leq c.$$
Using again that the support of the Hamiltonian
vector field is compact together with the fact that
$0$ is a regular value of $H$ we conclude that there exists $\delta>0$
such that
$$\min_{x \in V \setminus \mathfrak{S}(X_H)}|H(x)| \geq \delta.$$
Choose an $\epsilon>0$ such that
$$\epsilon<\frac{\delta}{2c+\delta}$$
and a smooth function $\rho_\epsilon \in C^\infty([0,1],[0,1])$ such that
$$
\left. \begin{array}{c}
\rho_\epsilon(0)=0\\
\rho_\epsilon(t)=1, \quad t \geq \epsilon, \\
\dot{\rho}_\epsilon(t) \geq 0, \quad t \in [0,1].\\
\end{array}
\right\}
$$
Proposition~\ref{nocrit} follows now with
$\tilde{F}=F^{\rho_\epsilon}$ in view of the lemma below.
\hfill $\square$
\begin{lemma}
Assume that $\phi^1_F(\mathfrak{S}(X_H))\cap \mathfrak{S}(X_H)=\emptyset$.
Then there are no solutions of (\ref{wnew}) for $\rho=\rho_\epsilon$.
\end{lemma}
\textbf{Proof: }
Let $w$ be a solution of (\ref{wnew}). We first claim that
\begin{equation}\label{claimw}
w(0) \notin \mathfrak{S}(X_H).
\end{equation}
We argue by contradiction and assume that
$w(0) \in \mathfrak{S}(X_H)$. It follows from the first
equation in (\ref{wnew}) and the assumption of the lemma that
$$w(1)=\phi^{\rho(1)}_F(w(0))=\phi^1_F(w(0))
\notin \mathfrak{S}(X_H).$$
The definition of $\mathfrak{S}(X_H)$ implies that
$$\phi^{\eta}_H(w(1))=w(1).$$
Combining the above two equations together with the second equation
in (\ref{wnew}) we conclude
$$w(0)=\phi^{\eta}_H(w(1))=w(1) \notin \mathfrak{S}(X_H).$$
This contradicts the assumption that $w(0) \in \mathfrak{S}(X_H)$
and proves (\ref{claimw}).
\\
Combining (\ref{claimw}) with the second equation in (\ref{wnew})
we obtain
\begin{equation}\label{w2}
w(1)=w(0) \notin \mathfrak{S}(X_H).
\end{equation}
Using the definition of $\rho_\epsilon$, the first equation
in (\ref{wnew}), and (\ref{w2}) we get
\begin{equation}\label{w3}
w(t)=\phi^{\rho_\epsilon(t)}_F (w(0))=
\phi^1_F(w(0))=w(1) \notin \mathfrak{S}(X_H), \quad t \geq \epsilon.
\end{equation}
Using the definition of $\mathfrak{S}(X_H)$ and of
$\delta$ we deduce that
\begin{equation}\label{w4}
|H(w(t))|\geq \delta, \quad \{H,F_{\rho_\epsilon(t)}\}
(w(t))=0, \quad t\geq \epsilon.
\end{equation}
Using (\ref{w4}), the definition of $c$ and $\epsilon$, and
the properties of $\rho_\epsilon$ we
estimate
\begin{align*}
&\Bigg|\int_0^1 \big(t\dot{\rho}_\epsilon(t)\{H,F_{\rho_\epsilon(t)}\}
(w(t))+H(w(t))\big)dt\Bigg|\cr
&\geq
-\Bigg|\int_0^\epsilon \big(t\dot{\rho}_\epsilon(t)\{H,F_{\rho_\epsilon(t)}\}
(w(t))+H(w(t))\big)dt\Bigg|\cr
& \ \
+\Bigg|\int_\epsilon^1 \big(t\dot{\rho}_\epsilon(t)\{H,F_{\rho_\epsilon(t)}\}
(w(t))+H(w(t))\big)dt\Bigg|
\cr
&\geq
-\Bigg|\int_0^\epsilon (\epsilon c \dot{\rho}_\epsilon(t)+c)dt\Bigg|+
\delta(1-\epsilon)\cr
&=-2c\epsilon+\delta(1-\epsilon)\cr
&>0.
\end{align*}
This contradicts the third equation in (\ref{wnew}). Hence there
are no solutions of (\ref{wnew}),
which proves the lemma. \hfill $\square$
\section{Index computations}\label{sec:index}
In this section we prove Theorem~\ref{compute}. The proof comes down
to the computation of the indices of generators of the Floer chain
complex in the case that $\Sigma$ is the unit cotangent bundle of the
sphere.
We first have to study the question under which conditions
$HF(\Sigma,V)$ has a $\mathbb{Z}$-grading. Throughout this
section, we make the following assumptions:
\begin{itemize}
\item[(A)]Closed Reeb orbits on $(\Sigma,\alpha)$ are of Morse-Bott
type (see Section~\ref{sec:floer}).
\item[(B)]$\Sigma$ is simply connected and $V$ satisfies $I_{c_1}=0$.
\end{itemize}
Under these assumptions the (transversal) Conley-Zehnder index of a
Reeb orbit $v \in C^\infty(S^1,\Sigma)$ can be defined in the
following way. Since $\Sigma$ is simply connected, we can find a
map $\bar{v} \in C^\infty(D, \Sigma)$ on
the unit disk $D=\{z \in \mathbb{C}: |z| \leq 1\}$ such that
$\bar{v}(e^{2\pi i t})=v(t)$. Choose a (homotopically unique)
symplectic trivialization of the symplectic vector bundle
$(\bar{v}^*\xi,\bar{v}^*d\alpha)$. The linearized flow of the
Reeb vector field along $v$ defines a path in the
group $Sp(2n-2,\mathbb{R})$ of symplectic matrices. The Maslov index
of this path \cite{robbin-salamon0} is the {\em (transversal)
Conley-Zehnder index} $\mu_{CZ}\in\frac{1}{2}\mathbb{Z}$. It is independent
of the choice of the disk $\bar{v}$ due to the assumption $I_{c_1}=0$
on $V$.
Let $\mathcal{M}$ be the moduli space of all finite energy gradient
flow lines of the action functional
$\mathcal{A}^H$. Since $\mathcal{A}^H$ is Morse-Bott
every finite energy gradient flow line
$(v,\eta) \in C^\infty(\mathbb{R}\times S^1,V) \times
C^\infty(\mathbb{R},\mathbb{R})$ converges exponentially
at both ends to critical points
$(v^\pm,\eta^\pm) \in \mathrm{crit}(\mathcal{A}^H)$ as the flow parameter
goes to $\pm \infty$. The linearization of the gradient flow
equation along any path $(v,\eta)$ in $\mathscr{L}\times \mathbb{R}$ which
converges exponentially to the critical points of $\mathcal{A}^H$
gives rise to an operator $D_{(v,\eta)}^{\mathcal{A}^H}$.
For suitable weighted Sobolev spaces (the weights are needed because
we are in a Morse-Bott situation) the operator
$D_{(v,\eta)}^{\mathcal{A}^H}$ is a Fredholm operator.
Let $C^-,C^+ \subset \mathrm{crit}(\mathcal{A}^H)$ be
the connected components of the critical manifold of
$\mathcal{A}^H$ containing $(v^-,\eta^-)$ or $(v^+,\eta^+)$
respectively.
The local virtual dimension of $\mathcal{M}$
at a finite energy gradient flow line is defined to be
\begin{equation}\label{i1}
\mathrm{virdim}_{(v,\eta)}\mathcal{M}:=
\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)}+\mathrm{dim}C^-+\mathrm{dim}C^+
\end{equation}
where $\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)}$ is the Fredholm index
of the Fredholm operator $D_{(v,\eta)}^{\mathcal{A}^H}$. For
generic compatible almost complex structures, the moduli space of
finite energy gradient flow lines
is a manifold and the local virtual dimension of the moduli
space at a gradient flow line $(v,\eta)$ corresponds to the
dimension of the connected component of $\mathcal{M}$ containing
$(v,\eta)$. Our first goal is to prove the following index formula.
\begin{prop}\label{index}
Assume that hypotheses (A) and (B) hold.
Let $C^-,C^+ \subset \mathrm{crit}(\mathcal{A}^H)$ be
two connected components of the critical manifold of $\mathcal{A}^H$.
Let $(v,\eta) \in C^\infty(\mathbb{R}\times S^1,V) \times
C^\infty(\mathbb{R},\mathbb{R})$ be a gradient flow line
of $\mathcal{A}^H$ which converges at both ends
$\lim_{s \to \pm \infty}(v,\eta)(s) = (v^\pm,\eta^\pm)$
to critical points of $\mathcal{A}^H$ satisfying
$(v^\pm, \eta^\pm) \in C^\pm$. Choose maps
$\bar{v}^\pm \in C^\infty(D,\Sigma)$ satisfying
$\bar{v}^\pm(e^{2 \pi i t})=v^\pm(t)$. Then the local virtual
dimension of the moduli space $\mathcal{M}$
of finite energy gradient flow lines
of $\mathcal{A}^H$ at $(v,\eta)$ is given by
\begin{equation}\label{virdim}
\mathrm{virdim}_{(v,\eta)}\mathcal{M}=
\mu_{CZ}(v^+)-\mu_{CZ}(v^-)
+2c_1(\bar{v}^-\#v\#\bar{v}^+)
+\frac{\mathrm{dim}C^-+\mathrm{dim}C^+}{2}
\end{equation}
where $\bar{v}^-\#v\#\bar{v}^+$ is the sphere obtained by
capping the cylinder $v$ with the disks $\bar{v}^+$ and
$\bar{v}^-$, and $c_1=c_1(TV)$.
\end{prop}
The proof is based on a discussion of spectral flows.
\\ \\
{\bf Spectral flows. }It is shown in \cite{robbin-salamon} that the
Fredholm index of $D^{\mathcal{A}^H}_{(v,\eta)}$ can be computed via
the {\em spectral flow} $\mu_{\rm spec}$ (see Appendix~\ref{sec:spec}) of
the Hessian ${\rm Hess}_{\mathcal{A}^H}$ along $(v,\eta)$ by the formula
\begin{equation}\label{i2}
\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)} =
\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^H}(v,\eta)\Bigr).
\end{equation}
Our proof compares the
spectral flow of the Hessian of $\mathcal{A}^H$
with the spectral flow of the action functional of classical mechanics
which can be computed via the Conley-Zehnder indices.
For a {\em fixed} Lagrange multiplier $\eta \in \mathbb{R}$
the action functional of classical mechanics arises as
$$
\mathcal{A}^H_\eta := \mathcal{A}^H(\cdot,\eta) \colon \mathscr{L}
\to \mathbb{R}.
$$
Assume first that the periods $\eta^\pm$ of the Reeb orbits $v^\pm$
are nonzero. We begin by homotoping the action functional
$\mathcal{A}^H$ via Morse-Bott functionals with fixed critical manifold
to an action functional $\mathcal{A}^{H^1}$ which satisfies
the assumptions of (the infinite dimensional analogue of)
Lemma~\ref{varlag}. There exists a neighbourhood $U \subset V$ of
$\Sigma$ and an $\epsilon>0$ such that $U$ is
symplectomorphic to
$\big(\Sigma \times (-\epsilon, \epsilon), d (e^r \alpha)\big)$
where $r$ is the coordinate on $(-\epsilon, \epsilon)$.
Since $\mathcal{A}^H$ is Morse-Bott and the Hamiltonian vector field $X_H(x)$
for $x \in \Sigma$ equals the Reeb vector field $R(x)$, there exists a
homotopy $H^s$ for $s \in [0,1]$ which satisfies the following
conditions:
\begin{itemize}
\item $H ^0=H$.
\item $X_{H^s}(x)=R(x)$ for $x \in \Sigma$ and $s \in [0,1]$.
\item There exist neighbourhoods $U^\pm\subset U$ of the critical
manifolds $C^\pm$ and functions $h_\pm \in C^\infty((-\epsilon,\epsilon))$
satisfying $h_\pm(0)=0$, $h_\pm'(0)=1$,
$h_\pm''(0) \neq 0$, and $h_\pm'(r) \neq 0$ for
$r \in (-\epsilon, \epsilon)$ such that
$H^1(x,r)=h_\pm(r)$ for $(x,r)\in
U^\pm\subset\Sigma\times(-\epsilon,\epsilon)$.
\item $\mathcal{A}^{H^s}$ is Morse-Bott for all $s\in[0,1]$.
\end{itemize}
Here the signs of $h_\pm''(0)$ are determined by the second derivatives
of $H$ in the direction transverse to $\Sigma$ along $C^\pm$.
Since $\mathcal{A}^H$ can be homotoped to $\mathcal{A}^{H^1}$
via Morse-Bott action functionals with fixed critical manifold, we
obtain
\begin{equation}\label{i3}
\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^H}(v,\eta)\Bigr) =
\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H^1}}(v,\eta)\Bigr).
\end{equation}
If $(v_0,\eta_0) \in C^\infty(S^1,\Sigma\cap U^\pm) \times \mathbb{R}$
is a critical point of
$\mathcal{A}^H$, then $(v_0,\eta_0)$ is also a critical point of
$\mathcal{A}^{H^1}$. Moreover, the family
$(v_\rho,\eta_\rho) \in C^\infty(S^1,U)\times \mathbb{R}$
given by
$$v_\rho(t)=(v_0(t),h^{-1}(-\rho)), \quad
\eta_\rho=\frac{\eta_0}{h'(h^{-1}(-\rho))}$$
consists of critical points for the family of action functionals
$\mathcal{A}^{H^1,\rho} \colon \mathscr{L}\times \mathbb{R} \to \mathbb{R}$
given for $(v,\eta) \in \mathscr{L} \times \mathbb{R}$ by
$$\mathcal{A}^{H^1,\rho}(v,\eta) := \int v^*\lambda-
\eta\bigg(\int_0^1H^1(v(t))dt+\rho\bigg).$$
Note that
$$
\partial_\rho\eta_\rho|_{\rho=0} =
\frac{\eta_0h_\pm''(0)}{h_\pm'(0)^3}.
$$
Hence for $\eta_0=\eta^\pm\neq 0$ the hypotheses of Lemma~\ref{varlag} are
satisfied. It follows from Theorem~\ref{lagspec} and
Lemma~\ref{varlag} that the spectral flow can be expressed in terms of
the spectral flow of the action functional of classical mechanics plus
a correction term accounting for the second derivatives of $H$
transversally to $\Sigma$ as
\begin{equation}\label{i4}
\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H^1}}(v,\eta)\Bigr)=
\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H^1}_\eta}(v)\Bigr)+\frac{1}{2}
\bigg( \mathrm{sign}(\eta^-\cdot h_-''(0))-
\mathrm{sign}(\eta^+ \cdot h_+''(0))\bigg).
\end{equation}
It follows from a theorem due to Salamon and Zehnder
\cite{salamon-zehnder} that the spectral flow of the Hessian of
$\mathcal{A}^{H^1}_\eta$ can be computed via Conley-Zehnder
indices. However, the Conley-Zehnder
indices in the Salamon-Zehnder theorem are not the
(transversal) Conley-Zehnder indices explained above,
but the Maslov index of the linearized flow of the Reeb vector
field on the whole tangent space of $V$ and not just on the contact
hyperplane. For a Reeb orbit $v$ we will denote this second {\em
(full) Conley-Zehnder index} by $\hat{\mu}_{CZ}(v)$. Note that
$\hat{\mu}_{CZ}(v)$ depends
on the second derivatives of $H$ transversally to $\Sigma$ while
$\mu_{CZ}(v)$ does not. Another complication is that we are in a Morse-Bott
situation and we have to adapt the Salamon-Zehnder theorem to this situation.
Formula (\ref{extsf}) defines the spectral flow also for Morse-Bott
situations. To adapt the Conley-Zehnder indices to the Morse-Bott situation
observe that in a symplectic trivialization the linearized flow
of the Reeb vector field can be expressed as a solution of an ordinary
differential equation
$$\dot{\Psi}(t)=J_0 S(t)\Psi(t), \quad \Psi(0)=\mathrm{id},$$
where $t \mapsto S(t)=S(t)^T$ is a smooth path of symmetric matrices.
For a real number $\delta$ we define $\Psi_\delta$ as the
solution of
$$\dot{\Psi}_\delta(t)=J_0 \big(S(t)-\delta\cdot \mathrm{id}\big)\Psi_\delta(t),
\quad \Psi_\delta(0)=\mathrm{id},$$
and set $\mu^{\delta}_{CZ}(v)$, respectively $\hat{\mu}^{\delta}_{CZ}(v)$
as the Conley-Zehnder index of $\Psi_{\delta}$ where in the first case
we restrict $\Psi_\delta$ to the contact hyperplane and in the second
case we consider it on the whole tangent space. We put
$$\mu^+_{CZ}(v):=\lim_{\delta \searrow 0}\mu^{\delta}_{CZ}(v), \quad
\mu^-_{CZ}(v):=\lim_{\delta \searrow 0}\mu^{-\delta}_{CZ}(v)$$
and analogously $\hat{\mu}^+_{CZ}(v)$ and $\hat{\mu}^-_{CZ}(v)$. Note
that while $\hat{\mu}_{CZ}(v)$ and $\mu_{CZ}(v)$ are half-integers,
$\hat{\mu}^\pm_{CZ}(v)$ and $\mu^\pm_{CZ}(v)$ are actually integers.
We are now in a position to state the theorem of Salamon and Zehnder.
\begin{thm}[Salamon-Zehnder~\cite{salamon-zehnder}]\label{diedi}
The spectral flow of the Hessian of $\mathcal{A}^{H^1}_\eta$ is
given by
$$\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H^1}_\eta}(v)\Bigr)=\hat{\mu}^+_{CZ}(v^+)-
\hat{\mu}^-_{CZ}(v^-)+2c_1(\bar{v}^-\#v\#\bar{v}^+).$$
\end{thm}
{\bf Relations between Conley-Zehnder indices. }
The following two lemmata relate the different Conley-Zehnder indices to
each other.
\begin{lemma}\label{coz1}
For a Reeb orbit $v$ with period $\eta\neq 0$, viewed as a 1-periodic
orbit of the Hamiltonian vector field of $\eta H$, we have
$$\hat{\mu}^\pm_{CZ}(v)=\mu^\pm_{CZ}(v)+\frac{1}{2}\bigg(
\mathrm{sign}\big(\eta h''(0)\big)\mp 1\bigg).$$
\end{lemma}
\textbf{Proof: } By the product property \cite{salamon} of the
Conley-Zehnder index the difference of
$\hat{\mu}^\pm_{CZ}(v)$ and $\mu^\pm_{CZ}(v)$ is given by
the Conley Zehnder index of the linearized flow of the Hamiltonian vector
field restricted to the symplectic orthogonal complement $\xi^\omega$ of
the contact hyperplane in the tangent space of $V$. With respect
to the trivialization $\mathbb{C} \to \xi^\omega(v(t))$ given
by $x+iy \mapsto (x \cdot \nabla H(v(t))+y \cdot X_H(v(t)))$ for $t \in
S^1$,
the linearized flow of the Hamiltonian vector field is given by
$$\Psi(t)=\left(\begin{array}{cc}
1 & 0\\
t\eta h''(0) & 1
\end{array}\right).$$
Abbreviate $a := \eta \cdot h''(0)$.
A computation shows that, with $\omega_\delta:=\sqrt{\delta(a-\delta)}$,
$$
\Psi_\delta(t) = \cosh(\omega_\delta t)\,\mathrm{id}+
\frac{\sinh(\omega_\delta t)}{\omega_\delta}
\left(\begin{array}{cc}
0 & \delta \\
a-\delta & 0
\end{array}\right),
$$
where for $\delta(a-\delta)<0$ the right hand side is understood via
$\cosh(is)=\cos(s)$ and $\sinh(is)=i\sin(s)$; in particular
$\mathrm{det}\big(\mathrm{id}-\Psi_\delta(t)\big)=
2\big(1-\cosh(\omega_\delta t)\big)$.
Recall \cite{salamon} that the Conley-Zehnder index can be computed
in terms of crossing numbers, where a number $t \in [0,1]$ is called
a crossing if $\mathrm{det}(\mathrm{id}-\Psi_{\delta}(t))=0$.
The formula above shows that for $\delta$ small enough the only
crossing happens at zero. Hence by~\cite{salamon} the
Conley-Zehnder index is given by
$$\mu_{CZ}(\Psi_\delta)=\frac{1}{2}\mathrm{sign}
\left(\begin{array}{cc}
a-\delta & 0\\
0 &-\delta
\end{array}\right).$$
If $|\delta|<|a|$ we obtain
$$\mu_{CZ}(\Psi_\delta)=\frac{1}{2}\bigg(
\mathrm{sign}(a)-\mathrm{sign}(\delta)\bigg)=
\frac{1}{2}\bigg(\mathrm{sign}\big(\eta
h''(0)\big)-\mathrm{sign}(\delta)\bigg)$$
and hence
$$\hat{\mu}_{CZ}^\pm(v)-\mu_{CZ}^\pm(v)=
\frac{1}{2}\bigg(\mathrm{sign}\big(\eta h''(0)\big)\mp 1\bigg).$$
This proves the lemma. \hfill $\square$
\begin{lemma}\label{coz2}
Let $v$ be a Reeb orbit with period $\eta\neq 0$ and $C_v$ the
component of the critical manifold of $\mathcal{A}^{H^1}$ which
contains $v$. Then
$$\hat{\mu}_{CZ}(v)=\hat{\mu}^\pm_{CZ}(v)\pm \frac{\mathrm{dim}C_v}{2},
\quad \mu_{CZ}(v)=\mu_{CZ}^\pm(v)\pm \frac{\mathrm{dim}C_v-1}{2}.$$
\end{lemma}
\textbf{Proof: }Obviously
\begin{equation}\label{cz1}
\hat{\mu}^-_{CZ}(v)-\hat{\mu}^+_{CZ}(v)=\mathrm{dim}C_v,
\quad
\mu^-_{CZ}(v)-\mu^+_{CZ}(v)=\mathrm{dim}C_v-1.
\end{equation}
The reason for the minus one in the second formula is that
the transversal Conley-Zehnder index only takes into account
the critical manifold of $\mathcal{A}^{H^1}$ modulo the
$S^1$-action given by the Reeb vector field. The Conley-Zehnder index
can be interpreted as intersection number of a path of Lagrangian
subspaces with the Maslov cycle, see \cite{robbin-salamon0}.
Under a small perturbation the intersection number can only
change at the initial and endpoint. Since the Lagrangian subspace
at the initial point is fixed it will change only at the endpoint.
There the contribution is given by half of the crossing number,
which equals $\mathrm{dim}C_v$ if one considers the
Conley-Zehnder index on the whole tangent space and
$\mathrm{dim}C_v-1$ if one considers the Conley-Zehnder index only
on the contact hyperplane. In particular,
\begin{equation}\label{cz2}
|\hat{\mu}_{CZ}(v)-\hat{\mu}_{CZ}^\pm(v)| \leq \frac{\mathrm{dim}C_v}{2},
\quad |\mu_{CZ}(v)-\mu^\pm_{CZ}(v)| \leq \frac{\mathrm{dim}C_v-1}{2}.
\end{equation}
Comparing (\ref{cz1}) and (\ref{cz2}) the lemma follows. \hfill $\square$
\\ \\
\textbf{Proof of Proposition~\ref{index}: }
We first assume that $\eta^-$ and $\eta^+$ are nonzero.
Combining the theorem of Salamon and Zehnder (Theorem~\ref{diedi}) with
Lemma~\ref{coz1} and Lemma~\ref{coz2}
we obtain
\begin{eqnarray*}
\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H^1}_\eta}(v)\Bigr)&=&
\hat{\mu}^+_{CZ}(v^+)-\hat{\mu}^-_{CZ}(v^-)+
2c_1(\bar{v}^-\#v\#\bar{v}^+)\\
&=&\mu^+_{CZ}(v^+)-\mu^-_{CZ}(v^-)+
2c_1(\bar{v}^-\#v\#\bar{v}^+)-1\\
& &+\frac{1}{2}\bigg(\mathrm{sign}(\eta^+ \cdot h_+''(0))
-\mathrm{sign}(\eta^- \cdot h_-''(0))\bigg)\\
&=&\mu_{CZ}(v^+)-\mu_{CZ}(v^-)+
2c_1(\bar{v}^-\#v\#\bar{v}^+)-\frac{\mathrm{dim}C^-+\mathrm{dim}C^+}{2}\\
& &+\frac{1}{2}\bigg(\mathrm{sign}(\eta^+ \cdot h_+''(0))
-\mathrm{sign}(\eta^- \cdot h_-''(0))\bigg).
\end{eqnarray*}
Combining this equality with (\ref{i1}), (\ref{i2}), (\ref{i3}), and
(\ref{i4}) we compute
\begin{eqnarray*}
\mathrm{virdim}_{(v,\eta)}\mathcal{M}
&=&\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)}+\mathrm{dim}C^-+\mathrm{dim}C^+\\
&=&\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H}}(v,\eta)\Bigr)+\mathrm{dim}C^-+\mathrm{dim}C^+\\
&=&\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H^1}}(v,\eta)\Bigr)+\mathrm{dim}C^-+\mathrm{dim}C^+\\
&=&\mu_{\rm spec}\Bigl({\rm Hess}_{\mathcal{A}^{H^1}_\eta}(v)\Bigr)
+\frac{1}{2}\bigg(\mathrm{sign}(\eta^-\cdot h_-''(0))-
\mathrm{sign}(\eta^+\cdot h_+''(0))\bigg)
\\
& &+\mathrm{dim}C^-+\mathrm{dim}C^+\\
&=&\mu_{CZ}(v^+)-\mu_{CZ}(v^-)
+2c_1(\bar{v}^-\#v\#\bar{v}^+)\\
& &+\frac{\mathrm{dim}(C^-)+\mathrm{dim}(C^+)}{2}.
\end{eqnarray*}
This proves the proposition for the case where the periods of the
asymptotic Reeb orbits are both nonzero. To treat also the case where
one of the asymptotic Reeb orbits is constant we consider
the following involution on the loop space $\mathscr{L}$
$$I(v)(t)=v(-t), \quad v \in \mathscr{L},\,\, t\in S^1.$$
We extend this involution to an involution on $\mathscr{L}\times \mathbb{R}$
which we denote by abuse of notation also by $I$ and which is given by
$$I(v,\eta)=(I(v),-\eta), \quad (v,\eta) \in \mathscr{L}\times \mathbb{R}.$$
The action functional $\mathcal{A}^H$ transforms under the involution $I$
by
$$\mathcal{A}^H(I(v,\eta))=-\mathcal{A}^H(v,\eta),\quad
(v,\eta) \in \mathscr{L}\times \mathbb{R}.$$
In particular, the restriction of the involution $I$ to the
critical manifold of $\mathcal{A}^H$ induces an involution
on $\mathrm{crit}(\mathcal{A}^H)$ and the fixed points of this involution
are the constant Reeb orbits.
\\
We consider now a finite energy gradient flow line
$(v,\eta) \in C^\infty(\mathbb{R}\times S^1,V)\times
C^\infty(\mathbb{R},\mathbb{R})$ of the action functional
$\mathcal{A}^H$ whose right end $(v^+,\eta^+)$ is a constant Reeb orbit
and whose left end $(v^-,\eta^-)$ is a nonconstant Reeb orbit.
For the path $(v,\eta)$ in $\mathscr{L}\times \mathbb{R}$ we
consider the path $(v,\eta)_I=(v_I,\eta_I)$
in $\mathscr{L}\times \mathbb{R}$ defined
by $(v,\eta)_I(s)=I(v,\eta)(-s)$ for $s \in \mathbb{R}$.
The path $(v,\eta)_I$ goes from $(v^+,\eta^+)$ to
$I(v^-,\eta^-)$ and gluing the paths $(v,\eta)$ and $(v,\eta)_I$ together
we obtain a path $(v,\eta)\#(v,\eta)_I$ from
$(v^-,\eta^-)$ to $I(v^-,\eta^-)$. The Fredholm indices of the different
paths are related by
$$\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)}=
\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)_I}, \quad
\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)\#(v,\eta)_I}=
\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)}+
\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)_I}
+\mathrm{dim}C^+.$$
From this we compute, using (\ref{virdim}) for the case of
nonconstant Reeb orbits and the equality
$\mu_{CZ}(I(v^\pm))=-\mu_{CZ}(v^\pm)$,
\begin{eqnarray*}
\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)}&=&
\frac{1}{2}\cdot\mathrm{ind}D^{\mathcal{A}^H}_{(v,\eta)\#(v,\eta)_I}
-\frac{\mathrm{dim}C^+}{2}\\
&=&\frac{1}{2}\bigg(\mu_{CZ}(I(v^-))-\mu_{CZ}(v^-)+
2c_1(\bar{v}^-\#v\#v_I\#I\bar{v}^-)\\
& &-\frac{\mathrm{dim}C^-+\mathrm{dim}IC^-}{2}\bigg)
-\frac{\mathrm{dim}C^+}{2}\\
&=&-\mu_{CZ}(v^-)+2c_1(\bar{v}^-\#v)-
\frac{\mathrm{dim}C^-+\mathrm{dim}C^+}{2},
\end{eqnarray*}
from which we deduce (\ref{virdim}) using (\ref{i1}). This proves
the proposition for the case of gradient flow lines whose right end is
a constant Reeb orbit. The case of gradient flow lines whose left end
is constant can be deduced in the same way or by considering the coindex.
This finishes the proof of the Proposition~\ref{index}. \hfill $\square$
\\ \\
In order to define a $\mathbb{Z}$-grading on $HF(\Sigma,V)$
we need that the local virtual dimension just depends on the asymptotics of
the finite energy gradient flow line. By (\ref{virdim}) this is the
case if $I_{c_1}=0$ on $V$. In this case the local virtual dimension
is given by
\begin{equation}\label{virdim2}
\mathrm{virdim}_{(v,\eta)}\mathcal{M}=
\mu_{CZ}(v^+)-\mu_{CZ}(v^-)
+\frac{\mathrm{dim}C^-+\mathrm{dim}C^+}{2}.
\end{equation}
In order to deal with the third term it is useful
to introduce the following index for the Morse function
$h$ on $\mathrm{crit}(\mathcal{A}^H)$. We define the {\em
signature index} $\mathrm{ind}^\sigma_h(c)$ of a critical point $c$
of $h$ to be
$$\mathrm{ind}^\sigma_h(c):=-\frac{1}{2}\mathrm{sign}({\rm Hess}_h(c)),$$
see Appendix~\ref{app:casc}.
The signature index is related to the {\em Morse index}
$\mathrm{ind}^m_h(c)$, given by the
number of negative eigenvalues of ${\rm Hess}_h(c)$ counted with
multiplicity, by
\begin{equation}\label{signind}
\mathrm{ind}^\sigma_h(c) = \mathrm{ind}^m_h(c)-\frac{1}{2}
\mathrm{dim}_c\big(\mathrm{crit}(\mathcal{A}^H)\big),
\end{equation}
since $\mathrm{sign}({\rm Hess}_h(c))=
\mathrm{dim}_c\big(\mathrm{crit}(\mathcal{A}^H)\big)-2\,\mathrm{ind}^m_h(c)$.
We define a {\em grading} $\mu$ on $CF_*(\mathcal{A}^H,h)$ by
$$\mu(c):=\mu_{CZ}(c)+\mathrm{ind}^\sigma_h(c).$$
By considering the case of nondegenerate closed Reeb orbits, one sees
that $\mu$ takes values in the set $\frac{1}{2}+\mathbb{Z}$, so it is indeed a
$\mathbb{Z}$-grading (shifted by $\frac{1}{2}$).
Using equation (\ref{virdim2}), it is shown in Appendix~\ref{app:casc}
that the Floer boundary operator $\partial$ has degree $-1$ with
respect to this grading. Hence we get a
$\mathbb{Z}$-grading on the homology $HF_*(\Sigma,V)$.
\\ \\
\textbf{Proof of Theorem~\ref{compute}: }To prove
Theorem~\ref{compute} we use the fact that
the chain groups underlying the Floer homology $HF_*(\mathcal{A}^H)$
only depend on $(\Sigma,\alpha)$ and not on the embedding
of $\Sigma$ into $V$. We show that for the unit cotangent bundle
$S^*S^n$ for $n \geq 4$ the Floer homology equals the chain complex.
More precisely, we choose the standard round metric on $S^n$
normalized such that all geodesics are closed with
minimal period one. For this choice assumption (A)
is satisfied. The critical manifold of $\mathcal{A}^H$ consists of
$\mathbb{Z}$ copies
of $S^*S^n$, where $\mathbb{Z}$ corresponds to the period of the
geodesic. There is a Morse function $h_0$ on $S^*S^n$ with precisely 4
critical points and zero boundary operator (with $\mathbb{Z}_2$-coefficients!)
whose Morse homology satisfies
$$
HM_k(S^*S^n;\mathbb{Z}_2)=CM_k(h_0;\mathbb{Z}_2)=\left\{
\begin{array}{cc}
\mathbb{Z}_2 & k \in \{0,n-1,n,2n-1\}\\
0 & \mathrm{else}.
\end{array}
\right.
$$
Let $h$ be the Morse function on the critical manifold which coincides
with $h_0$ on each connected component. The chain complex is
generated by
$$\mathrm{crit}(h) \cong \mathbb{Z} \times \mathrm{crit}(h_0).$$
A closed geodesic $c$ is also a critical point of the energy functional
on the loop space. The {\em index} $\mathrm{ind}_E(c)$ of a closed geodesic is
defined to be the Morse index of the energy functional at the geodesic
and the {\em nullity} $\nu(c)$ is defined to be the dimension
of the connected component of the critical manifold of the energy
functional which contains the geodesic minus one.
The (transverse) Conley-Zehnder index of a closed geodesic is given by
\begin{equation}\label{morse}
\mu_{CZ}(c)=\mathrm{ind}_E(c)+\frac{\nu(c)}{2}.
\end{equation}
This is proved in \cite{duistermaat, weber} for nondegenerate
geodesics; the degenerate case follows from the nondegenerate one
using the averaging property of the Conley-Zehnder index
(Lemma~\ref{coz2}).
By the Morse index theorem, see \cite{morse} or
\cite[Theorem 2.5.14]{klingenberg}, the index of a geodesic is
given by the number of conjugate points counted with multiplicity
plus the concavity. The latter one vanishes for the standard round
metric on $S^n$, since each closed geodesic has a variation
of closed geodesics having the same length \cite{ziller}.
\\
Using the Morse index theorem and equations~\eqref{morse}
and~\eqref{signind}, we compute the index of $(m,x) \in
\mathbb{Z}\times \mathrm{crit}(h_0)$:
\begin{eqnarray*}
\mu(m,x)&=&\mu_{CZ}(m,x)+\mathrm{ind}^{\sigma}_h(x)\\
&=&\mathrm{ind}_E(m,x)+\frac{\nu(m,x)}{2}+\mathrm{ind}^\sigma_h(x)\\
&=&(2m-1)(n-1)+\frac{2n-2}{2}+\mathrm{ind}^\sigma_h(x)\\
&=&2m(n-1)+\mathrm{ind}^m_h(x)-\frac{2n-1}{2}.
\end{eqnarray*}
It follows from Lemma~\ref{period} that the action satisfies
$$\mathcal{A}^H(m,x)=m.$$
In order to have a gradient flow line of $\mathcal{A}^H$ from a
critical point $(m_1,x_1)$ to a critical point $(m_2,x_2)$ we need
$$
\mathcal{A}^H(m_2,x_2)-\mathcal{A}^H(m_1,x_1)=m_2-m_1>0
$$
and
$$
\mu(m_2,x_2)-\mu(m_1,x_1) = 2(m_2-m_1)(n-1)+(i_2-i_1) = 1
$$
for $i_1,i_2\in\{0,n-1,n,2n-1\}$, which is impossible if $n \geq 4$.
Hence there are no gradient flow lines, so the Floer homology
equals the chain complex. This proves Theorem~\ref{compute}. \hfill
$\square$
\section{Introduction}
In the setting of {\it multi-agent online learning} (\cite{shalev-shwartz_online_2011,cesa-bianchi_prediction_2006}), $K$ players interact with each other over time. At each time step $t$, each player $k \in \{1, \ldots, K\}$ chooses an {\it action} $\bz\^t\!k$; $\bz\^t\!k$ may represent, for instance, the bidding strategy of an advertiser at time $t$. Player $k$ then suffers a {\it loss} $\ell_t(\bz\^t\!k)$ that depends on both player $k$'s action $\bz\^t\!k$ and the actions of all other players at time $t$ (which are absorbed into the loss function $\ell_t(\cdot)$). Finally, player $k$ receives some {\it feedback} informing them of how to improve their actions in future iterations. In this paper we study gradient-based feedback, meaning that the feedback is the vector $\bg\^t\!k = \grad_{\bz\!k} \ell_t(\bz\^t\!k)$. %
A fundamental quantity used to measure the performance of an online learning algorithm is the {\it regret} of player $k$, which is the difference between the total loss of player $k$ over $T$ time steps and the loss of the best possible action in hindsight: formally, the regret at time $T$ is $\sum_{t=1}^T \ell_t(\bz\^t\!k) - \min_{\bz\!k} \sum_{t=1}^T \ell_t(\bz\!k)$. An algorithm is said to be {\it no-regret} if its regret at time $T$ grows sub-linearly with $T$ for an adversarial choice of the loss functions $\ell_t$. If all agents playing a game follow no-regret learning algorithms to choose their actions, then it is well-known that the empirical frequency of their actions converges to a {\it coarse correlated equilibrium (CCE)} (\cite{moulin_strategically_1978,cesa-bianchi_prediction_2006}). In turn, a substantial body of work (e.g., \cite{cesa-bianchi_prediction_2006,daskalakis2009network,even2009convergence,cai2011minmax,viossat_no-regret_2013,krichene_convergence_2015,bloembergen_evolutionary_2015,monnot_limits_2017,mertikopoulos_learning_2018,krichene_learning_2018}) has focused on establishing for which classes of games or learning algorithms this convergence to a CCE can be strengthened, such as to convergence to a {\it Nash equilibrium (NE)}. %
However, the type of convergence guaranteed in these works generally either applies only to the time-average of the joint action profiles, or else requires the sequence of learning rates to converge to 0. Such guarantees leave substantial room for improvement: a statement about the average of the joint action profiles fails to capture the game dynamics over time (\cite{mertikopoulos_cycles_2017}), and both types of guarantees use newly acquired information with decreasing weight, which, as remarked by \cite{lin_finite-time_2020}, is very unnatural from an economic perspective.\footnote{In fact, even in the adversarial setting, standard no-regret algorithms such as FTRL (\cite{shalev-shwartz_online_2011}) need to be applied with decreasing step-size in order to achieve sublinear regret.} Therefore, the following question is of particular interest (\cite{mertikopoulos_learning_2018,lin_finite-time_2020,mertikopoulos_cycles_2017,daskalakis_training_2017}):
\begin{equation}
\tag{$\star$}\label{eq:main-question}
\parbox{10cm}{\centering\text{\it Can we establish last-iterate rates if all players act according to }\\ \text{\it a no-regret learning algorithm with constant step size?}}
\end{equation}
We measure the proximity of an action profile $\bz = (\bz\!1, \ldots, \bz\!K)$ to equilibrium in terms of the {\it total gap function} at $\bz$ (Definition \ref{def:total-gap}): it is defined to be the sum over all players $k$ of the maximum decrease in cost player $k$ could achieve by deviating from its action $\bz\!k$. \cite{lin_finite-time_2020} took initial steps toward addressing (\ref{eq:main-question}), showing that if all agents follow the {\it online gradient descent} algorithm, then for all {\it $\lambda$-cocoercive games}, the action profiles $\bz\^t = (\bz\^t\!1, \ldots, \bz\^t\!K)$ will converge to equilibrium in terms of the total gap function at a rate of $O(1/\sqrt{T})$. Moreover, linear last-iterate rates have been long known for smooth {\it strongly-monotone} games (\cite{tseng_linear_1995,gidel_variational_2018,liang_interaction_2018,mokhtari_unified_2019,azizian_tight_2019,zhou_robust_2020}), a sub-class of $\lambda$-cocoercive games. Unfortunately, even $\lambda$-cocoercive games exclude many important classes of games, such as bilinear games, which are the adaptation of matrix games to the unconstrained setting. Moreover, this shortcoming is not merely an artifact of the analysis of \cite{lin_finite-time_2020}: it has been observed (e.g.~\cite{daskalakis_training_2017, gidel_variational_2018}) that in bilinear games, the players' actions in online gradient descent not only fail to converge, but diverge to infinity. Prior work on last-iterate convergence rates for these various subclasses of monotone games is summarized in Table \ref{tab:last-iterate} for the case of perfect gradient feedback; the setting for noisy feedback is summarized in Table \ref{tab:last-iterate-stoch} in Appendix \ref{sec:stoch-background}.
\begin{table}
\centering
\caption{
Known last-iterate convergence rates for learning in smooth monotone games with perfect gradient feedback (i.e., {\it deterministic} algorithms). We specialize to the 2-player 0-sum case in presenting prior work, since some papers in the literature only consider this setting. Recall that a game $\MG$ has a {\it $\gamma$-singular value lower bound} if for all $\bz$, all singular values of $\partial F_\MG(\bz)$ are $\geq \gamma$. %
$\ell, \Lambda$ are the Lipschitz constants of $F_\MG, \partial F_\MG$, respectively, and $c, C > 0$ are absolute constants where $c$ is sufficiently small and $C$ is sufficiently large. %
Upper bounds in the left-hand column are for the \ExG algorithm, and lower bounds are for a general form of 1-SCLI methods which include \ExG. Upper bounds in the right-hand column are for algorithms which are implementable as online no-regret learning algorithms (e.g., \OG or online gradient descent), and lower bounds are shown for two classes of algorithms containing \OG and online gradient descent, namely $p$-SCLI algorithms for general $p \geq 1$ (recall for \OG, $p = 2$) as well as those satisfying a 2-step linear span assumption (see \cite{ibrahim_linear_2019}). The reported upper and lower bounds are stated for the total gap function (Definition \ref{def:total-gap}); leading constants and factors depending on distance between initialization and optimum are omitted.}
\label{tab:last-iterate}
\centering
\begin{adjustbox}{center}
\begin{tabular}{lll}
\toprule
& \multicolumn{2}{c}{Deterministic} \\
Game class & \multicolumn{1}{c}{Extra gradient} & \multicolumn{1}{c}{Implementable as no-regret} \\
\midrule
\makecell[l]{$\mu$-strongly \\ monotone}& \makecell[l]{{\it Upper:} $\ell\left(1 - \frac{c\mu}{\ell}\right)^T$ \cite[EG]{mokhtari_unified_2019}\\ {\it Lower: } $\mu \left(1 - \frac{C\mu}{\ell} \right)^T$ \cite[$1$-SCLI]{azizian_tight_2019}}& \makecell[l]{{\it Upper:} $\ell \left(1 - \frac{c\mu}{\ell} \right)^T$ \cite[OG]{mokhtari_unified_2019} \\ {\it Lower:} $\mu \left( 1 - \frac{C\mu}{\ell}\right)^T$ \cite[2-step lin.~span]{ibrahim_linear_2019} \\ {\it Lower:} $\mu \left(1 - \sqrt[p]{\frac{C\mu}{\ell}}\right)^T$ \cite[$p$-SCLI]{arjevani_lower_2015,ibrahim_linear_2019}} \\
\cmidrule(rl){1-3}
\makecell[l]{Monotone, \\ $\gamma$-sing.~val.\\ low. bnd.} & \makecell[l]{{\it Upper:} $\ell \left(1 - \frac{c \gamma^2}{\ell^2} \right)^T$ \cite[EG]{azizian_tight_2019} \\ {\it Lower:} $\gamma \left( 1 - \frac{C\gamma^2}{\ell^2} \right)^T$ \cite[1-SCLI]{azizian_tight_2019}} & \makecell[l]{{\it Upper:} $\ell \left(1 - \frac{c \gamma^2}{\ell^2} \right)^T$ \cite[OG]{azizian_tight_2019} \\ {\it Lower:} $\gamma \left(1 - \frac{C\gamma}{\ell}\right)^T$ \cite[2-step lin.~span]{ibrahim_linear_2019}\\ {\it Lower:} $\gamma \left(1 - \sqrt[p]{\frac{C\gamma}{\ell}}\right)^T$ \cite[$p$-SCLI]{arjevani_lower_2015,ibrahim_linear_2019}} \\
\cmidrule(rl){1-3}
$\lambda$-cocoercive & \multicolumn{1}{c}{Open}& {\it Upper:} $\frac{1}{\lambda\sqrt{T}}$ \cite[Online grad.~descent]{lin_finite-time_2020} \\
\cmidrule(rl){1-3}
Monotone &\makecell[l]{{\it Upper:} $ \frac{\ell + \Lambda}{\sqrt{T}}$ \cite[EG]{golowich_last_2020}\\ {\it Lower: } $\frac{\ell}{\sqrt{T}}$ \cite[1-SCLI]{golowich_last_2020}} & \makecell[l]{{\it Upper:} $ \frac{\ell + \Lambda}{\sqrt{T}}$ ({\bf Theorem \ref{thm:ogda-last}}, OG) \\ {\it Lower: } $\frac{\ell}{\sqrt{T}}$ ({\bf Theorem \ref{thm:mm-spectral}}, $p$-SCLI, lin.~coeff.~matrices)}\\
\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{-0.5cm}
\end{table}
\subsection{Our contributions}
In this paper we answer (\ref{eq:main-question}) in the affirmative for all {\it monotone games} (Definition \ref{def:monotone}) satisfying a mild smoothness condition, which includes smooth $\lambda$-cocoercive games and bilinear games. Many common and well-studied classes of games, such as zero-sum polymatrix games (\cite{bregman1987methods,daskalakis2009network,cai_zero-sum_2016}) and its generalization zero-sum socially-concave games (\cite{even2009convergence}) are monotone but are not in general $\lambda$-cocoercive. Hence our paper is the first to prove last-iterate convergence in the sense of (\ref{eq:main-question}) for the unconstrained version of these games as well. In more detail, we establish the following: %
\begin{itemize}
\item We show in Theorem \ref{thm:ogda-last} and Corollary \ref{cor:tgap} that the actions taken by learners following the {\it optimistic gradient (\OG)} algorithm, which is no-regret, exhibit last-iterate convergence to a Nash equilibrium in smooth, monotone games at a rate of $O(1/\sqrt{T})$ in terms of the total gap function. The proof uses a new technique which we call {\it adaptive potential functions} (Section \ref{sec:ada-potential}) which may be of independent interest. %
\item We show in Theorem \ref{thm:mm-spectral} that the rate $O(1/\sqrt{T})$ cannot be improved for any algorithm belonging to the class of $p$-SCLI algorithms (Definition \ref{def:pcli}), which includes \OG. %
\end{itemize}
The \OG algorithm is closely related to the {\it extra-gradient (\ExG)} algorithm (\cite{korpelevich_extragradient_1976,nemirovski_prox-method_2004}),\footnote{\ExG is also known as {\it mirror-prox}, which specifically refers to its generalization to general Bregman divergences.} which, at each time step $t$, assumes each player $k$ has an oracle $\MO_k$ which provides them with an additional gradient at a slightly different action than the action $\bz\^t\!k$ played at step $t$. Hence \ExG does not naturally fit into the standard setting of multi-agent learning. One could try to ``force'' \ExG into the setting of multi-agent learning by taking actions at odd-numbered time steps $t$ to simulate the oracle $\MO_k$, and using the even-numbered time steps to simulate the actions $\bz\^t\!k$ that \ExG actually takes. Although this algorithm exhibits last-iterate convergence at a rate of $O(1/\sqrt{T})$ in smooth monotone games when all players play according to it \cite{golowich_last_2020}, it is straightforward to see that it is {\it not} a no-regret learning algorithm, i.e., for an adversarial loss function the regret can be linear in $T$ (see Proposition \ref{prop:eg-regret} in Appendix \ref{sec:eg-linear-reg}).
Nevertheless, due to the success of \ExG at solving monotone variational inequalities, \cite{mertikopoulos_learning_2018} asked whether similar techniques to \ExG could be used to speed up last-iterate convergence to Nash equilibria. Our upper bound for \OG answers this question in the affirmative: various papers (\cite{chiang_online_2012,rakhlin_online_2012,rakhlin_optimization_2013,hsieh_convergence_2019}) have observed that \OG may be viewed as an approximation of \ExG, in which the previous iteration's gradient is used to simulate the oracle $\MO_k$. Moreover, our upper bound of $O(1/\sqrt{T})$ applies in many games for which the approach used in \cite{mertikopoulos_learning_2018}, namely Nesterov's dual averaging (\cite{nesterov_primal-dual_2009}), either fails to converge (such as bilinear games) or only yields asymptotic rates with decreasing learning rate (such as smooth strictly monotone games). Proving last-iterate rates for \OG has also been noted as an important open question in \cite[Table 1]{hsieh_convergence_2019}. At a technical level, the proof of our upper bound (Theorem \ref{thm:ogda-last}) uses the proof technique in \cite{golowich_last_2020} for the last-iterate convergence of \ExG as a starting point. In particular, similar to \cite{golowich_last_2020}, our proof proceeds by first noting that some iterate $\bz\^{t^*}$ of \OG will have gradient gap $O(1/\sqrt{T})$ (see Definition \ref{def:grad-gap}; this is essentially a known result) and then showing that for all $t \geq t^*$ the gradient gap only increases by at most a constant factor. The latter step is the bulk of the proof, as was the case in \cite{golowich_last_2020}; however, since each iterate of \OG depends on the previous two iterates and gradients, the proof for \OG is significantly more involved than that for \ExG. We refer the reader to Section \ref{sec:ada-potential} and Appendix \ref{sec:ogda-proofs} for further details.
The proof of our lower bound for $p$-SCLI algorithms, Theorem \ref{thm:mm-spectral}, reduces to a question about the spectral radius of a family of polynomials. In the course of our analysis we prove a conjecture by \cite{arjevani_lower_2015} about such polynomials; though the validity of this conjecture is implied by each of several independent results in the literature (e.g., \cite{arjevani_iteration_2016,nevanlinna_convergence_1993}), our proof is more direct than previous ones. %
Lastly, we mention that our focus in this paper is on the unconstrained setting, meaning that the players' losses are defined on all of Euclidean space. We leave the constrained setting, in which the players must project their actions onto a convex constraint set, to future work. %
\subsection{Related work}
\paragraph{Multi-agent learning in games.} In the constrained setting, many papers have studied conditions under which the action profile of no-regret learning algorithms, often variants of Follow-The-Regularized-Leader (\FTRL), converges to equilibrium. However, these works all assume either a learning rate that decreases over time (\cite{mertikopoulos_learning_2018,zhou_countering_2017,zhou_learning_2018,zhou_mirror_2017}), or else only apply to specific types of {\it potential games} (\cite{krichene_convergence_2015,krichene_learning_2018,palaiopanos_multiplicative_2017,kleinberg_multiplicative_2009,chen_generalized_2016,blum_routing_2006,panageas_average_2014}), which significantly facilitates the analysis of last-iterate convergence.\footnote{In {\it potential games}, there is a canonical choice of potential function whose local minima are equivalent to being at a Nash equilibrium. The lack of existence of a natural potential function in general monotone games is a significant challenge in establishing last-iterate convergence.} %
Such potential games are in general incomparable with monotone games, and do not even include finite-action two-player zero-sum games (i.e., {\it matrix games}). In fact,
\cite{bailey_multiplicative_2018} showed that the actions of players following \FTRL in two-player zero-sum matrix games {\it diverge} from interior Nash equilibria. Many other works (\cite{hart_uncoupled_2003,mertikopoulos_cycles_2017,kleinberg_beyond_2011,daskalakis_learning_2010,balcan_weighted_2012,papadimitriou_nash_2016}) establish similar non-convergence results in both discrete and continuous time for various types of monotone games, including zero-sum polymatrix games. Such non-convergence includes chaotic behavior such as Poincar\'{e} recurrence, which showcases the insufficiency of on-average convergence (which holds in such settings) and so is additional motivation for the question (\ref{eq:main-question}).
\paragraph{Monotone variational inequalities \& \OG.} The problem of finding a Nash equilibrium of a monotone game is exactly that of finding a solution to a monotone variational inequality (VI). \OG was originally introduced by \cite{popov_modification_1980}, who showed that its iterates converge to solutions of monotone VIs, without proving explicit rates.\footnote{Technically, the result of \cite{popov_modification_1980} only applies to two-player zero-sum monotone games (i.e., finding the saddle point of a convex-concave function). The proof readily extends to general monotone VIs (\cite{hsieh_convergence_2019}).} It is also well-known that the {\it averaged} iterate of \OG converges to the solution of a monotone VI at a rate of $O(1/T)$ (\cite{hsieh_convergence_2019,mokhtari_convergence_2019,rakhlin_optimization_2013}), which is known to be optimal (\cite{nemirovski_prox-method_2004,ouyang_lower_2019,azizian_accelerating_2020}). Recently it has been shown (\cite{daskalakis_last-iterate_2018,lei_last_2020}) that a modification of \OG known as optimistic multiplicative-weights update exhibits last-iterate convergence to Nash equilibria in two-player zero-sum monotone games, but as with the unconstrained case (\cite{mokhtari_convergence_2019}) non-asymptotic rates are unknown. To the best of our knowledge, the only work proving last-iterate convergence rates for general smooth monotone VIs was \cite{golowich_last_2020}, which treated only the EG algorithm (which, as noted above, is not no-regret). There is a vast literature on solving VIs, and we refer the reader to \cite{facchinei_finite-dimensional_2003} for further references.
\section{Preliminaries}
Throughout this paper we use the following notational conventions. For a vector $\bv \in \BR^n$, let $\| \bv \|$ denote the Euclidean norm of $\bv$. For $\bv \in \BR^n$, set $\MB(\bv, R) := \{ \bz \in \BR^n : \| \bv - \bz \| \leq R\}$; when we wish to make the dimension explicit we write $\MB_{\BR^n}(\bv, R)$. For a matrix $\bA \in \BR^{n \times n}$ let $\| \bA \|_\sigma$ denote the spectral norm of $\bA$. %
We let the set of $K$ players be denoted by $\MK := \{ 1, 2, \ldots K \}$. Each player $k$'s actions $\bz\!k$ belong to their {\it action set}, denoted $\MZ_k$, where $\MZ_k \subseteq \BR^{n_k}$ is a convex subset of Euclidean space. Let $\MZ = \prod_{k=1}^K \MZ_k \subseteq \BR^n$, where $n = n_1 + \cdots + n_K$. In this paper we study the setting where the action sets are unconstrained (as in \cite{lin_finite-time_2020}), meaning that $\MZ_k = \BR^{n_k}$ and $\MZ = \BR^n$.
The {\it action profile} is the vector $\bz := (\bz\!1, \ldots, \bz\!K) \in \MZ$. For any player $k \in \MK$, let $\bz\!{-k} \in \prod_{k' \neq k} \MZ_{k'}$ be the vector of actions of all the other players. Each player $k \in \MK$ wishes to minimize its {\it cost function} $f_k : \MZ \ra \BR$, which is assumed to be twice continuously differentiable. %
The tuple $\MG := (\MK, (\MZ_k)_{k=1}^K, (f_k)_{k=1}^K)$ is known as a {\it continuous game}.
At each time step $t$, each player $k$ plays an action $\bz\^t\!k$; we assume the feedback to player $k$ is given in the form of the gradient $\grad_{\bz\!k} f_k(\bz\^t\!k, \bz\^t\!{-k})$ of their cost function with respect to their action $\bz\^t\!k$, given the actions $\bz\^t\!{-k}$ of the other players at time $t$. We denote the concatenation of these gradients by $F_\MG(\bz) := (\grad_{\bz\!1} f_1(\bz), \ldots, \grad_{\bz\!K}f_K(\bz)) \in \BR^n$. When the game $\MG$ is clear, we will sometimes drop the subscript and write $F : \MZ \ra \BR^n$. %
\paragraph{Equilibria \& monotone games.} A {\it Nash equilibrium} in the game $\MG$ is an action profile $\bz^* \in \MZ$ so that for each player $k$, it holds that $f_k(\bz^*\!k, \bz\!{-k}^*) \leq f_k(\bz\!k', \bz\!{-k}^*)$ for any $\bz\!{k}' \in \MZ_k$. %
Throughout this paper we study {\it monotone} games:
\begin{definition}[Monotonicity; \cite{rosen_existence_1965}]
\label{def:monotone}
The game $\MG = (\MK, (\MZ_k)_{k=1}^K, (f_k)_{k=1}^K)$ is {\it monotone} if for all $\bz, \bz' \in \MZ$, it holds that $\lng F_\MG(\bz') - F_\MG(\bz), \bz' - \bz \rng \geq 0$. In such a case, we say also that $F_\MG$ is a monotone operator.
\end{definition}
The following classical result characterizes the Nash equilibria in monotone games:
\begin{proposition}[\cite{facchinei_finite-dimensional_2003}]
\label{prop:nash-charac}
In the unconstrained setting, if the game $\MG$ is monotone, any Nash equilibrium $\bz^*$ satisfies $F_\MG(\bz^*) = {\ensuremath{\mathbf 0}}$. Conversely, if $F_\MG(\bz) = {\ensuremath{\mathbf 0}}$, then $\bz$ is a Nash equilibrium. %
\end{proposition}
In accordance with Proposition \ref{prop:nash-charac}, one measure of the proximity to equilibrium of some $\bz \in \MZ$ is the norm of $F_\MG(\bz)$:
\begin{defn}[Gradient gap function]
\label{def:grad-gap}
Given a monotone game $\MG$ with its associated operator $F_\MG$, the {\it gradient gap function} evaluated at $\bz$ is defined to be $\| F_\MG(\bz) \|$.
\end{defn}
It is also common (\cite{mokhtari_convergence_2019,nemirovski_prox-method_2004}) to measure the distance from equilibrium of some $\bz \in \MZ$ by adding the maximum decrease in cost that each player could achieve by deviating from their current action $\bz\!k$: %
\begin{defn}[Total gap function]
\label{def:total-gap}
Given a monotone game $\MG = (\MK, (\MZ_k)_{k=1}^K, (f_k)_{k=1}^K)$, compact subsets $\MZ_k' \subseteq \MZ_k$ for each $k \in \MK$, and a point $\bz \in \MZ$, define the {\it total gap function} at $\bz$ with respect to the set $\MZ' := \prod_{k=1}^K \MZ_k'$ by
$
\Gap_\MG^{\MZ'}(\bz) := \sum_{k=1}^K \left(f_k(\bz) - \min_{\bz\!{k}' \in \MZ_k'} f_k(\bz\!{k}', \bz\!{-k})\right).
$
At times we will slightly abuse notation, and for $F := F_\MG$, write $\Gap_F^{\MZ'}$ in place of $\Gap_\MG^{\MZ'}$.
\end{defn}
As discussed in \cite{golowich_last_2020}, it is in general impossible to obtain meaningful guarantees on the total gap function by allowing each player to deviate to an action in their entire space $\MZ_k$, which necessitates defining the total gap function in Definition \ref{def:total-gap} with respect to the compact subsets $\MZ_k'$. We discuss in Remark \ref{rmk:bounded} how, in our setting, it is without loss of generality to shrink $\MZ_k$ so that $\MZ_k = \MZ_k'$ for each $k$. Proposition \ref{prop:gradient-total} below shows that in monotone games, the gradient gap function upper bounds the total gap function:
\begin{proposition}
\label{prop:gradient-total}
Suppose $\MG = (\MK, (\MZ_k)_{k=1}^K, (f_k)_{k=1}^K)$ is a monotone game, and compact subsets $\MZ_k' \subset \MZ_k$ are given, where the diameter of each $\MZ_k'$ is upper bounded by $D > 0$. Then
$$
\Gap_\MG^{\MZ'}(\bz) \leq D \sqrt{K} \cdot \| F_\MG(\bz) \|.
$$
\end{proposition}
For completeness, a proof of Proposition \ref{prop:gradient-total} is presented in Appendix~\ref{sec:basic-proofs}.
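To make Proposition \ref{prop:gradient-total} concrete, the following minimal numerical sketch (assuming Python with NumPy; the bilinear game and all constants are chosen only for illustration) checks the bound on a two-player zero-sum bilinear game, for which the total gap over Euclidean balls centered at the origin has a closed form:
\begin{verbatim}
import numpy as np

# Numerical check of Gap <= D * sqrt(K) * ||F(z)|| for a two-player
# zero-sum bilinear game f(x, y) = x' M y, with each Z_k' a ball of
# radius D centered at the origin (illustrative instance, K = 2).
rng = np.random.default_rng(0)
m, D, K = 4, 2.0, 2
M = rng.standard_normal((m, m))
x, y = rng.standard_normal(m), rng.standard_normal(m)

F = np.concatenate([M @ y, -M.T @ x])   # F_G(z) = (grad_x f, -grad_y f)

# Player 1 minimizes u'(M y) over ||u|| <= D, attaining -D ||M y||;
# player 2 minimizes -(x'M) v over ||v|| <= D, attaining -D ||M' x||.
gap = (x @ M @ y + D * np.linalg.norm(M @ y)) \
    + (-x @ M @ y + D * np.linalg.norm(M.T @ x))
assert gap <= D * np.sqrt(K) * np.linalg.norm(F) + 1e-12
\end{verbatim}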
\paragraph{Special case: convex-concave min-max optimization.} %
In a two-player zero-sum game $\MG = (\{1,2\}, (\MZ_1, \MZ_2), (f_1, f_2))$ we must have $f_1 = -f_2$, and it is straightforward to show that $\MG$ is monotone if and only if $f_1(\bz\!1, \bz\!2)$ is convex in $\bz\!1$ and concave in $\bz\!2$. Moreover, it is immediate that Nash equilibria of the game $\MG$ correspond to saddle points of $f_1$; thus a special case of our setting is that of finding saddle points of convex-concave functions (\cite{facchinei_finite-dimensional_2003}). Such saddle point problems have received much attention recently since they can be viewed as a simplified model of generative adversarial networks (e.g., \cite{gidel_variational_2018,daskalakis_training_2017,chavdarova_reducing_2019,gidel_negative_2018,yadav_stabilizing_2017}). %
\paragraph{Optimistic gradient (\OG) algorithm.}
In the {\it optimistic gradient (\OG)} algorithm, each player $k$ performs the following update:
\begin{equation}
\tag{\OG}\label{eq:opgd}
\bz\^{t+1}\!k := \bz\^t\!k - 2 \eta_t \bg\^{t}\!k + \eta_t \bg\^{t-1} \!k, %
\end{equation}
where $\bg\^t\!k = \grad_{\bz\!k}f_k(\bz\^t\!k, \bz\^{t}\!{-k})$ for $t \geq 0$.
The following essentially optimal regret bound is well-known for the \OG algorithm, when the actions of the other players $\bz\^t\!{-k}$ (often referred to as the {\it environment}'s actions) are adversarial:%
\begin{proposition}
\label{prop:op-noregret}
Assume that for all $\bz\!{-k}$ the function $\bz\!k \mapsto f_k(\bz\!k,\bz\!{-k})$ is convex. Then the regret of \OG with learning rate $\eta_t = O(D/(L\sqrt{t}))$ is $O(DL\sqrt{T})$, where $L = \max_t \| \bg\^t\!k\|$ and $D = \max \{ \|\bz\!k^*\|, \max_t \|\bz\^t\!k\| \}$.
\end{proposition}
In Proposition \ref{prop:op-noregret}, $\bz\!k^*$ is defined by $\bz^*\!k \in \argmin_{\bz\!k \in \MZ_k} \sum_{t'=0}^T f_k(\bz\!k, \bz\^{t'}\!{-k})$.
The assumption in the proposition that $\| \bz\^t\!k \| \leq D$ may be satisfied in the unconstrained setting by projecting the iterates onto the region $\MB(0, D) \subset \BR^{n_k}$, for some $D \geq \| \bz\!k^*\|$, without changing the regret bound. The implications of this modification to (\ref{eq:opgd}) are discussed further in Remark \ref{rmk:bounded}. %
\section{Last-iterate rates for \OG via adaptive potential functions}
\label{sec:ogda-lir}
In this section we show that in the unconstrained setting (namely, that where $\MZ_k = \BR^{n_k}$ for all $k \in \MK$), when all players act according to \OG, their iterates exhibit last-iterate convergence to a Nash equilibrium. %
Our convergence result holds for games $\MG$ for which the operator $F_\MG$ satisfies the following smoothness assumption:
\begin{assumption}[Smoothness]
\label{asm:smoothness}
For a monotone operator $F : \MZ \ra \BR^n$, assume that the following first and second-order Lipschitzness conditions hold, for some $\ell, \Lambda > 0$:
\begin{align}
\label{eq:1o-smooth} \forall \bz, \bz' \in \MZ, \qquad \| F(\bz) - F(\bz') \| & \leq \ell \cdot \| \bz - \bz' \| \\
\label{eq:2o-smooth} \forall \bz, \bz' \in \MZ, \qquad \| \partial F(\bz) - \partial F (\bz') \|_\sigma & \leq \Lambda \cdot \| \bz - \bz' \|.
\end{align}
Here $\partial F: \MZ \ra \BR^{n \times n}$ denotes the Jacobian of $F$.
\end{assumption}
Condition (\ref{eq:1o-smooth}) is entirely standard in the setting of solving monotone variational inequalities (\cite{nemirovski_prox-method_2004}); condition (\ref{eq:2o-smooth}) is also very mild, being made for essentially all second-order methods (e.g., \cite{abernethy_last-iterate_2019,nesterov_cubic_2006}).
By the definition of $F_\MG(\cdot)$, when all players in a game $\MG$ act according to (\ref{eq:opgd}) with constant step size $\eta$, then the action profile $\bz\^t$ takes the form
\begin{equation}
\label{eq:ogda}
\bz\^{-1}, \bz\^0 \in \BR^n, \qquad \bz\^{t+1} = \bz\^t - 2 \eta F_\MG(\bz\^t) + \eta F_\MG(\bz\^{t-1}) \ \ \forall t \geq 0.
\end{equation}
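For intuition, the following minimal sketch (assuming Python with NumPy; the bilinear instance and constants are chosen only for illustration) simulates the dynamics (\ref{eq:ogda}) and tracks the gradient gap $\| F_\MG(\bz\^t) \|$:
\begin{verbatim}
import numpy as np

# The OG dynamics (eq:ogda) on the bilinear game f(x, y) = x' M y,
# for which F(z) = A z with A = [[0, M], [-M', 0]] is monotone and
# the unique equilibrium is z* = 0 (illustrative instance).
rng = np.random.default_rng(1)
m = 3
M = rng.standard_normal((m, m))
A = np.block([[np.zeros((m, m)), M], [-M.T, np.zeros((m, m))]])

ell = np.linalg.norm(A, 2)            # Lipschitz constant of F(z) = A z
eta = 1.0 / (150.0 * ell)             # step size as in Theorem thm:ogda-last
z_prev = rng.standard_normal(2 * m)   # z^{-1}
z = rng.standard_normal(2 * m)        # z^{0}

for t in range(20000):
    z, z_prev = z - 2 * eta * A @ z + eta * A @ z_prev, z
print(np.linalg.norm(A @ z))          # gradient gap ||F(z^T)||, decays to 0
\end{verbatim}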
The main theorem of this section, Theorem \ref{thm:ogda-last}, shows that under the \OG updates (\ref{eq:ogda}), the iterates converge at a rate of $O(1/\sqrt{T})$ to a Nash equilibrium with respect to the gradient gap function:
\begin{theorem}[Last-iterate convergence of \OG]
\label{thm:ogda-last}
Suppose $\MG$ is a monotone game so that $F_\MG$ satisfies Assumption \ref{asm:smoothness}. For some $\bz\^{-1}, \bz\^0 \in \BR^n$, suppose there is $\bz^* \in \BR^n$ so that $F_\MG(\bz^*) = 0$ and $\| \bz^* - \bz\^{-1} \| \leq D, \| \bz^* - \bz\^0 \| \leq D$. Then the iterates $\bz\^T$ of \OG (\ref{eq:ogda}) for any $\eta \leq \min\left\{ \frac{1}{150 \ell}, \frac{1}{1711 D \Lambda} \right\}$ satisfy:
\begin{equation}
\label{eq:last-iterate}
\| F_\MG (\bz\^T) \| \leq \frac{60D}{\eta \sqrt{T}}.
\end{equation}
\end{theorem}
By Proposition \ref{prop:gradient-total}, we immediately get a bound on the total gap function at each time $T$:
\begin{corollary}[Total gap function for last iterate of \OG]
\label{cor:tgap}
In the setting of Theorem \ref{thm:ogda-last}, let $\MZ_k' := \MB(\bz\!k\^0, 3D)$ for each $k \in \MK$. Then, with $\MZ' = \prod_{k \in \MK} \MZ_k'$,
\begin{equation}
\label{eq:tgap-ogda}
\Gap_\MG^{\MZ'}(\bz\^T) \leq \frac{180 KD^2}{\eta \sqrt{T}}.
\end{equation}
\end{corollary}
We made no attempt to optimize the constants in Theorem \ref{thm:ogda-last} and Corollary \ref{cor:tgap}, and they can almost certainly be improved.
\begin{remark}[Bounded iterates]
\label{rmk:bounded}
Recall from the discussion following Proposition \ref{prop:op-noregret} that it is necessary to project the iterates of \OG onto a compact ball to achieve the no-regret property. As our guiding question (\ref{eq:main-question}) asks for last-iterate rates achieved by a no-regret algorithm, we should ensure that such projections are compatible with the guarantees in Theorem \ref{thm:ogda-last} and Corollary \ref{cor:tgap}. For this we note that \cite[Lemma 4(b)]{mokhtari_convergence_2019} showed that for the dynamics (\ref{eq:ogda}) without constraints, for all $t \geq 0$, $\| \bz\^t - \bz^* \| \leq 2 \| \bz\^0 - \bz^* \|$.
Therefore, as long as we make the very mild assumption of a known a priori upper bound $\| \bz^* \| \leq D/2$ (as well as $\| \bz\^{-1}\!k \| \leq D/2$, $ \| \bz\^0\!k \| \leq D/2$), if all players act according to (\ref{eq:ogda}), then the updates (\ref{eq:ogda}) remain unchanged if we project onto the constraint sets $\MZ_k := \MB({\ensuremath{\mathbf 0}}, 3D)$ at each time step $t$. %
This observation also serves as motivation for the compact sets $\MZ_k'$ used in Corollary \ref{cor:tgap}: the natural choice for $\MZ_k'$ is $\MZ_k$ itself, and by restricting $\MZ_k$ to be compact, this choice becomes possible.
\end{remark}
\subsection{Proof overview: adaptive potential functions}
\label{sec:ada-potential}
In this section we sketch the idea of the proof of Theorem \ref{thm:ogda-last}; full details of the proof may be found in Appendix~\ref{sec:ogda-proofs}. First we note that it follows easily from results of \cite{hsieh_convergence_2019} that \OG exhibits {\it best-iterate} convergence, i.e., in the setting of Theorem \ref{thm:ogda-last} we have, for each $T > 0$, $\min_{1 \leq t \leq T} \| F_\MG(\bz\^t) \| \leq O(1/\sqrt{T})$.\footnote{In this discussion we view $\eta, D$ as constants.} The main contribution of our proof is then to show the following: if we choose $t^*$ so that $\| F_\MG(\bz\^{t^*}) \| \leq O(1/\sqrt{T})$, then for all $t' \geq t^*$, we have $\| F_\MG(\bz\^{t'}) \| \leq O(1) \cdot \| F_\MG(\bz\^{t^*})\|$. This was the same general approach taken in \cite{golowich_last_2020} to prove that the extragradient (EG) algorithm has last-iterate convergence. In particular, they showed the stronger statement that $\| F_\MG(\bz\^t) \|$ may be used as an approximate potential function in the sense that it only increases by a small amount each step:
\begin{equation}
\label{eq:grow-slow}
\| F_\MG(\bz\^{t'+1}) \| \underbrace{\leq}_{t' \geq 0} (1 + \| F(\bz\^{t'}) \|^2) \cdot \| F_\MG(\bz\^{t'}) \| \underbrace{\leq}_{t' \geq t^*} (1 + O(1/T)) \cdot \| F_\MG(\bz\^{t'}) \|.
\end{equation}
However, their approach relies crucially on the fact that for the EG algorithm, $\bz\^{t+1}$ depends only on $\bz\^t$. For the \OG algorithm, it is possible that (\ref{eq:grow-slow}) fails to hold, even when $F_\MG(\bz\^t)$ is replaced by the more natural choice of $(F_\MG(\bz\^{t}), F_\MG(\bz\^{t-1}))$.\footnote{For a trivial example, suppose that $n = 1$, $F_\MG(\bz) = \bz$, $\bz\^{t'} = \delta > 0$, and $\bz\^{t'-1} = 0$. Then $\|(F_\MG(\bz\^{t'}), F_\MG(\bz\^{t'-1}))\| = \delta$ but $\|(F_\MG(\bz\^{t'+1}), F_\MG(\bz\^{t'}))\| > \delta\sqrt{2 - 4\eta}$.\label{foot:potential-grow}} %
Instead of using $\| F_\MG(\bz\^t) \|$ as a potential function in the sense of (\ref{eq:grow-slow}), we propose instead to track the behavior of $\| \tilde F\^t \|$, where %
\begin{equation}
\label{eq:Ct}
\tilde F\^t := F_\MG(\bz\^t + \eta F_\MG(\bz\^{t-1})) + \bC\^{t-1} \cdot F_\MG(\bz\^{t-1}) \in \BR^n,
\end{equation}
and the matrices $\bC\^{t-1} \in \BR^{n \times n}$ are defined recursively {\it backwards}, i.e., $\bC\^{t-1}$ depends directly on $\bC\^t$, which depends directly on $\bC\^{t+1}$, and so on. For an appropriate choice of the matrices $\bC\^t$, we show that $\tilde F\^{t+1} = (I - \eta \bA\^t + \bC\^t) \cdot \tilde F\^t$, for some matrix $\bA\^t \approx \partial F_\MG(\bz\^t)$. We then show that for $t \geq t^*$, it holds that $\| I - \eta \bA\^t + \bC\^t \|_\sigma \leq 1 + O(1/T)$, from which it follows that $\| \tilde F\^{t+1} \| \leq (1 + O(1/T)) \cdot \| \tilde F\^t \|$. This modification of (\ref{eq:grow-slow}) is enough to show the desired upper bound of $\| F_\MG(\bz\^T) \| \leq O(1/\sqrt{T})$.
To motivate the choice of $\tilde F\^t$ in (\ref{eq:Ct}) it is helpful to consider the simple case where $F(\bz) = \bA \bz$ for some $\bA \in \BR^{n \times n}$, which was studied by \cite{liang_interaction_2018}. Simple algebraic manipulations using (\ref{eq:ogda}) (detailed in Appendix~\ref{sec:ogda-proofs}) show that, for the matrix $\bC := \frac{(I + (2\eta \bA)^2)^{1/2} - I}{2}$, we have $\tilde F\^{t+1} = (I - \eta \bA + \bC) \tilde F\^t$ for all $t$. It may be verified that we indeed have $\bA\^t = \bA$ and $\bC\^t = \bC$ for all $t$ in this case, and thus (\ref{eq:Ct}) may be viewed as a generalization of these calculations to the nonlinear case.
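The following minimal sketch (assuming Python with NumPy and SciPy; the bilinear instance is chosen only for illustration) verifies this linear-case identity numerically:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

# Check of the linear-case identity: for F(z) = A z with a bilinear
# A = [[0, M], [-M', 0]] and C = ((I + (2 eta A)^2)^{1/2} - I) / 2,
# the vectors F~^t = A(z^t + eta A z^{t-1}) + C A z^{t-1} satisfy
# F~^{t+1} = (I - eta A + C) F~^t along the OG dynamics (eq:ogda).
rng = np.random.default_rng(2)
m = 3
M = rng.standard_normal((m, m))
A = np.block([[np.zeros((m, m)), M], [-M.T, np.zeros((m, m))]])
n, I = 2 * m, np.eye(2 * m)
eta = 0.1 / np.linalg.norm(A, 2)
C = (np.real(sqrtm(I + (2 * eta * A) @ (2 * eta * A))) - I) / 2

def F_tilde(z, z_prev):
    return A @ (z + eta * A @ z_prev) + C @ (A @ z_prev)

z_prev, z = rng.standard_normal(n), rng.standard_normal(n)
for _ in range(5):
    z_next = z - 2 * eta * A @ z + eta * A @ z_prev
    assert np.allclose(F_tilde(z_next, z),
                       (I - eta * A + C) @ F_tilde(z, z_prev))
    z_prev, z = z, z_next
\end{verbatim}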
\paragraph{Adaptive potential functions.}
In general, a {\it potential function} $\Phi(F_\MG, \bz)$ depends on the problem instance, here taken to be $F_\MG$, and an element $\bz$ representing the current state of the algorithm. Many convergence analyses from optimization (e.g., \cite{bansal_potential-function_2017,wilson_lyapunov_2018}, and references therein) have as a crucial element in their proofs a statement of the form $\Phi(F_\MG, \bz\^{t+1}) \lesssim \Phi(F_\MG, \bz\^t)$. For example, for the iterates $\bz\^t$ of the EG algorithm, \cite{golowich_last_2020} (see (\ref{eq:grow-slow})) used the potential function $\Phi(F_\MG, \bz\^{t}) := \| F_\MG(\bz\^t) \|$.
Our approach of controlling the norm of the vectors $\tilde F\^t$ defined in (\ref{eq:Ct}) can also be viewed as an instantiation of the potential function approach: since each iterate of \OG depends on the previous two iterates, the state is now given by $\bv\^t := (\bz\^{t-1}, \bz\^t)$. The potential function is given by $\Phi_{\OGmath}(F_\MG, \bv\^t) := \| \tilde F\^t\|$, where $\tilde F\^t$ is defined in (\ref{eq:Ct}) and indeed only depends on $\bv\^t$ once $F_\MG$ is fixed, since $\bv\^t$ determines $\bz\^{t'}$ for all $t' \geq t$ (as \OG is deterministic), which in turn determine $\bC\^{t-1}$. However, the potential function $\Phi_{\OGmath}$ is quite unlike most other choices of potential functions in optimization (e.g., \cite{bansal_potential-function_2017}) in the sense that it depends {\it globally} on $F_\MG$: for any $t' > t$, a local change in $F_\MG$ in the neighborhood of $\bv\^{t'}$ may cause a change in $\Phi_{\OGmath}(F_\MG, \bv\^t)$, {\it even if $\| \bv\^t - \bv\^{t'} \|$ is arbitrarily large}. Because $\Phi_{\OGmath}(F_\MG, \bv\^t)$ adapts to the behavior of $F_\MG$ at iterates later on in the optimization sequence, we call it an
{\it adaptive potential function}. We are not aware of any prior works using such adaptive potential functions to prove last-iterate convergence results, and we believe this technique may find additional applications.
\section{Lower bound for convergence of $p$-SCLIs}
\label{sec:scli-lb}
The main result of this section is Theorem \ref{thm:mm-spectral}, stating that the bounds on last-iterate convergence in Theorem \ref{thm:ogda-last} and Corollary \ref{cor:tgap} are tight when we require the iterates $\bz\^T$ to be produced by an optimization algorithm satisfying a particular formal definition of ``last-iterate convergence''. %
Notice that we cannot hope to prove that they are tight for {\it all} first-order algorithms, since the averaged iterates $\bar \bz\^T := \frac 1T \sum_{t=1}^T \bz\^t$ of \OG satisfy $\Gap_\MG^{\MZ'}(\bar \bz\^T) \leq O\left( \frac{D^2}{\eta T} \right)$ \cite[Theorem 2]{mokhtari_convergence_2019}. Similar to \cite{golowich_last_2020}, we use {\it $p$-stationary canonical linear iterative methods ($p$-SCLIs)} to formalize the notion of ``last-iterate convergence''. \cite{golowich_last_2020} only considered the special case $p=1$ to establish a similar lower bound to Theorem \ref{thm:mm-spectral} for a family of last-iterate algorithms including the extragradient algorithm. The case $p > 1$ leads to new difficulties in our proof since even for $p = 2$ we must rule out algorithms such as Nesterov's accelerated gradient descent (\cite{nesterov_introductory_1975}) and Polyak's heavy-ball method (\cite{polyak_introduction_1987}), a situation that did not arise for $p=1$.
\if 0
$\MFbil_{n,\ell,D}$, defined to be the set of $\ell$-Lipschitz operators $F : \BR^n \ra \BR^n$ of the form
\begin{equation}
\label{eq:linear-F}
F(\bz) = \bA \bz + \bb, \qquad \bz = \matx{\bx \\ \by}, \quad \bA = \matx{{\ensuremath{\mathbf 0}} & \bM \\ -\bM^\t & {\ensuremath{\mathbf 0}}}, \quad \bb = \matx{\bb_1 \\ -\bb_2},
\end{equation}
for which $\bA$ is of full-rank and $-\bA^{-1} \bb \in \MB_{\BR^{n/2}}(0,D) \times \MB_{\BR^{n/2}}(0,D)$. For $F$ in (\ref{eq:linear-F}), we have $F = F_\MG$ for the two-player zero-sum game $\MG$ with objective function $f(\bx, \by) = \bx^\t \bM \by + \bb_1^\t \bx + \bb_2^\t \by$. The unique Nash eqilibrium of $\MG$ is given by $\bz^* = -\bA^{-1} \bb$.
\fi
\begin{defn}[$p$-SCLIs \cite{arjevani_lower_2015,azizian_accelerating_2020}]
\label{def:pcli}
An algorithm $\MA$ is a {\it first-order $p$-stationary canonical linear iterative algorithm ($p$-SCLI)} if, given a monotone operator $F$, and an arbitrary set of $p$ initialization points $\bz\^0, \bz\^{-1}, \ldots, \bz\^{-p+1} \in \BR^n$, it generates iterates $\bz\^t$, $t \geq 1$, for which
\begin{equation}
\bz\^t = \sum_{j=0}^{p-1} \alpha_j \cdot F(\bz\^{t-p+j}) + \beta_j \cdot \bz\^{t-p+j},
\label{eq:scli-update}
\end{equation}
for $t = 1,2,\ldots$, where $\alpha_j, \beta_j \in \BR$ are any scalars.\footnote{We use slightly different terminology from \cite{arjevani_lower_2015}; technically, the $p$-SCLIs considered in this paper are those in \cite{arjevani_lower_2015} with {\it linear coefficient matrices}.} %
\end{defn}
From (\ref{eq:ogda}) it is evident that \OG with constant step size $\eta$ is a 2-SCLI with $\beta_1 = 1, \beta_0 = 0, \alpha_1 = -2\eta, \alpha_0 = \eta$. Many standard algorithms for convex function minimization, including gradient descent, Nesterov's accelerated gradient descent (AGD), and Polyak's heavy-ball method, %
are of the form (\ref{eq:scli-update}) as well. We additionally remark that several variants of SCLIs (and their non-stationary counterpart, CLIs) have been considered in recent papers proving lower bounds for min-max optimization (\cite{azizian_tight_2019,ibrahim_linear_2019,azizian_accelerating_2020}).
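For concreteness, the following minimal sketch (assuming Python with NumPy; the linear operator is illustrative) checks that the generic update (\ref{eq:scli-update}) with $p=2$ and the coefficients above reproduces the \OG iterates:
\begin{verbatim}
import numpy as np

# OG with constant step size eta as the 2-SCLI of (eq:scli-update)
# with beta_1 = 1, beta_0 = 0, alpha_1 = -2 eta, alpha_0 = eta,
# on an illustrative linear operator F(z) = A z.
rng = np.random.default_rng(3)
n, eta = 4, 0.05
A = rng.standard_normal((n, n))

alpha, beta = [eta, -2 * eta], [0.0, 1.0]              # indexed by j = 0, 1
zs = [rng.standard_normal(n), rng.standard_normal(n)]  # z^{-1}, z^{0}

for t in range(10):
    # generic p-SCLI update (eq:scli-update) with p = 2
    z_scli = sum(alpha[j] * A @ zs[-2 + j] + beta[j] * zs[-2 + j]
                 for j in range(2))
    # OG update (eq:ogda)
    z_og = zs[-1] - 2 * eta * A @ zs[-1] + eta * A @ zs[-2]
    assert np.allclose(z_scli, z_og)
    zs.append(z_og)
\end{verbatim}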
For simplicity, we restrict our attention to monotone operators $F$ arising as $F = F_\MG : \BR^n \ra \BR^n$ for a two-player zero-sum game $\MG$ (i.e., the setting of min-max optimization). Suppose further that $n$ is even, and for $\bz \in \BR^n$ write $\bz = (\bx, \by)$ where $\bx, \by \in \BR^{n/2}$.
Define $\MFbil_{n,\ell,D}$ to be the set of $\ell$-Lipschitz operators $F : \BR^n \ra \BR^n$ of the form $F(\bx, \by) = (\grad_\bx f(\bx, \by), -\grad_\by f(\bx, \by))^\t$ for some bilinear function $f : \BR^{n/2} \times \BR^{n/2} \ra \BR$, with a unique equilibrium point $\bz^* = (\bx^*, \by^*)$%
, which satisfies $\bz^* \in \MD_D := \MB_{\BR^{n/2}}({\ensuremath{\mathbf 0}},D) \times \MB_{\BR^{n/2}}({\ensuremath{\mathbf 0}},D)$. The following Theorem \ref{thm:mm-spectral} uses functions in $\MFbil_{n,\ell,D}$ as ``hard instances'' to show that the $O(1/\sqrt{T})$ rate of Corollary \ref{cor:tgap} cannot be improved by more than an {\it algorithm-dependent} constant factor.
\begin{theorem}[Algorithm-dependent lower bound for $p$-SCLIs]
\label{thm:mm-spectral}
Fix $\ell, D > 0$, let $\MA$ be a $p$-SCLI, and let $\bz\^t$ denote the $t$th iterate of $\MA$. Then there are constants $c_\MA, T_\MA > 0$ so that the following holds: For all $T \geq T_\MA$, there is some $F \in \MFbil_{n,\ell,D}$ so that for some initialization $\bz\^{0}, \ldots, \bz\^{-p+1} \in \MD_D$ and $T' \in \{ T, T+1, \ldots, T+p-1 \}$, it holds that $\Gap_F^{\MD_{2D}}(\bz\^{T'}) \geq \frac{c_\MA \ell D^2}{\sqrt{T}}$.
\end{theorem}
We remark that the order of quantifiers in Theorem \ref{thm:mm-spectral} is important: if instead we first fix a monotone operator $F \in \MFbil_{n,\ell,D}$ corresponding to some bilinear function $f(\bx, \by) = \bx^\t \bM \by$, then as shown in \cite[Theorem 3]{liang_interaction_2018}, the iterates $\bz\^T = (\bx\^T, \by\^T)$ of the \OG algorithm will converge at a rate of $e^{-O\left( \frac{\sigma_{\min}(\bM)^2}{\sigma_{\max}(\bM)^2} \cdot T \right)}$, which eventually becomes smaller than the sublinear rate of $1/\sqrt{T}$.\footnote{$\sigma_{\min}(\bM)$ and $\sigma_{\max}(\bM)$ denote the minimum and maximum singular values of $\bM$, respectively. The matrix $\bM$ is assumed in \cite{liang_interaction_2018} to be a square matrix of full rank (which holds for the construction used to prove Theorem \ref{thm:mm-spectral}).} Such ``instance-specific'' bounds are complementary to the minimax perspective taken in this paper.
We briefly discuss the proof of Theorem \ref{thm:mm-spectral}; the full proof is deferred to Appendix~\ref{sec:lb-proofs}. As in prior work proving lower bounds for $p$-SCLIs (\cite{arjevani_lower_2015,ibrahim_linear_2019}), we reduce the problem of proving a lower bound on $\Gap_\MG^{\MD_D}(\bz\^t)$ to the problem of proving a lower bound on the supremum of the spectral norms of a family of polynomials (which depends on $\MA$). Recall that for a polynomial $p(z)$, its {\it spectral norm} $\rho(p(z))$ is the maximum norm of any root. We show: %
\begin{proposition}
\label{prop:local-linear}
Suppose $q(z)$ is a degree-$p$ monic real polynomial such that $q(1) = 0$, $r(z)$ is a polynomial of degree $p-1$, and $\ell > 0$. Then there is a constant $C_0 > 0$, depending only on $q(z), r(z)$ and $\ell$, and some $\mu_0 \in (0,\ell)$, so that for any $\mu \in (0, \mu_0)$,
$$
\sup_{\nu \in [\mu, \ell]} \rho(q(z) - \nu \cdot r(z)) \geq 1 - C_0 \cdot \frac{\mu}{\ell}.
$$
\end{proposition}
The proof of Proposition \ref{prop:local-linear} uses elementary tools from complex analysis. The fact that the constant $C_0$ in Proposition \ref{prop:local-linear} depends on $q(z), r(z)$ leads to the fact that the constants $c_\MA, T_\MA$ in Theorem \ref{thm:mm-spectral} depend on $\MA$. Moreover, we remark that this dependence cannot be avoided in Proposition \ref{prop:local-linear}, so removing it from Theorem \ref{thm:mm-spectral} will require new techniques:
\begin{proposition}[Tightness of Proposition \ref{prop:local-linear}]
\label{prop:local-linear-tight}
For any constant $C_0 > 0$ and $\mu_0 \in (0,\ell)$, there is some $\mu \in (0,\mu_0)$ and polynomials $q(z), r(z)$ so that $\sup_{\nu \in [\mu, \ell]} \rho(q(z) - \nu \cdot r(z)) < 1 - C_0 \cdot \mu$. Moreover, the choice of the polynomials is given by
\begin{equation}
\label{eq:qr}
q(z) = \ell ( z -\alpha)(z-1), \qquad r(z) = -(1+\alpha)z + \alpha \qquad \text{for} \qquad \alpha := \frac{\sqrt{\ell} - \sqrt{\mu}}{\sqrt{\ell} + \sqrt{\mu}}.
\end{equation}
\end{proposition}
The choice of polynomials $q(z), r(z)$ in (\ref{eq:qr}) is exactly the pair that arises in the $p$-SCLI analysis of Nesterov's AGD \cite{arjevani_lower_2015}; as we discuss further in Appendix~\ref{sec:lb-proofs}, Proposition \ref{prop:local-linear} is tight, then, even for $p=2$, because acceleration is possible with a $2$-SCLI.
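As a numerical sanity check (a minimal sketch assuming Python with NumPy; the constants $\ell, \mu$ are illustrative), computing $\sup_{\nu \in [\mu, \ell]} \rho(q(z) - \nu \cdot r(z))$ for the polynomials in (\ref{eq:qr}) recovers the accelerated rate $1 - \sqrt{\mu/\ell}$:
\begin{verbatim}
import numpy as np

# Root radius rho of q(z) - nu r(z) for the AGD polynomials in (eq:qr):
# sup over nu in [mu, ell] is ~ 1 - sqrt(mu / ell), the accelerated rate.
ell, mu = 1.0, 1e-4
alpha = (np.sqrt(ell) - np.sqrt(mu)) / (np.sqrt(ell) + np.sqrt(mu))

def rho(nu):
    # q(z) - nu r(z) = ell z^2 - (ell - nu)(1 + alpha) z + (ell - nu) alpha
    return np.abs(np.roots([ell, -(ell - nu) * (1 + alpha),
                            (ell - nu) * alpha])).max()

sup_rho = max(rho(nu) for nu in np.linspace(mu, ell, 2001))
print(sup_rho, 1 - np.sqrt(mu / ell))   # both approximately 0.99
\end{verbatim}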
As byproducts of our lower bound analysis, we additionally obtain the following:
\begin{itemize}
\item Using Proposition \ref{prop:local-linear}, we show that any $p$-SCLI algorithm must have a rate of at least $\Omega_\MA(1/T)$ for smooth convex function minimization (again, with an algorithm-dependent constant).\footnote{\cite{arjevani_iteration_2016} claimed to prove a similar lower bound for stationary algorithms in the setting of smooth convex function minimization; however, as we discuss in Appendix~\ref{sec:lb-proofs}, their results only apply to the strongly convex case, where they show a linear lower bound.%
} This is slower than the $O(1/T^2)$ error achievable with Nesterov's AGD with a time-varying learning rate. %
\item We give a direct proof of the following statement, which was conjectured by \cite{arjevani_lower_2015}: for polynomials $q,r$ in the setting of Proposition \ref{prop:local-linear}, for any $0 < \mu < \ell$, there exists $\nu \in [\mu,\ell]$ so that $\rho(q(z) - \nu \cdot r(z)) \geq \frac{\sqrt{\ell/\mu} - 1}{\sqrt{\ell/\mu} + 1}$. Using this statement, for the setting of Theorem \ref{thm:mm-spectral}, we give a proof of an {\it algorithm-independent} lower bound $\Gap_F^{\MD_D} (\bz\^t) \geq \Omega(\ell D^2 / T)$. Though the algorithm-independent lower bound of $\Omega(\ell D^2/T)$ has already been established in the literature, even for non-stationary CLIs (e.g., \cite[Proposition 5]{azizian_accelerating_2020}), we give a proof that is an alternative to the existing approaches.
\end{itemize}
\vspace{-0.4cm}
\section{Discussion}
In this paper we proved tight last-iterate convergence rates for smooth monotone games when all players act according to the optimistic gradient algorithm, which is no-regret. We believe that there are many fruitful directions for future research. First, it would be interesting to obtain last-iterate rates in the case that each player's actions are constrained to the simplex and they use the {\it optimistic multiplicative weights update (OMWU)} algorithm. \cite{daskalakis_last-iterate_2018,lei_last_2020} showed that OMWU exhibits last-iterate convergence, but non-asymptotic rates remain unknown even for the case that $F_\MG(\cdot)$ is linear, which includes finite-action polymatrix games. %
Next, it would be interesting to determine whether Theorem \ref{thm:ogda-last} holds if (\ref{eq:2o-smooth}) is removed from Assumption \ref{asm:smoothness}; this problem is open even for the EG algorithm (\cite{golowich_last_2020}). Finally, it would be interesting to extend our results to the setting where players receive noisy gradients (i.e., the stochastic case). %
As for lower bounds, it would be interesting to determine whether an algorithm-independent lower bound of $\Omega(1/\sqrt{T})$ in the context of Theorem \ref{thm:mm-spectral} could be proven for stationary $p$-SCLIs. %
As far as we are aware, this question is open even for convex minimization (where the rate would be $\Omega(1/T)$).
\neurips{
\newpage
\section*{Broader impact}
As this is a theoretical paper, we expect that the direct ethical and societal impacts of this work will be limited. As the setting of multi-agent learning in games describes many systems with potential for practical impact, such as GANs, we believe that the insights developed in this paper may eventually aid the improvement of such technologies. If not deployed and regulated carefully, technologies such as GANs could lead to harmful outcomes, such as through the proliferation of false media (``deepfakes''). We hope that, through a combination of legal and technological measures, such negative impacts of GANs can be limited and the positive applications, such as drug discovery and image analysis in the medical field, may be realized.
}
\neurips{
\begin{ack}
We thank Yossi Arjevani for a helpful conversation.
N.G.~is supported by a Fannie \& John Hertz Foundation Fellowship and an NSF Graduate Fellowship. C.D.~is supported by NSF Awards IIS-1741137, CCF-1617730 and CCF-1901292, by a Simons Investigator Award, and by the DOE PhILMs project (No. DE-AC05-76RL01830).
\end{ack}}
\arxiv{
\section*{Acknowledgements}
We thank Yossi Arjevani for a helpful conversation.
}
\section{Introduction}
\input{sections/introduction}
\section{Our Procedure} \label{section: overview}
\input{sections/overview}
\section{Convergence Theory for Orthogonalization} \label{section:full-memory}
\input{sections/full_memory}
\section{Convergence Theory for Base Methods} \label{section:no-memory}
\input{sections/no_memory}
\section{Numerical Experiments} \label{section: experiments}
\input{sections/experiments}
\section{Conclusion} \label{section:conclusion}
\input{sections/conclusion}
\small
\bibliographystyle{plainnat}
\subsection{Core Results} \label{subsection:core-results-full-memory}
We establish two key results. First, we show that our procedure is an orthogonalization procedure: that is, the matrices $\lbrace S_k \rbrace$ project the current search direction onto a subspace that is orthogonal to all previous search directions. Second, we characterize the limit point of our iterates, $\lbrace x_k \rbrace$, in terms of a true solution of the linear system and the subspace generated by the rank-one RPMs, $\lbrace V_k \rbrace$.
\begin{theorem} \label{theorem: S are orthogonal projections}
Let $\lbrace w_l : l + 1 \in \mathbb{N} \rbrace \subset \mathbb{R}^n$ be an arbitrary sequence, and let $\mathcal{R}_0 = \lbrace 0 \rbrace \subset \mathbb{R}^d$ and $\mathcal{R}_{l} = \linspan{A'w_0,\ldots,A'w_{l-1}}$ for $l \in \mathbb{N}$. Now, let $S_0 = I_d$ and $\lbrace S_l : l \in \mathbb{N} \rbrace$ be defined recursively as in \cref{eqn: rank-one update-matrix}. Then, for $l \geq 0$, $S_l$ is an orthogonal projection matrix onto $\mathcal{R}_l^\perp$.
\end{theorem}
\begin{proof}
We will prove the result by induction. For the base case, $l=0$, $S_0 = I_d$. It follows that $S_0$ is an orthogonal projection onto $\mathcal{R}_0^\perp = \mathbb{R}^d$ since $S_0^2 = I_d^2 = I_d = S_0$ and $\range{I_d} = \mathbb{R}^d$.
Now suppose that the result holds for $l > 0$. If $S_l A'w_l = 0$ then there is nothing to show. Therefore, for the remainder of this proof, suppose $S_l A' w_l \neq 0$.
First, we show that $S_{l+1}$ is a projection matrix by verifying that $S_{l+1}^2 = S_{l+1}$ by direct calculation. Making use of the recursive definition of $S_{l+1}$ and the induction hypothesis that $S_l^2 = S_l$,
\begin{equation}
\begin{aligned}
S_{l+1}^2 &= \left( S_l - \frac{S_l A' w_l w_l' A S_l}{w_l'A S_l A' w_l}\right) \left( S_l - \frac{S_l A' w_l w_l'A S_l}{w_l'A S_l A' w_l}\right) \\
&= \left( S_l - \frac{S_l A' w_l w_l' A S_l}{w_l'A S_l A' w_l}\right)\left( I_d - \frac{A'w_lw_l'A S_l}{w_l'A S_l A' w_l} \right) \\
&= S_l - 2\frac{S_l A' w_l w_l' A S_l}{w_l'A S_l A' w_l} + \frac{S_l A' w_l w_l' A S_l}{w_l'A S_l A' w_l} = S_{l+1}.
\end{aligned}
\end{equation}
Second, we use the fact that a projection is orthogonal if and only if it is self-adjoint to show that $S_{l+1}$ is an orthogonal projection. By induction, because $S_l$ is an orthogonal projection, $S_l' = S_l$, and so
\begin{equation}
S_{l+1}' = S_l' - \frac{S_l A' w_l w_l' A S_l}{w_l'A S_l A' w_l} = S_{l+1}.
\end{equation}
Finally, let $v$ be in the range of $S_{l+1}$, and decompose $v$ as $v = u+y$ where $u \in \mathcal{R}_{l+1}^\perp$ and $y \in \mathcal{R}_{l+1}$. We will show that $y = 0$, which characterizes the range of $S_{l+1}$ as being all vectors orthogonal to $\mathcal{R}_{l+1}$. To show this note that because $S_{l+1}$ is a projection matrix, we have that
\begin{equation} \label{eqn-proof-S-ortho:1}
u + y = v = S_{l+1} v = S_{l+1}u + S_{l+1} y.
\end{equation}
By construction $\mathcal{R}_l \subset \mathcal{R}_{l+1}$ and so $u \in \mathcal{R}_l^{\perp}$. Using the induction hypothesis, we then have that $S_l u = u$. Moreover, because $u \in \mathcal{R}_{l+1}^\perp$ by construction, $u'A'w_l = 0$. Then, using the recursive definition of $S_{l+1}$, we have that
\begin{equation}
S_{l+1} u = S_l u - \frac{S_l A' w_l w_l' A S_lu}{w_l'A S_l A'w_l} = u - \frac{S_l A' w_l w_l' A u}{w_l'A S_l A'w_l} = u.
\end{equation}
Therefore, $u = S_{l+1}u$ and, by \cref{eqn-proof-S-ortho:1}, $y = S_{l+1} y$. We now decompose $y$ into $y_1$ and $y_2$ where $y_1 \in \mathcal{R}_l$ and $y_2 \in \mathcal{R}_l^\perp \cap \mathcal{R}_{l+1}$. By the induction hypothesis, $\mathcal{R}_l^\perp \cap \mathcal{R}_{l+1} = \linspan{ S_{l} A' w_l }$. Therefore, $S_l y = y_2$ and $\exists \alpha \in \mathbb{R}$ such that $y_2 = \alpha S_l A' w_l$. Finally, using the recursive formulation of $S_{l+1}$ and $S_l y = y = \alpha S_l A' w_l$,
\begin{equation}
y = S_{l+1} y = S_l y - \frac{S_l A' w_l w_l' A S_l y}{w_l' A S_l A' w_l} = \alpha S_l A' w_l - \alpha S_l A' w_l = 0.
\end{equation}
Thus, we have shown that the range of $S_{l+1}$ is orthogonal to $\mathcal{R}_{l+1}$.
\end{proof}
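To illustrate, the following minimal sketch (assuming Python with NumPy; the instance is illustrative) implements the recursions \cref{eqn: rank-one update-param,eqn: rank-one update-matrix} as they appear in the proof above, and numerically checks the conclusion of \cref{theorem: S are orthogonal projections}:
\begin{verbatim}
import numpy as np

# The recursions for S_l and x_l as they appear in the proof above,
# with numerical checks that each S_l is a symmetric idempotent matrix
# annihilating the past directions A' w_0, ..., A' w_{l-1}.
rng = np.random.default_rng(0)
n, d = 8, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)          # a consistent system A x = b

S, x, past = np.eye(d), np.zeros(d), []
for l in range(d):
    w = rng.standard_normal(n)
    g = S @ (A.T @ w)                   # S_l A' w_l
    if np.linalg.norm(g) > 1e-12:
        # note w' A S_l A' w_l = ||S_l A' w_l||^2 since S_l = S_l' = S_l^2
        x = x + g * (w @ (b - A @ x)) / (g @ g)
        S = S - np.outer(g, g) / (g @ g)
    past.append(A.T @ w)
    assert np.allclose(S, S.T) and np.allclose(S @ S, S)
    assert all(np.allclose(S @ v, 0) for v in past)
\end{verbatim}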
From \cref{theorem: S are orthogonal projections}, we see that our procedure is an orthogonalization procedure just like quasi-Newton methods \citep[][Ch. 8]{nocedal2006} and conjugate direction methods \citep{hestenes2012}. As a consequence, we have the following common and insightful characterization of the iterates of such an orthogonalization procedure.
\begin{corollary} \label{corollary: orthogonalization-iterate-subspace}
In addition to the setting of \cref{theorem: S are orthogonal projections}, let $x_0 \in \mathbb{R}^d$ be arbitrary and let $\lbrace x_{l} : l \in \mathbb{N} \rbrace$ be defined according to \cref{eqn: rank-one update-param}. For any $l \geq 0$, $x_{l+1} \in \linspan{x_0,A'w_0,\ldots,A'w_l}$.
\end{corollary}
\begin{proof}
We again proceed by induction. Because $S_0 = I_d$, the case of $x_1$ follows from the recursion formula, \cref{eqn: rank-one update-param}. Now suppose that the result holds up to some $l > 0$. Note, by the recursion formula
\begin{equation}
x_{l+1} = x_l + \gamma S_l A' w_l, \quad \text{where} \quad \gamma = \begin{cases} \frac{w_l'(b - Ax_l)}{w_l' A S_l A'w_l} & S_l A' w_l \neq 0 \\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Therefore, $x_{l+1} \in \linspan{ x_l, S_l A' w_l}$. Now, using the induction hypothesis,
\begin{equation}
\linspan{ x_l, S_l A' w_l } \subset \linspan{x_0, A'w_0,\ldots,A'w_{l-1}, S_l A' w_l }.
\end{equation}
If $S_l A' w_l = 0$, then $A'w_l \in \mathcal{R}_{l}$. Consequently,
\begin{equation}
x_{l+1} \in \linspan{x_0, A'w_0,\ldots,A'w_{l-1}} = \linspan{x_0,A'w_0,\ldots,A'w_l}.
\end{equation}
Now suppose $S_l A' w_l \neq 0$. By \cref{theorem: S are orthogonal projections}, $S_l$ is an orthogonal projection onto $\linspan{A'w_0,A'w_1,\ldots,A'w_{l-1}}^{\perp}$, so $S_l A' w_l = A'w_l - (I_d - S_l) A' w_l \in \linspan{A'w_0,\ldots,A'w_l}$.
Hence,
$x_{l+1} \in \linspan{ x_l, S_l A' w_l}$, which is contained in $\linspan{x_0, A'w_0,\ldots,A'w_l}$.
\end{proof}
\Cref{corollary: orthogonalization-iterate-subspace} demonstrates that, as is common with orthogonalization procedures, the iterates are in a subspace generated by the initial iterate and the search directions $\lbrace A'w_0,\ldots,A'w_l \rbrace$. For deterministic procedures, such a characterization is usually sufficient and the next step would be to demonstrate that the iterates are the closest points to the true solutions within the given subspace. However, for a procedure in which the subspace is randomly generated, there is substantially more nuance. In the interest of space, we will not go through the litany of issues, but rather skip to the appropriate definitions and characterizations.
First, we begin by defining the maximal possible subspace that can be generated by a random quantity $A'w$. Let $w \in \mathbb{R}^n$ be a random variable defined on a probability space $\Omega$, and let
\begin{equation} \label{eqn: row span}
\mathcal{N}(w) = \linspan{ z \in \mathbb{R}^d: \Prb{z'A'w = 0} = 1} \text{ and } \mathcal{R}(w) = \mathcal{N}(w)^\perp.
\end{equation}
Moreover, we define the subspace $\mathcal{V}(w)$ such that $\mathcal{V}(w) \perp \mathcal{R}(w)$ and $\mathcal{V}(w) + \mathcal{R}(w) = \mathrm{row}(A)$ (hence, $\mathcal{V}(w) \oplus \mathcal{R}(w) = \mathrm{row}(A)$). Correspondingly, let $P_W$ denote the orthogonal projection matrix onto a subspace $W \subset \mathbb{R}^d$. The following result characterizes $\mathcal{R}(w)$.
\begin{lemma} \label{lemma: characterize R_w}
For $\mathcal{R}(w)$ as defined in \cref{eqn: row span}, $\mathcal{R}(w)$ is the smallest subspace of $\mathbb{R}^d$ such that $\Prb{ A'w \in \mathcal{R}(w) } = 1$.
\end{lemma}
\begin{proof}
First, we verify that $\Prb{A'w \in \mathcal{R}(w)} = 1$. Suppose that $\Prb{A'w \in \mathcal{R}(w)} < 1$. Then,
\begin{equation}
\Prb{\exists z \perp \mathcal{R}(w): z'A'w \neq 0} > 0.
\end{equation}
However, we know that for any $z$ such that $z \perp \mathcal{R}(w)$, $z \in \mathcal{N}(w)$ and $z'A'w = 0$ with probability one, which is a contradiction. Hence, $\Prb{A'w \in \mathcal{R}(w)} = 1$.
Now suppose there is a proper subspace of $\mathcal{R}(w)$, $U$, such that $\Prb{ A'w \in U} = 1$. Let $U^{\perp \mathcal{R}(w)}$ denote the subspace orthogonal to $U$ relative to $\mathcal{R}(w)$. Then, $\Prb{z' A'w = 0 } = 1$ for any $z \in U^{\perp \mathcal{R}(w)}$, which implies that $U^{\perp \mathcal{R}(w)} \subset \mathcal{N}(w)$. However, $U^{\perp \mathcal{R}(w)} \subset \mathcal{R}(w)$ and $\mathcal{R}(w) \perp \mathcal{N}(w)$, so $U^{\perp \mathcal{R}(w)} = \lbrace 0 \rbrace$ and hence $U = \mathcal{R}(w)$, a contradiction. Thus, $\mathcal{R}(w)$ is the smallest subspace such that $\Prb{A'w \in \mathcal{R}(w)} = 1$.
\end{proof}
Second, we must define when the maximal possible subspace of $A'w$ can be achieved by a sequence of random variables $\lbrace A'w_0,\ldots,A'w_l \rbrace$, which may or may not be related to $A'w$. Note that, by not requiring a relationship between $\lbrace A'w_0,\ldots,A'w_l \rbrace$ and $A'w$, our next result is particularly general and applies to a variety of situations, from the case in which $\lbrace w_l \rbrace$ are independent copies of $w$ to the case where $\lbrace w_l \rbrace$ have complex dependencies. Now, let $\lbrace w_l : l +1 \in \mathbb{N} \rbrace \subset \mathbb{R}^n$ be random variables defined on $\Omega$, and let $T$ be a stopping time defined by
\begin{equation} \label{eqn: stopping time}
T = \min \lbrace k \geq 0 : \linspan{A'w_0,\ldots,A'w_k} \supset \mathcal{R}(w) \rbrace.\footnote{Below we will assume that $A'w \in \mathcal{R}(w)$ with probability one. If we relax this, this will change the results in a predictable manner but will require additional notation. To avoid such notation, we will leave this more general case to future work if there is a sampling case that merits it.}
\end{equation}
Using this notation, we have the following fundamental characterization result of the limit points of $\lbrace x_l \rbrace$.
\begin{theorem} \label{theorem: terminal iteration characterization}
Let $w$ be a random variable, and let $\mathcal{R}(w)$, $\mathcal{N}(w)$ and $\mathcal{V}(w)$ be as defined above (see \cref{eqn: row span}).
Moreover, let $w_0, w_1,\ldots \in \mathbb{R}^n$ be random variables such that $\Prb{A'w_l \in \mathcal{R}(w)}=1$ for all $l+1 \in \mathbb{N}$, and let $T$ be as defined in \cref{eqn: stopping time}. Let $x_0 \in \mathbb{R}^d$ be arbitrary and $S_0 = I_d$, and let $\lbrace x_l : l \in \mathbb{N} \rbrace$ and $\lbrace S_l : l \in \mathbb{N} \rbrace$ be defined as in \cref{eqn: rank-one update-param,eqn: rank-one update-matrix}. On the event $\lbrace T < \infty \rbrace$,
\begin{enumerate}
\item For any $s \geq T+1$, $S_{T+1} = S_s$ and $x_{T+1} = x_s$.
\item If $Ax=b$ admits a solution $x^*$ (not necessarily unique), then
\begin{equation}
x_{T+1} = P_{\mathcal{N}(w)} x_0 + P_{\mathcal{R}(w)} x^*.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
Recall that $\mathcal{R}_{k+1} = \linspan{A'w_0,\ldots,A'w_{k} }.$ Therefore, by the definition of $T$ and the hypothesis that $\Prb{A'w_l \in \mathcal{R}(w)} = 1$ for all $l$, $\mathcal{R}_{T+1} = \mathcal{R}(w)$ on the event that $\lbrace T < \infty \rbrace$. Moreover, by \cref{theorem: S are orthogonal projections}, $S_{T+1}$ is an orthogonal projection onto $\mathcal{N}(w)$ and its null space is $\mathcal{R}(w)$.
We now proceed by induction. Because $\nullsp{S_{T+1}} = \mathcal{R}(w)$ and $A'w_{T+1} \in \mathcal{R}(w)$ with probability one (by hypothesis), $S_{T+1} A' w_{T+1} = 0$. Therefore, by the recursion equations, \cref{eqn: rank-one update-param,eqn: rank-one update-matrix}, $S_{T+2} = S_{T+1}$ and $x_{T+2} = x_{T+1}$. Suppose now that $S_{T+l} = S_{T+1}$ and $x_{T+l} = x_{T+1}$ for $l > 1$. Again, by hypothesis, $A'w_{T+l} \in \mathcal{R}(w) = \nullsp{S_{T+l}}$. Therefore, $S_{T+l} A' w_{T+l} = 0$. By the recursion equations, \cref{eqn: rank-one update-param,eqn: rank-one update-matrix}, $S_{T+l+1} = S_{T+l} = S_{T+1}$ and $x_{T+l+1} = x_{T+l} = x_{T+1}$.
To establish the second part of the result, we must first establish that for any $l \geq 0$,
\begin{equation}
x_{l+1} - x^* = S_{l+1}(x_0 - x^*).
\end{equation}
We will prove this by induction. For $l=0$,
\begin{equation}
\begin{aligned}
x_1 - x^* &= x_0 - x^* + \frac{S_0 A' w_0 w_0'}{w_0'A S_0 A w_0} (A x^* - Ax_0) \\
&= \left( I_d - \frac{S_0A' w_0 w_0'A}{w_0' A S_0 A w_0 } \right) (x_0 - x^*),
\end{aligned}
\end{equation}
by the recursion equations, \cref{eqn: rank-one update-param}. Noting that $S_0 = I_d$ and by using \cref{eqn: rank-one update-matrix}, we conclude that $x_1 - x^* = S_1 (x_0 - x^*)$. Now suppose that this relationship holds for some $l > 0$. Again, using \cref{eqn: rank-one update-param},
\begin{equation}
\begin{aligned}
x_{l+1} - x^* &= x_l - x^* + \frac{S_l A' w_l w_l'}{w_l'A S_l A w_l} (A x^* - Ax_l) \\
&= \left( I_d - \frac{S_lA' w_l w_l'A}{w_l' A S_l A w_l } \right) (x_l - x^*).
\end{aligned}
\end{equation}
Using the induction hypothesis, $x_l - x^* = S_l(x_0 - x^*)$ and \cref{eqn: rank-one update-matrix},
\begin{equation}
x_{l+1} - x^* = \left(I_d - \frac{S_lA' w_l w_l'A}{w_l' A S_l A w_l } \right) S_l (x_0 - x^*) = S_{l+1} (x_0 - x^*).
\end{equation}
With this result established and noting that $S_{T+1}$ is a projection onto $\mathcal{N}(w)$ (i.e., $P_{\mathcal{N}(w)} = S_{T+1}$), on the event $\lbrace T < \infty \rbrace$,
\begin{equation}
\begin{aligned}
x_{T+1} &= x^* + S_{T+1}(x_0 - x^*) \\
&= \left(P_{\mathcal{N}(w)} + P_{\mathcal{R}(w)}\right)x^* + P_{\mathcal{N}(w)} x_0 - P_{\mathcal{N}(w)} x^* \\
&= P_{\mathcal{R}(w)} x^* + P_{\mathcal{N}(w)} x_0.
\end{aligned}
\end{equation}
\end{proof}
With \cref{theorem: terminal iteration characterization} in hand, the natural subsequent question is when the limit point of the iterates is actually a solution to the original system. This question is addressed in the following corollary.
\begin{corollary} \label{corollary: criteria system solution}
Under the setting of \cref{theorem: terminal iteration characterization}, on the event $\lbrace T < \infty \rbrace$, $Ax_{T+1} = b$ if and only if $P_{\mathcal{V}(w)} x_0 = P_{\mathcal{V}(w)} x^*$.
\end{corollary}
\begin{proof}
Recall that $\mathrm{row}(A) \perp \nullsp{A}$. Because $\mathcal{R}(w) \subset \mathrm{row}(A)$, $\mathcal{N}(w) = \mathcal{V}(w) + \nullsp{A}$. Moreover, by the definition of $\mathcal{V}(w) \subset \mathrm{row}(A)$, $\mathcal{V}(w) \perp \nullsp{A}$. Therefore, $P_{\mathcal{N}(w)} = P_{\nullsp{A}} + P_{\mathcal{V}(w)}$. Now, using the characterization in \cref{theorem: terminal iteration characterization},
\begin{equation}
A x_{T+1} = A P_{\nullsp{A}} x_0 + A P_{\mathcal{V}(w)} x_0 + A P_{\mathcal{R}(w)} x^* = A P_{\mathcal{V}(w)} x_0 + A P_{\mathcal{R}(w)} x^*.
\end{equation}
Similarly, because $I_d = P_{\nullsp{A}} + P_{\mathcal{V}(w)} + P_{\mathcal{R}(w)}$,
\begin{equation}
b = Ax^* = A P_{\nullsp{A}} x^* + A P_{\mathcal{V}(w)} x^* + A P_{\mathcal{R}(w)}x^* = A P_{\mathcal{V}(w)} x^* + A P_{\mathcal{R}(w)}x^*.
\end{equation}
Setting these two quantities equal to each other, we conclude that $Ax_{T+1} = b$ if and only if $A P_{\mathcal{V}(w)} x^* = A P_{\mathcal{V}(w)} x_0$. Clearly, if $P_{\mathcal{V}(w)} x_0 = P_{\mathcal{V}(w)} x^*$ then $Ax_{T+1} = b$. So, what we have left to show is that $A P_{\mathcal{V}(w)} x^* = A P_{\mathcal{V}(w)} x_0$ implies
$P_{\mathcal{V}(w)} x_0 = P_{\mathcal{V}(w)} x^*$.
Let $A^+$ denote the Moore-Penrose pseudo-inverse of $A$, and recall that $A^+ A$ is the orthogonal projection onto $\mathrm{row}(A)$. Moreover, $\mathrm{range}(P_{\mathcal{V}(w)}) \subset \mathrm{row}(A)$. Therefore, if $A x_{T+1} = b$, then $AP_{\mathcal{V}(w)} x_0 = AP_{\mathcal{V}(w)} x^*$, and so
\begin{equation} \label{proof-eqn:V-in-row-A}
P_{\mathcal{V}(w)} x_0 = (A^+ A) P_{\mathcal{V}(w)} x_0 = A^+ (A P_{\mathcal{V}(w)} x_0 ) = A^+ A P_{\mathcal{V}(w)} x^* = P_{\mathcal{V}(w)} x^*.
\end{equation}
\end{proof}
\Cref{corollary: criteria system solution} provides criteria on the initial condition and on $\mathcal{V}(w)$ to determine when our procedure will solve the linear system. However, we would rarely have a way of choosing the initial condition a priori such that the requirement of \cref{corollary: criteria system solution} holds. Thus, the alternative is to design $w$ and $\lbrace w_l \rbrace$ so that $\mathcal{V}(w) = \lbrace 0 \rbrace$, which would guarantee that $Ax_{T+1} = b$ on the event $\lbrace T < \infty \rbrace$. It is worth reiterating that we have made very limited assumptions about the relationships between $w$ and $\lbrace w_l \rbrace$ and amongst $\lbrace w_l \rbrace$. This is important because it allows us to apply the preceding results to a variety of common relationship patterns between $w$ and $\lbrace w_l \rbrace$. In the next subsection, we explore some specific relationships and whether these relationships will result in $\mathcal{V}(w) = \lbrace 0 \rbrace$.
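For illustration, the following minimal sketch (assuming Python with NumPy; the rank-deficient instance is illustrative) checks the characterization of \cref{theorem: terminal iteration characterization} with i.i.d.\ Gaussian $\lbrace w_l \rbrace$, for which $\mathcal{R}(w) = \mathrm{row}(A)$, $\mathcal{N}(w) = \nullsp{A}$, and hence $\mathcal{V}(w) = \lbrace 0 \rbrace$:
\begin{verbatim}
import numpy as np

# With i.i.d. Gaussian w_l and a rank-deficient A, R(w) = row(A) and
# N(w) = null(A), so V(w) = {0} and the limit point should equal
# P_null(A) x_0 + P_row(A) x* (illustrative instance).
rng = np.random.default_rng(4)
n, d, r = 8, 6, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))  # rank r
x_star = rng.standard_normal(d)
b = A @ x_star

S, x0 = np.eye(d), rng.standard_normal(d)
x = x0.copy()
for l in range(50):                     # generically T = r - 1 here
    w = rng.standard_normal(n)
    g = S @ (A.T @ w)
    if np.linalg.norm(g) > 1e-10:
        x = x + g * (w @ (b - A @ x)) / (g @ g)
        S = S - np.outer(g, g) / (g @ g)

P_row = np.linalg.pinv(A) @ A           # orthogonal projection onto row(A)
assert np.allclose(x, (np.eye(d) - P_row) @ x0 + P_row @ x_star)
\end{verbatim}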
\subsection{Common Sampling Patterns} \label{subsection:sampling-times}
\Cref{theorem: terminal iteration characterization} supplies a general result about the behavior of \textit{any} sampling methodology on the solution of the system using \cref{eqn: rank-one update-matrix,eqn: rank-one update-param}, yet it does not suggest a precise sampling methodology. Generally, the sampling methodology choice will depend on both the hardware environment and the nature of the problem. For example, a random permutation sampling methodology will limit the parallelism achievable in \cref{alg: rank-one RPM low com}. On the other hand, such a methodology might be well-advised in a sequential setting where very little is known about the coefficient matrix $A$. Thus, the precise sampling scheme should depend on the hardware environment and should exploit the structure of the problem.
Despite this, in practice, there are two general sampling schemes that form a basis for more problem and hardware specific sampling schemes: random permutation sampling and independent and identically distributed sampling. The former sampling pattern is exemplified by randomly permuting the equations of the linear system. More concretely, let $e_1,\ldots,e_n \in \mathbb{R}^n$ be the standard basis; let $w$ be a random variable with nonzero probability on each element of the basis; let $\lbrace w_l \rbrace$ be random variables sampled from $\lbrace e_1,\ldots,e_n \rbrace$ without replacement (until the set is exhausted, then we repopulate the set with its original elements and repeat the sampling without replacement). The following statement provides a simple characterization of this sampling scheme.
\begin{lemma} \label{lemma: rand perm sampling}
Let $\lbrace W_1,\ldots,W_N \rbrace \subset \mathbb{R}^n$. Let $w$ be a random variable such that
\begin{equation}
\Prb{ w = W_j} > 0 \quad j = 1,\ldots,N, \quad\text{and}\quad \sum_{j=1}^N \Prb{ w = W_j } = 1.
\end{equation}
Moreover, let $\lbrace w_l : l + 1 \in \mathbb{N} \rbrace$ be random variables sampled from $\lbrace W_1,\ldots,W_N \rbrace$ without replacement (and once the set is exhausted, we repopulate the set with its original elements and repeat sampling without replacement). Then $T \leq N-1$. Moreover, $Ax_{T+1} = b$ for every initialization if $\linspan{A'W_1,\ldots,A'W_N} = \mathrm{row}(A)$, which holds if $\linspan{ W_1,\ldots,W_N}= \mathbb{R}^n$.
\end{lemma}
\begin{proof}
First, note that $\mathcal{N}(w) = \lbrace z \in \mathbb{R}^d: z'A'W_j = 0, ~\forall j = 1,\ldots,N \rbrace$. Therefore,
\begin{equation}
\mathcal{R}(w) = \mathcal{N}(w)^\perp = \linspan{ A'W_1,\ldots, A'W_N }.
\end{equation}
In turn, because $\lbrace w_0,\ldots,w_{N-1} \rbrace = \lbrace W_1,\ldots,W_N \rbrace$, $T$ is at most $N-1$.
By \cref{corollary: criteria system solution}, $Ax_{T+1} = b$ if and only if $P_{\mathcal{V}(w)} x_0 = P_{\mathcal{V}(w)}x^*$ where $x^*$ satisfies $Ax^* = b$. Now, given that $\mathcal{R}(w) + \mathcal{V}(w) = \mathrm{row}(A)$ and $\mathcal{R}(w) = \linspan{ A'W_1,\ldots, A'W_N }$, if $\linspan{A'W_1,\ldots,A'W_N} = \mathrm{row}(A)$ then $\mathcal{V}(w) = \lbrace 0 \rbrace$. Therefore, $Ax_{T+1} = b$ for any initialization. The final claim is straightforward.
\end{proof}
The second sampling scheme, independent and identically distributed sampling, is exemplified by randomly sampling equations from the system with uniform discrete probability. However, we do not need to limit ourselves to sampling from a finite population of elements. As the next result shows, we can do much more.
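Before stating the result, the following sketch illustrates sampling beyond a finite population: it draws i.i.d. Gaussian $w_k$ (a choice of ours for illustration) and empirically computes the stopping time $T$ at which $\linspan{A'w_0,\ldots,A'w_T}$ first equals $\mathrm{row}(A)$; the helper \texttt{stopping\_time\_T} is hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4
A = rng.standard_normal((n, d))   # full column rank almost surely

def stopping_time_T(A, rng, r):
    # First k such that span{A'w_0, ..., A'w_k} has dimension r,
    # for i.i.d. Gaussian w_k.
    vecs = []
    while True:
        vecs.append(A.T @ rng.standard_normal(A.shape[0]))
        if np.linalg.matrix_rank(np.column_stack(vecs)) == r:
            return len(vecs) - 1

# A Gaussian A'w lies in any fixed proper subspace with probability
# zero, so the rank grows at every step and T = r - 1 = 3 here.
print(stopping_time_T(A, rng, r=4))
\end{verbatim}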
\begin{proposition} \label{theorem: iid sampling}
Suppose that $w, w_0,w_1,\ldots$ are independent, identically distributed random variables. Then, there exists a $\pi \in (0,1)$ such that
\begin{equation} \label{theorem-cond:min-prob}
\mathop{\inf _{v \in \mathcal{R}(w)}}_{ \norm{v}_2 = 1} \Prb{ v'A'w \neq 0} \geq \pi.
\end{equation}
Moreover, $T < \infty$ with probability one and $\Prb{ T = k } \leq (k - r) ^{r-1} (1- \pi) ^ {k - r}$ where $r = \dim( \mathcal{R}(w))$ and $k \geq r$.
\end{proposition}
\begin{proof}
First, we show that there exists $\pi>0$ such that for any nontrivial, proper subspace $V\subsetneq\mathcal{R}(w)$,
$
\mathbb{P}[A'w\not\in V] \geq \pi$, which implies \cref{theorem-cond:min-prob} when we take $V$ to be the orthogonal complement in $\mathcal{R}(w)$ of the span of a unit vector $v \in \mathcal{R}(w)$. Suppose there is no such $\pi$. Then, for every $p \in (0,1)$, there is a nontrivial subspace $V \subsetneq \mathcal{R}(w)$ such that $\Prb{ A'w \in V} \geq 1 - p$. Let $r$ be the smallest integer between $0$ and $\dim(\mathcal{R}(w))$ such that
\begin{equation}
\mathop{\sup _{V\subsetneq \mathcal{R}(w)}}_{\dim[V] = r} \mathbb{P}[A'w \in V] = 1.
\end{equation}
For $\epsilon>0$, let $V_1\subsetneq\mathcal{R}(w)$ be an $r$-dimensional subspace with $\mathbb{P}[A'w \in V_1] \geq 1 - \epsilon/2$. Note, by \cref{lemma: characterize R_w}, $\mathbb{P}[A'w \in V_1] < 1$. Therefore, let $V_2 \subsetneq \mathcal{R}(w)$ be an $r$-dimensional subspace with $\mathbb{P}[A'w \in V_2] > \mathbb{P}[A'w \in V_1] \geq 1 - \epsilon/2$. Since $V_1$ and $V_2$ are distinct, the inclusion-exclusion principle gives
\begin{equation}
\mathbb{P}[A'w\in V_1\cap V_2] \geq \mathbb{P}[A'w\in V_1] + \mathbb{P}[A'w\in V_2] -1\geq 1-\epsilon.
\end{equation}
However, this contradicts the minimality of $r$ since $\epsilon >0$ is arbitrary and $\dim(V_1 \cap V_2) < r$. Thus, we conclude that such a $\pi$ exists.
It follows from \cref{theorem-cond:min-prob} that, for any $k$,
\begin{equation}
\mathbb{P}\left[\dim(\linspan{A'w_0,\ldots,A'w_k}) > \dim(\linspan{A'w_0,\ldots,A'w_{k-1}})\right] \geq \pi.
\end{equation}
Therefore, we can bound $\mathbb{P}[T=k]$ by a negative binomial distribution. In particular,
\begin{equation}
\mathbb{P}[T=k] \leq \binom{k-1}{ r -1}\, (1- \pi) ^ {k - r} \leq (k - r) ^{r-1} (1- \pi) ^ {k - r}.
\end{equation}
\end{proof}
In light of the two preceding results, we may be convinced that there is a gap between the convergence properties of random permutation sampling and those of independent and identically distributed sampling. However, by modifying the structure of the rank-one RPM, we can find more intermediate cases. The next result demonstrates this behavior with a somewhat contrived example, and we will leave more complex cases to future work.
\begin{theorem}
Suppose $w , w_0, w_1,\ldots$ are i.i.d. random variables such that the entries of $A'w$ are independent, identically distributed subgaussian random variables with mean zero and unit variance.
Then, there exists a $\pi \in (0,1)$ depending only on the distribution of the entries of $A'w$ such that $\Prb{T \leq k} \geq 1 - \pi^k$ for $k \geq d$.
\end{theorem}
\begin{proof} Let $H_k$ denote a $k \times d$ ($k \geq d$) random matrix whose entries are independent and identically distributed subgaussian random variables with zero mean and unit variance. As a consequence of \cite[Theorem 1.1]{rudelson2009}, there exists a $\pi$ that depends on the distribution of the entries such that for all $k \geq d$, $\Prb{ \sigma_{\min}(H_k) > 0 } \geq 1- \pi^k$. At iteration $k$, let $N_k$ denote the matrix whose rows are $w_0',\ldots,w_{k-1}'$. Then, by hypothesis, $N_kA$ has entries that are independent, identically distributed subgaussian random variables with zero mean and unit variance. Therefore, there exists a $\pi \in (0,1)$ depending only on the distribution of the entries in $A'w$ such that $\Prb{ T \leq k} \geq \Prb{ \sigma_{\min}(N_kA) > 0} \geq 1 - \pi^k$ for $k \geq d$.
\end{proof}
\subsection{An Extension of Meany's Inequality} \label{subsection:meany}
Here, we will derive an extension of Meany's Inequality \cite{meany1969}, which, under a different extension, has recently been used to study the convergence rate of row-action solvers including a block variant of the Kaczmarz method \cite{bai2013}. We begin by stating a geometric lemma derived by \cite{meany1969}, and follow it with the extension, which closely follows Meany's original proof with several modifications.
\begin{lemma}[\cite{meany1969}] \label{lemma:meany-lemma}
Let $f_1,\ldots,f_k \in \mathbb{R}^n$ with $k \leq n$. Write $f_k = f^S + f^N$ where $f^S$ belongs to the space $S$ spanned by $f_1,\ldots,f_{k-1}$ and $f^N$ is perpendicular to $S$. Let $\bar{F}$ be the matrix whose columns are $f_1,\ldots,f_{k-1}$, and let $F$ be the matrix whose columns are $f_1,\ldots,f_k$. Then,
\begin{equation}
\det( F'F) = \norm{ f^N}_2^2 \det( \bar{F}'\bar{F}).
\end{equation}
\end{lemma}
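Before the extension, it may help to see \cref{lemma:meany-lemma} verified numerically; the following sketch, with random data of our own choosing, checks the determinant identity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 3
F = rng.standard_normal((n, k))   # columns f_1, ..., f_k
Fbar = F[:, :-1]                  # columns f_1, ..., f_{k-1}
# Orthogonal projector onto S = span(f_1, ..., f_{k-1}).
P = Fbar @ np.linalg.pinv(Fbar)
fN = F[:, -1] - P @ F[:, -1]      # component of f_k orthogonal to S
lhs = np.linalg.det(F.T @ F)
rhs = (fN @ fN) * np.linalg.det(Fbar.T @ Fbar)
assert np.isclose(lhs, rhs)
\end{verbatim}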
\begin{theorem} \label{theorem:meany-no-mem}
Let $v_1,\ldots,v_k$ be unit vectors in $\mathbb{R}^n$ for some $k \in \mathbb{N}$. Let $S = \linspan{v_1,\ldots,v_k}$. Let $\mathcal{F}$ denote all matrices $F$ where the columns of $F$ are the vectors $\lbrace f_1,\ldots,f_r \rbrace \subset \lbrace v_1,\ldots,v_k \rbrace$ that are a maximal linearly independent subset. Then
\begin{equation}
\sup_{ y \in S, \norm{y}_2 = 1} \norm{Q y}_2 \leq \sqrt{1 - \min_{ F \in \mathcal{F}} \det(F'F)},
\end{equation}
where
\begin{equation}
Q = (I - v_k v_k')(I - v_{k-1}v_{k-1}')\cdots (I - v_1 v_1').
\end{equation}
\end{theorem}
\begin{proof}
The proof proceeds by induction. For the case $k=1$, both sides of the inequality are zero and so the result holds. Now suppose that the result holds for $k = j-1$. To prove the case $k=j$, we need the following additional notation.
Let $\bar{S} = \linspan{ v_1,\ldots,v_{j-1}} $; let $\lbrace f_1,\ldots,f_{\bar{r}} \rbrace$ denote a maximal linearly independent subset of the unit vectors $\lbrace v_1,\ldots,v_{j-1} \rbrace$ that achieve the minimum determinant; let $\bar{F}$ be the matrix whose columns are $f_1,\ldots,f_{\bar{r}}$; and let
\begin{equation}
\bar{Q} = (I - v_{j-1} v_{j-1}')( I - v_{j-2} v_{j-2}') \cdots (I - v_1 v_1').
\end{equation}
For a unit vector $y \in S$, let $y^{\bar{S}}$ denote the component of $y$ in $\bar{S}$, and let $y^N$ denote the component of $y$ orthogonal to $\bar{S}$. Moreover, let $z = \bar{Q} y^{\bar{S}}$. Then, by the induction hypothesis,
\begin{equation} \label{proof-meany:induction-hyp}
\norm{z}_2 = \norm{ \bar{Q} y^{\bar{S}}}_2 \leq \norm{y^{\bar{S}}}_2 \sqrt{1 - \det(\bar{F}'\bar{F})}.
\end{equation}
Similarly, write $v_j = v^{\bar{S}} + v^N$ where $v^{\bar{S}} \in \bar{S}$ and $v^N$ is perpendicular to $\bar{S}$.
\underline{Case A:} Suppose that $S = \bar{S}$. Then $y = y^{\bar{S}}$. Moreover, since $\bar{F} \in \mathcal{F}$,
\begin{equation}
\norm{ Q y}_2 \leq \norm{ \bar{Q} y}_2 \leq \norm{ y }_2 \sqrt{ 1 - \det(\bar {F} ' \bar{F}) } \leq \norm{y}_2 \sqrt{ 1 - \min_{F \in \mathcal{F}} \det( F'F) }.
\end{equation}
Thus, the result holds when $S = \bar{S}$.
\underline{Case B:} Suppose that $S \supsetneq \bar{S}$. Then,
\begin{align}
\norm{Qy}_2^2 &= \norm{ (I - v_j v_j')(z + y^N) }_2^2 = (z + y^N)' (I - v_j v_j') (z + y^N) \\
&= \norm{z}_2^2 + \norm{y^N}_2^2 + \underbrace{2 z' y^N}_{0} - (\underbrace{z' v_j}_{z'v^{\bar{S}}})^2 - 2 \underbrace{z'v_j}_{ z'v^{\bar{S}}} \underbrace{v_j'y^N}_{(v^N)'y^N} - (\underbrace{v_j' y^N}_{{(v^N)'y^N}})^2 \\
&= \norm{z}_2^2 + \norm{y^N}_2^2 - (z' v^{\bar{S}})^2 - 2 z'v^{ \bar{S} } \norm{v^N}_2 \norm{y^N}_2 - \norm{v^N}_2^2 \norm{y^N}_2^2,
\end{align}
where we have used the fact that $v^N$ and $y^N$ are collinear, implying that their inner product is equal to the product of their norms. Finally, since $-2 z ' v^{\bar{S}} \leq 2 |z' v^{\bar{S}}|$,
\begin{equation} \label{proof-meany:main-inequality}
\norm{Qy}_2^2 \leq \norm{z}_2^2 + \norm{y^N}_2^2 - \left( \left| z' v^{\bar{S}} \right| - \norm{v^N}_2 \norm{y^N}_2 \right)^2.
\end{equation}
\underline{Case B(1):} Suppose that $\norm{v^N}_2 \leq \norm{y^{\bar{S}}}_2$. Then,
\begin{align}
\norm{Qy}_2^2 &\leq \norm{z}_2^2 + \norm{y^N}_2^2 - \left( \left| z' v^{\bar{S}} \right| - \norm{v^N}_2 \norm{y^N}_2 \right)^2 \tag{by \cref{proof-meany:main-inequality}} \\
& \leq \norm{z}_2^2 + \norm{y^N}_2^2 \nonumber \\
& \leq \norm{y^{\bar{S}}}_2^2(1 - \det(\bar{F}'\bar{F})) + \norm{y^N}_2^2 \tag{by \cref{proof-meany:induction-hyp}} \\
& = \norm{y}_2^2 - \norm{y^{\bar{S}}}_2^2 \det(\bar{F}'\bar{F}) \nonumber \\
& \leq 1 - \norm{v^N}_2^2 \det(\bar{F}'\bar{F}) \nonumber \tag{ $\norm{y}_2 = 1$ and $\norm{v^N}_2 \leq \norm{y^{\bar{S}}}_2$} \\
& \leq 1 - \min_{F \in \mathcal{F}}\det(F'F),
\end{align}
where, in the last line, we use \cref{lemma:meany-lemma} and, since $S \neq \bar{S}$, $f_{\bar{r}+1} = v_j$, which, in turn, implies $f^N = v^N$.
\underline{Case B(2):} Suppose that $\norm{v^N}_2 > \norm{y^{\bar{S}}}_2$. Since $\norm{v_j}_2 = \norm{y}_2 = 1$, it follows that $\norm{v^{\bar{S}}}_2 \leq \norm{ y^N}_2$. Using these inequalities and \cref{proof-meany:induction-hyp},
\begin{equation}
\norm{y^N}_2 \norm{v^N}_2 \geq \norm{v^{\bar{S}}}_2 \norm{y^{\bar{S}}}_2 \geq \norm{ v^{\bar{S}}}_2 \norm{ z}_2 \geq | z' v^{\bar{S}}|.
\end{equation}
Therefore,
\begin{equation}
\norm{y^N}_2 \norm{v^N}_2 - | z' v^{\bar{S}}| \geq \norm{y^N}_2 \norm{v^N}_2 - \norm{z}_2 \norm{v^{\bar{S}}}_2 \geq 0.
\end{equation}
Applying this relationship to \cref{proof-meany:main-inequality},
\begin{align*}
\norm{Q y}_2^2 &\leq \norm{z}_2^2 + \norm{y^N}_2^2 - \left( \norm{y^N}_2 \norm{v^N}_2 - \norm{z}_2 \norm{v^{\bar{S}}}_2 \right)^2 \\
&= \norm{z}_2^2 \norm{v^N}_2^2 + \norm{y^N}_2^2 \norm{v^{\bar{S}}}_2^2 + 2 \norm{v^{\bar{S}}}_2 \norm{z}_2 \norm{y^N}_2 \norm{v^N}_2 \\
&= \left( \norm{z}_2 \norm{v^N}_2 + \norm{y^N}_2 \norm{v^{\bar{S}}}_2 \right)^2 \\
&\leq \left( \sqrt{ 1 - \det( \bar{F}'\bar{F})} \norm{v^N}_2 \norm{y^{\bar{S}}}_2 + \norm{y^N}_2 \norm{v^{\bar{S}}}_2 \right)^2 \tag{by \cref{proof-meany:induction-hyp}} \\
&\leq \left( \norm{y^{\bar{S}}}_2^2 + \norm{y^N}_2^2 \right) \left( \norm{v^N}_2^2 ( 1 - \det( \bar{F}'\bar{F})) + \norm{v^{\bar{S}}}_2^2 \right) \tag{by Cauchy-Schwarz} \\
&= 1 - \norm{v^N}_2^2 \det( \bar{F}' \bar{F} ) \\
&\leq 1 - \min_{F \in \mathcal{F}} \det( F' F),
\end{align*}
where, in the last line, we use \cref{lemma:meany-lemma} and, since $S \neq \bar{S}$, $f_{\bar{r}+1} = v_j$, which, in turn, implies $f^N = v^N$.
Therefore, from Cases A, B(1) and B(2), we conclude that the result holds.
\end{proof}
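As a numerical sanity check on \cref{theorem:meany-no-mem}, the following sketch evaluates both sides of the inequality for random unit vectors; here we use the fact (an assumption satisfied by generic data with $k \leq n$) that $v_1,\ldots,v_k$ are linearly independent, so the unique maximal linearly independent subset is the full set and the minimum in the bound reduces to $\det(V'V)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 4
V = rng.standard_normal((n, k))
V /= np.linalg.norm(V, axis=0)    # unit vectors v_1, ..., v_k

Q = np.eye(n)
for j in range(k):                # Q = (I - v_k v_k')...(I - v_1 v_1')
    v = V[:, j:j+1]
    Q = (np.eye(n) - v @ v.T) @ Q

bound = np.sqrt(1.0 - np.linalg.det(V.T @ V))
B, _ = np.linalg.qr(V)            # orthonormal basis of S = span(V)
lhs = np.linalg.svd(Q @ B, compute_uv=False)[0]
assert lhs <= bound + 1e-12
\end{verbatim}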
\subsection{Main Convergence Result for Row-Action Methods} \label{subsection:no-mem-convergence}
Recall that $w \in \mathbb{R}^n$ is a random variable and $\lbrace w_\ell : \ell+1 \in \mathbb{N} \rbrace$ is a sequence of random variables taking values in $\mathbb{R}^n$ chosen such that $A'w_\ell \in \mathcal{R}(w)$.\footnote{Again, we can avoid this requirement and consider set inclusions below. However, this generalization will require additional, cumbersome notation and there is no practical reason for considering this case.} We will now define a sequence of stopping times $\lbrace \tau_\ell : \ell+1 \in \mathbb{N} \rbrace$ where $\tau_0 = 0$,
\begin{equation} \label{eqn:stop-iter-0}
\tau_1 = \min\lbrace k \geq 0 : \linspan{A'w_0,\ldots,A'w_k} = \mathcal{R}(w) \rbrace,
\end{equation}
and, if $\tau_{\ell-1} < \infty$, we define
\begin{equation} \label{eqn:stop-iter-arbitrary}
\tau_\ell = \min\lbrace k > \tau_{\ell-1} : \linspan{ A'w_{\tau_{\ell-1}+1},\ldots,A'w_{k}} = \mathcal{R}(w) \rbrace,
\end{equation}
else $\tau_\ell = \infty$. As an aside, it is worthwhile to note the commonalities between the definition of $\lbrace \tau_\ell \rbrace$ and the stopping time $T$ from \cref{eqn: stopping time}.
Moreover, whenever the stopping times are finite, we will define the collection, $\mathcal{F}_\ell$, for $\ell \in \mathbb{N}$, that contains all matrices $F$ whose columns are maximal linearly independent subsets of
\begin{equation}
\left\lbrace \frac{A'w_{\tau_{\ell-1}+1}}{\norm{A'w_{\tau_{\ell-1}+1}}_2},\ldots,\frac{A'w_{\tau_\ell}}{\norm{A'w_{\tau_\ell}}_2} \right\rbrace.
\end{equation}
Moreover, define
\begin{equation} \label{eqn:no-mem-rate}
\gamma_\ell = 1 - \min_{F \in \mathcal{F}_\ell} \det( F' F).
\end{equation}
Note, it follows by Hadamard's inequality that $\gamma_\ell \in [0,1)$.
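For small problems, the stopping times and the rates $\gamma_\ell$ can be computed by brute force, which can be useful for building intuition; the following sketch (the helpers \texttt{rate\_gamma} and \texttt{stopping\_times} are ours) consumes the stream $A'w_0, A'w_1,\ldots$ and emits $\tau_1, \tau_2, \ldots$, starting a fresh block at each $\tau_\ell + 1$ as in \cref{eqn:stop-iter-arbitrary}.
\begin{verbatim}
import numpy as np
from itertools import combinations

def rate_gamma(U):
    # 1 - min det(F'F) over maximal linearly independent subsets of
    # the normalized columns of U (brute force; small U only).
    U = U / np.linalg.norm(U, axis=0)
    r = np.linalg.matrix_rank(U)
    dets = []
    for idx in combinations(range(U.shape[1]), r):
        F = U[:, list(idx)]
        if np.linalg.matrix_rank(F) == r:
            dets.append(np.linalg.det(F.T @ F))
    return 1.0 - min(dets)

def stopping_times(stream, r, num):
    # tau_1 < tau_2 < ...: each block of vectors A'w_k must span an
    # r-dimensional space; the next block starts at tau_ell + 1.
    taus, block = [], []
    for k, u in enumerate(stream):
        block.append(u)
        if np.linalg.matrix_rank(np.column_stack(block)) == r:
            taus.append(k)
            block = []
            if len(taus) == num:
                break
    return taus
\end{verbatim}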
\begin{theorem} \label{theorem:no-mem-convergence}
Suppose $Ax=b$ admits a solution $x^*$ (not necessarily unique). Let $w$ be a random variable valued in $\mathbb{R}^n$, and let $\mathcal{R}(w),$ $\mathcal{N}(w)$ and $\mathcal{V}(w)$ be defined as above (see \cref{eqn: row span}). Moreover, let $\lbrace w_\ell : \ell+1 \in \mathbb{N} \rbrace $ be random variables such that $\Prb{ A'w_\ell \in \mathcal{R}(w)} = 1$ for all $\ell+1 \in \mathbb{N}$. Let $x_0 \in \mathbb{R}^d$ be arbitrary and let $\lbrace x_k : k \in \mathbb{N} \rbrace$ be defined as in \cref{eqn:no-memory-iteration}. Then, for any $\ell$, on the event $\lbrace \tau_\ell < \infty \rbrace$,
\begin{equation} \label{theorem-eqn:no-mem-stop-rate}
\norm{x_{\tau_{\ell}+1} - x^* - P_{\mathcal{N}(w)}(x_0 - x^*)}_2^2 \leq \left( \prod_{j=1}^\ell \gamma_j \right) \norm{ P_{\mathcal{R}(w)}(x_0 - x^*) }_2^2,
\end{equation}
where $\gamma_j$ are defined in \cref{eqn:no-mem-rate} and $\gamma_j \in [0,1)$. Therefore, for any $k$,
\begin{equation}
\norm{ x_k - x^* - P_{\mathcal{N}(w)}(x_0 - x^*) }_2^2 \leq \left( \prod_{j=1}^{L(k)} \gamma_j \right) \norm{ P_{\mathcal{R}(w)}(x_0 - x^*) }_2^2,
\end{equation}
where $L(k) = \max\lbrace \ell : k \geq \tau_\ell + 1 \rbrace$, on the event $\lbrace \tau_{L(k)} < \infty \rbrace$.
\end{theorem}
\begin{proof}
From the basic iteration stated in \cref{eqn:no-memory-iteration}, we have
\begin{equation} \label{proof-eqn:no-mem-iteration}
x_{k+1} - x^* = x_k - x^* - \frac{A' w_k w_k' A}{\norm{A'w_k}_2^2} (x_k - x^*) = \left( I - \frac{A'w_k w_k'A}{\norm{A'w_k}_2^2} \right) (x_k - x^*).
\end{equation}
Iterating on this relationship, we conclude
\begin{equation}
x_{k+1} - x^* = \left( I - \frac{A'w_k w_k'A}{\norm{A'w_k}_2^2} \right) \cdots \left( I - \frac{A'w_0 w_0'A}{\norm{A'w_0}_2^2} \right) (x_0 - x^*).
\end{equation}
Moreover, by assumption, $A'w_\ell \in \mathcal{R}(w)$ with probability one, which implies that $A'w_\ell \perp \mathcal{N}(w)$. Therefore,
\begin{equation} \label{proof-eqn:decomposed-iteration}
x_{k+1} - x^* = P_{\mathcal{N}(w)}(x_0 - x^*) + \left( I - \frac{A'w_k w_k'A}{\norm{A'w_k}_2^2} \right) \cdots \left( I - \frac{A'w_0 w_0'A}{\norm{A'w_0}_2^2} \right) P_{\mathcal{R}(w)}(x_0 - x^*),
\end{equation}
and $P_{\mathcal{N}(w)}(x_{k} - x^*) = P_{\mathcal{N}(w)}(x_0 - x^*)$.
Note, when $\tau_1$ is finite, then the span of $\lbrace A'w_0,\ldots,A'w_{\tau_1} \rbrace$ is $\mathcal{R}(w)$. Therefore, on the event $\tau_1 < \infty$, \cref{theorem:meany-no-mem} implies that
\begin{equation}
\norm{x_{\tau_1 + 1} - x^* - P_{\mathcal{N}(w)}(x_0 - x^*)}_2^2 \leq \gamma_1 \norm{P_{\mathcal{R}(w)}(x_0 - x^*)}_2^2.
\end{equation}
We now proceed by induction. Suppose \cref{theorem-eqn:no-mem-stop-rate} holds for some $\ell \in \mathbb{N}$. Using \cref{proof-eqn:decomposed-iteration}, for $k > \tau_\ell$,
\begin{equation}
\begin{aligned}
&x_k - x^* - P_{\mathcal{N}(w)}(x_0 - x^*) \\
&\quad = \left( I - \frac{A'w_k w_k'A}{\norm{A'w_k}_2^2} \right) \cdots \left( I - \frac{A'w_{\tau_\ell+1} w_{\tau_\ell+1}'A}{\norm{A'w_{\tau_\ell+1}}_2^2} \right) P_{\mathcal{R}(w)}(x_{\tau_\ell+1} - x^*).
\end{aligned}
\end{equation}
Now, when $k = \tau_{\ell+1} +1$, the conditions of \cref{theorem:meany-no-mem} are satisfied. Therefore,
\begin{equation}
\begin{aligned}
\norm{x_{\tau_{\ell+1} + 1} - x^* - P_{\mathcal{N}(w)}(x_0 - x^*)}_2^2 &\leq \gamma_{\ell+1} \norm{ P_{\mathcal{R}(w)}(x_{\tau_\ell +1} - x^*) }_2^2 \\
&= \gamma_{\ell+1} \norm{x_{\tau_\ell + 1} - x^* - P_{\mathcal{N}(w)}(x_0 - x^*)}_2^2.
\end{aligned}
\end{equation}
By applying the induction hypothesis, we conclude that \cref{theorem-eqn:no-mem-stop-rate} holds on the event $ \lbrace \tau_{\ell+1} < \infty \rbrace$.
Now, for an orthogonal projection matrix of the form $I - uu'$ with $\norm{u}_2 = 1$, $\norm{I - uu'}_2 = 1$. The bound on $x_k - x^* - P_{\mathcal{N}(w)}(x_0 - x^*)$ follows by applying this fact and the definition of $L(k)$.
\end{proof}
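As a simple instance of \cref{theorem:no-mem-convergence}, the following sketch runs the iteration in \cref{eqn:no-memory-iteration} with $w_k$ drawn uniformly from the standard basis (i.e., a randomized Kaczmarz step on one equation at a time); the data are of our own choosing, and, since $A$ has full column rank, $\mathcal{R}(w) = \mathbb{R}^d$ and $\mathcal{N}(w) = \lbrace 0 \rbrace$, so the iterates converge to the solution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, d = 6, 4
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

x = rng.standard_normal(d)
for _ in range(500):
    i = rng.integers(n)                  # w_k = e_i picks equation i
    a = A[i]
    x = x + a * (b[i] - a @ x) / (a @ a)

print(np.linalg.norm(x - x_star))        # decays geometrically
\end{verbatim}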
As an analogue of \cref{corollary: criteria system solution}, we have the following characterization of whether $\lim_{k \to \infty} x_k$ solves the system $Ax=b$.
\begin{corollary}
Under the setting of \cref{theorem:no-mem-convergence}, on the events $\bigcap_{\ell=0}^\infty \lbrace \tau_\ell < \infty \rbrace$ and $\lbrace \lim_{\ell \to \infty} \prod_{j=1}^\ell \gamma_j = 0 \rbrace$, $\lim_{k \to \infty} Ax_k = b$ if and only if $P_{\mathcal{V}(w)} x_0 = P_{\mathcal{V}(w)} x^*$.
\end{corollary}
\begin{proof}
By \cref{theorem:no-mem-convergence}, and on the events $\bigcap_{\ell=0}^\infty \lbrace \tau_\ell < \infty \rbrace$ and $\lbrace \lim_{\ell \to \infty} \prod_{j=1}^\ell \gamma_j = 0 \rbrace$,
\begin{equation}
\lim_{k \to \infty} x_k = x^* + P_{\mathcal{N}(w)}(x_0 - x^*) = x^* + P_{\ker(A)} (x_0 - x^*) + P_{\mathcal{V}(w)} (x_0 - x^*).
\end{equation}
Therefore, $\lim_{k \to \infty} Ax_k = b + AP_{\mathcal{V}(w)} (x_0 - x^*)$, which implies $\lim_{k \to \infty} Ax_k = b$ if and only if $AP_{\mathcal{V}(w)} x_0 = AP_{\mathcal{V}(w)} x^*$. Clearly, if $P_{\mathcal{V}(w)} x_0 = P_{\mathcal{V}(w)}x^*$, then $AP_{\mathcal{V}(w)}x_0 = AP_{\mathcal{V}(w)}x^*$. Now, since $\mathcal{V}(w) \subset \mathrm{row}(A)$, if $AP_{\mathcal{V}(w)} x_0 = AP_{\mathcal{V}(w)} x^*$, then $P_{\mathcal{V}(w)}x_0 = P_{\mathcal{V}(w)} x^*$ follows from \cref{proof-eqn:V-in-row-A}.
\end{proof}
\subsection{Main Convergence Result for Column-Action Methods} \label{subsection:no-mem-column}
For the family of methods specified by \cref{eqn:no-memory-column-iteration}, we will follow an almost identical proof except on the residual rather than the error. Specifically, if we let $r_k = Ax_k - b$, then \cref{eqn:no-memory-column-iteration} implies
\begin{equation} \label{eqn:no-mem-col-residual-iter}
r_{k+1} = Ax_{k+1} - b = Ax_k - b - \frac{A w_k w_k' A'}{\norm{A w_k}_2^2} ( Ax_k - b) = \left( I - \frac{A w_k w_k' A'}{\norm{A w_k}_2^2} \right) r_k.
\end{equation}
Thus, we will see two changes in the proof. First, we will see that $r_k$ for column-action methods takes the place of $x_k - x^*$ for row-action methods. Second, we already see that $Aw_k$ in \cref{eqn:no-mem-col-residual-iter} has taken the place of $A'w_k$ in \cref{proof-eqn:no-mem-iteration}. Owing to this latter change, we will need to specify analogues of $\mathcal{R}(w)$, $\mathcal{N}(w)$ and $\mathcal{V}(w)$.
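To fix ideas before introducing the new subspaces, the following sketch applies an update consistent with \cref{eqn:no-mem-col-residual-iter}, using i.i.d. Gaussian directions $w_k \in \mathbb{R}^d$ of our own choosing, and confirms that it is the residual, rather than the error, that contracts.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n, d = 5, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

x = np.zeros(d)
for _ in range(500):
    w = rng.standard_normal(d)           # a d-dimensional direction
    Aw = A @ w
    # One column-action step: r_{k+1} = (I - Aw Aw'/|Aw|^2) r_k.
    x = x - w * (Aw @ (A @ x - b)) / (Aw @ Aw)

print(np.linalg.norm(A @ x - b))         # the residual tends to zero
\end{verbatim}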
Let $w \in \mathbb{R}^d$ be a random variable, and let
\begin{equation} \label{eqn:left-col-null}
\mathcal{L}(w) = \mathrm{span}\left[ z \in \mathbb{R}^n : \Prb{ z' A w = 0 } = 1 \right]~\mathrm{and}~ \mathcal{C}(w) = \mathcal{L}(w)^\perp.
\end{equation}
Just as $\mathcal{N}(w)$ generalized the null space of $A$ under the action of an $n$-dimensional random variable from the left, we see that $\mathcal{L}(w)$ is a generalization of the left null space of $A$ under the action of a $d$-dimensional random variable from the right. Analogously, just as $\mathcal{R}(w)$ restricted the row space of $A$ under the action of an $n$-dimensional random variable from the left, we see that $\mathcal{C}(w)$ is a restriction of the column space of $A$ under the action of a $d$-dimensional random variable from the right. Finally, we let $\mathcal{E}(w)$ denote the subspace that is orthogonal to $\mathcal{C}(w)$ such that $\mathcal{E}(w) \oplus \mathcal{C}(w)$ is the column space of $A$.
With these new definitions, we may proceed just as we do in \cref{subsection:no-mem-convergence}. For a random variable $w \in \mathbb{R}^d$, let $\lbrace w_\ell : \ell+1 \in \mathbb{N} \rbrace$ be a sequence of random variables in $\mathbb{R}^d$ such that $Aw_\ell \in \mathcal{C}(w)$. We will now define a sequence of stopping times $\lbrace \tau_\ell : \ell + 1\in \mathbb{N} \rbrace$ where $\tau_0 = 0$,
\begin{equation}
\tau_1 = \min \lbrace k \geq 0 : \mathrm{span}\left[ Aw_0,\ldots,Aw_k \right] = \mathcal{C}(w) \rbrace,
\end{equation}
and, if $\tau_{\ell - 1} < \infty$, we define
\begin{equation}
\tau_\ell = \min \lbrace k > \tau_{\ell-1} : \mathrm{span} \left[ A w_{\tau_{\ell -1} + 1},\ldots, A w_k \right] = \mathcal{C}(w) \rbrace,
\end{equation}
else $\tau_\ell = \infty$.
Moreover, whenever the stopping times are finite, we will define a collection, $\mathcal{F}_{\ell},$ for $\ell \in \mathbb{N}$, that contains all matrices $F$ whose columns are maximal linearly independent subsets of
\begin{equation}
\left\lbrace \frac{Aw_{\tau_{\ell-1}+1}}{\norm{Aw_{\tau_{\ell-1}+1}}_2},\ldots,\frac{Aw_{\tau_\ell}}{\norm{Aw_{\tau_\ell}}_2} \right\rbrace.
\end{equation}
We can then define $\gamma_\ell$ just as we do in \cref{eqn:no-mem-rate}. For completeness, we will define it again here so that we reference the appropriate definitions. Define
\begin{equation} \label{eqn:no-mem-col-rate}
\gamma_\ell = 1 - \min_{ F \in \mathcal{F}_\ell} \det( F' F).
\end{equation}
\begin{theorem} \label{theorem:no-mem-column-convergence}
Suppose $Ax = b$ admits a solution $x^*$ (not necessarily unique). Let $w$ be a random variable valued in $\mathbb{R}^d$, and let $\mathcal{C}(w), \mathcal{L}(w)$ and $\mathcal{E}(w)$ be defined as above (see \cref{eqn:left-col-null}). Moreover, let $\lbrace w_\ell : \ell + 1 \in \mathbb{N} \rbrace$ be random variables such that $\Prb{ A w_\ell \in \mathcal{C}(w)} = 1$ for all $\ell +1 \in \mathbb{N}$. Let $x_0 \in \mathbb{R}^d$ be arbitrary, let $\lbrace x_k : k \in \mathbb{N} \rbrace$ be defined as in \cref{eqn:no-memory-column-iteration}, and define $r_k = Ax_k - b$ for $k + 1 \in \mathbb{N}$. Then, for any $\ell$, on the event $\lbrace \tau_{\ell} < \infty \rbrace$,
\begin{equation} \label{theorem-eqn:no-mem-col-rate}
\norm{ r_{\tau_{\ell} +1} - P_{\mathcal{L}(w)} r_0 }_2^2 \leq \left( \prod_{j=1}^\ell \gamma_j \right) \norm{ P_{\mathcal{C}(w)} r_0 }_2^2,
\end{equation}
where $\gamma_j$ are defined in \cref{eqn:no-mem-col-rate} and $\gamma_j \in [0,1)$. Therefore, for any $k$,
\begin{equation}
\norm{ r_{k} - P_{\mathcal{L}(w)} r_0 }_2^2 \leq \left( \prod_{j=1}^{L(k)} \gamma_j \right) \norm{ P_{\mathcal{C}(w)} r_0 }_2^2,
\end{equation}
where $L(k) = \max\lbrace \ell: k \geq \tau_{\ell}+1 \rbrace$, on the event $\lbrace \tau_{L(k)} < \infty \rbrace$.
\end{theorem}
\begin{proof}
Iterating on \cref{eqn:no-mem-col-residual-iter}, we conclude
\begin{equation}
r_{k+1} = \left( I - \frac{A w_k w_k' A'}{\norm{A w_k}_2^2} \right) \cdots \left( I - \frac{A w_0 w_0' A'}{\norm{A w_0}_2^2} \right) r_0.
\end{equation}
Moreover, by assumption, $A w_\ell \in \mathcal{C}(w)$ with probability one, which implies $A w_\ell \perp \mathcal{L}(w)$. Therefore,
\begin{equation} \label{proof-eqn:no-mem-col-decomp}
r_{k+1} = P_{\mathcal{L}(w)} r_0 + \left( I - \frac{A w_k w_k' A'}{\norm{A w_k}_2^2} \right) \cdots \left( I - \frac{A w_0 w_0' A'}{\norm{A w_0}_2^2} \right) P_{\mathcal{C}(w)} r_0,
\end{equation}
and $P_{\mathcal{L}(w)}r_k = P_{\mathcal{L}(w)} r_0.$
Note, when $\tau_1$ is finite, then the span of $\lbrace Aw_0,\ldots, Aw_{\tau_1} \rbrace$ is $\mathcal{C}(w)$. Therefore, on the event $\tau_1 < \infty$, \cref{theorem:meany-no-mem} implies that
\begin{equation}
\norm{ r_{\tau_1 + 1} - P_{\mathcal{L}(w)} r_0 }_2^2 \leq \gamma_1 \norm{ P_{\mathcal{C}(w)} r_0 }_2^2.
\end{equation}
We now proceed by induction. Suppose \cref{theorem-eqn:no-mem-col-rate} holds for some $\ell \in \mathbb{N}$. Using \cref{proof-eqn:no-mem-col-decomp}, for $k > \tau_\ell$,
\begin{equation}
r_k - P_{\mathcal{L}(w)}r_0 = \left( I - \frac{Aw_k w_k'A'}{\norm{Aw_k}_2^2} \right) \cdots \left( I - \frac{A w_{\tau_\ell + 1}w_{\tau_\ell + 1}'A'}{\norm{ A w_{\tau_\ell + 1} }_2^2 } \right) P_{\mathcal{C}(w)} r_{\tau_\ell + 1}.
\end{equation}
Now, when $k = \tau_{\ell + 1} + 1$, the conditions of \cref{theorem:meany-no-mem} are satisfied. Therefore,
\begin{equation}
\begin{aligned}
\norm{ r_{\tau_{\ell+1} + 1} - P_{\mathcal{L}(w)} r_0 }_2^2
&\leq \gamma_{\ell+1} \norm{ P_{\mathcal{C}(w)} r_{\tau_\ell+1}}_2^2 \\
&= \gamma_{\ell+1} \norm{ r_{\tau_\ell + 1} - P_{\mathcal{L}(w)} r_0 }_2^2.
\end{aligned}
\end{equation}
By applying the induction hypothesis, we conclude that \cref{theorem-eqn:no-mem-col-rate} holds on the event $\lbrace \tau_{\ell+1} < \infty \rbrace$. The second part of the result follows readily.
\end{proof}
We have the following characterization of whether $\lim_{k \to \infty} x_k$ solves the system $Ax = b$.
\begin{corollary}
Under the setting of \cref{theorem:no-mem-column-convergence}, on the events $\bigcap_{\ell = 0}^\infty \lbrace \tau_\ell < \infty \rbrace$ and $\lbrace \lim_{\ell \to\infty } \prod_{j=1}^\ell \gamma_j = 0 \rbrace$, $\lim_{k \to \infty} Ax_k = b$ if and only if $P_{\mathcal{E}(w)} r_0 = 0$.
\end{corollary}
\begin{proof}
On the events $\bigcap_{\ell = 0}^\infty \lbrace \tau_\ell < \infty \rbrace$ and $\lbrace \lim_{\ell \to\infty } \prod_{j=1}^\ell \gamma_j = 0 \rbrace$, \cref{theorem:no-mem-column-convergence} implies
\begin{equation}
\lim_{k \to \infty} r_k = P_{\mathcal{L}(w)} r_0.
\end{equation}
It straightforwardly follows that $\lim_{k \to \infty} Ax_k = b$ if and only if $P_{\mathcal{L}(w)} r_0 = 0$.
Moreover, by construction of $\mathcal{L}(w)$, we have that $\mathcal{L}(w) = \mathcal{E}(w) \oplus \ker(A')$. Thus,
\begin{equation}
P_{\mathcal{L}(w)} r_0 = P_{\mathcal{E}(w)} r_0 + P_{\ker(A')} r_0.
\end{equation}
Since the left null space of $A$ is orthogonal to the column space of $A$, and $r_0$ is in the column space of $A$ because $Ax=b$ is consistent, we have that $P_{\mathcal{L}(w)} r_0 = P_{\mathcal{E}(w)} r_0$.
\end{proof}
\subsection{Common, Non-Adaptive Sampling Patterns} \label{subsection:no-mem-sampling-patterns}
Just as for \cref{theorem: terminal iteration characterization}, \cref{theorem:no-mem-convergence,theorem:no-mem-column-convergence} are general results that characterize convergence for \textit{any} sampling scheme. Following the discussion in \cref{subsection:sampling-times}, the sampling scheme should depend on the hardware environment and the problem setting. Despite this, the two sampling patterns studied in \cref{subsection:sampling-times} form a foundation for most sampling schemes in practice and warrant a precise analysis. After this analysis, we also analyze, in a generic manner, certain adaptive schemes that have become popular (see \cref{subsection:adaptive-sampling}). We will focus on the case of row-action methods (corresponding to \cref{theorem:no-mem-convergence}) as the column-action results (corresponding to \cref{theorem:no-mem-column-convergence}) are nearly identical.
The first result provides a proof of convergence when we sample without replacement from a finite population. We note that the result is quite general and does not depend on the nature of the sampling without replacement or the dependency of the samples whenever the finite population is exhausted. As a result, the bounds are loose, which may be unsatisfying. Should particular sampling patterns become sufficiently important to warrant a more detailed analysis, we will do so in future work.
\begin{proposition} \label{theorem-no-mem-w-o-replacement}
Let $w$ and $\lbrace w_\ell : \ell +1 \in \mathbb{N} \rbrace$ be defined as in \cref{lemma: rand perm sampling}. Then, under the setting of \cref{theorem:no-mem-convergence},
\begin{enumerate}
\item $\tau_\ell - \tau_{\ell-1} \leq 2N$ for all $\ell \in \mathbb{N}$, and
\item $\lim_{\ell \to \infty} \prod_{j=1}^\ell \gamma_j = 0$.
\end{enumerate}
Moreover, the $\gamma_j$ are uniformly bounded by a $\gamma \in [0,1)$ that depends on $\lbrace A'W_1,\ldots,A'W_N \rbrace$. Therefore, with probability one,
\begin{equation}
\norm{ x_{2N\ell} - x^* - P_{\mathcal{N}(w)}(x_0 - x^*) }_2^2 \leq \gamma^\ell \norm{ P_{\mathcal{R}(w)}(x_0 - x^*) }_2^2.
\end{equation}
\end{proposition}
\begin{proof}
By the definition of $w$ in \cref{lemma: rand perm sampling}, $\mathcal{R}(w) = \linspan{ A'W_1,\ldots,A'W_N }$. Moreover, by the definitions of $\lbrace w_\ell \rbrace$, we are sampling from $W_1,\ldots,W_N$ without replacement. Then, we are guaranteed that $\lbrace A'w_{\tau_{\ell-1}+1},\ldots,A'w_{\tau_{\ell}} \rbrace$ spans $\mathcal{R}(w)$ if $\lbrace W_1,\ldots,W_N \rbrace \subset \lbrace w_{\tau_{\ell-1}+1},\ldots,w_{\tau_\ell} \rbrace$. Now, suppose that at iteration $\tau_{\ell-1}$, the subset $\mathcal{W} \subset \lbrace W_1,\ldots,W_N \rbrace$ has been exhausted. Then, to ensure that $\lbrace W_1,\ldots,W_N \rbrace$ is contained in $\lbrace w_{\tau_{\ell-1}+1},\ldots,w_{\tau_\ell} \rbrace$, we need to exhaust $\mathcal{W}^c$ and then the entire set $\lbrace W_1,\ldots,W_N \rbrace$. Since $|\mathcal{W}^c| \leq N$, we need at most $2N$ more iterations from $\tau_{\ell-1}$ to achieve $\tau_{\ell}$. Therefore, $\tau_{\ell} - \tau_{\ell-1} \leq 2N$. Now, let $\mathcal{F}$ denote all matrices whose columns are maximal linearly independent subsets of
\begin{equation}
\left\lbrace \frac{A'W_1}{\norm{A'W_1}_2},\ldots,\frac{A'W_N}{\norm{A'W_N}_2} \right\rbrace.
\end{equation}
Then, $\mathcal{F}_\ell \subset \mathcal{F}$. Therefore,
\begin{equation}
\gamma_\ell = 1 - \min_{F \in \mathcal{F}_\ell} \det(F'F) \leq 1 - \min_{F \in \mathcal{F}} \det(F'F) =: \gamma.
\end{equation}
It is clear, by Hadamard's inequality, that $\gamma \in [0,1)$. Hence, $\lim_{\ell \to \infty} \prod_{j=1}^\ell \gamma_j \leq \lim_{\ell \to \infty} \gamma^\ell = 0$. The result follows by \cref{theorem:no-mem-convergence}.
\end{proof}
It is worth pausing here to compare our approach in \cref{theorem-no-mem-w-o-replacement} to previous results for cyclic row-action methods (e.g., \cite{kaczmarz1993},\footnote{This is a translated copy of Kaczmarz's original article, which is published in German \citep{karczmarz1937}.} algebraic reconstruction technique \citep{gordon1970}, cyclic block Kaczmarz). Our use of Meany's inequality to analyze such methods is not novel: Meany's inequality has been used previously to analyze deterministic row-action methods \citep{galantai2005,bai2013,wallace2014} with even more sophisticated refinements of Meany's inequality than what we have here, and a detailed comparison of Meany's inequality and other approaches to analyzing these deterministic variants can be found in \cite{dai2015}. However, our use of Meany's inequality generalizes these deterministic approaches as it (1) allows for an arbitrary transformation (via $\lbrace W_1,\ldots,W_N \rbrace$) of the original system, which has proven to be a fruitful approach vis-\`{a}-vis matrix sketching \cite{woodruff2014}; and (2) allows for the benefits of random cyclic sampling, which many have observed to be the most productive route in practice; indeed, there is mounting evidence in adjacent fields that random cyclic sampling has practical benefits \citep{lee2019,wright2020}.
While our generalizations are valuable, further improvements are to be found by marrying our randomization framework with the more nuanced refinements of Meany's inequality found in \cite{galantai2005} and \cite{bai2013}, which we leave to future efforts.
The next result revisits the case of independent and identically distributed sampling. The result makes intuitive sense as, for such a situation, we should expect the differences in the stopping times to be independent and identically distributed, which results in the natural conclusion that the $\gamma_\ell$ are also independent and identically distributed. Moreover, we show that, eventually, the rate of convergence is almost controlled by $\E{\gamma_1}$ with probability one. We again stress here that the generality of the results naturally makes them quite loose, and we discuss this further after the result.
\begin{proposition} \label{theorem-no-mem-w-replacement}
Let $w$ and $\lbrace w_\ell : \ell+1 \in \mathbb{N} \rbrace$ be defined as in \cref{theorem: iid sampling}. Then, under the setting of \cref{theorem:no-mem-convergence}, $\tau_\ell < \infty$ almost surely for all $\ell \in \mathbb{N}$, and $\lbrace \gamma_\ell : \ell \in \mathbb{N} \rbrace$ are independent and identically distributed such that
$\E{\gamma_1} = 1 - \E{ \min_{ F \in \mathcal{F}_1} \det(F'F)} < 1$.
Hence, for all $\ell \in \mathbb{N}$ and $\delta > 1$,
\begin{equation}
\Prb{\bigcup_{j=1}^\infty \bigcap_{\ell=j}^\infty \left\lbrace \norm{ x_{\tau_\ell+1} - x^* - P_{\mathcal{N}(w)}(x_0 - x^*) }_2^2 \leq \E{ \gamma_1}^{\frac{\ell}{\delta}} \norm{ P_{\mathcal{R}(w)}(x_0 - x^*) }_2^2 \right\rbrace} =1,
\end{equation}
where $\E{ \gamma_\ell } \in [0,1)$. Moreover, with probability one, $\lim_{ \ell \to \infty} \tau_\ell/\ell = \E{ \tau_1}$.
\end{proposition}
\begin{remark}
In the proof below, we also compute the probability for each $j$ for which the conclusion of the preceding result holds. Thus, we can also make the usual ``high-probability'' statements without any additional effort.
\end{remark}
\begin{proof}
Again, our main workhorse will be \cite[Theorem 4.1.3]{durrett2010}. By this result, conditioned on $\tau_{\ell-1}$, $\lbrace A'w_{\tau_{\ell-1}+1},A'w_{\tau_{\ell-1}+2},\ldots \rbrace$ are independent and identically distributed. By this property, conditioned on $\tau_{\ell-1}$, $\tau_{\ell} - \tau_{\ell-1}$ is independent of $\tau_{\ell-1}$ and has the same distribution for all $\ell \in \mathbb{N}$. We conclude then that since $\gamma_\ell$ is a function of $\lbrace A'w_{\tau_{\ell-1}+1},\ldots,A'w_{\tau_\ell} \rbrace$, then $\gamma_\ell$ are independent and identically distributed. We now conclude that \cref{theorem-eqn:no-mem-stop-rate} holds with probability one by applying \cref{theorem:no-mem-convergence}. For any $\delta > 1$, by Markov's inequality and independence,
\begin{equation}
\Prb{ \prod_{j=1}^\ell \gamma_j > \E{ \gamma_1}^{\ell/\delta} } \leq \left(\E{ \gamma_1}^{1 - \frac{1}{\delta}} \right)^\ell.
\end{equation}
Since $\E{ \gamma_1}^{1 - \frac{1}{\delta}} < 1$, the Borel-Cantelli lemma implies that, with probability one, the product of the $\gamma_j$ is eventually less than $\E{\gamma_1}^{\ell/\delta}$.
\end{proof}
Here, we again take a moment to compare this result to the results of \cite{richtarik2017}. Namely, we are interested in how the rate of convergence of \cref{theorem-no-mem-w-replacement} compares with the rate of convergence result in \cite{richtarik2017}. To make this comparison, we numerically estimate the theoretical rates of convergence proposed by our result and the result of \cite{richtarik2017} on five matrices from the {\tt MatrixDepot} (as described in \cref{section: experiments}). We show these comparisons in \cref{table:iid-rate-comparison}. As expected, the results of \cite{richtarik2017}, which are specialized to the i.i.d. case and apply on average, are much tighter than our general results that apply to more than just the i.i.d. case and hold with probability one.
\input{tables/iid_rate_of_convergence_theory}
\subsection{Adaptive Sampling Schemes} \label{subsection:adaptive-sampling}
To bookend this section, we discuss how our results can be applied to a broad set of adaptive methods that make use of the residual information at a given iterate, whether deterministically (e.g., \cite{motzkin1954,gubin1967,lent1976,censor1981}) or randomly (e.g., \cite{nutini2016,bai2018,haddock2019}). In \cref{subsubsection:framework-adaptive}, we will begin with some formalism to establish a general class of adaptive methods, and we then prove convergence and a rate of convergence for such methods. In \cref{subsubsection:specific-examples-adaptive}, we provide concrete examples.
\subsubsection{A General Class and Analysis of Adaptive Methods} \label{subsubsection:framework-adaptive}
To be rigorous, let $x_0 \in \mathbb{R}^d$ and let $\varphi: (A,b,\lbrace x_j : j \leq k \rbrace) \mapsto w_k$ be an adaptive rule that generates $\lbrace w_k \rbrace$ and $\lbrace x_k \rbrace$ according to the following procedure (a numerical sketch follows the remark below): for $k +1 \in \mathbb{N}$,
\begin{equation} \label{eqn:adaptive-procedure}
\begin{aligned}
w_k &= \varphi(A, b , \lbrace x_j : j \leq k \rbrace) \\
x_{k+1} & = x_{k} + \frac{A'w_k w_k'(b - Ax_k)}{\norm{A' w_k}_2^2}.
\end{aligned}
\end{equation}
\begin{remark}
While we will focus on the base methods of type \cref{eqn:no-memory-iteration}, methods of the type \cref{eqn:no-memory-column-iteration} can be handled analogously.
\end{remark}
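For concreteness, the following sketch instantiates \cref{eqn:adaptive-procedure} with a $1$-Markovian choice of $\varphi$ that selects the equation with the largest absolute residual (in the spirit of the maximum residual method discussed below); the helper name \texttt{max\_residual\_phi} is ours.
\begin{verbatim}
import numpy as np

def max_residual_phi(A, b, x):
    # phi(A, b, x): basis vector of the largest absolute residual.
    i = np.argmax(np.abs(A @ x - b))
    e = np.zeros(A.shape[0])
    e[i] = 1.0
    return e

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 3))
x_star = rng.standard_normal(3)
b = A @ x_star

x = np.zeros(3)
for _ in range(100):
    w = max_residual_phi(A, b, x)
    Aw = A.T @ w
    x = x + Aw * (w @ (b - A @ x)) / (Aw @ Aw)
print(np.linalg.norm(x - x_star))
\end{verbatim}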
While \cref{eqn:adaptive-procedure} is quite general, the vast majority of adaptive schemes make further restrictions that we abstract in the following definitions.
\begin{definition}[Markovian] \label{defn:markovian}
For a fixed integer $\eta$, an adaptive procedure, $\varphi$, is $\eta$-Markovian if the conditional distribution of $\varphi(A,b, \lbrace x_j : j \leq k \rbrace) $ given $\lbrace x_j : j \leq k \rbrace$ is equal to the conditional distribution of $\varphi(A, b, \lbrace x_j : j \leq k \rbrace)$ given $\lbrace x_j : k - \eta < j \leq k \rbrace$. If a procedure is $1$-Markovian, we will frequently call it Markovian.
\end{definition}
A consequence of the $\eta$-Markovian property is that we can write $\varphi (A , b , \lbrace x_j : j \leq k \rbrace) $ as $\varphi (A, b, \lbrace x_j : k - \eta < j \leq k \rbrace )$. In the case of a $1$-Markovian adaptive procedure, we will simply write $\varphi(A, b, x_k )$. The $1$-Markovian property is readily satisfied for a number of common procedures analyzed in the literature (e.g., maximum residual, maximum distance, etc.), which may suggest that the $\eta$-Markovian notion is irrelevant for general $\eta$. We contend, though, that procedures that are memory-sensitive may be more apt to make use of the $\eta$-Markovian property for $\eta > 1$. For example, to demonstrate its potential value, consider a procedure that selects the equations with the top $\eta$ residuals, pulls them into memory, and simply cycles through them deterministically or randomly. Then this simple procedure would be $\eta$-Markovian. However, owing to the lack of such procedures in the literature, we will focus on the $1$-Markovian case for which we can write $\varphi(A, b , x)$, and note that the results and definitions are readily extendable.
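To illustrate, here is a sketch of the $\eta$-Markovian procedure just described; the function \texttt{top\_eta\_cycler} and its mutable \texttt{state} list (which holds the in-memory cycle between calls) are our own devices.
\begin{verbatim}
import numpy as np

def top_eta_cycler(A, b, x, eta, state):
    # When the in-memory cycle is empty, load the eta equations with
    # the largest absolute residuals; otherwise keep cycling.
    if not state:
        r = np.abs(A @ x - b)
        state.extend(np.argsort(r)[-eta:][::-1])
    i = state.pop(0)
    e = np.zeros(A.shape[0])
    e[i] = 1.0
    return e
\end{verbatim}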
The next definition establishes another key property of these adaptive schemes that rely on residuals.
\begin{definition}[Magnitude Invariance] \label{defn:mag-invar}
Let $H$ represent the set of solutions to $Ax = b$, and let $P_H : \mathbb{R}^d \to H$ represent the projection of a vector onto $H$,\footnote{Since $H$ is a flat, $P_H$ is not guaranteed to be a linear operator.} then an adaptive procedure, $\varphi$, is magnitude invariant if, for any $x \not\in H$ and any $\lambda > 0$, the distribution of $\varphi( A, b, x )$ is equal to the distribution of
\begin{equation}
\varphi ( A, b, P_H(x) + \lambda [ x - P_H(x) ] ).
\end{equation}
\end{definition}
The magnitude invariance of a number of adaptive methods often follows from the following simple calculation that we state as a lemma for future reference.
\begin{lemma} \label{lemma:residual-mag-invar}
Let $x \in \mathbb{R}^d$ and let $v_1, v_2 \in \mathbb{R}^n$. Then, for any $\lambda > 0$, if $ | v_1' (A x - b) | \geq |v_2' (Ax - b)|$ then
\begin{equation}
| v_1' (A ( P_H(x) + \lambda [ x - P_H(x) ] ) - b) | \geq | v_2' (A ( P_H(x) + \lambda [ x - P_H(x) ] ) - b) |.
\end{equation}
If the hypothesis holds with a strict inequality, then so does the conclusion.
\end{lemma}
\begin{proof}
Note, $A P_H(x) = b$. Therefore, $ A (P_H(x) + \lambda [ x - P_H(x) ] ) - b = \lambda (A x - b)$. From the hypothesis and $\lambda > 0$, $\lambda | v_1' (Ax - b) | \geq \lambda | v_2'(Ax - b)|$. Also owing to $\lambda > 0$, we can replace the inequalities with strict inequalities.
\end{proof}
Furthermore, the magnitude invariance property has hidden within it an additional feature: the projection of $x$ onto the null space is irrelevant (as we might expect for a procedure depending on the residual). As a result, we can, without loss of generality, restrict our discussion to $x$ that are in the row space of $A$, which has a unique intersection with $H$ at a point that we denote $x_{\row}^*$. Furthermore, the magnitude invariance property allows us to focus specifically on the Euclidean unit sphere around $x_{\row}^*$, which we denote by $\mathbb{S}(x_{\row}^*)$. This will be essential to the next definition.
The final definition ensures that if \cref{eqn:adaptive-procedure} makes too much progress along one particular subspace, then it must have a nonzero probability of exploring an orthogonal subspace relative to, roughly, the row space of $A$. Before stating this definition, we need to be slightly careful here with using the row space of $A$: if the rows of $A$ can be partitioned into two sets that are mutually orthogonal and $x_0$ is initialized in the span of one of these subsets, then we will never need to visit the other set and, consequently, we will never observe the entire row space of $A$. To account for this, we can focus on the restricted row space,
\begin{equation} \label{eqn:restricted-row}
\rrow(A) = \mathrm{span}[ A_{i,\cdot} : A_{i,\cdot}'x_0 \neq b_i ].
\end{equation}
This definition may seem unnecessary as we can account for this (more generally) via $\mathcal{R}(w)$ by an appropriate choice of $w$. However, in our previous statements, we defined $w$ before specifying $x_0$. Here, we would need to know $x_0$ in order to define $w$ and, thus, $\mathcal{R}(w)$ appropriately. Fortunately, an examination of the preceding results shows that this ordering is not important and the results hold even if $w$ is defined given $x_0$ or even future iterates. With this explanation in hand, we can now state the final definition.
\begin{definition}[Exploratory] \label{defn:exploratory}
Let $x_0 \in \mathbb{R}^d$ and define $\rrow(A)$ accordingly. An adaptive procedure, $\varphi$, is exploratory if for any proper subspace $V \subsetneq \rrow(A)$, there exists $\pi \in (0,1]$ such that
\begin{equation} \label{eqn-defn-exploratory}
\sup_{ x \in \mathbb{S}(x_{\row}^*) \cap V } \Prb{ A' \varphi(A, b, x) \perp V } \leq 1 - \pi.
\end{equation}
\end{definition}
\begin{remark}
If magnitude invariance does not hold, then we could specify the exploratory property to hold for any point in $V$ that is distinct from $x_{\row}^*$. For this modified definition of the exploratory property, the results below would still hold. Then, why should we keep the magnitude invariance property? It is out of practicality. The magnitude invariance property allows us to restrict the verification of the exploratory property to the unit ball, and then we can apply it to any iterate regardless of its distance to the solution.
\end{remark}
For a Markovian, magnitude invariant and exploratory adaptive scheme, $\varphi$, we will need one assumption before stating the result.
\begin{assumption} \label{assumption:max-convergence}
Let $\mathcal{F}$ denote the set of matrices whose columns are normalized, maximal linearly independent subsets of
\begin{equation}
\left\lbrace A' \varphi(A, b, x_1),\ldots, A' \varphi(A, b, x_d) \right\rbrace,
\end{equation}
where $x_1,\ldots,x_d \in \mathbb{R}^d$ are arbitrary vectors. Suppose, for this choice of $\varphi$,
\begin{equation}
1 - \inf_{ F \in \mathcal{F} } \det( F' F) =: \gamma \in [0,1).
\end{equation}
\end{assumption}
\begin{remark}
As we will see, \cref{assumption:max-convergence} is sufficient for us to uniformly treat the many examples in the literature that are selecting equations or, more generally, are of the form in \cref{lemma: rand perm sampling}, rather than generating linear combinations of them. In the case of linear combinations, we could refine this assumption to account for the nature of the linear combinations as we do in \cref{theorem-no-mem-w-replacement}.
\end{remark}
\begin{theorem} \label{theorem:adaptive-convergence}
Suppose $Ax = b$ admits a solution $x^*$ (not necessarily unique); let $H$ denote the set of all solutions, and $P_H$ be the projection onto this flat. Let $x_0 \in \mathbb{R}^d$ and let $\rrow(A)$ be defined as above (see \cref{eqn:restricted-row}). Moreover, let $\varphi$ be a $1$-Markovian, magnitude invariant and exploratory adaptive procedure satisfying \cref{assumption:max-convergence} that generates $\lbrace x_k \rbrace$ and $\lbrace w_k \rbrace$ according to \cref{eqn:adaptive-procedure} and so that $\Prb{ A'w_k \in \rrow(A) } = 1$ for all $k + 1 \in \mathbb{N}$. Then, there exists an increasing sequence of stopping times $\lbrace \tau_\ell : \ell \in \mathbb{N} \rbrace$ such that $\Prb{ E_1 \cup E_2 } = 1$, where:
\begin{enumerate}
\item $E_1$ is the event of iterates that terminate finitely to a solution of $Ax=b$; that is,
\begin{equation}
E_1 = \bigcup_{\ell \in \mathbb{N}} \left\lbrace x_{\tau_{\ell} + 1} \in H \right\rbrace.
\end{equation}
\item $E_2$ is the event of iterates that converge, without terminating finitely, to a solution of $Ax=b$; that is,
\begin{equation}
E_2 = \bigcap_{\ell \in \mathbb{N}} \left\lbrace \norm{ x_{\tau_{\ell} + 1} - P_H(x_0) }_2^2 \leq \gamma^\ell \norm{ x_0 - P_H(x_0) }_2^2 \right\rbrace.
\end{equation}
Moreover, on $E_1$, $\tau_\ell$ has finite expectation for $\ell$ such that $x_{\tau_{\ell}+1} \in H$. Similarly, on $E_2$, $\tau_{\ell}$ has finite expectation for all $\ell$.
\end{enumerate}
\end{theorem}
\begin{proof}
Without loss of generality, we will assume $x_0 \in \row(A)$. We will consider the nontrivial case where $x_0 \neq x_{\row}^*$. Note, by the construction of $\rrow(A)$, it must then hold that $x_0 - x_{\row}^* \in \rrow(A)$. To prove the result, we will make three claims of the following rough nature and purpose, which we will make precise below.
\begin{enumerate}
\item Finite termination occurs at a point $x_{k+1}$ if and only if $A'\varphi(A,b,x_k)$ is parallel to $x_k - x_{\row}^*$. We will use this claim to specify the set $E_1$.
\item At the first time that the span of the iterate errors, $\mathrm{span}[\lbrace x_k - x_{\row}^* \rbrace]$, fails to (non-trivially) increase in dimension, the corresponding $\lbrace A'w_k \rbrace$ up to this iterate span the subspace. As a result, with an appropriate definition of $\mathcal{R}(w)$, we will apply \cref{theorem:no-mem-convergence} to prove a multiplicative decrease in the iterate errors by a factor of $\gamma$.
\item Finally, we show that the first time that the span of the iterate errors fails to (non-trivially) increase in dimension must be finite with probability one and have bounded expectation. By combining the first claim with this claim, we have the property specified by the event $E_1$. By combining this claim with the second claim, we have the property specified by the event $E_2$. By this claim alone, we have that $\Prb{E_1 \cup E_2} = 1$.
\end{enumerate}
To establish our claims, we need some additional notation. Let $\xi$ be an arbitrary finite stopping time and define
\begin{equation}
V_k = \mathrm{span}\left[ x_{\xi} - x_{\row}^*,x_{\xi+1} - x_{\row}^*,\ldots, x_{\xi+k} - x_{\row}^* \right],
\end{equation}
and $V_{k}^0 = \mathrm{span} \left[ x_{\xi+k} - x_{\row}^* \right]$.
Furthermore, define
\begin{equation}
\nu = \min \left\lbrace k \geq 0: x_{\xi + k + 1} - x_{\row}^* \in V_{k}, x_{\xi+k+1} \neq x_{\xi+k} \right\rbrace.
\end{equation}
Note, $\nu$ corresponds to the first time that the span of the iterate errors, starting at $\xi$, fails to non-trivially increase in dimension. It will often be more succinct to specify the non-trivial cases by an indicator variable given by
\begin{equation}
\chi_{\xi+k} = \1{ \varphi(A,b,x_{\xi+k})'A(x_{\xi+k} - x_{\row}^*) \neq 0}.
\end{equation}
By \cref{eqn:adaptive-procedure}, we can readily replace $x_{\xi+k+1} \neq x_{\xi+k}$ in the definition of $\nu$ with $\chi_{\xi+k} = 1$. We now state and prove our claims precisely.
\underline{Claim 1:}
Suppose $x_{\xi} - x_{\row}^* \neq 0$. We claim that $x_{\xi+1} = x_{\row}^*$ if and only if $A'\varphi(A,b,x_\xi) \in V_0 \setminus \lbrace 0 \rbrace$.
Note, this claim readily follows from
\begin{equation}
x_{\xi+1} - x_{\row}^* = x_{\xi} - x_{\row}^* - \frac{A' \varphi(A,b,x_\xi) \varphi(A,b,x_\xi)'A}{\norm{ A'\varphi(A,b,x_\xi) }_2^2} (x_\xi - x_{\row}^*),
\end{equation}
which, in turn, follows from \cref{eqn:adaptive-procedure}.
\underline{Claim 2:} Suppose $\nu$ is finite and define $V_{\nu}$. We claim that
\begin{equation}
\mathrm{span}\left[ A'\varphi(A, b, x_{\xi} ) \chi_{\xi},\ldots, A'\varphi(A, b, x_{\xi+\nu} ) \chi_{\xi +\nu} \right] = V_{\nu}.
\end{equation}
We first note that $A'\varphi(A, b, x_{\xi + k})\chi_{\xi+k} \in V_{\nu}$ for any $k \in [0,\nu]$ by \cref{eqn:adaptive-procedure}. Therefore, we see that the span of $\Phi = \lbrace A'\varphi(A, b, x_{\xi} )\chi_{\xi},\ldots, A'\varphi(A,b,x_{\xi+\nu}) \chi_{\xi+\nu} \rbrace$ is contained in $V_{\nu}$. To show that $V_{\nu}$ is included in the span of $\Phi$, note that, by the definition of $V_{\nu}$ and by \cref{eqn:adaptive-procedure},
\begin{equation} \label{eqn-proof:repeating-subspace}
V_{\nu} = \mathrm{span}\left[ A'\varphi(A, b, x_{\xi}) \chi_{\xi},\ldots, A'\varphi(A,b,x_{\xi+\nu-1})\chi_{\xi+\nu-1}, x_{\xi+\nu} - x_{\row}^* \right].
\end{equation}
Moreover, the nonzero terms on the generating set on the right hand side of \cref{eqn-proof:repeating-subspace} must be linearly independent, as anything else would contradict the minimality of $\nu$. We are left to show that $x_{\xi+\nu} - x_{\row}^*$ is in the span of $\Phi$. To do this, we perform Gram-Schmidt on the generating set in \cref{eqn-proof:repeating-subspace} starting with $x_{\xi+\nu} - x_{\row}^*$. Denote the remaining vectors in this set $\phi_1,\ldots,\phi_{r-1}$ where $r = \dim( V_{\nu} )$. Then, by the definition of $\nu$, $x_{\xi+\nu+1} - x_{\row}^* \in V_{\nu}$. Therefore, there exist constants $c_0,\ldots,c_{r-1}$ such that
\begin{equation}
\begin{aligned}
& c_0 (x_{\xi+\nu} - x_{\row}^*) + \sum_{j=1}^{r-1} c_j \phi_j \\
&\quad = x_{\xi+\nu} - x_{\row}^* - \frac{A' \varphi(A, b, x_{\xi+\nu})\varphi(A, b, x_{\xi+\nu})'A }{\norm{A'\varphi(A, b, x_{\xi+\nu})}_2^2} ( x_{\xi + \nu} - x_{\row}^* ).
\end{aligned}
\end{equation}
If $c_0 \neq 1$, we see that the claim follows. For a contradiction, suppose that $c_0 = 1$. Then $A'\varphi(A,b,x_{\xi+\nu})$ can be written as a linear combination of vectors that are orthogonal to $x_{\xi+\nu} - x_{\row}^*$. This would imply then that $\chi_{\xi+\nu} = 0$, which contradicts the definition of $\nu$. Hence, we see that the claim holds.
\underline{Claim 3:} For any finite stopping time $\xi$, $\nu$ is finite with probability one and has bounded expectation.
To show this, we define a sequence of stopping times. Define
\begin{equation}
s_1 = \min \left\lbrace k : \chi_{\xi+k} \neq 0 \right\rbrace,
\end{equation}
and
\begin{equation}
s_{j} = \min \left\lbrace k : \chi_{\xi+s_1+\cdots+s_{j-1} + k } \neq 0 \right\rbrace.
\end{equation}
By the definition of $\nu$, $\nu$ can only take values in $\lbrace \sum_{i=1}^j s_i : j \in \mathbb{N} \rbrace$. Moreover, at each $s_j$, we must either observe $\lbrace \dim( V_{s_1 + \cdots + s_j + 1} ) = \dim( V_{s_1 + \cdots + s_j}) + 1 \rbrace$ or $\lbrace \nu \leq \sum_{i=1}^j s_i \rbrace$. Hence, $\nu$ can only take values in $\lbrace \sum_{i=1}^j s_i : j = 1,\ldots, r \rbrace$ where $r = \dim( \rrow(A) )$. Thus, if we show that each $s_j$ is finite and has bounded expectation, then $\nu$ must be finite and have bounded expectation. By the magnitude invariance, Markovian and exploratory properties, we conclude that
\begin{equation} \label{eqn:s_j}
\begin{aligned}
&\condPrb{s_j = k}{\xi, s_1,\ldots,s_{j-1}, x_{\xi},\ldots,x_{\xi+s_1+\cdots+s_{j-1}+1 } } \\
&\quad \leq ( 1- \pi( V_{s_1+\cdots+s_{j-1} + 1}) )^{k-1} \pi( V_{s_1+\cdots+s_{j-1} + 1}).
\end{aligned}
\end{equation}
Therefore, we see that $s_j$ is finite and has bounded expectation.
\underline{Conclusion:} From these three claims we can now prove the result by induction.
\paragraph{Base Case} Define $\mathfrak{E}_0^c = \lbrace x_0 \neq x_{\row}^* \rbrace$. On this event, we take $\xi = 0$ and define $\tau_1$ to be the corresponding $\nu$. On $\mathfrak{E}_0^c$, $\tau_1$ is finite and has finite expectation by Claim 3. Then, we can define, as a subset of $\mathfrak{E}_0^c$,
\begin{equation}
\mathfrak{E}_1 = \lbrace A'\varphi(A,b,x_{\tau_1}) \in V_{\tau_1}^0 \setminus \lbrace 0 \rbrace \rbrace,
\end{equation}
and $\mathfrak{E}_1^c$ to be its relative complement on $\mathfrak{E}_0^c$.
Note,
\begin{enumerate}
\item By Claim 1, $\mathfrak{E}_1$ is equivalent to the event $x_{\tau_1 + 1} = x_{\row}^*$ up to a measure zero set.
\item By Claim 2, \cref{theorem:no-mem-convergence} with $\mathcal{R}(w) = V_{\tau_1}$, and \cref{assumption:max-convergence}, $\mathfrak{E}_1^c$ is contained in the event on which
\begin{equation}
\norm{ x_{\tau_1 + 1} - x_{\row}^* }_2^2 \leq \gamma \norm{ x_0 - x_{\row}^*}_2^2
\end{equation}
up to a measure zero set.
\end{enumerate}
\paragraph{Induction Hypothesis} Let $\ell \in \mathbb{N}$. On the event $\mathfrak{E}_{\ell-1}^c$, we let $\xi = \tau_{\ell-1} + 1$ and, for the correspondingly defined $\nu$, we can define $\tau_{\ell} = \tau_{\ell-1} + 1 + \nu$. Furthermore, on $\mathfrak{E}_{\ell-1}^c$, $\tau_{\ell}$ is finite and has finite expectation. We can define, as a subset of $\mathfrak{E}_{\ell-1}^c$,
\begin{equation}
\mathfrak{E}_{\ell} = \lbrace A' \varphi(A, b, x_{\tau_{\ell}}) \in V_{\tau_{\ell}}^0 \setminus \lbrace 0 \rbrace\rbrace,
\end{equation}
and $\mathfrak{E}_{\ell}^c$ to be its relative complement on $\mathfrak{E}_{\ell-1}^c$.
Further,
\begin{enumerate}
\item $\mathfrak{E}_{\ell}$ is equivalent to the event $x_{\tau_{\ell}+1} = x_{\row}^*$ up to a measure zero set.
\item $\mathfrak{E}_{\ell}^c$ is contained in the event on which
\begin{equation}
\norm{ x_{\tau_{\ell} + 1} - x_{\row}^* }_2^2 \leq \gamma \norm{ x_{\tau_{\ell-1} + 1} - x_{\row}^*}_2^2
\end{equation}
up to a measure zero set.
\end{enumerate}
\paragraph{Generalization} On the event $\mathfrak{E}_{\ell}^c$, we let $\xi = \tau_{\ell}+1$ and, for the correspondingly defined $\nu$, we can define $\tau_{\ell+1} = \tau_{\ell} + 1 + \nu$. On $\mathfrak{E}_{\ell}^c$, $\tau_{\ell+1}$ is finite and has finite expectation by Claim 3. Then, we can define, as a subset of $\mathfrak{E}_{\ell}^c$,
\begin{equation}
\mathfrak{E}_{\ell+1} = \lbrace A' \varphi(A, b, x_{\tau_{\ell+1}}) \in V_{\tau_{\ell+1}}^0 \setminus \lbrace 0 \rbrace\rbrace,
\end{equation}
and $\mathfrak{E}_{\ell+1}^c$ to be its relative complement on $\mathfrak{E}_{\ell}^c$.
\begin{enumerate}
\item By Claim 1, $\mathfrak{E}_{\ell+1}$ is equivalent to the event $x_{\tau_{\ell+1} + 1} = x_{\row}^*$ up to a measure zero set.
\item By Claim 2, \cref{theorem:no-mem-convergence} with $\mathcal{R}(w) = V_{\tau_{\ell+1}}$, and \cref{assumption:max-convergence}, $\mathfrak{E}_{\ell+1}^c$ is contained in the event on which
\begin{equation}
\norm{ x_{\tau_{\ell+1} + 1} - x_{\row}^* }_2^2 \leq \gamma \norm{ x_{\tau_\ell + 1} - x_{\row}^*}_2^2
\end{equation}
up to a measure zero set.
\end{enumerate}
Therefore, by the induction claims, letting
\begin{equation}
E_1 = \bigcup_{\ell \in \mathbb{N}} \mathfrak{E}_{\ell}
\end{equation}
and
\begin{equation}
E_2 = \bigcap_{\ell \in \mathbb{N}} \mathfrak{E}_{\ell}^c,
\end{equation}
we conclude that $\Prb{ E_1 \cup E_2} =1$.
\end{proof}
\subsubsection{Applying our General Theory to Specific Adaptive Schemes} \label{subsubsection:specific-examples-adaptive}
To demonstrate the utility of \cref{theorem:adaptive-convergence}, we show that a number of classical and recent methods satisfy \cref{defn:mag-invar,defn:markovian,defn:exploratory,assumption:max-convergence}. In fact, we will show that a stronger version of \cref{defn:exploratory} holds for these methods, which allows us to explicitly upper bound the elements of $\lbrace \E{\tau_{\ell}}: \ell \in \mathbb{N} \rbrace$ (when they are defined).
\begin{proposition} \label{proposition:convergence-specific-adaptive}
Suppose $Ax = b$ admits a solution $x^*$. Let $x_0 \in \mathbb{R}^d$ and let $\rrow(A)$ be defined as above (see \cref{eqn:restricted-row}). Suppose that we define $\lbrace x_k \rbrace$ and $\lbrace w_k \rbrace$ according to \cref{eqn:adaptive-procedure} for the following adaptive methods
\begin{enumerate}
\item the maximum residual method \citep[see][Section 4]{agmon1954};
\item the maximum distance method \citep[see][Section 3]{agmon1954};
\item the Greedy Randomized Kaczmarz method \citep[see][Method 2]{bai2018};
\item the Sampling Kaczmarz-Motzkin method \citep[see][Page 4]{haddock2019}.
\end{enumerate}
Then, for each of the above methods, there exists a $\gamma \in [0,1)$ such that the conclusions of \cref{theorem:adaptive-convergence} hold. Moreover, there exists a constant $\kappa$ such that for any finite $\tau_{\ell}$ (as specified in \cref{theorem:adaptive-convergence}), $\E{\tau_{\ell}} \leq \ell \kappa$.
\end{proposition}
\begin{remark}
Greedy Randomized Kaczmarz is an example of a broader class of methods that deterministically determine a threshold over the residuals; select the equations whose residuals surpass this threshold; and then randomly select from this set. So long as the threshold satisfies the magnitude invariance property and the random selection assigns positive probability to every equation in the set, the result applies to this more general class. Similarly, Sampling Kaczmarz-Motzkin is an example of methods that randomly determine a set of equations and then deterministically select from this subset based on the residual values. So long as the random subset assigns positive probability to every equation that is not already satisfied, the result applies to this more general class as well.
\end{remark}
\begin{remark}
Our partial orthogonalization methods (see \cref{alg: rank-one RPM low mem}) do not satisfy the $\eta$-Markovian property, as the partial orthogonalizations depend on every preceding iterate.
\end{remark}
For each method, we show that it satisfies \cref{defn:mag-invar,defn:markovian,defn:exploratory,assumption:max-convergence} and, in fact, that a stronger version of \cref{defn:exploratory} holds.
We will start by establishing several general facts that will be useful in the discussion of each method.
\begin{lemma} \label{lemma:min-max-inner-product}
Let $x_0 \in \row(A)$ and define $\rrow(A)$ as in \cref{eqn:restricted-row}. Then,
\begin{equation}
\inf_{ v \in \rrow(A) \cap \mathbb{S}(0) } \max _{ i \in \lbrace 1, \ldots, n \rbrace} \frac{|A_{i,\cdot}'v|}{\norm{A_{i,\cdot}}_2} =: c > 0,
\end{equation}
where $\mathbb{S}(0)$ is the Euclidean unit sphere around the zero vector.
\end{lemma}
\begin{proof}
For each $v \in \rrow(A) \cap \mathbb{S}(0)$, we see that
\begin{equation}
\max_{i \in \lbrace 1,\ldots, n \rbrace } \frac{|A_{i,\cdot}' v | }{\norm{ A_{i,\cdot} }_2 } =: c_v > 0,
\end{equation}
else $v \perp \rrow(A)$ while $v \in \rrow(A) \cap \mathbb{S}(0) \subset \rrow(A)$, which is a contradiction since $v \neq 0$. By continuity, we can construct an open ball $D_v$ around each $v \in \rrow(A) \cap \mathbb{S}(0)$ such that
\begin{equation}
\max_{i \in \lbrace 1, \ldots, n \rbrace } \frac{|A_{i,\cdot}'\tilde v|}{\norm{ A_{i,\cdot}}_2 } > c_v / 2,
\end{equation}
for all $\tilde v \in D_v \cap \mathbb{S}(0)$. Now, $\lbrace D_v : v \in \rrow(A) \cap \mathbb{S}(0) \rbrace$ is an open cover of $\rrow(A) \cap \mathbb{S}(0)$, which is a compact space. Hence, there is a finite subcover given by $\lbrace D_{v_1},\ldots,D_{v_K} \rbrace$. Since each $v \in \rrow(A) \cap \mathbb{S}(0)$ belongs to some element of the subcover, it follows that
\begin{equation}
\inf_{ v \in \rrow(A) \cap \mathbb{S}(0) } \max _{ i \in \lbrace 1, \ldots, n \rbrace} \frac{|A_{i,\cdot}'v|}{\norm{A_{i,\cdot}}_2} \geq \min \lbrace c_{v_1}/2,\ldots, c_{v_K}/2 \rbrace > 0.
\end{equation}
Therefore $c > 0$.
\end{proof}
\begin{lemma} \label{lemma:convergence-rate-bound}
Let $x_0 \in \row(A)$ and define $\rrow(A)$ as in \cref{eqn:restricted-row}. Let $\Phi = \left\lbrace A_{i,\cdot} : A_{i,\cdot} \in\rrow(A) \right\rbrace$. Let $\mathcal{F}$ be the set of matrices whose columns form normalized, maximal linearly independent subsets of $\Phi$. Then
\begin{equation}
1 - \min_{F \in \mathcal{F}} \det(F'F) =: \gamma < 1.
\end{equation}
\end{lemma}
\begin{proof}
There are only finitely many matrices in $\mathcal{F}$ up to column permutations. Therefore, we can choose the $F \in \mathcal{F}$ that minimizes $\det(F'F)$. By Hadamard's inequality, $\det(F'F) \in (0,1]$, which implies that $\gamma \in [0,1)$.
\end{proof}
\paragraph{Maximum Residual Method.} In the maximum residual method, $\varphi(A,b,x)$ is the standard basis vector in $\mathbb{R}^n$, $\lbrace e_1,\ldots,e_n\rbrace$, that solves
\begin{equation}
\max_{e \in \lbrace e_1,\ldots,e_n \rbrace} |e'(Ax - b)|.
\end{equation}
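To make this selection rule concrete, the following is a minimal NumPy sketch (the function name is ours and not from the cited work):
\begin{verbatim}
import numpy as np

def max_residual_index(A, b, x):
    # Index i maximizing |A_i' x - b_i| over all equations.
    return int(np.argmax(np.abs(A @ x - b)))
\end{verbatim}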
\underline{$1$-Markovian:} It follows from the definition of the maximum residual method that it only relies on the current iterate to evaluate $\varphi$. Therefore, it is $1$-Markovian.
\underline{Magnitude Invariance:} By \cref{lemma:residual-mag-invar}, it follows that $\varphi(A,b,x)$ is magnitude invariant.
\underline{Exploratory:} Consider any $A_{i,\cdot} \perp V$. Then, $0 = A_{i,\cdot}'(x - x_{\row}^*) = A_{i,\cdot}'x - b_i$. Therefore, the only equations whose residuals are non-zero are those for which $P_{V} A_{i,\cdot} \neq 0$, and there is at least one such equation by \cref{lemma:min-max-inner-product}. Therefore,
\begin{equation}
\sup_{ x \in \mathbb{S}(x_{\row}^*) \cap V } \Prb{ A'\varphi(A,b,x) \perp V } = 0.
\end{equation}
That is, we satisfy the exploratory property in a stronger manner:
\begin{equation}
\sup_{V \subsetneq \rrow(A)} \sup_{ x \in \mathbb{S}(x_{\row}^*) \cap V } \Prb{ A'\varphi(A,b,x) \perp V } = 0.
\end{equation}
With these three properties verified and by \cref{lemma:convergence-rate-bound}, the conditions of \cref{theorem:adaptive-convergence} are satisfied and the result holds. It remains to show that $\E{ \tau_{\ell}}$ is bounded by some $\ell \kappa$. By the proof of \cref{theorem:adaptive-convergence}, it is enough to bound the conditional expectations of $s_j$ in \cref{eqn:s_j}. Given that $\pi(V) = 1$ for all $V \subsetneq \rrow(A)$,
\begin{equation}
\condPrb{s_j = 1}{\xi, s_1,\ldots,s_{j-1}, x_{\xi},\ldots,x_{\xi+s_1+\cdots+s_{j-1}+1 } } = 1.
\end{equation}
Hence, $\nu \leq \dim(\rrow(A))$. Thus, $\E{\tau_{\ell}} \leq \ell \dim( \rrow(A))$. $\quad\blacksquare$
\paragraph{Maximum Distance Method.} In the maximum distance method, $\varphi(A,b,x)$ is the standard basis vector in $\mathbb{R}^n$ that solves
\begin{equation}
\max_{ e \in \lbrace e_1,\ldots,e_n \rbrace} \frac{|e' (Ax - b) |}{\norm{ A'e }_2}.
\end{equation}
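A minimal sketch of this rule, assuming the usual distance interpretation $|A_{i,\cdot}'x - b_i|/\norm{A_{i,\cdot}}_2$ (the function name is ours):
\begin{verbatim}
import numpy as np

def max_distance_index(A, b, x):
    # Index of the hyperplane farthest from the current iterate x.
    r = np.abs(A @ x - b)
    return int(np.argmax(r / np.linalg.norm(A, axis=1)))
\end{verbatim}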
\underline{$1$-Markovian:} It follows from the definition of the maximum distance method that it only relies on the current iterate to evaluate $\varphi$. Therefore, it is $1$-Markovian.
\underline{Magnitude Invariance:} Note, \cref{lemma:residual-mag-invar} still holds if we were to divide by the norm of $A_{i,\cdot}$. It follows that the maximum distance method is magnitude invariant.
\underline{Exploratory:} Just as in the maximum residual method, if $A_{i,\cdot}$ is orthogonal to a subspace $V$, then $A_{i,\cdot}'x - b_i = 0$ for any $x \in V \cap \mathbb{S}(x_{\row}^*)$. Moreover, by \cref{lemma:min-max-inner-product}, there is at least one equation such that $A_{j,\cdot}'x - b_j \neq 0$ for all $x \in V \cap \mathbb{S}(x_{\row}^*)$. Hence, the maximum distance method satisfies a stronger version of the exploratory condition, namely,
\begin{equation}
\sup_{V \subsetneq \rrow(A)} \sup_{ x \in \mathbb{S}(x_{\row}^*) \cap V } \Prb{ A'\varphi(A,b,x) \perp V } = 0.
\end{equation}
By the same argument as above, \cref{theorem:adaptive-convergence} follows. Similarly, $\E{ \tau_{\ell}} \leq \ell \dim( \rrow(A) )$. $\quad\blacksquare$
\paragraph{Greedy Randomized Kaczmarz.} In \cite{bai2018} (Method 2), a residual threshold is selected given by
\begin{equation} \label{eqn:grk-threshold}
\frac{1}{2}\left( \frac{1}{\norm{Ax - b}_2^2} \max_{ e \in \lbrace e_1,\ldots,e_n \rbrace} \frac{|e' (Ax - b) |^2}{\norm{ A'e }_2^2} + \frac{1}{\norm{A}_F^2} \right).
\end{equation}
Then, from the set of equations whose residuals surpass this threshold (which is shown to contain at least the equation selected by the maximum distance method), an equation is selected with probability proportional to the square of its residual.
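As we read this description, one selection step can be sketched as follows (a sketch under our reading of \cite{bai2018}, not a reference implementation; it assumes $x$ is not already a solution):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def grk_index(A, b, x):
    r = A @ x - b                               # residual (assumed nonzero)
    norms2 = np.sum(A ** 2, axis=1)             # ||A_i||_2^2
    eps = 0.5 * (np.max(r ** 2 / norms2) / (r @ r) + 1.0 / norms2.sum())
    idx = np.where(r ** 2 >= eps * (r @ r) * norms2)[0]  # thresholded set
    p = r[idx] ** 2 / np.sum(r[idx] ** 2)       # proportional to r_i^2
    return int(rng.choice(idx, p=p))
\end{verbatim}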
\underline{$1$-Markovian:} Given that the threshold relies only on the current iterate value and that the random selection criteria only relies on the current residual, it follows that the Greedy Randomized Kaczmarz method is $1$-Markovian.
\underline{Magnitude Invariance:} Suppose $x \not\in H$. For $\lambda > 0$, let $x(\lambda) = P_H(x) + \lambda( x - P_{H}(x) )$. Then, by \cref{lemma:residual-mag-invar},
\begin{equation}
\begin{aligned}
&\frac{1}{\norm{Ax(\lambda) - b}_2^2} \max_{ e \in \lbrace e_1,\ldots,e_n \rbrace} \frac{|e' (A x(\lambda) - b) |^2}{\norm{ A'e }_2^2} \\
&= \quad \frac{1}{\lambda^2 \norm{Ax - b}_2^2} \max_{ e \in \lbrace e_1,\ldots,e_n \rbrace} \frac{\lambda^2|e' (Ax- b) |^2}{\norm{ A'e }_2^2},
\end{aligned}
\end{equation}
which implies that the threshold is magnitude invariant. Similarly, the selection probabilities are magnitude invariant, which follows from the preceding calculation restricted to a nonempty subset of the equations.
\underline{Exploratory:} Let $V \subsetneq \rrow(A)$ be a nontrivial subspace. Then, for any $x \in \mathbb{S}(x_{\row}^*) \cap V$, we saw that any equation for which $P_V A_{i,\cdot} = 0$ has a zero residual. Therefore, the only equations with nonzero residuals are those that are not orthogonal to $V$. Since the threshold is bounded away from zero, only equations that are not orthogonal to $V$ can be in the selected subset. Therefore,
\begin{equation}
\sup_{V \subsetneq \rrow(A)} \sup_{ x \in \mathbb{S}(x_{\row}^*) \cap V } \Prb{ A'\varphi(A,b,x) \perp V } = 0.
\end{equation}
By the same argument as above, \cref{theorem:adaptive-convergence} follows. Similarly, $\E{ \tau_{\ell}} \leq \ell \dim( \rrow(A) )$. $\quad\blacksquare$
\paragraph{Sampling Kaczmarz-Motzkin.} In \cite{haddock2019} (Page 4), a subset of equations is randomly selected, and then the equation with the maximum residual is selected from this subset.
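A minimal sketch of one such selection step, with $\psi$ denoting the sample size (the function name is ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def skm_index(A, b, x, psi):
    # Draw psi equations uniformly without replacement, then take the
    # largest residual (in absolute value) among them.
    subset = rng.choice(A.shape[0], size=psi, replace=False)
    return int(subset[np.argmax(np.abs(A[subset] @ x - b[subset]))])
\end{verbatim}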
\underline{$1$-Markovian:} The Sampling Kaczmarz-Motzkin method only relies on the current residual to sample. As a result, it is $1$-Markovian.
\underline{Magnitude Invariance:} The \textit{distribution} of the initial subsetting is independent and identical at each iteration. Conditioned on a given subset, we choose the maximum residual; by \cref{lemma:residual-mag-invar}, this last step is magnitude invariant. Moreover, since the random subsetting is independent and identical at each iteration, it too is magnitude invariant. Therefore, the entire procedure is magnitude invariant.
\underline{Exploratory:} Let $V \subsetneq \rrow(A)$ be a nontrivial subspace. Then, for any $x \in \mathbb{S}(x_{\row}^*) \cap V$, we have shown that there exists a $j$ such that $A_{j,\cdot}'x -b_j \neq 0$. Therefore, so long as the probability of selecting this equation is nonzero, we are guaranteed that there is some choice of $\varphi(A,b,x)$ such that
\begin{equation}
\Prb{ A'\varphi(A,b,x) \perp V } \leq 1 - \Prb{ j \text{ is included in the subset} }.
\end{equation}
Let $\pi$ be the smallest inclusion probability for any equation in the random subset. Then, it follows that
\begin{equation}
\sup_{V \subsetneq \rrow(A)} \sup_{ x \in \mathbb{S}(x_{\row}^*) \cap V } \Prb{ A'\varphi(A,b,x) \perp V } \leq 1 - \pi.
\end{equation}
For the Sampling Kaczmarz-Motzkin method, the minimum inclusion probability is at least $\psi/n$, which corresponds to random sampling without replacement of subsets of size $\psi$.
With these three properties verified and by \cref{lemma:convergence-rate-bound}, the conditions of \cref{theorem:adaptive-convergence} are satisfied and the result holds. It remains to show that $\E{ \tau_{\ell}}$ is bounded by some $\ell \kappa$. By the proof of \cref{theorem:adaptive-convergence}, it is enough to bound the conditional expectations of $s_j$ in \cref{eqn:s_j}. Given that $\pi(V) \geq \psi/n$ for all $V \subsetneq \rrow(A)$,
\begin{equation}
\condPrb{s_j = k}{\xi, s_1,\ldots,s_{j-1}, x_{\xi},\ldots,x_{\xi+s_1+\cdots+s_{j-1}+1 } } \leq (1 - \psi/n)^{k-1} \psi/n.
\end{equation}
Hence, $\E{\tau_{\ell}} \leq \ell n\dim( \rrow(A))/\psi$. $\quad\blacksquare$
\subsection{A Brief Overview} \label{subsection:overview}
Let $A \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^n$ be the coefficient matrix and constant vector, respectively. Assuming consistency, our goal is to determine an $x^* \in \mathbb{R}^d$, not necessarily unique, such that
\begin{equation} \label{eqn: linear system}
Ax^* = b.
\end{equation}
In a base randomized iterative approach, a sequence of iterates $\lbrace x_k : k +1 \in \mathbb{N} \rbrace$ is generated that has the form
\begin{equation} \label{eqn: general random update}
x_{k+1} = x_k + V_{k}(b - Ax_k),
\end{equation}
where $V_k \in \mathbb{R}^{d \times n}$ are independent random variables, which we call residual projection matrices (RPM). The RPM defines the base technique which is being used. To make this formulation concrete, we give several examples of randomized iterative methods that have this formulation.
\paragraph{Randomized Kaczmarz.} Let $A_{i,} \in \mathbb{R}^d$ denote the $i^{\text{th}}$ row of $A$ and let $e_{i}$ denote the $i^{\text{th}}$ standard basis vector of dimension $n$. Define the random variable $I$ such that
$$ \Prb{I = i} = \begin{cases}
\frac{\norm{A_{i,}}_2^2}{\norm{A}_F^2} & i=1,\ldots,n \\
0 & \text{otherwise}
\end{cases}.$$
Now, given an independent copy of $I$ at each $k$, define the RPM, $V_{k} = A_{I,} e_{I}'/\norm{A_{I,}}_2^2.$ Then, using \cref{eqn: general random update},
\begin{align*}
x_{k+1} = x_k + A_{I,} e_{I}'(b - Ax_k)/\norm{A_{I,}}_2^2 = x_k + A_{I,}(b_I - A_{I,}'x_k)/\norm{A_{I,}}_2^2,
\end{align*}
which is the Randomized Kaczmarz method of \citet{strohmer2009}. $\quad\blacksquare$
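For concreteness, a minimal NumPy sketch of this update loop is the following (the function name is ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def randomized_kaczmarz(A, b, x0, iters):
    # Rows are sampled with probability proportional to their squared norms.
    x = np.array(x0, dtype=float)
    norms2 = np.sum(A ** 2, axis=1)
    p = norms2 / norms2.sum()
    for _ in range(iters):
        i = rng.choice(len(b), p=p)
        x += (b[i] - A[i] @ x) / norms2[i] * A[i]
    return x
\end{verbatim}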
\paragraph{Randomized Gauss-Seidel.} Let $A_{,j} \in \mathbb{R}^n$ denote the $j^{\text{th}}$ column of $A$ and let $f_j$ denote the $j^{\text{th}}$ standard basis vector of dimension $d$. Define a random variable $J$ such that
$$ \Prb{J = j} = \begin{cases}
\frac{\norm{A_{,j}}_2^2}{\norm{A}_F^2} & j = 1,\ldots,d \\
0 & \text{otherwise}
\end{cases}.$$
Now, given an independent copy of $J$ at each $k$, define the RPM, $V_{k} = e_{J} A_{,J}'/\norm{A_{,J}}_2^2.$ Then, using \cref{eqn: general random update},
\begin{align*}
x_{k+1} = x_k + e_J A_{,J}'(b - Ax_k)/\norm{A_{,J}}_2^2,
\end{align*}
which is the Randomized Gauss-Seidel method of \citet{leventhal2010}. $\quad\blacksquare$
\paragraph{Randomized Block Coordinate Descent.} Let $t$ be a subset of $\lbrace 1,\ldots,d \rbrace$. Let $E_{t} \in \mathbb{R}^{d \times |t|}$ be the matrix whose columns are the $d$-dimensional standard basis vectors whose non-zero components correspond to the indices in $t$. Let $\mathcal{T}$ be a partition of $\lbrace 1,\ldots, d \rbrace$, and define a random variable $T$ that randomly selects an element of $\mathcal{T}$. Given an independent copy of $T$ at each $k$, define the RPM, $V_k = ( E_T' A' A E_T )^\dagger E_T'A'$. Then, using \cref{eqn: general random update},
\begin{align*}
x_{k+1} = x_k + ( E_T' A' A E_T )^\dagger E_T'A'(b - A x_k),
\end{align*}
which is a version of the randomized block coordinate descent method specified by \citet[Equation 3.14]{gower2015}. $\quad\blacksquare$
\paragraph{Sketch-and-Project.} Let $\lbrace N_0,N_1,\ldots \rbrace$ be a sequence of sketching matrices with $n$ columns. Define the $k^{\mathrm{th}}$ RPM to be $V_k = A'N_k'( N_k A A' N_k')^\dagger N_k$. Then, using \cref{eqn: general random update},
\begin{align*}
x_{k+1} = x_k + A'N_k'( N_k A A' N_k')^\dagger N_k (b - Ax_k),
\end{align*}
which is the general sketch-and-project method \citep[Equation 2.2]{gower2015}. $\quad\blacksquare$
\subsection{A Heuristic Derivation} \label{subsection:derivation}
Here, given a strategy for defining $\lbrace V_k : k +1 \in \mathbb{N} \rbrace$, we consider how to augment the randomized iterative method with prior information in order to improve convergence. For this purpose, we propose defining a sequence of matrices $\lbrace M_k : k +1 \in \mathbb{N} \rbrace \subset \mathbb{R}^{d \times d}$ (discussed below) and modify \cref{eqn: general random update} to be
\begin{equation} \label{eqn: optimal general random update}
x_{k+1} = x_k + M_k V_k (b - Ax_k).
\end{equation}
Of course, $M_k$ can simply be absorbed by $V_k$; however, our goal is to augment a randomized iterative method. For this reason, we will keep these two quantities separate.
The main question now is how to choose $\lbrace M_k : k + 1 \in \mathbb{N} \rbrace$. Our guiding principle is that $M_k$ should minimize some measure of error between $x_{k+1}$ and $x^*$. However, implementing this guiding principle requires (1) choosing an appropriate error measure and (2) handling the fact that $x^*$ is unknown. In order to convey the intuition behind our procedure, we now state the heuristics that we use to make these choices.
\paragraph{Choosing an Error Measure.} Temporarily, suppose $x^*$ is known, and suppose we choose the $l^1$ error as our measure. Then, we must minimize the difference between the next iterate and $x^*$ in this norm. While this error metric might have merit, minimizing it is a convex optimization problem that is as difficult as solving the original linear system. Therefore, we will need an error measure which gives an explicit representation for $M_k$. Hence, one sensible choice is to use the Mahalanobis norm,
\begin{equation} \label{eqn: optimization problem}
\norm{x_{k+1} - x^*}_B^2,
\end{equation}
where $B$ is a positive definite, symmetric $\mathbb{R}^{d \times d}$ matrix.
\paragraph{Compensating for the Unknown Solution.} Now, we consider the task of compensating for the unknown $x^*$.
For a fixed $x^*$ and for all $k+1 \in \mathbb{N}$, let $S_k = (x_k - x^*)(x_k - x^*)'$. Then, $S_{k+1}$ is related to $S_k$ by
\begin{align}
S_{k+1} &= (I - M_k V_k A) S_k (I - M_k V_k A)', \label{eqn: variance update}
\end{align}
where we have made use of \cref{eqn: optimal general random update}.
Using \cref{eqn: variance update}, we can rewrite \cref{eqn: optimization problem} as
\begin{align*}
\norm{x_{k+1} - x^*}_B^2 = \tr{B (I - M_k V_k A) S_k (I - M_k V_k A)'}.
\end{align*}
To find an optimal $M_k$, we differentiate the right hand side with respect to $M_k$ and set the result equal to zero, which, explicitly, is
\begin{equation} \label{eqn: linear matrix equation}
M_k (V_k A S_k A' V_k') - S_k A'V_k' = 0.
\end{equation}
Clearly, $V_kA {S}_k A'V_k'$ is positive semi-definite, so the solution to such a system will be the minimizer of the original objective function. However, \cref{eqn: linear matrix equation} may have many possible solutions or may fail to be consistent. In the case of nonunique solutions, we arbitrarily choose the solution with the smallest Frobenius norm. In the case of an inconsistent system, we arbitrarily choose the solution that minimizes the Frobenius norm of the residual and has the minimal Frobenius norm. In both cases, a straightforward calculation gives
\begin{equation} \label{eqn: gain matrix}
{M}_k = S_k A' V_k' (V_k A S_k A' V_k')^\dagger,
\end{equation}
where $\dagger$ represents the Moore-Penrose Pseudo-inverse. Using \cref{eqn: gain matrix} with \cref{eqn: variance update}, we have the following recursion
\begin{equation} \label{eqn: updated variance update}
S_{k+1} = S_k - S_k A' V_k'(V_k A S_kA'V_k')^\dagger V_k A S_k.
\end{equation}
From \cref{eqn: gain matrix} and \cref{eqn: updated variance update}, it is clear that if $S_0$ were known, then the remaining unknown quantities could be determined.
\paragraph{Our Procedure.} Since $S_0$ is unknown, we use the following heuristic procedure instead. First, we let $S_0 = I_d$, where $I_d$ is the $d$-dimensional identity matrix. Then, we recursively define $M_k$ and $S_k$ according to \cref{eqn: gain matrix,eqn: updated variance update}. To summarize, given $\lbrace V_k : k+1 \in \mathbb{N} \rbrace$, we let $S_0 = I_d$, let $x_0 \in \mathbb{R}^d$, and define
\begin{equation} \label{eqn: iterate update heur}
x_{k+1} = x_k + M_k V_k (b - Ax_k),
\end{equation}
where
\begin{equation} \label{eqn: gain matrix heur}
M_k = S_k A' V_k' (V_k A S_k A' V_k')^\dagger;
\end{equation}
and
\begin{equation} \label{eqn: variance update heur}
S_{k+1} = S_k - S_k A' V_k' (V_k A S_k A' V_k')^\dagger V_k A S_k.
\end{equation}
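The following is a minimal, dense NumPy sketch of one step of this procedure; it follows \cref{eqn: iterate update heur,eqn: gain matrix heur,eqn: variance update heur} directly and is intended for illustration rather than efficiency:
\begin{verbatim}
import numpy as np

def orthogonalized_step(A, b, x, S, V):
    W = V @ A                       # V_k A
    G = S @ W.T                     # S_k A' V_k'
    M = G @ np.linalg.pinv(W @ G)   # gain matrix M_k
    x_next = x + M @ (V @ (b - A @ x))
    S_next = S - M @ (W @ S)        # S_{k+1} = S_k - M_k V_k A S_k
    return x_next, S_next
\end{verbatim}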
To interpret the terms in the above procedure, we begin by ignoring $S_k$ (i.e., set it to the identity). In this case, $M_k$ and its role in updating $x_k$ to $x_{k+1}$ is familiar: $M_k$ serves to map the residual onto the row space of $V_kA$, thereby ensuring that $x_{k+1}$ satisfies $V_k A x_{k+1} = V_k b$. If we now consider the role of $S_k$, we see that it is an orthogonal projector that ``weights'' the behavior of $M_k$ to ensure that $x_{k+1}$ satisfies $V_i A x_{k+1} = V_i b$ for $i \leq k$. We will see these interpretations clearly and formally when we focus on the case of rank-one $V_k$ next.
We pause here momentarily to discuss the relationship between our procedure, as specified by \cref{eqn: iterate update heur,eqn: gain matrix heur,eqn: variance update heur}, and the sketch-and-project method in \cite{gower2015} and \cite{richtarik2017}. At first glance, it may seem that our procedure is a special case of sketch-and-project with adaptive choices of the inner product at each iteration of the sketch-and-project update. Unfortunately, an effort to recast our approach as a special case of sketch-and-project breaks down at two fundamental points. First, the adaptive choices of the sketch-and-project inner product would have to be the inverses of the $S_k$, which are orthogonal projection matrices. As a result, the inverses, and hence the inner products, are ill-defined. Of course, this can be rectified by allowing for a pseudo-metric, but this then results in the second major point of difficulty: the theory presented in \cite{gower2015} and \cite{richtarik2017} relies on the determinism and invertibility of the matrix defining the metric space to prove convergence. Thus, sketch-and-project, without a substantial investment, cannot readily include our approach. On the other hand, we can state sketch-and-project as a base randomized iterative approach, as shown in \cref{subsection:derivation}, and then improve on it with our procedure via \cref{eqn: iterate update heur,eqn: gain matrix heur,eqn: variance update heur}.
\subsection{Rank-One Refinements and Random Sketching} \label{subsection:rank-one-and-sketching}
By choosing $x_0 \in \mathbb{R}^d$ and $S_0 = I_d$, \cref{eqn: iterate update heur,eqn: gain matrix heur,eqn: variance update heur} describe an orthogonal projection procedure for typical randomized iterative procedures. However, because our goal is to improve the practicality of random sketching methods, we will need to focus on a particular refinement of the general procedure that occurs when $\lbrace V_k \rbrace$ are rank-one matrices, that is, when there exist pairs of vectors $\lbrace (v_k,w_k) \rbrace$ such that $V_k = v_k w_k'$ for each $k$. In this case, \cref{eqn: gain matrix heur,eqn: variance update heur} become
\begin{equation} \label{eqn: gain rank-one update}
M_k = \begin{cases}
\frac{1}{w_k'A S_k A'w_k \norm{v_k}_2^2} S_k A' w_k v_k' & S_k A'w_k \neq 0 \\
0 & \text{ otherwise,}
\end{cases}
\end{equation}
and
\begin{equation} \label{eqn: rank-one update-matrix}
S_{k+1}= \begin{cases}
S_k - \frac{1}{w_k'A S_k A'w_k} S_k A'w_k w_k'AS_k & S_k A'w_k \neq 0 \\
S_k & \text{ otherwise.}
\end{cases}
\end{equation}
Moreover, if we substitute \cref{eqn: gain rank-one update} into \cref{eqn: iterate update heur}, we recover
\begin{equation} \label{eqn: rank-one update-param}
x_{k+1} = \begin{cases}
x_k + \frac{1}{w_k'A S_k A'w_k} S_k A' w_k w_k'(b - Ax_k) & S_k A'w_k \neq 0 \\
x_k & \text{otherwise}.
\end{cases}
\end{equation}
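In code, the rank-one case reduces to a handful of vector operations; a minimal sketch (the exact-zero test would be replaced by a tolerance in floating point arithmetic) is:
\begin{verbatim}
import numpy as np

def rank_one_step(A, b, x, S, w):
    g = S @ (A.T @ w)               # S_k A' w_k
    if not np.any(g):               # S_k A' w_k = 0: no update
        return x, S
    c = w @ (A @ g)                 # w_k' A S_k A' w_k
    x_next = x + (w @ (b - A @ x)) / c * g
    S_next = S - np.outer(g, g) / c
    return x_next, S_next
\end{verbatim}
Note that the left singular vector $v_k$ does not appear in the sketch, in line with the observation below.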
It follows from \cref{eqn: rank-one update-param,eqn: rank-one update-matrix} that in the case of a rank-one RPM, \textit{the left singular vector of the RPM is not important}. To give some explicit examples, recall that rank-one RPM methods include the important special cases of randomized Kaczmarz and Gauss-Seidel.
\paragraph{Randomized Kaczmarz with Orthogonalization.} Let $A_{i,} \in \mathbb{R}^d$ denote the $i^{\text{th}}$ row of $A$ and let $e_{i}$ denote the $i^{\text{th}}$ standard basis vector of dimension $n$. Define the random variable $I$ arbitrarily taking values in $\lbrace 1,\ldots,n \rbrace$. Now, given an independent copy of $I$ at each $k$, the randomized Kaczmarz method has rank-one RPM, $V_k = A_{I,} e_{I}'/\norm{A_{I,}}_2^2.$ Then, using \cref{eqn: rank-one update-param,eqn: rank-one update-matrix}, the randomized Kaczmarz method with orthogonalization is
$$
\begin{aligned}
x_{k+1} &= x_{k} + \frac{1}{e_I'A S_k A'e_I } S_k A' e_I e_I' (b - A x_k) \\
S_{k+1} &= \left( I_d - \frac{1}{e_I'A S_k A' e_I} S_k A' e_I e_I'A \right) S_k,
\end{aligned}
$$
when $S_k A' e_I \neq 0$, or is $x_{k+1} = x_{k}$ and $S_{k+1} = S_k$ otherwise. $\quad\blacksquare$
\paragraph{Randomized Gauss-Seidel with Orthogonalization.} Let $A_{,j} \in \mathbb{R}^n$ denote the $j^{\text{th}}$ column of $A$ and let $f_j$ denote the $j^{\text{th}}$ standard basis vector of dimension $d$. Define a random variable $J$ arbitrarily taking values in $\lbrace 1,\ldots,d \rbrace$. Now, given an independent copy of $J$ at each $k$, the randomized Gauss-Seidel method has rank-one RPM, $V_k = e_{J} A_{,J}'/\norm{A_{,J}}_2^2.$ Then, using \cref{eqn: rank-one update-param,eqn: rank-one update-matrix}, the randomized Gauss-Seidel method with orthogonalization is
$$
\begin{aligned}
x_{k+1} &= x_k + \frac{1}{A_{,J}'A S_k A'A_{,J}} S_k A' A_{,J} A_{,J}'(b - Ax_k) \\
S_{k+1} &= \left( I_d - \frac{1}{A_{,J}'A S_k A'A_{,J}} S_k A' A_{,J} A_{,J}'A \right) S_k,
\end{aligned}
$$
when $S_k A' A_{,J} \neq 0$, or is $x_{k+1} = x_{k}$ and $S_{k+1} = S_k$ otherwise. $\quad\blacksquare$
Again, we see from the two preceding examples that the left singular vector of the rank-one RPM does not play a role in the updates for our procedure. As we now explain, this observation is critical for converting the impractical, noniterative randomized sketch-\textit{then}-solve methods into iterative randomized sketch-\textit{and}-solve methods.
Recall that the fundamental sketch-then-solve procedure is to construct a specialized matrix $N^\mathrm{sketch} \in \mathbb{R}^{k \times n}$, then generate and solve the smaller, sketched problem $(N^\mathrm{sketch}A)x = N^\mathrm{sketch}b$ \citep[see][Ch. 1]{woodruff2014}.\footnote{We note that the typical formulation considers linear regression rather than a linear system.} The special matrix $N^\mathrm{sketch}$, called the sketching matrix, can be generated in a variety of ways such as making each entry an independent, identically distributed Gaussian random variable \citep{indyk1998}, or by setting the columns of $N^\mathrm{sketch}$ as uniformly sampled columns (with replacement) of the appropriately-dimensioned identity matrix \citep{cormode2005}.
In order to convert the usual sketch-\textit{then}-solve procedure into our sketch-\textit{and}-solve procedure, we simply set $\lbrace w_k : k +1 \in \mathbb{N} \rbrace \subset \mathbb{R}^n$
to the transposed rows of $N^\mathrm{sketch}$, which we will rigorously demonstrate in \cref{section:full-memory}. Of course, this requires that we have a streaming procedure for generating arbitrarily many rows of $N^\mathrm{sketch}$. For concreteness, we show how to do this for the two sketching strategies just mentioned.
\paragraph{Random Gaussian Sketch.} In the random Gaussian sketch, the entries of the sketching matrix, $N^\mathrm{sketch}$, are independent, standard normal random variables. Accordingly, we let $\lbrace w_k \rbrace$ be independent, $n$-dimensional standard normal vectors. We see that if $N^\mathrm{sketch}$ has $r$ rows, then $N^\mathrm{sketch}$ and
\begin{equation*}
\begin{bmatrix}
w_0' \\
w_1' \\
\vdots \\
w_{r-1}'
\end{bmatrix}
\end{equation*}
have the same distribution. $\quad\blacksquare$
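In code, the corresponding streaming generator is a one-liner (the function name is ours):
\begin{verbatim}
import numpy as np

def gaussian_rows(n, seed=0):
    # Stream the rows of a Gaussian sketching matrix one at a time.
    rng = np.random.default_rng(seed)
    while True:
        yield rng.standard_normal(n)
\end{verbatim}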
\paragraph{Count Sketch.} Fix $K \in \mathbb{N}$, and let $\lbrace E_1,E_2,\ldots \rbrace$ be drawn from the $\mathbb{R}^K$ standard basis vectors with replacement. Define a sequence of Rademacher random variables $\lbrace R_1,R_2,\ldots \rbrace$ which are independent and independent of $\lbrace E_1,E_2,\ldots \rbrace$. The count sketch sketching matrix, $N^\mathrm{sketch}$, is specified by
\begin{equation*}
\begin{bmatrix}
R_1 E_1 & R_2E_2 & \cdots & R_n E_n
\end{bmatrix},
\end{equation*}
which is a matrix whose entries are either $-1$, $0$ or $1$. Generally, the choice of $K$ is the topic of substantial theory and consideration \citep{cormode2005,clarkson2017}. Owing to the fact that we have a streaming procedure, we do not need to worry too much about $K$. Therefore, we generate $\lbrace w_k \rbrace$ as follows:
\begin{enumerate}
\item Generate a count sketch matrix with $K$ small. In our experiments below, we used $K = 10$.
\item To generate $w_k$, pop a row of the matrix and assign it to $w_k$.
\item Once the count sketch matrix is exhausted, regenerate a new count sketch matrix with the same $K$. Repeat.
\end{enumerate}
From this strategy, (a) if we let $\lbrace N_{(i)}: i \in \mathbb{N} \rbrace$ denote a sequence of independent $K \times n$ count sketch matrices, (b) $i_k$ denote the remainder of an integer $k$ divided by $K$ and incremented by one, and (c) we let $\lbrace e_i \rbrace$ denote the standard basis vectors of $\mathbb{R}^K$, then $w_k = N_{(\lfloor k/K \rfloor + 1)}'e_{i_k}$ for all $k+1 \in \mathbb{N}$. $\quad\blacksquare$
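A minimal sketch of this streaming strategy (the function name is ours):
\begin{verbatim}
import numpy as np

def count_sketch_rows(n, K=10, seed=0):
    # Regenerate an independent K x n count sketch matrix every K rows.
    rng = np.random.default_rng(seed)
    while True:
        rows = rng.integers(0, K, size=n)        # draws E_1, ..., E_n
        signs = rng.choice([-1.0, 1.0], size=n)  # Rademacher R_1, ..., R_n
        N = np.zeros((K, n))
        N[rows, np.arange(n)] = signs            # column j is R_j E_j
        for i in range(K):
            yield N[i]
\end{verbatim}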
Thus, if we let \texttt{RPMStrategy()} define a generic user-defined procedure for choosing $\lbrace w_k : k +1 \in \mathbb{N} \rbrace$, then this observation gives us \cref{alg: rank-one RPM full mem} for (1) converting the sketch-\textit{then}-solve procedure into a sketch-\textit{and}-solve procedure, and (2) adding orthogonalization to such base methods as randomized Kaczmarz and randomized Gauss-Seidel.
\input{algorithms/full_memory}
\subsection{Algorithmic Refinements Considering the Computing Platform}
\cref{alg: rank-one RPM full mem} implicitly assumes the traditional sequential programming paradigm. However, the performance of the algorithm can be improved by taking advantage of parallel computing architectures.
Here, we will consider a handful of important computing architecture abstractions and how our procedure can adapt to different configurations. In \cref{subsubsection:parallel}, we will consider the case of a parallel computing architecture for which the communication overhead, which is proportional to the dimension $d$, is not a limiting factor. For this subsection, the problems that we have in mind come from data and imaging sciences, where $n \gg d$ and $d$ is reasonably sized. In \cref{subsection:low-memory}, we consider a similar class of problems where the communication of $\bigO{d}$-sized vectors is acceptable and $n \gg d$, but $d$ is so large that storing and manipulating a matrix in $\mathbb{R}^{d \times d}$ is burdensome. Finally, in \cref{subsubsection:structured} we will consider problems in which communication overhead becomes a bottleneck for scalability, but for which we have structured systems that allow us to circumvent this issue. For this final subsection, the problems that we have in mind come from the solution of systems of differential equations \citep[e.g.,][]{dongarra1986}.
\subsubsection{Asynchronous Parallelization on Shared and Distributed Memory Platforms} \label{subsubsection:parallel}
First, when we are using a matrix sketch for \texttt{RPMStrategy()}, one of the expensive components of the computation is determining $\begin{bmatrix} A & b \end{bmatrix}'w_k$. Fortunately, in our sketch-and-solve procedure, this expensive computation can be trivially asynchronously parallelized on a shared memory platform when
\begin{enumerate}
\item the data within the rows $\begin{bmatrix} A & b \end{bmatrix}$ are stored together, and
\item the \texttt{RPMStrategy()} generates $\lbrace w_k : k+1 \in \mathbb{N} \rbrace$ that are either independent (e.g., the Gaussian Strategy) or can be grouped into independent subsets (e.g., the Count-Sketch strategy).
\end{enumerate}
When these two requirements are met, each processor can generate its own $\lbrace w_k : k+1 \in \mathbb{N} \rbrace$ independently of the other processors, and evaluate $\begin{bmatrix} A & b \end{bmatrix}'w_k$. It can then simply write the resulting row to an address reserved for performing the iterate and $S_k$ matrix updates by the master processor. Importantly, this procedure does not require locking any of the rows of $\begin{bmatrix} A & b \end{bmatrix}$, and the reserved addresses can use fine grained locks to prevent any wasted calculations.
Similarly, in our sketch-and-solve procedure, computing $\begin{bmatrix} A & b \end{bmatrix}'w_k$ can be trivially asynchronously parallelized on a distributed memory platform using a Fork-join model, when
\begin{enumerate}
\item the rows of $\begin{bmatrix} A & b \end{bmatrix}$ are distributed across the different storages, and
\item the \texttt{RPMStrategy()} generates $\lbrace w_k : k+1 \in \mathbb{N} \rbrace$ such that $w_k$ have independent groups of components (e.g., the Gaussian Strategy and the Count-Sketch strategy).
\end{enumerate}
When these two requirements are met, each processor can generate its own $\lbrace w_k : k+1 \in \mathbb{N} \rbrace$ and operate on the local rows of $\begin{bmatrix} A & b \end{bmatrix}$. It can then simply pass the resulting row to the master processor which performs the iterate and $S_k$ matrix updates. For each iteration, a scattering and gathering of the data is performed but no other data exchange is required.
\Cref{table:full-memory-parallel-comparison} summarizes the time and total computational costs of computing $x_k$ and $S_k$ from $x_0$ and $S_0$ in the following context: (1) the sequential platform refers to the case where there is a single processor with a sufficiently large memory to store the system, and perform the necessary operations in \cref{alg: rank-one RPM full mem};
(2) the shared memory platform assumes that there are $p+1$ processors that share a sufficiently large memory. One of the processors is dedicated to performing the iterate and matrix updates, while the remaining $p$ processors compute $\begin{bmatrix} A & b \end{bmatrix}'w_k$;
(3) the distributed memory architecture assumes that there are $p+1$ processors each with a sufficient memory capacity. The rows of $\begin{bmatrix} A & b \end{bmatrix}$ are split evenly or nearly evenly amongst $p$ of the processors, and each processor only manipulates its local information about $A$ and $b$. Finally, the master processor is dedicated to performing the iterate and matrix updates.
\input{tables/full_memory_parallel_comparison}
\subsubsection{Memory-Reduced Procedure} \label{subsection:low-memory}
\input{algorithms/low_memory}
Another notable aspect of \cref{alg: rank-one RPM full mem} (and its parallel variants described above) is that it must store and manipulate the matrix $S_k$ at each iteration, which is clearly expensive when $d$ is large or is excessive when $d^3$ is comparable to $n$ or greater than $n$. This difficulty motivates a partial orthogonalization approach, as described in \cref{alg: rank-one RPM low mem}. In this approach, a user-defined parameter $m < d$ specifies the number of $d$-dimensional vectors needed to implicitly store an approximate representation of $S_k$ (based on \cref{theorem: S are orthogonal projections}). With this implicit representation, the cost of computing $u_k$ reduces to $\bigO{md}$,\footnote{If $q_k$ replace $u_k$ in the calculation of $z_k$, then the cost of computing $u_k$ is $\bigO{dm^2}$ \citep[see][Ch. 5.2]{golub2012}.} which, consequently, reduces the overall cost of updating $x_{k}$ to $x_{k+1}$ to $\bigO{md}$. Moreover, because $S_k$ is implicitly represented by $m$ $d$-dimensional vectors in $\mathcal{S}$, there is no notable additional computational cost incurred for updating $S_{k}$ to $S_{k+1}$. Thus, an entire iteration incurs a computational cost of $\bigO{md}$ plus the cost of computing $\begin{bmatrix} A & b \end{bmatrix}'w_k$, which can be mitigated under the strategies above on shared memory or distributed memory platforms.
\begin{remark}
\cref{alg: rank-one RPM low mem} is an efficient implementation of the partial orthogonalization procedure and, as a result, at $m=0$, seems to only recover row-action base randomized iterative methods as specified by \cref{eqn:no-memory-iteration}. A less efficient algorithm based on directly applying \cref{eqn: gain rank-one update,eqn: rank-one update-matrix} with the appropriate low memory modification would recover all rank-one base randomized iterative methods when $m=0$.
\end{remark}
\input{algorithms/modified_gram_schmidt}
\subsubsection{Optimizing Communication Overhead. Structured Systems} \label{subsubsection:structured}
\begin{figure}[hbt]
\centering
\input{figures/tikz-banded-matrix-ex}
\caption{A representation of a $20 \times 20$ banded matrix with bandwidth $\tilde{Q} +1 = 5$, whose rows are split across five compute nodes (represented by the dashed line). Note, the empty grid points represent zeros, while the filled grid points represent nonzero values.}
\label{figure:tikz-banded-matrix}
\end{figure}
In the above approaches, we took for granted that $d$ is small enough that communicating $\bigO{d}$ vectors during the procedure is acceptable. However, for many problems coming from the solution of differential equations \citep[e.g., see][]{dongarra1986}, $d$ and $n$ are of the same order and are so large that communicating $\bigO{d}$ vectors at arbitrary points during the procedure is impossible. Fortunately, linear system problems in this class are highly sparse and structured \citep[][Ch. 2]{saad2003}. A simple example is the case where $A$ is a square, banded system with nonzero bandwidth $\tilde{Q}+1$ for some $\tilde{Q} \ll n = d$; that is, $A_{ij} = 0$ if $|i-j| > \tilde{Q}$ and the remaining $A_{ij}$ can take arbitrary values.
For such sparse and structured problems, our methodology can be efficiently implemented across a distributed memory platform with $p$ processors under some additional qualifications. However, to understand these qualifications, let us first introduce some notation and concepts that define the communication pattern across the $p$ nodes.
Suppose that we distribute the equations of our linear system of interest across $p$ nodes. \cref{figure:tikz-banded-matrix} shows how the coefficient matrix of a $20 \times 20$ banded system with bandwidth $5$ can be distributed across five nodes. Note, in this example, the entries of the constant vector would be stored on the same processor as the corresponding rows of the coefficient matrix.
Moreover, we need a way of tracking which components of $x$ are manipulated by each node:
let $\mathcal{X}_i$ be the set of indices of the components of $x$ with nonzero coefficients at node $i$ in the distributed system for $i=1,\ldots,p$.
In our example, $\mathcal{X}_1 = \lbrace 1,\ldots,6 \rbrace$, $\mathcal{X}_2 = \lbrace 3,\ldots,10 \rbrace$, $\mathcal{X}_3 = \lbrace 7,\ldots,14 \rbrace$, $\mathcal{X}_4 = \lbrace 11,\ldots,18 \rbrace$, and $\mathcal{X}_5 = \lbrace 15,\ldots,20 \rbrace$.
Finally, for any vector $z$ and any set $\mathcal{X}$ over the indices of $z$, let $z[\mathcal{X}]$ be the vector whose elements are the elements of $z$ indexed by $\mathcal{X}$.
From this example and from our discussion in \cref{subsubsection:parallel} of distributing the \texttt{RPMStrategy()}, we can use the local rows of $A$ at Node 1 and a Gaussian sketch to generate a $q_1 \in \mathbb{R}^d$ such that $q_1[\lbrace 1,\ldots, 6 \rbrace]$ are arbitrarily valued and $q_1[\lbrace 7,\ldots,20 \rbrace] = 0$. Thus, our vector $q_k$ is highly sparse and can be generated locally on the node. However, following \cref{alg: rank-one RPM full mem}, the next step of computing $u_k$ requires computing the product between $S_k$ and $q_k$, which, in a naive implementation, would require storing a dense $d \times d$ matrix $S_k$ and computing a global matrix-vector product. Such a required computation raises several concerns, which we detail and address in the following enumeration.
\begin{enumerate}
\item Given that $d$ is large relative to the computing environment, is storing a $d \times d$ matrix even feasible? Generally, the answer will be that storing such a matrix is infeasible. However, by exploiting the properties of $S_k$ (see \cref{theorem: S are orthogonal projections}), we will approximately and implicitly store $S_k$ as $\mathcal{S}$, which is a collection of orthonormal vectors.
\item Even if we use $\mathcal{S}$ in place of $S_k$, will the resulting implicit matrix-vector product and update of $\mathcal{S}$ incur prohibitive communication costs? To answer these questions completely, we will need to specify how the implicit matrix-vector product will be computed and how $\mathcal{S}$ will be stored. Here, we will compute the implicit matrix-vector product by using twice-iterated classical Gram-Schmidt (\cref{alg:iterated-GS}), which was shown to be numerically stable in the seminal work of \cite{giraud2005}. Owing to this calculation pattern, we can store $\mathcal{S}$ in a distributed fashion across the $p$ processors, which we detail below along with the communication cost of the synchronization of $\mathcal{S}$.
\end{enumerate}
\input{algorithms/iterated_gram_schmidt}
To understand the costs associated with computing $u$ from the orthonormal vectors in $\mathcal{S}$ and the vector $q$, we will characterize the support of $u$ (i.e., index set of its nonzero entries).
\begin{lemma} \label{lemma:support-GS}
Let $q \in \mathbb{R}^d$ and let $\mathcal{Q} = \lbrace i : q[i] \neq 0 \rbrace \subset \lbrace 1,\ldots, d \rbrace$. Let $\lbrace z_1,\ldots,z_m \rbrace \subset \mathbb{R}^d$ be a set of orthonormal vectors (hence, $m \leq d$), and let $\mathcal{Z}_j = \lbrace i : z_j[i] \neq 0 \rbrace \subset \lbrace 1,\ldots, d \rbrace$ for $j=1,\ldots,m$. If $u$ denotes the result of \cref{alg:iterated-GS} applied to $q$ over the set $\lbrace z_1,\ldots,z_m \rbrace$ then
\begin{equation}
\mathcal{U} := \lbrace i : u[i] \neq 0 \rbrace \subset \left(\bigcup_{j \in \mathfrak{Q}} \mathcal{Z}_j \right) \cup \mathcal{Q},
\end{equation}
where $\mathfrak{Q} = \lbrace j : \mathcal{Q} \cap \mathcal{Z}_j \neq \emptyset \rbrace \subset \lbrace 1,\ldots,m \rbrace$.
\end{lemma}
\begin{proof}
Letting $Z$ denote the matrix whose columns are elements of the orthonormal set, we recall that classical Gram-Schmidt generates $u = (I_d - ZZ')q$. Thus, twice iterated Gram-Schmidt can be written as
\begin{equation}
(I_d - ZZ')(I_d - ZZ')q = (I_d - 2ZZ' + ZZ'ZZ') q = (I_d - ZZ')q = u,
\end{equation}
where the second equality uses $Z'Z = I_m$; this is expected in exact arithmetic. Thus, we can consider classical Gram-Schmidt and ignore the iteration to compute the support of $u$. For any $l = 1,\ldots,d$,
\begin{equation} \label{eqn:GS-component}
u[l] = q[l] - \sum_{j=1}^m (q' z_j) z_j[l] = q[l] - \sum_{j \in \mathfrak{Q}} (q'z_j) z_j[l],
\end{equation}
where we use the fact that if $j \not\in \mathfrak{Q}$ then $q' z_j = \sum_{l \in \mathcal{Q} \cap \mathcal{Z}_j } q[l] z_j[l] = \sum_{l \in \emptyset} q[l] z_j[l] = 0$.
For a contradiction, suppose $l \in \mathcal{U}$ such that
\begin{equation}
l \not\in \left(\bigcup_{j \in \mathfrak{Q}} \mathcal{Z}_j \right) \cup \mathcal{Q}.
\end{equation}
Then, $q[l] = 0$ and $z_j[l] = 0$ for $j \in \mathfrak{Q}$. Using the above formula for $u[l]$, $u[l] = 0 - \sum_{j \in \mathfrak{Q}} (q'z_j) 0 = 0$,
which is a contradiction.
\end{proof}
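For reference, a minimal sketch of the twice-iterated classical Gram-Schmidt projection, as we have described it, is:
\begin{verbatim}
import numpy as np

def twice_iterated_cgs(q, Z):
    # Project q onto the orthogonal complement of the columns of Z,
    # repeating the classical Gram-Schmidt pass once for stability.
    u = q - Z @ (Z.T @ q)
    return u - Z @ (Z.T @ u)
\end{verbatim}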
At iteration $k$, \cref{lemma:support-GS} states that the support of $u_k$ will depend on the support of $\lbrace z_m,\ldots,z_1 \rbrace$, which, in turn, has elements whose support depend on (a subset of) $\lbrace u_{k-1},\ldots,u_0 \rbrace$. Moreover, if $\lbrace u_{k-1},\ldots,u_0 \rbrace$ has elements whose combined support cover $\lbrace 1,\ldots,d \rbrace$, which will be necessary to solve the system,\footnote{Note, if the combined supports of the elements of $\lbrace u_{k-1},\ldots,u_0 \rbrace$ do not cover all of $\lbrace 1,\ldots,d\rbrace$, then some components of our iterates, $\lbrace x_k \rbrace$ will not be updated.} it is possible that the support of $u_k$ will be all of $\lbrace 1,\ldots,d \rbrace$ (ignoring any trivial independence in the system). Thus, it appears that we will eventually have to store vectors in $\mathcal{S}$ whose support is all of $\lbrace 1,\ldots, d \rbrace$. Naively, we may think that we need a faithful copy of $\mathcal{S}$ at each node in the system, which incurs prohibitive communication costs as the support of $u_k$ tends to $\lbrace 1,\ldots, d \rbrace$. While this is true, a careful inspection of Gram-Schmidt and the nonzero patterns of $q_k$ suggest a less naive approach, which we now detail.
We begin by supposing that on a processor $i \in \lbrace 1,\ldots, p \rbrace$, only $z_j[\mathcal{X}_{i}]$ are stored on the node for every $j=1,\ldots,k$. Immediately, we have eliminated the need for synchronizing all of $\mathcal{S}$ on each processor. Instead, we need only to synchronize those components of $z_j$ in $\mathcal{X}_i \cap \mathcal{X}_j$ for all $i \neq j$. Thus, we have that our synchronization costs will depend on the maximum overlap, $Q$, between two processors, which, formally, is
\begin{equation}
Q = \max_{i \neq j} | \mathcal{X}_i \cap \mathcal{X}_j|.
\end{equation}
Now, we can understand the precise nature of this synchronization by inspecting \cref{alg:iterated-GS}. If for some $j = 1,\ldots,p$, $q_k[\mathcal{X}_j^c] = 0$, then
\begin{equation} \label{eqn:IGS-loop-1}
t_1[l] = \begin{cases}
- \sum_{t=1}^{m} \left( \sum_{r \in \mathcal{X}_j} q_k[r] z_t[r] \right) z_t[l] & \forall l \in \mathcal{X}_j^c \\
q_k[l] - \sum_{t=1}^{m} \left( \sum_{r \in \mathcal{X}_j} q_k[r] z_t[r] \right) z_t[l] & \forall l \in \mathcal{X}_j
\end{cases}
\end{equation}
From \cref{eqn:IGS-loop-1}, we see that we must communicate the values $q_k[l]$, $l \in \mathcal{X}_j$, to all nodes $i \in \lbrace 1,\ldots,p \rbrace \setminus \lbrace j \rbrace$ such that $\mathcal{X}_j \cap \mathcal{X}_i \neq \emptyset$, and we must communicate the $m$ inner products to all $p-1$ nodes. The resulting number of floating point values that must be communicated (counting each replicate to a node individually) during the first iteration of \cref{alg:iterated-GS} is
\begin{equation}
\sum_{i \in \mathfrak{Q}_j\setminus\lbrace j \rbrace} |\mathcal{X}_j \cap \mathcal{X}_i| + m(p-1),
\end{equation}
where $\mathfrak{Q}_j = \lbrace i : \mathcal{X}_i \cap \mathcal{X}_j \neq \emptyset \rbrace$ for $j=1,\ldots,p$ (see the notation in \cref{lemma:support-GS}). For the second iteration of \cref{alg:iterated-GS}, we must broadcast $m$ inner products that are partially computed (using some ordering that respects the non-associative property of floating point arithmetic) on each node to the remaining $p-1$ nodes. Thus, the number of floating point values that must be communicated (counting each replicate to a processor individually) to ensure synchronization is
\begin{equation} \label{eqn:communication-complexity}
\sum_{i \in \mathfrak{Q}_j\setminus\lbrace j \rbrace} |\mathcal{X}_j \cap \mathcal{X}_i| + m(p-1) + mp(p-1),
\end{equation}
which we can bound by
\begin{equation}
Q (F-1) + m(p^2-1), \text{ where } F = \max_{j} |\mathfrak{Q}_j|.
\end{equation}
Noting that $Q$ represents the maximum shared indices between two nodes and that $F$ represents the maximum number of nodes that overlap, the first term in the bound can be controlled by the ordering choice of the differential equations that generate the system, but a discussion of this topic is beyond the scope of this work. \cref{alg: rank-one RPM low com} summarizes a simple version of the procedure described here. We can also modify this algorithm to the low memory context of \cref{alg: rank-one RPM low mem} by limiting the number of vectors that can be stored in $\mathcal{S}$.
\input{algorithms/low_communication}
The Quantinar ecosystem gathers an international community of researchers that produces quality research on diverse scientific topics using modern technologies. At the time of writing, Quantinar already has a sizable community and offers high-quality content across multiple courselets, even though the platform has not yet been officially launched.
\subsection{Community}\label{sec:community}
The Quantinar community consists of 516 users, with 51 instructors and 459 regular users. Without a marketing campaign, Quantinar has experienced significant growth in recent months, as shown in Figure \ref{fig:users_ts}.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figure/nb_users_ts.png}
\caption{Number of users}
\label{fig:users_ts}
\end{figure}
Still, at the time of writing, only a few instructors contribute most of the courselets, as shown in Figure \ref{fig:courselet_users}.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{figure/courselet_users.png}
\caption{Number of courselets per instructor}
\label{fig:courselet_users}
\end{figure}
Moreover, most users are affiliated with \href{https://www.wiwi.hu-berlin.de/en/}{Humboldt-Universität zu Berlin} and \href{https://www.ase.ro/index_en.asp}{ASE Bucuresti}, where the platform was originally built. Nevertheless, Quantinar has already attracted users from multiple countries and continents (Germany, Romania, United States, China, Singapore, Taiwan, etc.), as shown in Figure \ref{fig:users_location}.\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/users_affiliation.png}
\caption{Users' affiliation}
\label{fig:users_affiliation}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/sessions_location}
\caption{Sessions by country}
\label{fig:sessionslocation}
\end{subfigure}
\caption{Users location}
\label{fig:users_location}
\end{figure}
Thanks to its Discord channel (\href{https://discord.gg/ebS3Bf6gfS}{https://discord.gg/ebS3Bf6gfS}), Quantinar's community is active: members can follow the latest updates from the platform and interact with each other.
\subsection{Content}
At the time of writing, Quantinar hosts 137 courselets and 5 courses across 8 categories (see Figure \ref{fig:courseletcategories}). \begin{figure}[h!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figure/courselet_categories.png}
\caption{Courselet category distribution}
\label{fig:courseletcategories}
\end{figure}
The most viewed courses are \href{https://quantinar.com/course/23/Blockchain}{DEDA Digital Economy \& Decision Analytics}, \href{https://quantinar.com/course/103/statistics-of-financial-markets}{Statistics of Financial Markets} and \href{https://quantinar.com/course/67/ADM}{Advanced Mathematics}.
The most recurrent topics relate to cryptocurrencies, clustering, prices, prediction, marketing, and finance, as the word cloud in Figure \ref{fig:coursedetailwordcloud} suggests. \begin{figure}[h!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figure/course_detail_word_cloud.png}
\caption{Word occurrence in courselet description}
\label{fig:coursedetailwordcloud}
\end{figure} Quantinar's courselets come with their respective implementation code as Quantlets, written mostly in the R or Python programming languages (see Figure \ref{fig:quantletrepolang}). \begin{figure}[h!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figure/quantlet_repo_lang}
\caption{Quantlets programming language}
\label{fig:quantletrepolang}
\end{figure}
\newpage
\section{How? Blockchain, a technology for Open Science}
\subsection{Technology}\label{sec:technology}
The Quantinar P2P platform will have most of its operations on-chain, since decentralization at its core is a strong advantage over most educational platforms. However, as with academic journals, Quantinar must ensure that each user is indeed a real person and cannot transfer or sell their wallet addresses, and with them, their acquired reputation. In order to do so, a centralized software component with specific capabilities is required, as illustrated in Figure \ref{fig:software-architecture}.
\begin{figure}[h!]
\includegraphics[width=14cm, keepaspectratio]{diagrams/general-architecture-quantinar-p2p.png}
\caption{Software Architecture Overview}
\label{fig:software-architecture}
\end{figure}
Indeed, as accountability in the OPR process cannot be achieved without an immutable online identity, the centralized component should provide identity management features such as a Single Sign-On or BrowserID solution \cite{sso}.
That way, Quantinar can integrate all the different components of the platform under the same umbrella. For this, the Keycloak project \cite{keycloak} will be used, with a simple MySQL database. An extra layer of information will be sought from users through a KYC solution such as MouseKYC \cite{mousekyc}.
Beyond identity management, a minimal gateway is needed to provide reliable document uploading to an IPFS chain and to validate the uploaded content in order to filter out forbidden items or spam. Articles will be uploaded on a public IPFS chain like Filecoin's \cite{filecoinWhitepaper}, while presentations, videos, and other types of content will be uploaded on a private IPFS chain managed by the Quantinar DAO. More information about IPFS can be found under section \ref{sec:ipfs}.
The Quantinar DAO will be hosted on the Ethereum blockchain, as it is one of the most stable and predictable technologies. Moreover, the recent adoption of Proof of Stake and the upcoming implementation of sharding by Ethereum improve its competitiveness with respect to other blockchains. On top, the planned launch of SoulBound tokens, announced in early 2022 by Vitalik Buterin (\href{https://vitalik.ca/general/2022/01/26/soulbound.html}{vitalik.ca/soulbound}), comes at a good time. SoulBound tokens are \textit{special NFTs with immutable ownership and pre-determined immutable burn authorization}. Quantinar is a perfect application for SoulBound tokens via the various certifications obtained when passing a course.
Finally, the whole Web3 side of the platform will be written using the Aragon framework (\href{https://aragon.org/}{aragon.org}), one of the most popular DAO frameworks on Ethereum. The centralized side of the infrastructure will be hosted on an OpenStack cloud fully installed and managed by the Quantinar team.
\subsection{Open Access and IPFS}\label{sec:ipfs}
At the center of Quantinar is the Open Access to its content. Nevertheless, it is not enough to guarantee Open Access if the content is controlled by a centralised institution such as an online academic journal. Indeed, the access can be revoked at any time or, even worse, ransomware attacks could take place, which would render all of the data on the servers useless.
The InterPlanetary File System (IPFS) is a peer-to-peer protocol that distributes data storage across its network in a decentralised manner, first described in ``IPFS - Content Addressed, Versioned, P2P File System'' as \textit{an ambitious vision of new decentralized Internet infrastructure, upon which many different kinds of applications can be built} \cite{benet:2014}. It builds on top of common ideas collected from pieces of software like BitTorrent or Git to create a protocol that manages Merkle DAGs containing file data over a network.
There have been multiple attempts at building an IPFS-based storage solution that is ready for solving real-world challenges like decentralizing medical data, as described by \cite{kumar:2021}, and even more attempts of building IPFS networks for academic usage, like the share of databases used for research \cite{meng:2021}, some of which tackled issues like availability, immutability, transparency and security \cite{kumar:2021}, and even open access to academic research \cite{kumar:2020}.
Decentralization makes it difficult to restrict access to the network's data. On top of this, thanks to content addressing, files stored using IPFS are automatically versioned (\href{https://ipfs.io/}{ipfs.io}). Thus, by using a decentralized storage system such as IPFS, no one can revoke access to an author's publications or delete an open peer-review article, as Quantinar's content is shared across an IPFS network. Finally, at the time of writing, Filecoin costs are extremely low compared to standard centralized cloud-based solutions (less than \$1/TB/month for Filecoin versus about \$23/TB/month for AWS; sources: \href{https://largedata.filecoin.io/}{Filecoin} and \href{https://aws.amazon.com/s3/pricing}{AWS}). Quantinar's network and reputation system is thus truly resilient.
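The key property exploited here is content addressing: an object's address is derived from the object itself, so any modification produces a new address and old versions stay retrievable. A minimal sketch of the idea (not the actual IPFS CID algorithm, which encodes a multihash of a Merkle DAG node) is:
\begin{verbatim}
import hashlib

def content_address(data: bytes) -> str:
    # Simplified content address: a plain SHA-256 digest of the data.
    # Real IPFS CIDs are multihash-encoded Merkle DAG roots, but the
    # property illustrated here is the same.
    return hashlib.sha256(data).hexdigest()

v1 = content_address(b"Quantinar courselet, version 1")
v2 = content_address(b"Quantinar courselet, version 2")
assert v1 != v2  # any edit yields a new address (automatic versioning)
assert v1 == content_address(b"Quantinar courselet, version 1")
# identical content always resolves to the same address, so an old
# version remains addressable as long as any peer keeps a copy
\end{verbatim}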
We do not propose implementing our own IPFS network, at least not in the beginning and not for the open resources of the platform such as papers, videos or accompanying PDFs, which should all be public information. On top of that, easy access to a completely decentralized, readily available IPFS network is required for the platform to take off, this being the second main reason for using the Filecoin network \cite{filecoinWhitepaper}. The token-generation capabilities of the Quantinar DAO (cf. \ref{sec:dao}) can help support the Filecoin storage on a \textit{pay as you grow} basis.
Of course, private data use cases will be developed in a future phase of the platform that would require a private IPFS network with access control (as seen in \cite{steichen:2018}). This will be designed specifically for holding private databases, original LaTeX files or other information which might be seen as sensitive in the eyes of the researchers, companies or universities. Thus, all of the actors will have the power to decide who can access this information. More research and development is required for developing such a feature, which is planned to happen in the second phase of the project.
\subsection{Ownership and copyrights: Courselet NFT}\label{sec:nft}
One goal of Quantinar is to make sure that researchers keep the copyright and ownership of their publications. To that end, Quantinar creates a non-fungible token (NFT) for each courselet on the platform and transfers it to the courselet author(s). An NFT is a unique cryptographic token created from a specific smart contract standard (e.g. ERC-721 on the Ethereum blockchain) that provides functionalities such as ownership transfer and ownership verification. Thanks to this NFT, any author can claim ownership of a specific courselet on the Quantinar platform by providing the unique hash of the NFT associated with it.
While smart contracts and NFTs make it easy to verify ownership of a digital object on the blockchain, copyrights exist in the non-digital world and are governed by state law. Thus, in order to truly ensure copyright protection, each courselet will be secured with an open-source license such as MIT or GNU GPL. Finally, authors will keep ownership of the intellectual property (IP) of their work, allowing them to publish their study in external academic journals.
Some information such as the authors' identities, the courselet title, the link to the courselet content, the review score and links to reviews will be stored on-chain, while, in order to reduce storage costs, the courselet content and the actual reviews will be stored off-chain on the Quantinar IPFS network defined in Section \ref{sec:ipfs}. Having a courselet NFT with an immutable link to versioned content and reviews helps to open up the iterative publication process during the peer-review feedback loop and protects the integrity of the authors' reputation scores defined in \ref{sec:rep_score}. Thus, on top of the courselet metadata, all contribution logs will be accessible directly on the blockchain ledger or on the IPFS network, ensuring transparency.
\subsection{Reward and Reputation system}\label{sec:reputation}
Since the development of online P2P communities, multiple proposals have been made to ensure trust between members and the quality of members' contributions in order to build sustainable P2P platforms. Such a sustainable ecosystem determines the global trust value of each member while making sure that evaluations cannot be gamed in favor of malicious agents.
A well-known technique is the PageRank algorithm \cite{page1999}. While it was originally built for ranking web pages in the context of improving search engines, it also serves as an indicator of the trustworthiness of a website. Since its publication, it has been widely studied (the reader can refer to \cite{6998874} for a short survey or to \cite{LangvilleMeyer+2011}) and extended: for example, EigenTrust \cite{10.1145/775152.775242} evaluates trust in a distributed manner within P2P file-sharing communities. Alternatives have also been proposed, such as PeerTrust \cite{xiong:2004}, which evaluates members' reputation based on specific contextualized parameters such as contribution feedback, number of contributions and credibility of the feedback source. However, PageRank is not famous only because it was created by the founders of Google: its simplicity, generality, guaranteed existence, uniqueness and fast computation are the reasons why it is used in many applications beyond Google's search engine and is still very popular today. In fact, \citet{doi:10.1137/140976649} shows how PageRank can be applied to biology, chemistry, ecology, neuroscience, physics, sports, and computer systems.
To estimate the reputation of each member, we propose to use CredRank, an algorithm inspired by PageRank and developed for blockchain-based communities by SourceCred (\href{https://sourcecred.io}{sourcecred.io}). We can estimate the value of each contribution relative to the marginal value it brings to the community as a whole, and engage the community by effectively rewarding Quantinar contributors for the labor they provided to produce their contributions. SourceCred provides an algorithm for distributing rewards within a community which is abstract enough to be used in any DAO thanks to the concepts of \textit{Grain} and \textit{Cred}. Cred refers to the reputation score defined in Section \ref{sec:rep_score}; it is mapped to a utility token that is not transferable and can only be gained by contributing to the community. In the Quantinar DAO, the Cred token will be named \textbf{QNAR}. The second concept of SourceCred's algorithm is Grain, represented by the Quantlet token, \textbf{QLET}, in the Quantinar DAO. This token is meant for the creation of an internal research-based economy and will be tradable on external exchanges. It will be used to pay developers for new features of the platform, to gain access to private courselet components, or as payment for consulting top-level researchers inside the platform (see Section \ref{sec:reward}).
\subsubsection{Reputation score evaluation}\label{sec:rep_score}
In detail, all contributions and community members are mapped into a directed graph $G = (\mathcal{V}, \mathcal{E}, \mathcal{W})$, where $\mathcal{V} = \{v_i\}_{1\leq i \leq n}$ is the set of $n$ unique vertices mapping contributions and contributors, connected by $m$ directed edges $\mathcal{E} = \{e_{ij}\}_{(v_i,v_j) \in \mathcal{V}^2}$, where $e_{ij}$ denotes the edge from parent vertex $j$ to child vertex $i$, and $\mathcal{W}=(w_{ij})_{e_{ij}\in \mathcal{E}}$ with $w_{ij} \geq 0$ is the set of $m$ edge weights, chosen by the community members in a heuristic manner. The weights have a strong influence on the final PageRank computation: they must ensure that contributions which require a lot of labor get rewarded more than simpler ones, and that contributions get rewarded when they are validated or reviewed by the community. Indeed, if a contribution $j$ has multiple children, its score should be propagated forward to the most important child. Figure \ref{fig:contrigraph} shows such a graph with three community members, John, Alice and Bob, who are connected via the edges between their contributions and interactions. We can clearly see that the rank of courselet $CL0$ will mostly be propagated to its author Alice ($1/(1+1/16+1/16)=0.89$).
\begin{figure}[h!]
\centering
\includegraphics[height=7cm, keepaspectratio]{figure/chap_how/contrigraph.jpg}
\caption{Contribution graph}
\label{fig:contrigraph}
\end{figure}
The following non-exhaustive list of contributions will be mapped into the graph: creating a courselet, quantlet or datalet; viewing, citing and reviewing a courselet; enrolling as a student in a courselet; actively participating in the Discord channels; voting in the DAO decision process; and helping other community members with their research.
A simple definition of the PageRank score of vertex $i$ is given in the original paper as:
\begin{equation}
pr(v_i)= \sum_{j=1}^{n} A_{i j} \frac{pr(v_j)}{\sum_{k=1}^{n} A_{k j}}
\end{equation}
where $A$ is the adjacency matrix of the graph $G$. Based on the random surfer model, where from any node a user either returns to a random seed node with probability $\alpha$ or continues to a linked node with probability $1-\alpha$, PageRank is then defined as the stationary distribution of a random walk on the graph $G$ given by the equation:
$$pr = \alpha s + (1-\alpha)pr P$$
where $s$ is the $n$-vector $(1/n, \ldots, 1/n)$, $pr$ is the row vector of PageRank scores $pr = \left(pr(v_i)\right)_{1\leq i\leq n}$, and $P$ is the transition probability matrix defined, for all $1\leq i,j \leq n$, by $P(i,j) = \frac{w_{ij}}{d_i}$, where $d_i = \sum_j w_{ij}$ is the (out-)degree of $v_i$. PageRank can easily be estimated \textit{locally} on large graphs \cite{NIPS2013_99bcfcd7}, or globally using the power iteration method if the graph is small enough.
Thanks to PageRank, we can easily evaluate each node's score from the scores of all nodes pointing to it; that way, for each researcher, having highly valuable back-links improves their score more than having less important ones. This evaluation motivates Quantinar's members to produce research of high quality.
In our context, since contributions such as "review" or "order" are not at the source of Quantinar's value, personalized PageRank is used: instead of a uniform distribution over all contributions, $s$ is defined by $s_i = \frac{pr(v_i)}{\sum_{j\in U}pr(v_j)}$ if $i\in U$ and $s_i=0$ otherwise, where $U$ is the set of user and courselet vertices.
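For illustration, the personalized PageRank vector can be computed with a few lines of power iteration. This is a minimal sketch, not SourceCred's implementation: the uniform seed over $U$, the damping value and the convergence threshold are our own choices.
\begin{verbatim}
import numpy as np

def personalized_pagerank(W, U, alpha=0.15, tol=1e-12):
    """W: (n, n) matrix of edge weights w_ij; U: indices of the user
    and courselet vertices used as seed; alpha: teleport probability."""
    n = W.shape[0]
    d = W.sum(axis=1)                        # d_i = sum_j w_ij
    P = W / np.where(d > 0, d, 1)[:, None]   # P(i, j) = w_ij / d_i
    s = np.zeros(n)
    s[U] = 1 / len(U)                        # seed restricted to U
    pr = s.copy()
    while True:                              # pr = alpha*s + (1-alpha)*pr P
        nxt = alpha * s + (1 - alpha) * pr @ P
        if np.abs(nxt - pr).sum() < tol:
            return nxt
        pr = nxt
\end{verbatim}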
Moreover, in order to ensure that new contributors do not suffer from a strong "newcomer effect", having their score undervalued because of "old-timer" score domination, we propose to use the latest CredRank algorithm from SourceCred to evaluate the scores on a discretized historical graph, as represented in Figure \ref{fig:contrigraph_temp}.
\begin{figure}[h!]
\centering
\includegraphics[height=7cm, keepaspectratio]{figure/chap_how/contrigraph_temp.jpg}
\caption{Contribution graph over periods}
\label{fig:contrigraph_temp}
\end{figure}
First, each vertex $i$ is indexed by a timestamp $t$ corresponding to its appearance in Quantinar and denoted by $v_i^t$. The history is then discretized into $T$ fixed-length periods (for example, a week). We define $\mathcal{A}_k = \{v_i^t,\ 1\leq i \leq n,\ 1\leq t \leq k,\ v_i^t$ is a contribution$\}$ as the set of vertices including all contributions until date $k$. We can easily retrieve the set of new contributions during period $k \geq 1$ with $\mathcal{A}_k^{new} = \mathcal{A}_k \setminus \mathcal{A}_{k-1}$ and $\mathcal{A}_0^{new} = \mathcal{A}_0$. At each period $k$, we create the epoch contributor nodes that authored the new contributions in $\mathcal{A}_k^{new}$, that is, $\mathcal{C}_k^{new} = \{v_i^k,\ 1\leq i \leq n$ if $e_{ij} \in \mathcal{E}$ where $v_j \in \mathcal{A}_k^{new}\}$. Thus the set of epoch contributor nodes until period $k$ is defined as $\mathcal{C}_k = \bigcup_{1\leq t \leq k} \mathcal{C}_t^{new}$. We also add directed edges from the contributions in $\mathcal{A}_k^{new}$ to their respective author(s) in $\mathcal{C}_k^{new}$.
For any $k,\ 1 \leq k \leq T$, we define the epoch contribution graph as $G_k = (\mathcal{V}_k, \mathcal{E}_k, \mathcal{W})$ where $\mathcal{V}_k = \mathcal{A}_k \cup \mathcal{C}_k$ and $\mathcal{E}_k = \{e_{ij}\}_{(v_i, v_j) \in \mathcal{V}_k^2}$. The graph update is then given by: $$G_{k}=G_{k-1} \oplus \Delta G_{k}=\left(\mathcal{V}_{k-1} \oplus \Delta \mathcal{V}_{k},\ \mathcal{E}_{k-1} \oplus \Delta \mathcal{E}_{k},\ \mathcal{W}\right)$$
That way, for each contributor $i$, the relative value of their new contributions with respect to past contributions is easily available and given by the scores of the nodes with index $i$ in $\mathcal{C}_k$. In order to compute the global score at a given period, a weighted sum with a discount factor is used: at each period $k$, PageRank is evaluated on $G_k$ and the global score of any contributor $v_i$ is given by:
\begin{equation}
S_{i}^{*} = \sum_{t=1}^k c^{k-t}pr(v_i^t)
\label{eq:rep_score_raw}
\end{equation}
where $c$ is a decay factor.
The above formula is valid for any vertex $v_i$ in the contribution graph. However, not every contribution should be rewarded, and since the reputation score is the first step in computing the associated reward, the score is modified to reflect the distribution of the reward more accurately. On top, since $pr(v_i)$ is a probability, $S_i^{*}$ defined above can become very small as $n$ increases. That is why SourceCred proposes to normalize the score and scale it by the total amount of Cred tokens minted in a given period.
To do so, SourceCred introduces a weight for each node in the graph, which defines how many Cred tokens are minted at that node. The weights are determined by the community based on what the community values in Quantinar. Any node $i$ that has a strictly positive weight $w_i > 0$ will mint $w_i$ $\operatorname{QNAR}$ tokens. Denoting $M_k=\sum_{i=1}^n w_i$, the following definition of the reputation score at period $k$ for any contributor $i$ is then used:
\begin{equation}
S_{i} = \frac{S_{i}^{*}}{\sum_{j \in C_k} S_{j}^{*}} (M_k + \mathrm{base})
\label{eq:rep_score}
\end{equation}
where $C_k$ is the set of original contributor nodes in $G_k$ and $\mathrm{base}=1000$ is a constant that serves as a reference level. With definition \eqref{eq:rep_score}, the reputation score increases as the relative contributor PageRank score and the total number of valuable contributions increase. Finally, it is clear that this reputation system favors competition in the long term: in order to maximise one's reputation score, one needs to maximise the long-term value of one's contributions.
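A compact sketch of this aggregation, implementing Eqs. \eqref{eq:rep_score_raw} and \eqref{eq:rep_score} under the assumption that the per-epoch PageRank scores have already been computed (the decay value $c$ and the array layout are our own choices):
\begin{verbatim}
import numpy as np

def global_scores(epoch_pr, c=0.9):
    """epoch_pr: (k, n) array with epoch_pr[t, i] = pr(v_i^t).
    Returns S*_i = sum_t c^(k-t) pr(v_i^t) for the latest period k."""
    k = epoch_pr.shape[0]
    decay = c ** np.arange(k - 1, -1, -1)    # c^(k-t) for t = 1..k
    return decay @ epoch_pr

def reputation_scores(S_star, contributors, node_weights, base=1000):
    """Normalize S* over the contributor nodes C_k and scale by the
    amount of QNAR minted this period, M_k = sum_i w_i."""
    M_k = node_weights.sum()
    S = np.zeros_like(S_star)
    S[contributors] = (S_star[contributors]
                       / S_star[contributors].sum() * (M_k + base))
    return S
\end{verbatim}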
\subsubsection{Reward based on reputation score}\label{sec:reward}
SourceCred proposes three types of grain-generation strategies: \textit{IMMEDIATE}, \textit{BALANCED} and \textit{RECENT}. The IMMEDIATE strategy mints one QLET for each QNAR gained in the past week. The BALANCED strategy mints QLET based on the weights that are given to each action inside the DAO at a given moment. For example, if the weight of creating a courselet changed from 1 to 2, the QLET target would be recalculated for each author and they would be paid extra for their next contributions inside the platform, in order to catch up; the converse is also true. The last strategy for generating QLET is RECENT, which applies a weekly weight decay of a certain percentage.
For the Quantinar DAO, the BALANCED strategy will be used: since the concept of a decentralized P2P research platform is very new, this strategy is more equitable in the long run for all contributors, and it allows a continuous fine-tuning of the weights used for QNAR/QLET generation by the whole community.
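Our reading of the three policies can be summarized in a few lines (a hedged sketch; SourceCred's actual accounting differs in details such as payout caps and rounding):
\begin{verbatim}
def mint_immediate(qnar_last_week):
    # one QLET per QNAR earned in the past week
    return qnar_last_week

def mint_balanced(lifetime_qnar_share, total_qlet_minted, already_paid):
    # pay each contributor toward a lifetime target proportional to
    # their share of all QNAR ever earned; under-paid contributors
    # "catch up" after a weight change, over-paid ones wait
    target = lifetime_qnar_share * total_qlet_minted
    return max(0.0, target - already_paid)

def mint_recent(qnar_by_week, decay=0.1):
    # qnar_by_week[0] is the current week; older QNAR counts less
    return sum(q * (1 - decay) ** age
               for age, q in enumerate(qnar_by_week))
\end{verbatim}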
\subsection{Quantinar CredRank dynamics}
\subsubsection{Case study}
Quantinar has already implemented certain features and collected some statistics. To illustrate the dynamics of the CredRank algorithm on the platform, we ran an experiment including:
\begin{itemize}
\item Courselet publication as an author
\item Courselet enrollment (ordering a courselet as a user)
\item Courselet review (grading or commenting a courselet as a user)
\item Courselet page view
\end{itemize}
The graph is created from the contribution nodes \{"courselet", "order", "review", "view"\} and the user nodes. The weighted edges and weighted nodes are defined in Tables \ref{table:edges_weights} and \ref{table:nodes_weights}, respectively, and an example of such a graph is represented in Figure \ref{fig:contrigraph}; a small numerical check follows the tables.
\begin{table}[!htb]
\begin{subtable}{.5\linewidth}
\centering
\begin{tabular}{ll}
\hline\hline
Edge & Weight \\
\hline
(view, courselet) & 1e-5 \\
(courselet, user) & 1 \\
(user, courselet) & 1/8 \\
(order, courselet) & 5 \\
(courselet, order) & 1/16 \\
(user, order) & 1/8 \\
(order, user) & 1 \\
(user, review) & 1/8 \\
(review, user) & 1 \\
(review, courselet) & 2 \\
(courselet, review) & 1/16 \\
\hline\hline
\end{tabular}
\caption{Edge weights}
\label{table:edges_weights}
\end{subtable}%
\begin{subtable}{.5\linewidth}
\centering
\begin{tabular}{ll}
\hline\hline
Node & Weight \\
\hline
courselet & 10 \\
review & 1 \\
order & 1 \\
view & 0 \\
user & 0 \\
\hline\hline
\end{tabular}
\caption{Node weights}
\label{table:nodes_weights}
\end{subtable}
\caption{Graph weights}
\end{table}
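As a sanity check of the propagation claim above, the share of courselet $CL0$'s rank flowing to its author follows directly from the out-going edge weights in Table \ref{table:edges_weights} (node names in this sketch are ours):
\begin{verbatim}
# out-going edge weights of a courselet node (edge-weights table)
out_weights = {
    "author": 1,       # (courselet, user)
    "order":  1 / 16,  # (courselet, order)
    "review": 1 / 16,  # (courselet, review)
}
share_to_author = out_weights["author"] / sum(out_weights.values())
print(round(share_to_author, 2))  # 0.89, as quoted in the text
\end{verbatim}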
Our dataset contains the production SQL database of the Quantinar platform and the Google Analytics statistics of \href{https://quantinar.com}{quantinar.com} from the inception date, 2021-09-13, to 2022-10-25.
Thanks to the graph structure defined above, the relative value of contributions can be measured over time, and several effects can be identified. In Figure \ref{fig:user_score}, it is clear that past scores have inertia and contribute to the score of the new period, which ensures that yesterday's important contributors still have a high score today even if they are no longer as active.
\begin{figure}[h!]
\centering
\includegraphics[height=7cm, keepaspectratio]{figure/chap_how/score_dynamics_2022-10-12.png}
\caption{PageRank score of Quantinar's users}
\label{fig:user_score}
\end{figure}
On top, as past contributions become more popular, the share of the past epoch nodes' score within the total score increases, which incentivizes community members to consider the long-term value of their contributions. Indeed, in Figure \ref{fig:epoch_score}, the most important epoch in July 2022 for this user is 2022-01-31, which contributes more than 20\% of the total score.
\begin{figure}[h!]
\centering
\includegraphics[height=7cm, keepaspectratio]{figure/chap_how/pagerank_user_time_2022-10-03-47.jpg}
\caption{Epoch nodes score for one user (in \% of the total score, $c=1$)}
\label{fig:epoch_score}
\end{figure}
\subsection{An auction based open peer-review (OPR)}\label{sec:opr}
An open peer-review process using IPFS and decentralization has already been conceptualized by \cite{tenorioetal:2019}. Such a process ensures the seven traits of the open peer-review process identified by \citet{Ross-Hellauer:2017}:
\begin{itemize}
\item \textbf{Open identities}: Authors and reviewers are aware of each other’s identity.
\item \textbf{Open reports}: Review reports are published alongside the courselet and versioned using IPFS.
\item \textbf{Open participation}: The wider community is able to contribute to the review process. This is ensured by an open call for reviews from the courselet author.
\item \textbf{Open interaction}: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged. Since the authors' and reviewers' identities are revealed to each other, they are encouraged to chat in a specific Discord channel which is open for the community to read.
\item \textbf{Open pre-review manuscripts}: Courselets are made immediately available on Quantinar in advance of any formal peer review procedures.
\item \textbf{Open final-version commenting}: Review or commenting on final courselet publications is allowed directly on the courselet page on Quantinar.
\item \textbf{Open platforms}: Quantinar is open by design.
\end{itemize}
In Quantinar, on top of addressing the above issues, we propose a new peer-review process that incentivizes community members to review publications in a timely and fair manner. Let there be a set of entities and smart contracts that governs the game, defined by: stake owners $S_i$ with stakes $s_i$, a paper proposer $P$, a paper acceptance status $S$ (Accepted, Denied), an auction type $T$, a maturity $M$, a payoff function $f$, a voting function $v$, a token $\operatorname{QNAR}$, and an inflation rate $I$.
The goal of the game is to reach a democratic and fair peer review of a proposed paper and to incentivise publication. While the latter can be achieved through token rewards, a due process is designed for the former. A democratic and fair review is defined as one that is simultaneously qualified and aligned with the majority of the network. Since the majority of decision makers is not necessarily objectively right, an incentive structure must be created that assigns a larger weight to those decision makers who have a history of being right or of publishing knowledge which is deemed right (see Section \ref{sec:reputation}).
The inflation rate, time to maturity, auction type and payoff function are defined as global variables whose status can be set by the DAO. The payoff function is, in its simplest form, a function that sums all bids and assigns a positive (negative) bid-weighted share to each winner (loser). Additionally, each participant receives token units according to the inflation rate after each iteration. A multitude of auction types is possible. Auctions are conducted as first-price sealed-bid auctions, implemented in a smart contract \cite{blindAuctionContract}. Other classes of auctions might be employed in the future.
Let there be a proposal for a new paper by proposer $P$. The acceptance of the proposal is treated as an auction which inherits the global variables defined above. Stake owners $S_1, S_2, S_3$ of the token QNAR are given the opportunity to participate in the auction. Let us assume that each stake owner enters the auction as a player.
Having each reviewed the paper on their own, the players must decide on the acceptance status $S$ in a blind auction. They submit a bid, which is a subset of their stake, encrypted with their public key, together with their voting decision (Accepted or Denied) as input to a smart contract. When the auction closes at maturity, all players must publicly disclose their bid. Since their public keys are known, verification of their bid is trivial. In case of a false claim, the player automatically loses their (locked-up) bid. Due to this strictly negative payoff, there cannot be any incentive to make a false claim.
$$
\begin{aligned}
r &= \frac{\sum_{j \in \mathcal{L}}{s_j}}{\sum_{j \in \mathcal{W}}{s_j}} \\
P(S_i) &= \frac{s_i}{\sum_{j \in \mathcal{W}}{s_j}} \sum_{j \in \mathcal{L}}{s_j} = s_i \, r \\
\end{aligned}
$$
where $P(S_i)$ is defined as the profit of staker $S_i$. The index sets $\mathcal{L}$ and $\mathcal{W}$ correspond to the stakes on the game's losing and winning sides, respectively. The ratio $r$ of the stakes on the two sides of the bet diminishes the bet's payoff when a structural imbalance exists.
As an example, use a voting function $v$ for the acceptance status that assigns 1 to "Accepted" and $-1$ to "Denied". Then, at the close of the auction, the voting decision is given by the sign of the bid-weighted average of the votes.
Let players $S_1, S_2$ vote "Accepted" with stakes of 1 and 2 $\operatorname{QNAR}$, whereas player $S_3$ votes "Denied" with 2 $\operatorname{QNAR}$.
Then the sum of bids is 5 $\operatorname{QNAR}$ and the voting decision is $\operatorname{sign}(1 + 2 - 2) = 1$, i.e. "Accepted". The potential payoff for the winners is restricted by the ratio of stakes on each side of the game. This is an optional mechanism that aims to dampen the effects of a strong imbalance of betting forces; it achieves this goal by decreasing the marginal payoff for each staker who joins the stronger side. The sum of bids is then distributed proportionally to the players: $S_1$ receives $(1/3) \times 5 = 5/3$ (their stake of 1 plus a profit of $2/3$) and $S_2$ receives $(2/3) \times 5$.
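The worked example can be reproduced in a few lines (a sketch of the payoff logic only; the on-chain contract additionally handles bid encryption, disclosure and time locks):
\begin{verbatim}
def settle(bids):
    """bids: list of (stake, vote) pairs with vote in {+1, -1}."""
    decision = 1 if sum(s * v for s, v in bids) >= 0 else -1
    winners = [s for s, v in bids if v == decision]
    losers = [s for s, v in bids if v != decision]
    r = sum(losers) / sum(winners)     # imbalance ratio
    # each winner receives their stake back plus the pro-rata profit:
    # s_i + s_i * r = (s_i / sum_W) * (sum_W + sum_L)
    return decision, [s * (1 + r) for s in winners]

decision, payouts = settle([(1, +1), (2, +1), (2, -1)])
print(decision, payouts)  # 1 [1.666..., 3.333...] = (1/3)*5 and (2/3)*5
\end{verbatim}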
The rate of minted tokens determines the general degree of incentive that potential reviewers have, independently of their own abilities. Since only people who participate in reviews receive additional tokens, non-participants bear the cost of inflation. Hence the rate of minted tokens should be decided by the network.
In order to prevent proposers from employing trivial strategies to profit from minting, an anti-spam strategy needs to be enforced. We suggest that each proposer be required to submit a minimum bid with each paper. The bid size should also be decided by the network.
\subsubsection{Simulation}
\href{https://github.com/QuantLet/Quantinar-Staking-Simulation}{\hspace*{\fill} \raisebox{-1pt}{\includegraphics[scale=0.05]{figure/qletlogo_tr.png}}\, Quantinar-Staking-Simulation}
A simulation study is conducted in order to assess the minimum number of participants required for a stable system, their survival time, and the expected changes in wealth (stakes) conditional on the initial distributions.
An artificial paper-acceptance probability is introduced and set to 0.5. Inflation per iteration is 1Q (one token unit).
The aggregate behavior of stakers is simulated. A staker's acceptance or denial of a paper is a Bernoulli-distributed random variable with probability 0.5. For each type of initial stake distribution, Pareto and uniform, the stakers' performances are evaluated for sets of 5, 10, 50, 100 and 1000 participants after 10, 50, 100, 1000 and 10000 rounds. One round is counted as the proposal of a paper, i.e. the submission and subsequent acceptance or denial of a paper.
We find the system to be stable for as few as 5 players. Figure \ref{fig:boxplot_staker} shows that Sharpe ratios are generally high and positive when there are few stakers. Sharpe ratios decrease with growing competition, yet stay positive even for large numbers of stakers. This behavior is independent of whether the initial distribution is uniform or Pareto. Consequently, reviewers are incentivised to offer their expertise especially when there is a lack of it, because that is when their potential payoff is highest. This incentive holds under real-world, uneven distributions such as the Pareto type. Table \ref{table:table_performance} shows the expected return, standard deviation and Sharpe ratio per round, i.e. per paper proposal, conditional on the type of initial distribution. Under both types of distribution, stakers are strongly incentivised to participate due to the high expected return. While stakers generally earn less under an initial Pareto-distributed stake than under a uniform distribution, Sharpe ratios and expected returns remain high.
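A condensed sketch of one simulation run is given below; the full code is in the linked repository, and the bid fraction, the inflation split and the random seed here are our own assumptions.
\begin{verbatim}
import numpy as np

def sharpe_ratios(stakes, n_rounds, bid_frac=0.1, inflation=1.0, seed=0):
    """stakes: initial token holdings; every round each staker votes
    Accept (+1) or Deny (-1) with probability 0.5 and bids a fixed
    fraction of its stake. Returns the per-staker Sharpe ratio."""
    rng = np.random.default_rng(seed)
    stakes = np.asarray(stakes, dtype=float).copy()
    rets = []
    for _ in range(n_rounds):
        before = stakes.copy()
        votes = rng.choice([1, -1], size=stakes.size)  # Bernoulli(0.5)
        bids = bid_frac * stakes
        decision = 1 if (bids * votes).sum() >= 0 else -1
        win = votes == decision
        pot = bids[~win].sum()                         # losers' stakes
        stakes[~win] -= bids[~win]
        stakes[win] += bids[win] / bids[win].sum() * pot
        stakes += inflation / stakes.size              # minted per round
        rets.append(stakes / before - 1)
    rets = np.array(rets)
    return rets.mean(axis=0) / rets.std(axis=0)
\end{verbatim}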
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/boxplot_paretooutput_for_paper_n_rounds.png}
\caption{Initial Pareto Distribution}
\label{fig:boxone}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/boxplot_uniformoutput_for_paper_n_rounds.png}
\caption{Initial Uniform Distribution}
\label{fig:boxtwo}
\end{subfigure}
\caption{Sharpe Ratio conditional on amount of Paper Proposals}
\label{fig:boxplot_paper}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/boxplot_paretooutput_for_paper_n_stakers.png}
\caption{Initial Pareto Distribution}
\label{fig:boxthree}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figure/boxplot_uniformoutput_for_paper_n_stakers.png}
\caption{Initial Uniform Distribution}
\label{fig:boxfour}
\end{subfigure}
\caption{Sharpe Ratio conditional on amount of Stakers}
\label{fig:boxplot_staker}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{rrrr}
\hline
Initial Distribution & Expected Return & Standard Deviation & Sharpe Ratio \\
\hline
\hline
Uniform & 0.0846 & 0.4127 & 0.2604 \\
\hline
Pareto & 0.0284 & 0.3472 & 0.2059 \\
\hline
\end{tabular}
\caption{Performance Measures conditional on initial Wealth Distribution per proposed Paper}
\label{table:table_performance}
\end{table}
\section{Introduction}
The invention of the transistor in 1947 is commonly regarded as the first step into our modern era: The Information Age.
Shortly before this point in time, important scientific advances had been accomplished by people such as Ronald Fisher, Egon Pearson and Jerzy Neyman.
The interplay between their scientific methods and the means of their efficient application through computation has paved the way for the inauguration of statistics in various scientific disciplines. The added value of hypothesis testing and causal inference techniques to medicine and psychology (clinical testing), the social sciences, economics and policy-making could hardly be overstated.
With the ever-growing influence of statistics, new ways must be explored to preserve and produce knowledge, evaluate ideas and educate the growing number of practitioners. We believe that the necessity for a unified platform that combines these features is evident, and that its content should be created, consumed and owned by its community.
Naturally, the concept of a platform economy is not new. There are many top-down approaches that provide single elements of the combined approach that we propose.
Journals are a proven way of retaining and furthering knowledge. Yet they are limited in capacity, such as in the number and speed of reviewers. They are also subject to publication bias, meaning that papers with insignificant results are unlikely to be published, thus skewing research results. The existence of publication bias is well studied, such as in (Ioannidis, "Methods Matter", ...).
Common means of spreading knowledge are available in educational platforms.
The search for a fruitful mixture of both theoretical and practical approaches has created many platforms in the education business, like Coursera, EdX and YouTube, all of which share many common strengths and weaknesses.
However, they are typically top-down approaches that do not provide ownership for creators (or only under severe limitations), and they fail to connect the creators of knowledge with its consumers and to pave the way for today's consumers to become tomorrow's creators.
There have been multiple attempts to define the problem that is the lack of quality in quantitative research, as enumerated by \cite{significanceFilter:2021}. However, defining the problem is only part of the solution; creating a framework for quality, reproducible research is another.
\href{https://quantinar.com/}{Quantinar} is a peer-to-peer (p2p) platform that strengthens research collaboration and reproducibility in different areas like Fintech, Blockchain, Machine Learning, Explainable AI, Data Science, Digital Economy, Cryptocurrency and Maths \& Stats. Its aim is to provide a better integration of scholarly articles, the studied data and the code of the implemented analysis to ensure the reproducibility of the published results, while also providing educational content.
Quantinar's philosophy is Open Science and its main pillars are the transparency, accessibility, sharing and collaborative development of knowledge, which can nowadays be implemented thanks to blockchain technology and smart contracts via a decentralized autonomous organisation (DAO) with a tokenized ecosystem. With the recent developments of Web3 technology, Quantinar makes it possible to spread the benefits of AI by equalizing the opportunity to access and monetize data, code and scientific ideas.
The first part of this paper motivates the need for Quantinar by reviewing the literature and stating the current problems in modern academia; then the technology used is presented, together with how Quantinar tries to solve some of the stated problems. Finally, the current status of Quantinar is presented along with its goals.
\section{Why Quantinar?}
According to \citet{Albright2017}, we are living in a so-called post-truth, or fake-news, era, where the masses can be manipulated via different information channels. Despite the peer review process, elements of false information are also present in research publications, in the shape of publication bias \cite{Ioannidis2005} or HARKing. Thus, a methodology that improves research reproducibility and reliability is much needed in today's academic environment.
First, a clarification of the concept of reproducibility in scientific research is necessary. \citet{Goodman2016} provide a good definition of reproducibility by emphasizing the difference between reproducibility, replicability and repeatability. The authors propose a new terminology:
\begin{itemize}
\item \textbf{methods reproducibility} refers to the provision of enough detail about study procedures and data so that the same procedures could, in theory or in actuality, be exactly repeated;
\item \textbf{results reproducibility} refers to obtaining the same results from the conduct of an independent study whose procedures are as closely matched to the original experiment as possible;
\item finally, \textbf{inferential reproducibility} refers to the drawing of qualitatively similar conclusions from either an independent replication of a study or a reanalysis of the original study.
\end{itemize}
Inferential reproducibility might be an unattainable ideal, since there might be competing models. However, the desired state of today's research, namely a framework in which authors can argue for or against one's research, can be achieved by ensuring a transparent research process, thanks to extensive reporting of the scientific design, measurement, data and analysis. Quantinar offers to integrate open access to scientific publications with code and data, to allow for results reproducibility and direct communication channels between researchers.
\subsection{Modern academics publish small p-values}\label{issues_academics}
Over their careers, academics and researchers nowadays face more and more pressure to publish articles, for example to be eligible as a PhD candidate, to be eligible for tenure at some universities, or from mere pressure by a funding private entity. This reduces the liberty of choosing the research topic and the time to complete the study, and can strongly impact the quality of the research process by forcing researchers to make questionable methodological decisions that produce "significant" results, i.e. small p-values \cite{Campbell2017}.
By studying articles in the field of medicine, \citet{Ioannidis2005} concludes that probably most of today's research findings are biased or even false. Considering $p = R/(R+1)$, the pre-study probability of a relationship being true in a specific field (where either there is only one true relationship among all those that could be hypothesized, or the chance of finding any of the existing true relationships is the same), he defines the Positive Predictive Value (Eq. \ref{PPV}) of any given field of study as the \textbf{post-study} probability of a finding being true:
\begin{equation} \label{PPV}
PPV = \frac{(1 - \beta)R}{R - \beta R + \alpha}
\end{equation}
The conclusion drawn from this equation is that a research finding is more likely true than false if $(1 - \beta)R > \alpha$. Since significance is conventionally set at $\alpha = 0.05$ in the academic environment, a finding is more likely true than false if $(1 - \beta)R > 0.05$ \cite{Ioannidis2005}.
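For illustration (the numbers here are ours, not Ioannidis'): with a well-powered design ($\beta = 0.2$), the conventional $\alpha = 0.05$ and pre-study odds $R = 0.25$, Eq. \eqref{PPV} gives $PPV = \frac{0.8 \times 0.25}{0.25 - 0.2 \times 0.25 + 0.05} = 0.80$, whereas an exploratory field with $R = 0.05$ and the same design yields $PPV = \frac{0.04}{0.05 - 0.01 + 0.05} \approx 0.44$: most claimed findings would then be false.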
This puts a lot of pressure on researchers to use unethical practices such as \href{https://quantinar.com/course/35/phacking}{$p$-hacking} or HARKing in order to publish research according to what the academic environment expects of them. $p$-hacking refers to all practices that could produce significant outcomes, that is, a $p$-value small enough for the null hypothesis to be rejected. Such practices include, for example, using only a subset of the data for the estimation, choosing dependent variables post factum, or adding data points if the final estimates are not significant \cite{Bruns2016}. HARKing is more specific and probably harder to prevent. It is defined as "presenting a post-hoc hypothesis (i.e. based or informed on one's results) in one's research report as if it were, in fact, an a priori hypothesis" \cite{Kerr1998}. While top journals are considered to act as guarantors of quality, they fail to adequately mitigate the problem, especially in Economics \cite{Brodeuretal:2020}.
The costs of such practices range from the ethical issues that might arise, especially in fields like medicine, to a general lack of trust in science; however, the literature seems to agree that research reproducibility practices could reduce these kinds of practices.
To strengthen the main idea presented by Ioannidis even further, \citet{Fanelli2010} creates a hierarchy of scientific fields based on the reported support for the tested hypotheses of the published papers in these fields (cf. p.~5 in Quantinar's \href{https://quantinar.com/course/35/phacking}{$P$-hacking} courselet). The author draws the conclusion that the scientific fields with fewer constraints on their biases (psychology, social science, etc.) usually report more positive results than the fields where these constraints are stronger (space science).
\subsection{Research Reproducibility in Econometrics}
The scholarly literature in Econometrics has long been criticized for its lack of reproducibility \cite{Leamer1983}. The impossibility of sample randomization and control, which are usually absent from empirical studies, strongly contributes to falsely supported hypotheses. Based on this critique, \citet{Ioannidis2013Economics} argue that problems like the broad flexibility of econometric models and the lack of accounting for multiplicity are still major issues in empirical Econometrics. To strengthen the case even further, \citet{AngristPischke2010}, who brought a solid improvement to econometric methods, argue that Leamer's critique is only part of the problem, the other part being the selective reporting bias. However, when taking those issues into consideration, the conclusion is that "strengthening the reproducibility culture with emphasis on independent replication, conducting larger, better studies, promoting collaborative efforts rather than siloed [...], and reducing biases and conflicts" are ways in which contemporary research in Econometrics can be improved.
On top, since empirical studies are hard to realize in Econometrics, researchers strongly rely on data. In particular, data-hungry methods such as machine learning and non-parametric statistics are used more and more frequently and cannot be reproduced, since the data is often private because of the costs of building, curating, maintaining and storing it. \href{http://www.quantlet.com/}{QuantNet} \cite{Hardle2007Quantnet, Borke2017}, the \href{https://blockchain-research-center.com}{Blockchain Research Center} and \href{https://paperswithcode.com/}{PapersWithCode} are platforms that already promote data and code accessibility as open-access libraries. On top, solutions such as \href{https://www.cascad.tech/}{CASCaD}, \href{https://www.replicabilitystamp.org/}{ReplicabilityStamp} and \href{https://codeocean.com/}{Code Ocean} are trying to address the reproducibility problem by providing a reproducibility stamp for scholarly publications. However, a larger integration between data, code, information and researchers is necessary.
\subsection{Can the peer review process ensure quality?}
Finally, the last problem Quantinar tries to address concerns the peer review process. The peer review process should ensure the quality of the research produced by top journals; however, its necessity and its effect on scientific research have been discussed and criticized for decades \cite{Ziman1968, Spier:2002, Rowland:2002, Ware:2011}. The goal of this paragraph is not to give another extensive review, but a short summary of the literature. \citet{Ross-Hellauer:2017} gives a good definition of the generic peer review process as the formal quality assurance mechanism whereby scholarly manuscripts (e.g. journal articles, books, grant applications and conference papers) are made subject to the scrutiny of others, whose feedback and judgements are then used to improve works and make final decisions regarding selection (for publication, grant allocation or speaking time). Its function is twofold: evaluating the validity, and assessing the innovation and impact, of the submitted work.
The peer review process is used across various disciplines and it is widely agreed that it contributes to maintaining the overall quality of the scholarly literature \cite{Rowland:2002}, in particular in the medical sciences \cite{Lock:1985}. Nevertheless, multiple surveys have identified a widespread belief that the current model is sub-optimal \cite{ALPSP:1999, Ware:2008}, a belief fed by the many criticisms of the peer review process. \citet{Ross-Hellauer:2017} distinguishes six categories among those criticisms:
\begin{itemize}
\item the unreliability and inconsistency of the reviews, which is inherent to human judgement;
\item the delay between submission and publication, which slows down research progress, and the expenses associated with the peer review process;
\item the lack of accountability of the involved agents (authors, editors and reviewers) and the risks of subversion introduced by the anonymity of the reviewers (in particular for single-blinded reviews);
\item social biases (gender, nationality, institutional affiliation, language and discipline) and publication biases (preference for complexity over simplicity, conservatism against innovative methods, preference for positive results over negative or neutral ones, which leads to $p$-hacking or HARKing as defined in Section \ref{issues_academics});
\item the lack of incentives for reviewers;
\item and finally the wastefulness: the "black-box" nature of the process hides discussions between reviewers and authors that could benefit younger and future researchers.
\end{itemize}
Moreover, \citet{Heckmanetal:2020} show that the top journals in Economics fail to ensure a higher quality of their publications. By comparing the cumulative citation counts (measured as of 2018) of articles published in the top5 journals with those published in 25 non-top5 journals over the ten-year period 2000-2010, the authors argue that non-top5 journals can produce as many, if not more, influential articles than the top5, concluding that whether an article is published in the top5 or not is a poor predictor of the article's actual quality in the Economics literature. Finally, \citet{Ellison:2011} argues that with the development of the Internet the necessity of peer review has lessened for high-status authors, observing a decline in publications by economists in top-ranked university departments between the early 1990s and 2000s.
Indeed, with the development of the Internet and in response to those criticisms, multiple solutions have been suggested for disseminating research as part of the Open Science movement and in opposition to the traditional science publication process. \citet{vicente:2018} identifies the core elements of Open Science, which are the transparency, accessibility, sharing and collaborative development of knowledge, and defines Open Science as "transparent and accessible knowledge that is shared and developed through collaborative networks". Platforms such as \href{https://arxiv.org/}{arXiv} or \href{https://www.ssrn.com/}{SSRN} gather pre-print versions of scientific articles. \href{https://www.academia.edu/}{Academia} and \href{https://www.researchgate.net/}{ResearchGate} are community-based platforms that act as social networks for researchers, where they can upload their articles to gain visibility, and connect with other researchers to communicate and increase their network. On top, researchers can easily share their code on the \href{https://cran.r-project.org/}{CRAN} network as R packages, or simply as a repository on \href{https://gitlab.com/}{GitLab} or \href{https://github.com/}{GitHub} in any language, which considerably helps the reproducibility of the associated research. Finally, communities such as \href{https://huggingface.co/}{Hugging Face} or \href{https://www.kaggle.com/}{Kaggle} allow users to collaborate on common tasks by sharing pretrained models, code or datasets in the machine learning universe.
Nevertheless, the previously cited platforms have a centralized infrastructure, usually controlled by a private institution whose interests do not necessarily align with the community it represents. On top, the data storage is often centralized via a cloud solution such as AWS or Google Cloud, and the institution can revoke access at any time. Finally, Open Science cannot free itself from the need for quality control of information via the peer review process. While the literature provides multiple proposals around open peer review \cite{Ross-Hellauer:2017} (see Section \ref{sec:opr}), and many scholarly journals nowadays employ versions of open peer review practice, including BMJ, BMC, Royal Society Open Science, Nature Communications and the PLOS journals (\citeauthor{PLOS}), there is no solution that engages a scientific community in an open manner where the researchers and their ideas are at the center.
As outlined in the previous sections, independent solutions addressing specific issues exist. Our goal is to integrate those ideas into a single platform, namely Quantinar, which should set a new standard for scholarly publications thanks to a vertical integration of article, code and data that allows research reproducibility to be verified, a fair and transparent open peer review process, and full control for researchers over their publications.
\section{What is Quantinar?}
\href{https://quantinar.com/}{Quantinar} is a peer-to-peer (P2P) platform that strengthens research collaboration and reproducibility in different areas like Fintech, Blockchain, Machine Learning, Explainable AI, Data Science, Digital Economy, Cryptocurrency and Maths \& Stats. Its aim is to provide a better integration of scholarly articles, the studied data and the code of the implemented analysis to ensure the reproducibility of the published results. The conceptual architecture of Quantinar is called \textit{C5}, which stands for \textit{Creation, Content, Consumption, Coins, and Chain}. Blockchain technology (\textit{chain}) and smart contracts via a decentralized autonomous organisation (DAO) with a tokenized ecosystem (\textit{coins}) can be used today to promote Open Science, transparency and accessibility of research, and the collaborative development of knowledge \cite{tenorioetal:2019}. In this section, we present the features, applications, infrastructure and technology of Quantinar.
Quantinar is organized as a Decentralized Autonomous Organization (DAO), an online community that jointly controls the organisation's funds to pursue common goals (\href{https://ethereum.org/en/whitepaper/}{Ethereum's white paper}, \href{https://blog.ethereum.org/2014/05/06/daos-dacs-das-and-more-an-incomplete-terminology-guide/}{2014 article} by Vitalik Buterin). As described in \ref{sec:reputation}, the Quantinar DAO will be a P2P platform for knowledge sharing with reputation-based governance, meaning that only active contributors in the academic environment will be able to control the updates of the platform and the management of its funds. Quantinar's main goal is to enforce scientific research reproducibility by creating a new digital publication platform that requires more input from scholars, namely access to the code, data, results and extra information provided by courselets (see Section \ref{sec:content}). Moreover, the peer-review process described in \ref{sec:opr} ensures the quality control needed by an academic journal. On top of that, Quantinar plans to become an integrated solution for presentation, data loading and exploration, code execution and computing power. By creating an integrated research environment, the platform will also offer a marketplace for Datalets, Models or Quantlets (see Section \ref{sec:content}) that aims to connect the academic environment to industry. This will generate the revenue needed for the growth of the platform and for other causes, like funding research that the community deems necessary. The governance process and tokenomics research of Quantinar will be described in a separate publication. At the time of writing, Quantinar (\href{https://www.quantinar.com/}{quantinar.com}) is a website that provides the following:
\subsection{Content}\label{sec:content}
The main content on Quantinar is a "courselet". A courselet is a scientific research study in the form of slides with a presentation video and the associated PDF file of the slides for reading. If applicable, it must be accompanied by a Quantlet and a Datalet, the associated implementation and data. Any user of Quantinar can create a courselet, which will have one of three statuses: unverified, verified and peer-reviewed. The unverified status is the default status obtained at the time of publication on Quantinar. The verified status is given if the courselet contains the slides with links to the verified Quantlet and Datalet, ensuring that the research results can be reproduced. The peer-reviewed status can only be obtained after passing the open peer review process defined in Section \ref{sec:opr}. Finally, a simplified pre-publication review will be used to verify that the uploaded content complies with the platform's rules, using e.g. \includegraphics[scale=0.05]{logo/quantlet.png} \href{https://github.com/QuantLet/AOBDL_code}{AOBDL} to avoid offensive content.
The choice of the presentation video format for courselet publication is not insignificant. Indeed, it allows a faster sharing of research studies compared to scholarly articles and puts the emphasis on scientific ideas and the generation of results, not formal writing. On top, it does not replace scholarly articles, which can always be referred to in the courselet, but rather motivates reactions and direct communication between Quantinar community members via our discussion channels (see Section \ref{sec:community}).
Finally, the authors keep their copyright and full ownership of their creation, as each courselet is mapped to a non-fungible token (NFT) (see Section \ref{sec:nft}); they grant an open-access licence instead.
\subsection{Features}
Quantinar's features are designed to benefit the scientific community as a whole. The main planned or already implemented features are summarized in Table \ref{table:feature}. In particular, Quantinar answers the needs of three main categories of users. Through open access to courselets and certification, students and professionals can acquire the skills necessary for their careers. By offering a classroom and the possibility to reuse other users' courselets, universities and professors can attract more students and gain online visibility. Finally, researchers increase their reputation by creating transparent and reproducible research, reviewing other users' publications and sharing knowledge via simplified publication and communication channels.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ccc}
\hline\hline
Feature's Type & Feature & Status \\
\hline
\multirow{6}{*}{Content creation } & Upload Courselet & Done \\
& Link QuantLet & Backlog \\
& Link DataLet & Backlog \\
& Make a course from multiple Courselet & Done \\
& Make a classroom & Backlog \\
& Start a blog & Backlog \\
& & \\
\multirow{8}{*}{Content consumption } & Explore Courselets & Done \\
& Explore QuantLets & Backlog \\
& Explore DataLets & Backlog \\
& Read Courselet slides & Done \\
& Watch Courselet video & Done \\
& Obtain a course certificate & In Progress \\
& Experiment with code and data & Backlog \\
& Read blogs & Done\\
& & \\
\multirow{4}{*}{Community} & Comment Courselet & Backlog \\
& Likes and other reaction & Backlog \\
& Discuss on Discord & In progress \\
& ... & \\
& & \\
\multirow{3}{*}{Open peer review} & Call for reviews & Backlog \\
& On chain review process & Backlog \\
& ... & \\
& & \\
\hline\hline
\end{tabular}
\caption{Quantinar's features}
\label{table:feature}
\end{center}
\end{table}
\subsection{Applications}\label{sec:applications}
Quantinar's purpose is to become a platform that can be used by anyone who works in the field of data science, namely \textit{students, teachers and researchers}.
It can be seen as a \textit{student platform}, because students can easily understand individual research problems in the form of courselets that offer slides and presentation videos accompanied by code examples already implemented by top researchers. Moreover, students can enroll in courses that tackle a wider range of subjects and get a certification after passing a series of tests to verify their knowledge. Running algorithms in the cloud, exploring datasets and models, and sharing their work will all be integrated under the same platform to make studying interactive and accessible.
Quantinar is a \textit{teacher platform} as much as it is a student one. Teachers can create courselet \textit{flowers}, which \textit{link} other courselets to their own courses in order to compose more complex and more complete teaching environments. Apart from that, teachers can also earn coins, reputation and citations when others use their courselets, a feature thoroughly described in \ref{sec:reputation}.
For \textit{researchers}, Quantinar delivers the ability to publish courselets that represent their research articles and projects. By doing so, they gain coins, reputation and citations when other people use their research (see Section \ref{sec:reputation}). Quality is ensured by the P2P review process described in \ref{sec:opr}. By publishing their work in the form of courselets, researchers have to publish a video presentation, code and data (when possible) on top of the article itself, which strengthens the reproducibility of the results.
\subsection{Case Study: Prosumer}
Quantinar aims to be a prosumer platform. This means that students can become teachers by creating courselets of their own. Researchers are also encouraged to post their preliminary results and get feedback from the community while also contributing to its growth. In \autoref{img:courselet-pricing-kernelts} we show an example of a courselet that represents work done by one of the authors of this paper.
\begin{figure}[!ht]
\centering
\includegraphics[height=7cm, keepaspectratio]{images/courselet_example.png}
\caption{Courselet: Pricing Kernels}
\label{img:courselet-pricing-kernelts}
\end{figure}
\subsection{Case Study: Extending a course}
As briefly mentioned in Section \ref{sec:applications}, another goal of Quantinar is to become a platform for teachers. Teachers can compose courses using both their own content and courselets already posted on the platform by other prosumers. As we can see in Figure \ref{img:course-deda}, the courselet created in Figure \ref{img:courselet-pricing-kernelts} was made available in the Statistics of Financial Markets (SFM) course. Of course, the other courselets available in the SFM course can be further linked to develop courses that better suit other teachers' visions.
\begin{figure}[!ht]
\centering
\includegraphics[height=12cm, keepaspectratio]{images/course_example.png}
\caption{Course: Digital Economy and Decision Analytics}
\label{img:course-deda}
\end{figure}
The MS turn-off is the most reliable feature for age-dating a star
cluster. Nevertheless for very young clusters, with ages of tenths to
few Myr, the identification of the turn-off is usually hampered by the
paucity of massive stars.
We propose to take a different point of view, focusing on the turn-on (TOn),
the locus in the color-magnitude diagram (CMD) where the pre-main sequence (PMS)
joins the MS. Although the importance of the
TOn has been already emphasized in several papers (e.g.
\citealt{Stauffer80}, \citealt{Belikov98}, \citealt{Baume03},
\citealt{Naylor09}), its application to dating extragalactic star
forming regions is a new proposition.
In analogy with the turn-off, the TOn properties are directly related
to the age of the stellar population, but with evolutionary times much
shorter than the corresponding MS times. In fact, from simple stellar
evolution arguments, the age of a cluster is equal to the time spent
in the PMS phase by its most massive star still in the PMS phase. By
definition, this star is at the TOn. Hence, when the intrinsic
luminosity of the TOn is detected, it is straightforward to associate
it to the age of the cluster.
In the first part of this letter we describe how the intrinsic
properties of the TOn can be used as a clock. In the second part we
present a new method to apply TOn related properties to date
extragalactic systems. Finally we apply this method to the largest
extragalactic star forming region, NGC346, in the Small Magellanic
Cloud (SMC).
\section{The PMS Turn-On}
The potential strength of the TOn is apparent from the morphology of
isochrones taking both the PMS and MS phases into
account. Fig.\ref{peaks}(a) shows 5 isochrones with metallicity
$Z=0.004$, obtained by combining the Pisa PMS tracks
(\citealt{Cignoni09}) with the Padua evolutionary tracks
(\citealt{Fagotto94}) to cover the entire mass range
$0.45-120\,M_{\odot}$. For all ages younger than 15 Myr the isochrone
portion just before the zero age MS (ZAMS) has a hook and then is
significantly flatter than the ZAMS. The TOn is at the vertex of the
hook, quite easy to recognize.
To test if/how the TOn can be useful as a cosmo-chronometer, we
simulated synthetic simple stellar populations (SSPs), using the
tracks quoted above. As an example, we describe the case with
metallicity $Z=0.004$, burst duration 1 Myr, Salpeter's Initial Mass
Function (IMF), and no binaries. To guarantee statistically
significant tests, extensive theoretical simulations have been
performed with a number of synthetic stars up to $1\times 10^6$.
\begin{figure*}[t!]
\centering\includegraphics[width=6cm]{iso3.eps}
\centering\includegraphics[width=6cm]{prova30.eps}\\
\centering\includegraphics[width=6cm]{prova15.eps}
\centering\includegraphics[width=6cm]{prova10.eps}\\
\centering\includegraphics[width=6cm]{prova5.eps}
\centering\includegraphics[width=6cm]{prova2.eps}
\caption{Panel (a): combined PMS and MS isochrones for the labeled
ages. The TOn is at the discontinuity where the PMS joins the MS
(e.g. at $(V-I)_0=-0.15$ and $M_{V}=0.85$ for the 2 Myr case). The
numbers on the left give the mass of the TOn star of each
isochrone. Panels (b) to (f): normalized LFs for synthetic SSPs of
the labeled age (see text for details).}
\label{peaks}
\end{figure*}
Panels (b)-(f) in Fig.\ref{peaks} show the luminosity functions (LFs),
binned in 0.2 magnitude bins and scaled to the total number of stars,
of synthetic SSPs (open histograms) corresponding to the isochrones of
panel (a). A reference synthetic zero age population, that is
artificially built without\footnote{i.e. stars are born directly on
the ZAMS.} PMS stars (grey filled histogram), is also shown. In the
LF without PMS, the only notable feature is a mild peak around
$M_{V}\approx 2$, a consequence of an inflection in the derivative of
the mass-$M_{V}$ relation in MS stellar models around
$m=2\,M_{\odot}$. By contrast, when the evolution starts from the PMS
phase, the corresponding LFs develop an additional strong peak
followed by a dip. Peak and dip reflect respectively the steep
dependence of stellar mass on $M_{V}$ near the TOn (see also the
discussion in \citealt{piskunov}) and the following flattening below
the TOn (caused by the short evolutionary timescale of the PMS phase
compared to the MS). After the dip, the shape of the LF mimics the
IMF.
The LFs in Fig.\ref{peaks} indicate the importance of both features,
peak and dip, to infer the cluster age: the older the age, the fainter
is the LF TOn peak and the corresponding dip. On the other hand, for
the explored range of ages the magnitude of the MS peak is fairly
constant ($M_{V}\approx 2$), since it is locked to the (much longer) MS
evolutionary times. Through a polynomial fit to the models, theory
provides a useful relation between age ($\tau$) of the SSP and the
magnitude $M_{V}$ of the TOn in the range 2-100 Myr:
\begin{eqnarray}
\displaystyle
\tau (Myr)=&\sum_{j=0,5} a_{j}\times (M_{V})^j& \label{eq1}
\end{eqnarray}
In turn:
\begin{eqnarray}
\displaystyle
M_{V}=&\sum_{j=0,5} b_{j}\times \{\log [\tau(Myr)]\}^j&\label{eq2}
\end{eqnarray}
\begin{table}[h!]
\begin{center}
\caption{Coefficients for Eq. 1 and Eq. 2.}
\label{tab}
\vspace{0.2cm}
\begin{tabular}{c|c|c} \hline
\multicolumn{1}{c|}{j}&\multicolumn{1}{|c}{$a_{j}$}&\multicolumn{1}{|c}{$b_{j}$}\\ \hline
0&8.144&-2.595\\ \hline
1&-17.620&22.021\\ \hline
2&16.120&-48.826\\ \hline
3&-5.005&51.418\\ \hline
4&0.6908&-23.420\\ \hline
5&-0.03218&3.904\\ \hline
\end{tabular}
\end{center}
\end{table}
Once the bin width of the LF is chosen, Eq. (\ref{eq1}) (whose
coefficients are given in Table \ref{tab}) also gives the intrinsic
uncertainty on the age. Assuming 0.2 mag wide bins, the minimum
uncertainty is about 0.6 Myr at 3 Myr, 1.3 Myr at 20 Myr, 2.5 Myr at
30 Myr and 6 Myr at 50 Myr. The TOn formula is deliberately limited to
populations older than 2 Myr, since for younger ages the current PMS
models are still extremely uncertain (see \citealt[][ for a
discussion]{baraffe02}).
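As a practical illustration, the short Python sketch below evaluates
Eqs. (\ref{eq1}) and (\ref{eq2}) with the coefficients of Table \ref{tab}
and propagates a 0.2 mag bin width into an age uncertainty. It is only a
convenience wrapper around the fits above, so any number it returns
inherits the 2--100 Myr validity range and the uncertainties just
discussed.
\begin{verbatim}
import numpy as np

# Coefficients of Eqs. (1) and (2), transcribed from Table 1.
a = [8.144, -17.620, 16.120, -5.005, 0.6908, -0.03218]  # tau(M_V)
b = [-2.595, 22.021, -48.826, 51.418, -23.420, 3.904]   # M_V(log tau)

def age_from_ton(m_v):
    """Cluster age in Myr from the absolute V magnitude of the TOn (Eq. 1)."""
    return sum(a_j * m_v**j for j, a_j in enumerate(a))

def ton_from_age(tau_myr):
    """Absolute V magnitude of the TOn from the age in Myr (Eq. 2)."""
    log_tau = np.log10(tau_myr)
    return sum(b_j * log_tau**j for j, b_j in enumerate(b))

# Example: age uncertainty implied by a 0.2 mag bin at a 20 Myr TOn.
m_ton = ton_from_age(20.0)
d_age = age_from_ton(m_ton + 0.1) - age_from_ton(m_ton - 0.1)
print(m_ton, age_from_ton(m_ton), d_age)
\end{verbatim}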
Although attractive, the TOn dating method should be used with caution. When we
compare the observed and theoretical LFs it is important to evaluate
the following uncertainties:
\emph{Incompleteness and photometric errors.} To test real conditions,
the synthetic SSPs have been degraded assuming reddening and distance
of the SMC, $E(B-V)=0.08$ and $(m-M)_0=18.9$, and photometric errors
and incompleteness as derived by \cite{Sabbi07} from HST images of
NGC346. Fig.\ref{uncer}(a) shows the degraded LF for populations of
5, 30, 40, 50 Myr: at the distance of the SMC, the excellent quality of the
HST/ACS photometry guarantees perfect detectability of the peak/dip
feature up to about 50 Myr. Beyond this age the TOn becomes too faint to
be detected.
\emph{Poisson fluctuations.} In order to check how many stars are
necessary to safely identify the TOn, we progressively reduced the
number of synthetic stars belonging to the 5 Myr population, until the
TOn peak was hidden by the Poisson fluctuations. This experiment
indicates that about 50 stars brighter than $M_{V}\approx 5$
(corresponding to a cluster total mass of $\approx\,500\,M_{\odot}$)
are sufficient to identify the TOn with a significance of $2\,\sigma$.
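A toy version of this counting experiment, shown below, draws Poisson
realizations of a binned LF with a single TOn bin and records how often
that bin exceeds the local background at the $2\,\sigma$ level; the flat
background and the assumed peak fraction are illustrative placeholders
rather than the synthetic SSPs actually used here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def detection_fraction(n_stars, peak_frac=0.25, n_bins=10, n_trials=2000):
    """Fraction of Poisson realizations with a >2 sigma TOn peak.

    A flat LF over n_bins is assumed, with a fraction peak_frac of the
    stars concentrated in one (TOn) bin; both choices are illustrative.
    """
    expected = np.full(n_bins, n_stars * (1.0 - peak_frac) / n_bins)
    expected[n_bins // 2] += peak_frac * n_stars
    hits = 0
    for _ in range(n_trials):
        counts = rng.poisson(expected)
        bg = np.median(counts)              # crude local background level
        if counts[n_bins // 2] > bg + 2.0 * np.sqrt(max(bg, 1.0)):
            hits += 1
    return hits / n_trials

print(detection_fraction(50))   # ~50 stars above the magnitude limit
\end{verbatim}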
\emph{IMF, binaries, star formation duration.} In Fig.\ref{uncer}(b)
the synthetic 5 Myr SSP has been modeled changing the binary fraction
(50\% of simulated stars have now an unresolved companion, randomly
extracted from the same IMF as the primary) and the IMF exponent
(1.5-3). None of these changes alters significantly the TOn position:
the case with binaries is almost identical to the reference case,
while the adoption of different IMFs modifies only the shape of the LF
before and after the TOn peak.
\begin{figure}[]
\centering\includegraphics[width=7cm]{compl2.eps}
\centering\includegraphics[width=7cm]{5myr_imf_bin.eps}
\centering\includegraphics[width=7cm]{10myr_durata.eps}
\caption{Panel (a): Effect of incompleteness and photometric errors on
the LFs for the labeled ages. All LFs are normalized to the total
number of stars. Panel (b): Effect of different IMFs and binary
fractions on the 5 Myr LF. The shadowed histogram is the same as the
reference 5 Myr case of panel (a). Panel (c): normalized LFs for a
10 Myr cluster with the labeled duration of prolonged star formation
activity.}
\label{uncer}
\end{figure}
In Fig. \ref{uncer}(c) we have simulated a 10 Myr old cluster with a
star formation activity that lasts 1, 3 and 5 Myr (see
e.g. \citealt[][]{Palla00}). As expected, with a prolonged SF activity
the LF peak grows brighter and broader. In this case, the peak
magnitude provides information on the average age of the population.
\emph{Reddening.} Typical of star forming regions, the presence of
highly obscuring material (foreground as well as local) can
significantly dim and blur a TOn. Thus, reliable reddening estimates
are fundamental to obtain unbiased ages. On the other hand, there are
several regions with modest foreground and intrinsic
extinction. NGC346 is one of them with foreground $E(B-V)\sim 0.08$
and internal $\lesssim\,0.1$ mag (as deduced from the upper main
sequence), which contributes to the final uncertainty by $\approx$ 1-2
Myr.
\begin{figure*}[!t]
\centering \includegraphics[width=5cm]{ms5.eps}
\centering\includegraphics[width=5cm]{map1.eps}
\centering\includegraphics[width=5cm]{map2.eps}\\
\centering\includegraphics[width=5cm]{map3.eps}
\centering\includegraphics[width=5cm]{map4.eps}
\centering\includegraphics[width=5cm]{map5.eps}
\caption{The top-left panel shows the CMD selection (see text) of bona-fide
MS stars (black dots). Each of the other panels shows the spatial
distribution of these MS stars for the labeled range of magnitudes. }
\label{ms_slices}
\end{figure*}
\section{Identification of the TOn in extragalactic star forming regions: the case of NGC346}
Our goal is to study how star formation develops in extragalactic star
forming regions. In the CMD of these regions, the MS is often
contaminated by young fore/background stars and only in a few cases
membership information is available for a safe decontamination. The
presence of the field MS partially fills the LF dip, and appreciably
lowers the significance of the TOn in the LF.
To get around the problem of contamination we propose to combine the
fact that, by definition, no sub-cluster member on the MS can be
fainter than the TOn, \emph{together} with a careful analysis of the
spatial distribution of the stars in the region. The procedure we
suggest has 3 main steps: 1) Selection of bona-fide MS stars (both
members and non-members of the clusters) of all magnitudes. In order
to take into account photometric errors and reddening, we consider all
the stars bluer than a ridge line appropriately redder than the
theoretical ZAMS. 2) Division of the selected MS stars into bins of
progressively fainter magnitude. For each bin of a given magnitude,
the spatial distribution of stars is examined to assess whether the
sub-clusters are still visible. When a sub-cluster is clearly
identified up to a given magnitude, but it disappears in the fainter
maps, we identify the magnitude of its TOn. This is because we have
reached the magnitude where the sub-cluster members have not yet
reached the MS, and MS stars fainter than this limit do not belong to
that sub-cluster. In order to evaluate the statistical significance of
the sub-cluster disappearance from the maps, for each of the three
sub-clusters we built a stellar density radial profile and we
evaluated the contribution from different magnitude bins. For field
stars we expect a flat stellar density profile, while a decrease in
stellar density is the typical signature of a sub-cluster. The range
of magnitudes where the transition between the two profiles occurs
identifies the apparent magnitude of the TOn. 3) Age determination of
the sub-clusters. After applying reddening and distance modulus
corrections, we use Eq. (\ref{eq1}) to translate the sub-cluster TOn
magnitude into an age. The final uncertainty is fixed by the bin
width, which is, in turn, chosen to obtain an acceptable number of
counts per bin.
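A minimal sketch of step 2 is given below: stars of a given magnitude
range are binned into annuli of equal area around a candidate sub-cluster
center. The 150-pixel scale mirrors the inner radius adopted for the
profiles discussed below, while the coordinate arrays stand in for the
actual catalog.
\begin{verbatim}
import numpy as np

def radial_profile(x, y, center, r0=150.0, n_annuli=6):
    """Surface density of stars in annuli of equal area around `center`.

    The k-th annulus has outer radius sqrt(k)*r0 pixels, so every
    annulus covers the same area pi*r0**2; r0 = 150 px follows the
    convention used for NGC346 below, but is otherwise arbitrary.
    """
    r = np.hypot(np.asarray(x) - center[0], np.asarray(y) - center[1])
    edges = np.sqrt(np.arange(n_annuli + 1)) * r0   # equal-area edges
    counts, _ = np.histogram(r, bins=edges)
    return counts / (np.pi * r0**2)                 # stars per pixel^2

# A flat profile signals field stars; a declining one flags a sub-cluster.
\end{verbatim}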
To test the strength of our method, we applied it to the star forming
region NGC346 in the SMC, whose images acquired by the HST Advanced
Camera for Surveys have revealed a wealth of young sub-clusters
containing several PMS stars \citep{Sabbi07}. This region provides
excellent conditions to test our method because its recent strong
activity supplies an outstanding sample of PMS stars
\citep{Nota06}. The complexity of its structure, with several non
coeval sub-clusters, requires a method able to trace the temporal
sequence of events leading to the present configuration. In this
letter we focus on three of the richest sub-clusters, SC-1, SC-13 and
SC-16, as defined by \cite{Sabbi07}. Their location in the region can
be seen in their Fig.8.
Fig.\ref{ms_slices} summarizes our analysis of NGC346. To select only
MS stars we used a ridge line 0.05 mag redder than the theoretical
$Z=0.004$ ZAMS. We considered all the stars bluer than this line as
bona-fide MS stars, and divided them into six bins of different
size. Bona-fide MS stars are drawn in black in the top-left panel of
Fig.\ref{ms_slices}, where all the other stars are marked in grey. The
other five panels show the spatial distribution of the bona-fide MS
stars for progressively fainter bins of magnitude. Analyzing these
maps, we find that the central sub-cluster SC-1 is clearly visible down
to $V=21$, while no obvious spatial structure resembling SC-1 is
observable in the fainter maps. The sub-cluster SC-16, on the
contrary, is still clearly visible down to $V=23$. Beyond this limit,
also SC-16 vanishes. The small sub-cluster SC-13 is recognizable at
least to $V=21$. Notice that the stellar agglomeration appearing at
$V=22$ corresponds to the 4-5 Gyr turn-off stars of the older cluster
BS90 (see e.g. \citealt{Sabbi07}) in the foreground of NGC346.
In order to test the significance of the sub-clusters we examined the
radial density profile of the three sub-clusters as a function of
magnitude (Fig. \ref{res} left panels). The profiles are calculated
using annuli of equal area centered on the highest density peak. The
inner radius is fixed at 150 pixels. To exclude any completeness issue
in these crowded regions, we also show in the middle panels of Figure
\ref{res} the completeness factors obtained from the extensive
artificial star tests performed by \cite{Sabbi07}, for all stars
within 100 pixels from the centers of SC-16, SC-1 and SC-13. To
further test the final ages, theoretical isochrones are over-imposed
on the CMDs (right panels of Fig.\ref{res}).
\subsection{SC-16}
The first row in Fig.\ref{res} shows the MS radial profiles (left
panel), the completeness curves (middle panel) and the CMD (right
panel) for SC-16. The compact morphology of this sub-cluster is
evident: the radial profile of MS stars brighter than $V=23$ rapidly
drops with distance from the center, while the distribution of less
luminous MS stars is constant. This confirms that the TOn magnitude is
between 22 and 23, thereby constraining the first generation of stars
to have formed between 12.5 and 18 Myr ago. The upper right panel of
Fig.\ref{res} shows the corresponding isochrones superimposed on the
SC-16 CMD (all stars within 150 pixels from the center): it is
reassuring to see that below the 18 Myr TOn, the sub-cluster stars
move away from the MS, confirming that the MS deficit in the map is a
genuine evolutionary effect. Concerning the completeness, although the
sub-cluster region loses stars faster than the field, for magnitudes
$V<23$ this effect is not severe (compared to the field, less than
25\% of stars are lost). By comparing our age estimate to independent
determinations we find good agreement both with \cite{Sabbi07} ($15\pm
2.5$ Myr) and with \cite{Hennekemper08} (5-15 Myr).
\subsection{SC-1}
The case of SC-1 (second row in Fig. \ref{res}) is quite
different. Here the extended spatial morphology produces a gradual
decrease in the MS radial profile rather than an abrupt drop at
$V\approx 21$. Radial profiles for fainter magnitudes are quite
flat. Assuming a conservative bin uncertainty of 1.5 mag on the TOn
magnitude, the star formation onset is expected between 3.5 and 6.5
Myr ago, in agreement with the estimate by \cite{Sabbi07} ($3\pm 1$
Myr). Both the 3.5 and 6.5 Myr isochrones are overlaid on the SC-1
CMD. As suggested by radial profiles at $V=21$ there is a clear
signature of a PMS TOn, with a large sample of stars still in the PMS
phase. Our 6.5 Myr isochrone fits the upper MS morphology, including
the TOn, very well and fits the PMS blue envelope as well. The few MS
stars below the TOn are likely contamination of the SMC field and/or
stars of the old cluster BS90 (the presence of 3 clump stars supports
this hypothesis). The absence of any star on the MS around $V\sim
21.5$ and the morphology of the CMD for the vast majority of the
stars, which seems to follow the isochrone with PMS included, further
supports our conclusion. The measured completeness (middle panel) is
better than 75\% and cannot account for such a reduction.
\subsection{SC-13}
The third row of Fig. \ref{res} shows the sub-cluster SC-13. As for
SC-1, the MS density profiles of SC-13 decline down to
$V\approx 21$. However, due to the poor statistics, we can only state that
the cluster formation started sometime in the last 6 Myr. There is a
further clue suggesting that SC-13 is actually younger than 6 Myr: a
close inspection of the CMD (right panel) reveals that most of the
probable intermediate-mass PMS stars are aligned with the 3 Myr
isochrone. In agreement with this finding, most of the low-mass PMS
stars are redder than the 3 Myr isochrone. This age is also found by
\cite{Sabbi07} ($3\pm 1$ Myr) and by \cite{Hennekemper08} (0.5-2.5
Myr).
\begin{figure*}[!t]
\centering \includegraphics[width=5cm]{sc16_x.eps}
\centering \includegraphics[width=5cm]{COMPL_SC16.eps}
\centering \includegraphics[width=5cm]{sc16_cmd.eps}\\
\centering \includegraphics[width=5cm]{sc1_x.eps}
\centering \includegraphics[width=5cm]{COMPL_SC1.eps}
\centering \includegraphics[width=5cm]{sc1_cmd.eps}\\
\centering \includegraphics[width=5cm]{sc13_x.eps}
\centering \includegraphics[width=5cm]{COMPL_SC13.eps}
\centering \includegraphics[width=5cm]{sc13_cmd.eps}
\caption{Left-hand panels: radial distribution of MS stars near the
centers of SC-16 (top), SC-1 (middle) and SC-13 (bottom). Different
curves correspond to the indicated ranges of magnitude. The abscissa
is in units of $n$, where $n$ is related to the distance $d$ from
the sub-cluster center by
$d=\sqrt{n}\times\,150\,\mathrm{pixels}$. Middle panels:
Completeness for the labeled sub-clusters (dashed line) and field
stars (solid line). Right-hand panels: CMDs for all stars within 150
pixels from the center of SC-16 (top panel), SC-1 (middle panel) and
SC-13 (bottom panel). Red dots indicate sub-cluster stars while
black dots stand for the entire data sample. Isochrones of the
indicated ages are also shown. }
\label{res}
\end{figure*}
\section{Discussion and Conclusions}
The evolution of young star forming regions in their earliest stages
is still poorly known. For very young clusters, with ages from a few
tenths of a Myr to a few Myr, the identification of the turn-off is usually hampered by
the paucity of massive stars. However when a cluster is sufficiently
young to harbor PMS stars, the luminosity of its TOn provides a robust
indication of its age. Furthermore, while the PMS
evolutionary models still suffer from several limitations, the simple
comparison of TOn luminosities is a reliable measure of relative ages.
In the LF the TOn is a narrow peak followed by a dip. The TOn is very
sensitive to age: for a $\sim 30$ Myr old stellar population the
luminosity of the TOn changes by $\sim 0.08$ mag/Myr, but at $\sim 20$
Myr it already changes by $\sim 0.15$ mag/Myr, and by $\sim 0.33$
mag/Myr at 3 Myr.
To guarantee a safe TOn identification, it is important to select
targets with low or well known reddening. In the visual, this
currently excludes Galactic clusters like Westerlund 1-2, NGC3603,
Arches and Quintuplet, but good targets can be found in the solar
vicinity and in some nearby irregular galaxies like the Magellanic
Clouds and IC1613.
In this letter we have presented a new method to investigate how star
formation develops in complex extragalactic young clusters such as
NGC346 in the SMC.
Our method combines the analysis of the star clusters' stellar
density profiles with the notion that in a cluster no star on the MS
can be fainter than the cluster TOn. This approach has the advantage
of strongly reducing the uncertainties introduced by the contamination
to the CMD by young stars that do not belong to the cluster,
ultimately affecting the TOn detectability in both CMD and LF.
Clearly, there are limitations to the applicability of the method. The
bona-fide MS selection may include intruders such as PMS stars and stars
evolved off the MS, reducing the TOn visibility. The issue is
particularly thorny at ages larger than about 30 Myr, because the PMS
isochrones are close to the ZAMS (see Figure \ref{peaks}(a)). The
method relies on the assumption that the formation sites are
agglomerations of stars: if for some reason the newborn stars formed
in isolation or, drifting apart, were too diluted to be detected as
aggregates, the method would not be applicable. In this respect, the
study by \citet[][]{Pfalzner09}, exploiting the density-radius
relation to date nearby clusters younger than 20 Myr, is interesting
and reassuring.
We applied the TOn method to NGC346 in SMC, and we found that the
onset of the star formation occurred between 3.5 and 6.5 Myr ago in the
sub-cluster SC-1, about 3 Myr ago or less in the sub-cluster SC-13, and
between 12.5 and 18 Myr ago in SC-16, in good agreement with the results from the
literature.
Having established the effectiveness of our method, the next steps
will be to: 1) incorporate near-infrared photometry and use it to
study extinguished regions where measurements of the upper MS are
difficult to carry out; 2) undertake comparative studies of the
duration and spatial patterns of SF in Magellanic Cloud regions
ranging from the giant 30 Dor complex to the comparatively isolated
NGC602 cluster.
\section*{Acknowledgments}
We thank A. Bragaglia and S.N. Shore for useful suggestions. MC and MT
acknowledge financial support through contracts ASI-INAF-I/016/07/0
and PRIN-MIUR-2007JJC53X-001. Partial support for U.S. research in
program GO10248 was provided by NASA through a grant from the STScI,
which is operated by the AURA, Inc., under NASA contract NAS5-26555.
\section{Introduction}
\label{sec:Intro}
Parity-violating precision measurements have for many years provided
crucial low-energy tests of the Standard Model. Early efforts such as
the E122 experiment at SLAC \cite{Prescott1978tm, Prescott1979dh}
firmly established the SU(2)$\times$U(1) model as the theory of the
unified electroweak interactions. Modern-day experiments use parity
violation to probe physics beyond the Standard Model. One of the most
recent parity-violating measurements is the $Q_{\text{weak}} \,\,$ experiment at
Jefferson Lab \cite{Armstrong2012ps}, which aims to measure the
proton's weak charge to 4\% accuracy. With an initial analysis
of a subset of the data already reported \cite{Androic2013},
the analysis of the full data set is expected in the near future.
For the precision requirements of the $Q_{\text{weak}} \,\,$ experiment, the weak charge
of the proton, defined at tree level as
$Q_W^p = 1 - 4 \sin^2 \theta_W$,
must also include radiative corrections.
Including these corrections at the 1-loop level, the weak charge can be
written as \cite{Erler2003yk}
\begin{eqnarray}
Q_W^p &=& \left( 1 +\Delta\rho + \Delta_e \right)
\left( 1 - 4 \sin^2\theta_W(0) + \Delta_e^{'} \right)
+ \square_{WW} + \square_{ZZ} + \square_{\gamma Z}(0),
\label{eq:qwHO}
\end{eqnarray}
where $\sin^2\theta_W(0)$ is the weak mixing angle at zero momentum
transfer, and the electroweak vertex and neutral current correction
terms $\Delta \rho$, $\Delta_e$ and $\Delta_e'$ have been calculated
to the necessary levels of precision \cite{Erler2003yk}.
The weak box corrections $\square_{WW}$ and $\square_{ZZ}$ are
dominated by short-distance effects and can also be computed
perturbatively to the required accuracy.
On the other hand, the final term in Eq.~\eqref{eq:qwHO}, the $\gamma Z$
box contribution, depends on both short- and long-distance physics
and therefore requires nonperturbative input. Considerable attention
has been given to the analysis of this term, for both the vector
electron--axial vector hadron coupling to the $Z$, $\square_{\gamma Z}^A$
(which is relevant for atomic parity violation experiments)
\cite{Marciano1983, Marciano1984, Blunden2011rd, Blunden2012ty},
and the axial electron--vector hadron coupling, $\square_{\gamma Z}^V$
(which because of its strong energy dependence makes important
contributions to the $Q_{\text{weak}} \,\,$ experiment) \cite{Gorchtein2008px,
Sibirtsev2010zg, Rislow2010vi, Gorchtein2011mz, Hall2013hta}.
The most accurate technique to evaluate the latter is a dispersion
relation. While constraints from parton distribution functions
(PDFs) and recent parity-violating deep-inelastic scattering (PVDIS)
data \cite{Wang2013kkc, Wang2014bba} provide a systematic way of
reducing the errors on this correction \cite{Hall2013hta}, some
uncertainty remains about the model dependence of the low-$Q^2$
input.
The E08-011 electron--deuteron PVDIS experiment at Jefferson Lab
not only allowed an accurate determination of the $C_{2q}$
electron--quark effective weak couplings \cite{Wang2014bba}, but
also presented the first direct evidence for quark-hadron duality
in $\gamma Z$ interference structure functions, which was verified at
the (10--15)\% level for $Q^2$ down to $\approx 1$~GeV$^2$
\cite{Wang2013kkc}.
In general, quark-hadron duality refers to the similarity of
low-energy hadronic cross sections, averaged over resonances,
with asymptotic cross sections, calculated at the parton level
and extrapolated to the resonance region.
It is manifested in many different hadronic observables
\cite{Melnitchouk2005} and was first observed in deep-inelastic
scattering (DIS) by Bloom and Gilman \cite{Bloom1970, Bloom1971}.
Subsequent studies have quantified the validity of duality for
various spin-averaged and spin-dependent electromagnetic structure
functions, as well as in neutrino scattering and for different
targets \cite{Niculescu2000, Niculescu2000a, Airapetian2003,
Arrington2003nt, Wesselmann2007, Psaker2008, Malace2009, Malace2010},
establishing the phenomenon as a general feature of the strong
interaction.
Furthermore, recent analysis of moments of the free neutron
electromagnetic structure function \cite{Niculescu2015} has
demonstrated that duality in the lowest three neutron moments
is violated at a similar level ($\lesssim 10\%$) as in the proton
for $Q^2 \geqslant 1$~GeV$^2$ \cite{Niculescu2000, Niculescu2000a,
Malace2009}. This suggests that the isospin dependence of duality
and its violation is relatively weak. It is reasonable therefore
to expect that duality may also hold to a similar degree for the
$\gamma Z$ structure functions, which are related to the electromagnetic
structure functions by isospin rotations.
In this paper we discuss the extent to which quark-hadron duality in
$\gamma Z$ structure functions can provide additional constraints on the
$\square_{\gamma Z}^V$ corrections, and in particular the contributions
from low hadronic final state masses $W$ and $Q^2 \sim 1$~GeV$^2$.
In Sec.~\ref{sec:DualSM} we illustrate the realization of duality
in the moments of the proton and neutron electromagnetic structure
functions using empirical parametrizations of data in the resonance
and DIS regions down to $Q^2 = 1$~GeV$^2$.
Motivated by the approximate isospin independence of duality in
electromagnetic scattering from the nucleon, in Sec.~\ref{sec:ImplBgZ}
we explore the consequences of duality in the $\gamma Z$ structure functions
for the energy dependence of the $\square_{\gamma Z}^V$ correction, and
especially the limits on its overall uncertainty.
Finally, in Sec.~\ref{sec:Con} we summarize our findings and discuss
their implications for the analysis of the $Q_{\text{weak}} \,\,$ experiment as well as
future parity-violating experiments such as MOLLER at Jefferson Lab
\cite{MOLLER} and MESA at Mainz \cite{MESA}.
\section{Duality in electromagnetic structure functions}
\label{sec:DualSM}
Historically, the observation of duality in inclusive electron
scattering \cite{Bloom1970, Bloom1971} predates the development of
QCD and was initially formulated in the language of finite-energy
sum rules. Within QCD, duality was reinterpreted within the
operator product expansion through moments of structure functions
\cite{DeRujula1977}, with duality violations associated with matrix
elements of higher twist (HT) operators describing multi-parton physics.
The extent to which inclusive lepton--nucleon cross sections can be
described by incoherent scattering from individual partons through
leading twist (LT) PDFs can be quantified by studying the $Q^2$
dependence of the structure function moments.
At low $Q^2$, corrections to the LT results arise not only from
multi-parton processes, but also from kinematical target mass
corrections (TMCs), which, although $1/Q^2$ suppressed, arise from
LT operators.
To isolate the genuine duality-violating HT effects, one can consider
Nachtmann moments of structure functions \cite{Nachtmann1973}, which
are constructed to explicitly remove the effects of higher spin
operators and the resulting TMCs.
Specifically, the Nachtmann moments of the $F_1$ and $F_2$ structure
functions are defined as \cite{Nachtmann1974, Nachtmann_note}
\begin{eqnarray}
\mu_1^{(n)}(Q^2)
&=& \int_0^1 dx \, \frac{\xi^{n+1}}{x^3}
\left[ x F_1(x,Q^2) + \frac{1}{2}\rho^2 \eta_n F_2(x,Q^2)
\right],
\label{eq:mu1} \\
\mu_2^{(n)}(Q^2)
&=& \int_0^1 dx \, \frac{\xi^{n+1}}{x^3}\,
\rho^2 (1 + 3\eta_n) F_2(x,Q^2),
\label{eq:mu2}
\end{eqnarray}
where
\begin{equation}
\xi = \frac{2 x}{1 + \rho}
\label{eq:xi}
\end{equation}
is the Nachtmann scaling variable \cite{Nachtmann1974, Greenberg1971},
with $x = Q^2/(W^2 - M^2 + Q^2)$ the Bjorken scaling variable,
$\rho^2 = 1 + 4 M^2 x^2/Q^2$, and $M$ the nucleon mass.
The variable $\eta_n$ is given by
\begin{eqnarray}
\eta_n
&=& \frac{\rho-1}{\rho^2}
\left[ \frac{n + 1 - (\rho+1)(n + 2)}{(n+2)(n+3)} \right],
\label{eq:eta_n}
\end{eqnarray}
and vanishes in the $Q^2 \to \infty$ limit.
In that limit the moments $\mu_i^{(n)}$ approach the standard
Cornwall-Norton moments \cite{Cornwall1969},
\begin{eqnarray}
\mu_i^{(n)}(Q^2)
&\longrightarrow& M_i^{(n)}(Q^2)\,
=\, \int_0^1 dx\, x^{n-i}\, F_i(x,Q^2),\ \ \ \ i=1,2.
\label{eq:CNmom}
\end{eqnarray}
At finite $Q^2$, while the $\mu_2^{(n)}$ moments depend only on the
$F_2$ structure function, the $\mu_1^{(n)}$ moments have contributions
from both the $F_1$ and $F_2$ structure functions. Because the latter
contribution is proportional to $\eta_n$, it vanishes at large $Q^2$,
so that the $\mu_1^{(n)}$ moments are generally dominated by the
$F_1$ structure function at large $Q^2$.
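To make the definitions concrete, the sketch below evaluates the $n=2$
Nachtmann moments of Eqs.~(\ref{eq:mu1}) and (\ref{eq:mu2}) by direct
quadrature. The structure functions are supplied as user callables;
the valence-like inputs at the end are schematic placeholders (the
Christy-Bosted and ABM parametrizations are not reproduced here), and
the elastic contribution at $x=1$ would have to be added separately.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

M = 0.9383   # nucleon mass in GeV

def nachtmann_moments(F1, F2, Q2, n=2, x_min=1e-4):
    """n-th Nachtmann moments mu_1, mu_2 from the definitions above.

    F1(x, Q2) and F2(x, Q2) are user-supplied structure functions.
    For n >= 2 the xi**(n+1)/x**3 weight stays finite as x -> 0, so a
    small lower cutoff is harmless.  The elastic (x = 1) contribution
    must be added separately.
    """
    def integrand(x, which):
        rho = np.sqrt(1.0 + 4.0 * M**2 * x**2 / Q2)
        xi = 2.0 * x / (1.0 + rho)
        eta = (rho - 1.0) / rho**2 * (
            (n + 1.0 - (rho + 1.0) * (n + 2.0)) / ((n + 2.0) * (n + 3.0)))
        w = xi**(n + 1) / x**3
        if which == 1:
            return w * (x * F1(x, Q2) + 0.5 * rho**2 * eta * F2(x, Q2))
        return w * rho**2 * (1.0 + 3.0 * eta) * F2(x, Q2)

    mu1 = quad(integrand, x_min, 1.0, args=(1,))[0]
    mu2 = quad(integrand, x_min, 1.0, args=(2,))[0]
    return mu1, mu2

# Schematic valence-like placeholders, not a realistic fit:
F2 = lambda x, Q2: x**0.7 * (1.0 - x)**3
F1 = lambda x, Q2: F2(x, Q2) / (2.0 * x)   # Callan-Gross-like input
print(nachtmann_moments(F1, F2, Q2=2.0))
\end{verbatim}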
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.05\textwidth]{ggMoms.pdf}
\caption{The proton (left panels) and neutron (right panels)
electromagnetic $F_1^{\gamma \gamma}$ (top) and $F_2^{\gamma \gamma}$ (bottom)
structure function moments. The total Nachtmann moments
(black solid lines) include contributions from the
resonance ($W^2 \leqslant 6$~GeV$^2$, blue dot-dashed lines)
and DIS ($W^2 > 6$~GeV$^2$, green dotted lines) regions,
as well as the elastic contributions (gray dashed lines),
and are compared with the Cornwall-Norton moments of
the LT structure functions (red long-dashed lines).}
\label{fig:mugg}
\end{center}
\end{figure}
Duality in unpolarized electron--nucleon scattering has been studied
most extensively for the electromagnetic $F_2$ structure function
\cite{Niculescu2000, Niculescu2000a, Malace2009}, and to a lesser
extent for the $F_1$ (or longitudinal $F_L$) structure function
\cite{Melnitchouk2005, Monaghan2013}.
The latter is generally more difficult to access experimentally,
as it requires precise longitudinal--transverse separated cross
section measurements, or equivalently the $\sigma_L/\sigma_T$
cross section ratio.
In Fig.~\ref{fig:mugg} the workings of duality in the $n=2$
Nachtmann moments of the proton and neutron $F_1^{\gamma \gamma}$ and
$F_2^{\gamma \gamma}$ structure functions are illustrated over the range
$1 \leqslant Q^2 \leqslant 8$~GeV$^2$.
For the low-$W^2$ contributions, $W^2 \leqslant 6$~GeV$^2$, the
resonance-based fit to the electromagnetic structure function
data from Christy and Bosted \cite{Christy2010} is used.
For the DIS region at higher $W^2$ values, $W^2 > 6$~GeV$^2$,
this is supplemented by the ABM global QCD fit \cite{Alekhin2012}
to high-energy data, which includes LT, TMC and HT contributions.
Since LT evolution is logarithmic in $Q^2$, at large $Q^2$
the moments are predicted to become flat in $\ln Q^2$.
While the individual resonance and DIS region contributions,
as well as the elastic ($W=M$) component, are strongly $Q^2$
dependent in the region of low $Q^2$ shown in Fig.~\ref{fig:mugg},
remarkably their sum exhibits only very mild $Q^2$ dependence
down to $Q^2 \approx 1$~GeV$^2$.
This is the classic manifestation of duality observed by Bloom and
Gilman \cite{Bloom1970, Bloom1971}, in which the total empirical
moments resemble the LT contributions down to surprisingly low
momentum scales.
Note that since the Nachtmann moments are constructed to remove
higher spin operators that are responsible for TMCs, in the absence
of HTs one would expect the Nachtmann moments of the total structure
functions to equal the Cornwall-Norton moments of the LT functions,
$\mu_i^{(n)}({\rm LT+TMC}) = M_i^{(n)}({\rm LT})$ \cite{Steffens2006}.
This expectation is clearly borne out in Fig.~\ref{fig:mugg},
where the total $\mu_1^{(2)}$ and $\mu_2^{(2)}$ moments are
very similar to the moments computed from the LT PDFs.
For the proton structure functions, the average violation of duality
in the range $1 \leqslant Q^2 \leqslant 2.5$~GeV$^2$ is 3\% and 4\%
for the $F_1^{\gamma \gamma}$ and $F_2^{\gamma \gamma}$ structure functions, respectively,
with the maximum violation being $\approx 5\%$ and $\approx 10\%$ at
the lower end of the $Q^2$ range.
For the neutron the maximum violation is slightly larger, with the
LT $F_1^{\gamma \gamma}$ and $F_2^{\gamma \gamma}$ moments being $\approx 14\%$ and
$\approx 10\%$ smaller than the full results, although the average
over this $Q^2$ range is 5\% and 8\%, respectively.
This is consistent with several previous phenomenological analyses
\cite{Virchaux1992, Alekhin2004, JR2014} of high-energy scattering
data which have found no indication of strong isospin dependence of HT
corrections.
Following Ref.~\cite{Christy2010}, we assign a 5\% error on the
proton $F_1^{\gamma \gamma}$ and $F_2^{\gamma \gamma}$ structure functions,
and a larger, 10\% error on the neutron structure function
\cite{Bosted2007xd}, reflecting the additional nuclear model
dependence in extracting the latter from deuterium data \cite{CJ13}.
For the elastic contribution a 5\% uncertainty is assumed for the
total elastic structure functions from Ref.~\cite{Kelly2004}.
For higher moments ($n > 2$), which are progressively more sensitive
to the high-$x$ (or low-$W$) region, the degree to which duality is
satisfied diminishes at lower $Q^2$ values \cite{Ji95}.
\section{Duality in $\gamma Z$ structure functions and implications
for $Q_W^p$}
\label{sec:ImplBgZ}
In contrast to the electromagnetic structure functions which have been
studied extensively for many years, experimental information on the
interference $\gamma Z$ structure functions is for the most part nonexistent.
Some measurements of $F_2^{\gamma Z}$ and $xF_3^{\gamma Z}$ have been made at
very high $Q^2$ at HERA \cite{HERA}, where the $\gamma Z$ contribution
becomes comparable to the purely electromagnetic component of the
neutral current. However, no direct measurements of $F_1^{\gamma Z}$ and
$F_2^{\gamma Z}$ for the proton exist in the $Q^2 \sim$~few~GeV$^2$ range
relevant for the evaluation of the $\gamma Z$ box correction to $Q_W^p$
\cite{Hall2013hta}.
In principle, the computation of the imaginary part of the
$\square_{\gamma Z}^V$ correction to the proton's weak charge at a given
incident energy $E$ requires knowledge of the $\gamma Z$ structure functions
over all kinematics,
\begin{eqnarray}
\hspace{-1.0cm}
\Im m\, \square_{\gamma Z}^V(E)
&=& \frac{1}{(s - M^2)^2}
\int_{W_\pi^2}^s dW^2
\int_0^{Q^2_{\rm max}} dQ^2\, \frac{\alpha(Q^2)}{1+Q^2/M_Z^2}
\left[ F_1^{\gZ}
+ \frac{ s \left( Q^2_{\rm max}-Q^2 \right) }
{ Q^2 \left( W^2 - M^2 + Q^2 \right) } F_2^{\gZ}
\right],
\label{eq:ImBoxV}
\end{eqnarray}
where $\alpha$ is the running electromagnetic coupling evaluated
at the scale $Q^2$, and $M_Z$ is the $Z$ boson mass.
The $W^2$ range covered in the integral lies between the inelastic
threshold, $W_{\pi}^2 = (M + m_\pi)^2$ and the total electron--proton
center of mass energy squared, $s = M^2 + 2 M E$, while the $Q^2$
integration range is from 0 up to $Q^2_{\rm max} = 2ME (1 - W^2/s)$.
(The small mass of the electron is neglected throughout.)
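Numerically, Eq.~(\ref{eq:ImBoxV}) is a straightforward double integral
once the structure functions and the running coupling are available as
functions of $(W^2,Q^2)$. A schematic implementation is sketched below;
the callables \texttt{F1gZ}, \texttt{F2gZ} and \texttt{alpha} are
placeholders for the AJM input described later, and the small lower
cutoff in $Q^2$ merely avoids the endpoint of the (integrable) $1/Q^2$
factor, since physically $F_2^{\gamma Z}\to 0$ as $Q^2\to 0$.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

M, M_Z, m_pi = 0.9383, 91.1876, 0.1396   # masses in GeV

def im_box_gZ(E, F1gZ, F2gZ, alpha, Q2_min=1e-6):
    """Im part of the gamma-Z box at beam energy E (integral above).

    F1gZ(W2, Q2), F2gZ(W2, Q2) and alpha(Q2) are user-supplied
    callables standing in for the AJM parametrization and the
    running electromagnetic coupling.
    """
    s = M**2 + 2.0 * M * E
    W2_min = (M + m_pi)**2

    def integrand(Q2, W2):     # dblquad passes the inner variable first
        Q2_max = 2.0 * M * E * (1.0 - W2 / s)
        pref = alpha(Q2) / (1.0 + Q2 / M_Z**2)
        f2_coef = s * (Q2_max - Q2) / (Q2 * (W2 - M**2 + Q2))
        return pref * (F1gZ(W2, Q2) + f2_coef * F2gZ(W2, Q2))

    val, _ = dblquad(integrand, W2_min, s, lambda W2: Q2_min,
                     lambda W2: max(Q2_min, 2.0 * M * E * (1.0 - W2 / s)))
    return val / (s - M**2)**2
\end{verbatim}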
The real part of the $\gamma Z$ box correction which enters in
Eq.~(\ref{eq:qwHO}) can then be determined from the imaginary part
through an unsubtracted dispersion relation \cite{Gorchtein2008px,
Sibirtsev2010zg, Rislow2010vi, Gorchtein2011mz, Hall2013hta},
\begin{eqnarray}
\Re e\, \square_{\gamma Z}^V (E)
&=& \frac{2E}{\pi} {\cal P} \int_0^\infty dE' \frac{1}{E'^2-E^2}\,
\Im m\, \square_{\gamma Z}^V(E'),
\label{eq:DRv}
\end{eqnarray}
where ${\cal P}$ is the Cauchy principal value integral.
While the dispersion relation (\ref{eq:DRv}) is strictly valid only for
forward scattering, the $Q_{\text{weak}} \,\,$ experiment is performed at a
small scattering angle of $\approx 6^\circ$, so in practice the relation
provides a very good approximation.
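In a numerical implementation the principal value can be delegated to a
quadrature routine with a Cauchy weight, as in the sketch below; here
\texttt{im\_box} stands for any numerical representation of
$\Im m\, \square_{\gamma Z}^V(E')$, and the upper cutoff exploits the
convergence of the unsubtracted integral.
\begin{verbatim}
from math import pi
from scipy.integrate import quad

def re_box_gZ(E, im_box, E_max=1.0e4):
    """Re part of the gamma-Z box from the dispersion relation above.

    Writes 1/(E'**2 - E**2) = 1/[(E' - E)(E' + E)] and lets quad's
    'cauchy' weight take the principal value at E' = E; im_box(E')
    is a user-supplied callable, E_max truncates the convergent tail.
    """
    g = lambda Ep: im_box(Ep) / (Ep + E)
    pv = quad(g, 0.0, E_max, weight='cauchy', wvar=E)[0]
    return (2.0 * E / pi) * pv
\end{verbatim}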
Note that at high $Q^2$ and large $E$, the total correction
$\Re e\, \square_{\gamma Z}^V$ can also be expressed in terms of the
moments of the $F_1^{\gamma Z}$ and $F_2^{\gamma Z}$ structure functions by
switching the order of the integrations in Eqs.~(\ref{eq:ImBoxV})
and (\ref{eq:DRv}) and expanding the integrand in powers of
$x^2/Q^2$ \cite{Blunden2011rd}.
The higher order terms in $1/Q^2$ are then given in terms of
higher moments of the structure functions. The expansion in
Ref.~\cite{Blunden2011rd} was performed in terms of the
Cornwall-Norton moments, but the expansion could also be
generalized to the Nachtmann moments in Eqs.~(\ref{eq:mu1})
and (\ref{eq:mu2}).
However, because this approximation neglects contributions from
the low-$W$ region, it is appropriate only for DIS kinematics
and is not directly applicable for the present application,
where the integrals are dominated by contributions at low $Q^2$
and $W^2$.
In particular, as we discuss below, at energy $E \sim 1$~GeV,
approximately 2/3 of the integral comes from the traditional
resonance region $W < 2$~GeV and $Q^2 < 1$~GeV$^2$.
In contrast, the contribution from the DIS region for $W > 2$~GeV
and $Q^2 > 1$~GeV$^2$ is $\approx 13\%$ at this energy.
In Refs.~\cite{Hall2013hta, Hall2013loa} the $F_1^{\gamma Z}$
and $F_2^{\gamma Z}$ structure functions were computed from the
phenomenological Adelaide-Jefferson Lab-Manitoba (AJM)
parametrization.
This is based on the electromagnetic structure functions described
in Sec.~\ref{sec:DualSM}, but appropriately rotated to the $\gamma Z$
case according to the specific $W^2$ and $Q^2$ region considered,
with the rotation parameters constrained by phenomenological PDFs
\cite{Hall2013hta} and recent PVDIS data \cite{Wang2013kkc,
Wang2014bba}.
In the AJM model the integrals over $W^2$ and $Q^2$ in
Eq.~(\ref{eq:ImBoxV}) are split into three distinct regions,
characterized by different physical mechanisms underlying the
scattering process. In each region the most accurate parametrizations
or models of $F_1^{\gamma Z}$ and $F_2^{\gamma Z}$ available for the appropriate
kinematics are used.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.65\textwidth]{Q2W2reg.pdf}
\caption{Kinematic regions contributing to the
$\square_{\gamma Z}^V$ integrals in the AJM model.
Region I (blue) includes the nucleon resonance region
at low $W^2$ and $Q^2$;
Region II (red) encompasses the low-$Q^2$, high-$W^2$
region described by Regge theory; and
Region III (green) is the deep-inelastic region
characterized by LT PDFs.
The shaded band between $Q^2=1$ and 2.5~GeV$^2$ represents
the extension of Region III from its previous boundary in
Ref.~\cite{Hall2013hta} ($Q^2=2.5$~GeV$^2$) to its current
reach ($Q^2=1$~GeV$^2$).}
\label{fig:Q2W2}
\end{center}
\end{figure}
In the present analysis, we define the $W^2$ and $Q^2$ regions
as illustrated in Fig.~\ref{fig:Q2W2}.
``Region~I'' (low $Q^2$, low $W^2$) encompasses
$0 \leqslant Q^2 \leqslant 10$~GeV$^2$
for $W_\pi^2 \leqslant W^2 \leqslant 4$~GeV$^2$, and
$0 \leqslant Q^2 \leqslant 1$~GeV$^2$
for $4 < W^2 \leqslant 9$~GeV$^2$, using the $\gamma \gamma \to \gamma Z$
rotated Christy-Bosted parametrization \cite{Christy2010}
of the resonance $+$ background structure functions.
For ``Region~II'' (low $Q^2$, high $W^2$), the vector meson
dominance $+$ Regge model of Alwall and Ingelman
\cite{Alwall2004wk} is used over the range
$0 \leqslant Q^2 \leqslant 1$~GeV$^2$ and $W^2 > 9$~GeV$^2$.
Finally, for ``Region~III'' (high $Q^2$, high $W^2$) the
perturbative QCD-based global fit from Alekhin {\it et al.}
(ABM) \cite{Alekhin2012} is used for $Q^2 > 1$~GeV$^2$ and
$W^2 > 4$~GeV$^2$, which includes LT as well as subleading
$1/Q^2$ TMC and HT contributions.
For $x=1$, the elastic contributions to the structure functions
are computed using the form factor parametrizations from
Ref.~\cite{Kelly2004}.
While the uncertainties on the $\gamma Z$ structure functions in Region~III
are small --- typically a few \%, reflecting the errors on the PDFs
from which they are constructed through the simple replacement of
quark charges $e_q \to g_V^q$ --- the uncertainties in $F_1^{\gamma Z}$
and $F_2^{\gamma Z}$ are expected to be larger at lower $W^2$ and $Q^2$.
In the previous analyses of the $\gamma Z$ correction \cite{Hall2013hta,
Hall2013loa}, the PDF-based description was limited to
$Q^2 > 2.5$~GeV$^2$ (and $W^2 > 4$~GeV$^2$).
Motivated by the observation of duality in the proton and neutron
$F_1^{\gamma \gamma}$ and $F_2^{\gamma \gamma}$ structure functions, and in PVDIS from the
deuteron, as discussed in Sec.~\ref{sec:DualSM}, we further assume the
approximate validity of duality in the $\gamma Z$ proton structure functions
and extend the QCD description of Region~III down to $Q^2 = 1$~GeV$^2$.
Lowering the boundary of the DIS region, which is well constrained
by leading twist PDFs, to smaller $Q^2$ decreases the contribution
from Regions~I and II, and hence reduces the model uncertainty on the
$\gamma \gamma \to \gamma Z$ rotation of the structure functions in this region.
Within the AJM $\gamma Z$ structure function parametrization, the most
uncertain elements are the $\kappa_C^{T, L}$ continuum parameters used to relate
the high-mass, non-resonant continuum part of the $\gamma Z$ transverse
and longitudinal cross sections to the $\gamma \gamma$ cross sections in the
generalized vector meson dominance model \cite{Alwall2004wk,
Sakurai1972wk}.
The $\kappa_C^{T, L}$ parameters are fitted by matching the $\gamma Z$ to $\gamma \gamma$
cross section ratios with the LT structure function ratios at
$Q^2 = 1$~GeV$^2$,
\begin{eqnarray}
\frac{\sigma_T^{\gamma Z} (\kappa_C^T)}{\sigma_T^{\gamma \gamma}}
&=& \left. \frac{F_1^{\gZ}}{F_1^{\gg}} \right|_{\rm LT},
\hspace*{1.5cm}
\frac{\sigma_L^{\gamma Z} (\kappa_C^L)}{\sigma_L^{\gamma \gamma}}\
=\ \left. \frac{F_L^{\gZ}}{F_L^{\gg}} \right|_{\rm LT},
\label{eq:sigTFi}
\end{eqnarray}
where the longitudinal structure function $F_L$ is
related to the $F_1$ and $F_2$ structure functions by
$F_L = \rho^2 F_2 - 2x F_1$ \cite{Hall2013hta}.
(Note that, consistent with the duality hypothesis, we use the LT
structure functions in Region~III rather than the total structure
functions that may include the small subleading contributions
\cite{Alekhin2012}.)
The resulting fit values,
\begin{equation}
\kappa_C^T\, =\, 0.36 \pm 0.15, \qquad \qquad
\kappa_C^L\, =\, 1.5 \pm 3.1,
\label{eq:kappaC}
\end{equation}
are obtained by averaging the $\kappa_C^{T, L}$ parameters determined from
10 fits, with the ratios in Eq.~(\ref{eq:sigTFi}) matched at values of
$W^2$ between 4~GeV$^2$ and 13~GeV$^2$.
These values are then used to compute the $\gamma Z$ structure functions
in the dispersion integral for $1 \leqslant Q^2 \leqslant 10$~GeV$^2$
and $W_\pi^2 \leqslant W^2 \leqslant 4$~GeV$^2$.
To allow for stronger violations of duality at lower $Q^2$,
the uncertainties on $\kappa_C^{T, L}$ are inflated to 100\% for the region
$0 \leqslant Q^2 < 1$~GeV$^2$ for all $W^2$.
In the numerical calculations the uncertainties on the proton $\gamma Z$
structure function parametrizations are taken to be the same as those
used in the $\square_{\gZ}^V \,$ calculation in Ref.~\cite{Hall2013hta}, and a 5\%
uncertainty is assumed for the nucleon elastic contributions.
\begin{table}[b]
\begin{center}
\caption{Contributions to \regzv\ from Regions~I, II and III,
and the total, at the kinematics of the $Q_{\text{weak}}$,
MOLLER, and MESA experiments.}
\begin{tabular}{cccc} \hline \hline \vspace*{-0.36cm} \\
& \multicolumn{3}{c}{ \regzv\ ($\times 10^{-3}$)} \\ \cline{2-4}
& \ \ $Q_{\text{weak}} \,\,$ \ \
& \ \ MOLLER \ \
& \ \ MESA \ \ \\
Region\ \ & ($E=1.165$~GeV)
& ($E=11$~GeV)
& ($E=0.18$~GeV) \\ \hline\vspace*{-0.36cm} \\
I & $4.3 \pm 0.4\ $
& $2.5 \pm 0.3$
& $1.0 \pm 0.1$ \\
II & $0.4 \pm 0.05$
& $3.2 \pm 0.5$
& $0.06 \pm 0.01$ \\
III & $0.7 \pm 0.04$
& $5.5 \pm 0.3$
& \ $0.1 \pm 0.01$ \\ \hline
Total & $5.4 \pm 0.4\ $
& \!\!\!\! $11.2 \pm 0.7$
& \!\!\! $1.2 \pm 0.1$ \\ \hline \hline
\end{tabular}
\label{tab:ReBox}
\end{center}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.85\textwidth]{LogReBox.pdf}
\caption{Energy dependence of the $\gamma Z$ box correction,
$\Re e\, \square_{\gamma Z}^V$, to $Q_W^p$.
The contributions from various regions in $W^2$ and $Q^2$
(Regions I, II and III) are shown separately, as is the
total (solid curve). The dashed vertical lines indicate the
beam energies of the various parity-violating experiments
($E = 0.18$~GeV for MESA \cite{MESA},
$E = 1.165$~GeV for $Q_{\text{weak}} \,\,$ \cite{Androic2013}, and
$E = 11$~GeV for MOLLER \cite{MOLLER}).
\label{fig:QRBLoK}
\end{center}
\end{figure}
Using the $\gamma Z$ structure functions obtained from the newly fitted
$\kappa_C^{T, L}$ values, the $\Re e\, \square_{\gamma Z}^V$ correction is displayed
in Fig.~\ref{fig:QRBLoK} as a function of beam energy, with a
breakdown of the individual contributions from different regions
given in Table~\ref{tab:ReBox}.
At the incident beam energy $E = 1.165$~GeV of the $Q_{\text{weak}} \,\,$ experiment,
the total correction is found to be
\begin{eqnarray}
\Re e\, \square_{\gamma Z}^V &=& (5.4 \pm 0.4) \times 10^{-3}.
\label{eq:ReBox_final}
\end{eqnarray}
This is in good agreement with the value
$\Re e\, \square_{\gamma Z}^V = (5.57 \pm 0.36) \times 10^{-3}$
found in the previous analysis \cite{Hall2013hta}.
In particular, even though the values of the continuum rotation
parameters in the earlier fit were somewhat different
($\kappa_C^T = 0.65 \pm 0.14$ and $\kappa_C^L = -1.3 \pm 1.7$
with matching to the total DIS structure functions at
$Q^2=2.5$~GeV$^2$), the central value of
$\Re e\, \square_{\gamma Z}^V$ remains relatively unaffected.
The largest contribution to \regzv\ at the $Q_{\text{weak}} \,\,$ energy is still
from Region~I, which makes up $\approx 80\%$ of the total,
with its error dominating the total uncertainty.
Of this, $\approx 2/3$ is from the traditional resonance
region $W^2 < 4$~GeV$^2$
(of which 61\% is from $Q^2 < 1$~GeV$^2$
and 6\% from $Q^2 > 1$~GeV$^2$), and
$\approx 13\%$ is from $Q^2 < 1$~GeV$^2$ and
$4 < W^2 < 9$~GeV$^2$.
The contributions from Regions~II and III are $\approx 7\%$
and $\approx 13\%$, respectively, of the total at the $Q_{\text{weak}} \,\,$
energy, but become more important with increasing energy.
Interestingly, the modified $Q^2$ boundary for Region~III
results in a somewhat smaller contribution from Region~II
($0.4 \times 10^{-3}$ compared with $0.6 \times 10^{-3}$),
while the Region~III contribution has doubled
($0.7 \times 10^{-3}$ compared with $0.35 \times 10^{-3}$)
relative to that in Ref.~\cite{Hall2013hta}.
In effect, moving the $Q^2$ boundary from 2.5~GeV$^2$ to
1~GeV$^2$ shifts $\approx 6\%$ of the total correction
$\Re e\, \square_{\gamma Z}^V$ from Regions~I and II to Region~III.
Furthermore, since the $\gamma Z$ structure functions at
$Q^2 < 1$~GeV$^2$ depend on $\kappa_C^{T,L}$, because the $\kappa$
values are refitted at $Q^2 = 1$~GeV$^2$, duality also indirectly
affects the low-$Q^2$ contribution.
Therefore, although duality is formally used only down to
$Q^2 = 1$~GeV$^2$, the constraint influences the $\gamma Z$
calculation below 1~GeV$^2$ as well, as the matching now is
to a more reliable $\gamma Z$ cross section at that point.
While we have assumed the validity of duality for the $F_1^{\gamma Z}$
and $F_2^{\gamma Z}$ structure functions down to $Q^2=1$~GeV$^2$, the
possible violations of duality have a minor effect on the analysis.
Even if one takes the maximum violation of duality ($\approx 14\%$)
in the $\gamma \gamma$ structure functions seen in Fig.~\ref{fig:mugg} at the
lowest $Q^2$ over the entire $1 \leqslant Q^2 \leqslant 2.5$~GeV$^2$
range, the error introduced into the total $\Re e\, \square_{\gamma Z}^V$
from duality violation is $< 0.1\%$.
Overall, compared with Ref.~\cite{Hall2013hta} the total relative
uncertainty increases marginally, from 6.5\% to 7.4\%, despite the
rather more conservative estimates of the structure function
uncertainty for $Q^2 \lesssim 1$~GeV$^2$ through the inflated
errors on $\kappa_C^{T,L}$.
Note that the same 100\% uncertainties are used in the
transformation of the vector meson dominance model
\cite{Alwall2004wk, Sakurai1972wk} in Region~II.
For Region~III, the LT $F_1^{\gamma Z}$ and $F_2^{\gamma Z}$ structure
functions are assigned a 5\% uncertainty for
$Q^2 \geqslant 2.5$~GeV$^2$, which is increased linearly
to 10\% at $Q^2 = 1.0$~GeV$^2$.
Since the electromagnetic structure functions are reasonably
well approximated by the LT results even below the traditional
resonance-DIS boundary of $W^2 = 4$~GeV$^2$, we also examine
the effect of lowering the $W^2$ cut into the peripheral
resonance region down to $W^2 = 3$~GeV$^2$.
In this case the contribution from Region~III increases to
$0.9 \times 10^{-3}$, while that from Region~I correspondingly
decreases to $4.2 \times 10^{-3}$, hence leaving the total
essentially unchanged.
At the higher $E=11$~GeV energy of the planned MOLLER experiment
at Jefferson Lab \cite{MOLLER}, the DIS region contributes
about half of the total,
$\Re e\, \square_{\gamma Z}^V = (11.2 \pm 0.7) \times 10^{-3}$,
with Regions~I and II making up the other 50\%.
This again agrees well with the earlier determination
$\Re e\, \square_{\gamma Z}^V = (11.5 \pm 0.8) \times 10^{-3}$
from Ref.~\cite{Hall2013loa}.
On the other hand, for the possible future MESA experiment
in Mainz \cite{MESA} at a lower energy, $E=0.18$~GeV,
the bulk of the contribution still comes from Region~I,
but is reduced by a factor of $\sim 4$ compared with the
correction at the $Q_{\text{weak}} \,\,$ energy.
\section{Conclusion}
\label{sec:Con}
Quark-hadron duality is one of the most remarkable phenomena ever
observed in hadronic physics.
While some aspects of global duality can be formulated in the
language of QCD, such as the relation between the scale independence
of structure function moments and the size of higher twists,
the detailed workings of local duality, for specific regions
of $W^2$ or $x$, are not well understood from first principles.
Nevertheless, there are many marvellous practical applications
to which duality can be put.
For example, the high-energy behavior of hadronic cross sections
can be used to predict averages of resonance properties; and,
conversely, low-$W^2$ data, suitably averaged, can be utilized to
constrain LT parton distributions in difficult to access kinematic
regions.
The latter category appears the most promising approach at present,
with several global PDF analyses \cite{Alekhin2012, CJ13, JR2014}
extending their coverage down to lower $Q^2$ ($Q^2 \gtrsim 1$~GeV$^2$)
and $W^2$ ($W^2 \gtrsim 3$~GeV$^2$) values than in traditional
LT analyses. This not only increases considerably the available
data base for PDF fitting, it is also one of the few ways currently
available to study PDFs at high $x \sim 1$.
The main implication of duality for the current analysis is the
extension of the LT description of $\gamma Z$ structure functions to
lower $Q^2$, $Q^2 = 1$~GeV$^2$, than in previous work
\cite{Hall2013hta}.
This serves to reduce the size of the contribution from Region~I,
which has the largest uncertainty associated with the behavior
of the $\gamma Z$ structure functions at low $Q^2$ and $W^2$.
To account for the possible model dependence of the $\gamma \gamma \to \gamma Z$
structure function rotation and the violation of duality at low $Q^2$,
we have assigned rather conservative errors on $F_1^{\gamma Z}$ and
$F_2^{\gamma Z}$ in this region. This is reflected in the increased
uncertainty on this contribution compared with our previous analysis
\cite{Hall2013hta}, which is somewhat offset by the larger
contribution from Region~III that is well constrained by PDFs.
The final result of
$\Re e\, \square_{\gamma Z}^V = (5.4 \pm 0.4) \times 10^{-3}$
is consistent with Ref.~\cite{Hall2013hta}, but with a slightly
larger relative uncertainty, which comes almost entirely from
Region~I. It also agrees with the central value from
Ref.~\cite{Gorchtein2011mz}, although the error there is
$\approx 5$ times larger, which in view of our current analysis
appears to be somewhat overestimated.
Our findings suggest that with the constraints from existing
PVDIS data and PDFs, and now with the further support from
quark-hadron duality, the overall uncertainty in the estimate of
the $\gamma Z$ box correction is well within the range needed for an
unambiguous extraction of the weak charge from the $Q_{\text{weak}} \,\,$ experiment.
Further reduction of the uncertainty on the $\gamma Z$ correction will
come from new measurements of PVDIS asymmetries on the proton,
particularly at the low $Q^2$ and $W^2$ values that are most
relevant at the $Q_{\text{weak}} \,\,$ energy. These will also be useful in
constraining the $\gamma Z$ contribution at the much lower energy
$E=0.18$~GeV of the MESA experiment \cite{MESA}, where we find
the correction to be $\approx 4$ times smaller but even more
dominated by Region~I.
In contrast, for the MOLLER experiment at the higher $E=11$~GeV
energy the dispersion integral is dominated by the DIS region,
which although contributing to a larger overall $\square_{\gamma Z}^V$
correction, is better determined in terms of PDFs.
These new experiments hold the promise of allowing the most
precise low-energy determination of the weak mixing angle to date,
and providing a unique window on possible new physics beyond the
Standard Model.
\section*{Acknowledgements}
This work was supported by NSERC (Canada), the DOE Contract No.
DE-AC05-06OR23177, under which Jefferson Science Associates, LLC
operates Jefferson Lab, and the Australian Research Council through an
Australian Laureate Fellowship (A.W.T.), a Future Fellowship (R.D.Y.)
and through the ARC Centre of Excellence for Particle Physics at the
Terascale.
\section{Introduction}
{Weighted ensemble}~\cite{
bhatt2010steady,
chong2017path,costaouec2013analysis,
darve2013computing,dickson,donovan2013efficient,
huber1996weighted,
rojnuckarin1998brownian,
rojnuckarin2000bimolecular,zhang2007efficient,
zhang2010weighted,
zwier2015westpa} is an importance sampling
technique, based on interacting particles, for distributions associated
with a Markov chain.
In this article, we will
focus on sampling the average
of a function, or {\em observable},
with respect to the steady state of
a generic Markov chain. By generic,
we mean that the only thing
we might know about the Markov chain is
how to sample it; in particular,
we may not know its stationary
distribution up to a
normalization factor.
A weighted
ensemble consists of a collection
of {\em particles} with
associated {\em weights}.
The particles evolve between
{\em resampling} steps according
to the law of the underlying Markov
chain. In each resampling step,
some of the particles are {\em copied}
while others are {\em killed};
the resulting particles are
given new weights so that the weighted
ensemble is statistically
unbiased. In this way,
weighted ensemble can be understood
as a kind of sequential Monte
Carlo method~\cite{del2004feynman,del2005genealogical,doucetSMC},
as we explain in more detail below.
The mechanism
for resampling is based on
dividing the particles into
{\em bins}, where
all the particles in a given bin are treated
the same way. This can be understood
as a kind of coarse-graining or stratification.
In practice, the resampling should be designed so that important
particles survive and irrelevant particles
are killed. This effect can be
achieved with bins that sufficiently
resolve the relative
importance of particles, along with
a rule assigning a
number of copies to each bin
that reflects its importance. The definition
of the bins, and how many copies to
maintain in each,
of course requires some care.
With appropriate choices, weighted
ensemble can have drastically smaller
variance than direct Monte
Carlo, or independent particles.
Our contribution in this article is
two-fold. First, we prove the consistency of
weighted ensemble for steady state averages
via an ergodic theorem. We are not
aware of any other ergodic theorems
for interacting-particle importance sampling methods
of the type and generality that we consider here.
And second,
we produce weighted
ensemble variance formulas valid
for a {\em
finite} number of particles.
On
the theoretical side, these
variance formulas are handy for understanding
the rate of weighted ensemble convergence, and on the
practical side, these same formulas
can be used for optimizing the
resampling procedure. We will, however,
mostly leave the discussion of bin
and other parameter choices
to other works, especially
our companion paper~\cite{aristoff2018steady}.
See the references
above and~\cite{aristoff2016analysis,aristoff2018steady}
for discussions on such
practical questions.
\subsection{Description of the method}
Here we describe weighted
ensemble somewhat informally. A
detailed description is
in Section~\ref{sec:WE} below.
Weighted ensemble consists of
a fixed number, $N$, of
particles
belonging to
a common state space, each
carrying a positive scalar weight, and undergoing
repeated resampling and evolution steps.
At each time $t\ge 0$ before resampling, the particles and weights
are $$\xi_t^1,\ldots,\xi_t^N \text{ and }
\omega_t^1,\ldots,\omega_t^N,$$
respectively. After resampling, the particles
and weights are, respectively,
$$\hat\xi_t^1,\ldots,\hat\xi_t^N\text{ and }
\hat\omega_t^1,\ldots,\hat\omega_t^N.$$
Using a genealogical
analogy, the resampling
step is called
{\em selection}, and the particle
evolution between resampling
steps is termed {\em mutation}.
At each time $t$, the
particles and weights undergo
a selection and then a mutation
step. Thus,
weighted ensemble evolves from
time $t$ to $t+1$ as follows:
\begin{align*}
&\{\xi_t^i\}^{i=1,\ldots,N}
\xrightarrow{\textup{selection}}
\{{\hat \xi}_t^i\}^{i=1,\ldots,N}
\xrightarrow{\textup{mutation}}
\{{\xi}_{t+1}^i\}^{i=1,\ldots,N},\\
&\{\omega_t^i\}^{i=1,\ldots,N}
\xrightarrow{\textup{selection}}
\{{\hat \omega}_t^i\}^{i=1,\ldots,{N}}
\xrightarrow{\textup{mutation}}
\{{\omega}_{t+1}^i\}^{i=1,\ldots,N}.
\end{align*}
For the purposes of this
article, the initial particles
$\xi_0^1, \ldots,\xi_0^N$
can be arbitrary. The initial
weights, however, must be strictly
positive and sum to $1$. That is, $\omega_0^i>0$ for all $i$,
and $\omega_0^1+\ldots + \omega_0^N =1$.
The mutation step consists
of the particles evolving independently
according to the law of the
underlying Markov chain. Writing
$K$ for the kernel of this Markov chain,
this means that $\xi_{t+1}^j$ is
distributed as $K({\hat \xi}_t^j,\cdot)$.
The
weights do not change during the
mutation step,
$${\omega}_{t+1}^{j} = \hat{\omega}_{t}^j.$$
The selection step
is completely
characterized by the number
of times each particle is copied,
along with a formula for adjusting
the weights. Let $C_t^j$ be the
number of times $\xi_t^j$ is copied.
Appealing to the genealogical
analogy, we think of $\xi_t^j$
as a {\em parent} and of $C_t^j$ as its
number of {\em children}. Thus,
$$C_t^j = \#\{i: \textup{par}({\hat \xi}_t^i) = \xi_t^j\},$$
where $\textup{par}(\hat{\xi}_t^i) = \xi_t^j$ means $\xi_t^j$ is the parent of
$\hat{\xi}_t^i$, and $\#$ indicates the
number of elements in a set.
Children are just copies of their
parents: $$\hat{\xi}_t^i = \xi_t^j, \quad \text{if }\textup{par}(\hat{\xi}_t^i) = \xi_t^j.$$
The weight of a child
is equal to the weight of its parent, divided
by its parent's average number of children.
For example, if a parent has $3$ children
with probability $1$, then each of its
children has weight equal to $1/3$
of the parent's weight.
Writing $\beta_t^j$ for the mean
value of $C_t^j$, this defines weight adjustments
$$\hat{\omega}_t^i = \frac{\omega_t^j}{\beta_t^j}, \qquad \text{if }\textup{par}(\hat{\xi}_t^i) = \xi_t^j.$$
The procedure above defines distributions
$\omega_t^1 \delta_{\xi_t^1}+ \ldots+\omega_t^N \delta_{\xi_t^N}$ at each time $t$,
where $\delta_\xi$ is the Dirac delta distribution
centered at $\xi$. In a sense to
be described below, these
distributions give unbiased
estimates of the law of a Markov
chain with kernel $K$ and the same initial distribution
as the weighted ensemble. This is a
direct result of the fact that the
particles evolve according to $K$
and the resampling is, by construction,
unbiased.
The selection step we have described so
far is general enough to describe
many different interacting particle
methods~\cite{aristoff2016analysis}.
What distinguishes weighted
ensemble from these methods is a particular resampling
mechanism, based on a choice of {\em bins}
${\mathcal B}$ that partition
the set of particles.
The resampling enforces a user-specified
number, $N_t(u)$, of children in each bin $u \in {\mathcal B}$ at time $t$. Variance
reduction can be
achieved by a judicious choice of binning and
of $N_t(u)^{u \in {\mathcal B}}$.
We refer to the latter as the {\em particle allocation}.
The children in bin $u \in {\mathcal B}$
are obtained by sampling $N_t(u)$
times, with replacement, from the
parents in bin $u$, according to their
weight distribution. That is, the $N_t(u)$ children
in bin $u \in {\mathcal B}$ are obtained
by repeatedly sampling their parents according
to the distribution
$$\Pr(\textup{sample }\xi_t^j\text{ in bin }u) = \frac{\omega_t^j}{\omega_t(u)}, \qquad \omega_t(u) := \sum_{i\,:\,\xi_t^i \text{ in bin } u}\omega_t^i.$$
The mean number of children of parent $\xi_t^j$ in bin $u \in {\mathcal B}$ is then
$$\beta_t^j = \frac{N_t(u)\omega_t^j}{\omega_t(u)}.$$
Each bin containing a parent
must have at least
one child after selection,
that is, $N_t(u) \ge 1$ whenever
$\omega_t(u)>0$. If a bin $u \in {\mathcal B}$
contains no parents, then of course
it can have no children, meaning $N_t(u) = 0$
if $\omega_t(u) = 0$. The total
number of particles is always $\sum_{u \in {\mathcal B}}N_t(u) = N$.
The children's weights
in any bin $u\in {\mathcal B}$ are all equal to $\omega_t(u)/N_t(u)$ after selection.
So since $N_t(u)$ counts the number of children
in bin $u$, the total particle weight
in each bin is the same before and
after selection. As a result,
the total particle weight is
preserved over time, $\omega_t^1 + \ldots + \omega_t^N = 1$ for all $t \ge 0$.
This feature is important for our
ergodic theorem, as we will discuss below.
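For concreteness, here is a minimal Python sketch of this selection step.
The function and variable names are ours, chosen purely for illustration
(they come from no weighted ensemble package); particle states are stored
in a NumPy array and bins are labeled by integers.
\begin{verbatim}
import numpy as np

def select(xs, ws, bins, alloc, rng):
    """One weighted ensemble selection step.

    xs    : (N,) array of particle states
    ws    : (N,) array of weights, summing to 1
    bins  : (N,) array of integer bin labels
    alloc : dict mapping each occupied bin u to N_t(u), with
            sum(alloc.values()) == N and N_t(u) >= 1 when occupied
    """
    new_xs, new_ws = [], []
    for u, n_children in alloc.items():
        in_u = np.flatnonzero(bins == u)
        w_u = ws[in_u].sum()              # bin weight omega_t(u)
        # sample parents with replacement, proportionally to weight
        parents = rng.choice(in_u, size=n_children, p=ws[in_u] / w_u)
        new_xs.extend(xs[parents])
        # every child in bin u receives weight omega_t(u) / N_t(u)
        new_ws.extend([w_u / n_children] * n_children)
    return np.array(new_xs), np.array(new_ws)
\end{verbatim}
Within each bin $u$, the expected number of children of parent $\xi_t^j$
under this sampling is $N_t(u)\omega_t^j/\omega_t(u) = \beta_t^j$, and the
children's weights sum to $\omega_t(u)$, so the total weight is indeed
preserved.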
The form of this selection step was proposed and
shown to be optimal, among bin-based resampling
mechanisms, in~\cite{costaouec2013analysis,darve2013computing}.
This selection step is slightly different from
the one in our earlier paper~\cite{aristoff2016analysis},
and simpler than the
one
described by Huber in
the original weighted ensemble
paper~\cite{huber1996weighted}.
In Huber's article,
the selection step
is presented as a splitting
and merging scheme, in which
a particle's weight is viewed
as its ``size.''
In this interpretation, splitting a particle
means dividing it into
equal sized pieces, and merging means
fusing several particles
together. The selection
step defined above can be implemented
via a splitting
and merging procedure in each bin.
Indeed
this is a useful way to understand
the
preservation of total weight.
\subsection{Main result}
We are interested in estimating the average of an observable
with respect to the stationary
distribution of a
Markov chain. We use the notation
$$\text{Quantity of interest} = \int f \,d\mu,$$ with $f$
the observable and $\mu$ the
stationary distribution of the Markov kernel $K$.
We will estimate
$\int f\,d\mu$ via the weighted
ensemble time averages
$$\theta_T = \frac{1}{T}\sum_{t=0}^{T-1}\sum_{i=1}^N \omega_t^i f(\xi_t^i).$$
Our main result is that, if $K$
is uniformly geometrically ergodic~\cite{douc}, then
\begin{equation}\label{erg0}
\lim_{T \to \infty} \theta_T \stackrel{a.s.}{=} \int f\,d\mu,
\end{equation}
where $\stackrel{a.s.}{=}$ indicates equality
with probability $1$.
We prove this ergodic theorem by
analyzing the behavior, for fixed finite $N$
in the limit
$T \to \infty$,
of the {\em selection variance}
and {\em mutation variance} of
weighted ensemble.
As the names indicate, these
are the variances of $\theta_T$ associated with
each of the selection and mutation steps
described above. We also
obtain formulas for the
scaling in time of the
error in~\eqref{erg0}. An overview
of these and more results is in
Section~\ref{sec:intro4}.
\subsection{Applications and comparison with related methods}
As an interacting particle
method where the interaction
is based on
resampling, weighted
ensemble can be understood
as a kind of sequential
Monte Carlo method (for a
very incomplete bibliography,
see {\em e.g.} the textbooks~\cite{doucet2001sequential,del2004feynman},
the review~\cite{del2014particle}
and the articles~\cite{del2005genealogical,weare,assaraf}).
Indeed, sequential Monte Carlo
and weighted ensemble share
the same basic
selection and mutation
structure. Weighted ensemble,
however, is
different from most sequential
Monte Carlo methods for a few reasons:
\begin{itemize}
\item The resampling mechanism is unusual.
Most sequential Monte Carlo methods
use resampling schemes based on a
globally defined fitness function. Weighted
ensemble is distinctive in that the resampling
is based on a stratification or
binning of state space. See
however island particle models~\cite{island1,island2} for
a similar mechanism in a different
context.
\vskip3pt
\item The quantity of interest is different.
In sequential Monte Carlo, the quantity of
interest often takes a form similar to
\begin{align}\begin{split}\label{seq_MC}
&{\mathbb E}\left[f(\xi_T)\prod_{t=0}^{T-1} G_t(\xi_t,\ldots,\xi_0)\right]\\
&\qquad \text{ or }\qquad\frac{{\mathbb E}\left[f(\xi_T)\prod_{t=0}^{T-1} G_t(\xi_t,\ldots,\xi_0)\right]}{{\mathbb E}\left[\prod_{t=0}^{T-1} G_t(\xi_t,\ldots,\xi_0)\right]},
\end{split}
\end{align}
where $(\xi_t)_{t \ge 0}$ is a Markov
chain with kernel $K$. In these cases,
weights may not be needed, or if
they are, they
can have a different meaning, where
they depend directly on the values of $G_t(\xi_t,\ldots,\xi_0)$
as well as on the resampling adjustments.
In weighted ensemble, the weights arise
{\em only} from resampling adjustments
and the initial condition.
\vskip3pt
\item The applications are different. Weighted
ensemble was developed for computing reaction
rates in chemistry. Such rates, as we explain below, can be computed from a steady
state average $\int f \,d\mu$. Most sequential Monte Carlo
applications are either in diffusion or quantum Monte Carlo~\cite{assaraf,rousset1,rousset2,weare} or
in filtering~\cite{del2004feynman},
where the quantities of interest have a form similar to~\eqref{seq_MC} above, and the $G_t$ may be associated
to a potential energy or a likelihood ratio.
\vskip2pt
Sequential Monte
Carlo is sometimes used for rare
event importance sampling~\cite{del2005genealogical,chraibi2018optimal,webber}.
In this context, which is the most similar
to weighted ensemble, the $G_t$ are
user-chosen importance biasing functions,~\eqref{seq_MC}
defines the biased distributions
from which the particles are sampled,
and the quantity of interest usually has the form
${\mathbb E}[f(\xi_T)]$. Here, because the
quantity of interest is an average with respect
to an {\em unbiased} distribution,
weight adjustments are needed to account for the
importance biasing. The resulting weights can become
degenerate over
large times even for carefully designed
biasing functions $G_t$. One advantage of weighted ensemble, beyond its simplicity and generality,
is that it automatically avoids this degeneracy. See Section~\ref{sec:counterexample} below for discussion.
\end{itemize}
Weighted ensemble was developed
for applications in computational
chemistry~\cite{huber1996weighted}
ranging from state space
exploration~\cite{dickson} to protein association~\cite{huber1996weighted} and protein
folding~\cite{zwier2010reaching}. The
application we have in mind is
the computation of the characteristic
time for a protein to fold.
The mean folding time
is the inverse of the steady state
flux of particles from
the unfolded into
the folded state~\cite{hill}.
Since this flux is typically very small,
importance sampling is
needed to
estimate it
with substantial precision~\cite{bhatt2010steady,suarez}.
The
protein folding time problem
is one of the most important
applications of weighted
ensemble. In this setup, $K$ is usually
a time discretization of Langevin
molecular dynamics~\cite{tony_book}, with a source
in the unfolded state and sink
in the folded state~\cite{bhatt2010steady}. The observable $f$ is the
characteristic or indicator function
of the folded state, and $\int f\,d\mu$ approximates
the
steady state flux into the folded
state, up to a multiple
given by the time step~\cite{aristoff2016analysis,
aristoff2018steady,bhatt2010steady}.
Other methods that are similar to
weighted ensemble, besides
sequential Monte Carlo, include
Exact Milestoning~\cite{bello2015exact},
Non-Equilibrium Umbrella
Sampling~\cite{dinner2016trajectory,warmflash2007umbrella}, Transition
Interface Sampling~\cite{van2003novel}, and
Trajectory Tilting~\cite{vanden2009exact}. All of these methods
use iterative procedures, based
on stratification, to approximate
some steady state of interest.
Like weighted ensemble, they
do not require any strong assumptions (like
reversibility) on the underlying
Markov chain.
These methods,
however, suffer from
a finite particle number
bias which can be difficult to
quantify.
Weighted ensemble, like sequential Monte Carlo, is {\em unbiased}~\cite{zhang2010weighted}
in a sense to be described below (Theorem~\ref{thm_unbiased}).
Other unbiased methods, differing
from weighted ensemble in that they
rely on sampling ``forward'' paths ({\em e.g.}
paths going from the unfolded to the
folded state) instead
of steady state,
include Adaptive Multilevel
Splitting and Forward Flux Sampling~\cite{allen2006forward,
brehier_AMS,brehier_AMS2,tony_AMS}.
The unbiased property allows
for a straightforward study of variance using
martingale techniques~\cite{aristoff2016analysis,
brehier_AMS2,del2004feynman,del2005genealogical}.
In this article, we extend these techniques to study the long-time stability of weighted
ensemble.
\subsection{Contents of the article}\label{sec:intro4}
Our main results concern the stability
of weighted ensemble over large times~$T$: we prove an $O(1/{T})$ scaling
of the $L^2$ error (Corollary~\ref{cor_scale_var}) and an ergodic theorem (Theorem~\ref{thm_main})
for weighted ensemble time averages.
We show that analogous results {\em do not} hold for
sequential Monte Carlo with resampling
based on Gibbs-Boltzmann potentials~\cite{del2005genealogical}, even
when the potentials are carefully designed
(Section~\ref{sec:counterexample}).
The
lesson is that long-time computations are
very sensitive to the structure of the selection step
(Remark~\ref{sel_sensitive}).
Because our ergodic theorem
allows for essentially
arbitrary binning, we do not
automatically get $O(1/N)$ scaling
of the error in the number,
$N$, of particles. In Remark~\ref{rmk_var_scale}
and in Section~\ref{sec:compare_naive},
we discuss the scaling of the weighted
ensemble variance
in $N$, including how to beat direct Monte Carlo.
In a companion paper~\cite{aristoff2018steady},
we discuss how to optimize the bins and particle
allocation for minimizing the variance, together with
other issues related to
implementing weighted ensemble.
See~\cite{aristoff2016analysis}
for discussions of optimization in
the more standard finite
time setting. (See also~\cite{chraibi2018optimal}
and the references therein for
closely related work.)
For our ergodic theorem,
we do {\em not} consider weighted
ensemble time averages
over the ancestral lines of
particles that survive up to the final time
(Remark~\ref{R1}). Such
time averages can have large variances
because they do
not include contributions from particles
that are killed before the final time.
For this reason, we instead define time averages
using contributions from
the weighted ensemble at each
time (equation~\eqref{theta_T}).
Our time averages require no
particle storage and
should have smaller variances than
naive averages over surviving ancestral lines
(Section~\ref{sec:compare_avg}).
This article is organized as follows. In Section~\ref{sec:WE},
we describe weighted ensemble in more detail. In Section~\ref{sec:mainresult},
we formally
state our ergodic theorem (Theorem~\ref{thm_main}). Proofs of our main
results, including the
ergodic theorem and the unbiased property (Theorem~\ref{thm_unbiased}),
are in Section~\ref{sec:proofs}. There we also compute the
contributions to the weighted
ensemble variance
arising from mutation (Lemma~\ref{lem_mut_var}) and from selection
(Lemma~\ref{lem_sel_var}) at each time,
which we use to estimate the
$L^2$ error of our time averages (Corollary~\ref{cor_scale_var}).
In Section~\ref{sec:compare_naive},
we show
why weighted ensemble can
outperform direct Monte Carlo, and in Section~\ref{sec:compare_avg},
we explain why our time averages are better than averages over
surviving ancestral lines.
In Section~\ref{sec:counterexample}, we compare weighted ensemble against sequential Monte
Carlo with Gibbs-Boltzmann resampling. In
a numerical example, we verify our
ergodic theorem and variance scaling results, and we show that
the ergodic theorem fails for this
sequential Monte Carlo method.
\section{Weighted ensemble}\label{sec:WE}
\subsection{Detailed description}
\hskip15pt
The parent particles $\xi_t^1,\ldots,\xi_t^{N}$ and
their children $\hat{\xi}_t^1,\ldots,\hat{\xi}_t^{N}$ belong to a
common state space, and
their weights $\omega_t^1,\ldots,\omega_t^{N}$ and $\hat{\omega}_t^1,\ldots,\hat{\omega}_t^{N}$
are positive real numbers. To avoid technical
problems associated with existence
of an appropriate probability
space, we assume that the state space of the particles is a standard
Borel or ``nice'' space
(see Section 2.1.4 of~\cite{Durrett}).
At each time,
every particle is assigned a unique
bin.
Writing $\textup{bin}(\xi_t^i) = u$
to indicate that $\xi_t^i$ is in bin $u\in {\mathcal B}$, this means that
\begin{equation}\label{bin_part}
\sum_{u \in {\mathcal B}} \mathbbm{1}_{\textup{bin}(\xi_t^i) = u} = 1, \qquad i=1,\ldots,N,
\end{equation}
where $\mathbbm{1}_E$ is the indicator function of the event $E$ ($\mathbbm{1}_E = 1$ if $E$ is true, and otherwise
$\mathbbm{1}_E = 0$).
The bins are an essentially
arbitrary partition of the particles;
our only requirement is that the
bin associations $(\mathbbm{1}_{\textup{bin}(\xi_t^i) = u})^{u \in {\mathcal B},i=1,\ldots,N}$ at each time $t$ are
known before the selection step
at time $t$.
Thus ${\mathcal B}$ is simply an
abstract bin labelling set.
The weight of bin $u$ at
time $t$ is
\begin{equation}\label{omega_t}
\omega_t(u) = \sum_{i:\textup{bin}(\xi_t^i) = u} \omega_t^i.
\end{equation}
The weight of a bin without
any particles is zero.
We write $\text{par}(\hat{\xi}_t^i) = \xi_t^j$
to indicate $\xi_t^j$ is the parent of ${\hat \xi}_t^i$. This is a slight abuse of notation,
because a child always has exactly one parent, even if two parents occupy the same point in state space. Thus $\text{par}(\hat{\xi}_t^i) = \xi_t^j$ really means the particle
indexed by $i$ after the selection step at time
$t$ has parent indexed by $j$.
In particular, $\textup{par}(\hat{\xi}_t^i) = \xi_t^j$ implies $\hat{\xi}_t^i = \xi_t^j$,
but not conversely.
Recall that $C_t^i$ is the number of children of
parent $\xi_t^i$, and that $N_t(u)$
is the number of children in bin $u \in {\mathcal B}$ at time $t$. We require that the particle
allocation $N_t(u)^{u \in {\mathcal B}}$ at
time $t$ is known before the selection step at time $t$. Children are of course associated with the same bin as their parents. That is, $\textup{bin}(\hat{\xi}_t^i) = \textup{bin}(\textup{par}(\hat{\xi}_t^i))$.
We assume that each bin with a parent
must have at least
one child, and a bin with no
parents can produce no children:
\begin{align}\begin{split}\label{A0}
N_t(u)\ge 1 \text{ if }\omega_t(u)>0, \qquad N_t(u) = 0 \text{ if } \omega_t(u) = 0.
\end{split}
\end{align}
Let ${\mathcal F}_t$ and
$\hat{\mathcal F}_t$ be
the $\sigma$-algebras generated by, respectively,
\begin{align}\begin{split}\label{sig_algs}
&(\xi_s^i, \omega_s^i)_{0 \le s \le t}^{i=1,\ldots,N},N_s(u)_{0 \le s \le t}^{u \in {\mathcal B}},(\mathbbm{1}_{\textup{bin}(\xi_s^i) = u})_{0 \le s \le t}^{u \in {\mathcal B},i=1,\ldots,N}, \\
&\qquad \qquad (\hat{\xi}_s^i,\hat{\omega}_s^i)_{0 \le s \le t-1}^{i=1,\ldots,N}, (C_s^i)_{0 \le s \le t-1}^{i=1,\ldots,N}\\
&\text{and} \\
&(\xi_s^i, \omega_s^i)_{0 \le s \le t}^{i=1,\ldots,N},N_s(u)_{0 \le s \le t}^{u \in {\mathcal B}},(\mathbbm{1}_{\textup{bin}(\xi_s^i) = u})_{0 \le s \le t}^{u \in {\mathcal B},i=1,\ldots,N},\\
&\qquad \qquad (\hat{\xi}_s^i,\hat{\omega}_s^i)_{0 \le s \le t}^{i=1,\ldots,N}, (C_s^i)_{0 \le s \le t}^{i=1,\ldots,N}.
\end{split}
\end{align}
We think of ${\mathcal F}_t$ and $\hat{\mathcal F}_t$ as the information
from weighted ensemble up
to time $t$ inclusive,
before and after the selection step, respectively.
Thus,~\eqref{sig_algs} formalizes the rule that the
particle allocation $N_t(u)^{u \in {\mathcal B}}$ and bin associations $(\mathbbm{1}_{\textup{bin}(\xi_t^i) = u})^{u \in {\mathcal B},i=1,\ldots,N}$
must be known
at time $t$ before selection.
Recall that the bin associations and particle allocation are user-chosen parameters, subject of course to the
constraints above. The weighted ensemble
initial condition is also chosen by
the user.
The remaining processes in~\eqref{sig_algs} --
namely the particles, their weights,
and the number of children of each parent --
are defined in terms of the bin associations, particle allocation, and initial condition, in a manner to be described in Algorithm~\ref{alg2} below.
Below, recall that $K$ is the kernel
of the underlying Markov process.
\begin{algorithm}[Weighted ensemble]\label{alg2}
\mbox{}
\begin{itemize}
\item {\em (Initialization)} Choose an initial probability distribution $\nu$
on state space, and pick initial
particles and weights distributed as $\nu$, in
the following sense:
\vskip5pt
\noindent The initial particles
and weights satisfy
\begin{equation}\label{initialization}
\sum_{i=1}^N \omega_0^i = 1, \qquad {\mathbb E}\left[\sum_{i=1}^N \omega_0^i g(\xi_0^i)\right] = \int g\,d\nu \quad\forall\text{ bounded measurable }g.
\end{equation}
\end{itemize}
Then for $t = 0,1,2,\ldots$,
iterate the following steps:
\begin{itemize}
\item {\em (Selection)}
Each parent $\xi_t^i$ is assigned
a number $C_t^i$ of children, as follows.
\vskip5pt
\noindent For each $u \in {\mathcal B}$, conditional on ${\mathcal F}_t$, let $(C_t^i)^{i:\textup{bin}(\xi_t^i) = u}$ be
multinomial with
$N_t(u)$ trials
and event probabilities
$\omega_t^i/\omega_t(u)$. Thus for each $u \in {\mathcal B}$,
\begin{align}\begin{split}\label{def_Cti}
&{\mathbb E}\left[\left.\prod_{i:\textup{bin}(\xi_t^i) = u} \mathbbm{1}_{C_t^i = n_i}\right|{\mathcal F}_t\right] \\
&\quad= \frac{N_t(u)!}{\prod_{i:\textup{bin}(\xi_t^i) = u} n_i!}
\prod_{i:\textup{bin}(\xi_t^i) = u}\left(\frac{\omega_t^i}{\omega_t(u)}\right)^{n_i}\mathbbm{1}_{N_t(u) = \sum_{i:\textup{bin}(\xi_t^i) = u} n_i}.
\end{split}
\end{align}
Such assignments in distinct bins are
conditionally independent:
\begin{align}\begin{split}\label{Cti_indep}
&{\mathbb E}\left[\left.\prod_{i:\textup{bin}(\xi_t^i) = u}\mathbbm{1}_{C_t^i = n_i}\prod_{j:\textup{bin}(\xi_t^j) = v} \mathbbm{1}_{C_t^j = n_j}\right|{\mathcal F}_t\right] \\
& \quad ={\mathbb E}\left[\left.\prod_{i:\textup{bin}(\xi_t^i) = u}\mathbbm{1}_{C_t^i = n_i}\right|{\mathcal F}_t\right]{\mathbb E}\left[\left.\prod_{j:\textup{bin}(\xi_t^j) = v} \mathbbm{1}_{C_t^j = n_j}\right|{\mathcal F}_t\right],\qquad \text{if }u \ne v \in {\mathcal B}.
\end{split}
\end{align}
The children are defined implicitly by
\begin{equation}\label{children}
C_t^i = \#\left\{j : \textup{par}(\hat{\xi}_t^j) = \xi_t^i\right\}, \qquad i=1,\ldots,N,
\end{equation}
since the children's indices do not matter.
The children's weights are
\begin{equation}\label{weights}
{\hat \omega}_t^i = \frac{\omega_t(u)}{N_t(u)}, \qquad \textup{if bin}({\hat \xi}_t^i) = u.
\end{equation}
\item {\em (Mutation)}
Each child $\hat{\xi}_t^i$ independently evolves one time step, as follows.
\vskip5pt
\noindent Conditionally on $\hat{\mathcal F}_t$, the children evolve independently according
to $K$:
\begin{align}\begin{split}\label{particle_mut}
{\mathbb E}\left[\left.\prod_{i=1}^N
\mathbbm{1}_{\xi_{t+1}^i \in A_i}\right|\hat{\mathcal F}_t\right] = \prod_{i=1}^N K({\hat \xi}_t^i, A_i),\quad \forall\text{ measurable }A_1,\ldots, A_N,
\end{split}
\end{align}
becoming the next parents. Their weights remain the same:
\begin{equation}\label{weight_mut}
\omega_{t+1}^i = {\hat \omega}_t^i, \qquad i=1,\ldots,N.
\end{equation}
\end{itemize}
\end{algorithm}
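To make Algorithm~\ref{alg2} concrete, here is a minimal Python sketch of
the full loop, reusing the \texttt{select} function sketched above and
assuming, for simplicity, a one-dimensional real state space. The mutation
sampler \texttt{step}, the binning function \texttt{bin\_of} and the
observable \texttt{f} are user-supplied placeholders; the even allocation
of children over occupied bins is purely illustrative and is not required
by the algorithm.
\begin{verbatim}
import numpy as np

def weighted_ensemble(x0, step, bin_of, f, N, T, seed=0):
    """Run Algorithm 1 and return the time average theta_T of f.

    x0     : initial state (all particles start here with weight 1/N,
             which realizes the initial distribution nu = delta_{x0})
    step   : step(x, rng) -> one draw from K(x, .)
    bin_of : bin_of(x) -> integer bin label
    f      : real-valued observable
    """
    rng = np.random.default_rng(seed)
    xs = np.array([x0] * N, dtype=float)
    ws = np.full(N, 1.0 / N)
    theta = 0.0
    for _ in range(T):
        # accumulate before selection: theta_T uses (xi_t, omega_t)
        theta += np.dot(ws, [f(x) for x in xs]) / T
        # selection: distribute N children evenly over occupied bins
        bins = np.array([bin_of(x) for x in xs])
        occupied = np.unique(bins)
        base, extra = divmod(N, len(occupied))
        alloc = {u: base + (k < extra) for k, u in enumerate(occupied)}
        xs, ws = select(xs, ws, bins, alloc, rng)
        # mutation: each child evolves independently according to K
        xs = np.array([step(x, rng) for x in xs])
    return theta
\end{verbatim}
The running average is updated before each selection step, so the returned
value is exactly the time average $\theta_T$ of~\eqref{theta_T} below,
built from the pre-selection particles and weights $(\xi_t^i,\omega_t^i)$.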
Some remarks are in order. It
follows from~\eqref{A0} and Algorithm~\ref{alg2}
that the total particle weight is preserved over time.
Below, we will also assume that the total number of particles
is constant over time. Thus,
\begin{equation}\label{fix_tot}
\sum_{i=1}^N \omega_t^i = \sum_{u \in {\mathcal B}} \omega_t(u) = 1, \qquad \sum_{u \in {\mathcal B}} N_t(u) = N.
\end{equation}
Actually, the restriction that
the total number of particles is
$N$ is not even needed for
our ergodic theorem. We include
this assumption anyway, for simplicity and
because it is usually enforced in practice.
The indices of the children are unimportant,
so~\eqref{children}, which gives the number of children of each parent, suffices to define them.
For the ergodic theorem (Theorem~\ref{thm_main}), the details
of the initialization step
do not matter, though
they are needed for the unbiased
property (Theorem~\ref{thm_unbiased}).
In the selection step, we use
multinomial resampling to define
$(C_t^i)^{i=1,\ldots,N}$.
Multinomial resampling is not
often used in practice
due to its large variance.
We present the algorithm
with multinomial resampling mainly
because it simplifies our mathematical
presentation and proofs. Our
results are not limited
to this setup, however: in Section~\ref{sec:remarks} below
we consider residual resampling (Corollary~\ref{cor_erg}). Residual resampling
is a practical method with variance
on par with other commonly used resampling techniques
like stratified and systematic
resampling~\cite{cappe,chopin,douc1,webber2}.
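For comparison, here is a hedged Python sketch of residual resampling
within a single bin, in one standard formulation (details vary slightly
across the references): each parent first receives
$\lfloor N_t(u)\,\omega_t^i/\omega_t(u)\rfloor$ children deterministically,
and the remaining children are drawn multinomially from the fractional
residuals.
\begin{verbatim}
import numpy as np

def residual_counts(p, n, rng):
    """Residual resampling: number of children per parent in one bin.

    p : (m,) parent probabilities omega_t^i / omega_t(u), summing to 1
    n : total number of children N_t(u) for the bin
    """
    expected = n * np.asarray(p)
    counts = np.floor(expected).astype(int)   # deterministic part
    r = n - counts.sum()                      # children still to assign
    if r > 0:
        resid = expected - counts             # fractional residuals
        counts += rng.multinomial(r, resid / resid.sum())
    return counts
\end{verbatim}
The mean number of children of parent $i$ is still
$N_t(u)\omega_t^i/\omega_t(u)$, just as under multinomial resampling, so
the weight update~\eqref{weights} and the unbiased property are
unaffected; only the selection variance changes.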
We did not explain
how we choose the particle allocation $N_t(u)_{t \ge 0}^{u \in {\mathcal B}}$, besides requiring
that~\eqref{A0} holds and $\sum_{u \in {\mathcal B}} N_t(u) = N$. The
ergodic theorem holds for any particle
allocation satisfying~\eqref{A0}, though
of course a reduction in variance
compared to direct Monte
Carlo requires conscientious choices. A good particle allocation can lead to $O(1/N)$
scaling of the variance, with a better
prefactor than direct Monte Carlo; see
Sections~\ref{sec:compare_naive}-\ref{sec:compare_avg}
and Remark~\ref{rmk_var_scale}.
In applications of weighted ensemble, $N$ is usually only moderately large.
This makes the question of particle
allocation important.
Traditionally, the particles are
evenly distributed throughout the
occupied bins. This can
work well, but is
far from optimal for most bins.
We explain how to optimize
the bins and the particle allocation
in our companion paper~\cite{aristoff2018steady}.
We expect that, under
suitable conditions on the
allocation, there is a law of large numbers
and propagation of
chaos~\cite{del2004feynman}
as
$N \to \infty$, but
we leave this to future work.
We also did not explicitly
define the bin associations
$(\mathbbm{1}_{\textup{bin}(\xi_t^i) = u})_{t \ge 0}^{u \in {\mathcal B},i=1,\ldots,N}$.
To make things clearer,
we sketch some
possibilities.
Let ${\mathcal S}$ be a
collection of disjoint
sets whose union is all
of state space. We could
define bins using this partition:
simply let ${\mathcal B} = {\mathcal S}$ and
define $\textup{bin}(\xi) = u
$ when $\xi \in u$. This is the
most common way to define
weighted ensemble bins. We emphasize,
however, that {\em the bins are
allowed to change in time and
need not be associated with any
partition of state space.}
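For concreteness, a minimal sketch of the partition-based choice just
described, assuming (purely for illustration) the one-dimensional state
space $[0,1)$ split into equal subintervals:
\begin{verbatim}
def bin_of(x, n_bins=10):
    # bins from a fixed uniform partition of [0, 1)
    return min(int(x * n_bins), n_bins - 1)
\end{verbatim}
This is the form of \texttt{bin\_of} assumed in the sketch following
Algorithm~\ref{alg2}; time-dependent or non-partition-based bins would
simply replace this function, leaving the rest of the loop unchanged.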
We could also
pick bins for the
merging and splitting procedure
described by Huber in~\cite{huber1996weighted}.
In this case, there are two types of bins:
bins $u$ comprised of some
particles we want to merge together
(for instance, a group of
particles with small weights
in some set in ${\mathcal S}$), and
bins $v$ consisting of a particle
we want to split apart (for example, a particle with the largest weight in
some set in ${\mathcal S}$).
Merging and splitting then corresponds to $N_t(u) = 1$
for the first type of bin and
$N_t(v) \ge 2$ for the second type.
In this formulation, Huber's merging and splitting step corresponds
to multiple
selection steps of Algorithm~\ref{alg2}.
But since
the number of such selection steps
before each mutation step is bounded in $T$, the arguments below that prove the ergodic
theorem are still valid.
\section{Ergodic theorem}\label{sec:mainresult}
\hskip15pt Let $f$ be a bounded measurable real-valued function on state space. Pick
a deterministic time horizon $T \ge 1$, and
consider the time average
\begin{equation}\label{theta_T}
\theta_T = \frac{1}{T}\sum_{t = 0}^{T-1} \sum_{i=1}^N \omega_t^i f(\xi_t^i).
\end{equation}
Since $f$ is bounded and $\sum_{i=1}^N \omega_t^i = 1$ for each $t \ge 0$, we have
\begin{equation}\label{theta_T_bdd}
|\theta_T|\le \sup |f|.
\end{equation}
\begin{assumption}\label{A1}
There is $c>0$, $\lambda \in [0,1)$ and a probability measure $\mu$ so that
$$\|K^t(\xi,\cdot) - \mu\|_{TV} \le c \lambda^t, \qquad \text{for all }\xi \text{ and all }t\ge 0.$$
That is, $K$ is uniformly geometrically ergodic
with respect to $\mu$~\cite{douc}.
\end{assumption}
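Assumption~\ref{A1} holds, for instance, under a Doeblin-type minorization
condition: if there exist $\varepsilon>0$ and a probability measure
$\varphi$ with $K(\xi,\cdot) \ge \varepsilon\,\varphi(\cdot)$ for all
$\xi$, then $K$ is uniformly geometrically ergodic with
$\lambda = 1-\varepsilon$; see {\em e.g.}~\cite{douc}. In particular, any
Markov chain on a finite state space whose transition matrix has all
entries strictly positive satisfies Assumption~\ref{A1}.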
\begin{theorem}[Ergodic theorem]\label{thm_main}
If Assumption~\ref{A1} holds, then
\begin{equation*}
\lim_{T \to\infty} \theta_T \stackrel{a.s.}{=}\int f\,d\mu.
\end{equation*}
\end{theorem}
\begin{remark}\label{sel_sensitive}
The ergodic theorem fails for
many sequential
Monte Carlo methods,
including the ones
described in~\cite{del2005genealogical}
and in our earlier paper~\cite{aristoff2016analysis}.
This is despite
these methods being unbiased
(see Theorem~\ref{thm_unbiased} below).
Ergodicity is
sensitive to the details of the selection step, and is not automatically attained with any unbiased method. To illustrate this, in Section~\ref{sec:counterexample}
we examine how the ergodic theorem
fails for sequential
Monte Carlo
with resampling based on Gibbs-Boltzmann potentials~\cite{del2005genealogical}.
\end{remark}
The time average in~\eqref{theta_T}
is {\em not}
a time average
over the ancestral lines
of particles that survive
up to the final time
(see Remark~\ref{R1}).
Instead, it is an average over
the weighted ensemble at each time.
This form of the time average
has favorable variance properties;
see Section~\ref{sec:compare_avg} below.
As~\eqref{theta_T} does not require storage of
children or parents, it
can be computed more
efficiently than
a time average over ancestral
lines.
\begin{remark}\label{R1}
We could also consider time averages over ancestral lines
that survive up to time $T-1$. Indeed,
we can consider $\xi_t^i$ as a path-particle
\begin{equation*}
\xi_t^i = (\xi_{t,0}^i, \xi_{t,1}^i,\ldots,\xi_{t,t}^i),
\end{equation*}
where the vector is its ancestral history,
and define time averages using
\begin{equation}\label{theta_T_tilde}
\tilde{\theta}_T = \frac{1}{T}\sum_{t=0}^{T-1}\sum_{i=1}^N \omega_{T-1}^i f(\xi_{T-1,t}^i).
\end{equation}
It should be possible
to prove, with appropriate assumptions, that $\tilde{\theta}_T$
converges to $\int f\,d\mu$ with probability $1$, though we do not investigate this here.
This result {\em would not} carry
over to sequential Monte Carlo with
Gibbs-Boltzmann resampling, however;
see Section~\ref{sec:counterexample}
for discussion.
\end{remark}
\section{Variance analysis and results}\label{sec:proofs}
\hskip15pt This section is organized as follows.
In Section~\ref{sec:pfs1}, we
compute the means and covariances
corresponding to the selection
and mutation steps of weighted ensemble,
by filtering through the
$\sigma$-algebras
${\mathcal F}_t$ and $\hat{\mathcal F}_t$.
The formulas for the means lead to the unbiased
property (Theorem~\ref{thm_unbiased}).
In Section~\ref{sec:pfs2}, we
introduce a Doob decomposition
to estimate the variance of $\theta_T$ (Lemma~\ref{lem_bdd_var}) and the $L^2$ error
(Corollary~\ref{cor_scale_var}).
In Section~\ref{sec:pfs3},
we use these variance estimates to
prove our ergodic theorem
(Theorem~\ref{thm_main}).
In Section~\ref{sec:remarks}
we sketch some generalizations
of our results, including an
extension of our ergodic
theorem and variance formulas
to residual resampling (Proposition~\ref{prop_resid} and Corollary~\ref{cor_erg}).
Below, some of our computations concerning
the mutation mean and variance,
as well as the Doob decomposition,
can be
found in the same or similar
form in~\cite{aristoff2016analysis}.
The results
below concerning the selection
means and covariances, and
scaling with respect to $T$ of the overall variance,
do not appear in~\cite{aristoff2016analysis}. Moreover,
our formulas for the
weighted ensemble selection variance (Lemma~\ref{lem_sel_var} and
Proposition~\ref{prop_resid})
are not direct consequences
of
analogous results in~\cite{aristoff2016analysis}.
As discussed above, an ergodic
theorem will hold only for
a carefully designed selection
step. We emphasize that the selection
step in Algorithm~\ref{alg2}
is different from the one
described in~\cite{aristoff2016analysis}.
In particular the algorithm
in~\cite{aristoff2016analysis}, which is
appropriate for the rare event setting
and not long-time
sampling, fails to satisfy an ergodic
theorem.
Our proofs are different from the usual
ones in sequential Monte Carlo, though
they do rely on the same basic telescopic
decomposition of the variance~\cite{del2004feynman}.
In sequential Monte Carlo,
the classical analyses (see {\em e.g.}
the textbook~\cite{del2004feynman}
and the article~\cite{del2005genealogical})
are based on
framing the particles
as empirical approximations
of an underlying (usually continuous)
measure, informally corresponding to $N \to \infty$.
Their weights, if they are needed, are
usually then handled at a later stage.
By contrast, we treat the
particles together with their weights directly,
at finite $N$,
without appealing to the
underlying continuous
measures. As a consequence, we believe our
analyses are more straightforward
and intuitive. In
our companion papers~\cite{aristoff2016analysis,aristoff2018steady}
we use this perspective to directly
minimize variance for finite
$N$, instead
of the usual tactic of
minimizing the
asymptotic variance as $N \to \infty$~\cite{chraibi2018optimal}.
\subsection{One step means and variances}\label{sec:pfs1}
Below, $g$ denotes an arbitrary
bounded measurable real-valued function on
state space, and $c$ denotes a
positive constant whose value
can change without
explicit mention. To make
notations more convenient,
we adopt the convention that
sums over the empty set equal zero,
and $0/0 = 0$. For example, if $\omega_t(u) = N_t(u) = 0$, then $\omega_t(u)/N_t(u) = 0$.
\begin{lemma}\label{lem1}
The one-step mutation mean for a single particle is
\begin{equation*}
{\mathbb E}\left[\left.\omega_{t+1}^i g(\xi_{t+1}^i)\right|\hat{\mathcal F}_t\right] = \hat{\omega}_t^i Kg(\hat{\xi}_t^i).
\end{equation*}
\end{lemma}
\begin{proof}
From~\eqref{particle_mut}, ${\mathbb E}[g(\xi_{t+1}^i)|\hat{\mathcal F}_t] = Kg(\hat{\xi}_t^i)$. Thus by~\eqref{weight_mut},
\begin{align*}
{\mathbb E}\left[\left.\omega_{t+1}^i g(\xi_{t+1}^i)\right|\hat{\mathcal F}_t\right] = \hat{\omega}_t^i {\mathbb E}\left[\left.g(\xi_{t+1}^i)\right|\hat{\mathcal F}_t\right] = \hat{\omega}_t^i Kg(\hat{\xi}_{t}^i).
\end{align*}
\end{proof}
\begin{lemma}\label{lem2}
The one-step mutation covariance for two particles is given by
\begin{equation*}
{\mathbb E}\left[\left.
\omega_{t+1}^i \omega_{t+1}^j g(\xi_{t+1}^i)g(\xi_{t+1}^j)\right|\hat{\mathcal F}_t\right] =
\begin{cases} \hat{\omega}_t^i \hat{\omega}_t^j Kg(\hat{\xi}_t^i)Kg(\hat{\xi}_t^j), & i \ne j \\
\left(\hat{\omega}_t^i\right)^2Kg^2(\hat{\xi}_t^i), & i = j\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
From~\eqref{particle_mut} and~\eqref{weight_mut},
\begin{align*}
{\mathbb E}\left[\left.
\omega_{t+1}^i \omega_{t+1}^j g(\xi_{t+1}^i)g(\xi_{t+1}^j)\right|\hat{\mathcal F}_t\right] &= \hat{\omega}_t^i \hat{\omega}_t^j{\mathbb E}\left[\left.
g(\xi_{t+1}^i)g(\xi_{t+1}^j)\right|\hat{\mathcal F}_t\right] \\
&=
\begin{cases} \hat{\omega}_t^i \hat{\omega}_t^j Kg(\hat{\xi}_t^i)Kg(\hat{\xi}_t^j), & i \ne j \\
\left(\hat{\omega}_t^i\right)^2Kg^2(\hat{\xi}_t^i), & i = j\end{cases}.
\end{align*}
\end{proof}
\begin{lemma}\label{lem3}
The one-step selection mean in bin $u$ is
\begin{equation*}
{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u}\hat{\omega}_t^i g(\hat{\xi}_t^i)\right|{\mathcal F}_t\right] =
\sum_{i:\textup{bin}(\xi_t^i) = u} \omega_t^i g(\xi_t^i).
\end{equation*}
\end{lemma}
\begin{proof}
By~\eqref{def_Cti}, ${\mathbb E}[C_t^i|{\mathcal F}_t] = N_t(u)\omega_t^i/\omega_t(u)$ if $\textup{bin}(\xi_t^i) = u$. So by~\eqref{children} and~\eqref{weights},
\begin{align*}
{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u}\hat{\omega}_t^i g(\hat{\xi}_t^i)\right|{\mathcal F}_t\right]
&= \sum_{i:\textup{bin}(\xi_t^i) = u}{\mathbb E}\left[\left.\sum_{j:\textup{par}(\hat{\xi}_t^j) = \xi_t^i}
\hat{\omega}_t^j g(\hat{\xi}_t^j)\right|{\mathcal F}_t\right] \\
&= \sum_{i:\textup{bin}(\xi_t^i) = u}\frac{\omega_t(u)}{N_t(u)}g(\xi_t^i){\mathbb E}\left[\left.C_t^i \right|{\mathcal F}_t\right] \\
&= \sum_{i:\textup{bin}(\xi_t^i) = u}\omega_t^i g(\xi_t^i).
\end{align*}
\end{proof}
\begin{lemma}\label{lem4}
The one-step selection covariance in bins $u,v$ is given by
\begin{align*}
&{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u}\sum_{j:\textup{bin}(\hat{\xi}_t^j) = v}\hat{\omega}_t^i\hat{\omega}_t^j g(\hat{\xi}_t^i)g(\hat{\xi}_t^j)
\right|{\mathcal F}_t\right] \\
&\quad= \begin{dcases} \sum_{i:\textup{bin}({\xi}_t^i) = u}\sum_{j:\textup{bin}({\xi}_t^j) = v} \omega_t^i \omega_t^j g(\xi_t^i)g(\xi_t^j), & u \ne v \\
\left(1-\frac{1}{N_t(u)}\right) \left(\sum_{i:\textup{bin}(\xi_t^i) = u} \omega_t^i g(\xi_t^i)\right)^2 \\
\qquad\qquad + \sum_{i:\textup{bin}(\xi_t^i) = u}\frac{\omega_t(u)\omega_t^i}{N_t(u)}g(\xi_t^i)^2, & u = v
\end{dcases}.
\end{align*}
\end{lemma}
\begin{proof}
By~\eqref{def_Cti} and~\eqref{Cti_indep}, if $\textup{bin}(\xi_t^i) = u$ and $\textup{bin}(\xi_t^j) = v$, with $u \ne v$, then
\begin{equation}\label{cov_diffu}
{\mathbb E}[C_t^iC_t^j|{\mathcal F}_t] = {\mathbb E}[C_t^i|{\mathcal F}_t]{\mathbb E}[C_t^j|{\mathcal F}_t] = \frac{N_t(u)N_t(v)\omega_t^i\omega_t^j}{\omega_t(u)\omega_t(v)},
\end{equation}
while if $\textup{bin}(\xi_t^i) = \textup{bin}(\xi_t^j) = u$, then
\begin{equation}\label{cov_sameu}
{\mathbb E}[C_t^i C_t^j|{\mathcal F}_t] = \left(\frac{N_t(u)}{\omega_t(u)}\right)^2\omega_t^i\omega_t^j\left(1 - \frac{1}{N_t(u)}\right) + \mathbbm{1}_{i=j}\frac{N_t(u) \omega_t^i}{\omega_t(u)}.
\end{equation}
Meanwhile for any $u,v$, by~\eqref{weights},
\begin{align*}
&{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u}\sum_{j:\textup{bin}(\hat{\xi}_t^j) = v}\hat{\omega}_t^i\hat{\omega}_t^j g(\hat{\xi}_t^i)g(\hat{\xi}_t^j)
\right|{\mathcal F}_t\right] \\
&=
\sum_{i:\textup{bin}({\xi}_t^i) = u}\sum_{j:\textup{bin}({\xi}_t^j) = v}{\mathbb E}\left[\left.
\sum_{k: \textup{par}(\hat{\xi}_t^k) =\xi_t^i}\sum_{\ell: \textup{par}(\hat{\xi}_t^\ell) =\xi_t^j}
{\hat \omega}_t^k {\hat \omega}_t^\ell g(\hat{\xi}_t^k)g(\hat{\xi}_t^\ell) \right|{\mathcal F}_t\right] \\
&=
\sum_{i:\textup{bin}({\xi}_t^i) = u}\sum_{j:\textup{bin}({\xi}_t^j) = v}
\frac{\omega_t(u)\omega_t(v)}{N_t(u)N_t(v)}g(\xi_t^i)g(\xi_t^j){\mathbb E}\left[\left.C_t^i C_t^j \right|{\mathcal F}_t\right],
\end{align*}
which by~\eqref{cov_diffu} leads to the claimed formula when
$u \ne v$. When $u=v$ we get
\begin{align*}
&{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u}\sum_{j:\textup{bin}(\hat{\xi}_t^j) = v}\hat{\omega}_t^i\hat{\omega}_t^j g(\hat{\xi}_t^i)g(\hat{\xi}_t^j)
\right|{\mathcal F}_t\right] \\
&=
\sum_{i,j:\textup{bin}({\xi}_t^i)=\textup{bin}({\xi}_t^j) = u}
\left(\frac{\omega_t(u)}{N_t(u)}\right)^2 g(\xi_t^i)g(\xi_t^j){\mathbb E}\left[\left.C_t^i C_t^j \right|{\mathcal F}_t\right]
\end{align*}
which by~\eqref{cov_sameu} again leads to the stated formula.
\end{proof}
\begin{lemma}\label{lem5}
The one-step mean of the weighted ensemble is
\begin{equation*}
{\mathbb E}\left[\left.\sum_{i=1}^N \omega_{t+1}^i g(\xi_{t+1}^i)\right|{\mathcal F}_t\right] = \sum_{i=1}^N \omega_t^i Kg(\xi_t^i).
\end{equation*}
\end{lemma}
\begin{proof}
By~\eqref{bin_part} and Lemmas~\ref{lem1} and~\ref{lem3},
\begin{align*}
{\mathbb E}\left[\left.\sum_{i=1}^N \omega_{t+1}^i g(\xi_{t+1}^i)\right|{\mathcal F}_t\right] &=
{\mathbb E}\left[\left.{\mathbb E}\left[\left.\sum_{i=1}^N \omega_{t+1}^i g(\xi_{t+1}^i)\right|\hat{\mathcal F}_t\right]\right|{\mathcal F}_t\right] \\
&= {\mathbb E}\left[\left.\sum_{i=1}^N \hat{\omega}_t^i Kg(\hat{\xi}_{t}^i)\right|{\mathcal F}_t\right] \\
&= \sum_{u \in {\mathcal B}}
{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u} \hat{\omega}_t^i Kg(\hat{\xi}_{t}^i)\right|{\mathcal F}_t\right] \\
&= \sum_{u \in {\mathcal B}}
\sum_{i:\textup{bin}({\xi}_t^i) = u} {\omega}_t^i Kg({\xi}_{t}^i)
= \sum_{i=1}^N \omega_t^i Kg(\xi_t^i).
\end{align*}
\end{proof}
\begin{theorem}[Unbiased property]\label{thm_unbiased}
Let $(\xi_t)_{t \ge 0}$ be a Markov chain
with kernel $K$ and initial distribution
$\xi_0 \sim \nu$. Weighted ensemble is unbiased in the sense that for each time $T \ge 0$,
\begin{equation*}
{\mathbb E}\left[\sum_{i=1}^N \omega_T^i g(\xi_T^i)\right] = {\mathbb E}[g(\xi_T)].
\end{equation*}
\end{theorem}
\begin{proof}
Equation~\eqref{initialization} shows the result holds when $T = 0$. So fix a time $T > 0$ and consider the (Doob) ${\mathcal F}_t$-martingale
$(M_t)_{t \ge 0}$ defined by
\begin{equation}\label{Mt}
M_t = {\mathbb E}\left[\left.\sum_{i=1}^N \omega_T^i g(\xi_T^i)\right|{\mathcal F}_t\right].
\end{equation}
By repeated application of the tower property
and Lemma~\ref{lem5},
\begin{equation*}
M_0 = {\mathbb E}\left[\left.{\mathbb E}\left[\left.\ldots {\mathbb E}\left[\left.\sum_{i=1}^N\omega_T^i g(\xi_T^i)\right|{\mathcal F}_{T-1}\right]\ldots\right|{\mathcal F}_{1}\right]\right|{\mathcal F}_0\right]
= \sum_{i=1}^N \omega_0^i K^{T}g(\xi_0^i).
\end{equation*}
Since $(M_t)_{0 \le t \le T}$ is an ${\mathcal F}_t$-martingale, ${\mathbb E}[M_{T}] = {\mathbb E}[M_0]$ and thus
\begin{align*}
{\mathbb E}\left[\sum_{i=1}^N \omega_T^i g(\xi_T^i)\right] = {\mathbb E}[M_{T}]
&= {\mathbb E}[M_0] \\
&= {\mathbb E}\left[\sum_{i=1}^N \omega_0^i K^{T}g(\xi_0^i)\right] \\
&= \int K^Tg(x)\nu(dx) = {\mathbb E}[g(\xi_T)],
\end{align*}
where the second-to-last equality above uses~\eqref{initialization}.
\end{proof}
Theorem~\ref{thm_unbiased} shows that
the weighted ensemble is statistically
exact:
at each time $T$, the weighted
ensemble
has the same distribution as
a Markov chain driven
by the underlying kernel $K$. Note
that if we consider $\xi_T$ as a
path-particle, $\xi_T = (\xi_{T,0},\xi_{T,1},\ldots,\xi_{T,T})$, where the vector is
the particle's ancestral line, then Theorem~\ref{thm_unbiased} shows that
weighted ensemble is statistically
exact for functions on {\em paths} and
not just functions at
fixed times~\cite{aristoff2016analysis,zhang2010weighted}.
\subsection{Doob decomposition}\label{sec:pfs2}
Recall that $\theta_T = \frac{1}{T}\sum_{t = 0}^{T-1} \sum_{i=1}^N \omega_t^i f(\xi_t^i)$.
For $t \ge 0$ define
\begin{equation}\label{doob_martingale}
D_t = {\mathbb E}[\theta_T|{\mathcal F_t}], \qquad \hat{D}_t = {\mathbb E}[\theta_T|\hat{\mathcal F_t}].
\end{equation}
Define also
\begin{equation}\label{ht}
h_t(\xi) = \sum_{s=0}^{T-t-1} K^s f(\xi).
\end{equation}
Of course $h_t$ and $D_t$ depend on $T$, but
we leave this dependence implicit to avoid more cumbersome notation.
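Note from~\eqref{ht} that $h_{T-1} = f$, that $h_T \equiv 0$ by our empty
sum convention, and that
\begin{equation*}
h_t = f + Kh_{t+1}, \qquad 0 \le t \le T-1,
\end{equation*}
a recursion we will use repeatedly below.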
\begin{lemma}\label{lem6}
The Doob martingales in~\eqref{doob_martingale}
can be expressed as
\begin{align}
D_t &= \frac{1}{T}\sum_{i=1}^N \left(\omega_t^i h_{t}(\xi_t^i) + \sum_{s=0}^{t-1} \omega_s^i f(\xi_s^i)\right), \label{Dt1}\\
\hat{D}_t &= \frac{1}{T}\sum_{i=1}^N \left(\hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i) + \sum_{s=0}^{t} \omega_s^i f(\xi_s^i)\right).\label{Dt2}
\end{align}
\end{lemma}
\begin{proof}
For the first equation~\eqref{Dt1}, we have
\begin{align*}
D_t = \frac{1}{T}{\mathbb E}\left[\left.\sum_{t = 0}^{T-1} \sum_{i=1}^N \omega_t^i f(\xi_t^i)\right|{\mathcal F}_t\right] &= \frac{1}{T}\sum_{s=t}^{T-1} {\mathbb E}\left[\left.\sum_{i=1}^N\omega_s^if(\xi_s^i)\right|{\mathcal F}_t\right] \\
&\qquad \quad+ \frac{1}{T}\sum_{s=0}^{t-1}\sum_{i=1}^N \omega_s^if(\xi_s^i).
\end{align*}
Repeated application of Lemma~\ref{lem5} and the tower property shows that for $s \ge t$,
\begin{align*}
{\mathbb E}\left[\left.\sum_{i=1}^N\omega_s^if(\xi_s^i)\right|{\mathcal F}_t\right] &={\mathbb E}\left[\left.{\mathbb E}\left[\left.\ldots {\mathbb E}\left[\left.\sum_{i=1}^N\omega_s^if(\xi_s^i)\right|{\mathcal F}_{s-1}\right]\ldots\right|{\mathcal F}_{t+1}\right]\right|{\mathcal F}_t\right] \\
&= \sum_{i=1}^N \omega_t^i K^{s-t}f(\xi_t^i).
\end{align*}
Combining the last two displays leads to~\eqref{Dt1}. Similarly, for~\eqref{Dt2},
\begin{align*}
\hat{D}_t = \frac{1}{T}{\mathbb E}\left[\left.\sum_{t = 0}^{T-1} \sum_{i=1}^N \omega_t^i f(\xi_t^i)\right|\hat{\mathcal F}_t\right] &= \frac{1}{T}\sum_{s=t+1}^{T-1} {\mathbb E}\left[\left.\sum_{i=1}^N \omega_s^if(\xi_s^i) \right|\hat{\mathcal F}_t\right] \\
&\qquad \quad +\frac{1}{T} \sum_{s=0}^{t}\sum_{i=1}^N {\omega}_s^i f({\xi}_s^i).
\end{align*}
Repeatedly using Lemma~\ref{lem5} with the tower property gives, for $s>t$,
\begin{align*}
{\mathbb E}\left[\left.\sum_{i=1}^N\omega_s^if(\xi_s^i)\right|\hat{\mathcal F}_t\right] &={\mathbb E}\left[\left.{\mathbb E}\left[\left.\ldots {\mathbb E}\left[\left.\sum_{i=1}^N\omega_s^if(\xi_s^i)\right|{\mathcal F}_{s-1}\right]\ldots\right|{\mathcal F}_{t+1}\right]\right|\hat{\mathcal F}_t\right] \\
&= {\mathbb E}\left[\left.\sum_{i=1}^N \omega_{t+1}^i K^{s-t-1}f(\xi_{t+1}^i)\right|\hat{\mathcal F}_t\right] \\
&= \sum_{i=1}^N \hat{\omega}_t^i K^{s-t} f(\hat{\xi}_t^i),
\end{align*}
where the last equality uses Lemma~\ref{lem1}.
The last two displays imply~\eqref{Dt2}.
\end{proof}
\begin{lemma}\label{lem_doob}
There is a ${\mathcal F}_{T-1}$-measurable random variable $R_T$ with ${\mathbb E}[R_T] = 0$ such that
\begin{align}
&\theta_T^2 - {\mathbb E}[\theta_T]^2\label{variances0} \\
&= R_T +
\left(D_0 - {\mathbb E}[\theta_T]\right)^2
\label{variances1} \\
&\quad+ \sum_{t=1}^{T-1}\left({\mathbb E}\left[\left.\left(D_t-\hat{D}_{t-1}\right)^2\right|\hat{\mathcal F}_{t-1}\right] +
{\mathbb E}\left[\left.\left(\hat{D}_{t-1}-{D}_{t-1}\right)^2\right|{\mathcal F}_{t-1}\right]\right). \label{variances2}
\end{align}
\end{lemma}
\begin{proof}
Because of~\eqref{theta_T_bdd},
$(D_t)_{0 \le t \le T}$ and $(\hat{D}_t)_{0 \le t \le T}$ are square integrable.
By
Doob decomposing $$D_0^2, \hat{D}_0^2, D_1^2,\hat{D}_1^2,\ldots$$ with respect to the filtration
$${\mathcal F}_0,\,\hat{\mathcal F}_0,\,{\mathcal F}_1,\,\hat{\mathcal F}_1,\ldots,$$
we see that
\begin{equation}\label{exp_doob}
D_t^2 = D_0^2 + A_t + B_t,
\end{equation}
where $B_t$ is an ${\mathcal F}_t$-martingale
with ${\mathbb E}[B_0] = 0$, and the predictable part $A_t$ is
\begin{align}\begin{split}\label{At}
A_t &= \sum_{s=1}^{t}\left( {\mathbb E}\left[\left.D_{s}^2\right|\hat{\mathcal F}_{s-1}\right]-\hat{D}_{s-1}^2 + {\mathbb E}\left[\left.\hat{D}_{s-1}^2\right|{\mathcal F}_{s-1}\right]-D_{s-1}^2\right)\\
&= \sum_{s=1}^{t}\left( {\mathbb E}\left[\left.(D_{s}-\hat{D}_{s-1})^2\right|\hat{\mathcal F}_{s-1}\right] + {\mathbb E}\left[\left.(\hat{D}_{s-1}-D_{s-1})^2\right|{\mathcal F}_{s-1}\right]\right);
\end{split}
\end{align}
see e.g.~\cite{grimmett}. Letting $t = T-1$ in~\eqref{exp_doob}, and using the fact that $D_{T-1} = \theta_T$,
\begin{align}\begin{split}\label{before_exp}
\theta_T^2 - {\mathbb E}[\theta_T]^2 &= D_0^2 - {\mathbb E}[\theta_T]^2 + A_{T-1} + B_{T-1}\\
&= \left(D_0-{\mathbb E}[\theta_T]\right)^2 + A_{T-1} + B_{T-1} + 2(D_0 {\mathbb E}[\theta_T]-{\mathbb E}[\theta_T]^2).
\end{split}
\end{align}
Define $R_{T} = B_{T-1} + 2(D_0 {\mathbb E}[\theta_T]-{\mathbb E}[\theta_T]^2)$. Then $R_{T}$ is ${\mathcal F}_{T-1}$-measurable, and ${\mathbb E}[R_{T}] = 0$ since ${\mathbb E}[D_0] = {\mathbb E}[D_{T-1}]= {\mathbb E}[\theta_T]$
and ${\mathbb E}[B_{T-1}]= {\mathbb E}[B_0] = 0$. In
light of~\eqref{At}-\eqref{before_exp} this completes
the proof.
\end{proof}
In Lemma~\ref{lem_doob}, by taking expectations in~\eqref{variances0}-\eqref{variances2}, we obtain an
expression for $\text{Var}(\theta_T)$.
Lemma~\ref{lem_doob} decomposes this
variance into a term from the initial condition along with terms from each mutation
and selection step in Algorithm~\ref{alg2}, as follows. First, $R_T$ may be ignored as it has mean zero. The term $(D_0 - {\mathbb E}[\theta_T])^2$ in~\eqref{variances1} gives the contribution to the variance from the initial condition of Algorithm~\ref{alg2}. The terms in~\eqref{variances2} yield the
contributions from each time $t$
in Algorithm~\ref{alg2}.
We refer to the summands in~\eqref{variances2} as the {\em mutation variance} and {\em selection variance}, since they
correspond to the variances from the
mutation and selection step of
Algorithm~\ref{alg2}, respectively.
\begin{lemma}\label{lem_mut_var}
The mutation variance
at time $t$ is
\begin{equation}\label{eq_mutvar}
{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|\hat{\mathcal F}_{t}\right] = \frac{1}{T^2}\sum_{i=1}^N \left(\hat{\omega}_t^i\right)^2\left[Kh_{t+1}^2(\hat{\xi}_t^i) - (Kh_{t+1}(\hat{\xi}_t^i))^2\right].
\end{equation}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem1},
\begin{equation*}
{\mathbb E}\left[\left.\omega_{t+1}^i h_{t+1}(\xi_{t+1}^i)\hat{\omega}_t^j Kh_{t+1}(\hat{\xi}_t^j)\right|\hat{\mathcal F}_t\right] = \hat{\omega}_t^i\hat{\omega}_t^j Kh_{t+1}(\hat{\xi}_t^i)
Kh_{t+1}(\hat{\xi}_t^j).
\end{equation*}
Using this and Lemma~\ref{lem6},
\begin{align*}
&{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_t\right)^2\right|{\mathcal F}_t\right] \\
&= \frac{1}{T^2} {\mathbb E}\left[\left.\left(\sum_{i=1}^N \omega_{t+1}^i h_{t+1}(\xi_{t+1}^i) - \sum_{i=1}^N\hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i)\right)^2\right|\hat{\mathcal F}_t\right]\\
&=\frac{1}{T^2} {\mathbb E}\left[\left.\left(\sum_{i=1}^N \omega_{t+1}^i h_{t+1}(\xi_{t+1}^i)\right)^2 \right|\hat{\mathcal F}_t\right]- \frac{1}{T^2}\left(\sum_{i=1}^N\hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i)\right)^2.
\end{align*}
The result now follows from Lemma~\ref{lem2}.
\end{proof}
The term in square brackets on the right hand side of~\eqref{eq_mutvar} can be written
$$\textup{Var}_{K(\hat{\xi}_t^i,\cdot)}h_{t+1} = Kh_{t+1}^2(\hat{\xi}_t^i) - (Kh_{t+1}(\hat{\xi}_t^i))^2.$$
This variance tends to be large when $\hat{\xi}_t^i$ is in a ``bottleneck'' of state space, where one time step of a Markov
chain with kernel $K$ can have a large
effect on the average of $f$
over subsequent time steps. In~\cite{aristoff2016analysis,aristoff2018steady} we adopt variance optimization strategies based on minimizing mutation variance.
Such strategies are not practical to implement exactly, because computing the variance of $h_{t+1}$ with respect to $K$ to a given accuracy is even more expensive
than computing $\int f\,d\mu$
with the same precision.
The mutation variance could
be computed approximately, however, by using bin-to-bin transitions
to obtain a coarse estimate of $K$.
See our companion paper~\cite{aristoff2018steady}
for details.
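As a hedged illustration of this idea, a coarse model can be assembled
from bin-to-bin transition counts recorded during the run. The recording
scheme below is ours, purely for illustration;
see~\cite{aristoff2018steady} for the actual construction.
\begin{verbatim}
import numpy as np

def coarse_kernel(bins_before, bins_after, n_bins):
    """Estimate a bin-to-bin transition matrix K_hat from the
    (pooled) mutation steps of a weighted ensemble run.

    bins_before, bins_after : integer bin labels of each particle
                              just before and after a mutation step
    """
    counts = np.zeros((n_bins, n_bins))
    for u, v in zip(bins_before, bins_after):
        counts[u, v] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    # row-normalize on visited bins; unvisited rows stay zero
    return np.divide(counts, rows, out=np.zeros_like(counts),
                     where=rows > 0)
\end{verbatim}
Truncated sums of powers of the resulting matrix applied to bin averages
of $f$ then give computable, bin-level surrogates for $h_{t+1}$ and
$Kh_{t+1}$.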
\begin{lemma}\label{lem_sel_var}
The selection variance at time $t$ is
\begin{align}
&{\mathbb E}\left[\left.\left(\hat{D}_{t}-{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] \nonumber \\
&= \frac{1}{T^2}
\sum_{u\in {\mathcal B}} \frac{\omega_t(u)^2}{N_t(u)}\left[\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\omega_t^i}{\omega_t(u)}(Kh_{t+1}(\xi_t^i))^2 \right. \label{eq_selvar1} \\
&\qquad \qquad \qquad \qquad \qquad \qquad -\left.\left(
\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\omega_t^i}{\omega_t(u)}Kh_{t+1}(\xi_t^i)\right)^2\right].\label{eq_selvar2}
\end{align}
\end{lemma}
\begin{proof}
From~\eqref{ht},
$h_t = Kh_{t+1} + f$. So in Lemma~\ref{lem6}, we can rewrite $D_t$ as
\begin{equation*}
D_t = \frac{1}{T}\sum_{i=1}^N \left(\omega_t^i Kh_{t+1}(\xi_t^i) + \sum_{s=0}^{t}\omega_s^i f(\xi_s^i)\right).
\end{equation*}
By~\eqref{bin_part} and Lemma~\ref{lem3},
\begin{equation*}
\sum_{i,j=1}^N {\mathbb E}\left[\left.\hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i)\omega_t^j Kh_{t+1}(\xi_t^j)\right|{\mathcal F}_t\right] = \sum_{i,j=1}^N {\omega}_t^i \omega_t^j Kh_{t+1}({\xi}_t^i)Kh_{t+1}(\xi_t^j).
\end{equation*}
Using the last two displays and Lemma~\ref{lem6},
\begin{align*}
{\mathbb E}\left[\left.\left(\hat{D_t} - D_t\right)^2\right|{\mathcal F}_t\right]
&= \frac{1}{T^2}{\mathbb E}\left[\left.\left(\sum_{i=1}^N \hat{\omega}_t^iKh_{t+1}(\hat{\xi}_t^i) - \sum_{i=1}^N
\omega_t^i Kh_{t+1}(\xi_t^i)\right)^2\right|{\mathcal F}_t\right] \\
&= \frac{1}{T^2}{\mathbb E}\left[\left.\left(\sum_{i=1}^N\hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i)\right)^2\right|
{\mathcal F}_t\right] \\
&\qquad \qquad - \frac{1}{T^2}\left(\sum_{i=1}^N \omega_t^i Kh_{t+1}(\xi_t^i)\right)^2.
\end{align*}
Let us analyze the first term in
the last line above. By~\eqref{bin_part} and Lemma~\ref{lem4},
\begin{align}
&{\mathbb E}\left[\left.\left(\sum_{i=1}^N\hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i)\right)^2\right|
{\mathcal F}_t\right] \nonumber \\
&=
\sum_{u,v \in {\mathcal B}}
{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u} \sum_{j:\textup{bin}(\hat{\xi}_t^j) = v}\hat{\omega}_t^i\hat{\omega}_t^j Kh_{t+1}(\hat{\xi}_t^i)Kh_{t+1}(\hat{\xi}_t^j)\right|{\mathcal F}_t\right] \nonumber \\
&= \sum_{u \ne v \in {\mathcal B}} \sum_{i:\textup{bin}({\xi}_t^i) = u} \sum_{j:\textup{bin}({\xi}_t^j) = v} \omega_t^i \omega_t^j Kh_{t+1}(\xi_t^i)Kh_{t+1}(\xi_t^j)
\label{above1} \\
&\qquad + \sum_{u \in {\mathcal B}}\left[\left(1-\frac{1}{N_t(u)}\right) \left(\sum_{i:\textup{bin}(\xi_t^i) = u} \omega_t^i Kh_{t+1}(\xi_t^i)\right)^2 \right. \nonumber \\
&\qquad \qquad \qquad \qquad \qquad + \left.\sum_{i:\textup{bin}(\xi_t^i) = u}\frac{\omega_t(u)\omega_t^i}{N_t(u)}(Kh_{t+1}(\xi_t^i))^2\right]. \label{above2}
\end{align}
The last expression above,~\eqref{above1}-\eqref{above2}, rewrites as
\begin{align*}
&\left(\sum_{i=1}^N \omega_t^i Kh_{t+1}(\xi_t^i)\right)^2 \\
&\qquad \quad+
\sum_{u \in {\mathcal B}} \frac{\omega_t(u)^2}{N_t(u)}\left[\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\omega_t^i}{\omega_t(u)}(Kh_{t+1}(\xi_t^i))^2 \right. \\
&\qquad \qquad \qquad \qquad \qquad -\left. \left(
\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\omega_t^i}{\omega_t(u)}Kh_{t+1}(\xi_t^i)\right)^2\right].
\end{align*}
Combining the last three displays gives
the desired result.
\end{proof}
The term in square brackets in~\eqref{eq_selvar1}-\eqref{eq_selvar2} can be written as a
variance of $Kh_{t+1}$ with respect to a distribution supported in bin $u$; see the proof of Lemma~\ref{lem_bdd} below. This
means that the selection variance is small when bins are chosen so that the
value of $Kh_{t+1}$ is approximately
constant within each bin.
One way to achieve this is with
bins that are ``small'': in particular,
the selection variance is zero if each bin corresponds to exactly one point of state space. For more general
bins, one would hope a reduction in
mutation variance (compared to direct
Monte Carlo) would be enough
to offset the selection variance
(which is zero for direct Monte Carlo).
In general, one
could try to minimize the selection
variance as well as the mutation variance.
A natural choice to estimate these variances
is to look at bin-to-bin transitions and
use the corresponding transition matrix
as a coarse estimate of $K$; see~\cite{aristoff2016analysis,aristoff2018steady}.
However, this technique, which can be applied when the bins correspond to a fixed partition of state space, treats all particles
in the same bin equally and therefore is uninformative
for the selection variance. One
possible solution would be to use different, smaller
``microbins'' to estimate the selection
variance, and then larger bins (perhaps
conglomerations of these microbins) to define the selection step in
Algorithm~\ref{alg2}. These larger bins
could be chosen in a way that approximately
minimizes the selection variance. We
discuss these issues in detail
in~\cite{aristoff2018steady}.
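For illustration, the selection variance of Lemma~\ref{lem_sel_var} is easy to
evaluate numerically once values of $Kh_{t+1}$ are supplied. The following
minimal Python sketch (ours, for exposition only; the function name and data
layout are hypothetical, and in practice $Kh_{t+1}$ is unknown and must itself
be estimated, which is precisely the difficulty discussed above) makes the
weighted within-bin variance in~\eqref{eq_selvar1}-\eqref{eq_selvar2} concrete:
\begin{verbatim}
import numpy as np

def selection_variance(w, bins, Kh, N_alloc, T):
    # w[i] = omega_t^i, bins[i] = bin(xi_t^i),
    # Kh[i] = Kh_{t+1}(xi_t^i), N_alloc[u] = N_t(u)
    total = 0.0
    for u in np.unique(bins):
        idx = bins == u
        w_u = w[idx].sum()                    # omega_t(u)
        p = w[idx] / w_u                      # within-bin probabilities
        g = Kh[idx]
        var_u = (p * g**2).sum() - (p * g).sum()**2
        total += w_u**2 / N_alloc[u] * var_u  # bin-u contribution
    return total / T**2
\end{verbatim}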
\begin{lemma}\label{lem_bdd}
Let Assumption~\ref{A1} hold. Then the
mutation and selection variances at time $t$ are
$O(1/T^2)$ as $T\to \infty$. That is, $$T^2{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|\hat{\mathcal F}_{t}\right] \le c \quad \text{and}\quad
T^2{\mathbb E}\left[\left.\left(\hat{D}_{t}-{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] \le c,$$ where $c>0$ is a constant that does not depend on $T$.
\end{lemma}
\begin{proof}
For a probability measure $\eta$ on state space, define
$$\textup{Var}_\eta(g) = \int g^2\,d\eta - (\int g\,d\eta)^2.$$
Define
\begin{equation*}
\qquad \eta_{mut}(dy) = K(\hat{\xi}_t^i,dy), \qquad \eta_{sel} = \sum_{i: \textup{bin}(\xi_t^i)=u} \frac{\omega_t^i}{\omega_t(u)}\delta_{\xi_t^i}.
\end{equation*}
By~\eqref{A0}-\eqref{fix_tot},~\eqref{weights} and Lemmas~\ref{lem_mut_var} and~\ref{lem_sel_var}, it
suffices to show that the variances
\begin{equation}\label{h_variance2}
\textup{Var}_{\eta_{mut}}(h_{t+1}) = Kh_{t+1}^2(\hat{\xi}_t^i) - (Kh_{t+1}(\hat{\xi}_t^i))^2
\end{equation}
and
\begin{align}\begin{split}\label{h_variance}
\textup{Var}_{\eta_{sel}}(Kh_{t+1}) &=
\int \eta_{sel}(dy)(Kh_{t+1}(y))^2 - \left(\int \eta_{sel}(dy)Kh_{t+1}(y)\right)^2 \\
&=
\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\omega_t^i}{\omega_t(u)}(Kh_{t+1}(\xi_t^i))^2 \\
&\qquad \qquad \qquad - \left(
\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\omega_t^i}{\omega_t(u)}Kh_{t+1}(\xi_t^i)\right)^2
\end{split}
\end{align}
are bounded in $T$. By
Assumption~\ref{A1}, it is possible (see Theorem 6.1 of~\cite{douc})
to choose $\lambda \in [0,1)$ and $c>0$ such that
\begin{equation}\label{geom_erg}
|K^tf(x)-K^tf(y)| \le c\lambda^t, \qquad \text{for all }x,y\text{ and all }t\ge 0.
\end{equation}
Thus
\begin{equation*}
|h_{t+1}(x)-h_{t+1}(y)| \le \sum_{s=0}^{T-t-2}|K^sf(x)-K^sf(y)| \le \frac{c}{1-\lambda} =: C,
\end{equation*}
where $C$ is a constant that does not depend on $T$. Thus, for any probability measure $\eta$ on state space,
\begin{align*}
\textup{Var}_\eta(h_{t+1}) &= \int \eta(dx) \left(h_{t+1}(x)- \int \eta(dy)h_{t+1}(y)\right)^2 \\
&= \int \eta(dx) \left(\int \eta(dy)[h_{t+1}(x)- h_{t+1}(y)]\right)^2 \\
&\le \int \eta(dx) \int \eta(dy)\left(h_{t+1}(x)- h_{t+1}(y)\right)^2 \le C^2.
\end{align*}
Identical arguments show
that $|Kh_{t+1}(x)-Kh_{t+1}(y)|\le C$
and $\textup{Var}_\eta(Kh_{t+1}) \le C^2$. Thus, the
variances in~\eqref{h_variance} and~\eqref{h_variance2} are smaller than a
constant $c$ that does not depend on $T$, as required.
\end{proof}
\begin{lemma}[Scaling of the mean and variance]\label{lem_bdd_var}
Let Assumption~\ref{A1} hold. Then:
\noindent (i) There is a constant $c>0$ such that $
{\mathbb E}\left[\left(\theta_T - {\mathbb E}[\theta_T]\right)^2\right] \le \frac{c}{T}$.
\noindent (ii) There is a constant $c>0$ such that
$|{\mathbb E}[\theta_T]-\int f\,d\mu| \le \frac{c}{T}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem6}
and~\eqref{ht},
\begin{equation*}
D_0 = \frac{1}{T}\sum_{i=1}^N \omega_0^i h_0(\xi_0^i) = \frac{1}{T}\sum_{t=0}^{T-1} \sum_{i=1}^N \omega_0^i K^tf(\xi_0^i).
\end{equation*}
Using this, the
fact that $(D_t)_{t \ge 0}$ is an ${\mathcal F}_t$-martingale, and~\eqref{initialization},
\begin{equation*}
{\mathbb E}[\theta_T] = {\mathbb E}[D_{T-1}] = {\mathbb E}[D_0] =
\frac{1}{T}\sum_{t=0}^{T-1} \int K^tf(x)\,\nu(dx).
\end{equation*}
Using Assumption~\ref{A1}, choose $c>0$ and $\lambda \in [0,1)$ so that~\eqref{geom_erg} holds. Then
\begin{align}\begin{split}\label{doob1}
|D_0 - {\mathbb E}[\theta_T]| &\le
\frac{1}{T}\sum_{t=0}^{T-1}\left|\sum_{i=1}^N \omega_0^i K^tf(\xi_0^i) -\int K^t f(x)\,\nu(dx)\right| \\
&\le
\frac{1}{T}\sum_{t=0}^{T-1}\sum_{i=1}^N \omega_0^i\int |K^tf(\xi_0^i) - K^t f(x)|\,\nu(dx) \\
&\le \frac{c}{T}\sum_{t=0}^{T-1}\lambda^t \le \frac{c}{(1-\lambda)T},\end{split}
\end{align}
where the last line uses $\sum_{i=1}^N \omega_0^i = 1$ from~\eqref{initialization}.
Meanwhile
from Lemma~\ref{lem_bdd},
\begin{equation}\label{doob2}
{\mathbb E}\left[\left.\left(D_t-\hat{D}_{t-1}\right)^2\right|\hat{\mathcal F}_{t-1}\right] +
{\mathbb E}\left[\left.\left(\hat{D}_{t-1}-{D}_{t-1}\right)^2\right|{\mathcal F}_{t-1}\right] \le \frac{c}{T^2},
\end{equation}
with a different $c>0$.
Using Lemma~\ref{lem_doob} with~\eqref{doob1} and~\eqref{doob2} gives {\em (i)}.
For {\em (ii)},
\begin{align*}
\left|{\mathbb E}[\theta_T]-\int f\,d\mu\right| &\le \frac{1}{T}\sum_{t=0}^{T-1} \left| \int K^tf(x)\,\nu(dx) - \int f\,d\mu\right| \\
&\le \frac{1}{T}\sum_{t=0}^{T-1} \int \left|K^tf(x) - \int f\,d\mu\right|\nu(dx) \le \frac{c}{T},
\end{align*}
with again a different $c$. For the last inequality we
used Assumption~\ref{A1} again.
\end{proof}
We comment briefly on the scaling
in $N$ in Lemma~\ref{lem_bdd_var}.
\begin{remark}\label{rmk_var_scale}
Suppose that Assumption~\ref{A1} holds.
For simplicity assume that the initial
condition satisfies $\omega_0^i = 1/N$
with $\xi_0^i$, $i=1,\ldots,N$ being iid
samples from~$\nu$. Looking at the terms in the Doob decomposition for the variance (Lemma~\ref{lem_doob}), we get
\begin{align*}
{\mathbb E}\left[(D_0-{\mathbb E}[\theta_T])^2\right] &= \text{Var}(D_0) \\
& = \frac{1}{NT^2} \text{Var}\left(h_0(\xi_0^1)\right) \le \frac{c}{NT^2},
\end{align*}
with the last inequality coming from the
proof of Lemma~\ref{lem_bdd}.
If we assume that $$N_t(u) \ge N_0 \omega_t(u)$$
where $N_0$ is constant, then
the results in Lemmas~\ref{lem_mut_var}-\ref{lem_bdd_var}
are easily modified to show that
\begin{equation}\label{bound}
{\mathbb E}[(\theta_T-{\mathbb E}[\theta_T])^2] \le c\left(\frac{1}{NT^2} + \frac{1}{N_0 T}\right),
\end{equation}
where $c$ is a constant that does
not depend on $N_0$, $N$, or $T$. Since $\sum_{u\in {\mathcal B}} \omega_t(u) = 1$, the largest
possible choice
of $N_0$ above
is $N_0 = N$, in
which case the bound~\eqref{bound} gives,
up to a constant $c$,
the same variance scaling as direct Monte
Carlo.
\end{remark}
Remark~\ref{rmk_var_scale} shows
that under a simple condition
on the particle allocation $N_t(u)_{t \ge 0}^{u \in {\mathcal B}}$, weighted ensemble
is no worse than direct Monte Carlo,
or independent particles. It is
more delicate to see that
the variance scaling of weighted
ensemble can be better than
that of direct Monte Carlo.
We outline how and why this can occur in
Section~\ref{sec:compare_naive}.
Weighted
ensemble can also be worse than
direct Monte Carlo. One
extreme case is if
one parent has exactly
$N$ children in each selection step, resulting
in $N$ copies of the same particle.
In this case the variance can be order $1$ in $N$, instead of order $1/N$ as above. On the other hand, if a particle
is important, for instance if it lies
in a ``bottleneck,'' it may be advantageous to select it nearly $N$ times.
Because of the way we
define time averages, this
does not necessarily cause a
variance catastrophe. See the
discussion in Section~\ref{sec:compare_avg}.
\begin{corollary}[Scaling of the $L^2$ error]\label{cor_scale_var}
Let Assumption~\ref{A1} hold. Then
$$
{\mathbb E}\left[\left(\theta_T - \int f\,d\mu\right)^2\right] \le \frac{c}{T}$$
where $c>0$ is constant.
\end{corollary}
\begin{proof}
This is an immediate consequence of Lemma~\ref{lem_bdd_var} and~\eqref{theta_T_bdd}.
\end{proof}
Note that Corollary~\ref{cor_scale_var}
already implies a weak form of the
ergodic theorem. Importantly, it
gives an $L^2$ rate of convergence
for our time averages. The rate, $O(1/T)$,
is the same as that of a single
particle, or weighted ensemble with
$1$ particle. For more discussion of
the effects that the number, $N$, of
particles has on the variance
and $L^2$ error, see Sections~\ref{sec:compare_naive} and~\ref{sec:compare_avg} below.
\subsection{Proof of ergodic theorem}\label{sec:pfs3}
We are now ready for the proof of our main result:
\begin{proof}[Proof of Theorem~\ref{thm_main}]
Define $M = \sup |f|$. Since $\sum_{i=1}^N \omega_t^i = 1$, we have
$|\sum_{i=1}^N \omega_t^i f(\xi_t^i)| \le M$ and $|\theta_t| \le M$ for all $t \ge 0$. Thus for $0 \le S \le T$,
\begin{align}\begin{split}\label{theta_diff}
|\theta_T - \theta_S| &= \left|\left(\frac{S}{T}-1\right)\theta_S + \frac{1}{T} \sum_{t=S}^{T-1}\sum_{i=1}^N\omega_t^i f(\xi_t^i)\right| \\
&\le 2M\left(1-\frac{S}{T}\right).
\end{split}\end{align}
From Lemma~\ref{lem_bdd_var}{\em (i)} and Chebyshev's inequality, there is $c>0$ such that for any $\alpha,n>0$,
\begin{equation}\label{Chebyshev}
{\mathbb P}\left(\left|\theta_T - {\mathbb E}[\theta_T]\right| \ge \frac{cn^{\alpha/2}}{\sqrt{T}}\right) \le \frac{1}{n^{\alpha}}.
\end{equation}
Fix $\beta>\alpha>1$ and set $T_n = n^\beta$. By~\eqref{Chebyshev} and the Borel-Cantelli lemma,
there is $n_0$ such that
\begin{equation}\label{BorelCantelli}
\left|\theta_{T_n} - {\mathbb E}[\theta_{T_n}]\right| < cn^{(\alpha-\beta)/2} \qquad\text{a.s. for all } n \ge n_0.
\end{equation}
From~\eqref{theta_diff},
\begin{equation}\label{theta_ST}
|\theta_S - \theta_{T_n}| \le 2M\left(1 - \left(\frac{n}{n+1}\right)^\beta\right), \qquad \text{if } T_n \le S \le T_{n+1}.
\end{equation}
Let $\epsilon>0$. Making $n_0$ larger if needed,~\eqref{BorelCantelli},~\eqref{theta_ST} and Lemma~\ref{lem_bdd_var}{\em (ii)}
show that
\begin{equation*}
\left|\theta_S - \int f\,d\mu\right| \le |\theta_S - \theta_{T_n}| + |\theta_{T_n} - {\mathbb E}[\theta_{T_n}]|
+ \left|{\mathbb E}[\theta_{T_n}] - \int f\,d\mu\right| < \epsilon
\end{equation*}
almost surely for all $S \ge n_0^\beta$,
where for each such $S$ we choose $n$ so that $T_n \le S \le T_{n+1}$.
\end{proof}
\subsection{Remarks and extensions}\label{sec:remarks}
As discussed above, in the
path-particle setting,
Theorem~\ref{thm_unbiased} establishes that
weighted ensemble is unbiased for
the ancestral lines of particles~\cite{zhang2010weighted}.
Though we focus on
time averages, we could also study time marginals,
without much extra work. To this
end, define
\begin{equation}\label{theta_T_bar}
\bar{\theta}_T = \sum_{i=1}^N \omega_{T-1}^i f(\xi_{T-1}^{i}).
\end{equation}
Then the following holds:
\begin{proposition}\label{prop_marginals}
For $\bar{\theta}_T$ defined in~\eqref{theta_T_bar}, we have
\begin{equation*}
{\mathbb E}\left[\left(\bar{\theta}_T - {\mathbb E}[\bar{\theta}_T]\right)^2\right] \le c,
\end{equation*}
where $c>0$ is constant
that does not depend on $T$.
\end{proposition}
\begin{proof}
This follows immediately from the fact
that ${\bar \theta}_T$ is uniformly
bounded in $T$. To understand
better how to estimate the value of $c$, however, we reproduce
some arguments analogous to those
above.
In Section~\ref{sec:pfs2}, consider instead
the Doob martingales $M_t = {\mathbb E}[\bar{\theta}_T|{\mathcal F}_t]$,
$\hat{M}_t = {\mathbb E}[\bar{\theta}_T|\hat{\mathcal F}_t]$. Arguments
similar to those in the proof
of Lemma~\ref{lem6} show that
\begin{equation*}
M_t = \sum_{i=1}^N \omega_t^i K^{T-t-1}f(\xi_t^i), \qquad \hat{M}_t = \sum_{i=1}^N \hat{\omega}_t^i K^{T-t-1}f(\hat{\xi}_t^i).
\end{equation*}
The Doob
decomposition in Lemma~\ref{lem_doob}
remains true when $\bar{\theta}_T$ and
$M_t, \hat{M_t}$ take the places
of $\theta_T$ and $D_t, \hat{D}_t$, respectively; see~\cite{aristoff2016analysis}. Write $g_t = K^{T-t-1}f$. The mutation and
selection variance formulas in
Lemmas~\ref{lem_mut_var} and~\ref{lem_sel_var}
remain valid when $D_t,\hat{D}_t$ and
$h_t$ are
replaced by $M_t,\hat{M}_t$ and
$Tg_t$, respectively. Using Assumption~\ref{A1}, choose $\lambda \in [0,1)$
and $c>0$ such that
\begin{equation}
|g_{t+1}(x)-g_{t+1}(y)| \le c \lambda^{T-t}.
\end{equation}
Following the arguments in the proof
of Lemma~\ref{lem_bdd}, we find that
\begin{equation}\label{cc}
\sum_{t=1}^{T-1}\left({\mathbb E}\left[\left.\left(M_t-\hat{M}_{t-1}\right)^2\right|\hat{\mathcal F}_{t-1}\right] +
{\mathbb E}\left[\left.\left(\hat{M}_{t-1}-{M}_{t-1}\right)^2\right|{\mathcal F}_{t-1}\right]\right) \le c,
\end{equation}
where $c$ is another constant that does not depend on $T$, whose value could,
at least in principle, be estimated from the formulas in Lemmas~\ref{lem_mut_var} and~\ref{lem_sel_var}. Appropriately
adjusting the proof of
Lemma~\ref{lem_bdd_var}, by
adding to $c$ in~\eqref{cc}
a contribution from the
initial condition of the weighted
ensemble, leads to the result.
\end{proof}
Proposition~\ref{prop_marginals}
shows that the empirical
distributions $\sum_{i=1}^N \omega_t^i \delta_{\xi_t^i}$ of weighted
ensemble are stable over long times. This is in contrast
with sequential Monte Carlo, where the
empirical distributions
exhibit variance blowup
at large times. See Figure~\ref{fig2}
in Section~\ref{sec:counterexample}.
In Algorithm~\ref{alg2}, we assumed multinomial resampling
to simplify the mathematical
exposition. In practice, multinomial
resampling is rarely used since
it leads to significantly larger
variance compared to
residual,
stratified, or
systematic resampling~\cite{douc}.
Below we consider residual resampling.
We do this
because, on the one hand, residual
resampling is a practical
resampling method with variance
roughly on par with
stratified or
systematic resampling; and on the
other hand, compared to those methods,
it leads to much simpler explicit
formulas for the variance. Since
these variance formulas may be
useful for practitioners designing
optimized algorithms, we produce
them below. See our companion
paper~\cite{aristoff2018steady}
for some optimization ideas
based on minimizing the variance
expression in Proposition~\ref{prop_resid}
below.
For residual resampling,
we modify the selection step of
Algorithm~\ref{alg2} as follows.
Write
\begin{equation}\label{deltati}
\beta_t^i = \frac{N_t(u)\omega_t^i}{\omega_t(u)},\qquad \delta_t^i = \beta_t^i - \lfloor \beta_t^i \rfloor, \qquad \text{if }\textup{bin}(\xi_t^i) = u,
\end{equation}
where $\lfloor \cdot \rfloor$ is
the floor function,
and
\begin{equation}\label{deltatu}
\delta_t(u) = \sum_{i:\textup{bin}(\xi_t^i) = u} \delta_t^i.
\end{equation}
Conditionally on ${\mathcal F}_t$, let $(R_t^i)^{i:\textup{bin}(\xi_t^i) = u}$
be multinomial with $\delta_t(u)$ trials and event
probabilities $\delta_t^i/\delta_t(u)$.
Then
redefine the $(C_t^i)^{i:\textup{bin}(\xi_t^i) = u}$ in each bin $u$ using
\begin{equation}\label{Cti_mod}
C_t^i = \lfloor \beta_t^i\rfloor + R_t^i,
\end{equation}
and let the selections in distinct bins be independent as in~\eqref{Cti_indep}. Observe that the means
\begin{equation}\label{pres_means}
{\mathbb E}[C_t^i|{\mathcal F}_t] = \lfloor \beta_t^i\rfloor + {\mathbb E}[R_t^i|{\mathcal F}_t] = \lfloor \beta_t^i\rfloor +\delta_t^i = \beta_t^i
\end{equation}
agree with the means defined by~\eqref{def_Cti}, though the covariances differ.
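For concreteness, the residual selection rule~\eqref{deltati}-\eqref{Cti_mod}
within a single bin can be sketched in a few lines of Python (exposition only;
the function name is hypothetical). Note that
$\delta_t(u) = N_t(u) - \sum_i \lfloor \beta_t^i\rfloor$ is an integer, since
the $\beta_t^i$ in bin $u$ sum to $N_t(u)$:
\begin{verbatim}
import numpy as np

def residual_children(w, N_u, rng):
    # w[i] = omega_t^i for the particles in bin u; N_u = N_t(u)
    beta = N_u * w / w.sum()              # beta_t^i
    base = np.floor(beta).astype(int)     # deterministic copies
    delta = beta - base                   # residuals delta_t^i
    n_res = N_u - base.sum()              # = delta_t(u), an integer
    R = np.zeros_like(base)
    if n_res > 0:                         # multinomial residual draws
        R = rng.multinomial(n_res, delta / delta.sum())
    return base + R                       # C_t^i = floor(beta_t^i) + R_t^i
\end{verbatim}
The children's weights are then assigned exactly as before, consistent with the
preserved means in~\eqref{pres_means}.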
\begin{proposition}\label{prop_resid}
Suppose we use residual resampling instead of
multinomial resampling in
the selection step of Algorithm~\ref{alg2}.
That is, we use~\eqref{Cti_mod} instead
of~\eqref{def_Cti} to define
$(C_t^i)^{i=1,\ldots,N}$.
Then with $\delta_t^i$ and $\delta_t(u)$ defined by~\eqref{deltati}-\eqref{deltatu}, the
selection variance at time $t$ is
\begin{align}\begin{split}\label{sel_varnew}
&{\mathbb E}\left[\left.\left(\hat{D}_{t}-{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] \\
&= \frac{1}{T^2}
\sum_{u\in {\mathcal B}} \left(\frac{\omega_t(u)}{N_t(u)}\right)^2\delta_t(u) \left[\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}(Kh_{t+1}(\xi_t^i))^2\right. \\
&\qquad \qquad \qquad \qquad \qquad \qquad\qquad -\left. \left(
\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}Kh_{t+1}(\xi_t^i)\right)^2\right].
\end{split}
\end{align}
\end{proposition}
\begin{proof}
By definition of $(R_t^i)^{i:\textup{bin}(\xi_t^i) = u}$ we have
\begin{equation*}
{\mathbb E}[R_t^i|{\mathcal F}_t] = \delta_t^i, \qquad {\mathbb E}[R_t^i R_t^j|{\mathcal F}_t] = \delta_t^i \delta_t^j
\left(1 - \frac{1}{\delta_t(u)}\right) + \mathbbm{1}_{i=j}\delta_t^i, \qquad \text{if }\textup{bin}(\xi_t^i) = u.
\end{equation*}
From this and~\eqref{Cti_mod},
\begin{align*}
{\mathbb E}[C_t^i C_t^j|{\mathcal F}_t] &= \lfloor \beta_t^i \rfloor\lfloor \beta_t^j\rfloor + \lfloor \beta_t^i \rfloor\delta_t^j + \lfloor \beta_t^j \rfloor \delta_t^i + \delta_t^i \delta_t^j
\left(1 - \frac{1}{\delta_t(u)}\right) + \mathbbm{1}_{i=j}\delta_t^i \\
&= \beta_t^i \beta_t^j + \mathbbm{1}_{i=j}\delta_t^i - \frac{\delta_t^i \delta_t^j}{\delta_t(u)}, \qquad\qquad \text{if }\textup{bin}(\xi_t^i) = \textup{bin}(\xi_t^j) = u.
\end{align*}
Following the arguments in the proof
of Lemma~\ref{lem4}, we find that now
\begin{align*}
&{\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u}\sum_{j:\textup{bin}(\hat{\xi}_t^j) = v}\hat{\omega}_t^i\hat{\omega}_t^j g(\hat{\xi}_t^i)g(\hat{\xi}_t^j)
\right|{\mathcal F}_t\right] \\
&\quad= \begin{dcases} \sum_{i:\textup{bin}({\xi}_t^i) = u}\sum_{j:\textup{bin}({\xi}_t^j) = v} \omega_t^i \omega_t^j g(\xi_t^i)g(\xi_t^j), & u \ne v \\
\left(\sum_{i:\textup{bin}(\xi_t^i)= u}\omega_t^i g(\xi_t^i)\right)^2 \\
\qquad \qquad + \left(\frac{\omega_t(u)}{N_t(u)}\right)^2\left[\sum_{i:\textup{bin}(\xi_t^i) = u}\delta_t^i g(\xi_t^i)^2 \right. \\
\qquad\qquad\qquad\qquad\qquad\qquad
-\left.\frac{1}{\delta_t(u)} \left(\sum_{i:\textup{bin}(\xi_t^i) = u} \delta_t^i g(\xi_t^i)\right)^2\right], & u = v
\end{dcases}.
\end{align*}
Repeating the arguments in the proof of Lemma~\ref{lem_sel_var} gives the result.
\end{proof}
Next we show that the ergodic theorem still holds with residual resampling.
\begin{corollary}\label{cor_erg}
Let Assumption~\ref{A1} hold. Suppose we use residual instead of
multinomial resampling in
the selection step of Algorithm~\ref{alg2}.
That is, we use~\eqref{Cti_mod} instead
of~\eqref{def_Cti} to define
$(C_t^i)^{i=1,\ldots,N}$. Then
the conclusion of the ergodic theorem (Theorem~\ref{thm_main})
still holds.
\end{corollary}
\begin{proof}
Note that we can write the quantity in square
brackets in~\eqref{sel_varnew} as
\begin{align*}
\textup{Var}_{\eta_{sel}'}(Kh_{t+1}) &= \sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}(Kh_{t+1}(\xi_t^i))^2 \\
&\qquad \qquad\qquad- \left(
\sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}Kh_{t+1}(\xi_t^i)\right)^2,
\end{align*}
where we define
$$\eta_{sel}'(dy) = \sum_{i:\textup{bin}(\xi_t^i) = u} \frac{\delta_t^i}{\delta_t(u)}\delta_{\xi_t^i}.$$
Now it is easy to check that the
conclusion of Lemma~\ref{lem_bdd} still holds by following
the arguments in the proof thereof.
Corollary~\ref{cor_erg} follows from this,
together with
the fact that the results above
concerning one-step means
remain valid.
In more detail, only the selection step
of Algorithm~\ref{alg2} has been
changed, so
Lemma~\ref{lem1} still holds.
Using~\eqref{pres_means}, it is easy to check that the conclusion
of Lemma~\ref{lem3} is still true.
Thus,
the conclusions of Lemma~\ref{lem5}
and so also Lemma~\ref{lem6} remain
valid. Lemma~\ref{lem_doob} does
not depend on the selection step,
so it still holds.
Thus the conclusions
of Lemma~\ref{lem_bdd_var} and
Corollary~\ref{cor_scale_var}
still hold. The same arguments from
the proof of Theorem~\ref{thm_main} now
establish the ergodic theorem.
\end{proof}
Proposition~\ref{prop_marginals} also
remains true for residual resampling.
\section{Comparison with direct Monte Carlo}
\label{sec:compare_naive}
\hskip15pt Weighted ensemble
can outperform direct Monte Carlo simulations
when $\int f\,d\mu$ is small. Here
we investigate why, in the context
of our variance analysis above.
By direct Monte Carlo simulations,
we mean independent particles
evolving via $K$,
or equivalently weighted ensemble without a
selection step. Selection can be beneficial
when it puts more particles
in high variance regions of state space,
as we will now show.
To compare weighted ensemble with direct
Monte Carlo, first consider the simple case where each point is a bin: $\textup{bin}(\xi_t^i) \ne \textup{bin}(\xi_t^j)$ whenever $\xi_t^i \ne \xi_t^j$.
In this case, Lemma~\ref{lem_sel_var} shows that the
weighted ensemble
selection variance is zero~\cite{aristoff2018steady}.
Direct Monte Carlo also, obviously, has
zero selection variance. Thus,
it suffices to consider
the mutation variances.
For weighted ensemble,
using
Lemma~\ref{lem_mut_var},~\eqref{weights}, and~\eqref{Kvar}, the mutation variance visible at time $t$ before selection is
\begin{align}\begin{split}\label{vis_var0}
&{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] \\
&= \frac{1}{T^2}\sum_{u \in {\mathcal B}} {\mathbb E}\left[\left.\sum_{i:\textup{bin}(\hat{\xi}_t^i) = u} \left(\hat{\omega}_t^i\right)^2 \left[Kh_{t+1}^2(\hat{\xi}_t^i) - (Kh_{t+1}(\hat{\xi}_t^i))^2\right]\right|{\mathcal F}_t\right] \\
&= \frac{1}{T^2}\sum_{u \in {\mathcal B}} \frac{\omega_t(u)^2}{N_t(u)} \left[Kh_{t+1}^2(u) - (Kh_{t+1}(u))^2\right], \qquad \text{for weighted ensemble}.
\end{split}
\end{align}
Direct Monte Carlo can be seen as a
modified version of Algorithm~\ref{alg2}
where $C_t^i \equiv 1$, $\omega_t^i = \hat{\omega}_t^i \equiv 1/N$, and $N_t(u) = N\omega_t(u)$ for all $t\ge 0$, $i=1,\ldots,N$, and $u \in {\mathcal B}$. In
this case~\eqref{vis_var0} becomes
\begin{align}\begin{split}\label{vis_var01}
&{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] \\
&= \frac{1}{T^2}\sum_{u} \frac{\omega_t(u)}{N}\left[Kh_{t+1}^2(u) - (Kh_{t+1}(u))^2\right], \qquad \text{for direct Monte Carlo}.
\end{split}
\end{align}
By~\eqref{vis_var0}-\eqref{vis_var01}, weighted ensemble can outperform direct Monte Carlo if
$$\frac{\omega_t(u)^2}{N_t(u)} \ll \frac{\omega_t(u)}{N},\quad \text{whenever } Kh_{t+1}^2(u) - (Kh_{t+1}(u))^2 \text{ is large}.$$
This informal condition rewrites as
\begin{equation}\label{WE_vs_DMC}
N_t(u) \gg N \omega_t(u) \qquad \text{ whenever} \qquad \text{Var}_{K(u,\cdot)}h_{t+1} \text{ is large}.
\end{equation}
Recalling that $N \omega_t(u)$ is the number of particles allocated to bin $u$ at time $t$
for direct Monte Carlo, condition~\eqref{WE_vs_DMC} simply
states that weighted ensemble should allocate more particles to $u$ than direct Monte Carlo when
$\text{Var}_{K(u,\cdot)}h_{t+1}$ is large.
This can sometimes be achieved even with a naive particle allocation, as we show below.
The variance $\text{Var}_{K(u,\cdot)}h_{t+1}$
tends to be large in ``bottlenecks''
of state space, corresponding
{\em e.g.} to energetic or
entropic barriers~\cite{aristoff2018steady}.
More generally, when bins contain multiple points, the selection variance is nonzero.
In this case, for weighted ensemble to have a gain over direct Monte Carlo, the benefit from putting more particles
in regions with large values of the variance $\text{Var}_{K(\xi,\cdot)}h_{t+1}$ needs to
offset
the variance cost from the selection
step. We show this is also possible
via a naive particle allocation;
see the numerical example in Section~\ref{sec:counterexample} along with Figures~\ref{fig1}-\ref{fig2}. As discussed
above, the selection variance is
comprised of terms of the form
$\text{Var}_\eta Kh_{t+1}$ where $\eta$ are distributions supported in the individual bins. These terms are small if
the bins resolve the metastable sets
of the dynamics driven by $K$. Even if the metastable
sets are not completely resolved, weighted ensemble may still beat direct Monte Carlo,
as the references in the Introduction attest.
We now give a simple example to illustrate how,
in the context of the discussion surrounding
the mutation variance and~\eqref{vis_var0}-\eqref{WE_vs_DMC}, weighted ensemble can outperform direct Monte Carlo. Consider state space consisting
of three points, $1,2,3$,
each of which is a bin, meaning
${\mathcal B} = \{1,2,3\}$ and
$\textup{bin}(\xi_t^i) = u$ if and only
if $\xi_t^i = u$. As discussed
above, we only need to consider
mutation variance,
as the selection variances
of weighted ensemble and direct
Monte Carlo are zero in this case.
The Markov kernel $K$ can be
interpreted as a transition
matrix. We define the
transition probabilities as
$$K(1,2) = K(2,3) = \delta,\qquad
K(1,1) = K(2,1) = 1-\delta, \qquad
K(3,1) = 1,$$ where
$\delta$ is small. We
take $\delta = 10^{-3}$ and
compute time averages of $f(u) = \mathbbm{1}_{u = 3}$.
The initial distribution is
$\nu = \delta_{1}$, but it is unimportant for what follows. We
assume $N$ is an integer multiple
of $6$, and the particles
are allocated uniformly over the
occupied bins,
\begin{equation*}
N_t(u) = \begin{cases} \frac{N}{\#\{u:\omega_t(u)>0\}}, & \omega_t(u) > 0 \\
0, & \omega_t(u) = 0\end{cases}.
\end{equation*}
Direct calculations show that $K$ satisfies Assumption~\ref{A1} with
\begin{equation}\label{muform}
\mu(u) = O(\delta^4) + \begin{cases} 1-\delta+\delta^2, & u = 1 \\ \delta-\delta^2 , & u =2 \\
\delta^2-\delta^3, & u = 3\end{cases}.
\end{equation}
Moreover,
\begin{equation}\label{Kvar}
\lim_{T\to \infty} \left[Kh_{t+1}^2(u) - (Kh_{t+1}(u))^2\right] = O(\delta^4)+ \begin{cases}\delta^3, & u = 1 \\ \delta-\delta^2-2\delta^3, & u = 2 \\ 0, & u = 3\end{cases}.
\end{equation}
Below, we will assume $T$ is large and ignore terms of higher order than $\delta^3$.
We will {\em not}, however, assume that $N$
is very large. The arguments below could be considerably
simplified in the asymptotic regime $N \to \infty$.
However we feel that treating
the finite $N$ case is important,
due to the fact that in applications
$N$ is often only moderately large.
\begin{figure}
\includegraphics[width=13cm]
{naive_comparison1b-eps-converted-to.pdf}
\caption{Comparison of weighted
ensemble with direct Monte Carlo when $T = 500$. {\em Left}: Average values of $\theta_T$ vs. $N$ computed from $10^4$ independent trials. Error
bars are $\sigma_T/10^2$ where $\sigma_T^2$ are
the empirical variances. {\em Center}:
Weighted ensemble empirical standard deviation compared with~\eqref{WE_var}. {\em Right}:
Direct
Monte Carlo empirical standard deviation compared with~\eqref{DNS_var}.}
\label{fig00a}
\end{figure}
For weighted ensemble, using~\eqref{vis_var0},
the mutation variance visible
at time $t \ll T$ before selection is
\begin{align}\begin{split}\label{vis_var}
{\mathbb E}\left[\left.\left(D_{t+1}-\hat{D}_t\right)^2\right|{\mathcal F}_t\right]
&= \frac{1}{T^2}\sum_{u \in {\mathcal B}} \frac{\omega_t(u)^2}{N_t(u)} \left[Kh_{t+1}^2(u) - (Kh_{t+1}(u))^2\right]\\
&\approx \frac{1}{T^2}\left( \frac{\omega_t(1)^2}{N_t(1)}\delta^3 + \frac{\omega_t(2)^2}{N_t(2)}(\delta-\delta^2-2\delta^3)\right).
\end{split}
\end{align}
For $t \ll T$, using the fact that
$N/3 \le N_t(u) \le N$ when $\omega_t(u)>0$,
\begin{align}\begin{split}\label{ineq1}
&\frac{1}{NT^2}\left({\mathbb E}[\omega_t(1)^2]\delta^3 + {\mathbb E}[\omega_t(2)^2](\delta-\delta^2-2\delta^3)\right) \\
&\qquad \lessapprox {\mathbb E}\left[\left(D_{t+1}-\hat{D}_t\right)^2\right] \\
&\qquad\qquad\lessapprox \frac{3}{NT^2}\left({\mathbb E}[\omega_t(1)^2]\delta^3 + {\mathbb E}[\omega_t(2)^2](\delta-\delta^2-2\delta^3)\right).
\end{split}
\end{align}
When $t \gg 0$ we have the estimates
\begin{align}\begin{split}\label{approx2}
\omega_t(1) &\sim \mu(1) - \frac{\mu(1)}{N_t(1)}\textup{Binomial}(N_t(1),\delta) + \frac{\mu(2)}{N_t(2)}\textup{Binomial}(N_t(2),1-\delta) + \mu(3),\\
\omega_t(2) &\sim \frac{\mu(1)}{N_t(1)}\,\textup{Binomial}(N_t(1),\delta),
\end{split}
\end{align}
with the convention $\textup{Binomial}(0,\delta) = 0$. Provided~\eqref{approx2} holds,
from~\eqref{muform} we get
\begin{align*}
&{\mathbb E}[\omega_t(1)^2]= 1 + O(\delta),\\
&\delta^2 + \frac{\delta - 3\delta^2}{N} +O(\delta^3) \le {\mathbb E}[\omega_t(2)^2] \le \delta^2 + \frac{3(\delta - 3\delta^2)}{N} +O(\delta^3).
\end{align*}
Putting this into~\eqref{ineq1}, for $0 \ll t \ll T$ we have
\begin{equation*}
\frac{1}{NT^2}\left(2\delta^3 + \frac{\delta^2-4\delta^3}{N}\right) \lessapprox {\mathbb E}\left[\left(D_{t+1}-\hat{D}_t\right)^2\right] \lessapprox \frac{3}{NT^2}\left(2\delta^3 + \frac{3(\delta^2-4\delta^3)}{N}\right).
\end{equation*}
Thus the variance can be estimated by
\begin{align}\begin{split}\label{WE_var}
\frac{1}{NT}\left(2\delta^3 + \frac{\delta^2-4\delta^3}{N}\right) \lessapprox \sigma_T^2 &\lessapprox \frac{3}{NT}\left(2\delta^3 + \frac{3(\delta^2-4\delta^3)}{N}\right),\\&\qquad \qquad \qquad\qquad \text{for weighted ensemble}.
\end{split}
\end{align}
\begin{figure}
\includegraphics[width=13cm]
{naive_comparison2b-eps-converted-to.pdf}
\caption{Comparison of weighted
ensemble with direct Monte Carlo when $N = 120$. {\em Left}: Average values of $\theta_T$ vs. $T$ computed from $10^4$ independent trials. Error
bars are $\sigma_T/10^2$ where $\sigma_T^2$ are
the empirical variances. {\em Center}:
Weighted ensemble empirical standard deviation compared with~\eqref{WE_var}. {\em Right}:
Direct
Monte Carlo empirical standard deviation compared with~\eqref{DNS_var}.}
\label{fig00b}
\end{figure}
On the other hand, consider direct
Monte Carlo, or independent
particles.
This is equivalent to weighted
ensemble, if we skip the
selection step and instead set
$\hat{\xi}_t^i = \xi_t^i$
and $\hat{\omega}_t^i = \omega_t^i$
for each $t \ge 0$ and $i=1,\ldots,N$.
The mean mutation variance
at time $t$ is
\begin{align*}
{\mathbb E}\left[\left(D_{t+1}-\hat{D}_t\right)^2\right] &= \frac{1}{T^2N^2}{\mathbb E}\left[\sum_{i=1}^N \left[Kh_{t+1}^2(\xi_t^i)-(Kh_{t+1}(\xi_t^i))^2\right]\right]\\
&\approx \frac{N\mu(1)\delta^3+N\mu(2)(\delta-\delta^2-2\delta^3)}{T^2 N^2} \\
& \approx \frac{\delta^2-\delta^3}{T^2 N}, \qquad 0 \ll t \ll T.
\end{align*}
Thus the variance is approximately
\begin{equation}\label{DNS_var}
\sigma_T^2 \approx T \times \frac{\delta^2-\delta^3}{T^2 N}=\frac{1}{NT}(\delta^2-\delta^3), \qquad \text{for direct Monte Carlo}.
\end{equation}
Observe the improved scaling
in $\delta$ for~\eqref{WE_var} compared to~\eqref{DNS_var} when $N$ is of
order $1/\delta$. The mechanism for
this, illustrated by~\eqref{WE_vs_DMC}
and the computations above,
is the allocation of more particles
to the bin, $u = 2$, with the
largest value of the variance
$Kh_{t+1}^2(u)-(Kh_{t+1}(u))^2$. This
bin typically has far fewer particles
in direct Monte Carlo.
See Figure~\ref{fig00a}
and Figure~\ref{fig00b}
for numerical verification of the estimates~\eqref{WE_var}-\eqref{DNS_var}.
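For readers wishing to reproduce these estimates, the following minimal Python
sketch (exposition only; the function name is hypothetical, states are indexed
$0,1,2$ instead of $1,2,3$, and we adopt the convention that the time average
is accumulated before each selection step) simulates weighted ensemble for this
three-state chain. Since each bin is a single state, selection reduces to
reassigning particle counts and weights within each occupied state:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 1e-3                               # delta
K = np.array([[1-d,   d, 0.0],         # K(1,1)=1-d, K(1,2)=d
              [1-d, 0.0,   d],         # K(2,1)=1-d, K(2,3)=d
              [1.0, 0.0, 0.0]])        # K(3,1)=1
f = np.array([0.0, 0.0, 1.0])          # f(u) = 1_{u=3}

def theta_WE(N, T):                    # N a multiple of 6
    xi = np.zeros(N, dtype=int)        # nu = delta_1
    w = np.full(N, 1.0 / N)
    theta = 0.0
    for t in range(T):
        theta += (w * f[xi]).sum() / T
        occ = np.unique(xi)            # occupied bins
        n_u = N // occ.size            # uniform allocation
        xi_new, w_new = [], []
        for u in occ:                  # selection in bin u
            wu = w[xi == u].sum()      # omega_t(u)
            xi_new += [u] * n_u
            w_new += [wu / n_u] * n_u  # children weights
        xi, w = np.array(xi_new), np.array(w_new)
        xi = np.array([rng.choice(3, p=K[x]) for x in xi])  # mutation
    return theta
\end{verbatim}
Direct Monte Carlo corresponds to deleting the selection block, so that all
particles keep weight $1/N$ and evolve independently.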
The example in this section is simple
but the ideas hold generally:
we expect
``bottlenecks'' in state space to have
a large value of the variance
$Kh_{t+1}^2-(Kh_{t+1})^2$
but a small $\mu$ probability.
Allocating more particles to such
bottlenecks in weighted ensemble
leads to a smaller variance
compared to direct Monte Carlo.
\section{Comparison of time averages}\label{sec:compare_avg}
\hskip15pt We expect our time averages~\eqref{theta_T}
to have better variance
properties than the naive
averages in~\eqref{theta_T_tilde}.
The reason is simple.
For time averages defined by~\eqref{theta_T_tilde}, we average
over an ancestral tree where
the branches have many
roots in common. These ``duplicate''
samples lead to a larger variance. We give a
simple but illuminating example of this below.
Consider again state space consisting
of three points, $1,2,3$,
each of which is a bin, so
${\mathcal B} = \{1,2,3\}$ and
$\textup{bin}(\xi_t^i) = u$ if and
only if $\xi_t^i = u$.
As above, the Markov kernel $K$ is now a
transition
matrix. We define the transition probabilities as
$$K(1,2) = 1/2 = K(1,3), \qquad K(2,1) = K(3,1) = 1,$$
and consider $f(u) = \mathbbm{1}_{u = 3}$. Though $K$ is periodic and
so does not satisfy Assumption~\ref{A1},
a small modification to $K$ could
fix this without really changing the
following argument. We use the initial distribution $\nu = \delta_{1}$.
The particle allocation
is as follows. For even $t$, $N_t(1) = N$ and $N_t(2) = N_t(3) = 0$, while for odd $t$, $N_t(1) = 0$ and
\begin{align*}
N_t(2) &= \begin{cases} N_2, & \omega_t(2) > 0,\, \omega_t(3)>0 \\ N, & \omega_t(2) > 0,\,\omega_t(3) = 0 \\ 0, & \omega_t(2) = 0\end{cases}, \\
N_t(3) &= \begin{cases} N_3, & \omega_t(3) > 0 , \,\omega_t(2)>0 \\
N, & \omega_t(3) > 0 ,\, \omega_t(2)=0\\ 0, & \omega_t(3) = 0\end{cases},
\end{align*}
where $N_2 + N_3 = N$. The unique invariant measure $\mu$ for $K$ satisfies $\mu(1) = 1/2$, $\mu(2) = \mu(3) = 1/4$. The time averages in~\eqref{theta_T} and~\eqref{theta_T_tilde} converge to $\lim_{T \to \infty} \theta_T \stackrel{a.s.}{=} \lim_{T \to \infty} \tilde{\theta}_T
\stackrel{a.s.}{=} 1/4$.
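As a quick sanity check (exposition only), the invariant measure and the limit
$1/4$ can be confirmed numerically by a Ces\`aro average of the chain started
from $\nu = \delta_1$, which handles the periodicity of $K$:
\begin{verbatim}
import numpy as np

K = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
p = np.array([1.0, 0.0, 0.0])   # nu = delta_1
avg, T = np.zeros(3), 10**4
for t in range(T):              # Cesaro average of time marginals
    avg += p / T
    p = p @ K
print(avg)                      # approx [0.5, 0.25, 0.25] = mu
\end{verbatim}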
We make some analytic estimates
to compare the performance of~\eqref{theta_T}
with~\eqref{theta_T_tilde}. Assume for
simplicity that $N$ and $T$ are even, suppose
$N \ll T$, and let
$Y_1,Y_2,\ldots$ be iid Bernoulli-$1/2$
random variables. Regardless of
the choice of $N_2$ and $N_3$,
at every odd time step, there are $N$ particles
with equal weight $1/N$ arriving at $u = 2$ or $u =3$
from $u=1$, with
Bernoulli-$1/2$ probabilities. Thus, the
variance of~\eqref{theta_T} is estimated
by
\begin{equation}\label{scale1}
\sigma_T^2 \approx \textup{Var}\left(\frac{Y_1+Y_2 + \ldots + Y_{NT/2}}{NT}\right) = \frac{1}{8NT}.
\end{equation}
\begin{figure}
\includegraphics[width=13cm]
{time_avg_comparison1-eps-converted-to.pdf}
\vskip-10pt
\caption{Comparison of time averages
$\theta_T$ and $\tilde{\theta}_T$ defined in~\eqref{theta_T} and~\eqref{theta_T_tilde}, for the example in Section~\ref{sec:compare_avg}
when $T = 1000$ and $N_2 = 1$, $N_3 = N-1$. {\em Left}: Average values of $\theta_T$
and $\tilde{\theta}_T$ vs. $N$ computed from
$10^4$ independent trials. Error bars are $\sigma_T/10^2$
and $\tilde{\sigma}_T/10^2$, where $\sigma_T^2$ and $\tilde{\sigma}_T^2$ are the empirical variances of $\theta_T$
and $\tilde{\theta}_T$ respectively.
{\em Center}: Empirical standard deviation $\sigma_T$ vs. $N$ compared to the prediction from~\eqref{scale1}.
{\em Right}: Empirical standard deviation $\tilde{\sigma}_T$ vs. $N$ compared
to the prediction from~\eqref{scale2}.}
\label{fig0a}
\end{figure}
The variance of~\eqref{theta_T_tilde} depends
on the particle allocation.
Suppose first
that $N_2 = 1$ and $N_3 = N-1$.
At time $0$ there are $N$
distinct roots, while at each
odd
time step, some roots are
lost from selection.
A rough approximation is
that $\approx N/2^k$ unique
roots are lost
at the $(2k-1)$st time step.
This means a single root remains
after $O(\log N)$ odd time steps.
In other words, the ancestral lines of the
particles surviving until time $T$
all share a single long root, with a
branch and leaf system of time length $O(\log N)$ that is
small enough we ignore it.
So at
each odd time step there are
$\approx N$ identical ancestors
at either $u = 2$ or $u = 3$ with Bernoulli-$1/2$ probabilities. The variance is roughly
\begin{align}\begin{split}\label{scale2}
\sigma_T^2 &\approx \textup{Var}\left(\frac{NY_1+NY_2 + \ldots + NY_{T/2}}{NT}\right) \\
&= \frac{1}{8T}, \qquad \text{if }N_2 = 1,\,N_3 = N-1.
\end{split}
\end{align}
Now suppose $N_2 = N_3 = N/2$.
The dominant contribution to
the variance is again a
single long root
of all the ancestral lines,
but now this
root has a slightly smaller time length.
At each odd time step, there
are on average
$\approx N/2+0.4\sqrt{N}$ particles
at either $u = 2$ or $u = 3$, an excess
of $\approx 0.4\sqrt{N}$ over
the $N/2$ particles that
will be selected. We
are interested in
the number of time steps before
only one root remains.
Each excess particle (each
particle that is not selected)
is one
that loses its unique root.
To roughly estimate the
number of time steps before
only one root remains,
we count the mean number
of time steps before
each particle is chosen
as an excess particle.
Since there are $\approx 0.4\sqrt{N}$
excess particles at each odd
time step, we can use the coupon
collector problem~\cite{motwani}
to estimate the
number of odd time steps
before only one root remains as
\begin{equation*}
\ell_N := \frac{N\log N + \gamma N}{0.4 \sqrt{N}}, \qquad \gamma \approx 0.58.
\end{equation*}
By the same argument as above, we can estimate the variance by
\begin{align}\begin{split}\label{scale3}
\sigma_T^2 &\approx \textup{Var}\left(\frac{NY_1+NY_2 + \ldots + NY_{(T-\ell_N)/2}}{NT}\right) \\
&= \frac{T-\ell_N}{8T^2},\qquad \text{if }N_2 = N_3 = N/2.
\end{split}
\end{align}
See Figures~\ref{fig0a} and~\ref{fig0b}
for numerical simulations confirming
these estimates.
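For reference, the predictions~\eqref{scale1}-\eqref{scale3} are trivial to
evaluate; a minimal Python sketch (exposition only), using for example the
parameters of Figure~\ref{fig0b}:
\begin{verbatim}
import numpy as np

N, T, gamma = 120, 1000, 0.58
ell_N = (N*np.log(N) + gamma*N) / (0.4*np.sqrt(N))
print(np.sqrt(1/(8*N*T)))           # (scale1), theta_T
print(np.sqrt(1/(8*T)))             # (scale2), N2 = 1, N3 = N-1
print(np.sqrt((T-ell_N)/(8*T**2)))  # (scale3), N2 = N3 = N/2
\end{verbatim}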
Figures~\ref{fig0a} and~\ref{fig0b}
show the
variance of~\eqref{theta_T} is
much smaller than that of~\eqref{theta_T_tilde}. The example
above illustrates the simple mechanism
behind this variance reduction.
Our example also
illustrates that the variance of~\eqref{theta_T}, compared to~\eqref{theta_T_tilde},
should be less sensitive to the choice of the
particle allocation $N_t(u)_{t \ge 0}^{u \in {\mathcal B}}$. This is a significant benefit,
as for complicated systems it
can be difficult to know how many
particles to keep in each bin.
\begin{figure}
\includegraphics[width=13cm]
{time_avg_comparison2b-eps-converted-to.pdf}
\vskip-10pt
\caption{Comparison of time averages
$\theta_T$ and $\tilde{\theta}_T$ defined in~\eqref{theta_T} and~\eqref{theta_T_tilde}, for the example in Section~\ref{sec:compare_avg}
when $T = 1000$ and $N_2 = N_3 = N/2$. {\em Left}: Average values of $\theta_T$
and $\tilde{\theta}_T$ vs. $N$ computed from
$10^4$ independent trials. Error bars are $\sigma_T/10^2$
and $\tilde{\sigma}_T/10^2$, where $\sigma_T^2$ and $\tilde{\sigma}_T^2$ are the empirical variances of $\theta_T$
and $\tilde{\theta}_T$ respectively.
{\em Center}: Empirical standard deviation $\sigma_T$ vs. $N$ compared to the prediction from~\eqref{scale1}.
{\em Right}: Empirical standard deviation $\tilde{\sigma}_T$ vs. $N$ compared
to the prediction from~\eqref{scale3}.}
\label{fig0b}
\end{figure}
\section{Comparison with a sequential Monte Carlo method}\label{sec:counterexample}
\hskip15pt Below, we contrast weighted ensemble with
a well-known sequential Monte Carlo method,
in which the resampling is based
on a Gibbs-Boltzmann fitness function.
See~\cite{del2005genealogical,webber} for a
description
in the context of rare event sampling.
(See also the textbooks~\cite{del2004feynman,doucetSMC} for
more details.) This {sequential
Monte Carlo} method
has the same mutation step
as weighted ensemble, but a
different selection step. Namely,
the selection step
of Algorithm~\ref{alg2}
is replaced by:
\begin{itemize}
\item {\em (sequential Monte Carlo selection step)}
Conditional on ${\mathcal F}_t$, let $(C_t^i)^{i=1,\ldots,N}$ be
multinomial with
$N$ trials
and event probabilities
$$\frac{G_t(\xi_t^i,\ldots)}{\sum_{i=1}^N G_t(\xi_t^i,\ldots)}.$$ The
number of children of particle $\xi_t^i$ is defined by
\begin{equation}\label{Ctj1}
C_t^i = \#\left\{j : \textup{par}(\hat{\xi}_t^j) = \xi_t^i\right\}
\end{equation}
and the children's weights are
\begin{equation}\label{omegatj1}
{\hat \omega}_t^i = \frac{\omega_{t}^j}{G_t(\xi_t^j,\ldots)}\times \frac{\sum_{i=1}^NG_{t}(\xi_{t}^i,\ldots)}{N}, \qquad \textup{if par}({\hat \xi}_{t}^i) = \xi_{t}^j.
\end{equation}
\end{itemize}
Here, the fitness functions, or {\em Gibbs-Boltzmann potentials} $G_t(\xi_t^i,\ldots)$, are positive-valued. We write
$(\xi_t^i,\ldots)$ for the argument
of $G_t$ to indicate that
$G_t$ may depend on the current
position (represented by $\xi_t^i$)
along with the ancestral line (denoted
by $\ldots$) of a particle. Actually, it can
be shown
that a good choice
of $G_t$ has the form~\cite{aristoff2016analysis,chraibi2018optimal,
del2005genealogical}
\begin{equation}\label{Gtopt}
G_t(\xi_t^i,\ldots) = \frac{V_t(\xi_t^i)}{V_{t-1}(\textup{par}(\hat{\xi}_{t-1}^i))}
\end{equation}
for $t > 0$, and $G_0(\xi_0^i) = V_0(\xi_0^i)$,
where the $V_t$ are positive-valued
functions on state space. Informally,
this choice is good because it turns
a path-likelihood term defining
the particle weights into a telescopic
product. However, there
is still a term corresponding
to the total weight
that leads to a selection
variance blowup. We discuss this more below.
We will consider the case where the Gibbs-Boltzmann potential has
the optimal form~\eqref{Gtopt}.
In this setup, by accounting for particle weights, the selection
step can be rewritten without
reference to particles' ancestral histories,
as follows.
\begin{itemize}
\item {\em (sequential Monte Carlo selection step using~\eqref{Gtopt})}
Conditional on ${\mathcal F}_t$, let $(C_t^i)^{i=1,\ldots,N}$ be
multinomial with
$N$ trials
and event probabilities
$$\frac{\omega_t^i V_t(\xi_t^i)}{\sum_{i=1}^N \omega_t^i V_t(\xi_t^i)}.$$ The
number of children of particle $\xi_t^i$ is
\begin{equation}\label{Ctj2}
C_t^i = \#\left\{j : \textup{par}(\hat{\xi}_t^j) = \xi_t^i\right\}
\end{equation}
and the children's weights are
\begin{equation}\label{omegatj2}
{\hat \omega}_t^i =
\frac{1}{V_t(\xi_t^j)}\times \frac{\sum_{i=1}^N \omega_{t}^i V_{t}(\xi_{t}^i)}{N}, \qquad \textup{if par}({\hat \xi}_{t}^i) = \xi_{t}^j.
\end{equation}
\end{itemize}
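A minimal Python sketch of this selection step (exposition only; the function
name is hypothetical, and \texttt{V} stands for $V_t$, any positive function
applied elementwise to an array of positions):
\begin{verbatim}
import numpy as np

def smc_select(xi, w, V, rng):
    # xi: positions; w: weights omega_t^i; V: callable V_t
    N = len(xi)
    Vx = V(xi)
    Z = (w * Vx).sum() / N                    # weight normalization
    C = rng.multinomial(N, w * Vx / (N * Z))  # children counts C_t^i
    par = np.repeat(np.arange(N), C)          # parent of each child
    return xi[par], Z / Vx[par]               # hat omega = Z / V(parent)
\end{verbatim}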
To see that these selection steps agree when $G_t$ is given by~\eqref{Gtopt}, note that by~\eqref{omegatj2} and~\eqref{weight_mut},
\begin{equation*}
\omega_t^i V_t(\xi_t^i) = \frac{V_t(\xi_t^i)}{V_{t-1}(\xi_{t-1}^j)} \times\frac{\sum_{i=1}^N \omega_{t-1}^i V_{t-1}(\xi_{t-1}^i)}{N}, \qquad \textup{if par}({\hat \xi}_{t-1}^i) = \xi_{t-1}^j.
\end{equation*}
It follows that when $G_t$ is
given by~\eqref{Gtopt}, the
numbers $C_t^i$ defined in~\eqref{Ctj1}
agree with those in~\eqref{Ctj2}. That is,
both selection steps lead to the
same multinomial law for the number
of children of each particle.
Equations~\eqref{omegatj1}
and~\eqref{omegatj2} then
also agree, since both
say that the weight
of a child equals
its parent's weight divided
by the expected number
of children of the
parent, {\em i.e.},
\begin{equation}\label{unbiased}\hat{\omega}_t^i = \frac{\omega_t^j}{{\mathbb E}[C_t^j|{\mathcal F}_t]}, \qquad \text{if }\textup{par}(\hat{\xi}_t^i) = \xi_t^j.
\end{equation}
The rule~\eqref{unbiased}
ensures that the
unbiased property,
Theorem~\ref{thm_unbiased},
holds; see~\cite{aristoff2016analysis}.
In~\eqref{omegatj1} and~\eqref{omegatj2}, we think
of $\omega_t^j/G_t(\xi_t^j,\ldots)$
and $1/V_t(\xi_t^j)$,
respectively, as {\em non-normalized} weights,
and $\sum_{i=1}^N G_t(\xi_t^i,\ldots)/N$ and $\sum_{i=1}^N \omega_t^i V_t(\xi_t^i)/N$,
respectively,
as the {\em weight normalization}
that is the same for every
particle at time $t$. Because of the choice of Gibbs-Boltzmann potential in~\eqref{Gtopt}, the non-normalized weights
in~\eqref{omegatj2}
telescope and thus depend
only on the current particle
position; compare this with~\eqref{omegatj1}
for general $G_t$, where
the non-normalized weight
$\omega_t^j/G_t(\xi_t^j,\ldots)$
depends on $\omega_t^j$.
This telescoping suggests why~\eqref{Ctj2}-\eqref{omegatj2} gives
better performance. In spite
of this, the variance still
explodes at large times.
Intuitively, this is due
to randomness of the
weight normalization:
this randomness leads to an order $1$ contribution
to the selection variance at
each time step. After $T$
time steps this contribution
is order $T$, as we show below.
Contrast this with weighted ensemble, where
the total weight is
equal to $1$ at all times.
We begin by studying the selection
and mutation variances for sequential Monte Carlo.
\begin{proposition}
The selection variance of sequential Monte Carlo at time $t$ is
\begin{align*}
{\mathbb E}\left[\left.\left(\hat{D}_{t}-{D}_{t}\right)^2\right|{\mathcal F}_{t}\right] &= \frac{1}{T^2} \left[\sum_{i=1}^N \frac{(\omega_t^i)^2}{\beta_t^i} (Kh_{t+1}(\xi_t^i))^2 \right. \\
&\qquad \qquad \qquad -\left. \frac{1}{N}\left(\sum_{i=1}^N \omega_t^i Kh_{t+1}(\xi_t^i)\right)^2\right],
\end{align*}
where $\beta_t^i$ is the expected number of children of $\xi_t^i$,
\begin{equation}\label{betati}
\beta_t^i = {\mathbb E}[C_t^i|{\mathcal F}_t] = \frac{N G_t(\xi_t^i,\ldots)}{\sum_{i=1}^N G_t(\xi_t^i,\ldots)}.
\end{equation}
The mutation variances of sequential Monte Carlo and weighted ensemble are the same.
\end{proposition}
\begin{proof}
By definition of the selection step
for sequential Monte Carlo,
\begin{equation*}
{\mathbb E}[C_t^iC_t^j|{\mathcal F}_t] = \beta_t^i\beta_t^j\left(1-\frac{1}{N}\right) + \beta_t^j \mathbbm{1}_{i=j}.
\end{equation*}
Using this, and following calculations
similar to the proof of Lemma~\ref{lem_sel_var},
\begin{align*}
&{\mathbb E}\left[\left.\left(\sum_{i=1}^N \hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i)\right)^2\right|{\mathcal F}_t\right] \\
&= \sum_{i=1}^N \sum_{j=1}^N {\mathbb E}\left[\left.\sum_{k:\textup{par}(\hat{\xi}_t^k)=\xi_t^i}\sum_{\ell:\textup{par}(\hat{\xi}_t^\ell)=\xi_t^j}\hat{\omega}_t^k \hat{\omega}_t^\ell Kh_{t+1}(\hat{\xi}_t^k)Kh_{t+1}(\hat{\xi}_t^\ell)\right|{\mathcal F}_t\right] \\
&= \sum_{i=1}^N \sum_{j=1}^N \frac{\omega_t^i \omega_t^j}{\beta_t^i\beta_t^j} Kh_{t+1}(\xi_t^i)Kh_{t+1}(\xi_t^j) {\mathbb E}[C_t^i C_t^j|{\mathcal F}_t] \\
&= \left(1-\frac{1}{N}\right)\sum_{i=1}^N\sum_{j=1}^N \omega_t^i \omega_t^j Kh_{t+1}(\xi_t^i)Kh_{t+1}(\xi_t^j) + \sum_{i=1}^N \frac{(\omega_t^i)^2}{\beta_t^i}(Kh_{t+1}(\xi_t^i))^2.
\end{align*}
It is straightforward to check that the conclusions of Lemmas~\ref{lem3} and~\ref{lem6} remain valid. Thus,
following the proof of Lemma~\ref{lem_sel_var}, we have
\begin{align*}
{\mathbb E}\left[\left.\left(\hat{D_t} - D_t\right)^2\right|{\mathcal F}_t\right]
&= \frac{1}{T^2}{\mathbb E}\left[\left.\left(\sum_{i=1}^N \hat{\omega}_t^iKh_{t+1}(\hat{\xi}_t^i) - \sum_{i=1}^N
\omega_t^i Kh_{t+1}(\xi_t^i)\right)^2\right|{\mathcal F}_t\right] \\
&= \frac{1}{T^2}{\mathbb E}\left[\left.\left(\sum_{i=1}^N\hat{\omega}_t^i Kh_{t+1}(\hat{\xi}_t^i)\right)^2\right|
{\mathcal F}_t\right] \\
&\qquad \qquad- \frac{1}{T^2}\left(\sum_{i=1}^N \omega_t^i Kh_{t+1}(\xi_t^i)\right)^2.
\end{align*}
Combining the last two displays gives the result for the selection variance. The mutation variances of weighted ensemble and sequential Monte Carlo are the same
because they share the
same mutation step.
\end{proof}
\begin{figure}
\includegraphics[width=13cm]
{var_scale-eps-converted-to.pdf}
\vskip-5pt
\caption{Scaling of the variance of sequential Monte Carlo
for the example in
Section~\ref{sec:compare_naive}
with $\delta = 0.25$, where
$G_t$ is defined
by~\eqref{Gtopt} with
$V_t(u) = u$, $u=1,2,3$. {\em Left}: Average values of $\theta_T$ computed from $10^6$ independent
trials. Error bars, which are smaller than the data markers, are $\sigma_T/10^3$, where $\sigma_T^2$
is the empirical variance of $\theta_T$. The exact
value is $\lim_{T \to \infty} \theta_T \approx 0.0476$. {\em Center}: Empirical standard deviations $\sigma_T$ vs. $T$. {\em Right}: Scaled empirical
standard deviations $(1/\sqrt{T})\times \sigma_T$ vs. $T$, demonstrating
that $\sigma_T \sim \sqrt{T}$.}
\label{fig0}
\end{figure}
Consider the sequential Monte Carlo selection variance multiplied by $T^2$,
\begin{equation}\label{term}\sum_{i=1}^N \frac{(\omega_t^i)^2}{\beta_t^i} (Kh_{t+1}(\xi_t^i))^2 - \frac{1}{N}\left(\sum_{i=1}^N \omega_t^i Kh_{t+1}(\xi_t^i)\right)^2.
\end{equation}
In general, the expression in~\eqref{term} will not
be bounded in $T$. Instead~\eqref{term}
will typically be the same size as
$(Kh_{t+1})^2$, which is order $T^2$.
As a consequence the variance, $\sigma_T^2$, is of order $T$.
This variance blowup occurs
even when the
Gibbs-Boltzmann potentials
satisfy~\eqref{Gtopt}. See Figure~\ref{fig0}
for numerical illustration.
See also the center
of Figure~\ref{fig1},
where there is an initial decay of the variance before
the asymptotic growth.
This
problem is not
particular to multinomial
resampling: variance blowup occurs
with residual and
other standard resampling
methods.
A few exceptions and special
cases are worth mentioning.
First, notice that putting $\beta_t^i = N\omega_t^i$
in~\eqref{term} leads to
the selection variance of weighted
ensemble with $1$ bin. This
selection step
is commonly used in diffusion Monte Carlo~\cite{assaraf,weare}.
In this case,~\eqref{term}
can be seen as a
variance of $Kh_{t+1}$
and thus bounded in $T$; see
the proof of
Lemma~\ref{lem_bdd}. However,
in our setting, which is
different from diffusion Monte Carlo, this choice of selection gives
no variance reduction compared to
direct Monte Carlo. We discuss this more below.
Second, consider $\beta_t^i = \omega_t^i N_t(u)/\omega_t(u)$,
which most
closely
resembles weighted ensemble,
but does not fix the total
weight or the number $N_t(u)$
of children in each bin $u$.
(Also, this choice of $\beta_t^i$ does not
quite fit into the sequential
Monte Carlo framework, as
$\beta_t^i$ cannot be
expressed in the form~\eqref{betati}.)
It can be seen from~\eqref{term}
that the variance still
explodes in this case. Lastly,
consider
\begin{equation}\label{opt_control}\beta_t^i =
\frac{N\omega_t^i Kh_{t+1}(\xi_t^i)}{\sum_{i=1}^N \omega_t^i Kh_{t+1}(\xi_t^i)},
\end{equation}
and suppose $f$ is strictly positive so that this expression makes sense.
In this case, equation~\eqref{opt_control} makes~\eqref{term} equal to zero,
so that the selection variance is exactly zero.
This form of selection is closely
related to stochastic optimal control~\cite{schutte}.
As with optimal control, it is not practical to implement
exactly, since
computing the functions
$Kh_{t+1}$ to a given precision is at least as expensive
as evaluating $\int f \,d\mu$ with
the same accuracy.
It is worth noting that
for weighted ensemble, if the selection variance is already small due to a careful choice of bins,
reducing the mutation variance
should be more important than
minimizing the selection
variance.
\begin{figure}
\includegraphics[width=13cm]
{selection_comparison1c-eps-converted-to.pdf}
\vskip-5pt
\caption{Comparison of weighted ensemble, sequential Monte Carlo, and direct Monte Carlo for estimating $\int f \,d\mu \approx \theta_T$,
with $\theta_T$ and $f$ defined in~\eqref{theta_T} and~\eqref{eq_f}. {\em Left}: Average values of $\theta_T$ vs. $T$ computed from
$10^5$ independent trials of each method. Error bars are $\sigma_T/\sqrt{10^5}$ where $\sigma_T^2$ is the empirical variance of $\theta_T$.
{\em Center}: Empirical standard deviations $\sigma_T$ vs. $T$. The sequential Monte Carlo variance is increasing at large times $T$.
{\em Right}: Scaled empirical standard deviations $\sqrt{T}\times\sigma_T$ vs. $T$ for weighted ensemble and direct Monte Carlo. In all plots,
$\beta = 6$.}
\label{fig1}
\end{figure}
In sum, sequential Monte Carlo fails
to satisfy an
ergodic theorem except in very
special cases. This is despite
the fact that sequential
Monte Carlo is unbiased in the same sense as
weighted ensemble (Theorem~\ref{thm_unbiased}).
To investigate this in more detail numerically,
we consider the following
toy model. Define
\begin{equation*}
X_{t+1} = \text{mod}\left(X_t - 2\pi\sin(2\pi X_t)\delta t + \sqrt{2\delta t\beta^{-1}}\alpha_t,\,1\right)
\end{equation*}
where $X_t$ has values in ${\mathbb R}/{\mathbb Z}$ (i.e., the interval $[0,1)$ with periodic boundary), $\delta t = 0.001$, $\beta =5$ or $6$, and $(\alpha_t)_{t \ge 0}$ are iid standard Gaussians. The kernel $K$
is obtained from the $\Delta t$-skeleton of $(X_t)_{t \ge 0}$,
$$K(x,dy) = {\mathbb P}(X_{\Delta t} \in dy|X_0 = x),$$ where $\Delta t = 10$.
It is easy to check that $K$ satisfies
a Doeblin condition, which means
that $K$ satisfies Assumption~\ref{A1}
(see pgs. 168-169 in~\cite{douc}).
In Figures~\ref{fig1} and~\ref{fig2}, we compare the performance of weighted ensemble,
sequential Monte Carlo,
and direct Monte Carlo
for computing $\int f\,d\mu$,
where $\mu$ is the stationary
distribution
of $K$ and
\begin{equation}\label{eq_f}
f(x) = \mathbbm{1}_{0.45\le x \le 0.55}.
\end{equation}
For
sequential Monte Carlo,
we use Gibbs-Boltzmann potentials $G_t$ defined
by~\eqref{Gtopt}
with $V_t(x) = \exp(-10(x-0.5)^2)$, $t \ge 0$.
Notice that $G_t$ favors particles moving toward the support of $f$.
For weighted ensemble,
we use $20$ equally sized
bins with $N_t(u) \approx N/20$. Thus
for $u \in {\mathcal B} = \{1,\ldots,20\}$,
we define $\textup{bin}(\xi_t^i) = u$ if
$\xi_t^i \in [(u-1)/20,u/20)$.
The initial distribution is
$$\nu(dx) = \frac{\exp(\beta \cos(2\pi x))\,dx}{\int_0^1 \exp(\beta \cos(2\pi x))\,dx},$$
which is an approximation of $\mu$.
All simulations have
$N = 200$ particles.
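For completeness, a minimal Python sketch of the toy dynamics and the
quantities above (exposition only; the names are hypothetical and the kernel is
applied to a whole vector of particle positions at once):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dt, beta = 0.001, 6.0
n_sub = int(10.0 / dt)            # Delta t = 10 per application of K

def apply_K(x):                   # one Delta t step of the skeleton
    for _ in range(n_sub):
        x = (x - 2*np.pi*np.sin(2*np.pi*x)*dt
               + np.sqrt(2*dt/beta)*rng.standard_normal(x.shape)) % 1.0
    return x

f = lambda x: ((0.45 <= x) & (x <= 0.55)).astype(float)
V = lambda x: np.exp(-10*(x - 0.5)**2)                 # for G_t via (Gtopt)
bin_of = lambda x: np.minimum((20*x).astype(int), 19)  # 20 equal bins
\end{verbatim}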
\begin{figure}
\includegraphics[width=13cm]
{selection_comparison2c-eps-converted-to.pdf}
\vskip-5pt
\caption{Comparison of weighted ensemble, sequential Monte Carlo,
and direct Monte Carlo for estimating $\int f \,d\mu \approx \bar{\theta}_T$, where $\bar{\theta}_T$ and $f$ are defined in~\eqref{theta_T_bar}
and~\eqref{eq_f}. {\em Left}: Average values of $\bar{\theta}_T$ vs. $T$ computed from
$5 \times 10^5$ independent trials of each
method. Error bars are $\bar{\sigma}_T/\sqrt{5\times 10^5}$, where $\bar{\sigma}_T^2$ is the empirical variance of $\bar{\theta}_T$.
{\em Center}: Empirical standard deviation $\bar{\sigma}_T$ of weighted ensemble sampling
and direct Monte Carlo vs. $T$.
{\em Right}: Empirical standard deviation $\bar{\sigma}_T$ of sequential Monte Carlo vs. $T$.
In all plots, $\beta = 5$.}
\label{fig2}
\end{figure}
Pictured in Figure~\ref{fig1}
are the average values of $\theta_T$
given by~\eqref{theta_T},
with the corresponding empirical
variance $\sigma_T^2$, computed
from $10^5$ independent
trials of weighted ensemble, sequential Monte Carlo, and
direct Monte Carlo.
Figure~\ref{fig1}
shows
the variance blowup of sequential Monte Carlo,
along with the stability
of weighted ensemble and direct Monte Carlo,
with the variance scaling
implied by Lemma~\ref{lem_bdd_var}{\em (i)}.
The variance explosion of sequential
Monte Carlo is not particular to
our time averages~\eqref{theta_T}.
Indeed, for sequential
Monte Carlo, the empirical distributions
$\sum_{i=1}^N \omega_t^i \delta_{\xi_t^i}$
become unstable at large $t$.
See the right of Figure~\ref{fig2}.
In particular, the simpler
ergodic theorem from Remark~\ref{R1} also
fails for sequential Monte Carlo.
On the
other hand, the corresponding
weighted ensemble distributions
are stable in time $T$, in the sense that the variance
of $\bar{\theta}_T$ from~\eqref{theta_T_bar}
is bounded in $T$ (Proposition~\ref{prop_marginals}). See the center of
Figure~\ref{fig2} for numerical confirmation.
In diffusion Monte Carlo, it
is possible to
mitigate variance
blowup with an ensemble {\em refreshment} step
which resamples from particles proportionally
to their weights~\cite{assaraf,tony_book},
or $\beta_t^i = N\omega_t^i$ in the notation
above. This
step, which corresponds
to the selection step of weighted ensemble
with $1$ bin, sets all the weights
equal. In our setting,
this does not fix the
variance blowup, as even
if all the weights are equal, the
term~\eqref{term} is still order~$T^2$, making the selection
variance order $T$.
In any case, such refreshment would
counteract the
importance sampling effects
of the Gibbs-Boltzmann potential
$G_t$.
On the other hand, it is possible
to mitigate variance blowup by
normalizing the total weight,
that is, by replacing
$\omega_t^i$ with
$\omega_t^i/\sum_{j=1}^N \omega_t^j$
at each time $t$.
This of course introduces a
bias. Although the bias will vanish in
the limit $N \to \infty$,
it can be large if $N$ is relatively small, or if
the values of $V_t$ vary over several orders of magnitude,
from near zero to order $1$. As we discussed
above, in realistic weighted
ensemble applications, $N$ indeed is not very
large. Moreover, the best choices
of $V_t$ do have values both near
zero and order $1$, as we
illustrate in our companion
paper~\cite{aristoff2018steady}
(see also~\cite{aristoff2016analysis}).
For these
reasons, we believe that normalizing
the total weight is not a good practical
solution to the weight degeneracy problem.
Indeed, weighted ensemble is a simple
and very general unbiased method that automatically
avoids this problem.
We have compared weighted ensemble and
sequential Monte Carlo with
Gibbs-Boltzmann resampling, two unbiased
importance sampling methods, for computing
steady state and finite time averages
with respect to distributions of a
Markov chain.
Both methods use similar mechanisms but
have different
long-time behavior.
The sequential Monte Carlo method uses
a fitness function,
the Gibbs-Boltzmann potential,
as an importance biasing function, while
in weighted
ensemble, the user-chosen
bins and particle allocation
achieve an analogous effect.
Sequential Monte Carlo with Gibbs-Boltzmann
resampling suffers from a large-time
variance blowup that weighted ensemble avoids.
Of course there are a myriad of ways to
resample, and
we cannot compare weighted ensemble
with every sequential Monte Carlo method.
However, we suspect that weighted ensemble
has the {\em simplest} resampling
mechanism that leads to ergodicity.
We leave the exploration of this
point to future work.
\section*{Acknowledgements}
The author would like
to acknowledge Fr{\'e}d{\'e}ric C{\'e}rou, Peter Christman, Josselin Garnier, Gideon Simpson, Gabriel Stoltz and
Brian Van Koten for helpful comments,
and especially Jeremy Copperman, Matthias Rousset, Robert J. Webber,
and Dan Zuckerman for interesting discussions
and insights.
The author thanks Fr{\'e}d{\'e}ric C{\'e}rou and
Matthias Rousset in particular for pointing
out the connection between the selection
steps~\eqref{Ctj1}-\eqref{omegatj1} and~\eqref{Ctj2}-\eqref{omegatj2}, and
Robert J. Webber for pointing out
errors and making many helpful suggestions
concerning a previous version of the
manuscript.
The author also gratefully acknowledges support from the National Science Foundation via the awards NSF-DMS-1818726 and
NSF-DMS-1522398.
\section{Introduction}\label{sec:intro}
Family studies provide an important tool for understanding etiology of
diseases, with the key aim of discovering evidence of family
aggregation and to determine if such aggregation can be attributed to
genetic components. Heritability and concordance estimates are
routinely calculated in twin studies of diseases, as a way of
quantifying such genetic contribution. As a key paper for studying
heritability of cancer, \cite{lichtenstein2000environmental} reported
heritability estimates for prostate cancer of 0.42 (95\% confidence
limits 0.29--0.50) and casewise concordance of 0.21 in monozygotic
(MZ) twins and 0.06 in dizygotic (DZ) twins based on combined cohorts
of 44,788 twin pairs from the Nordic twin registries. This suggests a
considerable genetic contribution to the development of prostate
cancer. A polygenic liability threshold model, i.e., a Probit
variance component model, was used to quantify the heritability on the
liability scale from the classification of subjects as cancer cases or
non-cancer cases (died without cancer). However, a large fraction of
the twin-pairs were still alive at the end of follow-up but treated as
non-cancer cases. This corresponds to treating this part of the
population as immune to cancer, suggesting that the estimates of the
targeted population parameters in this study could be severely
biased. The censoring mechanism has largely been ignored in the
epidemiological literature of family studies, which unfortunately
makes reported estimates of both heritability, and other population
parameters of interest such as concordance probabilities, very
difficult to interpret.
The key to solving this problem is to consider the event times in the
analysis. Standard techniques for correlated survival data are not
appropriate here, due to the competing risk of death. Dependence on
the hazard scale while taking possible dependence between causes into
account has been considered by \cite{ripatti2003} and
\cite{gorfinehsu2011}. \cite{scheike13:lida} considered dependence on
the probability scale via random effects models and
\cite{scheike13:concordance} examined non-parametric estimates of the
concordance function, i.e., the probability of both twins experiencing
cancer before a given time point. These methods yield constructive
ways of analysing twin data of disease status, however, care in
correctly specifying the dependence structure over time via the random
effects structure has to be taken. Furthermore, none of the approaches
provide heritability estimates that are comparable with the classical
definition of heritability on the liability scale given by
\cite{falconer67}. In the following we will define a simple estimator
which gives consistent concordance estimates and estimates of
heritability on the liability scale under independent right-censoring.
\vspace*{\bigskipamount}
The paper is structured as follows. In Section~\ref{sec:genetics} we
review basic concepts in quantitative genetics and define heritability
with the aim of estimating the degree of association due to genes and
environmental factors through random effects modelling. In particular,
we note that dependence on the probability scale is something quite
different from dependence on the normal scale. We introduce the
competing risks framework and present the inverse probability of
censoring weighted estimating equations in Section~\ref{sec:ipcw}. The
method is demonstrated in simulations in Section~\ref{sec:sim}.
A worked example based on the Danish twin registry is presented in
Section~\ref{sec:application} followed by a general discussion.
\section{Polygenic models}\label{sec:genetics}
The basic idea of family-studies of a quantitative trait is to exploit
that stronger phenotypic resemblance will be seen between closely
related family members when the trait is genetically determined. In
particular, for twin studies we may exploit that monozygotic (MZ)
twins in principle are genetic copies whereas dizygotic (DZ) twins
genetically on average resemble ordinary siblings. This allows us
under appropriate genetic assumptions to decompose the trait into
genetic and environmental components, $Y=Y_{\text{gene}} +
Y_{\text{envir}}$, which may be modelled using random effects.
Assuming independence between genetic and environmental effects the
\textit{broad-sense heritability} may then be quantified as the
fraction of the total variance due to genetic factors.
The theoretical foundation of modern quantitative genetics was laid out
in the pioneering work of \cite{Fisher:1918}, who formally described
the above genetic decomposition in terms of additive and dominant
genetic effects. Familial resemblance may be defined from the
\textit{kinship-coefficient} $\Phi_{kj}$ which is the probability that
two randomly selected alleles from the same locus of relatives $k$ and
$j$ are \textit{identical by descent}, i.e., the alleles are physical
copies of the same gene carried by a common ancestor. Under
the assumptions of random mating (no inbreeding), linkage equilibrium, no
gene-environment interaction or epistasis, and assuming that parents do not
transmit their environmental effects to their children, this leads to
a covariance between the observed phenotypes $Y_k$ and $Y_j$ for the
relatives given by
\begin{align*}
\cov(Y_k,Y_j) = 2\Phi_{kj}\sigma_A^2 + \Delta_{7kj}\sigma_D^2 + \sigma_C^2,
\end{align*}
where the identity coefficient $\Delta_{7kj}$ describes the
probability that at a given locus both alleles for the two relatives
are identical by descent
\citep{lange02:_mathem_statis_method_genet_analy}. The variance
component $\sigma_A^2$ describes the additive genetic effects,
$\sigma_D^2$ the dominant genetic effects, and $\sigma_C^2$ the
variance of environmental effects shared by the two relatives.
This can be captured in a random effects model where the polygenic phenotype
$Y_{ij}$ may be modelled as
\begin{align}\label{eq:polygenic1}
Y_{ij} = \beta^T X_{ij} + \eta^A_{ij} + \eta^C_{i} + \eta^D_{ij} + \varepsilon_{ij},
\end{align}
for family $i=1,\ldots,n$ and family member $j=1,\ldots,K$ with
covariates $X_{ij}$. Here we assume that there is the same shared
environmental effect for all family members. All the random effects
are assumed to be independent and normally distributed which in
general may be reasonable for polygenic traits \citep{lange97}
\begin{align*}
(\eta^A_{ij},
\eta^C_{i},
\eta^D_{ij},
\varepsilon_{ij})^T
\sim \mathcal{N}\left(0,\diag(\sigma_A^2,\sigma_C^2,\sigma_D^2,\sigma_E^2)\right).
\end{align*}
The residual terms $\varepsilon_{ij}$ are assumed to be iid normal and
the variance component $\sigma_E^2$ may be interpreted as the variance
of the unique environmental effects. The (broad-sense) heritability may
then be defined as
\begin{align*}
H^2 =
\frac{\sigma_A^2+\sigma_D^2}{\sigma_A^2+\sigma_C^2+\sigma_D^2+\sigma_E^2}.
\end{align*}
For MZ twins we have $\Phi_{kj}^{\text{MZ}}=\tfrac{1}{2}$ and
$\Delta_{7kj}^{\text{MZ}}=1$ and for DZ twins
$\Phi_{kj}^{\text{DZ}}=\Delta_{7kj}^{\text{DZ}}=\tfrac{1}{4}$, hence
\begin{align*}
\cov(Y_{i1}^{\text{MZ}}, Y_{i2}^{\text{MZ}}) &=
\begin{pmatrix}
\sigma_A^2+\sigma_C^2+\sigma_D^2 + \sigma_E^2 & \sigma_A^2+\sigma_C^2+\sigma_D^2 \\
\sigma_A^2+\sigma_C^2+\sigma_D^2 & \sigma_A^2+\sigma_C^2+\sigma_D^2 + \sigma_E^2
\end{pmatrix}, \\
\cov(Y_{i1}^{\text{DZ}}, Y_{i2}^{\text{DZ}}) &=
\begin{pmatrix}
\sigma_A^2+\sigma_C^2+\sigma_D^2+\sigma_E^2 & \tfrac{1}{2}\sigma_A^2+\sigma_C^2+\tfrac{1}{4}\sigma_D^2 \\
\tfrac{1}{2}\sigma_A^2+\sigma_C^2+\tfrac{1}{4}\sigma_D^2 & \sigma_A^2+\sigma_C^2+\sigma_D^2 + \sigma_E^2
\end{pmatrix}.
\end{align*}
Note that one consequence of the model is that MZ and DZ twins follow
the same marginal distribution. Unfortunately, the classic twin
design does not allow identification of all variance
components. Further inclusion of other family members or
twin-adoptives can remedy this problem, but may further complicate
assumptions regarding shared/non-shared environmental effects across
different family members. The pragmatic solution is typically to
report results from the most biologically relevant model, i.e., for
certain traits the shared environmental effect may be known to be
negligible, or to choose a sub-model based on some model selection
criterion \citep{Akaike73}. For the classical twin design omitting one
variance component in the above formulation (typically the dominant
genetic component, leading to the so-called ACE-model), the Maximum
Likelihood Estimates can be obtained using specialised software for
family studies \citep{metspackage} or any general Structural Equation
Model implementation.
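To make the decomposition concrete, the following small Python sketch
(variance-component values are purely illustrative) builds the MZ and
DZ covariance matrices displayed above together with the broad-sense
heritability:
\begin{verbatim}
import numpy as np

def twin_cov(sA2, sC2, sD2, sE2):
    # Covariance matrices of (Y_i1, Y_i2) implied by the formulas
    # above: 2*Phi = 1, Delta_7 = 1 for MZ twins, and
    # 2*Phi = 1/2, Delta_7 = 1/4 for DZ twins.
    total = sA2 + sC2 + sD2 + sE2
    mz = sA2 + sC2 + sD2
    dz = 0.5 * sA2 + sC2 + 0.25 * sD2
    MZ = np.array([[total, mz], [mz, total]])
    DZ = np.array([[total, dz], [dz, total]])
    H2 = (sA2 + sD2) / total  # broad-sense heritability
    return MZ, DZ, H2

MZ, DZ, H2 = twin_cov(sA2=0.5, sC2=0.25, sD2=0.0, sE2=0.25)
\end{verbatim}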
\subsection{Liability threshold model}
For binary traits the classical polygenic model \eqref{eq:polygenic1}
may be extended by a model of the form
\begin{eqnarray}
g(\pr(Y_{ij}=1 \mid X_{ij},\eta_{ij}^{A},\eta_{i}^{C},\eta_{ij}^{D})) = \beta^T X_{ij} +
\eta_{ij}^{A} + \eta_{i}^{C} + \eta_{ij}^{D}, \quad j=1,2,
\label{eq:probit1}
\end{eqnarray}
where $g$ is some link-function, $X_{ij}$ are possible covariates that
we wish to adjust for, and $\eta_{ij}^{A},\eta_{ij}^{C},\eta_{ij}^{D}$ are random
effects.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{probit2}
\caption{Liability threshold model where the observed binary pair
$(Y_1,Y_2)$ is a realization defined from underlying unobserved
continuous variables $(Y_1^*,Y_2^*)$ such that $Y_k=1$ exactly
when the \textit{liability} $Y_k^*$ exceeds some threshold $\delta_k$.}
\label{fig:threshold}
\end{figure}
Using the Probit link \citep{falconer67, falconer-mackay-1994,
nealecardon, shamhuman}, Equation \eqref{eq:probit1} gives the
\emph{Liability Threshold Model}, which has been widely adopted since
it leads to a model equivalent to \eqref{eq:polygenic1} for a latent
Gaussian variable (see Figure~\ref{fig:threshold})
\begin{align*}
Y_{ij}^* = \beta^T X_{ij} + \eta^A_{ij} + \eta^C_{i} +
\eta^D_{ij} + \varepsilon_{ij}, \quad j=1,2,
\end{align*}
where we only observe the thresholded version
\begin{align*}
Y_{ij} =
\begin{cases}
1,& Y_{ij}^*\geq\delta_j\\
0,& Y_{ij}^*<\delta_j.
\end{cases}
\end{align*}
For identification, the threshold is fixed at $\delta_j=0$
and the variance of the residual term $\varepsilon_{ij}$ set to one.
On the Probit-scale this corresponds to
\begin{align}\label{eq:liabilityprob}
\pr(Y_{ij}=1 \mid X_{ij},\eta^A_{ij},\eta^C_{i},\eta^D_{ij}) =
\Phi(\beta^T X_{ij} +
\eta^A_{ij} + \eta^C_{i} + \eta^D_{ij}),
\end{align}
noting that the E component is modelled indirectly through the
inverse link-function $\Phi$ which is the standard normal CDF,
i.e., $\sigma_E^2=1$. In the following we will simplify notation and
use $\eta_{ij}$ to denote the total random effect for the $j$th
twin in the $i$th twin-pair.
Note that the corresponding heritability in this model
\begin{align*}
H^2 = \frac{\sigma_A^2+\sigma_D^2}{\sigma_A^2+\sigma_C^2+\sigma_D^2+1},
\end{align*}
relates to the underlying liability scale, and that there is
additional variation present in the data on the risk scale. Using
only the random effects to define a heritability estimate is thus not
comparable to the one from the standard normal model, where all the
variation is included in the heritability estimate.
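The thresholding mechanism is easily simulated; the following Python
sketch generates MZ-like pairs from an ACE liability model with
purely illustrative parameter values and no covariates:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, sA2, sC2 = 100_000, 0.5, 0.25   # sigma_E^2 = 1 via the Probit link

a = rng.normal(0, np.sqrt(sA2), n)  # eta^A: fully shared in MZ pairs
c = rng.normal(0, np.sqrt(sC2), n)  # eta^C: shared environment
e1 = rng.normal(0, 1, n)            # unique environment, twin 1
e2 = rng.normal(0, 1, n)            # unique environment, twin 2

y1 = (a + c + e1 >= 0)              # thresholded latent liabilities
y2 = (a + c + e2 >= 0)

# marginal prevalence is Phi(0) = 0.5 here, and the pairwise
# concordance exceeds 0.25 because the liabilities are correlated
print(y1.mean(), (y1 & y2).mean())
\end{verbatim}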
The Probit random effects analyses have been criticized for ignoring
the time aspect and for not taking censoring into account
\citep{duncan04:genepi}. In
\cite{lichtenstein2000environmental} the analysis was based on the
assumption that the probability of occurrence of cancer for twin $j$
in twin pair $i$ was on the same form as \eqref{eq:probit1} with
\begin{eqnarray}
\pr(\text{twin $j$ gets cancer} \mid \eta_{ij}) = \Phi(\eta_{ij}), \quad j=1,2,
\label{eq:probit2}
\end{eqnarray}
and with the complementary outcome being that the twin died
without getting cancer or still was alive and without cancer at the
time of follow-up. The latter group are thus treated as immune to
cancer after they leave the study, which in general makes the results
of the analysis impossible to interpret. The right-censoring
mechanism therefore has to be taken into account, but
additional information on the timing of the events are needed. In
practice, these event times are typically readily available in family
studies of disease.
\begin{figure}[htbp]
\centering
\tikzstyle{plain2}=[rectangle,thick,minimum height=1.2cm,minimum width=2cm,draw=gray!80]
\begin{tikzpicture}[>=latex,text height=1.5ex,text depth=0.25ex]
\matrix[row sep=0.8cm,column sep=2cm]{
& \node(D) [plain2] {Dead}; \\
\node(A) [plain2] {Alive}; \\
& \node(P) [plain2] {\parbox{1.4cm}{\hfuzz=8pt Prostate \\cancer}}; \\
};
\path[->] (A) edge[thick] node [auto] {$\alpha_{13}(t)$} (D) ;
\path[->] (A) edge[thick] node [auto, swap] {$\alpha_{12}(t)$} (P) ;
\path[->] (P) edge[thick] node [auto, swap] {$\alpha_{23}(t)$} (D) ;
\end{tikzpicture}
\caption{Competing risks model for the two competing risks of death
and prostate cancer with the transition probabilities being
described by the cause-specific hazards
$\alpha_{kl}(t)$.\label{fig:comprisk}}
\end{figure}
\section{Inverse Probability of Censoring Weighted Estimating Equation}\label{sec:ipcw}
The definition of the liability threshold model perceives the states
``prostate cancer'' and ``death'' as static endpoints. Our aim of
adjusting the estimating procedure for the right-censoring, however,
requires us to consider the data in a dynamic framework. A more
natural setting for the data generating mechanism is to consider the
problem in the competing risk setting. In the following let $(T_{ik},
C_{ik},\epsilon_{ik},X_{ik})$ denote the event time, right censoring
time, the cause of failure $\epsilon_{ik}\in\{1,\ldots,J\}$ (e.g.,
cancer or death without cancer), and $p$-dimensional covariate vector
$X_{ik}$ for twin pair $i=1,\ldots,n$ and individual $k=1,2$. We will
assume that the $n$ pairs $\{T_i,C_i,\epsilon_i,X_i\} =
\{\{T_{i1},T_{i2}\},\{C_{i1},C_{i2}\},\{\epsilon_{i1},\epsilon_{i2}\},\{X_{i1},X_{i2}\}\}$
are iid. Due to the right-censoring, we only observe $\widetilde{T}_{ik} =
T_{ik}\wedge C_{ik}$ and $\widetilde{\cause}_{ik}=\epsilon_{ik}\Delta_{ik}$, with
the indicator for $T_{ik}$ denoting an actual event time $\Delta_{ik}
= I(T_{ik}\leq C_{ik})$. We will perceive the data as generated by the
model described by the diagram in Figure~\ref{fig:comprisk}, where
every subject starts in the alive state, and then moves to either of
the two states prostate cancer of death with certain intensities
evolving over time. Note that in our application, we are not aiming to
make inference on the transition from prostate cancer to death.
In the univariate setting, the transition may be characterized by the
cumulative incidence functions
\begin{align*}
F_1(t) = \pr(T\leq t,\epsilon=1),
\end{align*}
which may be estimated by the Aalen-Johansen estimator
\citep{aale:joha:1978,andersencounting} and also generalized to the
regression setting as in \cite{scheikebinomialregr08}. The
bivariate case is more complex but the concordance function
\begin{align*}
\mathcal{C}(t) = \pr(T_1\leq t,T_2\leq t, \epsilon_1=1, \epsilon_2=1),
\end{align*}
may be estimated as described in \cite{scheike13:concordance}. Here we
will only consider a fixed time $\tau$
and characterize the joint probability
\begin{align}\label{eq:tau}
\pr(T_1\leq\tau, T_2\leq\tau, \epsilon_1=1,\epsilon_2=1),
\end{align}
which we will assume can be modelled by a random effect
structure as in \eqref{eq:liabilityprob}
\begin{align}\label{eq:liabilitycomprisk}
\pr(T_{ij}\leq\tau, \epsilon_{ij}=1 \mid \eta_{ij},X_{ij}) = \Phi(\beta^T
X_{ij} + \eta_{ij}).
\end{align}
We will use age as our time-scale, and assuming that everyone were
followed until time $\tau$ this simply corresponds to a standard
liability model where twins are classified as having cancer or not
before time $\tau$, in which case the standard MLE approach of
\eqref{eq:liabilityprob} would be consistent. In practice, a large
fraction of the twins may not have reached the age $\tau$ at the end
of follow-up, and other techniques must be applied.
\subsection{Consistent Estimating Equations}
In this section we will introduce inverse probability weighting to
correct for the right censoring. The intuition for this procedure is
that the observations that have a higher probability of being censored
are under-represented and should therefore count more in the analysis.
These techniques can be traced back to the Horvitz-Thompson estimator
applied in the survey-statistics field \citep{horvitzthompson1952} and
later with many applications in other fields of statistics for dealing
with coarsened data including survival analysis
\citep{rotnitzkyrobbins1995,robins92} and competing risks
\citep{fine_and_gray_competing_risk_1999}. We refer to
\cite{tsiatis2006semiparametric} for a modern and accessible treatment
of the subject in both the parametric and semi-parametric
setting. Here we are interested in estimating dependence between
paired observations which in general complicates the analysis, due to
the need for consistent estimates of the bivariate censoring
probabilities. We will show how the complexity may be reduced
dramatically by exploiting how data is collected in registry studies.
The full-data score equation we obtain from the model
\eqref{eq:liabilitycomprisk} parametrised by $\theta$ (including both
$\beta$ and the parameters of the random effects), when all subjects
are followed until time $\tau$, will be denoted
\begin{align}\label{eq:u0}
\mathcal{U}_0(\theta;X,\widetilde{T},\widetilde{\cause}) = \sum_{i=1}^n
\mathcal{U}_{0i}(\theta; X_{i},\widetilde{T}_i,\widetilde{\cause}_{i}),
\end{align}
where $\mathcal{U}_{0i}(\cdot;X_{i},\widetilde{T}_i,\widetilde{\cause}_{i})$ is the
derivative of the log-likelihood term for a bivariate Probit model
\citep{ashford70probit} for the event
$(\epsilon_{ij}=1,\widetilde{T}_{ij}\leq\tau)$ of the $i$th twin-pair. A nice
property of the Probit random effects model is that the marginal
distribution obtained by integrating over the normal distributed
random effects is also a multivariate Probit model, and the derivative
of the log-likelihood with respect to the parameter vector may in turn
be written as a linear combination of bivariate cumulative normal
distribution functions. The general derivation may be found in
\citep{holst:binarylatent}, and the integration problem related to
evaluating the bivariate cumulative distribution functions can be
dealt with as described in
\citep{genz92:_numer_comput_of_multiv_normal_probab}. In principle, the
same procedure could be applied to higher-dimensional problems thus
allowing us to generalize the modelling framework to larger pedigrees.
We will describe the censoring distribution by its survival function
\begin{align}\label{eq:jointcens}
G_c(t_1,t_2; Z_{i}) = \pr(C_{i1}>t_1,C_{i2}>t_2\mid Z_{i}),
\end{align}
given covariates $Z_{i}$ observed for all twin-pairs
$i=1,\ldots,n$, and we will assume that the failure times are
independent of the censoring times given these covariates.
Furthermore, we will assume that we have a correct model for the censoring
mechanism with estimate $\widehat{G}_c$. We then define the IPCW-adjusted
estimating equation via the new score
\begin{align}\label{eq:ipcw1}
\mathcal{U}(\theta; X,Z,\widetilde{T},\widetilde{\cause}) = \sum_{i=1}^n
\mathcal{U}_i(\theta; X_i,Z_i,\widetilde{T}_i,\widetilde{\cause}_i) = \sum_{i=1}^n
\frac{\Delta_{i1}\Delta_{i2}}{\widehat{G}_c(\widetilde{T}_{i1},\widetilde{T}_{i2};
Z_{i})}\mathcal{U}_{0i}(\theta; X_i,\widetilde{T}_i,\widetilde{\cause}_i).
\end{align}
The censoring mechanism \eqref{eq:jointcens} may be modelled using
frailty models, but in the case where data arises from a twin registry,
censoring will typically be administrative and hence twins are
censored at the same time. In this case
\begin{align}\label{eq:minGc}
G_c(t_1,t_2\mid Z_{i}) = \pr(C_{i}> t_1\vee t_2 \mid Z_{i}) =
G_c(t_1\mid Z_{i})\wedge G_c(t_2\mid Z_{i}).
\end{align}
Therefore, the problem of identifying the bivariate censoring distribution
is simplified to just estimating the marginal
censoring distributions.
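A Python sketch of the resulting weights, using a reverse
Kaplan--Meier estimate of the marginal censoring survival (the data
below are simulated placeholders; the actual analyses in this paper
use parametric or stratified censoring models):
\begin{verbatim}
import numpy as np

def censoring_survival(time, delta):
    # Reverse Kaplan-Meier: the censorings (delta == 0) play the role
    # of events. Returns a step function evaluating Ghat_c(t-).
    # Ties are handled crudely, which suffices for a sketch.
    order = np.argsort(time, kind="stable")
    t = time[order]
    d = (delta[order] == 0).astype(float)
    at_risk = len(t) - np.arange(len(t))
    surv = np.concatenate(([1.0], np.cumprod(1.0 - d / at_risk)))
    def G(s):
        return surv[np.searchsorted(t, s, side="left")]
    return G

rng = np.random.default_rng(2)
T = rng.uniform(50, 100, size=(1000, 2))  # placeholder event ages
C = rng.uniform(60, 110, size=1000)       # one censoring time per pair
Tt1, Tt2 = np.minimum(T[:, 0], C), np.minimum(T[:, 1], C)
D1, D2 = (T[:, 0] <= C).astype(float), (T[:, 1] <= C).astype(float)

# both twins use the same censoring time, so G_c(t1, t2) reduces to
# the marginal survival evaluated at the maximum of the two times
Ghat = censoring_survival(np.r_[Tt1, Tt2], np.r_[D1, D2])
w = D1 * D2 / Ghat(np.maximum(Tt1, Tt2))  # IPCW weights of the score
\end{verbatim}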
Consistency of the parameter estimates relies on a correctly
specified model for the censoring mechanism \eqref{eq:minGc}, which
would suggest a quite rich semi-parametric model for the marginal
censoring distributions.
However, a computational limitation of the semi-parametric
approach is that the calculation of asymptotic standard errors (from
the estimated influence functions as described below) is quite
computationally intensive, of order $\mathcal{O}(n K)$ where $K$ is
the number of event times and $n$ the number of subjects. In large
registry studies a sufficiently flexible parametric survival model may
therefore be preferable. We note that asymptotic double-robustness
could be obtained by adding an augmentation term to the estimating
equation \citep{tsiatis2006semiparametric} requiring just on of the
two models to be correct to obtain consistency. In the following, we
will, however, assume that $G_c$ lies within a parametric family and
let $\widehat{\gamma}$ be a consistent estimator such that
$\widehat{G}_c(\cdot;z) = \widehat{G}_c(\cdot;z,\widehat{\gamma})$.
\begin{thm}\label{thm1}
Let $\{T_i,C_i,\epsilon_i,X_i,Z_i\}$ be iid and $\widehat{\gamma}$
a consistent, regular asymptotically linear estimator for the
parametric censoring distribution. Denote the right-hand-side
terms of \eqref{eq:ipcw1} as $\mathcal{U}_i(\theta_0,\gamma_0)$.
Under the following regularity conditions
\begin{enumerate}
\item\label{cond:deriv} In a neighbourhood of $(\theta_0^T,\gamma_0^T)^T$ the function
$\mathcal{U}_i$ is twice continuously differentiable with
$\E(-\partial\mathcal{U}(\theta_0,\gamma_0)/\partial\theta)$ being
positive-definite.
\item \label{cond:indep} The censoring times $(C_{1i},C_{2i})$ are conditionally
independent of $(T_{1i},T_{2i},\epsilon_{1i},\epsilon_{2i})$ implying
$G_c(t_1-,t_2-; z) = \E(\Delta_{1i}\Delta_{2i} \mid
T_{1i}=t_1,T_{2i}=t_2,Z_{i}=z)$.
\item\label{cond:atrisk} $\pr(T_{1i}>\tau,T_{2i}>\tau)>0$.
\item\label{cond:bounded} The covariates $X_{i}, Z_{i}$ are bounded.
\item\label{cond:positivity} $G_c(t_1,t_2; z)>0$ with probability 1 for $t_1,t_2\in
[0,\tau]$.
\end{enumerate}
the estimator $\widehat{\theta}$ obtained as the root of
\eqref{eq:ipcw1} is consistent and asymptotically normal.
\end{thm}
Consistency follows from conditions \ref{cond:deriv}--\ref{cond:atrisk}
by noting that for any term on the right-hand-side of \eqref{eq:ipcw1},
we obtain for known censoring distribution:
\begin{align*}
\E[\mathcal{U}_i(\theta;X_i,Z_i,\widetilde{T}_i,\widetilde{\cause}_i)] &=
\E\{\E[\mathcal{U}_i(\theta;X_i,Z_i,\widetilde{T}_i,\widetilde{\cause}_i)\mid
X_i,Z_i,\widetilde{T}_i,\widetilde{\cause}_i]\} \\
&= \E\{\E(\Delta_{i1}\Delta_{i2}\mid Z_i,\widetilde{T}_i,\widetilde{\cause}_i)
G_c(\widetilde{T}_{i1},\widetilde{T}_{i2}\mid Z_i)^{-1}\mathcal{U}_{0i}(\theta;
X_i,\widetilde{T}_i,\widetilde{\cause}_i)\} \\
&= \E[\mathcal{U}_{0i}(\theta; X_i,\widetilde{T}_i,\widetilde{\cause}_i)] = 0,
\end{align*}
where we have used correct specification of
both the models \eqref{eq:liabilitycomprisk} and
\eqref{eq:jointcens}. Note that the positive probability of being
at risk is fulfilled when the support of the censoring times lies
within the support of $T_{1i}$ and $T_{2i}$.
We emphasize that
a key regularity condition here is that of positivity
\ref{cond:positivity}, namely that the probability of any twin-pair
being uncensored is strictly larger than zero. In practice, the
probabilities should be sufficiently large to avoid instability of the
estimating equation in smaller sample sizes.
We now sketch the calculation of the asymptotic standard errors of
the estimator. The estimator for $\widehat{\gamma}$ will typically be
a GEE-type $m$-estimator since we will use both twins to estimate the
marginal censoring distribution. This implies asymptotic linearity:
\begin{align*}
\sqrt{n}(\widehat{\gamma}-\gamma_0) = n^{-1/2}\sum_{i=1}^n
\ensuremath{I\!F}_1(\gamma_0; Z_i,\widetilde{T}_i,\widetilde{\cause}_i) + o_p(1),
\end{align*}
where $\ensuremath{I\!F}_1$ is the influence function of the estimator
\citep{stefanski_boos_2002_m-estimator}.
Let $\widehat{\theta}(\widehat{\gamma})$ be the two-stage estimator
obtained by finding the root of \eqref{eq:ipcw1} with the plugin-estimate
of the censoring distribution via $\widehat{\gamma}$. The conditions
of Theorem \ref{thm1} imply that the empirical averages of the derivatives of
the score converges to their corresponding expectations, and a
Taylor expansion of \eqref{eq:ipcw1} around the true parameters
$\theta_0$ and $\gamma_0$, shows that
\begin{align}
\begin{split}\label{eq:IF3}
\sqrt{n}(\widehat{\theta}(\widehat{\gamma}) - \theta_0)
&= n^{-1/2}\sum_{i=1}^n \ensuremath{I\!F}_2(\theta_0; X_i,Z_i,\widetilde{T}_i,\widetilde{\cause}_i) \\
&\hspace*{-6ex}+ n^{-1/2}\,\E[\frac{\partial}{\partial\theta}\mathcal{U}_i(\theta_0,\gamma_0)]^{-1}
\E[\frac{\partial}{\partial\gamma} \mathcal{U}_i(\theta_0,\gamma_0)]
\sum_{i=1}^n\ensuremath{I\!F}_1(\gamma_0; Z_i,\widetilde{T}_i,\widetilde{\cause}_i) + o_p(1) \\
&=
n^{-1/2}\sum_{i=1}^n \ensuremath{I\!F}_3(\theta_0; X_i,Z_i,\widetilde{T}_i,\widetilde{\cause}_i) + o_p(1),
\end{split}
\end{align}
where the first term corresponds to the iid decomposition for known
censoring distribution
\begin{align}\label{eq:IF2}
\sqrt{n}(\widehat{\theta}(\gamma_0)-\theta_0) = n^{-1/2}\sum_{i=1}^n
\ensuremath{I\!F}_2(\theta_0; X_i,\widetilde{T}_i,\widetilde{\cause}_i) + o_p(1).
\end{align}
The influence functions may be estimated from the by-products of the
Newton-Raphson optimization, as the matrix product of the derivative of
the score times the score itself. We refer to
\cite{holst:binarylatent} for expressions for the relevant terms of
$\ensuremath{I\!F}_2$, which are implemented in the \texttt{mets} \texttt{R}-package
\citep{metspackage}.
It follows from \eqref{eq:IF3} that the two-stage estimator is
asymptotically normal and the asymptotic variance of \eqref{eq:IF3}
can be estimated by plugging in the parameter estimates
\begin{align*}
\frac{1}{n}\sum_{i=1}^n
\ensuremath{I\!F}_3(\widehat{\theta},\widehat{\gamma}; X_i,Z_i,\widetilde{T}_i,\widetilde{\cause}_i)^{\otimes 2}.
\end{align*}
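In code, this plug-in variance estimate is simply an average of outer
products of the estimated influence functions; a Python sketch, where
the influence-function matrix is a random placeholder:
\begin{verbatim}
import numpy as np

def asymptotic_variance(IF):
    # (1/n) * sum_i IF_i IF_i^T for an (n x p) matrix of estimated
    # influence functions; each row corresponds to one twin pair.
    return IF.T @ IF / IF.shape[0]

IF3 = np.random.default_rng(3).normal(size=(500, 4))  # placeholder
V = asymptotic_variance(IF3)
se = np.sqrt(np.diag(V) / IF3.shape[0])  # standard errors of theta-hat
\end{verbatim}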
Similar results can be shown in the general case where $\widehat{G}_c$ is
an asymptotically linear consistent estimator of the censoring
distribution, such that
\begin{align*}
\sqrt{n}\left[\widehat{G}_c(t_1,t_2;Z)-G_c(t_1,t_2;Z)\right] =
n^{-1/2}\sum_{i=1}^n\ensuremath{I\!F}_{G_c}(t_1,t_2,Z;
Z_i,\widetilde{T}_{i},\widetilde{\cause}_{i})
+ o_p(1),
\end{align*}
where the iid terms $\ensuremath{I\!F}_{G_c}$ are the influence functions.
For the choice of a Cox-regression, the proof of the consistency and
asymptotic normality of the IPCW estimator follows along the lines of
\cite{scheikebinomialregr08} or \cite{linmedicalcost2000}.
In the case of a Kaplan-Meier estimator the linear expansion above
follows from \cite{gill-thesis}; see also Section IV.3.2 of
\cite{andersencounting}. In the case of a Cox model the linear
expansion is a consequence of the results in Section VII.2.2 and
VII.2.3 of \cite{andersencounting}. Specific technical assumptions are
also given there. Here, the focus is on the use of parametric models
due to the computational advantages.
\subsection{Model Selection and Testing}\label{sec:modelselect}
The main aims in most applications of the Liability Threshold
model will be to \textit{a)} test for a genetic component and \textit{b)}
quantify this effect. The first problem should generally not be
examined in the polygenic model to avoid in part the many genetic
model assumptions and in part the difficulties of testing parameters
on the boundary of the parameter space. A reasonable modelling
approach is generally to initially estimate a more flexible model,
where we instead of a random effects model estimate the parameters of
a bivariate Probit model
\begin{align}\label{eq:biprobit}
\pr(T_1\leq\tau, T_2\leq\tau, \epsilon_1=1,\epsilon_2=1 \mid X_1,X_2) =
\Phi_{\rho_{\text{zyg}}}(\beta_{\text{zyg}}^T X_1,\beta_{\text{zyg}}^T X_2),
\end{align}
where $\Phi_{\rho_{\text{zyg}}}$ is the bivariate normal CDF with mean
0 and variance given by a correlation matrix with correlation
coefficient $\rho_{\text{zyg}}$ depending on zygosity. A test for
identical marginals should be done as a first step, i.e., testing if
$\beta_{\text{MZ}}=\beta_{\text{DZ}}$. Next, a formal test for the
presence of a genetic component can be obtained by testing the null
hypothesis of identical tetrachoric correlations in MZ and DZ twins
$\rho_{\text{MZ}}=\rho_{\text{DZ}}$. Estimates on the risk scale such
as concordance rates are also preferably calculated in this model.
Note that while the test for genetic influence still requires
assumption of same environmental effects in MZ and DZ twins, the many
genetic assumptions of the polygenic model, e.g., linkage
equilibrium and that a subset of ACDE fits the data, are no longer
necessary.
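Evaluating the pairwise probability in \eqref{eq:biprobit} only
requires the bivariate normal CDF; a Python sketch with purely
illustrative parameter values:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def pair_prob(beta, x1, x2, rho):
    # P(both twins affected before tau) = Phi_rho(beta'x1, beta'x2)
    cov = [[1.0, rho], [rho, 1.0]]
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=cov)
    return mvn.cdf([beta @ x1, beta @ x2])

beta = np.array([-1.5])                         # intercept only
p_mz = pair_prob(beta, np.ones(1), np.ones(1), rho=0.6)
p_dz = pair_prob(beta, np.ones(1), np.ones(1), rho=0.3)
\end{verbatim}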
With evidence of a genetic component, the next step should be to
quantify the possible genetic and environmental effects based on the
IPCW adjusted Liability Threshold model \eqref{eq:liabilitycomprisk}.
In population genetics it is common to compare different models using
information criteria such as the AIC \citep{Akaike73}. In general, the
derivation of these measures relies on inference being done within
a maximum likelihood framework, and are no longer generally valid in an
estimating equation framework. The Quasi-AIC (QIC) has been suggested
\citep{QICpan01} in the GEE framework. However, in the case of
\eqref{eq:liabilitycomprisk} the estimating equation corresponds to the
weighted score-function of the complete-data likelihood $\sum_{i=1}^n
\log L_i(\theta; X_i,\widetilde{T}_i,\widetilde{\cause}_i)$ from which \eqref{eq:u0} is obtained.
It follows that
\begin{align*}
\E\left[\frac{\Delta_{i1}\Delta_{i2}}{G_c(T_{i1},T_{i2};
Z_{i})}\log L_i(\theta; X_i,\widetilde{T}_i,\widetilde{\cause}_i)\right] = \E(\log
L_i(\theta; X_i,\widetilde{T}_i,\widetilde{\cause}_i)),
\end{align*}
and hence the weighted AIC
\begin{align*}
\ensuremath{\operatorname{AIC}_{\text{\tiny IPCW}}} =
2\sum_{i=1}^n\frac{\Delta_{i1}\Delta_{i2}}{\widehat{G}_c(T_{i1},T_{i2}; Z_i)}\log
L_i(\theta; X_i,\widetilde{T}_i,\widetilde{\cause}_i) - 2P,
\end{align*}
where $P$ is the number of parameters in $\theta$, will also provide an
approximation of the relative entropy between the estimated model and
the true data generating model, and may therefore serve as a model
selection tool.
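Computationally, the weighted AIC is a one-liner once the per-pair
log-likelihood contributions, the censoring indicators, and the
estimated censoring survival are available; a Python sketch with
hypothetical input arrays:
\begin{verbatim}
import numpy as np

def aic_ipcw(loglik, delta1, delta2, Ghat, n_params):
    # Mirrors the definition above: 2 * sum_i w_i * log L_i - 2P with
    # w_i = Delta_i1 * Delta_i2 / Ghat_c(T_i1, T_i2; Z_i); censored
    # pairs get weight zero, so Ghat only matters where positivity holds.
    w = np.where(delta1 * delta2 > 0, delta1 * delta2 / Ghat, 0.0)
    return 2.0 * np.sum(w * loglik) - 2.0 * n_params
\end{verbatim}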
\section{Simulation study}\label{sec:sim}
We set up a simulation study to examine the properties of our proposed
estimator in a realistic setup. The cumulative incidence function for
cancer conditional on a random effect $\eta_1$, was chosen as
\begin{align*}
F_1(t\mid \eta_1)= \pr(T\leq t,\epsilon=1\mid \eta_1) =
\Phi_{\sigma_{E_1}^2}(\alpha(t)+\Phi^{-1}(p_1)+\eta_1),
\end{align*}
with $\alpha(t) = -\exp(10-0.15t), \ p_1=0.065$. The inverse link-function
$\Phi_{\sigma_{E_1}^2}$ was chosen as a normal CDF with variance
$\sigma_{E_1}^2 = 1-\var(\eta_1)$. This parametrisation leads to a
marginal CIF resembling the distribution observed in the real data
described in Section~\ref{sec:application}, with a marginal lifetime
prevalence of 0.065 (see Figure~\ref{fig:simflat}). The type of cause
(cancer or death without cancer) were simulated from a
Bernoulli-distribution with probability $F_1(\infty) =
\Phi_{\sigma_{E_1}^2}(\eta_1+\Phi^{-1}(0.065))$, and the event times
drawn from $\pr(T\leq t \mid \eta_k,\epsilon=k)$ which for the
competing risk of death was chosen as the distribution
$\Phi_{\sigma_{E_2}^2}(0.1(t-85) + \eta_2)$, again with a marginal
resembling what was observed in the real data example. The random
effect structure $\eta_1$ was chosen as an ACE-model with the
C-component shared across the two competing risks, and with $\eta_2$
only consisting of this shared environmental effect $\eta_2 =
\eta^{C}$. Independent censoring was simulated from a Weibull
distribution with cumulative hazard $\Lambda_0(t) = (\lambda t)^\nu$,
with scale parameter fixed at $\log(\lambda)=-4.5$, and the parameters
were estimated using a marginal model with working independence
structure.
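For concreteness, a condensed Python sketch of this data-generating
mechanism at the individual level; pair-level sharing of $\eta^A$ and
the second twin are omitted for brevity, and the inversions below
simply solve the displayed conditional CDFs for $t$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, p1 = 20_000, 0.065
sA2 = sC2 = 1.0 / 3.0
s1 = np.sqrt(1.0 - (sA2 + sC2))   # sigma_E1: total variance is one
s2 = np.sqrt(1.0 - sC2)           # sigma_E2 for the death risk
c = norm.ppf(p1)

etaC = rng.normal(0.0, np.sqrt(sC2), n)
eta1 = rng.normal(0.0, np.sqrt(sA2), n) + etaC
eta2 = etaC                       # death shares only the C component

F1inf = norm.cdf((c + eta1) / s1) # lifetime cancer risk given eta_1
cancer = rng.uniform(size=n) < F1inf

# cancer ages: solve Phi((alpha(t) + c + eta_1)/s1) = u * F1inf
# for t, with alpha(t) = -exp(10 - 0.15 t)
u = rng.uniform(size=n)
alpha = s1 * norm.ppf(u * F1inf) - c - eta1
t_cancer = (10.0 - np.log(-alpha)) / 0.15

# death ages: solve Phi((0.1 * (t - 85) + eta_2)/s2) = u for t
t_death = 85.0 + (s2 * norm.ppf(rng.uniform(size=n)) - eta2) / 0.1
T = np.where(cancer, t_cancer, t_death)

# Weibull censoring with cumulative hazard (lambda * t)^nu
lam, nu = np.exp(-4.5), np.exp(0.5)
C = rng.exponential(size=n) ** (1.0 / nu) / lam
Tt, Delta = np.minimum(T, C), T <= C
print(cancer.mean(), 1.0 - Delta.mean())  # prevalence, censoring rate
\end{verbatim}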
\begin{figure}[htbp]
\hfuzz=12pt
\centering
\mbox{
\includegraphics[width=0.5\textwidth]{simul2}
\includegraphics[width=0.5\textwidth]{simul1}
}
\caption{Simulated cumulative incidence and concordance function
with $\sigma_A^2=\sigma_C^2=\sigma_E^2=\tfrac{1}{3}$. Thick
lines shows true cumulative incidence for cancer ($F_1$,
benchmark for perfect dependence), MZ concordance function
($\mathcal{C}_{\text{MZ}}$), DZ concordance function
($\mathcal{C}_{\text{DZ}}$), and the squared cumulative
incidence ($F_1^2$, benchmark for independence). The thin
horizontal lines shows the mean estimates and 2.5\% and 97.5\%
quantiles of 1,000 replications with 20,000 twin pairs and 59\%
censoring, for the naive estimator ignoring censoring (left
panel) and the IPCW adjusted estimator (right
panel).\label{fig:simflat}}
\end{figure}
We simulated 10,000 MZ and 10,000 DZ twin pairs from the above model
under three different ACE structures
$(\sigma_A^2,\sigma_C^2,\sigma_E^2)\in
\{(\tfrac{1}{3},\tfrac{1}{3},\tfrac{1}{3}),
(\tfrac{1}{2},\tfrac{1}{4},\tfrac{1}{4}),
(\tfrac{3}{5},\tfrac{1}{5},\tfrac{1}{5})\}$, and with varying degree
of censoring $\log(\nu)\in\{0.5,2\}$ corresponding to roughly 59\% and
48\% right-censoring. In each scenario the naive estimator ignoring
censoring was compared to the IPCW-adjusted estimators based on a
parametric marginal Weibull model, with standard errors based on the
correct influence functions \eqref{eq:IF3} (Weibull${}_2$) and
standard errors based on the influence function \eqref{eq:IF2} without
adjusting for the uncertainty in the weights (Weibull${}_1$), and an
IPCW-adjusted estimator based on the Kaplan-Meier estimator (KM).
\begin{sidewaystable}
\centering
\footnotesize
\begin{tabular}{llcccccccccccc}
\toprule
&& \multicolumn{2}{c}{$F_1$} & \multicolumn{2}{c}{$\mathcal{C}_{MZ}$} & \multicolumn{2}{c}{$\mathcal{C}_{DZ}$} & \multicolumn{2}{c}{$\sigma_A^2$} & \multicolumn{2}{c}{$\sigma_C^2$} & \multicolumn{2}{c}{$\sigma_E^2$}
\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \cmidrule(lr){13-14}
& & Av. & Cv. & Av. & Cv. & Av. & Cv. & Av. & Cv. & Av. & Cv. & Av. & Cv. \\
\midrule & True & 0.065 & & 0.025 & & 0.018 & & 0.333 &
& 0.333 & & 0.333 & \\
\cmidrule(lr){2-14}
\multirow{14}{*}{\parbox{5em}{$\nu=1.6$\\ 59\% cens.}} & \cellcolor{tableShade}Naive & \cellcolor{tableShade}0.031 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.012 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.008 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.280 & \cellcolor{tableShade}0.888 & \cellcolor{tableShade}0.452 & \cellcolor{tableShade}0.559 & \cellcolor{tableShade}0.267 & \cellcolor{tableShade}0.212\\
& \cellcolor{white}Weibull${}_1$ & \cellcolor{white}0.065 & \cellcolor{white}0.948 & \cellcolor{white}0.025 & \cellcolor{white}0.944 & \cellcolor{white}0.018 & \cellcolor{white}0.956 & \cellcolor{white}0.335 & \cellcolor{white}0.957 & \cellcolor{white}0.331 & \cellcolor{white}0.956 & \cellcolor{white}0.334 & \cellcolor{white}0.940\\
& \cellcolor{tableShade}Weibull${}_2$ & \cellcolor{tableShade}0.065 & \cellcolor{tableShade}0.948 & \cellcolor{tableShade}0.025 & \cellcolor{tableShade}0.944 & \cellcolor{tableShade}0.018 & \cellcolor{tableShade}0.957 & \cellcolor{tableShade}0.335 & \cellcolor{tableShade}0.957 & \cellcolor{tableShade}0.331 & \cellcolor{tableShade}0.956 & \cellcolor{tableShade}0.334 & \cellcolor{tableShade}0.940\\
& \cellcolor{white}KM & \cellcolor{white}0.065 &
\cellcolor{white}0.948 & \cellcolor{white}0.025 &
\cellcolor{white}0.944 & \cellcolor{white}0.018 &
\cellcolor{white}0.955 & \cellcolor{white}0.335 &
\cellcolor{white}0.957 & \cellcolor{white}0.331 &
\cellcolor{white}0.955 & \cellcolor{white}0.334 &
\cellcolor{white}0.940\\
\cmidrule(lr){2-14}
& True & 0.065 & & 0.030 & & 0.018 & & 0.500 & &
0.250 & & 0.250 & \\
\cmidrule(lr){2-14}
& \cellcolor{tableShade}Naive & \cellcolor{tableShade}0.031 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.014 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.008 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.414 & \cellcolor{tableShade}0.769 & \cellcolor{tableShade}0.386 & \cellcolor{tableShade}0.453 & \cellcolor{tableShade}0.200 & \cellcolor{tableShade}0.273\\
& \cellcolor{white}Weibull${}_1$ & \cellcolor{white}0.065 & \cellcolor{white}0.952 & \cellcolor{white}0.030 & \cellcolor{white}0.952 & \cellcolor{white}0.018 & \cellcolor{white}0.953 & \cellcolor{white}0.498 & \cellcolor{white}0.956 & \cellcolor{white}0.250 & \cellcolor{white}0.956 & \cellcolor{white}0.252 & \cellcolor{white}0.946\\
& \cellcolor{tableShade}Weibull${}_2$ & \cellcolor{tableShade}0.065 & \cellcolor{tableShade}0.952 & \cellcolor{tableShade}0.030 & \cellcolor{tableShade}0.952 & \cellcolor{tableShade}0.018 & \cellcolor{tableShade}0.953 & \cellcolor{tableShade}0.498 & \cellcolor{tableShade}0.956 & \cellcolor{tableShade}0.250 & \cellcolor{tableShade}0.956 & \cellcolor{tableShade}0.252 & \cellcolor{tableShade}0.946\\
& \cellcolor{white}KM & \cellcolor{white}0.065 &
\cellcolor{white}0.954 & \cellcolor{white}0.030 &
\cellcolor{white}0.954 & \cellcolor{white}0.018 &
\cellcolor{white}0.954 & \cellcolor{white}0.498 &
\cellcolor{white}0.957 & \cellcolor{white}0.250 &
\cellcolor{white}0.955 & \cellcolor{white}0.252 &
\cellcolor{white}0.945\\
\cmidrule(lr){2-14}
& True & 0.065 & & 0.034 & & 0.018 & & 0.600 & &
0.200 & & 0.200 & \\
\cmidrule(lr){2-14}
& \cellcolor{tableShade}Naive & \cellcolor{tableShade}0.031 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.016 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.008 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.491 & \cellcolor{tableShade}0.636 & \cellcolor{tableShade}0.349 & \cellcolor{tableShade}0.365 & \cellcolor{tableShade}0.160 & \cellcolor{tableShade}0.327\\
& \cellcolor{white}Weibull${}_1$ & \cellcolor{white}0.065 & \cellcolor{white}0.946 & \cellcolor{white}0.034 & \cellcolor{white}0.952 & \cellcolor{white}0.018 & \cellcolor{white}0.939 & \cellcolor{white}0.593 & \cellcolor{white}0.950 & \cellcolor{white}0.204 & \cellcolor{white}0.946 & \cellcolor{white}0.203 & \cellcolor{white}0.950\\
& \cellcolor{tableShade}Weibull${}_2$ & \cellcolor{tableShade}0.065 & \cellcolor{tableShade}0.946 & \cellcolor{tableShade}0.034 & \cellcolor{tableShade}0.953 & \cellcolor{tableShade}0.018 & \cellcolor{tableShade}0.942 & \cellcolor{tableShade}0.593 & \cellcolor{tableShade}0.954 & \cellcolor{tableShade}0.204 & \cellcolor{tableShade}0.950 & \cellcolor{tableShade}0.203 & \cellcolor{tableShade}0.951\\
& \cellcolor{white}KM & \cellcolor{white}0.065 &
\cellcolor{white}0.945 & \cellcolor{white}0.034 &
\cellcolor{white}0.952 & \cellcolor{white}0.018 &
\cellcolor{white}0.939 & \cellcolor{white}0.593 &
\cellcolor{white}0.951 & \cellcolor{white}0.204 &
\cellcolor{white}0.948 & \cellcolor{white}0.203 &
\cellcolor{white}0.952\\
\midrule
& True & 0.065 & & 0.025 & & 0.018 & & 0.333 & &
0.333 & & 0.333 & \\
\midrule
\multirow{14}{*}{\parbox{5em}{$\nu=7.4$\\48\% cens.}} & \cellcolor{tableShade}Naive & \cellcolor{tableShade}0.048 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.018 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.012 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.318 & \cellcolor{tableShade}0.951 & \cellcolor{tableShade}0.366 & \cellcolor{tableShade}0.907 & \cellcolor{tableShade}0.315 & \cellcolor{tableShade}0.850\\
& \cellcolor{white}Weibull${}_1$ & \cellcolor{white}0.065 & \cellcolor{white}0.955 & \cellcolor{white}0.025 & \cellcolor{white}0.948 & \cellcolor{white}0.018 & \cellcolor{white}0.951 & \cellcolor{white}0.332 & \cellcolor{white}0.953 & \cellcolor{white}0.333 & \cellcolor{white}0.955 & \cellcolor{white}0.334 & \cellcolor{white}0.949\\
& \cellcolor{tableShade}Weibull${}_2$ & \cellcolor{tableShade}0.065 & \cellcolor{tableShade}0.955 & \cellcolor{tableShade}0.025 & \cellcolor{tableShade}0.948 & \cellcolor{tableShade}0.018 & \cellcolor{tableShade}0.953 & \cellcolor{tableShade}0.332 & \cellcolor{tableShade}0.955 & \cellcolor{tableShade}0.333 & \cellcolor{tableShade}0.956 & \cellcolor{tableShade}0.334 & \cellcolor{tableShade}0.950\\
& \cellcolor{white}KM & \cellcolor{white}0.065 & \cellcolor{white}0.956 & \cellcolor{white}0.025 & \cellcolor{white}0.950 & \cellcolor{white}0.018 & \cellcolor{white}0.955 & \cellcolor{white}0.333 & \cellcolor{white}0.954 & \cellcolor{white}0.332 & \cellcolor{white}0.954 & \cellcolor{white}0.335 & \cellcolor{white}0.953\\
\cmidrule(lr){2-14}
& True & 0.065 & & 0.030 & & 0.018 & & 0.500 & &
0.250 & & 0.250 & \\
\cmidrule(lr){2-14}
& \cellcolor{tableShade}Naive & \cellcolor{tableShade}0.048 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.021 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.012 & \cellcolor{tableShade}0.001 & \cellcolor{tableShade}0.477 & \cellcolor{tableShade}0.936 & \cellcolor{tableShade}0.287 & \cellcolor{tableShade}0.896 & \cellcolor{tableShade}0.236 & \cellcolor{tableShade}0.865\\
& \cellcolor{white}Weibull${}_1$ & \cellcolor{white}0.065 & \cellcolor{white}0.946 & \cellcolor{white}0.030 & \cellcolor{white}0.965 & \cellcolor{white}0.018 & \cellcolor{white}0.938 & \cellcolor{white}0.496 & \cellcolor{white}0.952 & \cellcolor{white}0.252 & \cellcolor{white}0.945 & \cellcolor{white}0.252 & \cellcolor{white}0.950\\
& \cellcolor{tableShade}Weibull${}_2$ & \cellcolor{tableShade}0.065 & \cellcolor{tableShade}0.946 & \cellcolor{tableShade}0.030 & \cellcolor{tableShade}0.966 & \cellcolor{tableShade}0.018 & \cellcolor{tableShade}0.941 & \cellcolor{tableShade}0.496 & \cellcolor{tableShade}0.958 & \cellcolor{tableShade}0.252 & \cellcolor{tableShade}0.952 & \cellcolor{tableShade}0.252 & \cellcolor{tableShade}0.952\\
& \cellcolor{white}KM & \cellcolor{white}0.065 & \cellcolor{white}0.958 & \cellcolor{white}0.030 & \cellcolor{white}0.964 & \cellcolor{white}0.018 & \cellcolor{white}0.942 & \cellcolor{white}0.498 & \cellcolor{white}0.957 & \cellcolor{white}0.251 & \cellcolor{white}0.952 & \cellcolor{white}0.251 & \cellcolor{white}0.957\\
\cmidrule(lr){2-14}
& True & 0.065 & & 0.034 & & 0.018 & & 0.600 & &
0.200 & & 0.200 & \\
\cmidrule(lr){2-14}
& \cellcolor{tableShade}Naive & \cellcolor{tableShade}0.048 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.024 & \cellcolor{tableShade}0.000 & \cellcolor{tableShade}0.012 & \cellcolor{tableShade}0.003 & \cellcolor{tableShade}0.570 & \cellcolor{tableShade}0.918 & \cellcolor{tableShade}0.240 & \cellcolor{tableShade}0.877 & \cellcolor{tableShade}0.189 & \cellcolor{tableShade}0.871\\
& \cellcolor{white}Weibull${}_1$ & \cellcolor{white}0.065 & \cellcolor{white}0.952 & \cellcolor{white}0.034 & \cellcolor{white}0.940 & \cellcolor{white}0.018 & \cellcolor{white}0.940 & \cellcolor{white}0.598 & \cellcolor{white}0.922 & \cellcolor{white}0.201 & \cellcolor{white}0.924 & \cellcolor{white}0.201 & \cellcolor{white}0.939\\
& \cellcolor{tableShade}Weibull${}_2$ & \cellcolor{tableShade}0.065 & \cellcolor{tableShade}0.952 & \cellcolor{tableShade}0.034 & \cellcolor{tableShade}0.942 & \cellcolor{tableShade}0.018 & \cellcolor{tableShade}0.966 & \cellcolor{tableShade}0.598 & \cellcolor{tableShade}0.948 & \cellcolor{tableShade}0.201 & \cellcolor{tableShade}0.949 & \cellcolor{tableShade}0.201 & \cellcolor{tableShade}0.944\\
& \cellcolor{white}KM & \cellcolor{white}0.065 & \cellcolor{white}0.957 & \cellcolor{white}0.034 & \cellcolor{white}0.945 & \cellcolor{white}0.018 & \cellcolor{white}0.941 & \cellcolor{white}0.599 & \cellcolor{white}0.931 & \cellcolor{white}0.200 & \cellcolor{white}0.932 & \cellcolor{white}0.202 & \cellcolor{white}0.948\\
\bottomrule
\end{tabular}
\caption{Simulation based on n=10,000 MZ and
DZ twin pairs. Average (Av.) of estimates across 1,000 replications
and coverage probabilities (Cv.) of corresponding 95\% confidence
limits is shown for prevalence ($F_1$), MZ concordance
($\mathcal{C}_{\text{MZ}}$), DZ concordance
($\mathcal{C}_{\text{DZ}}$), and the variance components
$\sigma_A^2$, $\sigma_C^2$ and $\sigma_E^2$. Results are shown for the
naive estimator not taking the censoring into account (Naive), Weibull IPCW
ignoring uncertainty in weights (Weibull${}_1$), Weibull IPCW with
correct standard errors (Weibull${}_2$), and Kaplan-Meier without
adjustment for uncertainty in weights (KM).
\label{tab:sim1}}
\end{sidewaystable}
The results of the simulation study are summarized in
Table~\ref{tab:sim1} with average estimates and coverage probabilities
of the 95\% confidence limits reported for the prevalence $F_1$,
concordance in MZ twins $\mathcal{C}_{\text{MZ}}$, concordance in DZ
twins $\mathcal{C}_{\text{DZ}}$, and the variance components
$\sigma_A^2$, $\sigma_C^2$, and $\sigma_E^2$. In general, the naive
estimates ignoring the censoring mechanism show a large downward bias
with poor coverage for both the prevalence and concordance estimates,
as expected. In these simulations the bias of the heritability
estimate $\sigma_A^2$ is negative in all cases, with coverage
deteriorating for larger true values of $\sigma_A^2$. As discussed in
\cite{scheike13:lida}, the bias in the heritability estimates may,
however, go in either direction depending on both the dependence
structure and the censoring distribution. The intuition is that while
the concordance is biased downwards in both MZ and DZ twins, the
relative change may be larger or smaller in the DZ twins.
\begin{table}
\footnotesize
\centering
\begin{tabular}{clrrrrrrrc}
\toprule
& & \multicolumn{1}{c}{True} & \multicolumn{3}{c}{IPCW} & \multicolumn{3}{c}{Naive} & \\ \cmidrule(lr){4-6}\cmidrule(lr){7-9}
& & & Av. & Cov. & MSE & Av. & Cov. & MSE \\ \midrule
&\cellcolor{col1}{$F_1$} &\cellcolor{col1}{ 0.065} &\cellcolor{col1}{ 0.065} &\cellcolor{col1}{ 0.962} &\cellcolor{col1}{0.0004} &\cellcolor{col1}{ 0.035} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{0.0916}\\
&\cellcolor{col2}{$\rho_{MZ}$} &\cellcolor{col2}{ 0.667} &\cellcolor{col2}{ 0.664} &\cellcolor{col2}{ 0.970} &\cellcolor{col2}{0.0746} &\cellcolor{col2}{ 0.736} &\cellcolor{col2}{ 0.160} &\cellcolor{col2}{0.5282}\\
&\cellcolor{col1}{$\rho_{DZ}$} &\cellcolor{col1}{ 0.500} &\cellcolor{col1}{ 0.499} &\cellcolor{col1}{ 0.951} &\cellcolor{col1}{0.1343} &\cellcolor{col1}{ 0.600} &\cellcolor{col1}{ 0.107} &\cellcolor{col1}{1.0914}\\
&\cellcolor{col2}{$\mathcal{C}_{MZ}$} &\cellcolor{col2}{ 0.025} &\cellcolor{col2}{ 0.025} &\cellcolor{col2}{ 0.974} &\cellcolor{col2}{0.0003} &\cellcolor{col2}{ 0.014} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{0.0137}\\
&\cellcolor{col1}{$\mathcal{C}_{DZ}$} &\cellcolor{col1}{ 0.018} &\cellcolor{col1}{ 0.018} &\cellcolor{col1}{ 0.951} &\cellcolor{col1}{0.0003} &\cellcolor{col1}{ 0.010} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{0.0065}\\
&\cellcolor{col2}{$\lambda_{R,MZ}$} &\cellcolor{col2}{ 6.000} &\cellcolor{col2}{ 5.971} &\cellcolor{col2}{ 0.955} &\cellcolor{col2}{13.562} &\cellcolor{col2}{11.347} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{2897.3}\\
&\cellcolor{col1}{$\lambda_{R,DZ}$} &\cellcolor{col1}{ 4.172} &\cellcolor{col1}{ 4.171} &\cellcolor{col1}{ 0.954} &\cellcolor{col1}{12.133} &\cellcolor{col1}{ 7.976} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{1484.2}\\
&\cellcolor{col2}{$\log(\text{OR})_{MZ}$} &\cellcolor{col2}{ 2.670} &\cellcolor{col2}{ 2.660} &\cellcolor{col2}{ 0.968} &\cellcolor{col2}{1.7900} &\cellcolor{col2}{ 3.373} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{50.969}\\
&\cellcolor{col1}{$\log(\text{OR})_{DZ}$} &\cellcolor{col1}{ 1.942} &\cellcolor{col1}{ 1.940} &\cellcolor{col1}{ 0.955} &\cellcolor{col1}{2.1320} &\cellcolor{col1}{ 2.662} &\cellcolor{col1}{ 0.004} &\cellcolor{col1}{53.658}\\
&\cellcolor{col2}{$\sigma_A^2$} &\cellcolor{col2}{ 0.333} &\cellcolor{col2}{ 0.330} &\cellcolor{col2}{ 0.953} &\cellcolor{col2}{0.8823} &\cellcolor{col2}{ 0.272} &\cellcolor{col2}{ 0.866} &\cellcolor{col2}{0.9011}\\
&\cellcolor{col1}{$\sigma_C^2$} &\cellcolor{col1}{ 0.333} &\cellcolor{col1}{ 0.334} &\cellcolor{col1}{ 0.946} &\cellcolor{col1}{0.6352} &\cellcolor{col1}{ 0.464} &\cellcolor{col1}{ 0.427} &\cellcolor{col1}{2.1052}\\
&\cellcolor{col2}{$\sigma_E^2$} &\cellcolor{col2}{ 0.333} &\cellcolor{col2}{ 0.336} &\cellcolor{col2}{ 0.967} &\cellcolor{col2}{0.0746} &\cellcolor{col2}{ 0.264} &\cellcolor{col2}{ 0.137} &\cellcolor{col2}{0.5282}\\ \midrule
&\cellcolor{col1}{$F_1$} &\cellcolor{col1}{ 0.065} &\cellcolor{col1}{ 0.065} &\cellcolor{col1}{ 0.941} &\cellcolor{col1}{0.0005} &\cellcolor{col1}{ 0.035} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{0.0921}\\
&\cellcolor{col2}{$\rho_{MZ}$} &\cellcolor{col2}{ 0.750} &\cellcolor{col2}{ 0.748} &\cellcolor{col2}{ 0.941} &\cellcolor{col2}{0.0618} &\cellcolor{col2}{ 0.804} &\cellcolor{col2}{ 0.213} &\cellcolor{col2}{0.3272}\\
&\cellcolor{col1}{$\rho_{DZ}$} &\cellcolor{col1}{ 0.500} &\cellcolor{col1}{ 0.499} &\cellcolor{col1}{ 0.949} &\cellcolor{col1}{0.1396} &\cellcolor{col1}{ 0.601} &\cellcolor{col1}{ 0.108} &\cellcolor{col1}{1.0966}\\
&\cellcolor{col2}{$\mathcal{C}_{MZ}$} &\cellcolor{col2}{ 0.030} &\cellcolor{col2}{ 0.030} &\cellcolor{col2}{ 0.948} &\cellcolor{col2}{0.0004} &\cellcolor{col2}{ 0.016} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{0.0196}\\
&\cellcolor{col1}{$\mathcal{C}_{DZ}$} &\cellcolor{col1}{ 0.018} &\cellcolor{col1}{ 0.018} &\cellcolor{col1}{ 0.943} &\cellcolor{col1}{0.0003} &\cellcolor{col1}{ 0.010} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{0.0065}\\
&\cellcolor{col2}{$\lambda_{R,MZ}$} &\cellcolor{col2}{ 7.166} &\cellcolor{col2}{ 7.154} &\cellcolor{col2}{ 0.944} &\cellcolor{col2}{17.438} &\cellcolor{col2}{13.565} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{4144.1}\\
&\cellcolor{col1}{$\lambda_{R,DZ}$} &\cellcolor{col1}{ 4.172} &\cellcolor{col1}{ 4.173} &\cellcolor{col1}{ 0.947} &\cellcolor{col1}{13.063} &\cellcolor{col1}{ 7.996} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{1500.3}\\
&\cellcolor{col2}{$\log(\text{OR})_{MZ}$} &\cellcolor{col2}{ 3.118} &\cellcolor{col2}{ 3.113} &\cellcolor{col2}{ 0.943} &\cellcolor{col2}{2.1903} &\cellcolor{col2}{ 3.824} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{51.544}\\
&\cellcolor{col1}{$\log(\text{OR})_{DZ}$} &\cellcolor{col1}{ 1.942} &\cellcolor{col1}{ 1.939} &\cellcolor{col1}{ 0.945} &\cellcolor{col1}{2.2416} &\cellcolor{col1}{ 2.664} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{54.034}\\
&\cellcolor{col2}{$\sigma_A^2$} &\cellcolor{col2}{ 0.500} &\cellcolor{col2}{ 0.499} &\cellcolor{col2}{ 0.945} &\cellcolor{col2}{0.8144} &\cellcolor{col2}{ 0.407} &\cellcolor{col2}{ 0.716} &\cellcolor{col2}{1.3176}\\
&\cellcolor{col1}{$\sigma_C^2$} &\cellcolor{col1}{ 0.250} &\cellcolor{col1}{ 0.249} &\cellcolor{col1}{ 0.944} &\cellcolor{col1}{0.6247} &\cellcolor{col1}{ 0.397} &\cellcolor{col1}{ 0.332} &\cellcolor{col1}{2.5247}\\
&\cellcolor{col2}{$\sigma_E^2$} &\cellcolor{col2}{ 0.250} &\cellcolor{col2}{ 0.252} &\cellcolor{col2}{ 0.938} &\cellcolor{col2}{0.0618} &\cellcolor{col2}{ 0.196} &\cellcolor{col2}{ 0.169} &\cellcolor{col2}{0.3272}\\ \midrule
&\cellcolor{col1}{$F_1$} &\cellcolor{col1}{ 0.065} &\cellcolor{col1}{ 0.065} &\cellcolor{col1}{ 0.952} &\cellcolor{col1}{0.0005} &\cellcolor{col1}{ 0.035} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{0.0919}\\
&\cellcolor{col2}{$\rho_{MZ}$} &\cellcolor{col2}{ 0.800} &\cellcolor{col2}{ 0.799} &\cellcolor{col2}{ 0.949} &\cellcolor{col2}{0.0476} &\cellcolor{col2}{ 0.845} &\cellcolor{col2}{ 0.239} &\cellcolor{col2}{0.2205}\\
&\cellcolor{col1}{$\rho_{DZ}$} &\cellcolor{col1}{ 0.500} &\cellcolor{col1}{ 0.499} &\cellcolor{col1}{ 0.955} &\cellcolor{col1}{0.1368} &\cellcolor{col1}{ 0.600} &\cellcolor{col1}{ 0.114} &\cellcolor{col1}{1.0871}\\
&\cellcolor{col2}{$\mathcal{C}_{MZ}$} &\cellcolor{col2}{ 0.034} &\cellcolor{col2}{ 0.034} &\cellcolor{col2}{ 0.951} &\cellcolor{col2}{0.0005} &\cellcolor{col2}{ 0.018} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{0.0243}\\
&\cellcolor{col1}{$\mathcal{C}_{DZ}$} &\cellcolor{col1}{ 0.018} &\cellcolor{col1}{ 0.018} &\cellcolor{col1}{ 0.939} &\cellcolor{col1}{0.0003} &\cellcolor{col1}{ 0.010} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{0.0065}\\
&\cellcolor{col2}{$\lambda_{R,MZ}$} &\cellcolor{col2}{ 7.987} &\cellcolor{col2}{ 7.988} &\cellcolor{col2}{ 0.956} &\cellcolor{col2}{17.964} &\cellcolor{col2}{15.101} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{5109.9}\\
&\cellcolor{col1}{$\lambda_{R,DZ}$} &\cellcolor{col1}{ 4.172} &\cellcolor{col1}{ 4.175} &\cellcolor{col1}{ 0.956} &\cellcolor{col1}{12.758} &\cellcolor{col1}{ 7.983} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{1489.1}\\
&\cellcolor{col2}{$\log(\text{OR})_{MZ}$} &\cellcolor{col2}{ 3.441} &\cellcolor{col2}{ 3.442} &\cellcolor{col2}{ 0.951} &\cellcolor{col2}{2.3085} &\cellcolor{col2}{ 4.147} &\cellcolor{col2}{ 0.000} &\cellcolor{col2}{51.565}\\
&\cellcolor{col1}{$\log(\text{OR})_{DZ}$} &\cellcolor{col1}{ 1.942} &\cellcolor{col1}{ 1.940} &\cellcolor{col1}{ 0.955} &\cellcolor{col1}{2.1908} &\cellcolor{col1}{ 2.662} &\cellcolor{col1}{ 0.000} &\cellcolor{col1}{53.664}\\
&\cellcolor{col2}{$\sigma_A^2$} &\cellcolor{col2}{ 0.600} &\cellcolor{col2}{ 0.600} &\cellcolor{col2}{ 0.954} &\cellcolor{col2}{0.7214} &\cellcolor{col2}{ 0.489} &\cellcolor{col2}{ 0.596} &\cellcolor{col2}{1.6372}\\
&\cellcolor{col1}{$\sigma_C^2$} &\cellcolor{col1}{ 0.200} &\cellcolor{col1}{ 0.199} &\cellcolor{col1}{ 0.958} &\cellcolor{col1}{0.5866} &\cellcolor{col1}{ 0.356} &\cellcolor{col1}{ 0.262} &\cellcolor{col1}{2.7722}\\
&\cellcolor{col2}{$\sigma_E^2$} &\cellcolor{col2}{ 0.200} &\cellcolor{col2}{ 0.201} &\cellcolor{col2}{ 0.945} &\cellcolor{col2}{0.0476} &\cellcolor{col2}{ 0.155} &\cellcolor{col2}{ 0.178} &\cellcolor{col2}{0.2205}
\\
\bottomrule
\end{tabular}
\caption{Simulation based on n=10,000 MZ and
DZ twin pairs with continuous covariate affecting both the
censoring mechanism and the transition probabilities to cancer and death\label{tab:simx}.
Average (Av.) of estimates across 1,000 replications,
coverage probabilities (Cov.) of corresponding 95\% confidence
limits, and Mean Squared Error multiplied by 100 (MSE)
is shown for prevalence ($F_1$), concordance
($\mathcal{C}_{\text{MZ}}$, $\mathcal{C}_{\text{DZ}}$),
relative recurrence risks ratios ($\lambda_{R,\text{MZ}}$,
$\lambda_{R,\text{DZ}}$), and log odds-ratios
($\log(\text{OR})_{MZ}$, $\log(\text{OR})_{DZ}$),
and the variance components
$\sigma_A^2$, $\sigma_C^2$ and $\sigma_E^2$.
Results are shown for the
naive estimator ignoring the censorings (Naive), and Weibull IPCW
using a correct model for the censoring (IPCW).
}
\end{table}
Generally, the loss in efficiency using the Kaplan-Meier estimator
seemed to be very modest. Interestingly, the two IPCW-adjusted
estimators ignoring the uncertainty in the estimated weights (KM and
Weibull${}_1$) showed excellent coverage probabilities across almost
all scenarios. This may be explained by the high degree of censoring
(as also seen in the real data), which causes the variance of the
estimator to be dominated by the contribution from \eqref{eq:ipcw1},
where only the uncensored pairs enter. A
tendency was in fact seen towards slightly smaller coverage when the
censoring was smaller and heritability higher, while the estimator
with confidence limits based on \eqref{eq:IF3} performs seemingly
better here. Ignoring the estimated censoring probabilities can in
some situations lead to conservative estimates
\citep{rotnitzkyrobbins1995}. This does not seem to be the case here,
and may be a consequence of the estimator of the censoring distribution
being a GEE-type estimator rather than an MLE.
We also examined the effect of introducing a covariate affecting both
the censoring and the transition probabilities to death or cancer. Given a
normally distributed covariate $X\sim\mathcal{N}(0,0.25)$, the shared
environmental effect (C component) was defined as
$\eta^C=0.5X+\eta^{C_0}$ with $\eta^{C_0}\sim\mathcal{N}(0,\sigma^2_C
- 0.0625)$, and the random effect for the competing risk of death was
defined as $\eta_2=\eta^C-0.25X$. For the censoring mechanism we used
a proportional hazards model with baseline hazard as described
above, with shape parameter $\log(\nu)=0.5$, such that the cumulative
hazard took the form $\Lambda(t) = \Lambda_0(t)\exp(-X)$. Results are
summarized in Table \ref{tab:simx}, and are generally very comparable
to the results of Table \ref{tab:sim1}.
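For concreteness, the covariate and random-effect construction of this
scenario can be sketched in a few lines of Python (a minimal sketch of
this part of the simulation only: the seed and the baseline Weibull
scale are illustrative choices of ours, and the remaining
components---the A and E effects, the illness-death transition model,
and the estimators themselves---are omitted):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)   # fixed seed, for reproducibility only
n = 10_000                       # number of twin pairs
sigma2_C = 0.2                   # true shared-environment variance

# Covariate X ~ N(0, 0.25): variance 0.25, standard deviation 0.5.
X = rng.normal(0.0, np.sqrt(0.25), size=n)

# Shared environmental effect eta_C = 0.5*X + eta_C0 with
# Var(eta_C0) = sigma2_C - 0.0625; since Var(0.5*X) = 0.25*0.25
# = 0.0625, the total variance of eta_C remains sigma2_C.
eta_C0 = rng.normal(0.0, np.sqrt(sigma2_C - 0.0625), size=n)
eta_C = 0.5 * X + eta_C0

# Random effect for the competing risk of death.
eta_2 = eta_C - 0.25 * X

# Censoring times: proportional hazards with cumulative hazard
# Lambda(t) = Lambda_0(t)*exp(-X), Weibull baseline
# Lambda_0(t) = (t/scale)**nu with log(nu) = 0.5, drawn by inverting
# the survival function exp(-Lambda(t)).
nu = np.exp(0.5)
scale = 90.0                     # illustrative baseline scale only
U = rng.uniform(size=n)
cens_time = scale * (-np.log(U) * np.exp(X)) ** (1.0 / nu)
\end{verbatim}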
\section{Application to twin cancer data}\label{sec:application}
Studying the genetic influence on a complex trait such as cancer is
central to understanding its etiology, treatment and prevention.
Twin and general family studies have reported low to moderate genetic
influence (\cite{lichtenstein2000environmental} and
\cite{baker2005biometrics}). Based on a combined Nordic study of the
Danish, Finnish and Swedish twins registries,
\cite{lichtenstein2000environmental} concluded that 42\% of variation
in prostate-cancer liability was due to genetic factors (95\%
confidence limits 0.29--0.50). However, in these cohorts around 70\%
of the participants are censored, resulting in biased estimates of all
population parameters, including prevalences, concordance
rates and heritability, as discussed in the previous sections.
We investigate genetic influence on prostate cancer using the
population-based cohort of Danish twins born 1900 to 1982,
constituting N = $15,509$ male pairs, of whom $5,488$ MZ and $10,021$
same-sex male DZ pairs are eligible for studying prostate cancer. The
cohort is followed up with respect to survival status as of July
2009. Data on cancer diagnosis, status and time of event were
obtained from the National Cancer Registry, which was initiated in 1943
(see \cite{hjelmborgprostate} for a further
description of the cohort). The numbers of pairs by status of cancer and death
can be seen in Table~\ref{tab:pairs}.
\begin{table}[htbp]\centering
\begin{tabular}{@{} l l l l @{}}
\toprule
\multicolumn{4}{@{}c@{}}{\textbf{Number of pairs at time of
follow-up}} \\
\hline
MZ \& DZ & Prostate cancer & No cancer and dead & No cancer and alive
\\ \midrule
Prostate cancer & 25 \& 14 & 178 & 108 \\
No cancer and dead & 70 & 843 \& 1,694 & 1,319 \\
No cancer and alive & 39 & 492 & 4,019 \& 6,708 \\
\bottomrule
\end{tabular}
\caption{Number of pairs by status at time of follow-up with MZ pairs in lower left triangle and DZ pairs in upper right triangle. }
\label{tab:pairs}
\end{table}
There was a significant difference between the censoring distributions of
MZ and DZ twins. This may in part be explained by the increased use of in
vitro fertilization over time, which has caused a change in the DZ/MZ
distribution and perhaps consequently also in the censoring distributions in
this cohort. We therefore based the IPCW model on a stratified
Kaplan-Meier model.
As described in Section~\ref{sec:modelselect} we first examined if the
marginal distributions within MZ and DZ twins could be assumed to be
the same (p=0.52). In the reduced model with identical marginals the
tetrachoric correlation was 0.63 (0.47--0.75) for MZ pairs and 0.25
(0.07--0.41) for DZ pairs. A test for genetic effects was performed by
comparing these correlation coefficients, which yielded a p-value of
0.001, indicating strong evidence in support of a genetic
contribution. In the polygenic models the \ensuremath{\operatorname{AIC}_{\text{\tiny IPCW}}}\, was slightly in favour
of the ADE model, but very similar results were obtained from the AE and ACE models
in terms of broad-sense heritability. For the chosen ADE model the
broad-sense heritability was 0.63 (0.49--0.77). The results are summarized
in Table~\ref{tab:prostateresults} together with the biased naive
estimates as a reference. Here we also report the \textit{casewise
concordance} \citep{witte99:_likel_based_approac_estim_twin}, i.e.,
the conditional probability that a twin gets cancer given the co-twin
got cancer, and the \textit{relative recurrence risk ratio} which
describes the excess risk of prostate cancer for a twin given the
co-twin got prostate cancer, compared to the marginal (population)
risk
\begin{align*}
\lambda_R = \frac{\pr(T_1\leq\tau, T_2\leq\tau, \epsilon_1=1,\epsilon_2=1)}{\pr(T_1\leq\tau,\epsilon_1=1)^2}.
\end{align*}
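As a back-of-the-envelope check, the reported ratios are roughly
consistent (up to rounding of the reported estimates) with this formula
evaluated at the point estimates of Table~\ref{tab:prostateresults}:
\begin{align*}
\lambda_{R,\text{MZ}} \approx \frac{\mathcal{C}_{\text{MZ}}}{F_1^2} = \frac{0.019}{0.055^2} \approx 6.3,
\qquad
\lambda_{R,\text{DZ}} \approx \frac{0.007}{0.055^2} \approx 2.3.
\end{align*}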
All estimates except for the heritability are reported from the more
parsimonious bivariate Probit model, but the results were almost identical
to the estimates from the ADE model.
In conclusion, we see strong evidence for a genetic component in the
development of prostate cancer. As expected,
the naive estimator provides heavily downward biased estimates of the
prevalence and concordance, and in this case upward bias of the
heritability estimates and relative recurrence risk ratio estimates.
\begin{table}[htbp]
\centering
\begin{tabular}{lrr}
\toprule
& \textbf{IPCW-adjusted} & \textbf{Naive} \\
\toprule
$F_1$ & 0.055 (0.049; 0.062) & 0.015 (0.014; 0.017) \\
\midrule
$\rho_{\text{MZ}}$ & 0.626 (0.466; 0.746) & 0.730 (0.629; 0.807) \\
$\rho_{\text{DZ}}$ & 0.248 (0.068; 0.412) & 0.350 (0.224; 0.465) \\
\midrule
$\mathcal{C}_{\text{MZ}}$ & 0.019 (0.013; 0.027) & 0.005 (0.004; 0.007) \\
$\mathcal{C}_{\text{DZ}}$ & 0.007 (0.004; 0.013) & 0.001 (0.001;
0.002) \\
\midrule
$\mathcal{C}_{\text{MZ}}/F_1$ & 0.340 (0.241; 0.455) & 0.324 (0.240; 0.421) \\
$\mathcal{C}_{\text{DZ}}/F_1$ & 0.130 (0.076; 0.215) & 0.087 (0.053; 0.140) \\
\midrule
$\lambda_{R,{\text{MZ}}}$ & 6.166 (4.132; 8.201) & 21.17 (15.25; 27.10) \\
$\lambda_{R,{\text{DZ}}}$ & 2.360 (1.148; 3.571) & 5.713 (2.966; 8.459)
\\
\midrule
$H^2$ & 0.626 (0.486; 0.766) & 0.73 (0.642; 0.819) \\
\bottomrule
\end{tabular}
\caption{Estimates (and 95\% confidence limits) of association of prostate cancer for MZ and DZ
twins based on bivariate Probit model. The first column
contains the IPCW-adjusted estimates and the second column the
biased estimates ignoring the right-censoring mechanism.
We show estimates of prevalence $F_1$,
tetrachoric correlations $\rho$, concordance $\mathcal{C}$,
casewise concordance $\mathcal{C}/F_1$, and relative recurrence
risk ratio $\lambda_R$. The broad-sense heritability estimate $H^2$ is based on an ADE-model.}
\label{tab:prostateresults}
\end{table}
We also examined how the association between MZ and DZ twins might
depend on age by choosing different values of $\tau$ in
\eqref{eq:tau}, with different parameters at each time point. As
shown in Figure \ref{fig:timerisk}, this allows us to describe the
cumulative incidence function for prostate cancer and the concordance
functions and relative recurrence risk ratios as functions of age
based on the flexible bivariate Probit model. We also calculated the
heritability for both an ACE and an ADE model (Figure~\ref{fig:timeher});
the two models agree in indicating a higher genetic contribution at
earlier ages. Such stronger dependence for early-onset disease has been
suggested for several types of cancer.
\begin{figure}
\centering
\mbox{
\includegraphics[width=0.49\textwidth]{timerisk}
\includegraphics[width=0.49\textwidth]{timerrr}}
\caption{Concordance and relative recurrence risk ratio estimates
for prostate cancer in MZ and DZ twins. The left panel shows the
concordance for prostate cancer in MZ and DZ twins with point-wise
95\% confidence limits calculated at different ages in two-years
intervals. The two concordance functions are bounded above by the
marginal cumulative incidence corresponding to perfect dependence
and below by the squared marginal corresponding to
independence. In the right panel the relative recurrence risk
ratio is shown for MZ and DZ twins for different ages with
point-wise 95\% confidence limits.\label{fig:timerisk}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{timeher}
\caption{Heritability of prostate cancer calculated at different
ages in two years intervals with point-wise 95\% confidence
limits. Estimates are based on IPCW adjusted ACE (solid line)
and ADE (dashed line) models.\label{fig:timeher}}
\end{figure}
\section{Discussion}\label{sec:discussion}
There has been considerable interest in quantifying the genetic
influence of cancer, and family and twin studies have here served as
important tools. The censoring problem we have discussed in this paper
seems to have been largely ignored in the epidemiological literature,
which makes estimates from these studies difficult to interpret.
We have here presented a simple method based on inverse probability
weighting that corrects for a major source of bias by taking advantage
of the time to event information that is most often provided in cohort
studies along with the binary disease status. The method allows for
flexible and computationally robust modelling of twin dependence at
different ages. Our simulations show that the method performs very
well in a realistic setup.
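To fix ideas, the weighting itself can be sketched in a few lines of
Python for the simplest, univariate case: uncensored observations are
reweighted by one over a Kaplan--Meier estimate of the probability of
remaining uncensored. This is only a generic illustration (the function
names and the naive handling of ties are ours); the estimator of this
paper works at the pair level, accounts for bivariate censoring and for
the estimated weights, and is implemented in the \texttt{mets} package
referenced below.
\begin{verbatim}
import numpy as np

def censoring_survival(time, event):
    # Kaplan-Meier estimate of G(t) = P(C > t): the censored
    # observations (event == 0) play the role of "events" of the
    # censoring process; ties are handled naively, one factor per
    # observation.  Inputs are numpy arrays.
    time, event = np.asarray(time), np.asarray(event)
    order = np.argsort(time)
    t, d = time[order], 1 - event[order]   # d = 1 marks a censoring
    at_risk = len(t) - np.arange(len(t))   # n, n-1, ..., 1
    return t, np.cumprod(1.0 - d / at_risk)

def ipcw_weights(time, event):
    # Weight 1/G(T-) for an uncensored observation, 0 otherwise.
    time, event = np.asarray(time), np.asarray(event)
    t, G = censoring_survival(time, event)
    idx = np.searchsorted(t, time, side="left")
    G_minus = np.where(idx > 0, G[np.clip(idx - 1, 0, None)], 1.0)
    return np.where(event == 1, 1.0 / G_minus, 0.0)
\end{verbatim}
A weighted estimate of, e.g., the prevalence $F_1(\tau)$ is then the
sample average of \texttt{ipcw\_weights(time, event) * (time <= tau)}.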
Applied to data from the Danish Twin Registry and the Danish Cancer
Registry, we estimated a heritability of 0.63 for prostate cancer, and
relative recurrence risk ratios of 6.2 in MZ twins and 2.4 in DZ twins.
\vspace*{\bigskipamount}
Here we have only considered twins but both the estimation and
computational framework can be generalized to larger pedigrees. Also,
extensions to ascertained samples should follow along the lines of
\cite{javarasACEcasecontrol2010}. Another topic for future research
will be the development of efficient and robust estimating equations.
All methods are available in the R package \texttt{mets}
\citep{metspackage}.
\section*{Acknowledgement}
We thank our collaborators at the NorTwinCan consortia, that received
support from a Nordic Cancer Union grant and the Ellison Foundation.
\bibliographystyle{elsarticle-harv}
|
1,116,691,498,372 | arxiv |
\section{Introduction}
\label{sec:introduction}
\input{MFCS_introduction}
\section{Preliminaries}
\label{sec:preliminaries}
\input{MFCS_preliminaries}
\section{Copyless cost register automata}
\label{sec:copyless_weighted}
\input{MFCS_copyless-weighted}
\section{Structural properties of copyless CRA}
\label{sec:structure_copyless}
\input{MFCS_structure_copyless}
\section{Non-expressibility of copyless CRA}
\label{sec:nonexpressibility}
\input{MFCS_nonexpressibility}
\section{Bounded alternation copyless CRA}
\label{sec:bounded_alternation}
\input{MFCS_bounded-alternation}
\bibliographystyle{abbrv}
\subsection{Removing zeros from CRA}
\label{subsec:updates_without_zero}
We say that an expression $e \in \operatorname{Expr}(\mathcal{X})$ is \emph{reduced} if $e = \mathbb{0}$ or the $\mathbb{0}$-constant is not mentioned inside $e$.
It is straightforward to show that for any expression $e$ there exists an equivalent expression $e^*$ that is reduced.
Indeed, one can construct inductively an equivalent expression by using the following reductions: $e \oplus \mathbb{0} = e$ and $e \odot \mathbb{0} = \mathbb{0}$.
Then by reducing each subexpression recursively, the resulting expression is either $\mathbb{0}$ or does not use the $\mathbb{0}$-constant at all.
Further, note that if $e$ is copyless, then its reduced expression $e^*$ is copyless as well.
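For instance, the expression $(x \odot \mathbb{0}) \oplus (y \oplus \mathbb{0})$ reduces first to $\mathbb{0} \oplus y$ (using $e \odot \mathbb{0} = \mathbb{0}$ and $e \oplus \mathbb{0} = e$ on the subexpressions) and then, by commutativity of $\oplus$, to $y$; the result is reduced and, as expected, still copyless.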
Let $f : \Sigma^* \to \mathbb{S}$ be a function definable by a copyless CRA. We say that $f$ is a \emph{non-zero function} if $f(w) \neq \mathbb{0}$ for all $w \in \Sigma^*$.
The following result shows that, without loss of generality, we can assume that all constants in a copyless CRA $\mathcal{A}$ are different from $\mathbb{0}$ whenever the function defined by $\mathcal{A}$ is a non-zero function.
\begin{proposition}
\label{proposition:non-zero}
Let $\mathcal{A}$ be a copyless CRA such that $\asem{\mathcal{A}}$ is a non-zero function. Then there exists a copyless CRA $\mathcal{A}'$ such that its initialization function, substitutions and final output functions are reduced and different from $\mathbb{0}$.
\end{proposition}
\begin{proof}
In this proof, we show how to avoid keeping a $\mathbb{0}$ value in registers that were forced to be $\mathbb{0}$ (i.e. forced by a substitution $\sigma$ such that $\sigma(x) =\mathbb{0}$ for some register $x$). The idea is to store in the state which registers are equal to $\mathbb{0}$.
Let $\mathcal{A} = (Q, \Sigma, \mathcal{X}, \delta, q_0, \nu_0, \mu)$ be a copyless CRA.
Define a new copyless CRA $\mathcal{A}' = (Q', \Sigma, \mathcal{X}, \delta', q_0', \nu_0', \mu')$ such that:
\begin{itemize}
\item $Q' = Q \times 2^Q$ is the new set of states,
\item $q_0' = (q_0, S_0)$ where $S_0$ is the set of registers such that $x \in S_0$ iff $\nu_0(x) = \mathbb{0}$, and
\item $\nu_0'$ is the same as $\nu_0$ for registers in $\mathcal{X} \setminus S_0$, and for registers $x \in S_0$ we define $\nu_0'(x) = \mathbb{1}$ (or any other constant from $\mathbb{S}$).
\end{itemize}
For the definition of $\delta'$ and $\mu'$ we first need to introduce some notation. Let $S \subseteq \mathcal{X}$ and let $e$ be an expression over $\mathcal{X}$. We define $e[S]$ to be the reduced expression $d^*$, where $d$ is the result of taking $e$ and replacing every register $x \in S \cap \operatorname{Var}(e)$ by $\mathbb{0}$.
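For example, if $S = \{y\}$ and $e = (x \oplus y) \odot z$, then $d = (x \oplus \mathbb{0}) \odot z$ and $e[S] = d^* = x \odot z$.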
Now, we are ready to define $\delta'$ and $\mu'$.
For every $(q, S) \in Q'$ and $a \in \Sigma$, we define $\delta'((q,S), a) = ((q',S'), \sigma')$, where $\delta(q, a) = (q',\sigma)$, $S'$ is the set of all registers $x$ such that $\sigma(x)[S]$ is equal to $\mathbb{0}$, and $\sigma'$ is defined for every $x \in \mathcal{X}$ as follows:
\[
\sigma'(x) \; = \; \left\{
\begin{array}{ll}
\sigma(x)[S] \;\;& \text{if } x \notin S \\
\mathbb{1} & \text{if } x \in S
\end{array}
\right.
\]
Finally, we define the output function $\mu'$ such that, for every $(q, S) \in Q'$, it holds that $\mu'((q, S)) = \mu(q)[S]$.
It is straightforward to show that $\mathcal{A}'$ and $\mathcal{A}$ define the same function and that $\nu_0'$, $\delta'$ and $\mu'$ do not use the $\mathbb{0}$-constant.
Note that there cannot exist a reachable state $(q, S) \in Q'$ with $\mu'((q, S)) = \mathbb{0}$; otherwise the output of the function defined by $\mathcal{A}$ would be $\mathbb{0}$ for some input in $\Sigma^*$, which contradicts the fact that $\asem{\mathcal{A}}$ is a non-zero function. \hfill \qed
\end{proof}
\subsection{The normal form of a copyless CRA}
\label{subsec:normal_form}
In this subsection, we define a normal form for copyless CRA given a linear order on registers.
Let $\mathcal{A} \; = \; (Q, \Sigma, \mathcal{X}, \delta, q_0, \nu_0, \mu)$ be a copyless CRA and let $\preceq$ be any predefined linear order over $\mathcal{X}$.
We say that $\mathcal{A}$ is in \emph{normal form} with respect to $\preceq$ if for every transition of the form $\delta(p, a) = (q, \sigma)$, it holds that $x \preceq y$ for all $y \in \operatorname{Var}(\sigma(x))$.
In other words, all the variables mentioned in $\sigma(x)$ are greater than or equal to $x$ with respect to $\preceq$.
For example, the copyless CRA in Example~\ref{ex:max-b-substrings} is in normal form with respect to the order $y \preceq x$.
The idea behind this normal form is to prevent unexpected behaviors on copyless CRA.
For example, consider the following copyless CRA that is not in normal form with respect to the order $x \preceq y$:
\begin{center}
\begin{tikzpicture}[-\string>,\string>=stealth',shorten \string>=1pt,auto,node distance=3cm,semithick,initial text={},scale=0.5]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[state] (p) {$q_1$};
\node[state] [right of=p](r) {$q_2$};
\path (p) edge [loop left] node {$a \;\;
\renewcommand{\arraystretch}{0.9} \begin{array}{|rcl}
\;\; x & := & x+1 \\
\;\; y & := & y
\end{array}$} (p)
(p) edge [bend left] node {$b \;\;
\renewcommand{\arraystretch}{0.9}
\begin{array}{|rcl}
\;\; x & := & y+1 \\
\;\; y & := & x
\end{array}$} (r)
(r) edge [bend left] node {$b \;\;
\renewcommand{\arraystretch}{0.9}
\begin{array}{|rcl}
\;\; x & := & y \\
\;\; y & := & x+1
\end{array}$} (p)
(r) edge [loop right] node [align=center]{$a \;\;
\renewcommand{\arraystretch}{0.9}
\begin{array}{|rcl}
\;\; x & := & x \\
\;\; y & := & y+1
\end{array}$} (r);
\end{tikzpicture}
\end{center}
The initial state in the above copyless CRA is $q_1$ and the initial valuation $\nu_0$ is equal to $\nu_0(x) = \nu_0(y) = 0$. Here, registers $x$ and $y$ count the number of $a$'s and $b$'s, respectively.
However, depending on the state both registers can have either the number of $a$'s or the number of $b$'s.
It is clear that one would like to avoid this type of behaviour since this cyclic information can be easily stored in finite memory.
Intuitively, one register should always contain the number of $a$'s and the other register the number of $b$'s.
In the next result, we show that for every copyless CRA there always exists an equivalent copyless CRA in normal form. The idea behind the proof is to permute the registers used by the substitutions and to keep the corresponding permutation in the state. Coming back to our running example, the normal form of this copyless CRA is the following:
\begin{center}
\begin{tikzpicture}[-\string>,\string>=stealth',shorten \string>=1pt,auto,node distance=3cm,semithick,initial text={},scale=0.5]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[state] (p) {$\operatorname{id}$};
\node[state] [right of=p](r) {\scriptsize{$x \rightleftarrows y$}};
\path (p) edge [loop left] node {$a \;\;
\renewcommand{\arraystretch}{0.9} \begin{array}{|rcl}
\;\; x & := & x+1 \\
\;\; y & := & y
\end{array}$} (p)
(p) edge [bend left] node {$b \;\;
\renewcommand{\arraystretch}{0.9}
\begin{array}{|rcl}
\;\; x & := & x \\
\;\; y & := & y+1
\end{array}$} (r)
(r) edge [bend left] node {$b \;\;
\renewcommand{\arraystretch}{0.9}
\begin{array}{|rcl}
\;\; x & := & x \\
\;\; y & := & y+1
\end{array}$} (p)
(r) edge [loop right] node [align=center]{$a \;\;
\renewcommand{\arraystretch}{0.9}
\begin{array}{|rcl}
\;\; x & := & x+1 \\
\;\; y & := & y
\end{array}$} (r);
\end{tikzpicture}
\end{center}
From now on we assume that copyless cost-register automata are given with a linear order on their registers, i.e., we write $\mathcal{X} = \{x_1, \dots, x_n\}$ with the order defined by $x_i \preceq x_j$ iff $i \leq j$.
The next proposition shows that every copyless CRA can be transformed into an equivalent copyless CRA in normal form with respect to $\preceq$.
\begin{proposition}
For every copyless CRA $\mathcal{A}$ there exists a copyless CRA in normal form $\mathcal{A}'$ such that they recognize the same function. Moreover $\mathcal{A}'$ and $\mathcal{A}$ have the same set of registers and the same linear order on them. The number of states in $\mathcal{A}'$ can be bounded exponentially in the size of the automaton $\mathcal{A}$.
\end{proposition}
\newcommand{\operatorname{id}}{\operatorname{id}}
\begin{proof}
Let $\mathcal{A} \; = \; (Q, \Sigma, \mathcal{X}, \delta, q_0, \nu_0, \mu)$ be a copyless CRA.
We define a copyless CRA $\mathcal{A}'$ in normal form such that $\mathcal{A}'$ computes the same function as $\mathcal{A}$.
The idea of $\mathcal{A}'$ is to store a state $q \in Q$ and a permutation $\rho$ of the set $\mathcal{X}$ such that, if $(q, \nu)$ is the current configuration of $\mathcal{A}$ over an input $w$, then $((q,\rho), \nu')$ is the configuration of $\mathcal{A}'$ over $w$ and $\nu(x) = \nu'(\rho(x))$.
In other words, the content in $\nu$ is still in $\nu'$ but the value $\nu(x)$ of $x$ is now in the register $\rho(x)$ for every $x \in \mathcal{X}$.
The permutation of registers' content will allow us to keep the normal form in $\mathcal{A}'$.
Formally, let $\mathcal{A}' \; = \; (Q', \Sigma, \mathcal{X}, \delta', q_0', \nu_0', \mu')$ where:
\[
Q' \; = \; Q \times \{\rho \mid \rho \text{ is a permutation of the set } \mathcal{X}\}
\]
is the set of states, $q_0' = (q_0, \operatorname{id})$ is the initial state where $\operatorname{id}$ is the identity permutation, and $\nu_0' = \nu_0$ is the initial function.
For the sake of presentation, let us show how the run of $\mathcal{A}'$ will correspond to the run of $\mathcal{A}$ before going into the definition of the transition function $\delta'$ and the output function $\mu'$.
For an expression $e \in \operatorname{Expr}(\mathcal{X})$ and a permutation $\rho$ over $\mathcal{X}$, we define $\rho(e)$ to be the expression $e$ where the registers are replaced according to $\rho$.
Let $(q, \nu)$ and $(q', \nu')$ be the configuration of $\mathcal{A}$ and $\mathcal{A}'$, respectively, after reading $w \in \Sigma^*$. Then we will show that:
\begin{equation}
\label{property}
q' = (q, \rho) \text{ for some permutation } \rho \text{ and } \nu(x) = \nu'(\rho(x)) \text{ for every } x \in \mathcal{X}.
\end{equation}
Note that, for the word $\epsilon$, this correspondence holds since $(q_0, \nu_0)$ and $((q_0, \operatorname{id}), \nu_0')$ are the initial configuration of $\mathcal{A}$ and $\mathcal{A}'$, respectively, and $\nu_0'(\operatorname{id}(x)) = \nu_0'(x) = \nu_0(x)$.
Furthermore, if we define $\mu'((q,\rho)) = \rho(\mu(q))$ and show that~(\ref{property}) always holds, we will prove that $\mathcal{A}$ and $\mathcal{A}'$ compute the same function.
Indeed, if the transition function $\delta'$ preserves (\ref{property}), then
\[
\begin{array}{rcll}
\nu'(\mu'((q,\rho))) & = & \nu'(\rho(\mu(q))) \;\;\; & \text{(by definition of $\mu'$)} \\
& = & \nu(\mu(q)) & \text{(by Property (\ref{property}))}
\end{array}
\]
This will prove that the outputs of $\mathcal{A}$ and $\mathcal{A}'$ are the same.
Therefore, in the rest of the proof we will show how to define $\delta'$ such that $\mathcal{A}'$ is a copyless CRA in normal form and its definition satisfies (\ref{property}).
\newcommand{S_{\sigma, \rho}}{S_{\sigma, \rho}}
\newcommand{\tau_{\sigma, \rho}}{\tau_{\sigma, \rho}}
Before defining $\delta'$, we need some additional definitions.
For a copyless substitution $\sigma$ and a permutation $\rho$ both over $\mathcal{X}$, define the set $S_{\sigma, \rho} = \{x \in \mathcal{X} \mid \operatorname{Var}(\rho(\sigma(x))) \neq \emptyset\}$, that is, the set of all variables $x$ where $\rho(\sigma(x))$ is a non-constant expression.
Further, define the set $S_{\sigma, \rho}' = \{\min\{\operatorname{Var}(\rho(\sigma(x)))\} \mid x \in S_{\sigma, \rho} \}$ and the function $\tau_{\sigma, \rho}^0: S_{\sigma, \rho} \rightarrow S_{\sigma, \rho}'$ such that for all $x \in S_{\sigma, \rho}$:
\begin{equation}
\label{property2}
\tau_{\sigma, \rho}^0(x) \; = \; \min\left\{\operatorname{Var}\big(\rho(\sigma(x)) \big) \right\}
\end{equation}
One can easily check that $\tau_{\sigma, \rho}^0$ is a bijective function from $S_{\sigma, \rho}$ to $S_{\sigma, \rho}'$.
It is surjective by the definition of $S_{\sigma, \rho}$ and $S_{\sigma, \rho}'$, and injective by the copyless restriction over $\sigma$.
To see the last claim, recall that any copyless substitution satisfies $\operatorname{Var}(\sigma(x)) \cap \operatorname{Var}(\sigma(y)) = \emptyset$ for $x \neq y$ and, in particular, $\operatorname{Var}(\rho(\sigma(x))) \cap \operatorname{Var}(\rho(\sigma(y))) = \emptyset$ for any permutation $\rho$.
In other words, this means that:
\[
\tau_{\sigma, \rho}^0(x) \; = \; \min\left\{\operatorname{Var}(\rho(\sigma(x)))\right\} \; \neq \; \min\left\{\operatorname{Var}(\rho(\sigma(y)))\right\} \; = \; \tau_{\sigma, \rho}^0(y)
\]
for every pair $x, y \in S_{\sigma, \rho}$, that is, $\tau_{\sigma, \rho}^0$ is an injective function.
Finally, given that $\tau_{\sigma, \rho}^0$ is a bijective function from $S_{\sigma, \rho} \subseteq \mathcal{X}$ to $S_{\sigma, \rho}' \subseteq \mathcal{X}$, we can extend $\tau_{\sigma, \rho}^0$ to a bijection $\tau_{\sigma, \rho}: \mathcal{X} \rightarrow \mathcal{X}$ such that $\tau_{\sigma, \rho}(x) = \tau_{\sigma, \rho}^0(x)$ for every $x \in S_{\sigma, \rho}$.
Of course, there might be many alternatives for extending $\tau_{\sigma, \rho}^0$ into a bijective function $\tau_{\sigma, \rho}$ but we can choose any of these extensions (i.e. this decision is not important for the construction).
We have now all the ingredients to define the transition function $\delta'$. For every $p \in Q$, $a \in \Sigma$, and permutation $\rho$ over $\mathcal{X}$, if $\delta(p, a) = (q, \sigma)$ then we define $\delta'((p, \rho), a) = ((q, \tau_{\sigma, \rho}), \sigma')$ such that:
\[
\sigma'(x) \; = \; \rho(\sigma(\tau_{\sigma, \rho}^{-1}(x)))
\]
for every $x \in \mathcal{X}$.
From the previous definition, we can easily check that $\sigma'$ is a copyless substitution.
In fact, $\rho \circ \sigma$ is a copyless substitution for every copyless $\sigma$ (i.e. $\rho$ is just a renaming of variables) and $\tau_{\sigma, \rho}^{-1}$ is just permuting the variables.
Therefore, we can conclude that $\sigma'$ is copyless as well.
Our next step is to show that $\sigma'$ is in normal form.
Recall that $\sigma'$ is in normal form if for every $x \in \mathcal{X}$ it holds that $x \preceq y$ for every $y \in \operatorname{Var}(\sigma'(x))$.
We prove this by case analysis by considering whether $x \in S_{\sigma, \rho}'$ or not.
First suppose that $x \in S_{\sigma, \rho}'$.
Since $x$ is in the codomain of $\tau_{\sigma, \rho}^0$ and $\tau_{\sigma, \rho}$ is an extension of $\tau_{\sigma, \rho}^0$, we have that $x = \min\{\operatorname{Var}(\rho(\sigma(\tau_{\sigma, \rho}^{-1}(x))))\}$ by (\ref{property2}).
Then if we replace $\rho(\sigma(\tau_{\sigma, \rho}^{-1}(x)))$ by $\sigma'$, we get that $x = \min\{\operatorname{Var}(\sigma'(x))\}$.
In particular, we have that $x \preceq y$ for every $y \in \operatorname{Var}(\sigma'(x))$.
Now suppose that $x \notin S_{\sigma, \rho}'$.
This means that $x$ is not in the codomain of $\tau_{\sigma, \rho}^0$ which implies that $\tau_{\sigma, \rho}^{-1}(x) \notin S_{\sigma, \rho}$.
In other words, $\sigma'(x) = \rho(\sigma(\tau_{\sigma, \rho}^{-1}(x)))$ is a constant expression and, thus, $x \preceq y$ holds vacuously for every $y \in \operatorname{Var}(\sigma'(x))$.
Since in both cases $x \preceq y$ for every $x \in \mathcal{X}$ and $y \in \operatorname{Var}(\sigma'(x))$, we conclude that $\sigma'$ is in normal form.
For the last part of the proof, we show by induction that $\delta'$ satisfies the correspondence (\ref{property}) between $\mathcal{A}$ and $\mathcal{A}'$.
Let $(p, \nu_n)$ and $((p, \rho), \nu_n')$ be the configuration of $\mathcal{A}$ and $\mathcal{A}'$, respectively, after reading $w \in \Sigma^*$ and suppose that $\nu_n(x) = \nu_n'(\rho(x))$ for every $x \in \mathcal{X}$ (i.e. inductive hypothesis).
Furthermore, suppose that:
\[
(p, \nu_n) \:\trans{a} \: (q, \nu_{n+1}) \;\; \text{ and } \; \; ((p, \rho), \nu_n') \:\trans{a} \: ((q, \tau_{\sigma, \rho}), \nu_{n+1}')
\]
are the transitions for $\mathcal{A}$ and $\mathcal{A}'$, respectively, after reading a new letter $a \in \Sigma$.
We prove the correspondence (\ref{property}) between $\nu_{n+1}$ and $\nu_{n+1}'$ as follows:
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcll}
\nu_{n+1}'(\tau_{\sigma, \rho}(x)) & \; = \; & \nu_{n}'(\sigma'(\tau_{\sigma, \rho}(x))) & \text{(by definition of $\nu_{n+1}'$)} \\
& \; = \; & \nu_{n}'(\rho(\sigma(\tau_{\sigma, \rho}^{-1}(\tau_{\sigma, \rho}(x))))) \;\;\; & \text{(by definition of $\sigma'$)} \\
& \; = \; & \nu_{n}'(\rho(\sigma(x))) & \text{(by composing $\tau_{\sigma, \rho}$ and $\tau_{\sigma, \rho}^{-1}$)} \\
& \; = \; & \nu_n(\sigma(x)) & \text{(by inductive hypothesis)} \\
& \; = \; & \nu_{n+1}(x) & \text{(by definition of $\nu_{n+1}$)}
\end{array}
\]
This proves that the transition function $\delta'$ keeps the correspondence (\ref{property}) between $\mathcal{A}$ and $\mathcal{A}'$. Since it also holds for the initial configuration then by induction this shows that it holds for all configurations which proves that the outputs of $\mathcal{A}$ and $\mathcal{A}'$ are the same.
\qed
\end{proof}
Let $\mathcal{A} \; = \; (Q, \Sigma, \mathcal{X}, \delta, q_0, \nu_0, \mu)$ be a copyless CRA. As usual, we define the transitive closure of $\delta$ as the function $\delta^* : Q \times \Sigma^* \to Q \times \operatorname{Subs}(\mathcal{X})$ by induction over the word-length. Formally, we define $\delta^*(q, \epsilon) = (q, \operatorname{id})$ where $\operatorname{id}$ is the identity substitution for all $q \in Q$ and $\delta^*(q_1, w \cdot a) = (q_3, \sigma \circ \sigma')$ whenever $\delta^*(q_1,w) = (q_2,\sigma)$ and $\delta(q_2, a) = (q_3,\sigma')$.
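For example, if $\delta^*(q_1,w) = (q_2,\sigma)$ with $\sigma(x) = x \oplus y$ and $\sigma(y) = \mathbb{1}$, and $\delta(q_2,a) = (q_3,\sigma')$ with $\sigma'(x) = x \oplus \mathbb{1}$ and $\sigma'(y) = y$, then $\sigma \circ \sigma'(x) = (x \oplus y) \oplus \mathbb{1}$ and $\sigma \circ \sigma'(y) = \mathbb{1}$; that is, the substitution collected first is applied to the registers mentioned in the one collected last.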
For a CRA $\mathcal{A}$ we define the set $\operatorname{Subs}(\mathcal{A})$ of all substitutions in $\mathcal{A}$ as follows:
\[
\operatorname{Subs}(\mathcal{A}) = \{ \ \sigma \in \operatorname{Subs}(\mathcal{X}) \ \mid \ \exists p, q \in Q. \ \exists w \in \Sigma^*. \;\; \delta^*(p, w) = (q, \sigma) \ \}
\]
It is easy to check that, if all substitutions in $\delta$ are copyless, then all substitutions in $\operatorname{Subs}(\mathcal{A})$ are also copyless.
Furthermore, $\delta^*$ and, in particular, $\operatorname{Subs}(\mathcal{A})$ also preserve the normal form, that is, for all $\sigma \in \operatorname{Subs}(\mathcal{A})$ it holds that $x \preceq y$ for all $y \in \operatorname{Var}(\sigma(x))$. This can be easily proved by induction over the word-length.
Assume that it holds for all words $w$ of length at most $n$ and let $\delta^*(q,w) = (q', \sigma)$. Suppose we want to extend $w$ with $a \in \Sigma$ and let $\delta(q', a) = (q'', \sigma')$. By definition, we know that $\delta^*(q,w\cdot a) = (q'',\sigma \circ \sigma')$. Then take $y \in \operatorname{Var}(\sigma \circ \sigma'(x))$. By definition there exists a register $z$ such that $z \in \operatorname{Var}(\sigma'(x))$ and $y \in \operatorname{Var}(\sigma(z))$. Since $\delta$ is in normal form, we conclude that $x \preceq z \preceq y$. In other words, $x \preceq y$ for every $y \in \operatorname{Var}(\sigma \circ \sigma'(x))$ and, thus, $\sigma \circ \sigma'$ is in normal form.
\subsection{Stable registers and collapse substitutions}
\label{subsec:collapse}
Let $\mathcal{A} \; = \; (Q, \Sigma, \mathcal{X}, \delta, q_0, \nu_0, \mu)$ be a copyless CRA in normal form with respect to a fix total order $\preceq$.
In a copyless CRA in normal form, the content of registers flows during a run from higher to lower registers with respect to $\preceq$.
Unfortunately, this does not necessarily mean that the content of all registers will eventually reach the $\preceq$-minimum register.
For example, if all substitutions in $\mathcal{A}$ are of the form $\sigma(x) = x \oplus k$ for some $k \in \mathbb{S}$, then each register will store just its own content during the whole run.
Intuitively, in this example each register is ``stable'' with respect to the content flow of $\mathcal{A}$.
We formalize this idea with the notion of stable registers.
Let $\sigma$ be a copyless substitution in normal form (i.e. for each $x \in \mathcal{X}$, $x \preceq y$ for all $y \in \operatorname{Var}(\sigma(x))$).
We say that a register $x$ is $\sigma$-\emph{stable} (or stable on $\sigma$) if $x \in \operatorname{Var}(\sigma(x))$.
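For instance, $x$ is stable on a substitution with $\sigma(x) = (x \oplus y) \odot \mathbb{1}$, but not on one with $\sigma(x) = y \oplus \mathbb{1}$.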
\newcommand{\var}{\operatorname{Var}}
The following lemma shows that the composition preserves stability between registers.
\begin{lemma}\label{lemma:stability}
Let $\sigma, \sigma'$ be two copyless substitutions in normal form. For any register $x \in \mathcal{X}$ it holds that $x$ is stable on $\sigma$ and $\sigma'$ if, and only if, $x$ is $(\sigma \circ \sigma')$-stable.
\end{lemma}
\begin{proof}
Suppose that $x$ is stable on $\sigma$ and $\sigma'$. This means that $x \in \var(\sigma(x))$ and $x \in \var(\sigma'(x))$ and, thus, $x \in \var(\sigma\circ \sigma'(x))$ by composition.
For the other direction, suppose that $x \in \var(\sigma \circ \sigma'(x))$.
Then we know that there exists $y$ such that $x \in \var(\sigma(y))$ and $y \in \var(\sigma'(x))$.
Since $\sigma$ and $\sigma'$ are in normal form, then $x \preceq y \preceq x$ which implies that $x = y$ and, thus, $x$ is stable on $\sigma$ and $\sigma'$. \hfill \qed
\end{proof}
We generalize the idea of stable register from substitutions to copyless CRA.
We say that a register $x$ is \emph{stable} in $\mathcal{A}$ (or just stable if $\mathcal{A}$ is understood from the context) if $x$ is $\sigma$-stable for all substitutions $\sigma \in \operatorname{Subs}(\mathcal{A})$.
A register is called \emph{non-stable} if it is not stable.
By Lemma~\ref{lemma:stability}, it is enough to check that a register $x$ is $\sigma$-stable for all letter-transitions $\delta(p,a) = (q,\sigma)$ to know whether $x$ is stable in $\mathcal{A}$ or not.
For the next proposition, we need to recall some standard definitions of directed labeled graphs and bottom strongly connected components.
For $Q' \subseteq Q$ we say that $Q'$ is a \emph{bottom strongly connected component} (BSCC) of $\mathcal{A}$ if (1) for every pair $q_1, q_2 \in Q'$ there exists $w \in \Sigma^*$ such that $\delta^*(q_1, w) = (q_2, \sigma)$ for some substitution $\sigma$, and (2) for every $q_1 \in Q'$ and every $w \in \Sigma^*$, if $\delta^*(q_1, w) = (q_2, \sigma)$ then $q_2 \in Q'$.
Intuitively a BSCC $Q'$ of $\mathcal{A}$ is a set of mutually reachable states such that there is no word that leaves $Q'$.
We say that $\mathcal{A}$ is \emph{strongly connected} if the whole set $Q$ is a BSCC of $\mathcal{A}$.
In this section, we assume that $\mathcal{A}$ is strongly connected.
We say that a substitution $\sigma \in \operatorname{Subs}(\mathcal{A})$ is a \emph{collapse} substitution in $\mathcal{A}$ if $\operatorname{Var}(\sigma(x)) = \emptyset$ for all non-stable registers $x$.
Intuitively, a collapse substitution makes all non-stable registers forget their content.
In this sense, non-stable registers can lose their current content under a collapse substitution, while, by definition, stable registers can never lose theirs.
The next proposition shows that for every pair of states $q, q'$ there exists a word $w^{q,q'}$ that ``collapses'' the content of the non-stable registers, that is, $\delta^*(q, w^{q,q'}) = (q', \sigma^{q,q'})$ where $\sigma^{q,q'}$ is a collapse substitution.
From now on, if $q = q'$, we write $w^{q}$ and $\sigma^{q}$ instead of $w^{q,q}$ and $\sigma^{q,q}$, respectively.
\begin{proposition}
\label{prop:collapse_register}
Let $\mathcal{A} = (Q, \Sigma, \mathcal{X}, \delta, q_0, \nu_0, \mu)$ be a copyless and strongly connected CRA.
Then for all $q,q' \in Q$ there exists a word $w^{q,q'}$ and a substitution $\sigma^{q,q'}$ such that (1) $\delta^*(q,w^{q,q'}) = (q', \sigma^{q,q'})$, (2) $w^{q,q'}$ contains each letter in $\Sigma$, and (3) $\sigma^{q,q'}$ is a collapse substitution in $\mathcal{A}$.
\end{proposition}
\begin{proof}
Let $\mathcal{X} = \{x_1, \ldots, x_n\}$ be the registers of $\mathcal{A}$ in increasing order with respect to $\preceq$, and let $q, q'$ be two states in $Q$.
We construct the word $w^{q,q'}$ by (inverse) induction starting from $x_n$ and ending in $x_1$.
Specifically, for every $i \leq n$ we will define a word $w_i^{q, q'}$ and a substitution $\sigma^{q,q'}_i$ such that the proposition holds for all non-stable registers greater than or equal to $x_i$.
Clearly the proposition will be shown by considering $w^{q, q'} = w^{q,q'}_1$.
We start with the base case $i = n$ and consider whether $x_n$ is stable or not.
If $x_n$ is stable, then take a word $u$ and a substitution $\sigma$ such that $\delta^*(q,u) = (q', \sigma)$ and $u$ contains each letter in $\Sigma$.
Given that $\mathcal{A}$ is strongly connected we know that $u$ and $\sigma$ always exists.
Then by defining $w^{q,q'}_n = u$ and $\sigma^{q,q'}_n = \sigma$, the proposition holds for the stable register $x_n$.
Now, suppose that $x_n$ is non-stable which means that there exist a pair $p, p' \in Q$ and a word $u$ such that $\delta^*(p, u) = (p', \sigma)$ and $x_n$ is non-stable in $\sigma$.
Given that $\mathcal{A}$ is in normal form and $x_n$ is the maximum register with respect to $\preceq$, this implies that $\operatorname{Var}(\sigma(x_n)) = \emptyset$.
Pick two words $v_1, v_2 \in \Sigma^*$ such that $\delta^*(q, v_1) = (p, \sigma_1)$ and $\delta^*(p', v_2) = (q', \sigma_2)$ for some substitutions $\sigma_1$ and $\sigma_2$, and $v_1$ contains each letter in~$\Sigma$.
Again, we know that $v_1$ and $v_2$ always exist since $\mathcal{A}$ is strongly connected.
Then define $w^{q,q'}_n = v_1 \cdot u \cdot v_2$ and $\sigma^{q,q'}_n = \sigma_1 \circ \sigma \circ \sigma_2$.
By construction, we know that $\delta^*(q, w^{q,q'}_n) = (q', \sigma^{q,q'}_n)$ and $w^{q,q'}_n$ contains all letters in $\Sigma$.
For proving that $\operatorname{Var}(\sigma^{q,q'}_n(x_n)) = \emptyset$, notice that $x_n$ is non-stable on $\sigma$.
By Lemma~\ref{lemma:stability} this implies that $x_n$ is non-stable on $\sigma_1 \circ \sigma \circ \sigma_2$.
Given that $\mathcal{A}$ is in normal form and $x_n$ is the maximum register, we get that $\operatorname{Var}(\sigma^{q,q'}_n(x_n)) = \emptyset$.
For the inductive step, we suppose that $w^{q,q'}_{i+1}$ and $\sigma^{q,q'}_{i+1}$ exist and show how to construct $w^{q,q'}_i$ and $\sigma^{q,q'}_i$ that satisfy the proposition for all registers greater than or equal to $x_i$.
Again, we consider whether $x_i$ is stable or not.
If $x_i$ is stable, then by defining $w^{q,q'}_i = w^{q,q'}_{i+1}$ and $\sigma^{q,q'}_i = \sigma^{q,q'}_{i+1}$ the proposition trivially holds.
Indeed, property (1) and (2) are satisfied by the inductive hypothesis and (3) is satisfied since it holds for every register greater than $x_i$ (again by inductive hypothesis) and also for $x_i$ given that $x_i$ is stable.
Thus, the interesting case is when $x_i$ is non-stable.
Suppose that $x_i$ is non-stable and, therefore, there exist a pair $p, p' \in Q$ and a word $u$ such that $\delta^*(p, u) = (p', \sigma)$ and $x_i$ is non-stable on $\sigma$.
Let $v_1, v_2 \in \Sigma^*$ be such that $\delta^*(q, v_1) = (p, \sigma_1)$ and $\delta^*(p', v_2) = (q', \sigma_2)$ for some substitutions $\sigma_1$ and $\sigma_2$.
Recall that $v_1$ and $v_2$ exist by assuming that $\mathcal{A}$ is strongly connected.
Now, define:
\[
\renewcommand{\arraystretch}{1.4}
\begin{array}{rcl}
w^{q,q'}_i & = & w^{q,q}_{i+1} \cdot v_1 \cdot u \cdot v_2 \\
\sigma^{q,q'}_i & = & \sigma^{q,q}_{i+1} \circ \sigma_1 \circ \sigma \circ \sigma_2
\end{array}
\]
It is clear by construction that $\delta^*(q, w^{q,q'}_i) = (q', \sigma^{q,q'}_i)$ and that $w^{q,q'}_i$ contains all letters in $\Sigma$.
The last fact holds because we know (by induction) that $w^{q,q}_{i+1}$ contains all letters in $\Sigma$.
To conclude the proof, we must show that $\operatorname{Var}(\sigma^{q,q'}_i(x)) = \emptyset$ for every non-stable register $x \succeq x_i$.
Let $x$ be any non-stable register $x \succeq x_i$ (possibly $x_i$) and let $\sigma^* = \sigma_1 \circ \sigma \circ \sigma_2$.
First, note that all $y \in \var(\sigma^*(x))$ are non-stable.
Otherwise, if $y \in \var(\sigma^*(x))$ is stable, then $y \in \var(\sigma^*(y))$ but this is impossible since $\var(\sigma^*(x)) \cap \var(\sigma^*(y)) = \emptyset$ by the definition of being copyless.
Therefore, we have that every register in $\var(\sigma^*(x))$ is non-stable.
Note also that $x_i \notin \var(\sigma^*(x))$.
This is true when $x \neq x_i$ (i.e. $\mathcal{A}$ is in normal form) and also true when $x = x_i$ since we know that $x_i \notin \var(\sigma^*(x_i))$ (i.e. $x_i$ is non-stable).
Then we have that all registers in $\var(\sigma^*(x))$ are non-stable and strictly greater than $x_i$.
This means that the inductive hypothesis holds for $i+1$ and $\var(\sigma^{q,q}_{i+1}(y)) = \emptyset$ for all $y \in \var(\sigma^*(x))$. By composing $\sigma^{q,q}_{i+1}$ and $\sigma^*$, we conclude that $\var(\sigma^{q,q'}_{i}(x)) = \emptyset$.
This concludes the proof. \hfill \qed
\end{proof}
\subsection{Growing rate of stable registers in a loop}
\label{subsec:loops}
Fix a copyless and non-zero CRA $\mathcal{A} \; = \; (Q, \Sigma, \mathcal{X}, \delta, q_0, \nu_0, \mu)$.
As in the previous section, assume that $\mathcal{A}$ is in normal form with respect to the total order $\preceq$ and, furthermore, that $\mathcal{A}$ is strongly connected.
We start this section with a simple fact about copyless expressions that will be useful throughout this section.
\begin{lemma}
\label{lemma:easy_exp}
Let $\sigma, \tau \in \operatorname{Subs}(\mathcal{A})$ where $\sigma$ is a collapse substitution and let $x$ be a $\sigma$-stable and $\tau$-stable register.
Then the expression $\sigma \circ \tau(x)$ is equivalent to an expression of the form $(c \odot \sigma(x)) \oplus d$, where $c, d \in \mathbb{S}$ and $c \neq \mathbb{0}$.
\end{lemma}
\begin{proof}
We prove this by induction on the length of expression $\tau(x)$. For the base step take $\tau(x) = x$. Then $\sigma \circ \tau(x) = \sigma(x)$ is equivalent to $(\mathbb{1} \odot \sigma(x)) \oplus \mathbb{0}$. Suppose we can write such an expression $(c \odot x) \oplus d$ for $\tau$ of length $n$. By the inductive assumption when $\tau$ is of length $n+1$ then $\sigma \circ \tau(x)$ can be written in the form $((c \odot \sigma(x)) \oplus d) \oplus \sigma(f)$ or $((c \odot \sigma(x)) \oplus d) \odot \sigma(f)$.
If $f$ is a constant, then $\sigma(f) \neq \mathbb{0}$ by the non-zero assumption.
Otherwise, $f = y$ for some register $y \in \mathcal{X} \setminus \{x\}$.
Then $y$ is non-stable in $\mathcal{A}$: if $y$ were stable, then $y \in \operatorname{Var}(\tau(y))$, which is impossible since $y \in \operatorname{Var}(\tau(x))$ and $\tau$ is copyless. Because $\sigma$ is a collapse substitution and the constants of $\mathcal{A}$ are different from $\mathbb{0}$, we get $\sigma(f) = f'$ for some constant $f' \neq \mathbb{0}$.
In either case we have that $\sigma(f) = f'$ for some constant $f' \neq \mathbb{0}$ and, thus, $\sigma \circ \tau(x)$ can be rewritten as $(c \odot \sigma(x)) \oplus (d \oplus f')$ and $(((f' \odot c)\odot \sigma(x)) \oplus (f' \odot d))$, respectively. \hfill \qed
\end{proof}
The goal of this section is to study the behavior of loops in $\mathcal{A}$.
A \emph{loop} over a state $q \in Q$ in $\mathcal{A}$ is a word $w \in \Sigma^*$ such that $\delta^*(q, w) = (q, \sigma)$ for some substitution $\sigma$.
The iteration of any loop is also a loop, that is, $w^n$ is a loop over $q$ and $\delta^*(q, w^n) = (q, \sigma^n)$ for any $n$.
This motivates the study of the behavior of $\sigma$ when it is iterated a large number of times.
As the next lemma shows, for a copyless CRA $\mathcal{A}$ in normal form the substitutions of $\mathcal{A}$ become collapse substitutions when iterated a fixed number of times.
\begin{lemma} \label{lemma:collapsing-loop}
Let $\sigma \in \operatorname{Subs}(\mathcal{A})$ be any substitution. Then there exists $N \geq 0$ such that $\sigma^N$ is a collapse substitution in $\mathcal{A}$.
\end{lemma}
\begin{proof}
Suppose $x$ is non-stable over $\sigma$, i.e. $x \not \in \operatorname{Var}(\sigma(x))$, and assume that $\operatorname{Var}(\sigma(x)) \neq \emptyset$ (otherwise we are already done). Since $\mathcal{A}$ is copyless and in normal form, then $x \prec \min(\operatorname{Var}(\sigma(x)))$. But since $\min(\operatorname{Var}(\sigma(x))) \in \operatorname{Var}(\sigma(x))$ and $\mathcal{A}$ is copyless, then $\min(\operatorname{Var}(\sigma(x))) \not \in \operatorname{Var}(\sigma(\min(\operatorname{Var}(\sigma(x)))))$
and thus $\min(\operatorname{Var}(\sigma(x))) \prec \min(\operatorname{Var}(\sigma^2(x)))$. Applying this argument consecutively we get an increasing sequence of registers:
\[x \prec \min(\operatorname{Var}(\sigma(x))) \prec \min(\operatorname{Var}(\sigma^2(x))) \prec \min(\operatorname{Var}(\sigma^3(x))) \prec \dots\]
The number of registers is finite, so this sequence cannot be infinite. Thus there exists an $N$ such that $\operatorname{Var}(\sigma^N(x)) = \emptyset$.
We conclude that it suffices to take $N = |\mathcal{X}|$. \hfill \qed
\end{proof}
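For instance, with $\mathcal{X} = \{x, y\}$, $x \prec y$, $\sigma(x) = y \oplus \mathbb{1}$ and $\sigma(y) = \mathbb{1}$, the register $x$ is non-stable and $\operatorname{Var}(\sigma^2(x)) = \operatorname{Var}(\mathbb{1} \oplus \mathbb{1}) = \emptyset$, matching the bound $N = |\mathcal{X}| = 2$.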
From the previous lemma, we know that a register $x$ that is non-stable over $\sigma$ becomes a constant when $\sigma$ is iterated at least $|\mathcal{X}|$ times.
In the next proposition, we study the growth of stable registers when a collapse substitution is iterated. This will be useful to understand the behavior of copyless CRA inside their loops.
\begin{proposition}
\label{prop:long_exp}
Let $\sigma \in \operatorname{Subs}(\mathcal{A})$ be a collapsing substitution and $x$ be a $\sigma$-stable register. Then there exist $c,d \in \mathbb{S}$ with $c \neq \mathbb{0}$ such that for every $i \geq 0$ we have:
\[
\sigma^{i+1}(x) = (c^{i} \odot \sigma(x)) \oplus \big( d \odot \bigoplus_{j=0}^{i-1} c^j \big)
\]
\end{proposition}
\begin{proof}
Since $\sigma$ is a copyless and collapsing substitution from $\mathcal{A}$, then for every $y \in \operatorname{Var}(\sigma(x))$ either $y = x$ or $\operatorname{Var}(\sigma(y)) = \emptyset$.
Then consider the expression $e$ that is equal to $\sigma(x)$ where every register $y \neq x$ is replaced with $\asem{\sigma(y)}$. This is a copyless expression mentioning only the variable $x$, and $\mathcal{A}$ is non-zero. By Lemma~\ref{lemma:easy_exp} (i.e. by replacing $\tau$ by $\sigma$ and $\sigma$ by the identity substitution) it can be rewritten in the form $e^* = (c \odot x) \oplus d$ for some $c,d \in \mathbb{S}$ with $c \neq \mathbb{0}$.
We prove the claim by induction using the constants $c,d$. We start with the base step for $i = 1$ (for $i=0$ it is trivially true).
The expression $\sigma \circ \sigma(x)$ can be rewritten as $\sigma \circ e^* = (c \odot \sigma(x)) \oplus d$, because applying $\sigma$ to $e^* = (c \odot x) \oplus d$ amounts to substituting $x$ with $\sigma(x)$.
For the inductive step, we have that $\sigma^{i+2} = \sigma^{i+1} \circ e^*$ by the same argument used in the previous paragraph.
Then by replacing $\sigma^{i+1}$ with the inductive hypothesis, we get:
\[
\renewcommand{\arraystretch}{1.8}
\begin{array}{rcl}
\sigma^{i+2} (x) & \; = \;\; & (c \odot \sigma^{i+1}(x)) \oplus d \\
& \; = \;\; & \big(c \odot \big( (c^{i} \odot \sigma(x)) \oplus \big( d \odot \bigoplus_{j=0}^{i-1} c^j \big) \big) \big) \oplus d \\
& \; = \;\; & \big((c^{i+1} \odot \sigma(x)) \oplus \big( d \odot \bigoplus_{j=1}^{i} c^j \big) \big) \oplus d \\
& \; = \;\; & (c^{i+1} \odot \sigma(x)) \oplus \big( d \odot \bigoplus_{j=0}^{i} c^j \big)
\end{array}
\]
\hfill \qed
\end{proof}
The previous proposition shows that stable registers grow exponentially with respect to the number of times that a loop is iterated.
In particular, when $\mathbb{S} = \natninf(\max,+)$ a stable register grows linearly.
The next result is a refinement of Proposition~\ref{prop:long_exp} in terms of the $\natninf(\max,+)$-semiring.
The lemma is technical and will be used in the next section.
The reader may skip the proof on a first~read.
For the next lemma, recall that, by Proposition~\ref{prop:collapse_register}, for any $q \in Q$ there exists a word $w^q$ such that $\delta^*(q,w^q) = (q, \sigma^q)$ and $\sigma^q$ is a collapse substitution over every non-stable registers in $\mathcal{X}$.
\begin{lemma}
\label{lemma:loops}
Let $\mathcal{A}$ be a copyless and non-zero CRA in normal form over the $\natninf(\max,+)$ semiring. Furthermore, let $q \in Q$ and $v \neq \epsilon$ be a loop such that $\delta^*(q,v) = (q, \tau)$ for some collapsing substitution $\tau$.
Then for $j \in \mathbb{N}$ big enough, there exists a substitution $\sigma_j$ such that $\delta^*(q,w^q \cdot v^{j+1} \cdot w^q) = (q, \sigma_j)$ where:
\begin{align*}
\sigma_j \circ \lambda (x) \;\; = \;\;
\begin{cases}
\mathcal{O}(1) & \text{if $x$ is non-stable}\\
\max\{ \ j \cdot c + \sigma^q(x) + \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1) \ \} & \text{otherwise,}
\end{cases}
\end{align*}
where $\lambda \in \operatorname{Subs}(\mathcal{A})$ is any substitution whose size does not depend on $j$, and $c, d \in \mathbb{N}$ are constants that depend on $x$ but not on $j$ or $\lambda$.
\end{lemma}
\begin{proof}
From now on we work with the $\natninf(\max,+)$ semiring. Let $q \in Q$ and $v \neq \epsilon$ be a loop such that $\delta^*(q,v) = (q, \tau)$ for some collapsing substitution $\tau$.
Recall that $\mathcal{A}$ is a non-zero CRA and in this semiring $\mathbb{0} = - \infty$; thus we can assume that the constants appearing below, in particular each $c_x$, satisfy $c_x \geq 0$. By Proposition~\ref{prop:long_exp}, we have that for every $j \geq 1$ it holds that:
\setlength{\jot}{8pt}
\begin{align}
\tau^{j+1}(x) & \;\; = \;\; (c_x^{j} \odot \tau(x)) \oplus \big( d_x \odot \bigoplus_{k=0}^{j-1} c_x^k \big) && \text{(by Prop.~\ref{prop:long_exp})} \nonumber \\
& \;\; = \;\; \max\left\{ j \cdot c_x + \tau(x), d_x + \max_{k=0}^{j-1}\{ k \cdot c_x \} \right\} && \text{(by definition of $\natninf(\max,+)$)} \nonumber \\
& \;\; = \;\; \max\left\{ j \cdot c_x + \tau(x), \ d_x + (j-1) \cdot c_x \right\} && \text{(by definition of $\max$)} \nonumber \\
& \;\; = \;\; \max\left\{ j \cdot c_x + \tau(x) + \mathcal{O}(1), \ j \cdot c_x + \mathcal{O}(1) \right\} && \text{(by using the $\mathcal{O}$-notation)} \label{eq:artic-version}
\end{align}
Fix any substitution $\lambda \in \operatorname{Subs}(\mathcal{A})$.
Recall that $w^{q}$ is the word and $\sigma^q$ is the substitution from Proposition~\ref{prop:collapse_register} for $q$.
Then for any $j\geq 1$, consider the substitution $\sigma_j$ such that $\delta^*(q,w^q \cdot v^{j+1} \cdot w^q) = (q, \sigma_j)$.
We show next that $\sigma_j \circ \lambda$ satisfies the above properties for any $j$.
For a non-stable register $x$ in $\mathcal{A}$, the result is straightforward.
Indeed, $\mathcal{A}$ is in normal form and, thus, $\operatorname{Var}(\lambda(x))$ contains just non-stable registers.
This implies that:
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcll}
\sigma_j \circ \lambda(x) & \;\; = \;\; & \sigma^q \circ \tau^{j+1} \circ \sigma^q \circ \lambda (x) \;\;\; & \text{(by definition)} \\
& \;\; = \;\; & \sigma^q \circ \lambda (x) & \text{($\operatorname{Var}(\lambda(x))$ contains only non-stable registers)} \\
& \;\; = \;\; & \mathcal{O}(1) & \text{($\sigma^q$ and $\lambda$ do not depend on $j$)}
\end{array}
\]
Suppose now that $x$ is a stable register in $\mathcal{A}$. We need to show that:
\begin{align}
\sigma_j \circ \lambda (x) & \;\; = \;\; \max\{ \ j \cdot c + \sigma^q(x)+ \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1) \ \} \label{eq:stable-cases}
\end{align}
for $j$ big enough and some constants $c, d \in \mathbb{N}$ such that $c,d$ do not depend on $\lambda$.
The expression $\sigma_j \circ \lambda (x)$ is equivalent to $\sigma^q \circ \tau^{j+1} \circ \sigma^{q} \circ \lambda(x)$. In other words, it is equivalent to the expression $\sigma^{q} \circ \lambda(x)$ where all registers $y \in \operatorname{Var}(\sigma^{q} \circ \lambda(x))$ are substituted with $\sigma^{q} \circ \tau^{j+1}(y)$. We start the proof by analyzing these last expressions.
Let $y \in \operatorname{Var}(\sigma^{q} \circ \lambda (x))$ be any variable. If $y$ is not a $\tau$-stable register, then $\operatorname{Var}(\tau(y)) = \emptyset$ and thus $\sigma^q \circ \tau^{j+1}(y) = \mathcal{O}(1)$.
Otherwise, by (\ref{eq:artic-version}) we know that $\tau^{j+1}(y) = \max\left\{ j \cdot c_y + \tau(y) + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\}$ for some $c_y \in \mathbb{N}$ and, thus, by applying $\sigma^q$ on $\tau^{j+1}(y)$ we get:
\begin{align}
\sigma^{q} \circ \tau^{j+1}(y) & \;\; = \;\; \max\left\{ j \cdot c_y + \sigma^{q} \circ \tau(y) + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\} \label{eq:artic-version2}
\end{align}
If $y$ is a $\tau$-stable register, but non-stable on $\mathcal{A}$ (i.e. non-stable in general) then $\sigma^{q} \circ \tau(y)$ is equal to a constant and does not depend on $j$. Thus we can estimate $\sigma^{q} \circ \tau(y)$ by $\mathcal{O}(1)$ and then (\ref{eq:artic-version2}) becomes:
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcl}
\sigma^{q} \circ \tau^{j+1}(y) & \;\; = \;\; & \max\left\{ j \cdot c_y + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\} \\
& \;\; = \;\; & j \cdot c_y + \mathcal{O}(1)
\end{array}
\]
Otherwise, $y$ is a stable register and, moreover, $\sigma^q$ is a collapsing substitution. Then by Lemma~\ref{lemma:easy_exp} we know that $\sigma^q \circ \tau(y)$ can be represented as:
\setlength{\jot}{7pt}
\begin{align}
\sigma^{q} \circ \tau(y) & \;\; = \;\; \max\{ \ c' + \sigma^{q}(y), \ d' \ \} \nonumber \\
& \;\; = \;\; \max\{ \ \sigma^{q}(y) + \mathcal{O}(1), \ \mathcal{O}(1) \ \} \label{eq:stable-simple-version}
\end{align}
for some constants $c',d' \in \mathbb{N}$ not depending on $j$ and where $c' \neq -\infty$. Thus, by combining (\ref{eq:artic-version2}) with the previous equation we get:
\setlength{\jot}{7pt}
\begin{align}
\sigma^{q} \circ \tau^{j+1}(y) & \;\; = \;\; \max\left\{ j \cdot c_y + \sigma^{q} \circ \tau(y) + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\} && \text{(by (\ref{eq:artic-version2}))} \nonumber \\
& \;\; = \;\; \max\left\{ j \cdot c_y + \max\{ \ \sigma^{q}(y) + \mathcal{O}(1), \ \mathcal{O}(1) \ \} + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\} && \text{(by (\ref{eq:stable-simple-version}))} \nonumber \\
& \;\; = \;\; \max\left\{ j \cdot c_y + \sigma^{q}(y) + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\} && (\text{by distribution}) \nonumber \\
& \;\; = \;\; \max\left\{ j \cdot c_y + \sigma^{q}(y) + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\} \label{equation:stable_register}
\end{align}
Observe that a variable $y \in \operatorname{Var}(\sigma^{q} \circ \lambda (x))$ is stable if, and only if, $y=x$. We summarize the three possible cases in the next equation:
\begin{align}
\sigma^{q} \circ \tau^{j+1}(y) \;\; = \;\;
\begin{cases}
\;\; \mathcal{O}(1) & \text{if $y$ is not $\tau$-stable} \\
\;\; j \cdot c_y + \mathcal{O}(1) & \text{if $y$ is non-stable} \\
\;\; \max\left\{ j \cdot c_y + \sigma^{q}(y) + \mathcal{O}(1), \ j \cdot c_y + \mathcal{O}(1) \right\} & \text{if $x=y$}
\end{cases}
\label{eq:sigma-tau-equation}
\end{align}
Now we can prove (\ref{eq:stable-cases}). Recall that the expression $\sigma_j \circ \lambda (x)$ is the expression $\sigma^{q} \circ \lambda (x)$ where all registers $y \in \operatorname{Var}(\sigma^{q} \circ \lambda(x))$ are substituted with $\sigma^q \circ \tau^{j+1}(y)$.
First notice that by Lemma \ref{lemma:easy_exp} the expression $\sigma^{q} \circ \lambda(x)$ is equivalent to $\max\{ c'' + \sigma^q(x), d''\} = \max\{\sigma^q(x) + \mathcal{O}(1), \mathcal{O}(1)\}$. We can do these estimations because $\lambda$ does not depend on $j$.
By combining this equation with the definition of $\sigma_j$ we have:
\begin{align*}
\sigma_j \circ \lambda (x) & \;\; = \;\; \sigma^q \circ \tau^{j+1} \circ \sigma^q \circ \lambda (x) \\
& \;\; = \;\; \sigma^q \circ \tau^{j+1} \circ \max\{ \ \sigma^q(x) + \mathcal{O}(1), \ \mathcal{O}(1) \ \} \\
& \;\; = \;\; \max\{ \ \sigma_j(x) + \mathcal{O}(1), \ \mathcal{O}(1) \ \}
\end{align*}
Thus to finish the proof it suffices to show that:
\begin{align}
\sigma_j(x) & \;\; = \;\; \max\{ \ j \cdot c + \sigma^q(x)+ \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1) \ \} \label{equation:lemma_induction2}
\end{align}
for some constants $c, d \in \mathbb{N}$. Indeed, if (\ref{equation:lemma_induction2}) holds, then $\sigma_j(x) + \mathcal{O}(1) = \max\{j \cdot c + \sigma^q(x)+ \mathcal{O}(1) , j \cdot d + \mathcal{O}(1)\}$. Similarly the value $\mathcal{O}(1)$ can be estimated by the expression $j \cdot d + \mathcal{O}(1)$ so the last $\max$ operation is not needed. Notice that this does not change the constants $c$ and $d$, which proves that they do not depend on $\lambda$.
To conclude the proof, we show that (\ref{equation:lemma_induction2}) holds for a stable register $x$.
Recall that $\sigma_j = \sigma^q \circ \tau^{j+1} \circ \sigma^q$.
We will show that (\ref{equation:lemma_induction2}) holds even if we replace $\sigma_j$ with $\sigma^q \circ \tau^{j+1} \circ \sigma'$, where $\sigma'$ is any copyless substitution such that $x$ is $\sigma'$-stable.
The proof is by induction on the size of $\sigma'(x)$. For the base step we must consider $\sigma'(x) = x$ (i.e. $x$ is stable). Then:
\begin{align*}
\sigma_j(x) & \;\; = \;\; \sigma^{q} \circ \tau^{j+1} \circ \sigma'(x) \\
& \;\; = \;\; \sigma^{q} \circ \tau^{j+1} (x) \\
& \;\; = \;\; \max\{ \ j \cdot c_x + \sigma^q(x) + \mathcal{O}(1), \ j \cdot c_x + \mathcal{O}(1) \ \} && \text{(by (\ref{equation:stable_register}))}.
\end{align*}
Suppose that (\ref{equation:lemma_induction2}) holds for substitutions $\sigma'$ with $\sigma'(x)$ of size $n$; we want to show that (\ref{equation:lemma_induction2}) holds for a substitution $\sigma''$ of the form $\sigma''(x) = \sigma'(x) \circledast f$, where $\circledast$ is either $+$ or $\max$, $x$ is $\sigma'$-stable, and $f$ is an expression all of whose registers are non-stable (recall that $\sigma''$ is copyless).
By unraveling $\sigma^q \circ \tau^{j+1} \circ \sigma''(x)$, we get that:
\begin{align*}
\sigma^q \circ \tau^{j+1} \circ \sigma''(x) & \;\; = \;\; \sigma^q \circ \tau^{j+1} \circ (\sigma'(x) \circledast f) \\
& \;\; = \;\; (\sigma^q \circ \tau^{j+1} \circ \sigma'(x)) \circledast (\sigma^q \circ \tau^{j+1} \circ f) \\
& \;\; = \;\; \max\{ \ j \cdot c + \sigma^q(x)+ \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1) \ \} \circledast (\sigma^q \circ \tau^{j+1} \circ f) && (\text{by induction})
\end{align*}
In the last expression, it is easy to check that $\sigma^q \circ \tau^{j+1} \circ f$ is equal to a constant or to $j \cdot c_f + \mathcal{O}(1)$ for some constant $c_f \neq -\infty$.
Indeed, by (\ref{eq:sigma-tau-equation}) we know that $\sigma^{q} \circ \tau^{j+1}(y) = j \cdot c_y + \mathcal{O}(1)$ for non-stable registers. Since $f$ is an expression over non-stable registers, one can show by induction over the size of $f$ that $\sigma^q \circ \tau^{j+1} \circ f$ is either a constant or $j \cdot c_f + \mathcal{O}(1)$ for some constant $c_f \neq -\infty$ and $j$ big enough.
We start with the case when $\sigma^q \circ \tau^{j+1} \circ f \in \mathbb{N}$. Then $\sigma^q \circ \tau^{j+1} \circ f \in \mathcal{O}(1)$ and for $\circledast$ equal to $+$ or $\max$ we have $\sigma^q \circ \tau^{j+1} \circ \sigma''(x) = \max\{ \ j \cdot c + \sigma^q(x)+ \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1) \ \}$ for $j$ big enough.
Suppose now that $\sigma^q \circ \tau^{j+1} \circ f = j \cdot c_f + \mathcal{O}(1)$. Then we must consider when $\circledast$ is $+$ or $\max$ operation. For the former we have:
\begin{align*}
\sigma^q \circ \tau^{j+1} \circ \sigma''(x) & \;\; = \;\; \max\{ \ j \cdot c + \sigma^q(x) + \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1) \ \} + j \cdot c_f + \mathcal{O}(1) \\
& \;\; = \;\; \max\{ \ j \cdot (c + c_f) + \sigma^q(x) + \mathcal{O}(1), \ j \cdot (d + c_f) + \mathcal{O}(1) \ \},
\end{align*}
and for the latter we have:
\begin{align*}
\sigma^q \circ \tau^{j+1} \circ \sigma''(x) & \;\; = \;\; \max\{ \ \max\{ \ j \cdot c + \sigma^q(x) + \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1) \ \}, \ j \cdot c_f + \mathcal{O}(1) \ \} \\
& \;\; = \;\; \max\{ \ j \cdot c + \sigma^q(x) + \mathcal{O}(1), \ j \cdot d + \mathcal{O}(1), \ j \cdot c_f + \mathcal{O}(1) \ \} \\
& \;\; = \;\; \max\{ \ j \cdot c + \sigma^q(x) + \mathcal{O}(1), \ j \cdot \max\{d, c_f\} + \mathcal{O}(1) \ \}.
\end{align*}
The number of induction steps depends on the size of the expression $\sigma''(x)$ which for $\sigma''(x) = \sigma^q(x)$ does not depend on $j$. This concludes the proof. \hfill \qed
\end{proof}
\section{Introduction}
The study of properties of bound states of heavy quarks called quarkonium
has received a lot of attention since the seminal paper by Matsui and Satz~\cite{Matsui:1986dk},
where it was suggested that the formation of a color deconfined medium in heavy-ion collisions
will lead to the dissolution of quarkonium resulting in a suppression of their production. There are large
experimental efforts in recent times dedicated towards understanding quarkonium production
and its dynamics in heavy-ion collisions, which are supplemented by many phenomenological
studies, see Refs.~\cite{Rothkopf:2019ipj,Aarts:2016hap,Mocsy:2013syh,Bazavov:2009us} for recent reviews.
In-medium properties of quarkonium in a Quark-Gluon plasma (QGP) as well as its
dissolution as a function of temperature are all encoded in the quarkonium spectral functions
that are defined in terms of the real-time correlation functions of the appropriate hadron
operators. Quarkonium states show up as peaks in the spectral functions, which at
high temperatures are expected to be broadened and shifted in the frequency space, and
eventually merge into the continuum of quark-antiquark scattering states. Through an analytic
continuation one can relate the spectral function of the quarkonium state of a specific quantum
number channel to the Euclidean time correlation function, which can be calculated using lattice
field theory techniques. The Euclidean time correlation function is the Laplace transform of the
spectral function, if the anti-particle contribution can be neglected, and has a more complicated
kernel otherwise. Therefore, lattice calculations can in principle provide model-independent
information on the quarkonium spectral functions. However, in practice the reconstruction of the
spectral function from a discrete set of Euclidean correlator data from lattice is a difficult task.
Early works on the reconstruction of the spectral functions have been reported in
Refs.~\cite{Nakahara:1999vy,Asakawa:2000tr,Asakawa:2003re,Wetzorke:2001dk,Karsch:2002wv,
Umeda:2002vr,Datta:2003ww,Jakovac:2006sf}. It has also been pointed out that the Euclidean
correlation functions have limited sensitivity to the in-medium quarkonium properties and/or their melting~\cite{Mocsy:2007yj,Petreczky:2008px} because at high temperatures, the temporal extent
of the lattice becomes smaller. In the case of bound states of bottom quarks (bottomonium) there
is an additional problem of large discretization errors in the correlators due to the large bottom-quark
mass. One can circumvent this problem by using non-relativistic QCD (NRQCD), an effective theory in
which the energy scale associated with the heavy quark mass has been integrated out. This approach is widely used to calculate bottomonium
properties~\cite{Lepage:1992tx,Davies:1994mp,Meinel:2009rd,Meinel:2010pv,Hammant:2011bt,Dowdall:2011wh}.
Recent studies within the NRQCD formalism~\cite{Aarts:2010ek,Aarts:2014cda,Kim:2014iga,Kim:2018yhk,Larsen:2019bwy,Larsen:2019zqv}
have indicated that the ground states of bottomonium channels, $\Upsilon(1S)$ and $\eta_b(1S)$ can survive
up to temperatures of $400$ MeV, whereas the fate of $P$-wave bottomonia is not
yet completely settled. Lattice results from the FASTSUM collaboration suggest that the
$P$-wave states melt already at temperatures around $200$ MeV \cite{Aarts:2014cda}, while other independent
studies also within the NRQCD formalism suggest that $P$-wave bottomonia
can survive at higher temperatures within the QGP \cite{Kim:2018yhk,Larsen:2019bwy,Larsen:2019zqv}.
Since NRQCD is an effective theory with an ultra-violet cutoff around the bottom-quark mass,
the choice of lattice spacings in these calculations cannot be too small. Since the temperature is related
to the inverse lattice spacing $T=1/(a N_{\tau})$ with $N_{\tau}$ being the temporal extent, this also
suggests that studying bottomonia at high temperature with some reasonable choice of $N_\tau$ is
difficult. For this reason we do not know yet at what temperatures the ground state bottomonia melt.
The spatial correlation functions of mesons can offer a different perspective on the problem
of in-medium modification of mesons, in particular
charmonium~\cite{Karsch:2012na,Bazavov:2014cta,Bazavov:2020teh}. In contrast to the temporal
meson correlators, the spatial meson correlation functions can be calculated for large (spatial)
separations between the quark and antiquark, and are therefore more sensitive to in-medium
modifications of meson states~\cite{Karsch:2012na,Bazavov:2014cta,Bazavov:2020teh}. The
spatial correlation functions are in turn related to the meson spectral function at non-zero
momenta through the relation,
\begin{equation}
G(z,T)=\int_{0}^{\infty} \frac{2 d \omega}{\omega} \int_{-\infty}^{\infty} d p_z e^{i p_z z} \sigma(\omega,p_z,T).
\label{eq:spatial}
\end{equation}
While the above relation is more complicated than the corresponding relation for the temporal correlation
and spectral functions for mesons, it is still quite useful. At large distances the spatial meson correlation
function decays exponentially, and the exponential decay is governed by the so-called screening mass,
$G(z) \sim \exp(-M_{scr} z)$. When there is a well-defined bound state peak in the meson spectral function,
the screening mass will be equal to the meson pole mass. On the other hand at very high temperatures,
when the quark and antiquark are eventually unbound, the screening mass is given by
$2\sqrt{(\pi T)^2+m_q^2}$, with $m_q$ being the quark mass. Thus the temperature
dependence of the meson screening masses can provide some valuable information about the melting of
meson states. The analysis of the spatial correlation functions has provided some evidence for sequential
in-medium modification of different charmonium states, i.e. stronger in-medium modification of excited
charmonia compared to the ground states, and for the dissolution of the $1S$ charmonium state
at temperatures $T>300$ MeV~\cite{Bazavov:2014cta}.
The aim of this paper is to provide some new insights on the melting of bottomonium states in the QGP through
the study of their spatial correlation functions. For the first time, we use the full relativistic Dirac operator
for the bottom quarks in the construction of the meson correlators in 2+1 flavor QCD, which allows us to make predictions
on the melting of different quantum number states independently of the NRQCD formalism. We can thus unambiguously
observe an earlier melting of the scalar and axial-vector bottomonium states compared to the pseudo-scalar and
vector channels. The paper is organized as follows. In section II we provide the details of the techniques we use.
Subsequently the main results on the bottomonium screening masses are discussed in section III, followed by our
concluding section.
\section{Lattice setup}
We calculate the screening masses of the bottomonium states in QCD with
$2+1$ flavors of dynamical quarks treating the bottom quarks in the quenched
approximation. We use the Highly Improved Staggered Quark (HISQ) action~\cite{Follana:2006rc}
for the quarks and a tree-level Symanzik improved gauge action. Using the HISQ action for the
valence bottom quark is important since it preserves the correct dispersion relation for heavy
quarks~\cite{Follana:2006rc}. The strange quark mass, $m_s$, was chosen to be close to its physical value,
while the light quark masses $m_l=m_s/20$ correspond to a Goldstone pion mass of $160$ MeV in
the continuum limit~\cite{Bazavov:2014pvz}. We perform our calculations on $N_{\sigma}^3\times N_{\tau}$
lattices with temporal extents $N_\tau=8,10,12$ and the spatial extent fixed by $N_{\sigma}=4 N_\tau$.
The corresponding gauge configurations have been generated by the HotQCD collaboration~\cite{Bazavov:2013uja,Bazavov:2014pvz,Ding:2015fca}. We have specifically focused
on a wide temperature range of $2$--$8~T_c$, where $T_c=156.5(1.5)$ MeV is the chiral crossover
temperature~\cite{HotQCD:2018pds}, to enable us to measure the full details of the thermal evolution
of the bottomonium correlators. Moreover we ensured that $m_b a\lesssim 1$ for the lattice spacings
over this entire range of interest which in turn allowed us to have sufficient control on the lattice artifacts
in the results of the bottomonium correlators. Having three different lattice extents allowed us to
have a better control on the discretization effects at high temperatures. The bottom-quark mass in this
entire range was set to be $52.5m_s$, which is close to its physical value. The lattice spacing was determined
in physical units using the $r_1$ scale defined in terms of the static quark-antiquark potential through
$\left . r^2 \frac{dV(r)}{dr}\right|_{r=r_1}=1$.
We used the parameterization of $a/r_1$ obtained in Ref.~\cite{Bazavov:2017dsy} and the value
$r_1=0.3106(18)$ fm~\cite{MILC:2010hzw}. The details of the lattice parameters including the
bare lattice gauge coupling $\beta=10/g_0^2$, the quark masses, temperatures as well as the number of
configurations used in this work are summarized in Table~\ref{tab:latpar}.
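As a quick illustration of the scale setting, the temperature follows from $T=1/(a N_\tau)$ once $a/r_1$ is known at a given $\beta$. The following minimal Python sketch shows the conversion; the value of $a/r_1$ is treated here as an input taken from the parameterization of Ref.~\cite{Bazavov:2017dsy}, and the numbers are purely illustrative.
\begin{verbatim}
HBARC_MEV_FM = 197.327   # hbar*c in MeV fm
R1_FM        = 0.3106    # r1 in fm

def temperature_MeV(a_over_r1, n_tau):
    """T = 1/(a*N_tau), with a = (a/r1)*r1 converted from fm to 1/MeV."""
    a_fm = a_over_r1 * R1_FM
    return HBARC_MEV_FM / (a_fm * n_tau)

# Example: a/r1 = 0.112 gives T ~ 473 MeV on an N_tau = 12 lattice.
print(round(temperature_MeV(0.112, 12)))
\end{verbatim}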
\begin{center}
\begin{table}[h]
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
$\beta$ & $a m_s$ & $am_b$ & \multicolumn{2}{|c|}{$N_\tau=8$} & \multicolumn{2}{c|}{$N_\tau=10$} & \multicolumn{2}{c|}{$N_\tau=12$} \\
\hline
& & & $T$ &\# c & $T$ &\# c & $T$ & \# c \\
\hline
7.650 & 0.0192& 1.01 & - & - & - & - & 349 & 220 \\
\hline
7.825 & 0.0164& 0.86 & 610 & 500 & 488 & 250 & 407 & 180 \\
\hline
8.000 & 0.0140& 0.74 & 710 & 500 & 568 & 500 & 473 & 180 \\
\hline
8.200 & 0.0117& 0.61 & 842 & 250 & 674 & 250 & 561 & 500 \\
\hline
8.400 & 0.0098& 0.52 & 998 & 240 & 798 & 250 & 665 & 500 \\
\hline
8.570 & 0.0084& 0.44 & - & - & 922 & 250 & 768 & 250 \\
\hline
8.710 & 0.0074& 0.39 & - & - & - & - & 864 & 250 \\
\hline
8.850 & 0.0065& 0.34 & - & - & - & - & 972 & 250 \\
\hline
\end{tabular}
\caption{The gauge coupling, $\beta$, the quark masses, the temperature values and the number of
gauge configurations (\#c) used in this study.}
\label{tab:latpar}
\end{table}
\end{center}
The meson operators in terms of staggered fermions have the form
\begin{equation}
J_M(\mathbf{x})=\bar q(\mathbf{x}) (\Gamma_D \times \Gamma_F) q(\mathbf{x}), ~\mathbf{x}=(x,y,z,\tau),
\end{equation}
where $\Gamma_D, \Gamma_F$ are the Dirac gamma-matrices corresponding to the spin and the
staggered taste (flavor) structure. In this work we consider the case where $\Gamma_D=\Gamma_F=\Gamma$.
This choice corresponds to local operators for the meson currents, which in terms of the staggered quark fields
have the simple form $J_M(\mathbf{x})=\tilde \phi(\mathbf{x}) \bar \chi(\mathbf{x}) \chi(\mathbf{x})$.
The staggered phase $\tilde \phi(\mathbf{x})$ specifies the quantum numbers of the meson channel.
In this work we consider the spatial correlation functions along the $z$-direction
\begin{equation}
C_M(z)=\int dx dy d\tau \langle J_M(\mathbf{x}) J_M(0)\rangle .
\end{equation}
For each choice of $\tilde \phi(\mathbf{x})$, the staggered meson correlation function
contains contributions from both parity states which correspond to the
oscillating and non-oscillating parts in the correlators. If we restrict
ourselves to the lowest energy states, the spatial meson correlation
function can be simply written as
\begin{eqnarray}
\nonumber
C_M(z)&=&A_{NO} \cosh\left[M_{NO}\left(z-\frac{N_\sigma}{2}\right)\right]\\
&-&(-1)^z
A_{O} \cosh\left[M_O\left(z-\frac{N_\sigma}{2}\right)\right].
\label{fit}
\end{eqnarray}
In Table~\ref{tab:meson} we give the details of the staggered phases corresponding to the oscillating
and non-oscillating contributions for different meson quantum numbers, as well as the labels denoting the
screening masses in the pseudo-scalar (PS), scalar (S), vector (V) and the axial-vector (AV) channels.
\begin{table}
\begin{tabular}{cccccc}
\hline
& $- \tilde \phi(\mathbf{x})$ & $\Gamma$ & $J^{PC}$ & meson & screening mass \\
\hline
$M_{NO}$ & \multirow{2}{*}{$1$} & $\gamma_4 \gamma_5$ & $0^{-+}$ & $\eta_b$ & $M_{scr}^{PS}$ \\
$M_{O}$ & & $ 1 $ & $0^{++}$ & $\chi_{b0}$ & $M_{scr}^{S}$ \\
\hline
$M_{NO}$ & \multirow{2}{*}{$(-1)^{x+\tau},~(-1)^{y+\tau}$} & $\gamma_i$ & $1^{--}$ & $\Upsilon$ & $M_{scr}^V$ \\
$M_{O}$ & & $\gamma_j\gamma_k$ & $1^{+-}$ & $h_b$ & $M_{scr}^{AV}$ \\
\hline
\end{tabular}
\caption{The staggered phases, the $\Gamma$ matrices, the bottomonium states and the corresponding
screening masses considered in this study.}
\label{tab:meson}
\end{table}
In this study we used point sources for the quark and antiquark in the meson correlators and performed
two-state fits of the corresponding correlators to Eq. (\ref{fit}) in order to determine the bottomonium screening
masses; a minimal sketch of such a fit is shown below.
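The following Python sketch illustrates the two-state fit of the ansatz in Eq.~(\ref{fit}); the spatial extent, starting values, and variable names are illustrative assumptions rather than the actual analysis setup.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

N_SIGMA = 48  # spatial extent, e.g. N_sigma = 4 N_tau for N_tau = 12

def two_state(z, A_no, M_no, A_o, M_o):
    """Non-oscillating plus oscillating cosh terms; z is integer-valued."""
    zc = z - N_SIGMA / 2
    return (A_no * np.cosh(M_no * zc)
            - (-1.0) ** z * A_o * np.cosh(M_o * zc))

# z, c, dc: separations, measured correlator, and its errors (inputs)
# popt, _ = curve_fit(two_state, z, c, sigma=dc, p0=[1.0, 0.5, 1.0, 0.6])
# M_NO, M_O = popt[1], popt[3]   # screening masses in lattice units
\end{verbatim}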
\section{Results}
\begin{figure}
\includegraphics[width=8cm]{M_PS.pdf}
\caption{The pseudo-scalar screening mass divided by the mass of $\eta_b(1S)$ meson at zero temperature
as function of the temperature obtained on lattices with $N_{\tau}=8,~10$ and $12$.
The solid line is LO prediction for the screening mass, while the dashed line is the NLO prediction, see text.}
\label{fig:PS}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{VC-PS.pdf}
\caption{The difference between the vector and pseudo-scalar screening masses
as function of the temperature obtained on lattices with $N_{\tau}=8,~10$ and $12$.
The solid line corresponds to the difference between the $\Upsilon(1S)$ mass and the $\eta_b(1S)$ mass
from the Particle Data Group (PDG) \cite{PDG20}.
}
\label{fig:VC-PS}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{SC-PS.pdf}
\caption{The difference between the scalar and pseudo-scalar screening masses
as function of the temperature obtained on lattices with $N_{\tau}=8,~10$ and $12$.
The solid line corresponds to the difference between the $\eta_b(1S)$ and the $\chi_{b0}(1P)$ mass from PDG.
}
\label{fig:SC-PS}
\end{figure}
We begin by showing the pseudo-scalar $\eta_b$ screening mass as a function of the temperature
in Fig. \ref{fig:PS}. The results on the screening mass at each temperature have been normalized
by the zero temperature $\eta_b$ meson mass in this figure. While the ratio $m_b/m_s$ is
chosen close to its physical value, the lines of constant physics for the strange
quark mass have not been fixed very precisely for this temperature range~\cite{Bazavov:2014pvz}.
Therefore, we cannot use the experimentally measured mass for $\eta_b$ from the Particle
Data Group. The dependence of the $\eta_b$ meson mass on the $b$ quark mass for the HISQ
action has been studied earlier in Ref.~\cite{Petreczky:2019ozv} for $\beta=7.596,~
7.825,~8.0,~8.2$ and $8.4$. Therefore we could estimate the $\eta_b$ mass for
these $\beta$ values, where the corresponding $b$ quark masses are given in Table \ref{tab:latpar}.
It turns out that the $\eta_b$ mass is larger than the PDG value by 4\%, and 9.8\%
for $\beta=7.596$ and $\beta=8.4$, respectively. For $\beta>8.4$, where we do not have the zero
temperature mass data, we assume that the $\eta_b$ mass is 9.8\% larger than the experimentally
measured value based on the above result. Furthermore we assume that the $\eta_b$ mass
deviates from the PDG value by the same amount at $\beta=7.65$ as for $\beta=7.596$, since
the difference in the $\beta$ values is quite small. Since we did not calculate the
$T=0$ mass of $\eta_b$ explicitly at the given lattice spacings but estimated it based on
interpolation, we assign a systematic error of $1\%$ to the zero temperature mass of $\eta_b$
to account for this effect. For the data in Fig.~\ref{fig:PS}, we also took into account
the errors in the scale setting $a/r_1$ as well as the error of $r_1$ in physical units. Different sources
of errors have been added in quadrature to determine the error on each data point. We observe that
the lattice cutoff dependence ($N_{\tau}$-dependence) of the results shown in Fig. \ref{fig:PS} is small compared
to the estimated systematic and statistical errors.
In Fig. \ref{fig:PS} we observe that at the lowest temperatures, the $\eta_b$ screening mass
is close to the zero temperature mass, while at high temperature the screening mass increases
linearly with the temperature. The temperature dependence of the $\eta_b$ screening mass
is qualitatively very similar to the temperature dependence of the ground state charmonium
($\eta_c$ and $J/\psi$) screening masses, except that for charmonium the linear increase with
the temperature is seen already at $T>250$ MeV \cite{Bazavov:2014cta}. We recall here that the
linear increase of the screening masses with temperature corresponds to an unbound quark-antiquark
pair, where the screening mass at leading order (LO) for quarks of mass $m_q$ is given by
$2 \sqrt{(\pi T)^2+m_q^2}$. The next-to-leading correction to the screening mass has also been
calculated~\cite{Laine:2003bd}. We show both the LO and NLO result for the bottomonium screening
mass in Fig.~\ref{fig:PS}. For the bottom quarks we use the $\overline{MS}$ mass at the scale
$\mu=m_b$, given by $m(\mu=m_b)=4.188$ GeV \cite{Petreczky:2019ozv}. We observe that the lattice results for the screening
mass are close to the NLO predictions for $T>500$ MeV.
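For orientation, the LO curve shown in Fig.~\ref{fig:PS} is straightforward to evaluate; a minimal sketch:
\begin{verbatim}
import math

M_B = 4.188  # GeV, MS-bar bottom-quark mass at mu = m_b

def m_scr_LO(T, m_q=M_B):
    """LO screening mass 2*sqrt((pi T)^2 + m_q^2) of an unbound pair."""
    return 2.0 * math.sqrt((math.pi * T) ** 2 + m_q ** 2)

print(m_scr_LO(0.6))   # ~9.19 GeV at T = 600 MeV
\end{verbatim}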
The temperature dependence of the pseudo-scalar screening masses for $T>500$ MeV suggests that the bottom quark and
antiquark are no longer tightly bound, i.e. the melting of the $\eta_b$ meson is underway at $T>500$ MeV.
At lower temperatures the $\eta_b$ state exists with small in-medium modifications. The latter conclusion is
consistent with the findings from NRQCD based studies~\cite{Aarts:2010ek,Aarts:2014cda,Kim:2018yhk,Larsen:2019bwy,Larsen:2019zqv}
as well as with results obtained from potential models with a screened complex potential~\cite{Petreczky:2010tk,Burnier:2015tda}.
Next we study the temperature dependence of the difference between the vector $\Upsilon$ and $\eta_b$
screening masses, which is shown in Fig. \ref{fig:VC-PS}. We do not expect this difference to be affected by
the small deviations of $b$-quark mass from its physical value and therefore we do not attempt to correct
for these small deviations. In estimating the errors for this observable, we have simply added the errors
in the determination of lattice spacings and the statistical errors in quadrature. We again observe a mild
$N_{\tau}$ dependence of the results compared to the estimated errors. At zero temperature the difference
between the $\Upsilon(1S)$ and $\eta_b(1S)$ mass is about $70$ MeV \cite{PDG20} and is caused by spin-dependent interactions,
which are suppressed as $1/m_b^2$. At the lowest two temperatures the difference between the vector and
pseudo-scalar screening masses is consistent with this value. This suggests that at these temperatures the
$\eta_b(1S)$ and $\Upsilon(1S)$ exist as well defined bound states with little in-medium modifications.
For $T \ge 500$ MeV this difference increases linearly with temperature. Perturbative calculations at NLO in
the strong coupling constant predict this difference to be identically zero. In order to understand the
linear temperature dependence of the difference between the vector and pseudo-scalar screening
masses at high temperatures, one has to go beyond NLO and instead consider a
dimensionally reduced three dimensional effective theory of QCD. Within this effective theory, a
quark and antiquark propagating along the $z$-direction, interact via a spin-dependent potential
which is proportional to the temperature~\cite{Koch:1992nx,Shuryak:1993kg}. This spin dependent
potential causes a splitting between the pseudo-scalar and vector screening mass that is also
proportional to the temperature~\cite{Koch:1992nx,Shuryak:1993kg}. For light quarks, where the
effect of their masses is negligible, this feature has been observed in the lattice calculations for
$T>900$ MeV, and the difference is $\sim 0.3 T$ \cite{Bazavov:2019www}. For bottom quarks, however,
the effective quark mass is larger, resulting in a suppression of the spin-dependent interactions. As a result,
the difference between the vector and pseudo-scalar screening masses is smaller than for the light quarks
in the studied temperature region. At much higher temperatures $T\gg m_b$, we expect that this
difference will eventually approach the value of $0.3T$ even for the bottomonium states.
Therefore, the increase in the difference between the vector and pseudo-scalar screening masses
shown in Fig.~\ref{fig:VC-PS} is in fact expected and is consistent with an unbound bottom quark-antiquark pair.
We now examine the difference between the $\eta_b$ and $\chi_{b0}$, i.e. pseudo-scalar and scalar screening
masses, as well as the difference between the $\Upsilon$ and $h_b$ masses. Again we do not expect these
observables to be sensitive to the small error in our determination of the bottom-quark mass. In Fig.~\ref{fig:SC-PS}
we show the difference between the scalar and pseudo-scalar screening masses, the error on each data point
is determined by adding the statistical and scale-setting errors in quadrature. The difference between the axial-vector
and vector screening masses is very similar to the one shown in the figure, hence not shown explicitly. This difference
too has a mild lattice cut-off dependence. At the lowest temperature $T=350$ MeV, the difference between the scalar
and pseudo-scalar (or between the axial-vector and vector) screening masses agree with the differences between
the $\chi_{b0}(1P)$ and $\eta_b(1S)$ ($h_b(1P)$ and $\Upsilon(1S)$) masses reported in the PDG \cite{PDG20}.
This again suggests that $\chi_{b0}(1P)$ and $h_b(1P)$ states exist in the deconfined medium at
$T \le 350$ MeV with relatively small medium modifications. This is also consistent with recent lattice results
using NRQCD which show almost no medium modification of the $h_b(1P)$ and $\chi_{b0}(1P)$ states~\cite{Larsen:2019bwy,Larsen:2019zqv}.
For $T>350$ MeV this observable decreases with
increasing temperature, initially very rapidly, by about a factor of two in the temperature
region $350~{\rm MeV} \le T \le 600~{\rm MeV}$. At higher temperatures the splitting
between the scalar and pseudo-scalar screening masses is expected to be extremely small. In the
light quark sector, this observation is related to the restoration of chiral symmetry and the effective restoration
of the axial U(1) symmetry. For bottom quarks these symmetries are explicitly broken by the large value of
the quark mass. However for $T\gg m_b$ we expect that the difference between the scalar and pseudo-scalar
screening correlators will eventually vanish. Therefore, the first rapid drop in this observable shown in
Fig.~\ref{fig:SC-PS} is related to melting of $\chi_{b0}(1P)$ and $h_b(1P)$ states, while the subsequent
slower decrease for $T>600$ MeV is related to the residual effects of the bottom-quark mass and their eventual
disappearance in the limit of very high temperatures.
\section{Conclusions}
We performed the first comprehensive study of the temperature dependence of the pseudo-scalar, vector, scalar and axial-vector
bottomonium screening masses on the lattice using a relativistic action (HISQ) for the bottom quarks. We scanned a wide
temperature range from $350$ MeV to $1000$ MeV, and for most temperatures performed these calculations at three different
lattice cutoffs corresponding to temporal extent $N_{\tau}=8,~10$ and $12$ of the lattices.
We have found that the lattice spacing dependence of our results is small compared to other sources
of errors and thus does not affect our main conclusions.
At the lowest temperature, all four screening masses agree with the corresponding bottomonium masses at zero temperature.
For the axial-vector and scalar screening masses we find a rapid change as a function of temperature for $T>350$ MeV,
while for the vector and pseudo-scalar screening masses the corresponding thermal modifications occur at a higher temperature, $T>450$ MeV.
The small thermal modifications of the ground state bottomonium screening masses ($\eta_b(1S)$ and $\Upsilon(1S)$) for $T<450$ MeV are consistent
with the lattice calculations within the NRQCD formalism~\cite{Aarts:2010ek,Aarts:2014cda,Kim:2018yhk}.
On the other hand, we predict that the $1P$ bottomonia will melt at temperatures around $350$ MeV, while
the ground state bottomonia will melt at $T>500$ MeV.
\section*{Acknowledgments}
PP was supported by U.S. Department of Energy under Contract No. DE-SC0012704.
JHW's work was supported by the U.S. Department of Energy, Office of Science,
Office of Nuclear Physics and Office of Advanced Scientific Computing Research within
the framework of the Scientific Discovery through Advanced Computing (SciDAC) award
Computing the Properties of Matter with Leadership Computing Resources, and by
the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer
417533893/GRK2575 ``Rethinking Quantum Field Theory''. SS gratefully acknowledges
financial support from the Department of Science and Technology, Govt. of India, through a
Ramanujan Fellowship.
\section{Introduction}
The ophthalmologic examination has been regarded as an important routine for detecting not only multiple eye-related diseases but also ocular manifestations of many anomalies in the systemic circulatory system and the nervous system \cite{chatziralli2012value}. Among these detectable anomalies, arteriolosclerosis is critical yet asymptomatic; its diagnosis is mostly conducted by medical specialists, requires vast experience, and largely relies on subjective qualitative observations.
Assessment of arteriovenous crossing points in retinal images provides rich cues for quick screening of arteriosclerosis and even for classifying them into different severity grades \cite{hubbard1999methods}. The assessment is based on some diagnostic criteria, for example, Scheie's classification \cite{WALSH19821127}, as shown in Figs.~\ref{fig_story}(b)--(e). The grades are described as follows: (i) \textit{none} (no anomaly observed); (ii) \textit{mild} (slight shrink in the caliber at venular edges); (iii) \textit{moderate} (narrowed caliber at a single venular edge); and (iv) \textit{severe} (narrowed caliber at both venular edges).
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{zfig_story.pdf}
\caption{Typical examples of our prediction targets. Images in the first and second rows are raw retinal patches and automatically-generated vessel maps with manually-annotated artery/vein labels, respectively. Red represents arteries while blue represents veins. (a) is a false crossing (the vein runs above the artery), while (b)--(e) are for \textit{none}, \textit{mild}, \textit{moderate}, and \textit{severe} grades, respectively. Note that even the state-of-the-art segmentation techniques cannot capture the caliber narrowing; therefore, the arterioloscleroses are not very obvious in the vessel maps.}
\label{fig_story}
\end{figure}
However, human graders are subjective and usually have different levels of experience, and there has been criticism of the low reproducibility of severity grading, which makes grading results from human graders unreliable for clinical practice, screening, and clinical trials \cite{6547196}. Also, considering the ever-increasing demand for ophthalmologic examination, computer-aided diagnosis (CAD) is extremely helpful for quick screening. Yet, retinal image analysis for CAD is a challenging task due to the high complexity of the vessel system and huge visual differences among retinal images.
In fact, most researchers in this area have been focusing on preliminary tasks, such as vessel segmentation \cite{7042289,8036917,8341481}, artery/vein classification \cite{HUANG2018197,10.1007/978-3-319-93000-8_71,8055572}, etc. A few works address higher-level tasks \cite{hatanaka2011automatic,6547196}, mostly on top of vessel segmentation, such as vessel width measurement and vessel-to-vessel ratio calculation. However, they usually struggle in actual diagnoses: Firstly, vessel segmentation in retinal images \textit{per se} is a challenging task. The vessel maps in Fig.~\ref{fig_story}(c)--(e), which are produced by the state-of-the-art segmentation model \cite{li2019iternet}, cannot capture the vessel deformation at crossing points. This may imply that the deformation is too minor to be captured by segmentation models, although such segmentation-based approaches are the typical solution for automatic severity grading. Secondly, the existing methods detect arteriovenous crossing points by applying morphological operators to vessel maps \cite{cambocombined}. This approach may not be accurate enough to find crossing points that satisfy diagnostic requirements. For example, only crossing points at which the artery passes above the vein can be used for diagnosis, so Fig.~\ref{fig_story}(a) is not a diagnostic crossing point since the artery goes below the vein.
Instead of fully relying on segmentation results, we propose a multi-stage approach, in which segmentation results are used only for finding crossing point candidates, and actual prediction of the severity grade is done for an image patch around each crossing point after validating if the crossing point is an actual and informative one. To the best of our knowledge, this is the first work proposing a fully-automatic methodology aiming at grading arteriolosclerosis through the joint detection and analysis of retinal crossings.
Another issue in our severity grading task, which is very common in medical imaging, is the imbalanced label distribution. Most patients in our dataset have the slightest signs (\textit{none} and \textit{mild}) of arteriolosclerosis while only a few patients suffer from the \textit{severe} grades of artery hardening. Also, the boundaries among different severity labels are not always obvious, making accurate diagnosis challenging.
Inspired by the concept of the multidisciplinary team \cite{taylor2010multidisciplinary}, which strives to make a comprehensive assessment of a patient, we propose a multi-diagnosis team network (MDTNet) in this paper to address the imbalanced label distribution and label ambiguity problems at the same time. MDTNet can combine the features from multiple classification models with different structures or different loss functions. Some of the underlying models in MDTNet use the class-balanced focal loss \cite{FocalLoss} to handle hard or rare samples, of which the original version requires hyperparameter tuning, while MDTNet can utilize the advantage of the focal loss without tuning its hyperparameters.
Our main contribution is two-fold:
(i) We propose a whole pipeline for an automatic method for severity grading of artery hardening. Our method can find and validate possible arteriovenous crossing points, for which the severity grade is predicted.
(ii) We design a new model, MDTNet, which uses the focal loss to address the problems of data ambiguity and imbalance. Interestingly, our experimental results show that by ensembling multiple models' features, our model without hyperparameter tuning outperforms baselines with the focal loss.
\section{Dataset}
We built a vessel crossing point dataset extracted from our retinal image database with $1,440$ images in the size of $5,184 \times 3,456$ pixels, which are captured by the CR-2 AF Digital Non-Mydriatic Retinal Camera (Canon, Tokyo).
This database includes the medical data of $684$ people, which are with an average age of $64.5$ (standard deviation: $6.1$). The ratio between female and male is $65.2\%:34.8\%$ and $47.6\%$ of all participants have hypertension disease.
To find crossing points in these images (Fig.~\ref{fig_method}(a)--(d)), we used a segmentation model~\cite{li2019iternet} to get vessel maps. We then classified each pixel on the extracted vessels into artery/vein using \cite{li2020joint}. We combine the vessel segmentation and classification results to find crossing points because the classification results, which are more beneficial for crossing point detection, tend to have more errors, while the segmented vessel maps are more accurate. Therefore, we refine the classification results based on the vessel maps. A classic approach then finds crossing points in these refined artery/vein maps. Specifically, we find the artery pixels neighbouring vein pixels and check whether each is a crossing point or not using the skeletonized vessel map. The points marked in yellow in Fig.~\ref{fig_method} are detected crossing point candidates. Note that we exclude candidates from cup zones, indicated by a pink circle and dot in Fig.~\ref{fig_method}, because the vessel system in this area has high complexity and thus segmentation and classification are not reliable. Image patches are of size $150\times150$, centered at the crossing point candidates.
Consequently, we detected $4,240$ crossing points and extracted corresponding image patches, centered at these crossing points.
Each image patch was carefully reviewed by a highly experienced ophthalmologist.
Due to the errors in vessel segmentation and artery/vein classification, the detected crossing points may not be actual nor informative. Therefore, the specialist first annotated each image patch with a label on its validity, \textit{i.e}.\@\xspace, if the image patch contains an actual and informative crossing point (\textit{true}) or not (\textit{false}). The numbers of true and false crossing points are $2,507$ and $1,733$, respectively. For each true crossing point, the specialist gave its severity label in $C = \{\mbox{\textit{none}, \textit{mild}, \textit{moderate}, \textit{severe}}\}$. The numbers of image patches with respective labels are $1,177$, $816$, $457$, and $57$.
In both tasks, the dataset is divided into training, validation, and test sets following a ratio of $8$:$1$:$1$. As an examinee may have multiple retinal images, it is important to strictly put all of them into the same subset to prevent training data contamination; a minimal sketch of such a group-wise split is given below.
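The following Python sketch (using scikit-learn; the variable names are hypothetical) shows one way to realize such an examinee-grouped split.
\begin{verbatim}
from sklearn.model_selection import GroupShuffleSplit

def split_by_examinee(X, y, groups, seed=0):
    """8:1:1 split keeping all patches of one examinee together.
    X, y, groups are numpy arrays; groups holds the examinee IDs."""
    outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train, rest = next(outer.split(X, y, groups))
    inner = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    val, test = next(inner.split(X[rest], y[rest], groups[rest]))
    return train, rest[val], rest[test]
\end{verbatim}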
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{zfig_method.pdf}
\caption{Overall pipeline of our severity grading.}
\label{fig_method}
\end{figure}
\section{Severity Grading Pipeline}
Our method forms a pipeline with three main modules, \textit{i.e}.\@\xspace, preprocessing, patch validation, and severity grade prediction. The whole pipeline is shown in Fig.~\ref{fig_method}.
\noindent\textbf{Preprocessing} Steps (a)--(d) in the figure constitute preprocessing, in which the same processes as in our dataset construction are applied to get image patches of $150\times150$ pixels centered at crossing point candidates.
\noindent\textbf{Crossing Point Validation}
Both crossing point validation and severity grading are classification problems, whereas validation is easier because the label distribution is more balanced and the differences between true and false crossing points are more obvious. We find that commonly used classification models, such as \cite{resnet,densenet,inception}, work well for our validation task (refer to Section \ref{sec:exp}).
\noindent\textbf{Severity Grade Prediction} The severity grade prediction task is much more challenging: Firstly, the label distribution is highly biased. For example, samples with the \textit{none} label account for about $47\%$ of the total samples, while ones with the \textit{severe} label only take up about $2\%$. Secondly, the difference among samples with different labels may not be clear enough. Even medical doctors may make diverse decisions on a single image patch.
For such classification tasks with ambiguous or imbalanced classes, the focal loss \cite{FocalLoss} has been used, which makes a model more aware of hard samples than easy ones.
The focal loss introduces a hyperparameter $\gamma$, on which a model's performance depends significantly. Tuning this hyperparameter is extremely important yet computationally expensive \cite{AutomatedFocalLoss}. A greater $\gamma$ may make the model focus too much on hard samples, spoiling the accuracy on other samples, while a smaller $\gamma$ may decrease its ability to classify hard samples.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{zfig_model.pdf}
\caption{MDTNet for severity grade prediction.}
\label{fig_model}
\end{figure}
We propose a multi-diagnosis team network (MDTNet) to address the aforementioned problems in severity grade prediction. As shown in Fig.~\ref{fig_model}, MDTNet consists of three modules, \textit{i.e}.\@\xspace, a base module, a focal module, and a fusion module.
The base and focal modules have multiple sub-models, and all of them take the same image patch as input. The difference between the sub-models in the base and focal modules lies in their losses: the ones in the base module adopt the cross entropy (CE) loss while the ones in the focal module use the focal loss. These sub-models are trained independently with their respective losses. The fusion module concatenates all features (\textit{i.e}.\@\xspace, the outputs of the second last layers of the sub-models) into a single vector, which is then fed into two fully-connected layers to make the final prediction.
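A minimal PyTorch-style sketch of the fusion module is given below; the hidden size is an illustrative choice, and each frozen sub-model is assumed to return its penultimate feature vector.
\begin{verbatim}
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    def __init__(self, sub_models, feat_dims, hidden=256, n_classes=4):
        super().__init__()
        self.sub_models = nn.ModuleList(sub_models)  # weights are frozen
        self.fc = nn.Sequential(nn.Linear(sum(feat_dims), hidden),
                                nn.ReLU(),
                                nn.Linear(hidden, n_classes))

    def forward(self, x):
        with torch.no_grad():                        # frozen sub-models
            feats = [m(x) for m in self.sub_models]  # penultimate features
        return self.fc(torch.cat(feats, dim=1))
\end{verbatim}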
The focal loss is originally designed for object detection \cite{FocalLoss}, defined as
\begin{equation}
L(y, t) = -\sum_{l} t_{l}(1-y_l)^{\gamma}\log y_l,
\end{equation}
where $t$ is the one-hot representation of the label and $y$ is the softmax output from a model ($t_l$ and $y_l$ are the $l$-th entries of $t$ and $y$); $\gamma$ is a hyperparameter to weight hard examples. The focal loss reduces to the CE loss when $\gamma=0$, and a larger $\gamma$ puts more weight on hard examples. One common criticism of the focal loss is its sensitivity to $\gamma$. We therefore propose to ensemble sub-models with different $\gamma$'s. The hypothesis behind this choice is that models trained with different $\gamma$'s may rely on different cues for prediction, and aggregating their respective features may help in improving the final decision. This is embodied in the focal module. The same idea can also be applied to different network architectures, embodied in the base module. These sub-models thus provide diagnostic features that may complement each other.
To cope with the imbalanced class distribution, we adopt class weighting \cite{7780949,Cui_2019_CVPR}. We multiply weight $\alpha_l = \ln N_l/\ln N$ to each term (\textit{i.e}.\@\xspace different $l$'s) in the CE/focal loss, where $N$ and $N_l$ are the numbers of all samples and of samples with the label corresponding to the $l$-th entry of $t$. We pre-train the sub-models using their own classifiers and losses, and then freeze their weights to train the additional two fully-connected layers for the final decision.
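A sketch of the class-weighted focal loss used by the sub-models is shown below (PyTorch-style; setting $\gamma=0$ recovers the weighted CE loss).
\begin{verbatim}
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, target, alpha, gamma):
    """alpha: per-class weights alpha_l = ln(N_l)/ln(N); gamma >= 0."""
    log_p = F.log_softmax(logits, dim=1)
    p = log_p.exp()
    t = F.one_hot(target, num_classes=logits.size(1)).float()
    loss = -(alpha * t * (1.0 - p) ** gamma * log_p).sum(dim=1)
    return loss.mean()
\end{verbatim}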
\noindent\textbf{Data Augmentation}
We adopt extensive data augmentation. During training, each input image has a 50\% chance of undergoing each operator in Fig.~\ref{fig_data_augmentation}. Among them, (b$\sim$h) are used for shape modification, changing the locations and the shapes of the attention areas of the deep learning models; (i$\sim$k) provide variety in imaging quality by blurring or adding random noise; and (l) represents sensor characteristics of color (hue and saturation). A minimal sketch of such an augmentation pipeline is given after the figure.
\begin{figure}[t]
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_raw.png}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Flipu.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Flipl.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_CropA.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Affin.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Affin1.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Affin11.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Affin111.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Gauss.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Addit.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_Frequ.jpg}}
\hfill
\subfloat[]{\includegraphics[width=0.235\textwidth]{zfig_aug_AddTo.jpg}}
\caption{Our data augmentation operator pool. (a) Raw image, (b) vertical flipping, (c) horizontal flipping, (d) cropping and padding, (e) scaling, (f) translating, (g) rotating, (h) sheering, (i) blurring, (j) additional noise, (k) additional frequency noise, and (l) color modification.}
\label{fig_data_augmentation}
\end{figure}
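A minimal sketch of this augmentation pool with the imgaug library follows; the parameter ranges are illustrative, and the frequency-noise operator (k) is omitted for brevity.
\begin{verbatim}
import imgaug.augmenters as iaa

def maybe(aug):                      # apply each operator with p = 0.5
    return iaa.Sometimes(0.5, aug)

augmenter = iaa.Sequential([
    maybe(iaa.Flipud(1.0)),                       # (b) vertical flip
    maybe(iaa.Fliplr(1.0)),                       # (c) horizontal flip
    maybe(iaa.CropAndPad(percent=(-0.1, 0.1))),   # (d) crop and pad
    maybe(iaa.Affine(scale=(0.8, 1.2))),          # (e) scaling
    maybe(iaa.Affine(translate_percent=(-0.1, 0.1))),  # (f) translation
    maybe(iaa.Affine(rotate=(-20, 20))),          # (g) rotation
    maybe(iaa.Affine(shear=(-10, 10))),           # (h) shearing
    maybe(iaa.GaussianBlur(sigma=(0.0, 1.0))),    # (i) blurring
    maybe(iaa.AdditiveGaussianNoise(scale=5)),    # (j) additive noise
    maybe(iaa.AddToHueAndSaturation((-15, 15))),  # (l) color modification
])
# augmented = augmenter(images=batch_of_patches)
\end{verbatim}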
\section{Experiments and Results}\label{sec:exp}
\noindent\textbf{Implementation}
For the sub-models in the base module, we used ResNet \cite{resnet}, Inception \cite{inception}, and DenseNet \cite{densenet}. In the focal module, DenseNet sub-models with $\gamma = 1$, $2$, and $3$ were used. All these models are pretrained on the ImageNet dataset \cite{ILSVRC15}. The fully-connected layers in the fusion module are followed by the ReLU nonlinearity. For optimization, Adam \cite{adam} was adopted with a learning rate of $0.0001$.
\noindent\textbf{Performance of Base Models}
We first evaluated the performance of the base module's sub-models for the crossing point validation and severity grade prediction tasks. For comparison, we also give the results of models without pre-training (w/o PT) and without data augmentation (w/o DA), as well as models using only the green channel (GC Only).
\begin{table}[t]
\caption{Performances of base models with ablation.}\label{table_performance_base_models}
\begin{tabularx}{\textwidth}{L{1.5}C{0.925}C{0.925}C{0.85}C{0.1}C{0.925}C{0.925}C{0.85}}
\hline
\multirow{2}{*}{Models} & \multicolumn{3}{c}{Cross. Point Val.} && \multicolumn{3}{c}{Severity Grade Pred.}\\
\cline{2-4}\cline{6-8}
& Pre. & Rec. & $t$ (ms) && Acc. & Kappa & $t$ (ms)\\
\hline
ResNet-50 & 0.9427 & 0.9526 & 0.274 && 0.8063 & 0.6629 & 0.278\\
\multicolumn{1}{l}{ \qquad---w/o PT} & 0.8646 & 0.6975 & 0.274 && 0.5445 & 0.0177 & 0.278\\
\multicolumn{1}{l}{ \qquad---w/o DA} & 0.9531 & 0.8551 & 0.274 && 0.5340 & 0.0036 & 0.278\\
\multicolumn{1}{l}{ \qquad---GC Only} & 0.9583 & 0.9154 & 0.273 && 0.7277 & 0.5288 & 0.273\\
\hline
Inception v3 & 0.9635 & \textbf{0.9635} & 0.218 && 0.8534 & 0.7432 & 0.222\\
\multicolumn{1}{l}{ \qquad---w/o PT} & 0.9010 & 0.6865 & 0.218 && 0.5183 & 0.0313 & 0.222\\
\multicolumn{1}{l}{ \qquad---w/o DA} & 0.9323 & 0.9179 & 0.218 && 0.5393 & 0.0000 & 0.222\\
\multicolumn{1}{l}{ \qquad---GC Only} & 0.9167 & 0.9119 & \textbf{0.216} && 0.8115 & 0.6771 & \textbf{0.216}\\
\hline
DenseNet-121 & 0.9479 & 0.9630 & 0.266 && \textbf{0.8795} & \textbf{0.7892} & 0.269\\
\multicolumn{1}{l}{ \qquad---w/o PT} & 0.9375 & 0.6742 & 0.266 && 0.5288 & 0.0050 & 0.269\\
\multicolumn{1}{l}{ \qquad---w/o DA} & \textbf{0.9740} & 0.8274 & 0.266 && 0.7225 & 0.4865 & 0.269\\
\multicolumn{1}{l}{ \qquad---GC Only} & \textbf{0.9740} & 0.9212 & 0.266 && 0.6702 & 0.4406 & 0.267\\
\hline
\end{tabularx}
\end{table}
\begin{table}[t]
\caption{Performance of MDTNet models for severity grade prediction.}\label{table_performance_MDTNet}
\begin{tabularx}{\textwidth}{L{1.1}C{1.1}C{1.1}C{1.1}C{1.1}C{0.2}C{1.1}C{1.1}C{1.1}}
\hline
\multirow{2}{*}{Metrics} & \multicolumn{4}{c}{DenseNet-121 (Focal Loss)} && \multicolumn{3}{c}{MDTNet}\\
\cline{2-5}\cline{7-9} & $\gamma=1$ & $\gamma=2$ & $\gamma=5$ & $\gamma=10$ && $n=0$ & $n=1$ & $n=3$ \\
\hline
\hline
Acc. & 0.8639 & 0.7434 & 0.8639 & 0.7958 && 0.8953 & 0.9110 & \textbf{0.9162}\\
Kappa. & 0.7642 & 0.5685 & 0.7641 & 0.6508 && 0.8183 & 0.8453 & \textbf{0.8542}\\
$t$ (ms) & \textbf{0.268} & \textbf{0.268} & \textbf{0.268} & \textbf{0.268} && 0.767 & 1.047 & 1.571\\
\hline
\end{tabularx}
\end{table}
\begin{figure}[t]
\centering
\subfloat[]{\includegraphics[width=0.335\textwidth]{zfig_confusion_matrix_base_model.pdf}}
\subfloat[]{\includegraphics[width=0.335\textwidth]{zfig_confusion_matrix_MDTNet1.pdf}}
\subfloat[]{\includegraphics[width=0.335\textwidth]{zfig_confusion_matrix_MDTNet3.pdf}}
\caption{Confusion matrices for three different severity grade prediction models. The recall is shown in the last row and the precision is shown in the last column. (a) MDTNet without the focal module, (b) MDTNet for $n=1$, and (c) MDTNet for $n=3$.}
\label{fig_confusion_matrix}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{zfig_grad_cam.pdf}
\caption{Visual explanation of prediction results. (a,b) are for the crossing point validation model and (c,d) are from the severity grade prediction model. The first row is the raw input images and the second row is the class-discriminative regions.}
\label{fig_visual_explanation}
\end{figure}
The crossing point validation performances are shown in the left part of Table \ref{table_performance_base_models}. We use two metrics, precision and recall, and report the per-sample running time $t$ to show the timing performance. We can see that pre-training and data augmentation improve the overall performance of the crossing point validation. The Inception model with PT and DA achieved the best recall and the second-best precision.
Note that PT and DA will not change the running time of the model because they do not modify the network structure.
The right part of Table \ref{table_performance_base_models} gives the results of the base models on the severity grade prediction task, and Table~\ref{table_performance_MDTNet} presents the performance of MDTNet and of models using the focal loss. In addition to the classification accuracy, we also adopt Cohen's kappa, which measures the agreement between the ground-truth labels and the predictions. We can see that, compared with the focal loss models, DenseNet can achieve higher overall accuracy with the CE loss. However, combining different models, different losses, and different $\gamma$ values can boost the performance. MDTNet achieved the highest performance in this experiment when $n=3$.
To better analyze the severity grade prediction performance, we present the confusion matrices in Fig.~\ref{fig_confusion_matrix}.
It can be seen that, as the number of underlying sub-models increases, MDTNet gains classification ability.
Fig.~\ref{fig_visual_explanation} shows visual explanations of MDTNet obtained by Grad-CAM \cite{Grad-CAM}. Figs.~\ref{fig_visual_explanation} (a) and (b) show two examples for the crossing point validation. The ground-truth labels are \textit{false} and the predictions were also \textit{false}, \textit{i.e}.\@\xspace, these are not effective crossing points as the arteries are under the veins. The model mainly attended to the red area in the second row along the vein. The model might find the vein, track it down, and reach the conclusion that it lies above the artery. Figs. \ref{fig_visual_explanation} (c) and (d) are for the severity grade prediction. The ground-truth labels are \textit{mild} and \textit{moderate}, respectively, and were both correctly predicted. We can see the artery runs over the vein, deforming it. In contrast to the examples in (a) and (b), the model looks at the crossing points and searches for possible shape deformations and their extent.
\section{Conclusion}
The paper presents a method to automatically predict the arteriolosclerosis severity from retinal images.
To improve the accuracy for ambiguous and unbalanced samples, we design the multi-diagnosis team network (MDTNet), which can jointly consider diagnostic cues from multiple sub-models, without tuning the hyperparameter for the focal loss. Experimental results show the superiority of our method, achieving over 91\% accuracy.
\section{Acknowledgements}
This work was supported by Council for Science, Technology and Innovation (CSTI), cross-ministerial Strategic Innovation Promotion Program (SIP), ``Innovative AI Hospital System'' (Funding Agency: National Institute of Biomedical Innovation, Health and Nutrition (NIBIOHN)). This work was also supported by JSPS KAKENHI Grant Number 19K10662 and 20K23343.
\section{Ethics Approval}
This study was performed in accordance with the World Medical Association Declaration of Helsinki, and the study protocol was approved by the institutional review board of the Osaka University Hospital.
\section{Conflict of Interest}
Liangzhi Li, Manisha Verma, Bowen Wang, Yuta Nakashima, Ryo Kawasaki, and Hajime Nagahara have no conflicts of interest in association with this study.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{L}{ow}-density parity-check (LDPC) codes were first introduced by Gallager in the 1960s \cite{Gallager}, together with a class of iterative decoding algorithms. Later, in the 1990s, the rediscovery of LDPC codes by MacKay and Neal \cite{MacKayNeal1995}, \cite{MacKayNeal1997} launched a period of intensive research on these codes and their decoding algorithms. Significant attention was paid to iterative message-passing (MP) decoders, particularly belief propagation (BP) \cite{BP} as embodied by the sum-product algorithm (SPA) \cite{SPA}.
Despite the unparalleled success of iterative decoding in practice, it is quite difficult to analyze the performance of such iterative MP decoders due to the heuristic nature of their message update rules and their local nature. An alternative approach, linear programming (LP) decoding, was introduced by Feldman et al. \cite{Feldman_LP} as an approximation to maximum-likelihood (ML) decoding.
Many theoretical and empirical observations suggest similarities between the performance of LP and MP decoding methods. For example, graph-cover decoding can be used as a theoretical tool to show the connection between LP decoding and iterative MP decoding~\cite{Vontobel_GC}.
However, there are some key differences that distinguish LP decoding from iterative MP decoding. One of these differences is that the LP decoder has the \emph{ML certificate property}, i.e., a failure to find an ML codeword is always detectable. When it fails to find an ML codeword, the LP decoder outputs a non-integer solution, commonly called a \emph{pseudocodeword}. Another difference is that while adding redundant parity checks satisfied by all the codewords can only improve LP decoding, adding redundant parity checks may have a negative effect on MP decoding, especially in the waterfall region, due to the creation of short cycles in the Tanner graph. This property of LP decoding allows improvements by tightening the LP relaxation, i.e., reducing the feasible space of the LP problem by adding more linear constraints from redundant parity checks.
In the original formulation of LP decoding proposed by Feldman \emph{et al.}, the number of constraints in the LP problem is linear in the block-length but exponential in the maximum check node degree, and the authors also argued that the number of useful constraints could be reduced to polynomial in code length. The computational complexity of the original LP formulation therefore can be prohibitively high, motivating the design of computationally simplified decoding algorithms that can achieve the same error-rate performance with a smaller number of constraints.
For example, efficient polynomial-time algorithms can be used for solving the original LP formulation \cite{FeldmanThesis}. An alternative LP formulation whose size is linear in the check node degree and code length can also be obtained by changing the graphical representation of the code \cite{YangFeldman,Dendro}; namely, all check nodes of high degree are replaced by dendro-subgraphs (trees) with an appropriate number of auxiliary degree-3 check nodes and degree-2 variable nodes. Several other low-complexity LP decoders were also introduced in \cite{lowcpxLP}, suggesting that LP solvers with complexity similar to the min-sum algorithm and the sum-product algorithm are feasible.
Another approach is to add linear constraints in an adaptive and selective way during the LP formulation~\cite{Taghavi_ALP}. Such an adaptive linear programming (ALP) decoding approach also allows the adaptive incorporation of linear constraints generated by redundant parity checks (RPC) into the LP problem, making it possible to reduce the feasible space and improve the system performance. A linear inequality derived from an RPC that eliminates a pseudocodeword solution is referred to as a ``cut.''
An algorithm proposed in \cite{Taghavi_ALP} uses a random walk on a subset of the code factor graph to find these RPC cuts. However, the random nature of this algorithm limits its efficiency. In fact, experiments show that the average number of random trials required to find an RPC cut grows exponentially with the length of the code.
Recently, the authors of \cite{SepAlg} proposed a separation algorithm that derives Gomory cuts from the IP formulation of the decoding problem and finds cuts from RPCs which are generated by applying Gaussian elimination to the original parity-check matrix.
In~\cite{cutting-plane}, a cutting-plane method was proposed to improve the fractional distance of a given binary parity-check matrix -- the minimum weight of nonzero vertices of the fundamental polytope -- by adding redundant rows obtained by converting the parity-check matrix into row echelon form after a certain column permutation. However, we have observed that the RPCs obtained by the approach in~\cite{cutting-plane} are not able to produce enough cuts to improve the error-rate performance relative to the separation algorithm when they are used in conjunction with either ALP decoding or the separation algorithm. A detailed survey on mathematical programming approaches for decoding binary linear codes can be found in \cite{LPsurvey}.
In this paper, we greatly improve the error-correcting performance of LP decoding by designing algorithms that can efficiently generate cut-inducing RPCs and find possible cuts from such RPCs. First, we derive a new necessary condition and a new sufficient condition for a parity-check to provide a cut at a given pseudocodeword. These conditions naturally suggest an efficient algorithm that can be used to find, for a given pseudocodeword solution to an LP problem, the unique cut (if it exists) among the parity inequalities associated with a parity check. This algorithm was previously introduced by Taghavi \emph{et al.}~\cite[Algorithm~2]{SparseLP} and, independently and in a slightly different form, by Wadayama~\cite[Fig.~6]{Wadayama}.
The conditions also serve as the motivation for a new, more efficient adaptive cut-inducing RPC generation algorithm that identifies useful RPCs by performing specific elementary row operations on the original parity-check matrix of the binary linear code. By adding the corresponding linear constraints into the LP problem, we can significantly improve the error-rate performance of the LP decoder, even approaching the ML decoder performance in the high-SNR region for some codes. Finally, we modify the ALP decoder to make it more efficient when being combined with the new cut-generating algorithm. Simulation results demonstrate that the proposed decoding algorithms significantly improve the error-rate performance of the original LP decoder.
The remainder of the paper is organized as follows. In Section~\ref{sec:LPD}, we review the original formulation of LP decoding and several adaptive LP decoding algorithms. Section~\ref{sec:ECSA} presents the new necessary condition and new sufficient condition for a parity-check to induce a cut, as well as their connection to the efficient cut-search algorithm. In Section~\ref{sec:ACGA}, we describe our proposed algorithm for finding RPC-based cuts. Section~\ref{sec:NR} presents our simulation results, and Section~\ref{sec:concl} concludes the paper.
\section{Linear Programming Decoding and Adaptive Variants}
\label{sec:LPD}
\subsection{Linear Programming (LP) Relaxation of Maximum Likelihood (ML) Decoding}
\label{subsec:LPR}
Consider a binary linear block code $\mathcal{C}$ of length $n$ and a corresponding $m\times n$ parity-check matrix $\mathbf{H}$. A codeword $\mathbf{y}\in\mathcal{C}$ is transmitted across a memoryless binary-input output-symmetric channel, resulting in a received vector $\mathbf{r}$. Assuming that the transmitted codewords are equiprobable, the ML decoder finds the solution to the following optimization problem (see, e.g., \cite{Taghavi_ALP})
\begin{equation}
\label{MLopt}
\begin{array}{l}
{\text{minimize}\quad}{{\boldsymbol{\gamma}}^T}{\mathbf{u}} \\
{\text{subject to}\quad}{\mathbf{u}} \in {\mathcal C} \\
\end{array}
\end{equation}
where $u_i\in\{0,1\}$, and $\boldsymbol{\gamma}$ is the vector of log-likelihood ratios (LLR) defined as
\begin{equation}
\label{LLR}
{\gamma_i} = \log \left( {\frac{\Pr \left( {\left. {R_i = r_i} \right|{u_i} = 0} \right)}{\Pr \left( {\left. {R_i = r_i} \right|{u_i} = 1} \right)}} \right).
\end{equation}
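For concreteness, the following minimal Python sketch computes the LLR vector for BPSK transmission over the AWGN channel; the mapping $0\mapsto +1$, $1\mapsto -1$ and the noise variance $\sigma^2$ are assumptions of the sketch, under which \eqref{LLR} reduces to $\gamma_i = 2r_i/\sigma^2$.
\begin{verbatim}
def llr_biawgn(r, sigma):
    # LLR for BPSK over the AWGN channel, assuming the mapping
    # 0 -> +1, 1 -> -1 and noise variance sigma^2; substituting
    # the Gaussian densities into (LLR) gives gamma_i = 2*r_i/sigma^2.
    return [2.0 * ri / sigma ** 2 for ri in r]
\end{verbatim}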
Since the ML decoding problem \eqref{MLopt} is an integer programming (IP) problem, it is desirable to replace its integrality constraints with a set of linear constraints, transforming the IP problem into a more readily solved LP problem. The ideal feasible space of the corresponding LP problem would be the \emph{codeword polytope}, i.e., the convex hull of all the codewords in $\mathcal{C}$. With this choice, unless the cost vector of the LP decoding problem is orthogonal to a face of the polytope, the optimal solution is a unique vertex of the codeword polytope; since every vertex is a codeword, this solution coincides with the output of the ML decoder. When the LP solution is not unique, there is still at least one optimal integral vertex corresponding to an ML codeword. However, the number of linear constraints typically needed to represent the codeword polytope increases exponentially with the code length, which makes such a relaxation impractical.
As an approximation to ML decoding, Feldman \emph{et al.} \cite{Feldman_LP,FeldmanThesis} relaxed the codeword polytope to a polytope now known as \emph{fundamental polytope} \cite{Vontobel_GC}, denoted as $\mathcal{P}(\mathbf{H})$, which depends on the parity-check matrix $\mathbf H$.
\newtheorem{def1}{Definition}
\begin{def1}[Fundamental polytope \cite{Vontobel_GC}]
\label{LP}
Let us define
\begin{equation}
\label{localcodeword}
\mathcal{C}_j\triangleq\left\{ {{\mathbf{x}} \in \mathbb{F}^n_2|\langle\mathbf{x},\mathbf{h}_j\rangle=0~(\text{in }\mathbb{F}_2)} \right\}
\end{equation}
where ${\mathbf h}_j$ is the $j$th row of the parity-check matrix $\mathbf H$ and $1\leq j\leq m$. Thus, $\mathcal{C}_j$ is the set of all binary vectors satisfying the $j$th parity-check constraint. We denote by $\text{conv}(\mathcal{C}_j)$ the convex hull of $\mathcal{C}_j$ in $\mathbb{R}^n$, which consists of all possible real convex combinations of the points in $\mathcal{C}_j$, now regarded as points in $\mathbb{R}^n$. The fundamental polytope $\mathcal{P}(\mathbf{H})$ of the parity-check matrix $\mathbf H$ is defined to be the set
\begin{equation}
\label{fp}
{\mathcal{P}}\left( {\mathbf{H}} \right) = \bigcap\limits_{j = 1}^m {{\text{conv}}\left( {{\mathcal{C}}_j} \right)}.
\end{equation}
\end{def1}
Therefore, LP decoding can be written as the following optimization problem:
\begin{equation}
\label{LP_fp}
\begin{array}{l}
{\text{minimize}\quad}{\boldsymbol{\gamma}^T}{\mathbf{u}} \\
{\text{subject to}\quad}{\mathbf{u}} \in \mathcal{P}(\mathbf{H}). \\
\end{array}
\end{equation}
The solution of the above LP problem corresponds to a vertex of the fundamental polytope that minimizes the cost function. Since the fundamental polytope has both integral and nonintegral vertices, with the integral vertices corresponding exactly to the codewords of $\mathcal C$ \cite{Feldman_LP,Vontobel_GC}, if the LP solver outputs an integral solution, it must be a valid codeword and is guaranteed to be an ML solution, which is called the \emph{ML certificate property}. The nonintegral solutions are called pseudocodewords. Since the fundamental polytope is a function of the parity-check matrix $\mathbf H$ used to represent the code $\mathcal C$, different parity-check matrices for $\mathcal C$ may have different fundamental polytopes. Therefore, a given code has many possible LP-based relaxations, and some may be better than others when used for LP decoding.
The fundamental polytope can also be described by a set of linear inequalities, obtained as follows \cite{Feldman_LP}.
First, any point $\mathbf{u}$ in the fundamental polytope must satisfy the box constraints $0\leq u_i\leq 1$, for $i=1,\dots,n$.
Then, let $\mathcal{N}(j)\subseteq\{1,2,\ldots,n\}$ be the set of neighboring variable nodes of the check node $j$ in the Tanner graph, that is, $\mathcal{N}(j)=\{i:H_{j,i}=1\}$ where $H_{j,i}$ is the element in the $j$th row and $i$th column of the parity-check matrix, $\mathbf{H}$.
For each row $j=1,\dots,m$ of the parity-check matrix, corresponding to a check node in the associated Tanner graph, the linear inequalities used to form the fundamental polytope $\mathcal{P}(\mathbf{H})$ are given by
\begin{equation}
\label{PI1}
\sum\limits_{i \in \mathcal{V}} \left(1-u_i\right) + \sum\limits_{i \in \mathcal{N}\left( j \right)\backslash \mathcal{V}} {{u_i}} \geq 1
\text{,\quad for all \;} \mathcal{V}\subseteq \mathcal{N}(j){\text{, with }}\left| \mathcal{V} \right|{\text{ odd}}
\end{equation}
where for a set $\mathcal{X}$, $|\mathcal{X}|$ denotes its cardinality. It is easy to see that \eqref{PI1} is equivalent to
\begin{equation}
\label{PI2}
\sum\limits_{i \in \mathcal{V}} {{u_i}} - \sum\limits_{i \in \mathcal{N}\left( j \right)\backslash \mathcal{V}} {{u_i}} \leq \left| \mathcal{V} \right| - 1
\text{,\quad for all \;} \mathcal{V}\subseteq \mathcal{N}(j){\text{, with }}\left| \mathcal{V} \right|{\text{ odd.}}
\end{equation}
Note that, for each check node $j$, the corresponding inequalities in \eqref{PI1} or \eqref{PI2} and the linear box constraints exactly describe the convex hull of the set $\mathcal{C}_j$.
The linear constraints in \eqref{PI1} (and therefore also \eqref{PI2}) are referred to as \emph{parity inequalities}, which are also known as \emph{forbidden set inequalities} \cite{Feldman_LP}. It can be easily verified that these linear constraints are equivalent to the original parity-check constraints when each $u_i$ takes on binary values only.
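This equivalence can be checked mechanically. The short Python sketch below (our illustration, not code from the cited works) enumerates the inequalities \eqref{PI1} for a single check node and verifies that a binary vector satisfies all of them exactly when it satisfies the corresponding parity check.
\begin{verbatim}
from itertools import combinations, product

def odd_subsets(Nj):
    # All odd-cardinality subsets V of the neighborhood N(j).
    for r in range(1, len(Nj) + 1, 2):
        yield from combinations(Nj, r)

def satisfies_parity_inequalities(u, Nj):
    # True iff u meets every inequality (PI1) of check node j.
    return all(
        sum(1 - u[i] for i in V) + sum(u[i] for i in set(Nj) - set(V)) >= 1
        for V in odd_subsets(Nj)
    )

# For binary u, (PI1) holds for all odd-sized V iff the parity check holds.
Nj = [0, 1, 2]
for u in product([0, 1], repeat=3):
    assert satisfies_parity_inequalities(u, Nj) == (sum(u) % 2 == 0)
\end{verbatim}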
\newtheorem{prop1}{Proposition}
\begin{prop1}[Theorem 4 in \cite{Feldman_LP}]
\label{prop1}
The parity inequalities of the form \eqref{PI1} derived from all rows of the parity-check matrix $\mathbf{H}$ and the box constraints completely describe the fundamental polytope $\mathcal{P}(\mathbf{H})$.
\end{prop1}
With this, LP decoding can also be formulated as follows
\begin{equation}
\label{origLP}
\begin{array}{l}
{\text{minimize}\quad}{{\boldsymbol{\gamma}}^T}{\mathbf{u}} \\
{\text{subject to}\quad}0\leq u_i\leq 1, {\; \text{for all } \; } i;\\
\text{\quad\qquad\quad~~}\sum\limits_{i \in\mathcal{V}} \left(1-u_i\right) + \sum\limits_{i \in \mathcal{N}\left( j \right)\backslash \mathcal{V}} {{u_i}} \geq 1\\
\text{\qquad\qquad\quad for all \;}j, \mathcal{V}\subseteq \mathcal{N}(j){\text{, with }}\left| \mathcal{V} \right|{\text{ odd.}}
\end{array}
\end{equation}
In the following parts of this paper, we refer to the above formulation of LP decoding problem based on the fundamental polytope of the original parity-check matrix as the \emph{original} LP decoding.
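To illustrate the size of this formulation, the following Python sketch (a hypothetical helper of our own) assembles the parity inequalities of \eqref{origLP} in the matrix form $\mathbf{A}\mathbf{u}\leq\mathbf{b}$, using the equivalent inequalities \eqref{PI2}; a check node of degree $d$ contributes $2^{d-1}$ rows, which makes the exponential blow-up explicit.
\begin{verbatim}
import numpy as np
from itertools import combinations

def build_original_lp(H_rows, n):
    # H_rows: list of index sets N(j), one per check node.
    # Returns A, b such that (origLP) reads: minimize gamma^T u
    # subject to A u <= b and 0 <= u <= 1 (form (PI2)).
    A, b = [], []
    for Nj in H_rows:
        for r in range(1, len(Nj) + 1, 2):      # odd |V| only
            for V in combinations(Nj, r):
                row = np.zeros(n)
                row[list(V)] = 1.0
                row[list(set(Nj) - set(V))] = -1.0
                A.append(row)
                b.append(len(V) - 1.0)          # |V| - 1
    return np.array(A), np.array(b)
\end{verbatim}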
\subsection{Adaptive Linear Programming (ALP) Decoding}
\label{subsec:alp}
In the original formulation of LP decoding presented in \cite{Feldman_LP}, every check node $j$ generates $2^{|\mathcal{N}(j)|-1}$ parity inequalities that are used as linear constraints in the LP problem described in \eqref{origLP}.
The total number of constraints, and hence the complexity, of the original LP decoding problem grows exponentially with the maximum check node degree. So, even for binary linear codes with moderate check node degrees, the number of constraints in the original LP decoding problem can be prohibitively large. In the literature, several approaches to reducing the complexity of the original LP formulation have been described \cite{FeldmanThesis,YangFeldman,Dendro,lowcpxLP,Taghavi_ALP}. We will use adaptive linear programming (ALP) decoding \cite{Taghavi_ALP} as the foundation of the improved LP decoding algorithms presented in later sections. The ALP decoder exploits the structure of the LP decoding problem, reflected in the statement of the following lemma.
\newtheorem{lemma1}{Lemma}
\begin{lemma1}[Theorem 1 in \cite{Taghavi_ALP}]
\label{lemma1}
If at any given point $\mathbf{u}\in[0,1]^n$, one of the parity inequalities introduced by a check node $j$ is violated, the rest of the parity inequalities from this check node are satisfied with strict inequality.
\end{lemma1}
\newtheorem{def2}[def1]{Definition}
\begin{def2}
\label{def_cut}
Given a parity-check node $j$, a set $\mathcal{V} \subseteq \mathcal{N}(j)$ of odd cardinality, and a vector $\mathbf{u}\in[0,1]^n$ such that the corresponding parity inequality of the form \eqref{PI1} or \eqref{PI2} does not hold, we say that the constraint is \emph{violated} or, more succinctly, a \emph{cut} at $\mathbf{u}$.
\footnote{In the terminology of \cite{LPsurvey}, if (\ref{PI2}) does not hold for a pseudocodeword $\mathbf{u}$, then the vector $(\mathbf{r},t) \in \mathbb{R}^n \times \mathbb{R}$, where $r_i=1$ for all $i\in \mathcal{V},$ $r_i=-1$ for
all $i\in \mathcal{N}(j)\backslash\mathcal{V},$ $r_i=0$ otherwise, and $t=|\mathcal{V}|-1$, is a \emph{valid cut}, separating $\mathbf{u}$ from the codeword polytope.}
\end{def2}
In \cite{Taghavi_ALP}, an efficient algorithm for finding cuts at a vector $\mathbf{u}\in[0,1]^n$ was presented. It relies on the observation that violation of a parity inequality \eqref{PI2} at $\mathbf{u}$ implies that
\begin{equation}
\label{cut1}
|\mathcal{V}|-1<\sum\limits_{i \in \mathcal{V}} {{u_i}}\leq |\mathcal{V}|
\end{equation}
and
\begin{equation}
\label{cut2}
0\leq \sum\limits_{i \in \mathcal{N}\left( j \right)\backslash \mathcal{V}} {{u_i}} < u_v, {\; \text{for all \;}} v\in \mathcal{V},
\end{equation}
where $\mathcal{V}$ is an odd-sized subset of $\mathcal{N}(j)$.
Given a parity check $j$, the algorithm first sorts the coordinates of $\mathbf{u}$ indexed by $\mathcal{N}(j)$ into non-increasing order, i.e., $u_{j_1}\geq\dots\geq u_{j_{|\mathcal{N}(j)|}}$, where $j_i\in\mathcal{N}(j)$. It then successively considers index sets of odd cardinality having the form $\mathcal{V}=\{j_1,\dots,j_{2k+1}\}\subseteq \mathcal{N}(j)$, increasing the size of $\mathcal{V}$ by two at each step, until a cut (if one exists) is found. This algorithm can find a cut among the constraints corresponding to a check node $j$ by examining at most $|\mathcal{N}(j)|/2$ inequalities, rather than exhaustively checking all $2^{|\mathcal{N}(j)|-1}$ inequalities in the original formulation of LP decoding.
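A Python transcription of this sorted search is given below as a sketch; it returns the index set $\mathcal{V}$ of a violated inequality \eqref{PI2}, or an empty set when no cut exists at this check node.
\begin{verbatim}
def alp_cut_search(Nj, u):
    # Sorted cut search of Taghavi et al.: scan odd-sized prefixes of
    # N(j), ordered by non-increasing u_i, until a cut is found.
    order = sorted(Nj, key=lambda i: u[i], reverse=True)
    total = sum(u[i] for i in order)
    prefix = 0.0
    for k, i in enumerate(order):
        prefix += u[i]
        if k % 2 == 0:  # |V| = k + 1 is odd
            # (PI2) is violated iff sum_V u - sum_rest u > |V| - 1 = k
            if prefix - (total - prefix) > k:
                return set(order[:k + 1])
    return set()
\end{verbatim}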
The ALP decoding algorithm starts by solving the LP problem with the same objective function as \eqref{MLopt}, but with only the following constraints
\begin{equation}
\label{init_cons}
\left\{ \begin{gathered}
0\leq u_i \quad \text{if}\quad \gamma_i\geq0\hfill\\
u_i \leq 1\quad \text{if}\quad \gamma_i<0.\hfill \\
\end{gathered} \right.
\end{equation}
The solution of this initial LP problem can be obtained simply by making a hard decision on the components of the received vector. Taking this point as its starting solution, the ALP decoding algorithm searches every check node for cuts, adds all the cuts found during the search as constraints to the LP problem, and solves the enlarged problem again. This procedure is repeated until an optimal integer solution is generated or no more cuts can be found (see \cite{Taghavi_ALP} for more details). Adaptive LP decoding has exactly the same error-correcting performance as the original LP decoding.
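The following sketch outlines the ALP loop, reusing \texttt{alp\_cut\_search} from the previous block and SciPy's \texttt{linprog} as a stand-in LP solver; for simplicity, the one-sided initial constraints \eqref{init_cons} are replaced by the equivalent $[0,1]$ box, whose optimum is likewise the hard decision.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def alp_decode(H_rows, gamma, max_iter=100):
    # Sketch of ALP decoding: solve the LP, search all checks for
    # cuts, add them as constraints, repeat until no cut is found.
    n = len(gamma)
    A, b = [], []
    for _ in range(max_iter):
        res = linprog(gamma,
                      A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=[(0.0, 1.0)] * n, method="highs")
        u = res.x
        cuts = [(Nj, V) for Nj in H_rows
                if (V := alp_cut_search(Nj, u))]
        if not cuts:
            return u  # integral -> ML codeword; else a pseudocodeword
        for Nj, V in cuts:  # add each cut in the form (PI2)
            row = np.zeros(n)
            row[list(V)] = 1.0
            row[list(set(Nj) - V)] = -1.0
            A.append(row)
            b.append(len(V) - 1.0)
    return u
\end{verbatim}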
\section{Cut Conditions}
\label{sec:ECSA}
In this section, we derive a necessary condition and a sufficient condition for a parity inequality to be a cut at $\mathbf{u} \in [0,1]^n$. We also show their connection to the
efficient cut-search algorithm proposed by Taghavi \emph{et al.}~\cite[Algorithm~2]{SparseLP} and Wadayama~\cite[Fig.~6]{Wadayama}. This algorithm is more efficient than the search technique from \cite{Taghavi_ALP} that was mentioned in Section~\ref{sec:LPD}.
Consider the original parity inequalities in \eqref{PI1} given by Feldman \emph{et al.} in \cite{Feldman_LP}. If a parity inequality derived from check node $j$ induces a cut at $\mathbf u$, the cut can be written as
\begin{equation}
\label{CUT}
\sum\limits_{i \in \mathcal{V}} \left(1-u_i\right) + \sum\limits_{i \in \mathcal{N}(j)\backslash \mathcal{V}} {{u_i}} < 1
,\quad \text{ for some } \mathcal{V}\subseteq \mathcal{N}(j){\text{ with }} |\mathcal{V}| {\text{ odd.}}
\end{equation}
From \eqref{CUT} and Lemma \ref{lemma1}, we can derive the following necessary condition for a parity-check constraint to induce a cut.
\newtheorem{thm1}{Theorem}
\begin{thm1}
\label{thm1}
Given a nonintegral vector $\mathbf{u}$ and a parity check $j$, let $\mathcal{S}=\{i\in\mathcal{N}(j)|0<u_i<1\}$ be the set of nonintegral neighbors of $j$ in the Tanner graph, and let $\mathcal{T}=\{i\in\mathcal{N}(j)|u_i>\frac{1}{2}\}$. A necessary condition for parity check $j$ to induce a cut at $\mathbf{u}$ is
\begin{equation}
\label{CUT_NC}
\sum\limits_{i \in \mathcal{T}} \left(1-u_i\right) + \sum\limits_{i \in \mathcal{N}(j)\backslash \mathcal{T}} {{u_i}} < 1.
\end{equation}
This is equivalent to
\begin{equation}
\label{CUT_NCe}
\sum\limits_{i \in \mathcal{S}} \left|\frac{1}{2}-u_i\right| > \frac{1}{2}\cdot|\mathcal{S}|-1
\end{equation}
where, for $x\in\mathbb{R}$, $|x|$ denotes the absolute value.
\end{thm1}
\begin{IEEEproof}
For a given vector $\mathbf{u}$ and a subset $\mathcal{X}\subseteq\mathcal{N}(j)$, define the function
\begin{equation}\notag
\label{gX}
g\left(\mathcal{X}\right)=\sum\limits_{i \in \mathcal{X}} \left(1-u_i\right) + \sum\limits_{i \in \mathcal{N}\left( j \right)\backslash \mathcal{X}} u_i.
\end{equation}
If parity-check $j$ induces a cut at $\mathbf{u}$, there must be a set
$\mathcal{V}_{\mathrm{cut}}\subseteq\mathcal{N}(j)$ of odd cardinality for which \eqref{CUT} holds, i.e., $g\left(\mathcal{V}_{\mathrm{cut}}\right)<1$. Now, it is easy to see that the set $\mathcal{T}$ minimizes the function $g\left(\mathcal{X}\right)$ over all subsets $\mathcal{X}\subseteq\mathcal{N}(j)$, from which it follows that $g\left(\mathcal{T}\right)\leq g\left(\mathcal{V}_{\mathrm{cut}}\right)<1$. Therefore, inequality \eqref{CUT_NC} must hold in order for parity check $j$ to induce a cut.
For $\frac{1}{2}\leq u_i\leq 1$, we have
\begin{equation}\label{uil}\notag
\frac{1}{2}-\left|\frac{1}{2}-u_i\right| = \frac{1}{2}-\left(u_i-\frac{1}{2}\right) = 1-u_i,
\end{equation}
and for $0\leq u_i\leq \frac{1}{2}$, we have
\begin{equation}\label{uis}\notag
\frac{1}{2}-\left|\frac{1}{2}-u_i\right| = \frac{1}{2}-\left(\frac{1}{2}-u_i\right) = u_i.
\end{equation}
Hence, \eqref{CUT_NC} can be rewritten as
\begin{equation}\notag
\sum\limits_{i \in \mathcal{S}} \left(\frac{1}{2}-\left|\frac{1}{2}-u_i\right|\right)<1
\end{equation}
or equivalently,
\begin{equation}\notag
\frac{1}{2}\cdot|\mathcal{S}| - \sum\limits_{i \in \mathcal{S}} \left|\frac{1}{2}-u_i\right|<1\nonumber
\end{equation}
which is exactly inequality \eqref{CUT_NCe}.
\end{IEEEproof}
\newtheorem{thm1rm}{Remark}
\begin{thm1rm}
\label{thm1rm}
Theorem~\ref{thm1} shows that, to determine whether a parity-check node could provide a cut at a pseudocodeword $\mathbf{u}$, we only need to examine its fractional neighbors.
\end{thm1rm}
Reasoning similar to that used in the proof of Theorem~\ref{thm1} yields a sufficient condition for a parity-check node to induce a cut at $\mathbf{u}$.
\newtheorem{thm2}[thm1]{Theorem}
\begin{thm2}
\label{thm2}
Given a nonintegral vector $\mathbf{u}$ and a parity check $j$, let $\mathcal{S}=\{i\in\mathcal{N}(j)|0<u_i<1\}$ and $\mathcal{T}=\{i\in\mathcal{N}(j)|u_i>\frac{1}{2}\}$. If the inequality
\begin{equation}
\label{CUT_SC}
\sum\limits_{i \in \mathcal{T}} \left(1-u_i\right) + \sum\limits_{i \in\mathcal{N}(j) \backslash \mathcal{T}} {{u_i}} + 2\cdot\mathop {\min}\limits_{i\in\mathcal{S}} \left|\frac{1}{2}-u_i\right| < 1
\end{equation}
holds, there must be a violated parity inequality derived from parity check $j$. This sufficient condition can be written as
\begin{equation}
\label{CUT_SCe}
\sum\limits_{i \in \mathcal{S}} \left|\frac{1}{2}-u_i\right| - 2\cdot\mathop {\min}\limits_{i\in\mathcal{S}} \left|\frac{1}{2}-u_i\right| > \frac{1}{2}\cdot|\mathcal{S}|-1.
\end{equation}
\end{thm2}
\begin{IEEEproof}
Lemma \ref{lemma1} implies that, if parity check $j$ gives a cut at $\mathbf{u}$, then there is at most one odd-sized set $\mathcal{V}\subseteq \mathcal{N}(j)$ that satisfies \eqref{CUT}.
From the proof of Theorem \ref{thm1}, we have $g\left(\mathcal{T}\right)\leq g\left(\mathcal{X}\right)$ $\text{for all \;} \mathcal{X}\subseteq\mathcal{N}(j)$.
If $\left|\mathcal{T}\right|$ is even, we need to find one element $i^*\in\mathcal{N}(j)$ such that inserting it into or removing it from $\mathcal{T}$ would result in the minimum increment to the value of $g\left(\mathcal{T}\right)$. Obviously, $i^* = \arg\mathop {\min}\limits_{i\in\mathcal{N}(j)} \left|\frac{1}{2}-u_i\right|$, and the increment is $2\cdot\left|\frac{1}{2}-u_{i^*}\right|$. If more than one $i$ minimizes the expression $\left|\frac{1}{2}-u_i\right| $, we choose one arbitrarily as $i^*$. Hence, setting
\begin{equation}\notag
\mathcal{V}=\left\{ \begin{array}{ll}
\mathcal{T}\backslash\{i^*\}, & \text{if } i^*\in\mathcal{T} \\
\mathcal{T}\cup\{i^*\}, & \text{if } i^*\notin\mathcal{T}
\end{array} \right.
\end{equation}
we have $g\left(\mathcal{V}\right)= g\left(\mathcal{T}\right) + 2\cdot\left|\frac{1}{2}-u_{i^*}\right| \geq g\left(\mathcal{T}\right)$.
If inequality \eqref{CUT_SC} holds, then $g\left(\mathcal{T}\right)\leq g\left(\mathcal{V}\right)<1$. Since either $|\mathcal{T}|$ or $|\mathcal{V}|$ is odd, \eqref{CUT_SC} is a sufficient condition for parity-check constraint $j$ to induce a cut at $\mathbf{u}$. Arguing as in the latter part of the proof of Theorem~\ref{thm1}, it can be shown that \eqref{CUT_SC} is equivalent to \eqref{CUT_SCe}.
\end{IEEEproof}
Theorem \ref{thm1} and Theorem \ref{thm2} provide a necessary condition and a sufficient condition, respectively, for a parity-check node to produce a cut at any given vector $\mathbf{u}$. It is worth pointing out that \eqref{CUT_NC} becomes a necessary and sufficient condition for a parity check to produce a cut when $|\mathcal{T}|$ is odd, and \eqref{CUT_SC} becomes a necessary and sufficient condition when $|\mathcal{T}|$ is even.
Together, they suggest a highly efficient technique for finding cuts, the Cut-Search Algorithm (CSA) described in Algorithm~\ref{alg1}. If there is a violated parity inequality, the CSA returns the set $\mathcal{V}$ corresponding to the cut; otherwise, it returns an empty set.
As mentioned above, the CSA was used by Taghavi \emph{et al.}~\cite[Algorithm~2]{SparseLP} in conjunction with ALP decoding, and by Wadayama~\cite[Fig.~6]{Wadayama} as a feasibility check in the context of interior point decoding. In addition to providing another perspective on the CSA, the necessary condition and sufficient condition proved in Theorems 1 and 2, respectively, serve as the basis for a new adaptive approach to finding cut-inducing RPCs, as described in the next section.
\begin{algorithm}
\begin{doublespace}
\caption{Cut-Search Algorithm (CSA)}
\label{alg1}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE parity-check node $j$ and vector $\mathbf u$
\ENSURE variable node set $\mathcal{V}$
\STATE $\mathcal{V}\leftarrow \mathcal{T} = \{i\in\mathcal{N}(j)|u_i>\frac{1}{2}\}$ and $\mathcal{S}\leftarrow \{i\in\mathcal{N}(j)|0<u_i<1\}$
\IF{$|\mathcal{V}|$ is even}
\IF {$\mathcal{S}\neq\varnothing$}
\STATE $i^*\leftarrow \arg\mathop {\min}\limits_{i\in\mathcal{S}} \left|\frac{1}{2}-u_i\right| $
\ELSE
\STATE $i^*\leftarrow$ arbitrary $i\in\mathcal{N}(j)$
\ENDIF
\IF {$i^*\in\mathcal{V}$}
\STATE $\mathcal{V}\leftarrow \mathcal{V}\setminus\{i^*\}$
\ELSE
\STATE $\mathcal{V}\leftarrow \mathcal{V}\cup\{i^*\}$
\ENDIF
\ENDIF
\IF {$\sum\limits_{i \in \mathcal{V}} \left(1-u_i\right) + \sum\limits_{i \in \mathcal{N}(j)\backslash \mathcal{V}} {{u_i}} < 1$}
\STATE Found the violated parity inequality on parity-check node $j$
\ELSE
\STATE There is no violated parity inequality on parity-check node $j$
\STATE $\mathcal{V}\leftarrow\emptyset$
\ENDIF
\RETURN $\mathcal{V}$
\end{algorithmic}
\end{doublespace}
\end{algorithm}
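For reference, a direct Python transcription of Algorithm~\ref{alg1} is sketched below; the numerical tolerances discussed in Section~\ref{sec:NR} are omitted here for clarity.
\begin{verbatim}
def cut_search(Nj, u):
    # Cut-Search Algorithm (CSA) for one parity check j.
    # Nj: index set N(j); u: current LP solution in [0,1]^n.
    # Returns the set V of the violated parity inequality, or set().
    V = {i for i in Nj if u[i] > 0.5}            # start from T
    S = [i for i in Nj if 0.0 < u[i] < 1.0]      # fractional neighbors
    if len(V) % 2 == 0:
        pool = S if S else list(Nj)
        istar = min(pool, key=lambda i: abs(0.5 - u[i]))
        V ^= {istar}   # remove i* if present, insert it otherwise
    lhs = sum(1.0 - u[i] for i in V) + sum(u[i] for i in set(Nj) - V)
    return V if lhs < 1.0 else set()
\end{verbatim}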
\section{LP Decoding with Adaptive Cut-Generating Algorithm}
\label{sec:ACGA}
\subsection{Generating Redundant Parity Checks}
\label{subsec:RPC}
Although the addition of a redundant row to a parity-check matrix does not affect the $\mathbb{F}_2$-nullspace and, therefore, the linear code it defines, different parity-check matrix representations of a linear code may give different fundamental polytopes underlying the corresponding LP relaxation of the ML decoding problem. This fact inspires the use of cutting-plane techniques to improve the error-correcting performance of the original LP and ALP decoders. Specifically, when the LP decoder gives a nonintegral solution (i.e., a pseudocodeword), we try to find the RPCs that introduce cuts at that point, if such RPCs exist. The cuts obtained in this manner are called \emph{RPC cuts}. The effectiveness of this method depends on how closely the new relaxation approximates the ML decoding problem, as well as on the efficiency of the technique used to search for the cut-inducing RPCs.
An RPC can be obtained by modulo-2 addition of some of the rows of the original parity-check matrix, and this new check introduces a number of linear constraints that may give a cut. In \cite{Taghavi_ALP}, a random walk on a cycle within the subgraph defined by the nonintegral entries in a pseudocodeword served as the basis for a search for RPC cuts. However, there is no guarantee that this method will find a cut (if one exists) within a finite number of iterations. In fact, the average number of random trials needed to find an RPC cut grows exponentially with the code length.
The IP-based separation algorithm in \cite{SepAlg} performs Gaussian elimination on a submatrix comprising the columns of the original parity-check matrix that correspond to the nonintegral entries in a pseudocodeword in order to get redundant parity checks. In~\cite{cutting-plane}, the RPCs that potentially provide cutting planes are obtained by transforming a column-permuted version of the submatrix into row echelon form. The chosen permutation organizes the columns according to descending order of their associated nonintegral pseudocodeword entries, with the exception of the column corresponding to the largest nonintegral entry, which is placed in the rightmost position of the submatrix \cite[p.~1010]{cutting-plane}. This approach was motivated by the fact that a parity check $j$ provides a cut at a pseudocodeword if there exists a variable node in $\mathcal{N}(j)$ whose value is greater than the sum of the values of all of the other neighboring variable nodes \cite[Lemma 2]{cutting-plane}. However, when combined with ALP decoding, the resulting ``cutting-plane algorithm'' does not provide sufficiently many cuts to surpass the separation algorithm in error-rate performance.
Motivated by the new derivation of the CSA based on the conditions in Theorems \ref{thm1} and \ref{thm2}, we next propose a new algorithm for generating cut-inducing RPCs. When used with ALP decoding, the cuts have been found empirically to achieve near-ML decoding performance in the high-SNR region for several short-to-moderate length LDPC codes. However, application of these new techniques to codes with larger block lengths proved to be prohibitive computationally, indicating that further work is required to develop practical methods for enhanced LP decoding of longer codes.
Given a nonintegral solution of the LP problem, we can see from Theorems \ref{thm1} and \ref{thm2} that an RPC with a small number of nonintegral neighboring variable nodes may be more likely to satisfy the necessary condition for providing a cut at the pseudocodeword. Moreover, the nonintegral neighbors should have values either close to 0 or close to 1; in other words, they should be as far from $\frac{1}{2}$ as possible.
Let $\mathbf{p}=(p_1,p_2,\ldots,p_n)\in[0,1]^n$ be a pseudocodeword solution to LP decoding, with $a$ nonintegral positions, $b$ zeros, and $n-a-b$ ones. We first group entries of $\mathbf{p}$ according to whether their values are nonintegral, zero, or one. Then, we sort the nonintegral positions in ascending order according to the value of $\left|\frac{1}{2}-p_i\right|$ and define the permuted vector $\mathbf{p'}=\Pi(\mathbf{p})$ satisfying the following ordering
\begin{equation}
\label{Sortfrac}
\left|\frac{1}{2}-p'_1\right|\leq\dots\leq\left|\frac{1}{2}-p'_a\right|,
\end{equation}
\begin{equation}\notag
p'_{a+1}=\dots=p'_{a+b}=0,
\end{equation}
and%
\begin{equation}\notag
p'_{a+b+1}=\dots=p'_n=1.
\end{equation}
By applying the same permutation $\Pi$ to the columns of the original parity-check matrix $\mathbf{H}$, we get
\begin{equation}
\label{PiH}
\mathbf{H'} \triangleq \Pi(\mathbf{H}) = \left(\mathbf{H}^{(\mathrm{f})} |\mathbf{H}^{(0)}|\mathbf{H}^{(1)}\right)
\end{equation}
where $\mathbf{H}^{(\mathrm{f})}$, $\mathbf{H}^{(0)}$, and $\mathbf{H}^{(1)}$ consist of columns of $\mathbf H$ corresponding to positions of $\mathbf{p}'$ with nonintegral values, zeros, and ones, respectively.
The following familiar definition from matrix theory will be useful \cite[p. 10]{HornJohnson}.
\newtheorem{def3}[def1]{Definition}
\begin{def3}
\label{def_ref}
A matrix is in \emph{reduced row echelon form} if its nonzero rows (i.e., rows with at least one nonzero element) are above any all-zero rows, and the leading entry (i.e., the first nonzero entry from the left) of a nonzero row is the only nonzero entry in its column and is always strictly to the right of the leading entry of the row above it.
\end{def3}
By applying a suitable sequence of elementary row operations $\Phi$ (over $\mathbb{F}_2$) to $\mathbf{H'}$, we get
\begin{equation}
\label{PhiH}
\mathbf{\bar H} \triangleq \Phi(\mathbf{H'}) = \left(\mathbf{\bar H}^{(\mathrm{f})} |\mathbf{\bar H}^{(0)}|\mathbf{\bar H}^{(1)}\right),
\end{equation}
where $\mathbf{\bar H}^{(\mathrm{f})}$ is in reduced row echelon form. Applying the inverse permutation $\Pi^{-1}$ to columns of $\mathbf{\bar H}$, we get an equivalent parity-check matrix \begin{equation}
\label{tildeH}
\mathbf{\tilde H}=\Pi^{-1}(\mathbf{\bar H})
\end{equation}
whose rows are likely to be cut-inducing RPCs, for the reasons stated above.
Multiple nonintegral positions in the pseudocodeword $\mathbf{p}$ could have values at the same distance from $\frac{1}{2}$, i.e., $|\frac{1}{2}-p_i|=|\frac{1}{2}-p_j|$ for some $i\neq j$. In such a case, the ordering of the nonintegral positions in \eqref{Sortfrac} is not uniquely determined. Hence, the set of RPCs generated by operations \eqref{PiH}--\eqref{tildeH} may depend upon the particular ordering reflected in the permutation $\Pi$. Nevertheless, if the decoder uses a fixed, deterministic sorting rule such as, for example, a stable sorting algorithm, then the decoding error probability will be independent of the transmitted codeword.
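A Python sketch of the construction \eqref{PiH}--\eqref{tildeH} follows; it assumes $\mathbf{H}$ is given as a 0/1 NumPy array and restricts the $\mathbb{F}_2$ pivots to the fractional columns. Python's built-in sort is stable, so it provides one fixed, deterministic tie-breaking rule of the kind mentioned above.
\begin{verbatim}
import numpy as np

def generate_rpc_matrix(H, p, tol=1e-6):
    # Sort fractional columns by |1/2 - p_i| (ascending), append the
    # integral columns, bring the fractional part to reduced row
    # echelon form over GF(2), then undo the column permutation.
    n = H.shape[1]
    frac  = sorted([i for i in range(n) if tol < p[i] < 1 - tol],
                   key=lambda i: abs(0.5 - p[i]))
    zeros = [i for i in range(n) if p[i] <= tol]
    ones  = [i for i in range(n) if p[i] >= 1 - tol]
    perm = frac + zeros + ones
    Hp = (H[:, perm] % 2).astype(int)
    r = 0
    for c in range(len(frac)):           # pivots in fractional columns
        rows = np.nonzero(Hp[r:, c])[0]
        if rows.size == 0:
            continue
        piv = r + rows[0]
        Hp[[r, piv]] = Hp[[piv, r]]      # row swap
        for j in range(Hp.shape[0]):     # clear column c elsewhere
            if j != r and Hp[j, c]:
                Hp[j] ^= Hp[r]           # row addition over GF(2)
        r += 1
        if r == Hp.shape[0]:
            break
    inv = np.empty(n, dtype=int)
    inv[perm] = np.arange(n)             # inverse permutation
    return Hp[:, inv]
\end{verbatim}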
The next theorem describes a situation in which a row of $\mathbf{\tilde H}$ is guaranteed to provide a cut.
\newtheorem{thm3}[thm1]{Theorem}
\begin{thm3}
\label{thm3}
If there exists a weight-one row in submatrix $\mathbf{\bar H}^{(\mathrm{f})}$, the corresponding row of the equivalent parity-check matrix $\mathbf{\tilde H}$ is a cut-inducing RPC.
\end{thm3}
\begin{IEEEproof}
Given a pseudocodeword $\mathbf{p}$, suppose the $j$th row of submatrix $\mathbf{\bar H}^{(\mathrm{f})}$ has weight one and the corresponding nonintegral position in $\mathbf{p}$ is $p_i$. Since it is the only nonintegral position in $\mathcal{N}(j)$, the left-hand side of \eqref{CUT_SCe} is equal to $-\left|\frac{1}{2}-p_i\right|$. Since $0 < p_i < 1$, this is larger than $-\frac{1}{2}$, the right-hand side. Hence, according to Theorem \ref{thm2}, RPC $j$ satisfies the sufficient condition for providing a cut. In other words, there must be a violated parity inequality induced by RPC $j$.
\end{IEEEproof}
\newtheorem{thm3rm}[thm1rm]{Remark}
\begin{thm3rm}
\label{thm3rm}
Theorem~\ref{thm3} is equivalent to \cite[Theorem~3.3]{SepAlg}. The proof of the result shown here, though, is considerably simpler, thanks to the application of Theorem~\ref{thm2}.
\end{thm3rm}
Although Theorem~\ref{thm3} only ensures a cut for rows with weight one in submatrix $\mathbf{\bar H}^{(\mathrm{f})}$, rows in $\mathbf{\bar H}^{(\mathrm{f})}$ of weight larger than one may also provide RPC cuts. Hence, the CSA should be applied on every row of the redundant parity-check matrix $\mathbf{\tilde H}$ to search for all possible RPC cuts.
The approach of generating a redundant parity-check matrix $\mathbf{\tilde H}$ based on a given pseudocodeword and applying the CSA on each row of this matrix is called adaptive cut generation (ACG). Combining ACG with ALP decoding, we obtain the ACG-ALP decoding algorithm described in Algorithm~\ref{alg2}.
Beginning with the original parity-check matrix, the algorithm iteratively applies ALP decoding. When a point is reached when no further cuts can be produced from the original parity-check matrix, the ACG technique is invoked to see whether any RPC cuts can be generated. The ACG-ALP decoding iteration stops when no more cuts can be found either from the original parity-check matrix or in the form of redundant parity checks.
\begin{algorithm}
\begin{doublespace}
\caption{Adaptive Linear Programming with Adaptive Cut-Generation (ACG-ALP) Decoding Algorithm}
\label{alg2}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE cost vector $\boldsymbol{\gamma}$, original parity-check matrix $\mathbf{H}$
\ENSURE Optimal solution of current LP problem
\STATE Initialize the LP problem with the constraints in \eqref{init_cons}.
\STATE Solve the current LP problem, and get optimal solution $x^*$.
\STATE Apply {\bf Algorithm 1 (CSA)} on each row of $\mathbf{H}$.
\IF{No cut is found {\bf and} $x^*$ is nonintegral}
\STATE Construct $\mathbf{\tilde H}$ associated with $x^*$ according to \eqref{PiH}--\eqref{tildeH}.
\STATE Apply {\bf Algorithm 1 (CSA)} to each row of $\mathbf{\tilde H}$.
\ENDIF
\IF {No cut is found}
\STATE Terminate.
\ELSE
\STATE Add cuts that are found into the LP problem as constraints, and go to line 2.
\ENDIF
\end{algorithmic}
\end{doublespace}
\end{algorithm}
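The sketch below strings these pieces together into the ACG-ALP loop of Algorithm~\ref{alg2}, reusing \texttt{cut\_search} and \texttt{generate\_rpc\_matrix} from the earlier sketches; the helper names and the use of SciPy's \texttt{linprog} are our assumptions and do not describe the implementation evaluated in Section~\ref{sec:NR}.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def acg_alp_decode(H, gamma, max_iter=200, tol=1e-6):
    # H: 0/1 NumPy parity-check matrix; gamma: LLR cost vector.
    n = len(gamma)
    rows = [list(np.nonzero(h)[0]) for h in H]
    A, b = [], []
    for _ in range(max_iter):
        res = linprog(gamma, A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=[(0.0, 1.0)] * n, method="highs")
        u = res.x
        found = [(Nj, V) for Nj in rows if (V := cut_search(Nj, u))]
        if not found and any(tol < x < 1 - tol for x in u):
            Ht = generate_rpc_matrix(H, u, tol)   # adaptive RPCs
            rpc_rows = [list(np.nonzero(h)[0]) for h in Ht]
            found = [(Nj, V) for Nj in rpc_rows
                     if (V := cut_search(Nj, u))]
        if not found:
            return u        # valid codeword, or stuck pseudocodeword
        for Nj, V in found: # add the cuts in the form (PI2)
            row = np.zeros(n)
            row[list(V)] = 1.0
            row[list(set(Nj) - V)] = -1.0
            A.append(row)
            b.append(len(V) - 1.0)
    return u
\end{verbatim}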
\subsection{Reducing the Number of Constraints in the LP Problem}
\label{subsec:malp}
In ALP decoding, the number of constraints in the LP problem grows with the number of iterations, increasing the complexity of solving the LP problem. This issue is more severe for ACG-ALP decoding, since the algorithm generates additional RPC cuts and uses more iterations to successfully decode inputs on which the ALP decoder fails.
From Lemma~\ref{lemma1}, we know that a binary parity-check constraint can provide at most one cut. Hence, if a binary parity check gives a cut, all other linear inequalities introduced by this parity check in previous iterations can be removed from the LP problem. The implementation of this observation leads to a \emph{modified ALP (MALP)} decoder referred to as the MALP-A decoder~\cite{SparseLP}.
This decoder improves the efficiency of ALP decoding, where only cuts associated with the original parity-check matrix are used. However, with ACG-ALP decoding, different RPCs may be generated adaptively in every iteration and most of them give only one cut throughout the sequence of decoding iterations. As a result, when MALP-A decoding is combined with the ACG technique, only a small number of constraints are removed from the LP problem, and the decoding complexity is only slightly reduced.
\newtheorem{def4}[def1]{Definition}
\begin{def4}
\label{def_act}
A linear inequality constraint of the form $\mathbf{ax}\geq b$ is called \emph{active} at point $\mathbf{x}^*$ if it holds with equality, i.e., $\mathbf{ax}^*= b$, and is called \emph{inactive} otherwise.
\end{def4}
For an LP problem with a set of linear inequality constraints, the optimal solution $\mathbf{x}^*\in[0,1]^n$ is a vertex of the polytope formed by the hyperplanes corresponding to all active constraints. In other words, if we set up an LP problem with only those active constraints, the optimal solution remains the same. Therefore, a simple and intuitive way to reduce the number of constraints is to remove all inactive constraints from the LP problem at the end of each iteration, regardless of whether or not the corresponding binary parity check generates a cut. This approach is called MALP-B decoding~\cite{SparseLP}. By combining the ACG technique and the MALP-B algorithm, we obtain the ACG-MALP-B decoding algorithm. It is similar to the ACG-ALP algorithm described in Algorithm~\ref{alg2} but includes one additional step that removes all inactive constraints from the LP problem, as indicated in Line 3 of Algorithm~\ref{alg3}.
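A minimal sketch of the MALP-B pruning step, assuming the accumulated cuts are stored in the matrix form $\mathbf{A}\mathbf{u}\leq\mathbf{b}$ as NumPy arrays:
\begin{verbatim}
import numpy as np

def remove_inactive(A, b, u, tol=1e-9):
    # MALP-B-style pruning: keep only the constraints of A u <= b
    # that are active (tight, up to tol) at the current optimum u.
    slack = b - A @ u
    keep = slack <= tol
    return A[keep], b[keep]
\end{verbatim}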
Since adding further constraints into an LP problem reduces the feasible space, the minimum value of the cost function is non-decreasing as a function of the number of iterations. In our computer simulations, the ACG-MALP-B decoding algorithm was terminated when no further cuts could be found.
(See Fig.~\ref{fig_iter} for statistics on the average number of iterations required to decode one codeword of the (155,64) Tanner LDPC code.)
\begin{algorithm}[t]
\begin{doublespace}
\caption{ACG-MALP-B/C Decoding Algorithm}
\label{alg3}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE cost vector $\boldsymbol{\gamma}$, original parity-check matrix $\mathbf{H}$
\ENSURE Optimal solution of current LP problem
\STATE Initialize LP problem with the constraints in \eqref{init_cons}.
\STATE Solve the current LP problem, get optimal solution $x^*$.
\STATE ACG-MALP-B only: remove all inactive constraints from the LP problem.
\STATE ACG-MALP-C only: remove inactive constraints that have above-average slack values from the LP problem.
\STATE Apply {\bf CSA} only on rows of $\mathbf{H}$ that have not introduced constraints.
\IF{No cut is found {\bf and} $x^*$ is nonintegral}
\STATE Construct $\mathbf{\tilde H}$ according to $x^*$
\STATE Apply {\bf CSA} on each row of $\mathbf{\tilde H}$.
\ENDIF
\IF {No cut is found}
\STATE Terminate.
\ELSE
\STATE Add found cuts into LP problem as constraints, and go to line 2.
\ENDIF
\end{algorithmic}
\end{doublespace}
\end{algorithm}
In our implementation of both MALP-B and ACG-MALP-B decoding, we have noticed that a considerable number of the constraints deleted in previous iterations are added back into the LP problem in later iterations, and, in fact, many of them are added and deleted several times. We have observed that MALP-B-based decoding generally takes more iterations to decode a codeword than ALP-based decoding, resulting in a tradeoff between the number of iterations and the size of the constituent LP problems. MALP-B-based decoding has the largest number of iterations and the smallest LP problems to solve in each iteration, while ALP-based decoding has a smaller number of iterations but larger LP problems.
Although it is difficult to know in advance which inactive constraints might become cuts in later iterations, there are several ways to find a better tradeoff between the MALP-B and ALP techniques to speed up LP decoding. This tradeoff, however, is highly dependent on the LP solver used in the implementation. For example, we used the Simplex solver from the open-source GNU Linear Programming Kit (GLPK)~\cite{glpk}, and found that the efficiency of iterative ALP-based decoders is closely related to the total number of constraints used to decode one codeword, i.e., the sum of the number of constraints used in all iterations. This suggests a new criterion for the removal of inactive constraints whose implementation we call the MALP-C decoder.
In MALP-C decoding, instead of removing all inactive constraints from the LP problem in each iteration, we remove only the linear inequality constraints with slack variables that have above-average values, as indicated in Line 4 of Algorithm~\ref{alg3}. The ACG-MALP-B and ACG-MALP-C decoding algorithms are both described in Algorithm~\ref{alg3}, differing only in the use of Line 3 or Line 4. Although all three of the adaptive variations of LP decoding discussed in this paper -- ALP, MALP-B, and MALP-C -- have the exact same error-rate performance as the original LP decoder, they may lead to different decoding results for a given received vector when combined with the ACG technique, as shown in the next section.
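A sketch of the MALP-C pruning rule follows; since the averaging convention is not pinned down above, averaging the slack over all current constraints is our assumption.
\begin{verbatim}
import numpy as np

def prune_malp_c(A, b, u, tol=1e-9):
    # MALP-C-style pruning: keep active constraints and those
    # inactive constraints whose slack does not exceed the average
    # slack; only above-average-slack rows are dropped.
    slack = b - A @ u
    keep = slack <= max(slack.mean(), tol)
    return A[keep], b[keep]
\end{verbatim}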
\section{Numerical Results}
\label{sec:NR}
To demonstrate the improvement offered by our proposed decoding algorithms, we compared their error-correcting performance to that of ALP decoding (which, again, has the same performance as the original LP decoding), BP decoding (two cases, using the sum-product algorithm with a maximum of 100 iterations and 1000 iterations, respectively), the separation algorithm (SA) \cite{SepAlg}, the random-walk-based RPC search algorithm \cite{Taghavi_ALP}, and ML decoding for various LDPC codes on the additive white Gaussian noise (AWGN) channel. We use the Simplex algorithm from the open-source GLPK \cite{glpk} as our LP solver. The LDPC codes we evaluated are MacKay's rate-$\frac{1}{2}$, (3,6)-regular LDPC codes with lengths 96 and 408, respectively \cite{MackayCode}; a rate-$\frac{1}{4}$, (3,4)-regular LDPC code of length 100; the rate-$\frac{2}{5}$, (3,5)-regular Tanner code of length 155 \cite{Tanner}; and a rate-0.89, (3,27)-regular high-rate LDPC code of length 999 \cite{MackayCode}.
The proposed ACG-ALP, ACG-MALP-B, and ACG-MALP-C decoding algorithms are all based on the underlying cut-searching algorithm (Algorithm~\ref{alg1}) and the adaptive cut-generation technique of Section~\ref{subsec:RPC}. Therefore, their error-rate performance is very similar. However, their performance may not be identical, because cuts are found adaptively from the output pseudocodewords in each iteration and the different sets of constraints used in the three proposed algorithms may lead to different solutions of the corresponding LP problems.
In our simulation, the LP solver uses double-precision floating-point arithmetic, and therefore, due to this limited numerical resolution, it may round some small nonzero vector coordinate values to 0 or output small nonzero values for vector coordinates which should be 0. Similar rounding errors may occur for coordinate values close to 1. Coordinates whose values get rounded to integers by the LP solver might lead to some ``false'' cuts -- parity inequalities not actually violated by the exact LP solution. This is because such rounding by the LP solver would decrease the left-hand side of parity inequality \eqref{PI1}. On the other hand, when coordinates that should have integer values are given nonintegral values, the resulting errors would increase the left-hand side of parity inequality \eqref{PI1}, causing some cuts to be missed. Moreover, this would also increase the size of the submatrix $\mathbf{H}^{(\mathrm{f})}$ in \eqref{PiH}, leading to higher complexity for the ACG-ALP decoding algorithm.
To avoid such numerical problems in our implementation of the CSA, we used $1-10^{-6}$ instead of 1 on the right-hand side of the inequality in line 14 of Algorithm~\ref{alg1}. Whenever the LP solver outputs a solution vector, coordinates with value less than $10^{-6}$ were rounded to 0 and coordinates with value larger than $1-10^{-6}$ were rounded to 1. The rounded values were then used in the cut-search and RPC-generation steps in the decoding algorithms described in previous sections. If such a procedure were not applied, and if, as a result, false cuts were to be produced, the corresponding constraints, when added into the LP problem to be solved in the next step, would leave the solution vector unchanged, causing the decoder to become stuck in an endless loop. We saw no such behavior in our decoder simulations incorporating the prescribed thresholding operations.
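The thresholding step can be summarized by a small helper (a sketch of our convention, matching the $10^{-6}$ tolerance above):
\begin{verbatim}
def snap(u, eps=1e-6):
    # Guard against floating-point artifacts from the LP solver:
    # coordinates within eps of 0 or 1 are snapped to the exact
    # integer value before cut search and RPC generation.
    return [0.0 if x < eps else 1.0 if x > 1.0 - eps else x for x in u]
\end{verbatim}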
Finally, we want to point out that there exist LP solvers, such as \emph{QSopt\_ex Rational LP Solver} \cite{QSopt}, that produce exact rational solutions to LP instances with rational input. However, such solvers generally have higher computational overhead than their floating-point counterparts. For this reason, we did not use an exact rational LP solver in our empirical studies.
\begin{figure}[!t]
\includegraphics[width=0.9\linewidth]{jnl_fer_100}
\centering
\caption{FER versus $E_b/N_0$ for random (3,4)-regular LDPC code of length 100 on the AWGN channel.}\label{fig_fer_100}
\end{figure}
Fig.~\ref{fig_fer_100} shows the simulation results for the length-100, (3,4)-regular LDPC code whose FER performance was also evaluated in \cite{Taghavi_ALP} and \cite{SepAlg}.
We can see that the proposed algorithms have a gain of about 2~dB over the original LP and ALP decoder. They also perform significantly better than both the separation algorithm and the random-walk algorithm. The figure also shows
the results obtained with the Box-and-Match soft-decision decoding algorithm (BMA) \cite{BMA}, whose FER performance is guaranteed to be within a factor of 1.05 times that of ML decoding. We conclude that the performance gap between the proposed decoders and ML decoding is less than 0.2~dB at an FER of $10^{-5}$.
\begin{figure*}[!t]
\includegraphics[width=0.9\linewidth]{jnl_fer_96}
\centering
\caption{FER versus $E_b/N_0$ for MacKay's random (3,6)-regular LDPC code of length 96 on the AWGN channel.}\label{fig_fer_96}
\end{figure*}
\begin{table*}
\renewcommand{\arraystretch}{1.3}
\caption{Frame Errors of ACG-ALP decoder on MacKay's random (3,6)-regular LDPC code of length 96 on the AWGN channel.}
\label{tabMLLB}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\bfseries $\text{E}_{\text{b}}/\text{N}_0$ (dB) & \bfseries Transmitted Frames & \bfseries Error Frames & \bfseries Pseudocodewords & \bfseries Incorrect Codewords\\
\hline
3.0 & 1,136,597 & 3,000 & 857 & 2,143\\
3.5 & 4,569,667 & 3,000 & 395 & 2,605\\
4.0 & 16,724,921 & 3,000 & 103 & 2,897\\
4.5 & 54,952,664 & 3,000 & 12 & 2,988\\
5.0 & 185,366,246 & 3,000 & 0 & 3,000\\
5.5 & 665,851,530 & 3,000 & 0 & 3,000\\
\hline
\end{tabular}
\end{table*}
In Fig.~\ref{fig_fer_96}, we show simulation results for MacKay's length-96, (3,6)-regular LDPC code (the 96.33.964 code from \cite{MackayCode}). Again, the proposed ALP-based decoders with ACG demonstrate superior performance to the original LP, BP, and SA decoders over the range of SNRs considered. Table~\ref{tabMLLB} shows the actual frame error counts for the ACG-ALP decoder, with frame errors classified as either pseudocodewords or incorrect codewords; the ACG-MALP-B and ACG-MALP-C decoder simulations yielded very similar results. We used these counts to obtain a lower bound on ML decoder performance, also shown in the figure, by dividing the number of times the ACG-ALP decoder converged to an incorrect codeword by the total number of frames transmitted. Since the ML certificate property of LP decoding implies that ML decoding would have produced the same incorrect codeword in all of these instances, this ratio represents a lower bound on the FER of the ML decoder. We note that, when $E_b/N_0$ is greater than 4.5~dB, all decoding errors correspond to incorrect codewords, indicating that the ACG-ALP decoder has achieved ML decoding performance for the transmitted frames.
Fig.~\ref{fig_fer_155} compares the performance of several different decoders applied to the (3,5)-regular, (155,64) Tanner code, as well as the ML performance curve from \cite{SepAlg}. It can be seen that the proposed ACG-ALP-based algorithms narrow the 1.25~dB gap between the original LP decoding and ML decoding to approximately 0.25~dB.
\begin{figure}
\includegraphics[width=0.9\linewidth]{jnl_fer_155}
\centering
\caption{FER versus $E_b/N_0$ for (155,64) Tanner LDPC code on the AWGN channel.}\label{fig_fer_155}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\linewidth]{jnl_fer_408}
\centering
\caption{FER versus $E_b/N_0$ for MacKay's random (3,6)-regular LDPC code of length 408 on the AWGN channel.}\label{fig_fer_408}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\linewidth]{jnl_fer_999}
\centering
\caption{FER versus $E_b/N_0$ for MacKay's random (3,27)-regular LDPC code of length 999 on the AWGN channel.}\label{fig_fer_999}
\end{figure}
We also considered two longer codes, MacKay's rate-$\frac{1}{2}$, random (3,6)-regular LDPC code of length 408 (the 408.33.844 code from \cite{MackayCode}) and a rate-0.89 LDPC code of length 999 (the 999.111.3.5543 code from \cite{MackayCode}). Because of the increased complexity of the constituent LP problems, we only simulated the ACG-MALP-B and ACG-MALP-C decoders.
In Fig.~\ref{fig_fer_408}, it is confirmed that the proposed decoding algorithms provide significant gain over the original LP decoder and the BP decoder, especially in the high-SNR region.
The results for the high-rate LDPC code, shown in Fig.~\ref{fig_fer_999}, again demonstrate that the proposed decoding algorithms approach ML decoding performance for some codes, where the ML lower bound is obtained using the same technique as in Fig.~\ref{fig_fer_96}. However, for the code of length 408, we found that the majority of decoding failures corresponded to pseudocodewords, so, in contrast to the case of the length-96 and length-999 MacKay codes discussed above, the frame error data do not provide a good lower bound on ML decoder performance to use as a benchmark.
Since the observed improvements in ACG-ALP-based decoder performance come from the additional RPC cuts found in each iteration, these decoding algorithms generally require more iterations and/or the solution of larger LP problems in comparison to ALP decoding. In the remainder of this section, we empirically investigate the relative complexity of our proposed algorithms in terms of such statistics as the average number of iterations, the average size of constituent LP problems, and the average number of cuts found in each iteration. All statistical data presented here were obtained from simulations of the Tanner (155,64) code on the AWGN channel. We ran all simulations until at least 200 frame errors were counted.
\begin{figure}[!t]
\includegraphics[width=0.9\linewidth]{jnl_Iter_155}
\centering
\caption{Average number of iterations for decoding one codeword of (155,64) Tanner LDPC code.}\label{fig_iter}
\end{figure}
\begin{figure}[!t]
\centerline
{\subfigure[Average number of constraints in final iteration]{\includegraphics[width=0.45\linewidth]{jnl_Constraint_155}
\label{subfig_cst}}
\hfil
\subfigure[Average number of cuts found per iteration]{\includegraphics[width=0.45\linewidth]{jnl_Cut_iter_155}
\label{subfig_CpI}}}
\centerline
{\subfigure[Average number of cuts found from $\mathbf H$ for decoding one codeword]{\includegraphics[width=0.45\linewidth]{jnl_Hcut_155}
\label{subfig_Hcut}}
\hfil
\subfigure[Average number of cuts found from RPCs for decoding one codeword]{\includegraphics[width=0.45\linewidth]{jnl_Rcut_155}
\label{subfig_Rcut}}}
\caption{Average number of constraints/cuts during decoding iterations for decoding one frame of (155,64) Tanner LDPC code.}
\label{fig_cuts}
\end{figure}
In Fig.~\ref{fig_iter}, we compare the average number of iterations needed, i.e., the average number of LP problems solved, to decode one codeword. Fig.~\ref{subfig_cst} compares the average number of constraints in the LP problem of the final iteration that results in either a valid codeword or a pseudocodeword with no more cuts to be found. In Fig.~\ref{subfig_CpI}, we show the average number of cuts found and added into the LP problem in each iteration. Fig.~\ref{subfig_Hcut} and Fig.~\ref{subfig_Rcut} show the average number of cuts found from the original parity-check matrix $\mathbf H$ and from the generated RPCs, respectively.
\begin{table*}
\renewcommand{\arraystretch}{1.3}
\caption{The average accumulated number of constraints in all iterations of decoding one codeword of (155,64) Tanner code on the AWGN channel}
\label{tabCST}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\bfseries $\text{E}_{\text{b}}/\text{N}_0$ (dB) & \bfseries ACG-ALP & \bfseries ACG-MALP-B & \bfseries ACG-MALP-C\\
\hline
1.83 & 5495.8 & 5223.3 & 4643.1\\
2.33 & 1401.2 & 1387.3 & 1217.0\\
2.83 & 339.7 & 326.9 & 300.9\\
3.33 & 111.0 & 106.4 & 105.4\\
3.83 & 64.3 & 58.8 & 62.8\\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=0.9\linewidth]{jnl_SimT_155}
\centering
\caption{Average simulation time for decoding one codeword of (155,64) Tanner LDPC code.}\label{fig_simt}
\end{figure}
From Fig.~\ref{fig_iter} and Fig.~\ref{subfig_cst}, we can see that, as expected, the ACG-ALP decoder takes fewer iterations on average to decode a codeword than the ACG-MALP-B/C decoders, while the ACG-MALP-B/C decoders have fewer constraints in each iteration, including the final one. The ACG-MALP-B/C decoders also add fewer cuts to the constituent LP problems per iteration on average, as reflected in Fig.~\ref{subfig_CpI}, because in some of their iterations the added constraints are ones that had previously been removed.
Among all three proposed ACG-based decoding algorithms, we can see that the ACG-ALP decoder has the largest number of constraints in the final iteration and needs the least overall number of iterations to decode, while ACG-MALP-B decoding has the smallest number of constraints but requires the largest number of iterations. The ACG-MALP-C decoder offers a tradeoff between those two: it has fewer constraints than the ACG-ALP decoder and requires fewer iterations than the ACG-MALP-B decoder. If we use the accumulated number of constraints in all iterations to decode one codeword as a criterion to judge the efficiency of these algorithms during simulation, then ACG-MALP-C decoding is more efficient than the other two algorithms in the low and moderate SNR regions, as shown in Table~\ref{tabCST}. Note that the ACG-MALP-B decoder is most efficient at high SNR where the decoding of most codewords succeeds in a few iterations and the chance of a previously removed inactive constraint being added back in later iterations is quite small. Hence, ACG-MALP-B decoding is preferred in the high-SNR region.
Fig.~\ref{fig_simt} presents an alternative way of comparing the complexity of the decoding algorithms. It shows the average decoding time when we implement the algorithms using C++ code on a desktop PC, with GLPK as the LP solver. The BP decoder is implemented in software with messages represented as double-precision floating-point numbers, and the exact computation of sum-product algorithm is used, without any simplification or approximation. The BP decoder iterations stop as soon as a codeword is found, or when the maximum allowable number of iterations -- here set to 100 and 1000 -- have been attempted without convergence. The simulation time is averaged over the number of transmitted codewords required for the decoder to fail on 200 codewords.
We observe that the ACG-MALP-B and ACG-MALP-C decoders are both uniformly faster than ACG-ALP over the range of SNR values considered, and, as expected from Table~\ref{tabCST}, ACG-MALP-C decoding is slightly more efficient than ACG-MALP-B decoding in terms of actual running time.
Of course, the decoding time depends both on the number of LP problems solved and the size of these LP problems, and the preferred trade-off depends heavily upon the implementation, particularly the LP solver that is used. Obviously, the improvement in error-rate performance provided by all three ACG-based decoding algorithms over the ALP decoding comes at the cost of increased decoding complexity. As SNR increases, however, the average decoding complexity per codeword of the proposed algorithms approaches that of the ALP decoder. This is because, at higher SNR, the decoders can often successfully decode the received frames without generating RPC cuts.
Fig.~\ref{fig_iter} shows that the ACG-ALP decoder requires, on average, more iterations than the SA decoder.
Our observations suggest that this is because the ACG-ALP decoder can continue to generate new RPC cuts beyond the point at which the SA decoder can no longer do so and, hence, stops decoding. The simulation data showed that the additional iterations of the ACG-ALP decoder often resulted in a valid codeword, thus contributing to its superior performance relative to the SA decoder.
From Fig.~\ref{subfig_CpI}, it can be seen that the ACG-ALP-based decoding algorithms generate, on average, fewer cuts per iteration than the SA decoder. Moreover, as reflected in Fig.~\ref{subfig_Hcut} and \ref{subfig_Rcut}, the ACG-ALP decoders find more cuts from the original parity-check matrix and generate fewer RPC cuts per codeword. These observations suggest that the CSA is very efficient in finding cuts from a given parity check, while the SA decoder tends to generate RPCs even when there are still some cuts other than the Gomory cuts that can be found from the original parity-check matrix. This accounts for the fact, reflected in Fig.~\ref{fig_simt}, that the SA becomes less efficient as SNR increases, when the original parity-check matrix usually can provide enough cuts to decode a codeword. The effectiveness of our cut-search algorithm permits the ACG-ALP-based decoders to successfully decode most codewords in the high-SNR region without generating RPCs, resulting in better overall decoder efficiency.
Due to limitations on our computing capability, we have not yet tested our proposed algorithms on LDPC codes of length greater than 1000. We note that, in contrast to \cite{Taghavi_ALP} and \cite{SparseLP}, we cannot give an upper bound on the maximum number of iterations required by the ACG-ALP-based decoding algorithms because RPCs and their corresponding parity inequalities are generated adaptively as a function of intermediate pseudocodewords arising during the decoding process. Consequently, even though the decoding of short-to-moderate length LDPC codes was found empirically to converge after an acceptable number of iterations, some sort of constraint on the maximum number of iterations allowed may have to be imposed when decoding longer codes. Finally, we point out that the complexity of the algorithm for generating cut-inducing RPCs lies mainly in the Gaussian elimination step, but as applied to binary matrices, this requires only logical operations which can be executed quite efficiently.
\section{Conclusion}
\label{sec:concl}
In this paper, we derived a new necessary condition and a new sufficient condition for a parity-check constraint in a linear block code parity-check matrix to provide a violated parity inequality, or cut, at a pseudocodeword produced by LP decoding. Using these results, we presented an efficient algorithm to search for such cuts and proposed an effective approach to generating cut-inducing redundant parity checks (RPCs). The key innovation in the cut-generating approach is a particular transformation of the parity-check matrix used in the definition of the LP decoding problem. By properly re-ordering the columns of the original parity-check matrix and transforming the resulting matrix into a ``partial'' reduced row echelon form, we could efficiently identify RPC cuts that were found empirically to significantly improve the LP decoder performance. We combined the new cut-generation technique with three variations of adaptive LP decoding, providing a tradeoff between the number of iterations required and the number of constraints in the constituent LP problems. Frame-error-rate (FER) simulation results for several LDPC codes of length up to 999 show that the proposed adaptive cut-generation, adaptive LP (ACG-ALP) decoding algorithms outperform other enhanced LP decoders, such as the separation algorithm (SA) decoder, and significantly narrow the gap to ML decoding performance for LDPC codes with short-to-moderate block lengths.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
The next phase of the already successful Large Hadron Collider (LHC), called the High-Luminosity LHC (HL-LHC) is foreseen to start operation in 2027. Within a 10 year time period the HL-LHC will provide about ten times the LHC dataset. This rate will require an instantaneous luminosity three to four times higher than presently available. Such an exceptional environment poses great challenges to the experimental detectors, which were designed for the current LHC phase.
In order to cope with the more challenging environment and mitigate effects of radiation damage, the CMS collaboration~\cite{Chatrchyan:2008aa} is planning an upgrade of its detector systems and infrastructure.
Due to the extremely harsh conditions in the forward region, the present endcap calorimeters will be replaced by the new High Granularity Calorimeter (HGCAL)~\cite{TDR, FS}.
Finely segmented silicon sensors and scintillator tiles with silicon photomultiplier readout will cover 50 longitudinal layers in order to provide high granularity information for physics objects reconstruction.
A grand challenge of the HL-LHC will be the large pileup, with up to 200 simultaneous proton collisions on average. The interaction vertices will be spread over $\pm 50$\,mm along the beam direction and over about $\pm 150$\,ps in time. These characteristics imply that timing measurements with resolutions in the 30\,ps range will greatly enhance pileup mitigation.
In order to meet these performance goals, HGCAL will provide timing measurement for individual hits with signals above 12\,fC (equivalent to 3-10\,MIPs), such that clusters with $p_T > 5$ \,GeV should have a timing resolution better than 30\,ps.
\section{Timing use in reconstruction}
\label{sec:reco}
Extensive studies using full simulations were performed to evaluate the use of timing information in the reconstruction of physics objects within HGCAL (cf. chapter 5.5 in \cite{TDR}).
Based on electronics simulations, the single channel resolution was assumed to be:
$$
\sigma_t = \sigma _{noise} \oplus \sigma _{floor},
\textrm{where } \sigma _{noise} = \frac{A}{S/N},
$$
with a noise term $A$ of 1.5\,ns/fC and a constant term $\sigma _{floor}$ of 20\,ps. $S/N$ denotes the signal-over-noise ratio, whereas the symbol $\oplus$ indicates quadratic summation.
Given such a noise term, single-hit timing information may not suffice to discriminate hits from pileup objects.
However, the large granularity results in a high hit multiplicity, and therefore the best timing performance is expected from the combination of all cells from within the same particle shower.
The studies presented in the HGCAL TDR showed that rejecting hits in the tails of the time distribution provides robust pileup rejection. Furthermore, taking only cells within 2\,cm of the shower axis, an efficiency of 100\,\% for photons is achievable with a resolution of 20\,ps for $p_T > 2$\,GeV. For neutral kaons, $K^0_L$, with $p_T > 5$\,GeV the resolution is below 30\,ps with a 90\,\% efficiency.
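As an illustration of the single-channel model above, the following Python sketch evaluates the resolution for a given signal-over-noise ratio and combines uncorrelated cells by inverse-variance weighting; the weighting is an assumption made for illustration and does not reproduce the full TDR clustering procedure (in particular, no tail rejection is applied, and the unit convention is simplified).
\begin{verbatim}
import numpy as np

def sigma_t(snr, a=1500.0, floor=20.0):
    # Single-cell resolution in ps: sigma_t = a/(S/N) (+) floor,
    # with (+) denoting quadratic summation; a and floor follow
    # the parametrisation quoted in the text (units simplified).
    return np.hypot(a / np.asarray(snr, dtype=float), floor)

def cluster_sigma(snrs):
    # Assumed inverse-variance-weighted combination of
    # uncorrelated cells within the same shower.
    w = 1.0 / sigma_t(snrs) ** 2
    return 1.0 / np.sqrt(w.sum())
\end{verbatim}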
\section{Electronics requirements}
Precision timing in such a complex, high-granularity detector is a great challenge for the electronics system of the whole experiment. Therefore, very strict requirements on the timing performance are imposed on all components, from the sensitive elements up to the clock distribution system.
The timing performance of single HGCAL prototype silicon sensors has been evaluated in a dedicated beam test campaign, see Ref.~\cite{TB2016}. After irradiation up to a fluence of $10^{16}$\,$\textrm{cm}^{-2}$ (1\,MeV neutron equivalent) the timing resolution constant term for single channels was found to be about 20\,ps. The clock distribution system is being designed to have a reduced jitter ($<$ 15\,ps).
Thus, the already very challenging front-end ASIC HGCROC~\cite{HGCROC} is key to the overall timing performance, since it is responsible for the precise measurement of the incoming physics signals.
The HGCROC features a Time-of-Arrival (TOA) block based on a constant-threshold discriminator and a 3-stage TDC (Fig.~\ref{fig:hgcroc_tdc}). The first stage uses a 2-bit Gray counter on the 160\,MHz clock of the chip-internal phase-locked loop (PLL), hence with a least significant bit (LSB) of 6.25\,ns.
Further, a coarse time-to-digital converter (TDC) with a classical delay-locked loop (DLL) measures the time between the last 40 MHz clock edge and the trigger signal with an LSB of 195\,ps.
Finally, a fine TDC with an LSB of 24.4\,ps uses a residue integrator based on a DLL line.
The architectural advantages of such a design are high-speed conversion, low power consumption, and large time range due to the global counter. This design also ensures the same performance under temperature and process variations.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\linewidth]{figs/hgcroc/tdc_blocks}
\includegraphics[width=0.5\linewidth]{figs/hgcroc/tdc_inl}
\caption{Left: architecture of the TDC block in the HGCROC. Right: integral non-linearity (INL) of the uncalibrated TDC measurement with respect to the 10-bit TDC code.}
\label{fig:hgcroc_tdc}
\end{figure}
Performance measurements are currently being performed with HGCROC prototypes.
For the TDC block, the differential non-linearity is within 1 LSB, whereas the raw integral non-linearity (INL) is about 10 LSB peak-to-peak. After INL calibration, the TDC time resolution is found to be about 10\,ps. The full analog and digital TOA chain is reaching a resolution of about 50\,ps satisfying the specification.
\section{Prototype timing performance in beam test }
The performance and feasibility of the HGCAL design have been validated in several beam test campaigns in the years from 2016 until 2018. A major beam test has taken place in October 2018 at the CERN SPS H2 beamline, featuring almost 100 silicon prototype modules~\cite{TB}.
One of the goals of these tests was to validate the timing performance of the prototype.
For this purpose, the dedicated front-end readout ASIC SKIROC2-CMS~\cite{SK2cms} was equipped with a Time-of-Arrival measurement circuit (Fig.~\ref{fig:toa_block}).
The output of the preamplifier is fed into a CRRC fast shaper, followed by a threshold discriminator which starts a voltage ramp (Time-to-Amplitude converter, TAC) that is sampled at the edge of the 40 MHz system clock.
The fast shaping time is set to 5\,ns, and the discriminator threshold to an equivalent of about 40\,fC for an optimal noise and timing performance.
Due to the design specifics, the TAC ramp saturates after about 12\,ns, causing a non-linearity at large TOA values. Measuring against both the rising and the falling clock edge, offset by 12.5\,ns, allows the impact of the non-linearity to be reduced by taking a weighted average.
Measurements of the TOA resolution on a single-ASIC test board resulted in a constant term of 50\,ps, compatible with the specifications. In order to effectively assess the performance in the beam test, an MCP-PMT device with a 20\,ps resolution was used as a time reference.
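The rise/fall combination can be sketched as follows; the weights are hypothetical and would in practice be derived from the measured response curves.
\begin{verbatim}
def toa_combined(t_rise, t_fall, w_rise=1.0, w_fall=1.0):
    # Combine the TOA measured against the rising and the falling
    # clock edge (offset by 12.5 ns); near TAC saturation one edge
    # is more linear than the other, so the (assumed) weights would
    # be taken from the response curves.
    t_fall_aligned = t_fall - 12.5  # ns
    return (w_rise * t_rise + w_fall * t_fall_aligned) \
           / (w_rise + w_fall)
\end{verbatim}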
\begin{figure}[tb]
\centering
\includegraphics[width=0.54\linewidth]{figs/beamtest/sk2cms_toa_block}
\includegraphics[width=0.45\linewidth]{figs/beamtest/sk2cms_toa_ramp}
\caption{SKIROC2-CMS TOA analog blocks (left) and time-to-amplitude (TAC) ramp (right).}
\label{fig:toa_block}
\end{figure}
After the initial large common-mode noise found in tests during 2017 was eliminated, the TOA threshold was optimised for data taking in 2018.
Since noise can cause TOA inefficiencies, the thresholds were determined using the turn-on curve method with external pulse injection in laboratory environment (Fig.~\ref{fig:toa_scurves}). The thresholds were selected corresponding to 95\,\% TOA efficiency. Measurements using the beam test data showed that the effective TOA thresholds were in a range from 10 to 20 minimum-ionising particle (MIP) charge equivalent.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\linewidth]{figs/beamtest/scurves_inj}
\qquad
\includegraphics[width=0.35\linewidth]{figs/beamtest/scurves_data}
\caption{TOA turn-on curves for a threshold scan (left) and as measured in beam data (right).}
\label{fig:toa_scurves}
\end{figure}
A dedicated data-driven calibration method was developed in order to find the response curve of the TOA.
The method is based on the fact that the SPS beam is asynchronous with respect to the ASIC clock used for the TOA measurement. Therefore, the true hit time distribution is uniform in its 25\,ns range.
According to the probability integral transform\footnote{From probability theory: it states that if a random variable X has a continuous distribution for which the CDF is $F_X$, then the random variable defined as $F_X(X)$ has a standard uniform distribution.},
the variable constructed from the cumulative distribution function (CDF) of the TOA has a uniform distribution. Hence, this variable is proportional to the true hit time. Due to the normalization of the CDF, a multiplication by 25 is needed to obtain the unit in nanoseconds (Fig.~\ref{fig:toa_calib1}).
Note that for convenience, normalized TOA values in the range from 0 to 1 are used.
In addition, an inversion of the delay and TOA variables is performed to reflect the fact that
the measurement is done against the following clock edge.
In such a representation the TAC non-linearity is visible for large TOA and low delay values (Fig.~\ref{fig:toa_calib1}, right).
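A minimal sketch of this CDF-based calibration, assuming the normalized TOA samples have been collected into an array, is given below.
\begin{verbatim}
import numpy as np

def toa_response_curve(toa_samples, n_bins=1024):
    # The true hit times are uniform over 25 ns (asynchronous beam),
    # so the empirical CDF of the normalized TOA, scaled by 25 ns,
    # estimates the response curve.
    hist, edges = np.histogram(toa_samples, bins=n_bins,
                               range=(0.0, 1.0))
    cdf = np.cumsum(hist) / hist.sum()
    return edges[1:], 25.0 * cdf   # (TOA axis, delay in ns)

# usage: delay = np.interp(measured_toa, toa_axis, delay_ns)
\end{verbatim}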
\begin{figure}[h]
\centering
\includegraphics[width=0.32\linewidth]{figs/beamtest/toa_dist}
\includegraphics[width=0.31\linewidth]{figs/beamtest/toa_cumul_dist}
\includegraphics[width=0.3\linewidth]{figs/beamtest/toa_cumul_curve}
\caption{Distribution of the uncalibrated normalized TOA in beam test data (left), its cumulative distribution (middle), and the reconstructed TOA response curve for the true hit time (delay) from the CDF (right).}
\label{fig:toa_calib1}
\end{figure}
This data-driven calibration method was validated in several ways.
The median of the raw TOA distribution corresponds to the 12.5\,ns point, as can be verified exactly from the measurements against both the rising and the falling clock edge.
The linear part of the TOA is compatible with an intercept of 25\,ns, which is the maximum range of the TOA.
Finally, a comparison of the calibration curves obtained from the data-driven method and independently from external charge injection shows a reasonable agreement for most of the range (Fig.~\ref{fig:toa_calib2}, left).
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\linewidth]{figs/beamtest/toa_calib_data_vs_inj}
\qquad
\includegraphics[width=0.3\linewidth]{figs/beamtest/toa_timewalk}
\caption{Left: comparison of the TOA response curve derived in beam data (black dots) and with external charge injection (dashed line). Right: TOA timewalk of a single channel with respect to the MCP-PMT reference.}
\label{fig:toa_calib2}
\end{figure}
Due to the finite shaping time, the TOA measurement is affected by timewalk, i.e. an energy-dependence of the time measurement. For low-energy hits the timewalk reaches about 10\,ns.
In order to calibrate for the timewalk, the response-corrected TOA is compared to the MCP-PMT external time reference (Fig.~\ref{fig:toa_calib2}, right) and fitted with this function: $a + b x + c/(x-d)$.
The large fit residuals seen around 200\,MIP correspond to the switching of the ASIC gain, highlighting the importance of the gain inter-calibration.
The constant term $a$ in the timewalk function provides the systematic offset with respect to the time reference. The offsets determined for each central module of the setup show a clear trend corresponding to the time-of-flight of relativistic particles (Fig.~\ref{fig:toa_results}, left).
Since the prototype's clock distribution system ensures equal trace lengths to each detector module, deviations can be attributed to various residual electronics effects. It must be noted that the improper accounting of the systematic offsets will significantly deteriorate the final resolution when combining individual channels.
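A sketch of the timewalk fit using the function quoted above is shown below; the starting values are illustrative assumptions, and the gain inter-calibration is not included.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def timewalk(x, a, b, c, d):
    # Fit model from the text; x is the hit energy in MIP equivalents.
    return a + b * x + c / (x - d)

def fit_timewalk(energy_mip, dt_vs_mcp):
    # dt_vs_mcp: response-corrected TOA minus the MCP-PMT reference.
    # p0 is an illustrative starting point (assumed, not from data);
    # d should stay below the smallest fitted energy.
    p0 = (0.0, 0.0, 10.0, 0.0)
    popt, _ = curve_fit(timewalk, energy_mip, dt_vs_mcp, p0=p0)
    return popt   # popt[0] = a is the systematic offset
\end{verbatim}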
\begin{figure}[b]
\centering
\includegraphics[width=0.4\linewidth]{figs/beamtest/tof_pions}
\includegraphics[width=0.58\linewidth]{figs/beamtest/toa_reso_comp_refs}
\caption{Left: difference of the measured TOA time with respect to the MCP-PMT reference in pion data. The trend follows the time-of-flight of pions and deviations are due to residual electronics effects.
Right: single-channel time resolution using another silicon cell or the MCP as a reference as described in the text. }
\label{fig:toa_results}
\end{figure}
Finally, the single-channel timing resolution was evaluated using both the MCP-PMT and another silicon channel as reference (Fig.~\ref{fig:toa_results}, right). For the latter case the resolution was estimated from the time difference between two channels whose energies were required to be in the same energy bin; the two channels' measurements were treated as uncorrelated, and the resolution of the difference was therefore scaled by a factor of $1/\sqrt{2}$.
The resolution was found to be compatible between the two references, with the constant term reaching the specification of the SKIROC2-CMS. The noise term (as in Sec.~\ref{sec:reco}) is degraded with respect to the single-chip lab measurements due to the increased noise environment in the prototype modules.
\section{Outlook}
The present CMS endcap calorimeters will be replaced by the High Granularity Calorimeter for the HL-LHC era.
In order to improve the physics potential, including pileup mitigation, HGCAL will provide timing information at the single-hit level.
Clusters with $p_T >5$\,GeV will be able to achieve resolutions of 30\,ps or better.
The electronics challenges of high-precision timing are tackled by the front-end ASIC HGCROC using a multi-stage TDC. Preliminary measurements show a resolution below 13\,ps in a current prototype, with a new iteration planned in 2020.
Beam tests have been performed with HGCAL prototypes in order to verify the technical and physics feasibility of timing measurements.
Data-driven TOA calibrations were developed, and the single-channel resolution was found to be within the expectation of the SKIROC2-CMS prototype ASIC.
Further analysis, in particular the challenging combination of multiple channels, is ongoing.
The results will be documented in a future publication.
One of the most important functions of an autonomous system is to be able to follow a specified
path, pertinent to the mission. Trajectory tracking is often accomplished using a combination of a waypoint path planner and a control system that is used to follow the computed way-points. Path planning is an extensively researched topic; the way-points produced by such planners can be converted into 3-D trajectories, which the controller is then tasked to follow. In the autonomous satellite docking problem, the main goal is to achieve the position tracking of the deputy satellite to the chief satellite. The deputy satellite's objective is to complete the docking procedure with the chief. In other words, the relative position/velocity of the deputy needs to follow some reference input generated by the chief. Under this framework, we consider the output regulation
problem to achieve the autonomous satellite docking procedure.
The output regulation problem has gained the consideration and attraction of a wide audience in control systems society since it is a general mathematical formulation to tremendous control problems applications in engineering, biology, satellite clustering and other disciplines; see, for instance, \cite{bonivento2001output, huang2004nonlinear, isidori2003robust, sontag2003adaptation, trentelman2002control} and many references therein. The linear output regulation problem is mainly concerned in designing a control policy to achieve asymptotic tracking of a class of reference inputs, in addition to rejecting nonvanishing disturbances, in which both the reference signals and the disturbances are generated by a class of an autonomous systems, named exosystems. Essentially, the output regulation problem is solved using either feedback-feedforward method, or the internal model principle. In this work, we focus on solving the output regulation problem in the feedback-feedforward scheme.
Solving the output regulation problem by itself cannot guarantee an optimal behaviour for the studied dynamical system, whether in its transient or steady-state responses. Dynamic programming (DP) is a backbone in solving optimal control problems. DP was first introduced by Bellman in the early 1950s \cite{bellman1954theory,Bellman1956,Bellman1957}. Bellman's principle of optimality is the fundamental idea behind DP; it states that, no matter what the previous actions have been, the remaining actions must constitute an optimal policy with regard to the state resulting from those actions \cite{Frank2012optimal}. Based on the theoretical foundation of DP, reinforcement learning \cite{sutton2018reinforcement} and adaptive dynamic programming (ADP) \cite{Lewis2009} methods have been developed to provide learning-based solutions to optimal control and decision making problems without using the modeling information.
In particular, ADP has gained numerous attentions recently and been applied to control both continuous-time systems \cite{Lewis2009, jiang2012computational, Gao2014WCICA,Bian2014Auto,bertsekas2015value,Bian2016,Bian2016TAC,zhong2016event,kiumarsi2017optimal,wang2017adaptive,article,doi:10.1137/18M1214147,fong2018dual,rizvi2019reinforcement,9034079,Vamvoudakis2015Auto} and discrete-time systems \cite{wei2015value,6917009,6815973,liu2019h,8685683,Jiang2020FnT}.
Therefore, different studies have considered combining the theories of adaptive optimal control with the output regulation in order to achieve the adaptive optimal output regulation problem; see \cite{Qasem2022,GAO2022110366,Deng2020,Zhao2021,Gao2017ACC,Gao2015MICNON,Gao2015ACC,Gao2016TAC,Gao2016CDC,Gao2016ACC,Gao2016JINT,Gao2014WCICA,Gao2016Auto} and references therein. Using reinforcement learning and Bellman's principle of optimality \cite{sutton2018reinforcement}, ADP methods
\cite{gao2021reinforcement,he2019adaptive,JiangBook2017,kamalapurkar2018reinforcement,vamvoudakis2012multi,wei2020continuous,yang2021model,Bian2016,bian2021reinforcement,heydari2018stability,
rizvi2019reinforcement,
zhao2020event,QasemHI,GAO2022110366,gao2022learning}, which are essentially based on reinforcement learning,
are developed such that the agent can learn the optimal control policy by interacting with its unknown environment. With this learning framework (see Fig. \ref{fig: actor critic diagram}), and by taking into account the output regulation problem, one can develop an adaptive optimal feedback-feedforward controller which behaves optimally in the long term without knowledge of the system matrices.
In addition to the asymptotic tracking and disturbance rejection, two minimization problems of predefined cost functions are also considered, in which by solving these minimization problems, the optimal output regulation is achieved. Besides the issue of maintaining the asymptotic tracking, obtaining the full knowledge of the dynamics of the satellite is usually a difficult task, or even impossible. Moreover, the modelling information may not be exact enough, which can cause modelling mismatch, and therefore the designed controller may not achieve satisfactory results. To fill in the gap between the output regulation problem and optimality, and overcome the barrier of modelling the physics of the system, a data-driven optimal controller is designed to approximate the feedback and feedforward control gains without the knowledge of the dynamics of the satellite (deputy), using the state/input information collected along the trajectories of the deputy.
The contribution of this paper is summarized by the following:
(i) We consider the autonomous satellite docking problem based on the ADP and under the framework of the output regulation problem.
(ii) The Clohessy-Wiltshire equations are considered wherein the optimal feedback-feedforward control gain matrices are obtained using value iteration.
(iii) It is shown the tracking of the deputy to the chief is perfectly achieved in an optimal sense by considering reinforcement learning with the output regulation problem.
(iv) To the best of our knowledge, this work is the first of its kind to consider ADP strategies and the output regulation concept in autonomous satellite docking applications to regulate the relative position of the deputy to the chief.
\subsubsection*{\textbf{Notations.}}
The operator $|\cdot|$ represents the Euclidean norm for vectors and the induced norm for matrices. $\mathbb{Z}_{+}$ denotes the set of nonnegative integers. The Kronecker product is represented by $\otimes$, and the block diagonal matrix operator is denoted by $\textrm{bdiag}$.
$I_n$ denotes the identity matrix of dimension $n$ and $0_{n\times m}$ denotes a $n\times m$ zero matrix.
$\text{vec}(A) = [a_1^\textrm{T},a_2^\textrm{T},...,a_m^\textrm{T}]^\textrm{T}$, where $a_i \in \mathbb{R}^n$ is the $i^{\text{th}}$ column of $A \in \mathbb{R}^{n\times m}$.
For a symmetric matrix $P=P^\textrm{T} \in \mathbb{R}^{m\times m},$ $\text{vecs}(P)=\left[p_{11},2p_{12},...,2p_{1m},p_{22},2p_{23},...,2p_{m-1,m},p_{mm}\right]^\textrm{T}\in \mathbb{R}^{\frac{1}{2}m(m+1)}$.
$P\succ(\succeq)0$ and $P\prec(\preceq)0$ denote the matrix $P$ is positive definite (semidefinite) and negative definite (semidefinite), respectively. For a column vector $v\in \mathbb{R}^n$, ${\rm vecv}(v)$$=$$[v_1^2,v_1 v_2,\cdots,v_1v_n,v_2^2,v_2v_3,\cdots,v_{n-1}v_n,v_n^2]^T\in\mathbb{R}^{\frac{1}{2}n(n+1)}$. For a matrix $A\in \mathbb{R}^{n\times n}$, $\sigma(A)$ denotes the spectrum of $A$. For any $\lambda \in \sigma(A)$, $\text{Re}(\lambda)$ represents the real part of the eigenvalue $\lambda$.
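For concreteness, the ${\rm vec}(\cdot)$, ${\rm vecs}(\cdot)$ and ${\rm vecv}(\cdot)$ operators can be implemented as in the following minimal Python/NumPy sketch, which we refer to in later illustrations.
\begin{verbatim}
import numpy as np

def vec(A):
    # Stack the columns of A into one vector.
    return np.asarray(A).reshape(-1, order='F')

def vecs(P):
    # Half-vectorization of symmetric P, off-diagonals doubled.
    m = P.shape[0]
    return np.array([P[i, j] if i == j else 2.0 * P[i, j]
                     for i in range(m) for j in range(i, m)])

def vecv(v):
    # Quadratic basis [v1^2, v1 v2, ..., vn^2] of a vector v.
    n = len(v)
    return np.array([v[i] * v[j]
                     for i in range(n) for j in range(i, n)])
\end{verbatim}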
\section{Objective and Methodology}
The objective of this paper is to study and analyze the docking scenario of a deputy satellite with a chief satellite such that the process is completed with guaranteed stability, while maintaining the asymptotic tracking of the deputy to the chief. Moreover, the method should not rely on prior knowledge of the physics of the system. Therefore, the learning is carried out via adaptive optimal output regulation, in which the optimal control policy is learned using ADP.
To begin with, consider the following continuous-time linear system described by
\begin{align}\label{eq: exosystem}
\dot{v}&=Ev,\\\label{eq: x-system}
\dot{x}&=Ax+Bu+Dv\\\label{eq: e system}
e&=Cx+Fv,
\end{align}
where the vector $x\in\mathbb{R}^{n}$ is the state, $u\in\mathbb{R}^{m}$ is the control input, and $v\in\mathbb{R}^q$ stands for the exostate of an autonomous system \eqref{eq: exosystem}. The vector $e\in\mathbb{R}^{p}$ represents the output tracking error. The matrices $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $D\in\mathbb{R}^{n\times q}$, $C\in\mathbb{R}^{p\times n}$, and $F\in\mathbb{R}^{p\times q}$ are real matrices with the pair $(A,B)$ are assumed to be unknown.
In our case, the relative motion between a 'Chief' and a 'Deputy' which are in close vicinity of each other is defined by the Clohessy-Wiltshire (CW) equations \cite{schaub2003analytical}. Relative accelerations in Cartesian coordinates are given by:
\begin{align}\label{eq: CW 1}
\Ddot{\textbf{x}}-2\bar n\dot{\textbf{y}}-3\bar{n}^2 \textbf{x}&=0,\\\label{eq: CW 2}
\ddot{\textbf{y}}+2\bar{n}\dot{\textbf{x}}&=0,\\\label{eq: CW 3}
\ddot{\textbf{z}}+\bar{n}^2\textbf{z}&=0,
\end{align}
where $(\textbf{x},\textbf{y},\textbf{z})$ represent the relative position of the two satellites in the orthogonal Cartesian coordinate system and $\bar n$ is the mean orbital rate. The vector components are taken in the rotating chief Hill frame. The advantage of using Hill frame coordinates is that the physical relative orbit dimensions are immediately apparent from these coordinates. The $(\textbf{x},\textbf{y})$ coordinates define the relative orbit motion in the chief orbit plane. The $\textbf{z}$ coordinate defines any motion out of the chief orbit plane. The following assumptions are made so that \eqref{eq: CW 1}-\eqref{eq: CW 3} hold.
\begin{assumption} The relative distance between the chief and the deputy is much smaller than the orbit radius $r$.
\end{assumption}
\begin{assumption}
The relative orbit is assumed to be circular.
\end{assumption}
\begin{figure}
\centering
\includegraphics{figures/diagram.png}
\caption{The framework of the learning-based adaptive optimal output regulation}
\label{fig: actor critic diagram}
\end{figure}
To begin with, we define the state space vector $x$ as $x=\left[\textbf{x},~\textbf{y},~ \textbf{z},~\dot{\textbf{x}},~\dot{\textbf{y}},~ \dot{\textbf{z}}\right]^\textrm{T}$, and the control input vector as $u=\left[T_{1},~T_{2},~T_{3}\right]^\textrm{T}$, such that the thrusters are available to control the deputy in any of the three directions. We can then write Eqs.~\eqref{eq: CW 1}-\eqref{eq: CW 3} in the form of \eqref{eq: exosystem}-\eqref{eq: e system}.
Some general assumptions are also considered when solving the output regulation as follows.
\begin{assumption}\label{assumption: controllability}
The pairs ($A$,$B$) and ($C$,$A$) are stabilizable and observable, respectively.
\end{assumption}
\begin{assumption}\label{assumption: rank}
{\rm rank}$\left(\begin{bmatrix}
A-\lambda I_n & B\\C&0
\end{bmatrix}\right) = n +p,\,\forall\lambda \in \sigma (E)$.
\end{assumption}
In order to solve the optimal output regulation problem, {two optimization} problems need to be addressed. {The static optimization Problem \ref{Problem 1} is solved in order to find the optimal solution $(X^\star,U^\star)$ to the regulator equations \eqref{regeq1}-\eqref{regeq2}. While the dynamic optimization problem described in Problem \ref{Problem 2} is solved to find the optimal feedback control policy.} Both problems are stated as follows:
\begin{problem}\label{Problem 1}
\begin{align}
\min_{(X,U)}{\rm Tr}&(X^{\rm T}\bar{Q}X+U^{\rm T}\bar{R}U), \label{min_Tr}\\
{\rm subject~ to~~ } XE&=AX+BU+D,\label{regeq1}\\
0&=CX+F,\label{regeq2}
\end{align}
where $\bar Q=\left(\bar Q\right)^{\rm T}\succ0$ and $\bar R=\left(\bar R\right)^{\rm T}\succ0$.
\end{problem}
\nocite{ABDULLAH20195541}
Based on Assumption \ref{assumption: rank}, the solvability of the regulator equations defined by \eqref{regeq1}-\eqref{regeq2} is guaranteed, and the pair $(X,U)$ exists for any matrices $D$ and $F$; see \cite{huang2004nonlinear}.
Additionally, the solution to Problem \ref{Problem 1}, i.e., $(X^\star,U^\star)$ is unique, which will guarantee that the feedforward control policy obtained using $(X^\star,U^\star)$ is also unique and optimal.
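When the system matrices are available (e.g. for validation), the regulator equations \eqref{regeq1}-\eqref{regeq2} can be solved by vectorization, using the identity ${\rm vec}(AXB)=(B^{\rm T}\otimes A){\rm vec}(X)$. The sketch below returns a least-squares solution; it does not by itself enforce the trace criterion \eqref{min_Tr} when the solution set is not a singleton.
\begin{verbatim}
import numpy as np

def solve_regulator(A, B, C, D, F, E):
    n, m = A.shape[0], B.shape[1]
    q, p = E.shape[0], C.shape[0]
    In, Iq = np.eye(n), np.eye(q)
    # vec(XE) - vec(AX) - vec(BU) = vec(D),  vec(CX) = -vec(F)
    top = np.hstack([np.kron(E.T, In) - np.kron(Iq, A),
                     -np.kron(Iq, B)])
    bot = np.hstack([np.kron(Iq, C), np.zeros((p * q, m * q))])
    M = np.vstack([top, bot])
    rhs = np.concatenate([D.reshape(-1, order='F'),
                          -F.reshape(-1, order='F')])
    sol = np.linalg.lstsq(M, rhs, rcond=None)[0]
    X = sol[:n * q].reshape(n, q, order='F')
    U = sol[n * q:].reshape(m, q, order='F')
    return X, U
\end{verbatim}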
\begin{problem}\label{Problem 2}
\begin{align}
\min_{\bar{u}_i} \int_{0}^{\infty} &\left(\bar{x}^{\rm T}Q\bar{x}+\bar{u}^{\rm T}{R}\bar{u}\right) ~{\rm d}t, \\\label{eq: error system 1}
{\rm~subject~ to~~ } \dot{\bar{x}}&=A\bar{x}+B\bar{u},\\\label{eq: error system 2}
e&=C\bar{x},
\end{align}
where $Q=\left(Q\right)^{\rm T} \succeq 0, R =\left(R\right)^{\rm T} \succ0,$ with $\left(A,\sqrt{Q}\right)$ being observable. The equations \eqref{eq: error system 1}-\eqref{eq: error system 2} form the error system with $\bar{x}:=x-Xv$ and $\bar{u}:=u-Uv$.
\end{problem}
Note that if the deputy's dynamics in \eqref{eq: x-system} are perfectly known, one can develop the optimal controller in the following form.
\begin{align}\label{eq: ui*}
u^\star(K^\star,L^\star)=-K^\star x+L^\star v,
\end{align} where $K^\star=R^{-1}B^\textrm{T}P^\star$, and $P^\star$ is the unique solution of the following algebraic Riccati equation (ARE)
\begin{align}
A^\textrm{T}P^\star+P^\star A+Q-P^\star BR^{-1}B^\textrm{T}P^\star=0.\label{eq: ARE}
\end{align}
The solutions to the regulator equations \eqref{regeq1}-\eqref{regeq2}, i.e., $(X,U)$, form the optimal feedforward gain matrix such that \begin{align}\label{eq: Li*}
L^\star&=U+K^\star X.
\end{align}
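For reference, when $(A,B)$ are known, the gains \eqref{eq: ui*}-\eqref{eq: Li*} can be computed directly, e.g. via SciPy's continuous-time ARE solver; in this paper such a computation is used only to validate the model-free iterations below.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

def optimal_gains(A, B, Q, R, X, U):
    # K* = R^{-1} B' P*,  L* = U + K* X
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K, U + K @ X, P
\end{verbatim}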
It is remarkable that equation \eqref{eq: ARE} is nonlinear in $P^\star$. Therefore, various iterative methods have been considered to solve the ARE, including policy iteration (PI) and value iteration (VI). The following lemma shows the convergence of \eqref{eq: ARE} in the sense of the PI method.
\begin{lemma}[\hspace{-0.2pt}\cite{Kleiman1968}]
Let $K_{0} \in \mathbb{R}^{m \times n}$ be a stabilizing feedback gain matrix, the matrix $P_{k}=(P_{k})^{\rm T}\succ0$ be the solution of the following equation
\begin{align}\label{eq: Policy evaluation}
P_{k}(A-BK_{k-1})+(A-&BK_{k-1})^{\rm T} P_{k} +Q+K_{k-1}^{\rm T} R K_{k-1}=0,
\end{align}
and the control gain matrix $K_{k}$, with $k=1,2,\cdots,$ are defined recursively by
\begin{align}\label{eq: Policy improvement}
K_{k}=R^{-1} B^\textrm{T} P_{k-1}.
\end{align}
Then the following properties hold for any $k\in \mathbb{Z}_{+}$.
\begin{enumerate}
\item The matrix $A-BK_{k}$ is Hurwitz.
\item $P^\star \preceq P_{k} \preceq P_{k-1}$.
\item $\underset{{k\rightarrow \infty}}\lim K_{k} = K^\star ,\; \underset{{k\rightarrow \infty}}\lim P_{k}=P^\star $.
\end{enumerate}
\end{lemma}
It is notable that an initial stabilizing control policy is required to initiate the learning process of PI. In this paper we consider an iterative reinforcement learning method based on VI to solve for $P^\star$, wherein the solvability of the VI is considered under the ADP scheme. We consider the use of VI since no initial stabilizing control policy is required to initiate the learning process. This gives VI an advantage over PI, since the prior knowledge of an initial stabilizing control policy is a stringent requirement and may be impossible to obtain, especially when the system dynamics are not available or are not known perfectly. The iterative process of VI to find the optimal control policy is done by repeating the value update step until the value function converges to its optimal value. In the following sections, we show in further detail the use of VI to solve our problem.
\section{Model-Based Value Iteration}
Throughout this section, the value iteration is used such that the value matrix is iteratively updated until the value matrix converges within a predefined condition.
To begin with, $\{B_r\}_{r=0}^{\infty}$ is defined as a collection of bounded sets with nonempty interiors, which satisfies
\begin{align*}
B_r \subset B_{r+1} \in \mathcal{J}_+^n , \; r\in \mathbb{Z}_{+},\; \lim_{r \rightarrow \infty} B_r = \mathcal{J}^{n}_{+},\end{align*} and $\varepsilon>0$ is a small constant selected as a threshold. In addition, select a deterministic sequence $\{\epsilon_{k}\}_{k=0}^{\infty}$ such that the following conditions are satisfied: \begin{align}\label{eq: ek condition}\epsilon_k>0,\;\;\sum_{k=0}^{\infty}\epsilon_k=\infty,\;\;\lim_{k \rightarrow 0}\epsilon_k = 0.\end{align}
As mentioned earlier, the VI is different from the policy iteration, described by \eqref{eq: Policy evaluation}-\eqref{eq: Policy improvement} in the sense that an initial stabilizing control policy is not required. Instead, the learning process is initiated with an arbitrary value matrix $P_{0}=(P_{0})^\textrm{T}\succ 0$. In the following, the model-based VI algorithm is given, in which the system matrices $(A,B)$ are used to learn the optimal control policy, based on the results in \cite{Bian2016}.
\begin{algorithm}
\begin{algorithmic}[1]
\caption{Model-based Value Iteration}
\State Select a small constant
$\varepsilon>0$, and
{$P_{0}=(P_{0})^\textrm{T}\succ0$}.
\State $k,r \gets 0$.
\Repeat
\State $\Tilde{P}_{k+1}\gets P_{k}+\epsilon_k (P_{k}A+A^{\textrm{T}}P_{k}+ Q -P_{k} B R^{-1} B^{\textrm{T}} P_{k})$
\If{$\Tilde{P}_{k+1}\notin B_{r}$}
{$P_{k+1}\leftarrow P_{0},~ r\leftarrow r+1$}.
\Else
{ $P_{k+1}\gets\Tilde{P}_{k+1}$}
\EndIf {\textbf{endif}}
\State $k\gets k+1$
\Until $|\Tilde{P}_{k}-P_{k-1}|/\epsilon_{k-1}<\varepsilon$
\State $k^\star \gets k$
\State Find the pair $(X,U)$ from \eqref{regeq1}-\eqref{regeq2}.
\State $L_{k^\star}\gets U + K_{k^\star}X$
\State Obtain the optimal controller {using}
$ u^\star=-K_{k^\star}x+L_{k^\star}v.$
\label{model-based VI Algorithm}
\end{algorithmic}
\end{algorithm}
\begin{remark}
It is noteworthy to mention that if the bound of $P^\star$ is known in prior, i.e., $|P^\star|<\gamma$, then one can fix $B_r$ to $B_r=\gamma$.
\end{remark}
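A minimal Python sketch of Algorithm \ref{model-based VI Algorithm} is given below, with the step sizes $\epsilon_k=1/k$ and the expanding bounds $B_r$ chosen as in the simulations of this paper; these particular choices are illustrative.
\begin{verbatim}
import numpy as np

def model_based_vi(A, B, Q, R, P0, eps_tol=1e-6,
                   k_max=100000, gamma0=10.0):
    # P0 = P0' > 0 need not be stabilizing; B_r = gamma0*(r+1).
    P, r = P0.copy(), 0
    for k in range(1, k_max + 1):
        ek = 1.0 / k
        G = np.linalg.solve(R, B.T @ P)          # R^{-1} B' P
        P_new = P + ek * (P @ A + A.T @ P + Q - P @ B @ G)
        if np.linalg.norm(P_new) > gamma0 * (r + 1):
            P, r = P0.copy(), r + 1              # left B_r: restart
            continue
        if np.linalg.norm(P_new - P) / ek < eps_tol:
            P = P_new
            break
        P = P_new
    K = np.linalg.solve(R, B.T @ P)
    return P, K
\end{verbatim}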
\section{Data-Driven Value Iteration for Output Regulation Problem}
From the previous section, it is notable that the model-based VI requires the full knowledge of the system matrices $(A,B)$. In practice, obtaining these matrices may not be easy when considering higher order and more complex systems. In this section, we consider a data-driven VI method in which the optimal control policy is obtained without relying on the dynamics or the physics of the system, but the data (state/input information) collected along the trajectories of the underlying dynamical system are used to learn an approximated optimal control policy.
Considering the $x$-system in \eqref{eq: x-system}, define $\bar{x}_{j}=x - X_{j}v$ for $0\leq j\leq h +1$, where $X_{0}=0_{n \times q}$ and $X_{1}\in \mathbb{R}^{n \times q}$ is chosen so that $C X_{1}+F=0$. The matrices $X_{j}$ for $2\leq j\leq h +1$, where $h = (n - p)q$ is the dimension of the null space of $I_{q}\otimes C$, are selected such that the vectors $\text{vec}(X_{j})$ form a basis for $\text{ker}(I_{q}\otimes C)$. With the above definitions along with \eqref{eq: exosystem}--\eqref{eq: x-system}, the following differential equation is then obtained.
\begin{align} \label{xbar equation for VI}
\dot{\bar x}_{j}&= A x + B u +(D-X_{j}E)v\\
\label{xbar equation for PI}
&{ = A_{k} \bar{x}_{j} +B (K_{k} \bar x_{j}+u)+(D - S (X_{j}))v}
\end{align}
where the Sylvester map $S:\mathbb{R}^{n \times q}\rightarrow \mathbb{R}^{n\times q}$ satisfies $S(X)=XE-AX$, $\forall$ $X\in \mathbb{R}^{n\times q}$, and $A_{k}=A - B K_{k}$.
For any two vectors {$a(t)\in\mathbb{R}^{\boldsymbol{n}},b(t)\in\mathbb{R}^{\boldsymbol{m}}$}, and a sufficiently large $\rho\in\mathbb{Z}_+$, the following matrices are defined.
\begin{align}
\delta_b&=\left[\text{vecv}({ b})|_{t_0}^{t_1},\text{vecv}({ b})|_{t_1}^{t_2} \cdots,\text{vecv}({ b})|_{t_{\rho -1}}^{t_\rho}\right]^\textrm{T}{\in\mathbb{R}^{\rho\times {\boldsymbol{m}(\boldsymbol{m}+1)}/{2}}},\nonumber\\
\Gamma_{a,b}&=\begin{bmatrix}\int_{t_0}^{t_1}a\otimes b\;\textrm{d}\tau,\int_{t_1}^{t_2}a\otimes b\;\textrm{d}\tau,\cdots,\int_{t_{\rho-1}}^{t_\rho}a\otimes b\;\textrm{d}\tau\end{bmatrix}^\textrm{T}
{\in\mathbb{R}^{\rho\times \boldsymbol{n}\boldsymbol{m}}}.\nonumber
\\
{\mathbb{I}_{a}}&{ =\begin{bmatrix}\int_{t_0}^{t_1}\text{vecv}(a)\textrm{d}\tau,\int_{t_1}^{t_2}\text{vecv}(a)\textrm{d}\tau \cdots,\int_{t_{\rho -1}}^{t_\rho}\text{vecv}(a)\textrm{d}\tau\end{bmatrix}^\textrm{T}\nonumber}{{\in\mathbb{R}^{\rho\times {\boldsymbol{m}(\boldsymbol{m}+1)}/{2}}}.\nonumber}
\end{align}
Consider the Lyapunov candidate $V_{k}(\bar x_{j})=\bar x_{j}^\textrm{T} P_{k} \bar x_{j}$, where $k\in \mathbb{Z}_{+}$. By taking the time derivative of $V_{k}(\bar x_{j})$ along with
\eqref{xbar equation for PI}, with some mathematical manipulations and rearrangements, one obtains the following
\begin{align
\dot V_k(\bar x_{j})&=\dot{ \bar x}_{j}^\textrm{T}P_{k}\bar x_{j}+\bar x_{j}^\textrm{T}P_{k}\dot{ \bar x}_{j}\nonumber\\
\label{VI Lyapunov}
&=\bar x_{j}^\textrm{T}(H_{k})\bar x_{j}+2u^\textrm{T}RK_{k+1}\bar x_{j}+2v^\textrm{T}(D - S(X_{j}))^\textrm{T}P_{k}\bar x_{j}
\end{align}
where $H_{k}=A^\textrm{T}P_{k}+P_{k}A$.\\
By taking the integral of \eqref{VI Lyapunov} over $[t_0, t_s]$, {where $\left\{t_l\right\}_{l=0}^{s}$ with $t_l=t_{l-1}+\Delta t,$ $\Delta t>0$ is a strictly increasing sequence}, the result can be written in the following Kronecker product representation.
\begin{align}\label{VI data-driven}
&{\Theta_{j}}\begin{bmatrix}
{\text{vecs}}(H_{k})\\\text{vec}(K_{k+1})\\\text{vec}((D - S(X_{j}))^{\textrm{T}}P_{k})
\end{bmatrix}=\delta_{\bar x_{j} , \bar x_{j}}\text{vecs} (P_{k})\hspace{-0.2em}
\end{align}
where ${\Theta_{j}} = \begin{bmatrix}
{\mathbb{I}_{\bar{x}_{j}}}
,& 2\Gamma_{\bar x_{j},{u}}(I_{n} \otimes R),&2\Gamma_{\bar x_{j},{v}}
\end{bmatrix} $. If $\Theta_j$ is full column rank, the solution of \eqref{VI data-driven} is obtained in the sense of least square error by using the pseudo-inverse of $\Theta_j$, i.e., $\Theta_j^\dagger=\left(\Theta_j^\textrm{T}\Theta_j\right)^{-1}\Theta_j^\textrm{T}$. The full column rank condition of $\Theta_j$ is satisfied by the following lemma.
\begin{lemma}\label{rank lemma}
For all $j\in \mathbb{Z}_{+}$, if there exist a $s'\in \mathbb{Z}_{+}$ such that for all $s>s'$ the following rank condition is satisfied
\begin{align}\label{eq: rank}
{\rm rank}\left(\begin{bmatrix}
\mathbb{I}_{\bar x_{j}}, \Gamma_{\bar x_{j} ,u}, \Gamma_{\bar x_{j},v}
\end{bmatrix}\right)=\frac{n (n+1)}{2}+(m+q)n
\end{align} for any increasing sequence $\{t_l\}_{l=0}^{s}$, $t_l=t_{l-1}+\Delta t,$ $\Delta t>0$, then the matrix {$\Theta_{j}$} has full column rank, $\forall\;k\in \mathbb{Z}_{+}.$
\end{lemma}
Lemma \ref{rank lemma} shows that if \eqref{eq: rank} is satisfied, the existence and uniqueness of the solution of \eqref{VI data-driven} is guaranteed, {where the solution can be obtained using the pseudo-inverse of {$\Theta_{j}$}.}
{\begin{remark}
The matrix $\Theta_{j}$ is fixed for all $k\in\mathbb{Z}_+$ and does not require to be updated at each iteration $k$.
\end{remark}}
To this end the value matrix is updated using stochastic approximation by
\begin{align*}
{P}_{k+1}\gets P_{k} + \epsilon_k(H_{k} + Q -(K_{k+1})^\textrm{T}RK_{k+1})
\end{align*} where $\epsilon_k$ satisfies \eqref{eq: ek condition}, until the condition $|P_{k}-P_{k-1}|/\epsilon_k\leq{\varepsilon}$ is satisfied, where ${\varepsilon}>0$ is a small threshold. By that, it is guaranteed that the obtained control policy is close enough to the actual optimal one.
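In an implementation, the data matrices $\mathbb{I}_{\bar x_{j}}$, $\Gamma_{\bar x_{j},u}$, $\Gamma_{\bar x_{j},v}$ and $\delta_{\bar x_{j},\bar x_{j}}$ can be assembled from sampled trajectories as in the following sketch, using trapezoidal integration over consecutive windows; \texttt{vecv} is the helper sketched in the notation section, and uniform sampling of the trajectories is assumed.
\begin{verbatim}
import numpy as np

def data_matrices(t, xbar, u, v, n_windows):
    # Rows of xbar, u, v are samples on the time grid t.
    idx = np.linspace(0, len(t) - 1, n_windows + 1).astype(int)
    I_x, G_xu, G_xv, d_xx = [], [], [], []
    for a, b in zip(idx[:-1], idx[1:]):
        ts = t[a:b + 1]
        vx = np.array([vecv(xb) for xb in xbar[a:b + 1]])
        ku = np.array([np.kron(xb, us) for xb, us
                       in zip(xbar[a:b + 1], u[a:b + 1])])
        kv = np.array([np.kron(xb, vs) for xb, vs
                       in zip(xbar[a:b + 1], v[a:b + 1])])
        I_x.append(np.trapz(vx, ts, axis=0))
        G_xu.append(np.trapz(ku, ts, axis=0))
        G_xv.append(np.trapz(kv, ts, axis=0))
        d_xx.append(vecv(xbar[b]) - vecv(xbar[a]))
    return tuple(np.asarray(M) for M in (I_x, G_xu, G_xv, d_xx))
\end{verbatim}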
The data-driven VI algorithm can now be introduced. It is presented in Algorithm \ref{Algorithm:HI Data-driven}.
\begin{algorithm}
\caption{Data-Driven Value Iteration for Optimal Output Regulation}
\begin{algorithmic}[1]
\State Choose a small threshold constant ${\varepsilon}>0$ and {$P_{0}=(P_{0})^\textrm{T} \succ 0$}.
\State Compute {the} matrices $X_{0}, X_{1},\cdots, X_{h+1}$.
\State Choose an arbitrary $K_{0}$, not necessarily stabilizing, and employ $u_{0}=-K_{0}x+\eta$, with $\eta$ being an exploration noise over $[t_0,t_s]$.
\State
$j\gets 0.$
\Repeat
\State
Compute $\mathbb{I}_{\bar{x}_{j}}\text{, }\Gamma_{\bar x_{j} u} {\text{, and }} \Gamma_{\bar x_{j} v}$ {while} satisfying \eqref{eq: rank}.
\State
$j\gets j+1.$
\Until $j=h+2$
\State $k\gets 0$, $j\gets 0$, $r\gets 0$.
\Repeat
\State Solve $H_{k}$ and $K_{k+1}$ from \eqref{VI data-driven}.
\State $\Tilde{P}_{k+1}\gets P_{k} + \epsilon_k(H_{k} + Q -(K_{k+1})^\textrm{T}RK_{k+1})$
\If{$\Tilde{P}{_{k+1}}\notin B_r$}
{$P_{k+1}\gets P_{0}$, $r\gets r+1$.}\Else
{ $P_{k+1}\gets \Tilde{P}_{k+1}$}
\EndIf \textbf{end if}
\State $k\gets k+1$
\Until $|P_{k}-P_{k-1}|/\epsilon_{k-1}<{\varepsilon}$
\State $k\gets k^*, j\gets 1$.
\Repeat
\State From \eqref{VI data-driven}, solve $S({X}_{j})$.
\State $j\gets j+1.$
\Until $j=h +2$
\State From Problem \ref{Problem 1}, find $(X^\star,U^\star)$ using online data.
\State $L_{k^\star}\gets U^\star + K_{k^\star}X^\star$
\State Obtain the suboptimal controller {using}
\begin{align}\label{eq: suboptimal controller}
u^\star&=-K_{k^\star}x+L_{k^\star}v.
\end{align}
\end{algorithmic}
\label{Algorithm:HI Data-driven}
\end{algorithm}
If \eqref{eq: rank} is satisfied, it is guaranteed that the sequences $\{{P_{k}}\}_{k=0}^{\infty}$ and $\{{K_{k}}\}_{k=1}^{\infty}$ learned by Algorithm \ref{Algorithm:HI Data-driven} converge respectively to $P^\star$ and $K^\star$. It is worth mentioning that the proposed VI Algorithm \ref{Algorithm:HI Data-driven} is an off-policy learning algorithm. Since the value function in VI is increasing, the increasing sequence of $\{P_k\}_{k=0}^\infty$ will not affect the trajectories of the system during the learning period.
{\begin{remark}
An exploration noise is added to the input of the system \eqref{eq: x-system}--\eqref{eq: e system} during the learning process of Algorithm \ref{Algorithm:HI Data-driven}. Such an input is chosen to satisfy the rank condition \eqref{eq: rank}---which is similar to the condition of persistent excitation. The noise selected can be a random noise or a summation of sinusoidal signals with distinct frequencies, see \cite{sutton2018reinforcement,Bian2016,jiang2012computational} and references therein.
\end{remark}}
\section{Implementation}
In this section, the simulation of the autonomous satellite docking is implemented using MATLAB, and the results are presented.
The system to be considered is assumed to be under the J2 oblateness perturbation, which is modeled as a disturbance injected into the system. The dynamics considered in this paper are based on the work done in \cite{schweighart2001development}. In this work, we consider the disturbances to be generated by the exosystem. The system is described in the following form:
\begin{align}\label{eq: CW1 J2}
\Ddot{\textbf{x}}-2\bar{n}c\dot{\textbf{y}}-(5c^2-2)\bar{n}^2\textbf{x}&=-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}(\frac{1}{2}-\frac{3\sin^2{(i)} \sin^2{(\bar{n}ct)} }{2}-\frac{1+3\cos{(2i)}}{8}),\\
\ddot{\textbf{y}}+2\bar{n}\dot{\textbf{x}}&=-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}\sin^2{(i)}\sin^2{(\bar{n}ct)}\cos{(\bar{n}ct)},\\\label{eq: CW3 J2}
\ddot{\textbf{z}}+\bar{n}^2\textbf{z}&=-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}\sin{(i)}\sin{(\bar{n}ct)}\cos{(i)},
\end{align}where $\bar{n}$ is the mean orbital rate, $R_e$ is the radius of the earth, $r_{\rm ref}$ is the position of the reference orbit, $t$ is the time, and $i$ is the inclination of the reference orbit.
The system in \eqref{eq: CW1 J2}-\eqref{eq: CW3 J2} can be reformulated and described in the following form: \begin{align}\label{eq: xddot}
\Ddot{\textbf{x}}-2\bar{n}c\dot{\textbf{y}}-(5c^2-2)\bar{n}^2\textbf{x}&=-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}\mathcal{Q}_1,\\\label{eq: yddot}
\ddot{\textbf{y}}+2\bar{n}\dot{\textbf{x}}&=-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}\mathcal{Q}_2,\\\label{eq: zddot}
\ddot{\textbf{z}}+\bar{n}^2\textbf{z}&=-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}\mathcal{Q}_3,
\end{align}
where $c\equiv\sqrt{1+\textbf{s}}$ with $\textbf{s}=\frac{3J_2R_e^2}{8{r_{\rm ref}}^2}(1+3\cos{2i})$, $\mathcal{Q}_1$, $\mathcal{Q}_2$ and $\mathcal{Q}_3$ are disturbances generated by the exosystem with $|\mathcal{Q}_1|\leq 1$, $|\mathcal{Q}_2|\leq 1$ and $|\mathcal{Q}_3|\leq 1$. Therefore, the above equations can be transformed in the form of \eqref{eq: exosystem}-\eqref{eq: e system} by assuming sinusoidal signals are generated by the exosystem \eqref{eq: exosystem} in addition to the tracking signals.
{Based on \eqref{eq: xddot}--\eqref{eq: zddot}, the system matrices can be found as follows:
\begin{align}
A=\begin{bmatrix}
0&0&0&1&0&0\\
0&0&0&0&1&0\\
0&0&0&0&0&1\\
(5c^2-2)\bar{n}^2&0&0&0&2\bar{n}c&0\\
0&0&0&-2\bar{n}&0&0\\
0&0&-\Bar{n}^2&0&0&0
\end{bmatrix},~B=\begin{bmatrix}
0&0&0\\
0&0&0\\
0&0&0\\
1&0&0\\
0&1&0\\
0&0&1
\end{bmatrix}
\end{align}
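In code, these matrices can be assembled as in the following Python sketch; note that the sign of the $\bar{n}^{2}$ entry in the last row follows \eqref{eq: zddot}.
\begin{verbatim}
import numpy as np

def cw_matrices(n_bar, c):
    # State x = [x, y, z, xdot, ydot, zdot]; thrust on all axes.
    A = np.zeros((6, 6))
    A[0:3, 3:6] = np.eye(3)
    A[3, 0] = (5 * c**2 - 2) * n_bar**2
    A[3, 4] = 2 * n_bar * c
    A[4, 3] = -2 * n_bar
    A[5, 2] = -n_bar**2        # from zddot = -n_bar^2 z + input
    B = np.vstack([np.zeros((3, 3)), np.eye(3)])
    return A, B
\end{verbatim}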
The performed simulations are summarized in the following steps:
\begin{enumerate}
\item An essentially bounded input is applied to the deputy satellite along with a non-stabilizing control policy.
\item State, input and exosystem information are collected along the trajectories of the system described in \eqref{eq: exosystem}-\eqref{eq: x-system} for the time interval $[0,10](s)$.
\item The optimal control problem is addressed by solving Problem \ref{Problem 2}, yielding the optimal state-feedback gain matrix.
\item The output regulation problem is addressed by solving Problem \ref{Problem 1}, yielding the solution to the regulator equations.
\item The adaptive optimal output regulation is achieved by applying the optimal feedback-feedforward control policy with the gain matrix given in \eqref{eq: Li*}.
\end{enumerate}
The proposed approach shown in Algorithm \ref{Algorithm:HI Data-driven} is used to learn the optimal feedback-feedforward control policy to regulate the relative positions. In addition, instead of using the modelling information of the system, we use the online collected data to learn the optimal control policy, which removes the stringent requirement of knowing the exact physics of the studied system. The data collection and learning are set to take place in the interval $[0,25] (s)$. Last but not least, besides achieving the asymptotic tracking of the exosystem signals, we are also able to achieve rejection of a class of disturbances generated by the exosystem, while minimizing a predefined cost function.
{For simulation purposes, we assume the reference signal and the disturbances are generated by the exosystem with the matrix $E$ defined as follows
\begin{align}
E&=\textrm{bdiag}\left(\begin{bmatrix}
0&0.1\\-0.1&0\end{bmatrix},\begin{bmatrix}
0&0.2\\-0.2&0\end{bmatrix},\begin{bmatrix}
0&0.3\\-0.3&0\end{bmatrix},
\begin{bmatrix}
0&0.4\\-0.4&0\end{bmatrix}\right)
\end{align}
The rest of the matrices are shown below, where $Dv$ represents the disturbances applied to the system, and $-Fv$ is the tracking signal.
\begin{align}
C&=\begin{bmatrix}
1&0&0&0&0&0\\
0&1&0&0&0&0\\
0&0&1&0&0&0
\end{bmatrix},~F=\begin{bmatrix}
1&0&0&0&0&0&0&0\\
0&1&0&0&0&0&0&0\\
0&0&1&0&0&0&0&0
\end{bmatrix},\\
D&=\begin{bmatrix}
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0\\
0&0&-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}&0&0&-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}&0&0\\
0&0&0&0&-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}&0&0&0\\
-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}&0&0&0&0&0&-3\bar{n}^2J_{2}\frac{R_{e}^2}{r_{\rm ref}}&0
\end{bmatrix}
\end{align}
}}
The cost function matrices are considered to be $Q=0.05I_6$ and $R=10I_3$, with $B_r=10(r+1)$ and $\epsilon_k=\frac{1}{k}$. The CW parameters are chosen to be the same as those used in \cite{schweighart2001development}, where $r_{\textrm{ref}}=7000~km$ and $\Bar{n}=0.00108~1/s$. The distance of the chief from the center of the earth is chosen to be 6776 km (Low-Earth Orbit).
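The simulation constants above translate into code as follows (a sketch; units as quoted in the text, with $r_{\rm ref}$ kept in km).
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

n_bar, r_ref = 0.00108, 7000.0        # 1/s and km
Q, R = 0.05 * np.eye(6), 10.0 * np.eye(3)
E = block_diag(*[np.array([[0.0, w], [-w, 0.0]])
                 for w in (0.1, 0.2, 0.3, 0.4)])
eps_k = lambda k: 1.0 / k             # step sizes
B_r = lambda r: 10.0 * (r + 1)        # expanding bounds
\end{verbatim}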
\section{Results}
For simulation purposes, we assume that the chief is moving in space in all $x$, $y$, and $z$ directions. In addition, the velocity in each direction is different. Using Algorithm \ref{Algorithm:HI Data-driven}, the results are obtained and depicted in Figs. \ref{fig: errorP}-\ref{fig: position}. In the following, the actual and the learned feedback and feedforward control gain matrices are shown.
\begin{align*}
L^{(35)}&=\begin{bmatrix}
-2.1645 & -4.0389 & 0& 0& 0.0005 & 0 & 0 & 0\\
4.0387 & -2.1644 & 0.0061 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0.8375 & -8.0807 & 0 & 0 & 0.003 &0
\end{bmatrix} \\
L^{\star}&=\begin{bmatrix}
-2.1644 & -4.0387 & 0& 0& 0 & 0 & 0 & 0\\
4.0387 & -2.1644 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0.8377 & -8.0807 & 0 & 0 & 0 &0
\end{bmatrix}
\end{align*}
It is noticed that the learned control gain matrices are close enough to the actual optimal ones. In addition, one can see from Figure \ref{fig: position} that the position of the deputy follows the position of the chief and the error converges to zero, which confirms the completion of the docking procedure. Figure \ref{fig: errorP} illustrates the convergence of the learned value matrix to the optimal one. It is noted that the convergence of the value matrix is achieved after 5512 iterations. The large number of iterations incurred by the VI is due to its sublinear convergence rate \cite{Bian2016}. However, the VI has a lower computational complexity compared to the PI \cite{jiang2012computational}, which converges at a quadratic rate but requires an initial stabilizing control policy.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{figures/errorP3-eps-converted-to.pdf}
\caption{The norm of difference between the learned value matrix $P_k$ and $P^\star$ at each iteration $k$ under value iteration Algorithm \ref{Algorithm:HI Data-driven}}
\label{fig: errorP}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{figures/relative_position2-eps-converted-to.pdf}
\caption{The relative positions $(\textbf{x},\textbf{y},\textbf{z})$ using the optimal control policy learned by Algorithm \ref{Algorithm:HI Data-driven}}
\label{fig: relative position}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/position_xy2-eps-converted-to.pdf}
\caption{y-position vs. x-position while fixing the z-position using the optimal control policy learned by Algorithm \ref{Algorithm:HI Data-driven}}
\label{fig: position xy}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{figures/position2-eps-converted-to.pdf}
\caption{Deputy's position (x,y,z) and Chief's position $\text{(x}_\text{ref},\text{y}_\text{ref},\text{z}_\text{ref})$}
\label{fig: position}
\end{figure}
\section{Conclusion}
This work considers the control of autonomous satellite docking by developing a direct adaptive optimal control using adaptive dynamic programming (ADP). More specifically, two problems are considered to guarantee optimal tracking. First, the output regulation problem is solved to achieve asymptotic tracking and disturbance rejection. Second, we consider solving the output regulation problem by adaptive dynamic programming, so that knowledge of the system dynamics is not needed in order to compute the optimal feedback-feedforward control policy. The ADP approach is implemented on an autonomous satellite docking problem by considering the Clohessy-Wiltshire equations with J2 perturbations, where the problem is reformulated into an adaptive optimal output regulation problem. The simulation results illustrate the efficacy of the proposed method.
\bibliographystyle{AAS_publication}
The so called \emph{cutting lemma} is a very useful combinatorial partition tool with numerous applications in computational and incidence geometry and related areas (see e.g. \cite[Sections 4.5, 6.5]{mbook} or \cite{cuttings} for a survey). In its simplest form it can be stated as follows.
\begin{fact}
For every set $L$ of $n$ lines in the real plane and every $1< r < n$ there exists a $\frac{1}{r}$-cutting for $L$ of size $O(r^2)$. That is, there is a subdivision of the plane into generalized triangles (i.e. intersections of three half-planes)
$\Delta_1, \ldots, \Delta_t$ so that the interior of each $\Delta_i$ is intersected by at most $\frac{n}{r}$ lines in $L$, and we have $t \leq C r^2$ for a certain constant $C$ independent of $n$ and $r$.\end{fact}
This result provides a method to analyze intersection patterns in families of lines, and it has many generalizations to higher dimensional sets and/or to families of sets of more complicated shape than lines, for example for families of algebraic or semialgebraic curves of bounded complexity \cite{chaz}. The proofs of these generalizations typically combine some kind of geometric ``cell decomposition'' result with the so-called random sampling technique of Clarkson and Shor \cite{13}.
The aim of this article is to establish a rather general version of the cutting lemma for definable (in the sense of first-order logic) families of sets in a certain model-theoretically tame class of structures (namely, for distal structures --- see Section \ref{sec: prelims and distality} for the definition), as well as to apply it to generalize some of the results in the area from the semialgebraic context to arbitrary $o$-minimal structures. This work can be viewed as a continuation and refinement of the work started in \cite{distal}, where the connection of model-theoretic distality with a weak form of the cutting lemma was discovered (we don't assume familiarity with that paper, but recommend its introduction for an expanded discussion of the model theoretic preliminaries). We believe that distal structures provide the most general natural setting for investigating questions in ``generalized incidence combinatorics''.
Let us describe the main results of the paper. Our first theorem establishes a cutting lemma for a definable family of sets in a distal structure, with the bound corresponding to the bound on the size of its distal cell decomposition. This is a generalized form of Matou\v{s}ek's axiomatic treatment of Clarkson's random sampling method discussed in \cite[Section 6.5]{mbook}.
The proof relies in particular on Lemma \ref{lem-set_system_corr} on correlations in set-systems to deal with the lack of the corresponding notion of ``being in a general position''.
\begin{thm*}(Theorem \ref{lem-r_cut}, Distal cutting lemma) Let $\varphi(x;y)$ be a formula admitting a distal cell decomposition $\CT$ (given by a finite set of formulas $\Psi(x; \bar{y})$ --- see Definition \ref{def: def cell decomp}) with
$|\CT(S)| = O(|S|^d)$ (i.e. for some constant $C\in \mathbb{R}$, for any \textbf{non-empty}
finite $S\subseteq M^{|y|}$ we have $|\CT(S)| \leq C|S|^d$).
Then for any finite $H \subseteq M^{|y|}$ of size $n$ and any real $r$ satisfying $1 < r < n$, there are subsets $X_1, \ldots, X_t$ of $M^{|x|}$ covering $M^{|x|}$ with
$$
t \leq Cr^d
$$
for some constant $C = C(\varphi)$ (and independent of $H$, $r$ and $n$), and with each $X_i$ crossed by at most $n/r$ of the formulas $\{\varphi(x;a): a \in H\}.$
Moreover, each $X_i$ is the intersection of at most two sets $\Psi$-definable over $H$ (see Definition \ref{def: definable, crossing}).
\end{thm*}
While every formula in a distal structure admits a distal cell decomposition (see Fact \ref{fac: char of distality}), establishing optimal bounds in dimension higher than $1$ is non-trivial. In our second theorem, we demonstrate that formulas in $o$-minimal structures admit distal cell decompositions of optimal size ``on the plane''.
\begin{thm*}(Theorem \ref{thm:cell-dec})
Let $\mathcal{M}$ be an $o$-minimal expansion of a real closed field.
For any formula $\varphi(x;y)$ with $|x|=2$ there is a distal cell decomposition $\CT$ with
$|\CT(S)| = O(|S|^2)$.
\end{thm*}
In our proof, we show that a version of the vertical cell decomposition can be generalized to arbitrary $o$-minimal theories. This gives an optimal bound for subsets of $M^2$, but determining the exact bounds for distal cell decompositions in higher dimensions remains open, even in the semialgebraic case.
Finally, in Section \ref{sec: Zarank} we apply these two theorems to generalize the results in \cite{zaran} on the semialgebraic Zarankiewicz problem to arbitrary $o$-minimal structures, in the planar case (our result is more general and applies to arbitrary definable families admitting a quadratic distal cell decomposition, see Section \ref{sec: Zarank} for the precise statements).
\begin{thm*} (Theorem \ref{thm: everything o-min})
Let $\mathcal{M}$ be an $o$-minimal expansion of a real closed field and let $E(x,y) \subseteq M^2 \times M^d$ be a definable relation.
\begin{enumerate}
\item For every $k \in \mathbb{N}$ there is a constant $c = c(E,k)$ such that for any finite $P \subseteq M^2, Q \subseteq M^d$, $|P|=m, |Q| = n$, if $E \cap (P \times Q)$ does not contain a copy of $K_{k,k}$ (the complete bipartite graph with two parts of size $k$), then we have
$$ |E(P,Q)| \leq c \left( m^{\frac{d}{2d-1}} n^{\frac{2d-2}{2d-1}} + m + n \right).$$
\item There is some $k' \in \mathbb{N}$ and formulas $\varphi(x,v), \psi(y,w)$ depending only on $E$ such that if $E$ contains a copy of $K_{k',k'}$, then there are some parameters $b \in M^{|v|}, c \in M^{|w|}$ such that both $\varphi(M,b)$ and $\psi(M,c)$ are infinite and $\varphi(M,b) \times \psi(M,c) \subseteq E$.
\end{enumerate}
\end{thm*}
Combining the two parts, it follows that either $E$ contains a product of two infinite definable sets, or the upper bound on the number of edges in part (1) holds for all finite sets $P,Q$ with some fixed constant $c =c (E)$.
The special case $d = 2$ can be naturally viewed as a generalization of the classical Szemer\'edi-Trotter theorem to $o$-minimal structures.
\begin{cor}\label{cor: Sz-Trot}
Let $\mathcal{M}$ be an $o$-minimal expansion of a real closed field. Then for every definable relation $E(x,y) \subseteq M^2 \times M^2$ there is a constant $c$ and some formulas $\varphi(x,v), \psi(y,w)$ depending only on $E$ such that exactly one of the following occurs:
\begin{enumerate}
\item For any finite $P \subseteq M^2, Q \subseteq M^2$, $|P|=m, |Q| = n$ we have $$ |E(P,Q)| \leq c \left( m^{\frac{2}{3}} n^{\frac{2}{3}} + m + n \right),$$
\item there are some parameters $b \in M^{|v|}, c \in M^{|w|}$ such that both $\varphi(M,b)$ and $\psi(M,c)$ are infinite and $\varphi(M,b) \times \psi(M,c) \subseteq E$.
\end{enumerate}
\end{cor}
\begin{rem}
While this paper was in preparation, we learned that Basu and Raz \cite{basu2016minimal} have obtained a special case of Corollary \ref{cor: Sz-Trot} using different methods.
\end{rem}
\section{Preliminaries and the distal cell decomposition}\label{sec: prelims and distality}
Let $\mathcal{M}$ be an arbitrary first-order structure in a language $\mathcal{L}$. At this point we don't make any additional assumptions on $\mathcal{M}$, e.g. we may work in ``set theory'',
i.e. in a structure where every subset is definable.
We introduce some basic notation and terminology. Given a tuple of variables $x$, we let $|x|$ denote its length. For each $n \in \mathbb{N}$, $M^n$ denotes the corresponding cartesian power of $M$, the underlying set of $\mathcal{M}$.
For a fixed formula $\varphi(x;y) \in \mathcal{L}$ with two groups of variables $x$ and $y$, given $b \in M^{|y|}$ we write $\varphi(M;b)$ to denote the set $\{ a \in M^{|x|} : \mathcal{M} \models \varphi(a;b) \}$. Hence the formula $\varphi(x;y)$ can be naturally associated with the definable family of sets $\{ \varphi(M;b) : b \in M^{|y|}\}$. E.g., if $\mathcal{M}$ is the field of reals, all sets in such a family for a fixed $\varphi(x;y)$ are semialgebraic of description complexity bounded by some $d = d(\varphi)$ and conversely, the family of all semialgebraic sets of description complexity bounded by some fixed $d$ can be obtained in this way for an appropriate choice of the formula $\varphi(x;y)$. We refer to \cite{distal} for a more detailed introduction and examples of the relevant model-theoretic terminology.
\begin{defn}
For sets $A,X \subseteq M^{d}$
we say that $A$ \emph{crosses} $X$ if both
$X\cap A$ and $X\cap \neg A$ are nonempty.
\end{defn}
We extend the above definition to a set of formulas.
\begin{defn}\label{def: definable, crossing} Let $\Phi(x;y)$ be a set of $\mathcal{L}$-formulas of the form $\varphi(x;y) \in \mathcal{L}$ and $S\subseteq
M^{|y|}$.
\begin{enumerate}
\item We say that a subset $A\subseteq M^{|x|}$ is \emph{$\Phi(x;S)$-definable}
if $A=\varphi(M;s)$ for some $\varphi(x;y)\in \Phi$ and $s\in S$.
\item For a set $X\subseteq M^{|x|}$ we say that \emph{$\Phi(x;S)$
crosses $X$} if some $\Phi(x;S)$-definable set crosses $X$. In
other words $\Phi(x;S)$ does not cross $X$ if for any $\varphi(x;y)
\in \Phi(x;y)$ and $s\in S$ the formula $\varphi(x;s)$ has a constant truth
value on $X$.
\end{enumerate}
\end{defn}
We define a very general combinatorial notion of an abstract cell decomposition for formulas (equivalently, for definable families of sets).
\begin{defn}\label{def: cell decomp}Let $\Phi(x;y)$ be a finite set of formulas.
\begin{enumerate}
\item Given a finite set $S \subseteq M^{|y|}$, a family $\mathcal F$ of subsets of $M^{|x|}$ is
called \emph{ an abstract cell decomposition for $\Phi(x;S)$} if $M^{|x|}=\cup
\mathcal F$ and every $\Delta\in \mathcal F$ is not crossed by $\Phi(x;S)$.
\item \emph{An abstract cell decomposition for $\Phi(x;y)$} is an assignment $\CT$ that to each finite
set $S\subseteq M^{|y|}$ assigns an abstract cell decomposition $\CT(S)$ for $\Phi(x;S)$.
\end{enumerate}
\end{defn}
\begin{rem}
In the above definition, the term ``cell decomposition'' is understood in a very weak sense. Firstly, the ``cells'' in $\CT(S)$ are not required to have any ``geometric'' properties, and secondly, we don't require the family
$\CT(S)$ to partition $M^{|x|}$, but only ask for it to be a covering.
\end{rem}
Every $\Phi(x;y)$ admits an obvious abstract cell decomposition, with $\CT(S)$ consisting of the atoms in the Boolean algebra generated by the $\Phi(x;S)$-definable sets. In general, however, defining these cells would require longer and longer formulas as $S$ grows, and the aim of the following definitions is to rule this out.
\begin{defn}
Let $\Phi(x;y)$ be a finite set of formulas and $\CT$ an abstract cell
decomposition for $\Phi(x;y)$. We say that $\CT$ is \emph{weakly
definable} if there is a finite set of formulas
$\Psi(x; \bar y)=\Psi(x;y_1,\dotsc,y_k)$ with
$|y_1|=\ddotsb=|y_k|=|y|$ such that for any finite $S\subseteq M^{|y|}$,
every $\Delta\in \CT(S)$ is $\Psi(x;\bar y)$-definable over $S$ (i.e., $\Delta=\psi(M; s_1,\dotsc,s_k)$ for some $s_1,\dotsc,s_k \in S$ and $\psi\in \Psi$).
\end{defn}
Notice that if an abstract cell decomposition $\CT$ for $\Phi(x;y)$ is weakly
defined by $\Psi(x;\bar y)$ then $\Psi(x;\bar y)$ does not determine
$\CT$ uniquely; however there is a maximal abstract cell decomposition
$\CT^{\textrm{max}}$ weakly defined by $\Psi(x;\bar y)$, where
$\CT^{\textrm{max}}(S)$ consists of \emph{all} $\Psi(x;\bar y)$-definable over $S$ sets $\Delta$
such that $\Phi(x;S)$ does not cross $\Delta$.
For combinatorial applications discussed in this paper, we would like to have a cell decomposition with as few sets as
possible, and we want to have control over the sets appearing in
$\CT(S)$ in a definable way.
\begin{defn}\label{def: def cell decomp} Let $\Phi(x;y)$ be a finite set of formulas.
An abstract cell decomposition $\CT$
for $\Phi$ is \emph{definable}
if for every finite $S\subseteq M^{|y|}$ there is a family $\Psi(S)$ of subsets of
$M^{|x|}$, uniformly definable in $S$, and for each $\Delta\in \Psi(S)$ a
subset $\mathcal I(\Delta)\subseteq M^{|y|}$, uniformly definable in $\Delta$,
such that
\begin{equation}
\label{eq:1}
\CT(S)=\{ \Delta\in \Psi(S) \colon \mathcal I(\Delta)\cap S
=\emptyset\}.
\end{equation}
By the uniform definability of $\Psi(S)$ we mean the existence of
a finite set of formulas
$\Psi(x; \bar y)$ as above, so that
for any finite $S\subseteq M^{|y|}$ the set
$\Psi(S)$ consists of all $\Psi(x; \bar y)$-definable over $S$ sets;
and uniform definability of $\mathcal I(\Delta)$ means that for every
$\psi(x;\bar y )\in \Psi(x;\bar y)$ there is a formula
$\theta_\psi(y;\bar y)$ such that for any $s_1,\dotsc,s_k\in
M^{|y|}$ if $\Delta=\psi(M;s_1,\dotsc,s_k)$ then
$\mathcal I(\Delta)=\theta_\psi(M;s_1,\dotsc,s_k)$.
\end{defn}
For example, $\CT^{\textrm{max}}$ defined above is definable with $\mathcal I(\Delta)=\{ s\in M^{|y|} \colon
\Phi(x;s) \text{ crosses }\Delta \}$.
\begin{rem}\label{rem: I Delta contains crossing params}
It follows from Definition \ref{def: def cell decomp} that for every $\Psi(x;M)$-definable set $\Delta \subseteq M^{|x|}$, the set of all $s \in M^{|y|}$ such that $\Phi(x;s)$ crosses $\Delta$ is contained in $\mathcal I(\Delta)$ (strict containment is possible, however).
Indeed, assume that $s \in M^{|y|}$ and $\varphi(x;y) \in \Phi$ are such that $\varphi(x;s)$ crosses $\Delta$. By Definition \ref{def: cell decomp}(1), necessarily $\Delta \notin \CT(\{ s \})$. But then $\mathcal I(\Delta) \cap \{s \} \neq \emptyset$ by (\ref{eq:1}), hence $s \in \mathcal I(\Delta)$.
\end{rem}
As was realized in \cite{distal}, such combinatorial definable cell decompositions have a close connection to the model-theoretic notion of distality. Distal structures were introduced in \cite{DistalPierre} for purely model theoretic purposes (we don't give the original definition here). The following was pointed out in \cite{distal} (and can be used as the definition of a distal structure in this paper).
\begin{fact} \label{fac: char of distality} The following are equivalent for a first-order structure $\mathcal{M}$.
\begin{enumerate}
\item $\mathcal{M}$ is distal,
\item for every formula $\varphi(x;y)$ there is a weakly definable cell
decomposition for $\{ \varphi(x;y) \}$,
\item for every formula $\varphi(x;y)$ there is a definable cell decomposition for $\{ \varphi(x;y) \}$.
\end{enumerate}
\end{fact}
Equivalence of the original definition of distality and existence of weakly definable cell
decompositions is given by \cite[Theorem 21]{chernikov2015externally}; and if $\CT$ is a weakly definable cell decomposition for $\varphi(x;y)$, then $\CT^{\textrm{max}}$ as defined above is definable.
Examples of distal structures include (again, we refer to the introduction of \cite{distal} for a more detailed discussion):
\begin{enumerate}
\item $o$-minimal structures,
\item Presburger arithmetic $(\mathbb{Z}, +, 0, <)$,
\item the field of $p$-adics $\mathbb{Q}_p$.
\end{enumerate}
There are several contexts in model theory relevant for the topics of this paper where certain notions of cell decomposition play a prominent role (e.g. $o$-minimal cell decomposition, $p$-adic cell decomposition, etc.). These cell decompositions tend to carry more geometric information, while the one distinguished here captures combinatorial complexity. To distinguish from those cases, and in view of Fact \ref{fac: char of distality}, we will from now on refer to a definable cell decomposition $\CT$ for a finite set of formulas $\Phi(x;y)$ as in Definition \ref{def: def cell decomp} as a \emph{distal cell decomposition} for $\Phi(x;y)$. Hence, \textbf{a structure $\mathcal{M}$ is distal if and only if every formula admits a distal cell decomposition}.
Distality of the examples listed above had been established by different (sometimes infinitary) methods, and the question of obtaining exact bounds on the size of the corresponding distal cell decompositions has not been considered. While it is easy to verify in the examples listed above that all formulas $\varphi(x,y)$ with $|x|=1$ admit a distal cell decomposition $\CT$ with the best possible bound $|\CT(S)| = O(|S|)$, already the case of formulas with $|x|=2$ becomes more challenging (and the difficulty grows with $|x|$). In Section \ref{sec: omin-cell-decomp} we establish that in an $o$-minimal expansion of a field, all formulas with $|x|=2$ admit a distal cell decomposition $\CT$ with the optimal bound $|\CT(S)| = O(|S|^2)$ (the case $|x|\geq 3$ remains open, even in the semialgebraic case).
\section{Distal cutting lemma}
In this section we show how a bound on the size of a distal cell decomposition for a given definable family can be used to obtain a definable cutting lemma with the corresponding bound
on its size.
\begin{thm} \label{lem-r_cut}
(Distal cutting lemma) Let $\varphi(x;y)$ be a formula admitting a distal cell decomposition $\CT$ (given by a finite set of formulas $\Psi(x; y_1, \ldots, y_s)$ --- see Definition \ref{def: def cell decomp}) with
$|\CT(S)| = O(|S|^d)$.
Then for any finite $H \subseteq M^{|y|}$ of size $n$ and any real $r$ satisfying $1 < r < n$, there are subsets $X_1, \ldots, X_t$ of $M^{|x|}$ covering $M^{|x|}$ with
$$
t \leq Cr^d
$$
for some constant $C = C(\varphi)$ (and independent of $H$, $r$ and $n$), and with each $X_i$ crossed by at most $n/r$ of the formulas $\{\varphi(x;a): a \in H\}.$
Moreover, each of the $X_i$'s is an intersection of at most two $\Psi(x;H)$-definable sets.
\end{thm}
Our proof generalizes (and closely follows) the axiomatic treatment of the Clarkson-Shor random sampling technique in \cite[Section 6.5]{mbook}.
Let $\CT, \Psi$ and $H$ as in the assumption be fixed.
Then (recalling Definition \ref{def: def cell decomp}) for each finite $S \subseteq M^{|y|}$, we have a finite collection ${\mathcal T}(S)$ of subsets of $M^{|x|}$ that covers $M^{|x|}$ and satisfies the following conditions.
\begin{enumerate}
\item[\textbf{(C1)}] Let
$$
{\rm Reg} := \{\Delta : \Delta \in {\mathcal T}(S)~\mbox{for some $S \subseteq H$}\}.
$$
Then every set in {\rm Reg} is definable by an instance of a formula from $\Psi$ with parameters in $H$.
\item[\textbf{(C2)}] For every $S \subseteq H$ we have
$$
|{\mathcal T}(S)| \leq C'\left(|S|^d+1\right)
$$
for some constant $C'$ depending only on $\varphi$. (The hypothesis of the theorem ensures that for \emph{non-empty} $S$ we have $|{\mathcal T}(S)| \leq C|S|^d$ for some constant $C=C(\varphi)$. We add ``$+1$'' here to take into account the case $S = \emptyset$.)
\item[\textbf{(C3)}] To each $\Delta \in {\rm Reg}$ we associate a collection ${\mathcal D}(\Delta)$ of subsets of $H$, called the {\em defining sets} of $\Delta$, via
$$
{\mathcal D}(\Delta) := \{S \subseteq H: |S| \leq s,~\Delta \in {\mathcal T}(S)\}.
$$
(Here $s$ is a fixed constant corresponding to the number of parameters in $\Psi(x;y_1, \ldots, y_s)$ given by the distal cell decomposition and depending only on $\varphi$).
Given $\mathcal I$ as in Definition \ref{def: def cell decomp}, we define $\mathcal I_H(\Delta) := \mathcal I(\Delta) \cap H$. Notice that $\mathcal I_H(\Delta)$ contains all of the $a \in H$ such that $\varphi(x;a)$ crosses $\Delta$ (by Remark \ref{rem: I Delta contains crossing params}). We have:
$$
\Delta \in {\mathcal T}(S) \iff {\mathcal I}_H(\Delta) \cap S = \emptyset~\mbox{and there is}~S_0 \in {\mathcal D}(\Delta)~\mbox{with}~S_0 \subseteq S.
$$
\end{enumerate}
\begin{rem} It follows from the proof that the distal cutting lemma (Theorem \ref{lem-r_cut}) holds for any abstract cell decomposition satisfying the conditions (C1)--(C3) with an appropriately chosen relation $\mathcal I(\Delta)$.
\end{rem}
\medskip
Before proceeding to the proof of the distal cutting lemma (Theorem \ref{lem-r_cut}) we isolate two key tools. The first is a tail bound on the probability that a cell $\Delta \in {\mathcal T}(S)$ is crossed by many formulas, where $S$ is a randomly chosen subset of $H$.
For $S \subseteq H$, let ${\mathcal T}(S)_{\geq t}$ denote the set of $\Delta \in {\mathcal T}(S)$ with $|{\mathcal I}_H(\Delta)| \geq tn/r$. Recall that for $0 \leq p \leq 1$ we say that $S \subseteq H$ is selected by {\em independent Bernoulli trials with success probability $p$} if $S$ is selected according to the distribution $\mu$ (supported on the power set of $H$) given by
$$
\mu(S') = p^{|S'|}(1-p)^{|H|-|S'|}
$$
for each $S' \subseteq H$; observe that this is essentially the process of flipping a biased coin (biased to show heads with probability $p$) $|H|$ times independently, and for $1 \leq i \leq |H|$ putting the $i$th element of $H$ in $S$ if and only if the $i$th flip comes up heads.
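The following short Python sketch is included purely as an illustration of this selection process (the function name is ours):
\begin{verbatim}
import random

def bernoulli_subset(H, p, rng=random):
    # Select S <= H by |H| independent coin flips, each coming up
    # heads with probability p; an element joins S iff its flip is heads.
    return {a for a in H if rng.random() < p}

# Sanity check: E(|S|) = p|H|, so with p = r/n the expected size is r.
n, r = 1000, 20.0
sizes = [len(bernoulli_subset(range(n), r / n)) for _ in range(2000)]
print(sum(sizes) / len(sizes))  # close to r = 20
\end{verbatim}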
\begin{lem} \label{lem-tail}
(Tail Bound Lemma) Let $\varphi(x;y)$ be a formula as in Theorem \ref{lem-r_cut}. Let $H \subseteq M^{|y|}$ be a finite set of size $n$. Fix $\varepsilon > 0$ and let $r$ be a parameter satisfying $1 \leq r \leq (1-\varepsilon)n$. Let $S \subseteq H$ be selected by independent Bernoulli trials with success probability $r/n$, and let $t \geq 0$ be given. Then there is a constant $C=C(\varepsilon)$ such that
$$
{\bf E}_{\mu}\left(\left|{\mathcal T}(S)_{\geq t}\right|\right) \leq C2^{-t}r^d.
$$
\end{lem}
We use this to derive the second main tool, a cutting lemma that is weaker than Theorem \ref{lem-r_cut}.
\begin{lem} \label{lem-subopt_cut}
(Suboptimal Cutting Lemma) Let $\varphi(x;y)$ be a formula as in Theorem \ref{lem-r_cut}. Let $H \subseteq M^{|y|}$ be a finite set of size $n$. Let $r$ be a parameter satisfying $1 < r < n$. There is $S \subseteq H$ with
$$
|{\mathcal T}(S)| \leq Kr^d\log^d (r+1)
$$
for some constant $K$ independent of $H$, $r$ and $n$ and with each $X \in {\mathcal T}(S)$ crossed by at most $n/r$ of the formulas $\{\varphi(x;a): a \in H\}.$
\end{lem}
\medskip
\begin{proof}
Let $A$ be such that $3\times 2^{2d}CA^d = 2^A$, where $C$ is the constant appearing in Lemma \ref{lem-tail}. We treat separately the cases $2Ar \log(r+1) \leq n$ and $2Ar \log(r+1) \geq n$. If $2Ar \log(r+1) \geq n$ then we may take ${\mathcal T}(H)$ as our $r$-cutting, since it has size $C'(n^d +1) \leq C'((2A)^dr^d \log^d(r+1) + 1) \leq Kr^d\log^d (r+1)$ for suitably large $K$ (note that by \textbf{(C3)} no instance of $\varphi(x;y)$ over $H$ can cross any of the sets in ${\mathcal T}(H)$).
Suppose now that $2Ar \log(r+1) \leq n$. Set $r' = Ar\log (r+1)$. Applying Lemma \ref{lem-tail} with $r'$ taking the role of $r$ (valid since $r' < n/2$) and with $t=0$ we obtain that if $S \subseteq H$ is selected by independent Bernoulli trials with success probability $r'/n$ (with associated distribution $\mu'$) then
$$
{\bf E}_{\mu'}\left(\left|{\mathcal T}(S)\right|\right) \leq CA^dr^d \log^d(r+1).
$$
Applying Lemma \ref{lem-tail} again with $t=A \log(r+1)$ we get
$$
{\bf E}_{\mu'}\left(\left|{\mathcal T}(S)_{\geq A \log(r+1)}\right|\right) \leq \frac{CA^d r^d \log^d(r+1)}{(r+1)^A} \leq \frac{CA^d}{(r+1)^{A-2d}} \leq 1/3,
$$
the second inequality using $r\log(r+1) \leq (r+1)^2$ and the third using our choice of $A$ and the fact that $r \geq 1$. By linearity of expectation
$$
{\bf E}_{\mu'}\left(\frac{\left|{\mathcal T}(S)\right|}{3CA^dr^d \log^d(r+1)} + \left|{\mathcal T}(S)_{\geq A \log(r+1)}\right| \right) \leq 2/3,
$$
so there exists an $S \subseteq H$ such that
$$
\left|{\mathcal T}(S)\right| \leq 3CA^dr^d \log^d(r+1)
$$
and ${\mathcal T}(S)_{\geq A \log(r+1)} = \emptyset$. This last condition implies that each $\Delta \in {\mathcal T}(S)$ is crossed by at most
$(A \log(r+1) n)/r' = n/r$
formulas.
\end{proof}
\medskip
We use Lemmas \ref{lem-subopt_cut} and \ref{lem-tail} to derive Theorem \ref{lem-r_cut}, before turning to the proof of Lemma \ref{lem-tail}.
\medskip
\begin{proof}[Proof of Theorem \ref{lem-r_cut}]
Just as in the proof of Lemma \ref{lem-subopt_cut} we begin by observing that ${\mathcal T}(H)$ furnishes an $r$-cutting for all $r$, with size at most $C'(n^d +1)$. This allows us to assume, say, $r \leq n/2$, so that Lemma \ref{lem-tail} applies (with $\varepsilon = 1/2$).
Let $S \subseteq H$ be selected by independent Bernoulli trials with success probability $r/n$, and let ${\mathcal T}(S)$ be as in the assumption.
For $\Delta \in {\mathcal T}(S)$ define $t_\Delta$ by $|{\mathcal I}_H(\Delta)| = t_\Delta n/r$. Note that if $t_\Delta \leq 1$ then the number of $a$ in $H$ such that $\varphi(x,a)$ crosses $\Delta$ is no more than $n/r$.
For $\Delta \in {\mathcal T}(S)$ with $t_\Delta > 1$, consider the set ${\mathcal I}_H(\Delta)$; it contains all $a \in H$ for which $\varphi(x,a)$ crosses $\Delta$. By Lemma \ref{lem-subopt_cut} (applied with ${\mathcal I}_H(\Delta)$ in place of $H$ and $t_\Delta$ in place of $r$) there is $S' \subseteq {\mathcal I}_H(\Delta)$ with ${\mathcal T}(S')$ of size at most $O(t^d_\Delta \log^d (t_\Delta+1))$, such that for every $\Delta' \in {\mathcal T}(S')$ the number of $a \in {\mathcal I}_H(\Delta)$ such that $\varphi(x,a)$ crosses $\Delta'$ is at most
$$
\frac{|{\mathcal I}_H(\Delta)|}{t_\Delta} = \frac{n}{r}.
$$
In particular that means that for every $\Delta' \in {\mathcal T}(S')$ the number of $a \in H$ such that $\varphi(x,a)$ crosses $\Delta' \cap \Delta$ is at most $n/r$.
It follows that the family of subsets of $M^{|x|}$ consisting of those $\Delta \in {\mathcal T}(S)$ for which $t_\Delta \leq 1$, together with all sets of the form $\Delta' \cap \Delta$ where $\Delta \in {\mathcal T}(S)$ has $t_\Delta > 1$ and $\Delta' \in {\mathcal T}(S')$ (with $S'$ constructed from $S$ via Lemma \ref{lem-subopt_cut}, as described above), forms a cover of $M^{|x|}$ with size at most
\begin{equation} \label{eq-decomp_bound}
\sum_{\Delta \in {\mathcal T}(S)} \left({\bf 1}_{\{t_\Delta \leq 1\}} + Ct_\Delta^d\log^d(t_\Delta+1) {\bf 1}_{\{t_\Delta > 1\}}\right).
\end{equation}
We now upper bound the expectation (with respect to $\mu$) of this quantity. By linearity the expectation is at most
\begin{equation} \label{eq-size}
{\bf E}_{\mu}\left(\left|{\mathcal T}(S)\right|\right) + C\sum_{i \geq 0} {\bf E}_{\mu}\left(\sum_{\Delta \in {\mathcal T}(S) \colon 2^i \leq t_\Delta < 2^{i+1}} t_\Delta^{2d}\right)
\end{equation}
(using $\log (t_\Delta+1) \leq t_\Delta$ for $t_\Delta \geq 1$).
We bound the first term in (\ref{eq-size}) by an application of Lemma \ref{lem-tail} with $t=0$. This gives
$$
{\bf E}_{\mu}\left(\left|{\mathcal T}(S)\right|\right) \leq O(r^d).
$$
For the second term in (\ref{eq-size}) we have
\begin{multline*}
\sum_{i \geq 0} {\bf E}_\mu \left(\sum_{\Delta \in {\mathcal T}(S) \colon 2^i \leq t_\Delta < 2^{i+1}} t_\Delta^{2d}\right) \leq \sum_{i \geq 0} 2^{2d(i+1)} {\bf E}_{\mu}\left(\left|{\mathcal T}(S)_{\geq 2^i}\right|\right) \\
\leq C\sum_{i \geq 0} 2^{2d(i+1)} 2^{-2^i} r^d = O(r^d),
\end{multline*}
with the last inequality being an application of Lemma \ref{lem-tail}.
We conclude that the expectation of the quantity in (\ref{eq-decomp_bound}) is $O(r^d)$, so there is at least one choice of $S \subseteq H$ for which (\ref{eq-decomp_bound}) is at most $O(r^d)$, proving Theorem \ref{lem-r_cut} (the definability clause follows by \textbf{(C1)} as every set in the constructed covering is an intersection of at most two sets from ${\rm Reg}$).
\end{proof}
\medskip
Before proving Lemma \ref{lem-tail} we isolate a useful set-systems lemma.
\begin{lem} \label{lem-set_system_corr}
Let $\Omega$ be a set of size $m$, and let $\{D_1, \ldots, D_q\}$ be a collection of subsets of $\Omega$ with $|D_i| \leq u$ for all $i$, $1 \leq i \leq q$, for some $u$. Let
$$
{\mathcal F} = \{X \subseteq \Omega:D_i \subseteq X~\mbox{for some $i$, $1 \leq i \leq q$}\}
$$
be the up-set (or filter) generated by the $D_i$'s. Let $\tilde{p}$ and $p$ satisfy $0 < \tilde{p} \leq p \leq 1$. We have
\begin{equation} \label{eq-correlation_inq}
\frac{\sum_{X \in {\mathcal F}} \tilde{p}^{|X|}(1-\tilde{p})^{m-|X|}}{\sum_{X \in {\mathcal F}} p^{|X|}(1-p)^{m-|X|}} \geq \left(\frac{\tilde{p}}{p}\right)^u.
\end{equation}
\end{lem}
\begin{proof}
With each $X \in {\mathcal F}$ associate (arbitrarily) a set $D_X$ satisfying $D_X \subseteq X$ and $D_X \in \{D_1, \ldots, D_q\}$ (such a set exists by the definition of ${\mathcal F}$).
Let $A \subseteq \Omega$ be selected by independent Bernoulli trials with success probability $p$, and, independently, let $B \subseteq \Omega$ be selected by independent Bernoulli trials with success probability $\tilde{p}/p$. Observe that
\begin{equation} \label{eq-first_random_selection}
\Pr(A \in {\mathcal F}) = \sum_{X \in {\mathcal F}} p^{|X|}(1-p)^{m-|X|}
\end{equation}
and
\begin{equation} \label{eq-combination_random_selection}
\Pr(A \cap B \in {\mathcal F}) = \sum_{X \in {\mathcal F}} \tilde{p}^{|X|}(1-\tilde{p})^{m-|X|},
\end{equation}
with (\ref{eq-combination_random_selection}) holding by independence and because for each $\omega \in \Omega$, $\Pr(\omega \in A \cap B)=\Pr(\omega \in A)\Pr(\omega \in B)$.
Now consider the two events
$$
E_1 = \{A \in {\mathcal F}~\mbox{and}~D_A \subseteq B\}
$$
and
$$
E_2 = \{A \cap B \in {\mathcal F}\}.
$$
If $A \in {\mathcal F}$ and $D_A \subseteq B$ then $D_A \subseteq A \cap B$, so that $A \cap B \in {\mathcal F}$. It follows that $E_1 \subseteq E_2$ and
\begin{equation} \label{eq-containment}
\Pr(E_1) \leq \Pr(E_2).
\end{equation}
Using independence we have
\begin{multline*}
\Pr(E_1) = \sum_{X \in {\mathcal F}} \Pr(A=X~\mbox{and}~D_X \subseteq B) = \sum_{X \in {\mathcal F}} \Pr(A=X)\left(\frac{\tilde{p}}{p}\right)^{|D_X|} \\
\geq \Pr(A \in {\mathcal F})\left(\frac{\tilde{p}}{p}\right)^u.
\end{multline*}
Combining with (\ref{eq-first_random_selection}), (\ref{eq-combination_random_selection}) and (\ref{eq-containment}) we get (\ref{eq-correlation_inq}).
\end{proof}
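Since the ground set in Lemma \ref{lem-set_system_corr} is finite, the inequality (\ref{eq-correlation_inq}) can be verified exactly by enumeration in small cases. The following Python sketch (our own sanity check, with hypothetical parameter choices) does so:
\begin{verbatim}
from itertools import combinations

def weight(F, m, q):
    # The sum over X in F of q^{|X|} (1-q)^{m-|X|}.
    return sum(q ** len(X) * (1 - q) ** (m - len(X)) for X in F)

def check_correlation(m, gens, p, pt):
    # Exact check of the correlation inequality on the ground set
    # {0,...,m-1}, with F the up-set generated by gens and u the
    # maximal size of a generator.
    omega = list(range(m))
    subsets = [frozenset(c) for k in range(m + 1)
               for c in combinations(omega, k)]
    F = [X for X in subsets if any(D <= X for D in gens)]
    u = max(len(D) for D in gens)
    return weight(F, m, pt) / weight(F, m, p) >= (pt / p) ** u

gens = [frozenset({0, 1}), frozenset({2, 3, 4})]
print(check_correlation(8, gens, p=0.6, pt=0.2))  # True
\end{verbatim}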
We are now ready to prove Lemma \ref{lem-tail}. We follow Matou\v{s}ek's approach in \cite[Section 6.5]{mbook}, but add an additional argument.
\begin{proof}[Proof of Lemma \ref{lem-tail}]
We start by establishing
\begin{equation} \label{claim1}
{\bf E}_\mu\left(\left|{\mathcal T}(S)\right|\right) = O(r^d),
\end{equation}
which gives Lemma \ref{lem-tail} for $t \leq 1$.
To see (\ref{claim1}) note that \textbf{(C2)} yields ${\bf E}_\mu\left(\left|{\mathcal T}(S)\right|\right) \leq C'\left({\bf E}_\mu\left(|S|^d\right)+1\right)$. Now $|S|=X_1+\ldots + X_n$ where the $X_i$'s are independent Bernoulli random variables each with parameter $p=r/n$. We claim that for all $d \geq 1$ we have
\begin{equation} \label{eq-bin-moment-bound}
{\bf E}_\mu(|S|^d) \leq (r+d)^d.
\end{equation}
(from which (\ref{claim1}) immediately follows; note that we can drop the $+1$ since $r \geq 1$).
To see (\ref{eq-bin-moment-bound}), note first that by linearity we have
$$
{\bf E}_\mu(|S|^d) = \sum_{(i_1,i_2,\ldots,i_d) \in \{1, \ldots, n\}^d} {\bf E}(X_{i_1}X_{i_2}\cdots X_{i_d}).
$$
Let $a_k$ be the number of tuples $(i_1,i_2,\ldots,i_d) \in \{1, \ldots, n\}^d$ such that $|\{i_1,i_2,\ldots,i_d\}|=d-k$. By independence of the $X_i$, and the fact that $X_i^\ell$ has the same distribution as $X_i$ for any integer $\ell \geq 1$ we have
\begin{equation} \label{eq-bin-moment-bound_int}
{\bf E}_\mu(|S|^d) = \sum_{k=0}^d a_k p^{d-k}.
\end{equation}
We claim that
\begin{equation} \label{eq-bin-moment-bound_int2}
a_k \leq \binom{d}{k}d^kn^{d-k}.
\end{equation}
Inserting into (\ref{eq-bin-moment-bound_int}) and using the binomial theorem together with $np=r$, this gives (\ref{eq-bin-moment-bound}).
To see (\ref{eq-bin-moment-bound_int2}) note that we overcount $a_k$ by first specifying $d-k$ indices from $\{1,\ldots,d\}$ on which the $i_j$'s are all different from each other ($\binom{d}{d-k}=\binom{d}{k}$ choices), then choosing values for these $i_j$'s ($n(n-1)\cdots(n-(d-k)+1) \leq n^{d-k}$ choices), and finally choosing values for the remaining indices ($(d-k)^k \leq d^k$ choices, since these indices are all constrained to lie among the $d-k$ distinct indices chosen initially). It follows that $a_k \leq \binom{d}{k} n^{d-k} d^k$, as claimed.
(We note that in the case $d=2$ things are considerably easier: we have
$$
|S|^2 = \sum_{i=1}^n X_i^2 + 2\sum_{1 \leq i < j \leq n} X_iX_j
$$
so
\begin{align*}
{\bf E}_\mu(|S|^2) & = \sum_{i=1}^n {\bf E}(X_i^2) + 2\sum_{1 \leq i < j \leq n} {\bf E}(X_iX_j) \\
& = np + n(n-1)p^2 \\
& \leq np + n^2p^2 = r^2+r.)
\end{align*}
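As a quick numerical sanity check of (\ref{eq-bin-moment-bound}), here is a Monte Carlo sketch in Python (purely illustrative; the parameter values are our own choices):
\begin{verbatim}
import random

def moment_estimate(n, r, d, trials=5000, rng=random.Random(0)):
    # Monte Carlo estimate of E(|S|^d) when |S| ~ Binomial(n, r/n).
    p = r / n
    return sum(sum(1 for _ in range(n) if rng.random() < p) ** d
               for _ in range(trials)) / trials

for d in (1, 2, 3):
    # The estimate should stay below the bound (r + d)^d.
    print(d, moment_estimate(n=500, r=10.0, d=d), (10.0 + d) ** d)
\end{verbatim}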
We assume from now on that $t\geq 1$. For $\Delta\in {\rm Reg}$ denote by $p(\Delta)$ the probability that
$\Delta$ appears in ${\mathcal T}(S)$, i.e.
$$
p(\Delta)= \mu(\{ S\subseteq H \colon \Delta\in {\mathcal T}(S)\})=\sum_{S\subseteq H \colon \Delta\in {\mathcal T}(S)} \mu(S).
$$
Let ${\rm Reg}_{\geq t}=\{ \Delta\in {\rm Reg} \colon |{\mathcal I}_H(\Delta)|\geq tn/r\}$. By linearity of expectation we have
\begin{equation} \label{eq:2}
{\bf E}_\mu\left(\left|{\mathcal T}(S)_{\geq t}\right|\right)= \sum_{\Delta\in {\rm Reg}_{\geq t}} p(\Delta).
\end{equation}
Now set $\tilde{p}=p/t$ and let $\tilde{\mu}$ be the distribution associated with selection from $H$ by independent Bernoulli trials with success probability $\tilde{p}$. By (\ref{claim1}) we have
\begin{equation} \label{eq:3}
{\bf E}_{\tilde{\mu}}\left(\left|{\mathcal T}(S)\right|\right)=O(r^d/t^d).
\end{equation}
Also, as in \eqref{eq:2} we have
\begin{multline}\label{eq:4}
{\bf E}_{\tilde{\mu}}\left(\left|{\mathcal T}(S)\right|\right) = \sum_{\Delta\in {\rm Reg}}
\tilde{p}(\Delta)
\geq \sum_{\Delta\in {\rm Reg}_{\geq t}} \tilde{p}(\Delta) \\
= \sum_{\Delta\in {\rm Reg}_{\geq t}} p(\Delta)\frac{\tilde{p}(\Delta)}{p(\Delta)}
\geq \min\left\{ \frac{\tilde{p}(\Delta)}{p(\Delta)}\colon \Delta\in
{\rm Reg}_{\geq t}\right\} \sum_{\Delta\in {\rm Reg}_{\geq t}} p(\Delta) \\
= \min\left\{ \frac{\tilde{p}(\Delta)}{p(\Delta)}\colon \Delta\in
{\rm Reg}_{\geq t}\right\} {\bf E}_\mu\left(\left|{\mathcal T}(S)_{\geq t}\right|\right).
\end{multline}
We now estimate from below the quantity $\tilde{p}(\Delta)/p(\Delta)$ for $\Delta \in {\rm Reg}_{\geq t}$. Fix such a $\Delta$ and let ${\mathcal F}(\Delta)$ be the up-set on ground set $H \setminus {\mathcal I}_H(\Delta)$ generated by ${\mathcal D}(\Delta)$.
Using \textbf{(C3)} we see that
$$
p(\Delta) = (1-p)^{|{\mathcal I}_H(\Delta)|}\sum_{X \in {\mathcal F}(\Delta)} p^{|X|}(1-p)^{|H \setminus {\mathcal I}_H(\Delta)|-|X|}
$$
with an analogous expression for $\tilde{p}(\Delta)$. Recalling $\tilde{p}/p=1/t$ and that defining sets have size at most $s$, an application of Lemma \ref{lem-set_system_corr} immediately yields
\begin{align}
\frac{\tilde{p}(\Delta)}{p(\Delta)} & \geq \frac{(1-\tilde{p})^{|{\mathcal I}_H(\Delta)|}}{(1-p)^{|{\mathcal I}_H(\Delta)|}}\left(\frac{1}{t}\right)^s \nonumber \\
& \geq \left(\frac{1-\tilde{p}}{1-p}\right)^{tn/r} \left(\frac{1}{t}\right)^s \nonumber \\
& \geq \left(\frac{e^{-c\tilde{p}}}{e^{-p}}\right)^{tn/r} \left(\frac{1}{t}\right)^s \nonumber \\
& = e^{t-c} t^{-s}, \label{eq:5}
\end{align}
with the second inequality using $(1-\tilde{p})/(1-p) \geq 1$ and $|{\mathcal I}_H(\Delta)| \geq tn/r$, and the third inequality using the standard bound $1- p\leq e^{- p}$ (valid for all real $p$). In the third inequality we also use that for $0 \leq \tilde{p} \leq 1-\varepsilon$ (which certainly holds, since $\tilde{p} \leq p = r/n \leq 1-\varepsilon$) we have
$1-\tilde p \geq e^{-c\tilde{p}}$ for some sufficiently large $c=c(\varepsilon)$ ($c=\log(1/\varepsilon)/(1-\varepsilon)$ will do).
Inserting (\ref{eq:5}) into (\ref{eq:4}) and combining with (\ref{eq:3}) we finally get
$$
{\bf E}_\mu\left(\left|{\mathcal T}(S)_{\geq t}\right|\right) \leq t^s e^{c-t} O(r^d/t^d) \leq C2^{-t}r^d
$$
for sufficiently large $C$.
\end{proof}
\section{Optimal distal cell decomposition on the plane in $o$-minimal expansions of fields}
\label{sec: omin-cell-decomp}
Our goal in this section is to prove the following theorem.
\begin{thm}
\label{thm:cell-dec}
Let $\mathcal{M}$ be an $o$-minimal expansion of a real closed field.
Then any formula $\varphi(x;y)$ with $|x|=2$ admits a distal cell decomposition $\CT$ with
$|\CT(S)| = O(|S|^2)$.
\end{thm}
Towards this purpose, we fix a formula $\varphi(x;y)$ with $|x|=2$ (and often we will write
$x$ as $(x_1,x_2)$).
We first construct a finite set of
formulas $\Phi(x;y)$ such that for any $s\in M^{|y|}$ the set
$\varphi(M;s)$ is a Boolean combination of $\Phi(x;s)$-definable sets,
and formulas in $\Phi(x;y)$ have a very simple form, and then we
construct a definable cell decomposition $\CT$ for
$\Phi(x;y)$ (hence also for $\varphi$) with $|\CT(S)|=O(|S|^2)$.
\medskip
Let $\varphi_1(x_1;x_2,y)$ be the formula
$\varphi(x_1,x_2;y)$.
Using $o$-minimality and definable choice we can find definable functions
$h_1,\dotsc,h_k\colon M\times M^{|y|}\to M$ such that
\[
h_1(a,s)\leq h_2(a,s)\leq \dotsb \leq h_k(a,s) \text{ for all } a\in
M, s\in M^{|y|} \]
and for all $a\in M, s\in M^{|y|}$ and $i=0,\dotsc,k$ we have
\[ h_i(a,s) < x_1,x_1' < h_{i+1}(a,s) \rightarrow [\varphi_1(x_1;a,s)\leftrightarrow
\varphi_1(x_1';a,s)], \]
where for convenience we let $h_0(a,s)=-\infty$ and $
h_{k+1}(a,s)=+\infty$.
\medskip
At this point we have that for a fixed $i=0,\dotsc,k$ for all $a\in
M$, $s\in M^{|y|}$ the truth value of $\varphi(x_1,a; s)$ is constant on
the interval $ h_i(a,s) < x_1 < h_{i+1}(a,s)$ but may vary if we
perturb $a$. We need to partition $M$ into pieces where this truth
value does not depend on $a$.
For $a,a'\in M$ and $s\in M^{|y|}$ we define the relation $a
\sim_s a'$ as
\begin{multline*} a\sim_s a' \text{ iff for all } i=0,\dotsc,k
\\ \text{ and any } h_i(a,s) < x_1
< h_{i+1}(a,s), \, h_i(a',s) < x_1' < h_{i+1}(a',s) \\\text{ we have }
\varphi_1(x_1;a,s)\leftrightarrow
\varphi_1(x_1';a',s).
\end{multline*}
Clearly $\sim_s$ is an equivalence relation on $M$ with at most
$2^{k+1}$ classes, uniformly definable in
terms of $s$. Using $o$-minimality and definable choice, we can find definable functions
$u_i\colon M^{|y|}\to M$, $i=1,\dotsc,l$ with $u_1(y)\leq u_2(y) \leq \dotsb\leq
u_l(y)$ such that for all $s\in M^{|y|}$ and $i=0,\dotsc,l$ we have
\[ u_i(s)< x_2,x_2' <u_{i+1}(s) \rightarrow x_2\sim_s x_2', \]
where again for convenience we use $u_0(y)=-\infty$ and
$u_{l+1}(y)=+\infty$.
We would prefer that for $s\in M^{|y|}$ each of the
functions $x_2\mapsto h_i(x_2,s)$ is continuous.
For $k \in \mathbb{N}$, we will write $[k]$ to denote the set $\{1,2, \ldots, k\}$. Since every definable function is piecewise continuous, we can further
partition $M$ and in addition require that for any $i=0,\dotsc,l$,
$j\in[k]$ and
every $s\in M^{|y|}$ the function $x_2\mapsto h_j(x_2,s)$ is continuous on
the interval $u_i(s) <x_2 < u_{i+1}(s)$.
\medskip
We take $\Phi(x;y)$ to be the following set of formulas (recall that $x=(x_1,x_2)$):
\begin{gather*}
\{ x_2 = u_i(y) \colon i\in [l]\} \cup \{ x_2 < u_i(y) \colon i \in [l] \} \\
\cup \, \{ x_2 > u_i(y) \colon i\in [l] \}\cup \{ x_1=h_i(x_2,y)\colon i\in [k]\} \\
\cup \, \{ x_1<h_i(x_2,y)\colon i\in [k]\} \cup \{ x_1>h_i(x_2,y)\colon i\in [k]\}.
\end{gather*}
It is not hard to see that for any $s\in M^{|y|}$ the set $\varphi(M;s)$
is a Boolean combination of $\Phi(x;s)$-definable sets.
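For instance, in the semialgebraic case take $\varphi(x_1,x_2;y)$ with $|y|=1$ defining the open disc $x_1^2+x_2^2<y$ (a hypothetical illustration; the functions below are one possible choice). Then we may take $k=2$ with
\[ h_1(x_2,y)=-\sqrt{\max\{y-x_2^2,0\}}, \qquad h_2(x_2,y)=\sqrt{\max\{y-x_2^2,0\}}, \]
and $l=2$ with $u_1(y)=-\sqrt{\max\{y,0\}}$, $u_2(y)=\sqrt{\max\{y,0\}}$. For $u_1(s)<x_2<u_2(s)$ the fiber $\{x_1 \colon \varphi(x_1,x_2;s)\}$ is exactly the interval $h_1(x_2,s)<x_1<h_2(x_2,s)$, and it is empty otherwise, so $\varphi(M;s)$ is the intersection of the four $\Phi(x;s)$-definable sets $x_2>u_1(s)$, $x_2<u_2(s)$, $x_1>h_1(x_2,s)$ and $x_1<h_2(x_2,s)$.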
We now
proceed with a construction of a definable cell
decomposition for $\Phi(x;y)$. \\
Geometrically we view $M^2$ as the $(x_1,x_2)$-plane, with $x_1$ on the
vertical axis and $x_2$ on the horizontal axis. Then the
$\Phi(x;S)$-definable sets partition the plane by vertical lines
$x_2=u_i(s)$ and ``horizontal'' ``curves'' $x_1=h_j(x_2,s)$.
Unfortunately we cannot use the complete $\Phi$-types over $S$ as
$\CT(S)$: since $S$ is finite, every complete $\Phi$-type is equivalent
to a formula, but in general we cannot get uniform definability.
Consider the simple example of a partition of the plane by straight lines,
i.e. the case when we have no functions $u_i$ and
only one function $h(x_2,a,b)$ defining the straight
lines $x_1=ax_2+b$. In the example below all points in the gray area have the same
$\Phi$-type, but we need at least $5$ lines to describe the region; in general, this
number may be arbitrarily large.\\
\begin{tikzpicture}[scale=0.4]
\draw[->,ultra thick] (-0.5,0.5)--(20,0.5) node[right]{$x_2$};
\draw[->,ultra thick] (0.5,-0.5)--(0.5,10) node[left]{$x_1$};
\draw[thick] (0,2)--(20,2);
\draw[thick] (0,4)--(6,10);
\draw[thick] (0,6)--(8,0);
\draw[thick] (0,7)--(20,7);
\draw[thick] (8,10)--(20,4);
\draw[thick] (0, 0.66)--(20,7.5);
\fill[gray!50] (1.2,5.15)--(3,7) --(14,7)--(15.85,6.1)--(4.9,2.38)--cycle;
\end{tikzpicture}
We could solve this problem by using also vertical lines through all
points of intersections, as shown
below, but then the size of the partition would be $O(|S|^3)$. \\
\begin{tikzpicture}[scale=0.4]
\draw[->,ultra thick] (-0.5,0.5)--(20,0.5) node[right]{$x_2$};
\draw[->,ultra thick] (0.5,-0.5)--(0.5,10) node[left]{$x_1$};
\draw[thick] (0,2)--(20,2);
\draw[thick] (0,4)--(6,10);
\draw[thick] (0,6)--(8,0);
\draw[thick] (0,7)--(20,7);
\draw[thick] (8,10)--(20,4);
\draw[thick] (0, 0.66)--(20,7.5);
\draw[dashed] (1.15,0) -- (1.15,10);
\draw[dashed] (3,0) -- (3,10);
\draw[dashed] (4.9,0) -- (4.9,10);
\draw[dashed] (5.4,0) -- (5.4,10);
\draw[dashed] (14,0) -- (14,10);
\draw[dashed] (15.85,0)--(15.85, 10);
\draw[dashed] (18.5,0)--(18.5, 10);
\end{tikzpicture}
Using the idea of ``vertical decomposition'' from \cite{13}
we add only vertical line segments where they are
needed, i.e. from an intersection point to the first line above (or plus infinity) and
the first line below (or minus infinity), as in the following picture.
\begin{tikzpicture}[scale=0.4]
\draw[->,ultra thick] (-0.5,0.5)--(20,0.5) node[right]{$x_2$};
\draw[->,ultra thick] (0.5,-0.5)--(0.5,10) node[left]{$x_1$};
\draw[thick] (0,2)--(20,2);
\draw[thick] (0,4)--(6,10);
\draw[thick] (0,6)--(8,0);
\draw[thick] (0,7)--(20,7);
\draw[thick] (8,10)--(20,4);
\draw[thick] (0, 0.66)--(20,7.5);
\draw[dashed] (1.15,2) -- (1.15,7);
\draw[dashed] (3,3.8) -- (3,10);
\draw[dashed] (4.9,2) -- (4.9,7);
\draw[dashed] (5.4,0) -- (5.4,2.55);
\draw[dashed] (14,5.5) -- (14,10);
\draw[dashed] (15.85,2)--(15.85, 7);
\draw[dashed] (18.5,4.69)--(18.5, 10);
\end{tikzpicture}
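As a purely illustrative aside, the following Python sketch (our own, with hypothetical naming) computes, for the special case of an arrangement of straight lines, the endpoints $p^-$ and $p^+$ of the two vertical segments added at each intersection point, exactly as in the last picture; there are at most $\binom{n}{2}$ intersection points, hence $O(n^2)$ added segments:
\begin{verbatim}
import itertools, random

def vertical_extensions(lines, eps=1e-9):
    # lines are non-vertical lines x1 = a*x2 + b, given as (a, b)
    # pairs (a stand-in for the curves hat h_i(s) in the line case).
    # For each intersection point p we compute p^- and p^+: the
    # nearest line strictly below resp. above p on the vertical line
    # through p (or -inf / +inf if there is none).
    walls = []
    for (a1, b1), (a2, b2) in itertools.combinations(lines, 2):
        if a1 == a2:
            continue  # parallel lines never intersect
        x2 = (b1 - b2) / (a1 - a2)  # horizontal coordinate of p
        x1 = a1 * x2 + b1           # vertical coordinate of p
        heights = [a * x2 + b for (a, b) in lines]
        above = [h for h in heights if h > x1 + eps]
        below = [h for h in heights if h < x1 - eps]
        p_plus = min(above) if above else float('inf')
        p_minus = max(below) if below else float('-inf')
        walls.append(((x2, x1), p_minus, p_plus))
    return walls

rng = random.Random(1)
lines = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(6)]
print(len(vertical_extensions(lines)))  # at most C(6,2) = 15
\end{verbatim}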
Our general case is slightly more complicated since the functions
$x_2\mapsto h_i(x_2,s)$ are neither linear nor even continuous, only
piecewise continuous, so their graphs may intersect without one crossing
the other. \\
For $i\in [l]$ and $s\in M^{|y|}$ we will denote by $\hat u_i(s)$ the
corresponding vertical line
\[ \hat u_i(s) :=\{ (x_1,x_2)\in M^2 \colon x_2=u_i(s)\},
\]
and also for $i\in [k]$ and $s\in M^{|y|}$ we will denote by $\hat h_i(s)$
the ``curve''
\[ \hat h_i(s) :=\{ (x_1,x_2)\in M^2 \colon x_1=h_i(x_2,s)\}. \]
For $i,j\in [k]$, $s_1,s_2\in M^{|y|}$ and $(a,b)\in M^2$ we say that
$\hat h_i(s_1)$ and $\hat h_j(s_2)$ \emph{properly intersect} at
$(a,b)$ if $(a,b)\in \hat h_i(s_1)\cap \hat h_j(s_2)$ and
$\hat h_i(s_1),\hat h_j(s_2)$ have different germs at $(a,b)$.
Formally it means that
$a=h_i(b,s_1)=h_j(b,s_2)$ and for any $\varepsilon>0$ there is
$b'\in (b-\varepsilon, b+\varepsilon)$ with $h_i(b',s_1)\neq
h_j(b',s_2)$. We will denote by $\hat h_i(s_1) \sqcap \hat h_j(s_2)$
the set of all points $(a,b)\in M^2$ where $\hat h_i(s_1)$ and $
\hat h_j(s_2)$ intersect properly. It is easy to see using
$o$-minimality that the set
$\hat h_i(s_1) \sqcap \hat h_j(s_2)$
is finite and there is $N_l\in \mathbb N$ such that
$|\hat h_i(s_1) \sqcap \hat h_j(s_2)|\leq N_l$ for all $i,j\in [k]$
and $s_1,s_2\in M^{|y|}$. Also all points in $\hat h_i(s_1)\sqcap \hat
h_j(s_2)$ are definable over $s_1,s_2$, i.e. there are definable
functions $f_{i,j}^m(y_1,y_2)$ with $m\in [N_l]$ such that for all
$s_1,s_2$ the set $\hat h_i(s_1)\sqcap \hat
h_j(s_2)$ is either empty or it is exactly $\{ f_{i,j}^m(s_1,s_2) \colon
m\in [N_l] \}$.
\medskip
We will construct a definable cell decomposition $\CT(S)$ for
$\Phi(x;y)$ as a union of 5 families of
cells:
\begin{itemize}
\item $\CT_0(S)$ -- 0-dimensional cells, i.e. points;
\item $\CT_1^u(S)$ -- 1-dimensional ``vertical'' cells;
\item $\CT_1^e(S)$ -- extra 1-dimensional vertical cells;
\item $\CT_1^h(S)$ -- 1-dimensional ``horizontal'' cells;
\item $\CT_2(S)$ -- 2-dimensional cells.
\end{itemize}
For each family $\CT_{\star}^\star(S)$ we will have
$|\CT_\star^\star(S)|=O(|S|^2)$, and also we will have appropriate
uniformly definable $\Psi_\star^\star(S)$ and $\mathcal I_\star^\star(\Delta)$
so that $\CT^\star_\star(S)=\{ \Delta\in \Psi_\star^\star(S)
\colon \mathcal I_\star^\star(\Delta) \cap S =\emptyset\}$.
\medskip
\textbf{ The family $\mathbf{\CT_0(S)}$.} We take $\CT_0(S)$ to be the
set of all points of intersection of the vertical lines $\hat u_i(s)$ with the
curves $\hat h_j(s')$, together with all points where two
curves $\hat h_i(s)$ and $\hat h_j(s')$ intersect properly. I.e.,
\begin{gather*}
\CT_0(S)=\bigcup \{ \hat u_i(s)\cap \hat h_j(s') \colon
i\in [l]; j\in [k]; s,s'\in S\}\, \\
\cup\, \bigcup\, \{ \hat h_i(s) \sqcap \hat h_j(s') \colon i,j\in [k];
s,s'\in S \}.
\end{gather*}
We take $\Psi_0(S) :=\CT_0(S)$ and $\mathcal I_0(\Delta) :=\emptyset$.
It is easy to see that $\Psi_0(S)$ is uniformly definable.
We also have $|\CT_0(S)|\leq k l|S|^2 + N_lk^2|S|^2 =O(|S|^2)$.
\medskip
\textbf{ The set $\mathbf{\CT_1^u(S)}$.}
For fixed $i\in [l]$ and $s\in S$ let
$I_i^s$ be the set of all definably connected components of $\hat
u_i(s) \setminus \CT_0(S)$.
Since
\[ \hat u_i(s) \cap \CT_0(S)=\bigcup\{ \hat u_i(s)\cap \hat h_j(s')\colon
j\in [k], s'\in S\}, \]
we have $|I_i^s|\leq (k+1)|S|$, and every $\Delta\in I_i^s$ has form
\[ \Delta=\{ (x_1,x_2) \in M^2 \colon x_2=u_i(s); h_j(x_2,s_1) < x_1 < h_{j'}(x_2,s_2) \}, \]
for some $j,j'\in \{ 0,\dotsc, k+1\}$, and $s_1,s_2\in S$.
We take $\CT^u_1 (S)$ to be the union of all $I_i^s$ for $i\in
[l]$ and $s\in S$. Clearly $|\CT_1^u(S)|\leq l(k+1)|S|^2 =O(|S|^2)$.
We take $\Psi_1^u(S)$ to be the set of all vertical line segments
of the form
$\{ (x_1,x_2) \in M^2 \colon x_2=u_i(s); h_j(x_2,s_1) < x_1 < h_{j'}(x_2,s_2)
\}$, for $i\in [l]$, $j,j'\in \{ 0,\dotsc, k+1\}$, $s,s_1,s_2\in S$.
For $\Delta\in \Psi_1^u(S)$ we take $\mathcal I_1^u(\Delta) :=\{ s\in M^{|y|} \colon
\Phi(x;s) \text{ crosses } \Delta\}$.
It is not hard to see that $\Psi_1^u$ and $\mathcal I_1^u$ are uniformly
definable and $\CT_1^u(S)= \{ \Delta\in \Psi_1^u(S) \colon
\mathcal I_1^u(\Delta)\cap S = \emptyset\}$.
\medskip
\textbf{ The set $\mathbf{\CT_1^e(S)}$.} For each point where two horizontal curves intersect properly we add two vertical
line segments: one from the point to the curve above (or to plus infinity if there is no curve above) and one to the
curve below (or to minus infinity if there is no curve below).
Let $i,j\in [k]$, $s,s_1\in S$ and $p=(p_1,p_2)\in \hat h_i(s) \sqcap
\hat h_j(s_1)$.
Let \[p^+ :=\inf \{ h_m(p_2,s') \colon m=1,\dotsc,k+1; s'\in S;
h_m(p_2,s')>p_1\},\]
and
\[p^- :=\sup\{ h_m(p_2,s') \colon m=0,\dotsc,k; s'\in S;
h_m(p_2,s')<p_1\}.\]
We define $I_p^+ :=\{ (x_1,x_2)\in M^2 \colon x_2=p_2;\, p_1<x_1<p^+
\}$, $I_p^- :=\{ (x_1,x_2)\in M^2 \colon x_2=p_2;\, p^-<x_1<p_1
\}$; and take
\[ \CT_1^e(S) := \{ I_p^+, I_p^- \colon p\in \hat h_i(s) \sqcap
\hat h_j(s_1); \, i,j\in [k]; \, s,s_1\in S\}. \]
Obviously $|\CT_1^e(S)| \leq 2N_l k^2|S|^2 =O(|S|^2)$.
We take $\Psi_1^e(S)$ to be the family of all sets of the form
\[ \{ (x_1,x_2)\in M^2 \colon x_2=p_2;\, p_1<x_1< h_m(p_2,s')
\}\]
for all $i,j\in [k]$, $m\in \{ 1,\dotsc,k+1\}$, $s,s_1,s'\in S$, and
$p=(p_1,p_2)\in \hat h_i(s) \sqcap
\hat h_j(s_1)$; and of the form
\[ \{ (x_1,x_2)\in M^2 \colon x_2=p_2;\, h_m(p_2,s')<x_1< p_1
\}\]
for all $i,j\in [k]$, $m\in \{ 0,\dotsc,k\}$, $s,s_1,s'\in S$, and
$p=(p_1,p_2)\in \hat h_i(s) \sqcap
\hat h_j(s_1)$. It is not hard to see that $\Psi_1^e(S)$ is uniformly
definable.
For $\Delta\in \Psi_1^e(S)$ we take $\mathcal I_1^e(\Delta) :=\{ s\in M^{|y|} \colon
\Phi(x;s) \text{ crosses } \Delta\}$. It is not hard to see
$\mathcal I_1^e(\Delta)$ is uniformly
definable and $\CT_1^e(S)= \{ \Delta\in \Psi_1^e(S) \colon
\mathcal I_1^e(\Delta)\cap S = \emptyset\}$.
\medskip
\textbf{ The set $\mathbf{\CT_1^h(S)}$.}
Given $i\in [k]$ and $s\in S$, let $J_i^s$ be the set of all definably
connected components of $\hat h_i(s) \setminus \CT_0(S)$.
It is easy to see that
\begin{multline*}
\hat h_i(s)\cap \CT_0(S)=\{ \hat h_i(s)\cap \hat u_j(s')\colon j\in
[l]; s'\in S \} \\
\cup\, \{ \hat h_i(s)\sqcap \hat h_j(s')\colon j\in
[k]; s'\in S \}.
\end{multline*}
In particular $|J_i^s|\leq (l+N_l k+1)|S|$.
We take $\CT_1^h(S)$ to be the union of all $J_i^s$ for $i\in [k]$,
$s\in S$. Clearly $|\CT_1^h (S)|\leq k(l+N_l k+1)|S|^2=O(|S|^2)$.
Given $i\in [k]$, $s\in S$ and $s_1,s_2\in S$ let
$\mathcal A_{i,s}[s_1,s_2]$ be the family of all sets of the form $\{ (x_1,x_2)\in \hat h_i(s) \colon c_1<x_2 < c_2
\},$ with
\begin{multline*}
c_1,c_2\in \{ u_j(s_1) \colon j\in [l]\}\\
\cup \,\{ p_2
\colon (p_1,p_2) \in \hat h_i(s) \sqcap \hat h_j(s_2) \text{ for some }
p_1\}\cup
\{ \pm\infty\}.
\end{multline*}
We take $\Psi_1^h(S)$ to be the union of all $\mathcal A_{i,s}[s_1,s_2]$ with $i\in
[k]$ and $s,s_1,s_2\in S$.
It is not hard to see that
$\Psi_1^h(S)$ is uniformly
definable and $\CT_1^h(S)= \{ \Delta\in \Psi_1^h(S) \colon
\mathcal I_1^h(\Delta)\cap S = \emptyset\}$, where $\mathcal I_1^h(\Delta)=\{ s\in M^{|y|} \colon
\Phi(x;s) \text{ crosses } \Delta\}$.
\medskip
\textbf{ The set $\mathbf{\CT_2(S)}$.} For the family $\CT_2(S)$ we
take all definably connected components of $M^2\setminus
(\CT_0(S)\cup
\CT_1^u(S)\cup
\CT_1^e(S)\cup
\CT_1^h(S))$.
Given $i,j\in \{0,\dotsc,k+1\}$, $s_1,s_2\in S$ and $c_1<c_2\in
M\cup\{\pm\infty\}$
with $h_i(x_2,s_1)< h_j(x_2,s_2)$ for all $x_2\in (c_1,c_2)$,
let
$A_{i,s_1}^{j,s_2}(c_1,c_2)$ be the set
\[ A_{i,s_1}^{j,s_2} (c_1,c_2) =\{ (x_1,x_2)\in M^2\colon c_1<x_2 < c_2;\,
h_i(x_2,s_1)< x_1< h_j(x_2,s_2) \}. \]
It is not hard to see that if $\Delta\in \CT_2(S)$ then $\Delta=
A_{i,s_1}^{j,s_2} (c_1,c_2)$ for some $i,j\in\{0,\dotsc,k+1\}$,
$s_1,s_2\in S$ and $c_1,c_2$ belonging to the following
set:
\begin{gather*}S_{i,s_1}^{j,s_2}=\{ u_{i'}(s') \colon i'\in \{ 0,\dotsc, l+1\}; s'\in
S\} \\
\cup\,\{ p_2 \colon (p_1,p_2)\in \hat h_i(s_1)\sqcap \hat h_{i'}(s')
\text{ for some } i'\in [k], s'\in S, p_1\in M \} \\
\cup\,\{ p_2 \colon (p_1,p_2)\in \hat h_j(s_2)\sqcap \hat h_{i'}(s')
\text{ for some } i'\in [k], s'\in S, p_1\in M \}.
\end{gather*}
We take $\Psi_2(S)$ to be the family of all $A_{i,s_1}^{j,s_2}
(c_1,c_2)$, for all $c_1,c_2\in S_{i,s_1}^{j,s_2}$.
It is not hard to see that $\Psi_2(S)$ is uniformly definable family,
and we have $\CT_2(S) \subseteq \Psi_2(S)$.
It is also not hard to see that a set $\Delta\in \Psi_2(S)$ is in
$\CT_2(S)$ if and only if it is not crossed by $\Phi(x;S)$, and is also not crossed
by any line segment in $\CT_1^e(S)$.
Hence a set $\Delta=A_{i,s_1}^{j,s_2}(c_1,c_2)\in \Psi_2(S)$ \textbf{ is not in } $\CT_2(S)$ if and
only if there is $s\in S$ satisfying at least one of the following conditions.
\begin{enumerate}[(C1)]
\item $\Phi(x;s)$ crosses $\Delta$.
\item There are $i'\in [k]$ and
$(p_1,p_2)\in \hat h_i(s_1) \sqcap \hat h_{i'}(s)$ with
$c_1<p_2<c_2$.
\item There are $i'\in [k]$ and
$(p_1,p_2)\in \hat h_j(s_2) \sqcap \hat h_{i'}(s)$ with
$c_1<p_2<c_2$.
\end{enumerate}
For $\Delta\in \Psi_2(S)$ we take $\mathcal I_2(\Delta)$ to be the set of all
$s\in M^{|y|}$ satisfying any of the conditions $(C1)-(C3)$. It is not
hard to see that $\mathcal I_2(\Delta)$ is uniformly definable and
$\CT_2(S)=\{ \Delta\in \Psi_2(S) \colon \mathcal I_2(\Delta)\cap S
=\emptyset\}$.
\medskip
We are left to check that $|\CT_2(S)|=O(|S|^2)$.
Since $\CT_2(S)$ consists of definably connected components of
$M^2\setminus
(\CT_0(S)\cup
\CT_1^u(S)\cup
\CT_1^e(S)\cup
\CT_1^h(S))$, any two $\Delta,\Delta'\in \CT_2(S)$ are either disjoint
or coincide, hence every $\Delta\in \CT_2(S)$ is completely determined
by its ``left lower corner'', i.e. if $\Delta=A_{i,s_1}^{j,s_2}(c_1,c_2)$
and $\Delta'=A_{i,s_1}^{j',s'_2}(c_1,c'_2)$ are in $\CT_2(S)$ then
$\Delta=\Delta'$.
We divide $\CT_2(S)$ into 4 disjoint families:
\begin{itemize}
\item The family $F_1(S)$ of all $A_{i,s_1}^{j,s_2}(c_1,c_2)\in \CT_2(S)$
with $c_1=-\infty$.
\item The family $F_2(S)$ of all $A_{i,s_1}^{j,s_2}(c_1,c_2)\in \CT_2(S)$
with $c_1=u_{i'}(s')$ for some $i' \in [l]$ and $s'\in S$.
\item The family $F_3(S)$ of all $A_{i,s_1}^{j,s_2}(c_1,c_2)\in \CT_2(S)$
that are not in $F_2(S)$ and $(p_1,c_1)\in \hat h_i(s_1)\sqcap \hat
h_{i'}(s')$ for some $i' \in [k]$, $s'\in S$, and $p_1\in M$.
\item The family $F_4(S)$ of all $A_{i,s_1}^{j,s_2}(c_1,c_2)\in \CT_2(S)$
that are not in $F_1(S)\cup F_2(S)\cup F_3(S)$. In this case we have that $\{ (x_1,c_1) \colon h_i(c_1,s_1)< x_1
< h_j(c_1,s_2) \}\in \CT_1^e(S)$.
\end{itemize}
Every $A_{i,s_1}^{j,s_2}(c_1,c_2)\in F_1(S)$ is completely determined by
$i$ and $s_1$, hence $|F_1(S)|\leq (k+1)|S|$ (we get $k+1$, since we
allow $i=0$).
Every $A_{i,s_1}^{j,s_2}(c_1,c_2)\in F_2(S)$ is completely determined by
$i$, $s_1$, some $i'\in [l]$ and $s'\in S$. Hence
$|F_2(S)|\leq (k+1)l|S|^2$.
Since $|\hat h_i(s_1)\sqcap \hat h_{i'}(s')|\leq N_l$ we have $|F_3(S)|\leq k^2N_l|S|^2$.
Finally, each $A_{i,s_1}^{j,s_2}(c_1,c_2)\in F_4(S)$ is completely
determined by its ``left side'' $\{ (x_1,c_1) \colon h_i(c_1,s_1)< x_1
< h_j(c_1,s_2) \}$ that is in $\CT_1^e(S)$. Since
$|\CT_1^e(S)|=O(|S|^2)$, we also have $|F_4(S)|=O(|S|^2)$.
Therefore $|\CT_2(S)|=O(|S|^2)$.
\medskip
Taking $\CT(S)=\CT_0(S)\cup
\CT_1^u(S)\cup
\CT_1^e(S)\cup
\CT_1^h(S)\cup \CT_2(S)$ we obtain a definable cell decomposition
for $\Phi(x;y)$ with $|\CT(S)|=O(|S|^2)$.
\section{Planar Zarankiewicz's problem in distal structures}\label{sec: Zarank}
\subsection{Zarankiewicz's problem}
Zarankiewicz's problem in graph theory asks for the largest possible number of edges in a bipartite graph on a given number of vertices that contains no complete bipartite subgraph of a given size.
In \cite{zaran} the authors investigate Zarankiewicz's problem for semialgebraic graphs of bounded description complexity, a setting which in particular subsumes many incidence-type questions.
In particular, they prove the following upper bound on the number of edges (they have more general results in $\mathbb{R}^n$ for arbitrary $n$ as well, but here we will be only concerned with the ``planar'' case).
\begin{fact} \cite[Theorem 1.1]{zaran}\label{fac: semialg zarank}
Let $E \subseteq \mathbb{R}^2 \times \mathbb{R}^2$ be a semi-algebraic relation such that $E$ has description complexity at most $t$ (i.e., $E$ can be defined as a Boolean combination of at most $t$ polynomial inequalities, with all of the polynomials involved of degree at most $t$). Then for any $k \in \mathbb{N}$ there is some constant $c = c(t,k)$ satisfying the following.
If $P, Q \subseteq \mathbb{R}^2$ with $|P|=m, |Q|=n$ are such that $E \cap (P \times Q)$ doesn't contain a copy of $K_{k,k}$ (the complete bipartite graph with both parts of size $k$), then
$$|E(P,Q)| \leq c \left( (mn)^{\frac{2}{3}} + m + n \right),$$
where $E(P,Q) = E \cap (P \times Q)$.
\end{fact}
\begin{rem} This result is a natural generalization of the Szemer\'edi-Trotter theorem over $\mathbb{R}$ \cite{szemeredi1983extremal}. Namely, if $P$ is a set of points on the plane, $Q$ is the dual set of lines (i.e. lines are semialgebraically coded by points in $\mathbb{R}^2$), and $E$ is the incidence relation (which is also clearly semialgebraic), then $E(P,Q)$ is $K_{2,2}$-free, as any two distinct lines intersect in at most one point.
\end{rem}
We will give a common generalization of Fact \ref{fac: semialg zarank} and the semialgebraic ``points / planar curves'' incidence bound from \cite[Theorem 4]{pach1992repeated} to arbitrary definable families admitting a quadratic distal cell decomposition (e.g. any definable family of subsets of $M^2$ in an $o$-minimal expansion of a field). To state the result, we first recall the notion of the VC-density of a formula (and refer to \cite{VCD1} for a detailed discussion).
\begin{defn}
\begin{enumerate}
\item Given a set $X$ and a family $\mathcal F$ of subsets of $X$, the \emph{shatter function} $\pi_{\mathcal F}: \mathbb{N} \to \mathbb{N}$ of $\mathcal F$ is defined as
$$ \pi_{\mathcal F}(n) := \max \{ |\mathcal F \cap A| : A \subseteq X, |A| = n \}, $$
where $\mathcal F \cap A = \{ S \cap A : S \in \mathcal F \}$.
\item The \emph{VC-density} of $\mathcal F$, or $\operatorname{vc}(\mathcal F)$, is defined as the infimum of all real numbers $r$ such that $\pi_{\mathcal F}(n) = O(n^r)$ (and $\operatorname{vc}(\mathcal F) = \infty$ if there is no such $r$).
\item Given a formula $\varphi(x;y)$, we define its VC density $\operatorname{vc}(\varphi)$ as the VC-density of the family $\{ \varphi(M,b) : b \in M^{|y|}\}$ of subsets of $M^{|x|}$.
\item Given a formula $\varphi(x;y)$, we consider its dual formula $\varphi^*(y;x) := \varphi(x;y)$ obtained by interchanging the roles of the variables. It is then easy to see that the family $\{ \varphi^*(M; a) : a \in M^{|x|}\}$ of subsets of $M^{|y|}$ is the dual set system for the family $\{ \varphi(M; b) : b \in M^{|y|}\}$ of subsets of $M^{|x|}$.
\end{enumerate}
\end{defn}
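For a finite family on a small finite ground set, the shatter function can be computed directly from the definition. The following Python sketch (an illustration only; the example family is our own choice) does so by brute force:
\begin{verbatim}
from itertools import combinations

def shatter(F, X, n):
    # pi_F(n): the largest number of distinct traces {S cap A : S in F}
    # over all n-element subsets A of the finite ground set X.
    return max(len({frozenset(S & set(A)) for S in F})
               for A in combinations(X, n))

# Initial segments of {0,...,7}: a family of VC-density 1, whose
# shatter function is pi(n) = n + 1.
X = list(range(8))
F = [set(x for x in X if x <= b) for b in X]
print([shatter(F, X, n) for n in range(1, 5)])  # [2, 3, 4, 5]
\end{verbatim}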
VC-density in various classes of NIP structures is investigated e.g. in \cite{VCD1, VCD2}, and the optimal bounds are known in some cases, including $o$-minimal structures.
\begin{fact} \cite[Theorem 6.1]{VCD1}\label{fac: vc bound in o-min}
Let $\mathcal{M}$ be an $o$-minimal structure, and let $\varphi(x;y)$ be any formula. Then $\operatorname{vc}(\varphi^*) \leq |x|$.
\end{fact}
\begin{rem}\label{rem: distal decomp bounds dual VC density}
Let $\varphi(x;y)$ be a formula admitting a distal cell decomposition $\CT$ with $|\CT(S)| = O(|S|^d)$. Then $\operatorname{vc}(\varphi^*) \leq d$.
Indeed, recalling Definition \ref{def: def cell decomp}, given any finite $S \subseteq M^{|y|}$ and $\Delta \in \CT(S)$, we have $S \cap \varphi^*(M,a) = S \cap \varphi^*(M,a')$ for any $a,a' \in \Delta$ (and the sets in $\CT(S)$ give a covering of $M^{|x|}$), hence only $O(|S|^d)$ different subsets of $S$ are cut out by the instances of $\varphi^*(y;x)$.
\end{rem}
We will need the following weaker bound that applies to graphs of bounded VC-density.
\begin{fact} \label{VCBoundOnEdges}
\cite[Theorem 2.1]{zaran} For every $c, d, k$ there is some constant $c_1 = c_1(c,d,k)$ such that the following holds.
Let $E \subseteq P \times Q$ be a bipartite graph with $|P|=m, |Q|=n$ such that the family of sets $\mathcal F = \{ E(q) : q \in Q \}$ satisfies $\pi_{\mathcal{F}}(z) \leq c z^d$ for all $z \in \mathbb{N}$ (where $E(q) = \{ p \in P : (p,q) \in E \}$). Then if $E$ is $K_{k,k}$-free, we have
$$|E(P,Q)| \leq c_1(m n^{1- 1/d}+n).$$
\end{fact}
We are ready to prove the main theorem of this section.
\begin{thm}\label{thm: distal Zarank}
Let $\mathcal{M}$ be a structure, and assume that $E(x,y)$ is a formula admitting a distal cell decomposition $\CT$ with $|\CT(S)| = O(|S|^2)$ and such that $\operatorname{vc}(E) \leq d$. Then for any $k \in \mathbb{N}$ there is a constant $c = c(E,k)$ satisfying the following.
For any finite $P \subseteq M^{|x|}, Q \subseteq M^{|y|}$, $|P|=m, |Q| = n$, if $E(P,Q)$ is $K_{k,k}$-free, then we have:
$$ |E(P,Q)| \leq c \left( m^{\frac{d}{2d-1}} n^{\frac{2d-2}{2d-1}} + m + n \right).$$
\end{thm}
\begin{proof}
Our argument is a generalization of the proofs of \cite[Theorem 3.2]{zaran} and \cite[Theorem 4]{pach1992repeated}.
If $n > m^d$, then by Fact \ref{VCBoundOnEdges} and assumption we have
$$|E(P,Q)| \leq c_0 ( m n^{1 - \frac{1}{d}} + n) \leq c_0 (n^{\frac{1}{d}} n^{1 - \frac{1}{d}} + n) = 2c_0n$$
for some $c_0 = c_0(E,d,k)$, and we are done.
Hence we assume $n \leq m^d$.
Let $r := \frac{m^{\frac{d}{2d-1}}}{n^{\frac{1}{2d-1}}}$, and consider the family $\Sigma = \{E(M,q) : q \in Q\}$ of subsets of $M^{|x|}$. By assumption and Theorem \ref{lem-r_cut}, there is a family $\mathcal{C}$ of subsets of $M^{|x|}$ giving a $1/r$-cutting with respect to $\Sigma$. That is, $M^{|x|}$ is covered by the union of the sets in $\mathcal{C}$ and any of the sets $C \in \mathcal{C}$ is crossed by at most $|\Sigma|/r$ elements from $\Sigma$. Moreover, $|\mathcal{C}| \leq c_1 r^2$ for some $c_1 = c_1(E)$.
Then there is a set $C \in \mathcal{C}$ containing at least $\frac{m}{c_1 r^2} = \frac{n^{\frac{2}{2d-1}}}{c_1 m ^{\frac{1}{2d-1}}}$ points from $P$. Let $P' \subseteq P \cap C$ be a subset of size exactly $\lceil \frac{n^{\frac{2}{2d-1}}}{c_1 m ^{\frac{1}{2d-1}}} \rceil$.
If $|P'| < k$, we have $\frac{n^{\frac{2}{2d-1}}}{c_1 m ^{\frac{1}{2d-1}}} \leq |P'| < k$, so $n < k^{\frac{2d-1}{2}} c_1^{\frac{2d-1}{2}}m^{\frac{1}{2}}$.
Then by Fact \ref{VCBoundOnEdges} applied to the dual formula $E^*$ (and using Remark \ref{rem: distal decomp bounds dual VC density}) we have
$$|E(P,Q)| \leq c_2(n m^{1 - \frac{1}{2}} + m) \leq c_2(k^{\frac{2d-1}{2}} c_1^{\frac{2d-1}{2}}m^{\frac{1}{2}} m^{\frac{1}{2}} + m) \leq c_3 m$$
for some $c_3 = c_3 (E,k)$, so we are done.
Hence we may assume that $|P'| \geq k$.
Let $Q'$ be the set of all points $q \in Q$ such that $E(M,q)$ crosses $C$. We know that
$$|Q'| \leq \frac{|Q|}{r} \leq \frac{n n^{\frac{1}{2d-1}}}{m^{\frac{d}{2d-1}}} = \frac{n^{\frac{2d}{2d-1}}}{m^{\frac{d}{2d-1}}} \leq c_1^d |P'|^d.$$
Again by Fact \ref{VCBoundOnEdges} we get
$$|E(P',Q')| \leq c_4(|P'| |Q'|^{1-\frac{1}{d}} + |Q'|) \leq c_4 (|P'|c_1^d |P'|^{d-1} + c_1^d |P'|^d) \leq c_5 |P'|^d$$
for some $c_5 = c_5(E,k)$. Hence there is a point $p \in P'$ such that $|E(p) \cap Q'| \leq c_5 |P'|^{d-1}$.
Since $E(P,Q)$ is $K_{k,k}$-free, there are at most $k-1$ points of $E(p)$ in $Q\setminus Q'$ (otherwise: for every $q \in Q \setminus Q'$ the set $E(M,q)$ does not cross $C$, so if it contains $p \in C$ then it contains all of $C \supseteq P'$; hence $k$ such points of $E(p)$, together with $k$ points of $P'$ (recall $|P'| \geq k$), would give a copy of $K_{k,k}$).
Hence
$$|E(p)| \leq c_5 |P'|^{d-1} + (k-1) \leq c_6 \frac{n^{\frac{2(d-1)}{2d-1}}}{m^{\frac{d-1}{2d-1}}} + (k-1)$$
for some $c_6 = c_6(E,k)$.
We remove $p$ and repeat the argument until no vertices remain in $P$, and see that
$$|E(P,Q)| \leq (c/2)(n+m) + \sum_{i=n^{\frac{1}{d}}}^{m} \left( c_6 \frac{n^{\frac{2(d-1)}{2d-1}}}{i^{\frac{d-1}{2d-1}}} + (k-1) \right)$$
$$ \leq (c/2)(n+m) + c_6 n^{\frac{2(d-1)}{2d-1}} \sum_{i=n^{\frac{1}{d}}}^{m} \frac{1}{i^{\frac{d-1}{2d-1}}} + (k-1) m $$
$$\leq (c/2)(n+m) + 2c_6 n^{\frac{2(d-1)}{2d-1}} m^{1-\frac{d-1}{2d-1}} + (k-1)m$$
$$ \leq c ( m^{\frac{d}{2d-1}} n^{\frac{2d-2}{2d-1}} + m + n)$$
for a sufficiently large $c = c(E,k)$.
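In the penultimate inequality we used the integral comparison (valid since $0 \leq \frac{d-1}{2d-1} < 1$)
$$\sum_{i=n^{\frac{1}{d}}}^{m} \frac{1}{i^{\frac{d-1}{2d-1}}} \leq \int_0^m t^{-\frac{d-1}{2d-1}}\, dt = \frac{2d-1}{d}\, m^{1-\frac{d-1}{2d-1}},$$
and the final step uses $1-\frac{d-1}{2d-1} = \frac{d}{2d-1}$.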
\end{proof}
\subsection{Omitting $K_{k,k}$ versus omitting infinite complete bipartite graphs}
We recall a theorem of Bukh and Matou\v{s}ek.
\begin{fact}\cite[Theorem 1.9]{bm}\label{fac: semialg BM}
For every $d,D$ and $k$ there exists $N$ such that for every semialgebraic relation $R(x_1, \ldots, x_k)$ with $|x_1| = \ldots = |x_k|=d$ of description complexity $D$, the following two conditions are equivalent.
\begin{enumerate}
\item There exist $A_1, \ldots, A_k \subseteq \mathbb{R}^d$ such that $|A_1| = \ldots = |A_k| = N$ and $A_1 \times \ldots \times A_k \subseteq R$.
\item There exist \emph{infinite} sets $A_1, \ldots, A_k \subseteq \mathbb{R}^d$ such that $A_1 \times \ldots \times A_k \subseteq R$.
\end{enumerate}
\end{fact}
We give a generalization of this result for any distal structure in which finite sets in every definable family have a uniform bound on their size.
Recall:
\begin{defn}\label{def: elimination of infinity}
An $\mathcal{L}$-structure $\mathcal{M}$ \emph{eliminates $\exists^\infty$} if for every $\varphi(x,y) \in \mathcal{L}$ there is some $n_\varphi \in \mathbb{N}$ such that for any $b \in M^{|y|}$, $\varphi(M,b)$ is infinite if and only if $|\varphi(M,b)| \geq n_\varphi$.
\end{defn}
We will use the definable strong Erd\H{o}s-Hajnal property for hypergraphs in distal structures from \cite{distal} (and we will use some terminology from that paper in our argument).
\begin{fact}\cite[Corollary 4.6]{distal}\label{fac: definable strong EH in distal}
Let $\mathcal{M}$ be a distal $\mathcal{L}$-structure. Then for every formula $\varphi(x_1, \ldots, x_k; z) \in \mathcal{L}$ there are some $\alpha > 0$ and formulas $\psi_i(x_i, y_i) \in \mathcal{L}$ for $1 \leq i \leq k$ such that the following holds.
For any generically stable Keisler measures $\mu_i$ on $M^{|x_i|}$ and any $c \in M^{|z|}$, there are some $b_i \in M^{|y_i|}$ such that $\mu_i(\psi_i(M^{|x_i|},b_i)) \geq \alpha$ and either $\prod_{1\leq i \leq k} \psi_i(M^{|x_i|},b_i) \subseteq \varphi(M^{|x_1|}, \ldots, M^{|x_k|}; c)$ or $\prod_{1\leq i \leq k} \psi_i(M^{|x_i|},b_i) \subseteq \neg \varphi(M^{|x_1|}, \ldots, M^{|x_k|}; c)$.
\end{fact}
\begin{thm}\label{thm: distal BukhMatousek}
Let $\mathcal{M}$ be a distal $\mathcal{L}$-structure eliminating $\exists^{\infty}$. Then for any formula $\varphi(x_1, \ldots, x_k;z) \in \mathcal{L}$ there is some $N \in \mathbb{N}$ and $\psi_i(x_i,y_i) \in \mathcal{L}$, for $1\leq i \leq k$, such that the following are equivalent for any $c \in M^{|z|}$, letting $R \subseteq M^{|x_1|} \times \ldots \times M^{|x_k|}$ be given by $R := \varphi(M^{|x_1|}, \ldots, M^{|x_k|}, c)$.
\begin{enumerate}
\item There exist $A_i \subseteq M^{|x_i|}$ for $1\leq i \leq k$ such that $|A_1| = \ldots = |A_k| = N$ and $A_1 \times \ldots \times A_k \subseteq R$.
\item There are some $b_i \in M^{|y_i|}$ such that $\psi_i(M^{|x_i|}, b_i)$ is infinite for all $1 \leq i \leq k$ and $\psi_1(M^{|x_1|}, b_1) \times \ldots \times \psi_k(M^{|x_k|}, b_k) \subseteq R$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $\alpha > 0$ and $\psi_i(x_i,y_i) \in \mathcal{L}$, for $1\leq i \leq k$, be as given by Fact \ref{fac: definable strong EH in distal} for $\varphi(x_1, \ldots, x_k;z)$.
Let $n_i \in \mathbb{N}$ be as given by Definition \ref{def: elimination of infinity} for $\psi_i(x_i,y_i)$, and let $n := \max \{ n_i : 1 \leq i \leq k \}$.
We take $N := \lceil \frac{n}{\alpha} \rceil$; note that $N$ depends only on $\varphi$, since $\alpha$ and $n$ do.
Let $c \in M^{|z|}$ be arbitrary, and let $R := \varphi(M^{|x_1|}, \ldots, M^{|x_k|}, c)$. Assume that (1) holds. That is, there are some $A_i \subseteq M^{|x_i|}$ such that $|A_1| = \ldots = |A_k| = N$ and $A_1 \times \ldots \times A_k \subseteq R$. Let $\mu_i$ be a Keisler measure on $M^{|x_i|}$ defined by $\mu_i(X) := \frac{|A_i \cap X|}{|A_i|}$ for all definable $X \subseteq M^{|x_i|}$, then $\mu_i$ is generically stable for all $1 \leq i \leq k$. Applying Fact \ref{fac: definable strong EH in distal}, we find some $b_i \in M^{|y_i|}$ such that $\mu_i(\psi_i(M^{|x_i|}, b_i)) \geq \alpha$ and $\prod_{1\leq i \leq k} \psi_i(M^{|x_i|},b_i) \subseteq R$ (note that $\prod_{1\leq i \leq k} \psi_i(M^{|x_i|},b_i) \subseteq \neg R$ is impossible as $\prod_{1 \leq i \leq k} A_i \subseteq R$). Now for any $1\leq i \leq k$, $\mu_i(\psi_i(M^{|x_i|}, b_i)) \geq \alpha$ implies $|\psi_i(A_i, b_i)| \geq \alpha N \geq n_i$, hence $\psi_i(M^{|x_i|}, b_i)$ is infinite by the choice of $n_i$, as wanted. Conversely, (2) implies (1) immediately: any infinite set contains a subset of size $N$, and for such subsets $A_i \subseteq \psi_i(M^{|x_i|}, b_i)$ we still have $A_1 \times \ldots \times A_k \subseteq R$.
\end{proof}
\begin{rem}
Examples of structures satisfying the assumption of Theorem \ref{thm: distal BukhMatousek} are given by arbitrary $o$-minimal structures and by $p$-minimal structures (e.g.\ the field $\mathbb{Q}_p$). In particular, Fact \ref{fac: semialg BM} follows by applying Theorem \ref{thm: distal BukhMatousek} to the field of reals.
\end{rem}
\begin{rem}
Theorem \ref{thm: distal BukhMatousek} implies that in Theorem \ref{thm: distal Zarank}, assuming that $\mathcal{M}$ eliminates $\exists^\infty$, we can relax the assumption that $E(P,Q)$ is $K_{k,k}$-free to the assumption that $E$ does not contain a copy of an infinite complete bipartite graph.
\end{rem}
\subsection{The $o$-minimal case}
We conclude by observing that all of these results apply to $o$-minimal expansions of fields.
\begin{thm}\label{thm: everything o-min}
Let $\mathcal{M}$ be an $o$-minimal expansion of a field and let $E(x;y) \subseteq M^2 \times M^d$ be a definable relation.
\begin{enumerate}
\item For every $k \in \mathbb{N}$ there is a constant $c = c(E,k)$ such that for any finite $P \subseteq M^2, Q \subseteq M^d$, $|P|=m, |Q| = n$, if $E(P,Q)$ does not contain a copy of $K_{k,k}$ (the complete bipartite graph with two parts of size $k$), then we have
$$ |E(P,Q)| \leq c \left( m^{\frac{d}{2d-1}} n^{\frac{2d-2}{2d-1}} + m + n \right).$$
\item There is some $k' \in \mathbb{N}$ and formulas $\varphi(x,v), \psi(y,w)$ depending only on $E$ such that if $E$ contains a copy of $K_{k',k'}$, then there are some parameters $b \in M^{|v|}, c' \in M^{|w|}$ such that both $\varphi(M,b)$ and $\psi(M,c')$ are infinite and $\varphi(M,b) \times \psi(M,c') \subseteq E$.
\end{enumerate}
\end{thm}
\begin{proof}
(1) Follows by applying Theorem \ref{thm: distal Zarank}. Its assumptions are satisfied for $E$ by Theorem \ref{thm:cell-dec} and by Fact \ref{fac: vc bound in o-min} (applied to the dual formula $E^*(y;x)$).
(2) Follows by Theorem \ref{thm: distal BukhMatousek} as $o$-minimal theories eliminate the $\exists^\infty$ quantifier.
\end{proof}
\begin{cor}
In the setting of Theorem \ref{thm: everything o-min}, there is a constant $c$ and formulas $\varphi(x,v), \psi(y,w)$ depending only on $E$ such that either
$$ |E(P,Q)| \leq c \left( m^{\frac{d}{2d-1}} n^{\frac{2d-2}{2d-1}} + m + n \right)$$
for all finite $P \subseteq M^2, Q \subseteq M^d$ with $|P|=m, |Q| = n$, or there are some $b \in M^{|v|}, c' \in M^{|w|}$ such that both $\varphi(M,b)$ and $\psi(M,c')$ are infinite and $\varphi(M,b) \times \psi(M,c') \subseteq E$.
\end{cor}
\begin{proof}
Immediate combining (1) and (2) in Theorem \ref{thm: everything o-min} (let $k'$, $\varphi$, $\psi$ be as given by (2) for $E(x,y)$, and let $c$ be as given by (1) for this $k'$).
\end{proof}
\begin{rem}
The special case with $d=2$ and $E$ satisfying an additional assumption of $1$-dimensionality of its fibers was obtained independently by Basu and Raz \cite{basu2016minimal} using different methods.
\end{rem}
\bibliographystyle{acm}
\section{Introduction}
Although audio spoofing countermeasures (CM) \cite{wu2015spoofing} are highly related to Automatic Speaker Verification (ASV) \cite{bai2021speaker}, most of the research on these two tasks has been carried out independently in recent years. This may lead to CM systems not being well suited to some ASV scenarios due to overfitting or domain mismatch \cite{muller21_asvspoof}. To address this gap, the organizers of the ASVspoof Challenge \cite{todisco19_interspeech,yamagishi21_asvspoof} proposed the tandem detection cost function (t-DCF) metric \cite{kinnunen2018t}, which depends on both the ASV system and the CM system, to replace the Equal Error Rate (EER) metric, which relies only on the CM system itself. However, the ASVspoof Challenge still focuses on designing and optimizing a stand-alone CM system to calculate the min t-DCF metric in combination with a given official black-box ASV system. This prevents participants from improving the overall performance by enhancing the ASV system or leveraging joint optimization. Therefore, the first spoofing-aware speaker verification (SASV) challenge \cite{jung2022sasv}, which aims to promote the development of integrated systems that can perform both ASV and CM tasks, has been organized this year. The goal of this challenge is to build a hybrid system that can detect both zero-effort impostor access attempts and spoofing attacks simultaneously.
The 2022 SASV challenge focuses on logical access (LA) spoofing attacks, such as text-to-speech (TTS) and voice conversion (VC), rather than physical access (PA) spoofing attacks, such as record-and-playback. Due to the lack of large-scale datasets with both speaker and spoofing labels available, few studies involving joint ASV and CM optimization have been conducted in the past. Two categories of jointly optimized solutions are summarized in the evaluation plan \cite{jung2022sasv}. The first is ensemble systems based on a fusion of separate ASV and CM systems. Gomez et al. \cite{gomez2020joint} use an embedding concatenation strategy to construct an ensemble classification system. Another approach is to build a single integrated system. Li et al. \cite{li2020joint} propose a single model using multi-task learning with a contrastive loss function.
We investigated and implemented different dual-system score combination methods, including score-fusion and cascaded systems. Since ensemble solutions rely heavily on the performance of the pre-trained subsystems, we investigate two innovative schemes to improve the CM system performance. The one-class confusion loss aims to reduce the intra-class Euclidean distance of bonafide audio embeddings. The random embedding sampling augmentation mechanism is also proposed to improve the model's generalization to unseen attacks. Moreover, we found that the performance of CM systems trained on ASVspoof 2019 LA \cite{todisco19_interspeech} data degrades substantially on VoxCeleb2 \cite{nagrani2017voxceleb}, making it more challenging for the CM system to utilize speaker embeddings trained on VoxCeleb2 data. Among our explorations, the cascaded system still achieves the lowest error rate when computational cost and real-time latency are not considered.
The rest of this paper is organized as follows. In Section \ref{sec2}, our submitted system for the SASV challenge is presented, which mainly focuses on the CM system structure and score combination strategies. Implementation details in terms of dataset usage and model hyperparameters are provided in Section \ref{sec3}. Section \ref{sec4} describes and discusses the results of our experiments. Conclusions are provided in Section \ref{sec5}.
\section{System Description}
\label{sec2}
\subsection{ASV subsystem}
Four speaker verification subsystems with different network structures have been adopted in our experiments, including ResNet \cite{resnet}, SE-ResNet \cite{senet}, SimAM-ResNet \cite{simam} and ECAPA-TDNN \cite{ecapatdnn}. Global statistics pooling (GSP) is used for the ResNet subsystem, while attentive statistics pooling (ASP) \cite{asp_pooling} is used for the other three approaches. The ArcFace loss \cite{arcface}, which increases inter-speaker distances while ensuring intra-speaker compactness, is utilized.
\subsection{CM subsystem}
This subsection describes the basic network structure of our CM subsystems and the two proposed methods, namely one-class confusion loss and embedding random sampling augmentation.
\subsubsection{Basic network architecture}
We choose AASIST \cite{jung2021aasist}, the challenge CM baseline, as our backbone network. It contains a RawNet2-based encoder \cite{tak2021end} and an attention-network-based graph module. AASIST utilizes raw waveforms as the input to learn meaningful high-dimensional spectro-temporal feature maps and then extracts graph nodes of the feature maps in the temporal and frequency domains respectively \cite{jung2021aasist}. With a stack node that learns information from all nodes, the final CM embedding is attained by concatenating the mean and maximum values of various nodes. Moreover, as mentioned in \cite{tak2022automatic}, Tak et al. provide an improved architecture in which the max pooling layer of the encoded feature maps is replaced by a 2D self-attentive pooling \cite{zhu2018self}, named AASIST-SAP.
\subsubsection{One-Class Confusion Loss function}
Although the basic models, AASIST and AASIST-SAP, obtain great results on the development and evaluation sets, there is still a large performance gap, since there are unseen attack algorithms in the evaluation set. Therefore, it is necessary to enhance the generalization of our system. The space of bonafide audio is relatively stable, while the domain of the attack algorithms is very diverse and unpredictable. Inspired by one-class learning \cite{zhang2021one,wang2021dku}, we propose a One-Class Confusion Loss (OCCL), which is similar to the pairwise confusion loss defined in \cite{dubey2018pairwise}.
The binary cross-entropy loss can be defined as follows:
$$\mathcal{L}_{ce}= \sum_{i} -{(y_i\log(p_i) + (1 - y_i)\log(1 - p_i))}$$
where $y_i \in \{0,1\}$ is the class label and $p_i$ is the probability output of the classifier. The anti-spoofing CM model is trained using a combined objective with the cross-entropy loss and the proposed one-class confusion loss, which is defined as:
$$\mathcal{L}_{occ}= \sum_{i} \sum_{j\neq i} ||e_i-e_j||^2$$
where $e_i$ denotes the embedding vector extracted from the bonafide audios. The purpose of this loss function is to make the Euclidean distance of all bonafide samples more compact in the embedding space. The one-class confusion loss is only applied on bonafide audios during the training process. Therefore, the final combination loss function is defined as follows:
$$\mathcal{L}= \mathcal{L}_{ce} + \lambda \mathcal{L}_{occ}$$
where $\lambda$ is a constant hyperparameter balancing the two terms.
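For concreteness, the combined objective can be sketched in a few lines of PyTorch-style code. This is our own illustrative sketch rather than the actual training code, and all tensor and function names are hypothetical; the two-class cross-entropy below plays the role of $\mathcal{L}_{ce}$.
\begin{verbatim}
import torch
import torch.nn.functional as F

def combined_loss(logits, embeddings, labels, lam=1.0):
    # logits: (B, 2) classifier outputs; embeddings: (B, D) CM
    # embeddings; labels: (B,) with 1 = bonafide, 0 = spoof.
    l_ce = F.cross_entropy(logits, labels)

    # One-class confusion term: squared Euclidean distances summed
    # over ordered pairs of bonafide embeddings, matching the
    # double sum in the definition of L_occ above.
    bona = embeddings[labels == 1]
    if bona.size(0) > 1:
        l_occ = (torch.cdist(bona, bona, p=2) ** 2).sum()
    else:
        l_occ = logits.new_zeros(())
    return l_ce + lam * l_occ
\end{verbatim}
In practice one may additionally normalize $\mathcal{L}_{occ}$ by the number of bonafide pairs in the batch, so that its balance with $\mathcal{L}_{ce}$ does not depend on the batch composition.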
\subsubsection{Embedding Random Sampling Augmentation}
Considering that the evaluation set contains many unseen logical attacks \cite{wang2020asvspoof}, we propose a fine-tuning Embedding Random Sampling Augmentation (ERSA) strategy that aims to improve the robustness of the model for unknown scenarios, inspired by \cite{baweja2020anomaly}. The key idea is to generate random embedding samples from Gaussian distributions whose means are the boundary spoof embedding centers, and to label them as spoofed speech. Firstly, we initialize the embedding centers of bonafide audio and of each type of spoofing audio in the development set separately, based on the pre-trained model. The boundary embedding center for each type of spoofing audio is defined as the average of the bonafide embedding center and the spoofing embedding center. During fine-tuning, the boundary embedding centers are dynamically updated based on the embeddings of the current iteration. Then we randomly generate samples from $N(\hat{\mu},\Sigma)$, where $N$ is a Gaussian distribution, $\hat{\mu}$ is the boundary spoof embedding center and $\Sigma$ is the covariance matrix of the spoofing embeddings calculated in advance. After every 5 training epochs, the means and covariance matrices of the embedding centers are updated.
A detailed algorithm description can be found in \cite{sasv2022dku}.
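As a rough illustration of the sampling step, we give a simplified sketch below; the variable names are ours and hypothetical, and the cited description remains authoritative.
\begin{verbatim}
import torch

def sample_boundary_embeddings(bona_center, spoof_centers,
                               spoof_covs, n_per_center=2):
    # bona_center: (D,) mean bonafide embedding.
    # spoof_centers: dict attack -> (D,) mean spoof embedding.
    # spoof_covs: dict attack -> (D, D) covariance matrix.
    fakes = []
    for attack, mu_s in spoof_centers.items():
        # Boundary center: average of the bonafide center and
        # this attack's spoof center.
        boundary = 0.5 * (bona_center + mu_s)
        dist = torch.distributions.MultivariateNormal(
            boundary, covariance_matrix=spoof_covs[attack])
        fakes.append(dist.sample((n_per_center,)))
    # All generated embeddings are labeled as spoofed speech.
    return torch.cat(fakes, dim=0)
\end{verbatim}
With the six attack types of the ASVspoof2019 LA training data and two samples per center, this yields the 12 generated embeddings per batch used in Section \ref{sec3}.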
\subsubsection{Integrating Speaker Verification Embeddings}
In addition, we also explore the possibility of integrating SV embeddings into the CM model. In this case, the final CM embedding is obtained by concatenating the original CM embedding and the SV embedding together followed by two linear layers.
\subsection{System Combination}
\subsubsection{Score Fusion System}
Baseline 1, provided by the challenge organizers, generates the final SASV score by simple score summation. However, the score distributions of the SV system and the CM system are quite different. Thus, we explore two different strategies to combine the scores, namely the normalized score multiplication approach, in the same way as \cite{sasv2022dku,zhang2022new}, and the score calibration and fusion approach based on the Bosaris toolkit \cite{brummer2013bosaris}.
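As an illustration, the normalized multiplication variant amounts to the following sketch (our own, with hypothetical names), assuming that higher scores indicate the target/bonafide class for both subsystems:
\begin{verbatim}
import math

def fused_score(s_asv, s_cm):
    # Map both scores into (0, 1) before multiplying, so that the
    # differing ASV and CM score distributions become comparable.
    def sigmoid(s):
        return 1.0 / (1.0 + math.exp(-s))
    return sigmoid(s_asv) * sigmoid(s_cm)
\end{verbatim}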
\subsubsection{Cascaded System}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{cascade_system5.png}
\centering
\caption{{\it The illustration of the ASV followed by CM cascaded system. $\varepsilon$ represents the minimum CM score in the development set. The CM followed by ASV cascaded system is built in the same way, but switching the SV and CM systems in the pipeline.}}
\label{cascade}
\end{figure}
As shown in Figure \ref{cascade}, the cascaded system consists of two tandem modules: a) the first module generates a hard decision based on a threshold, which is determined by the Equal Error Rate (EER) on the development set; b) the second module directly outputs its raw score if the decision of module 1 is positive; c) the second module outputs the fixed minimum score on the development set when the decision of module 1 is negative. Therefore, once the test audio is tagged as negative by the first module, the score of the second module is not used.
For the cascaded system, two designs are considered in this work: the first one is the ASV module followed by the CM module, named Cascade-ASV-CM; the other one is the CM module followed by the ASV module, named Cascade-CM-ASV. The threshold of the first module was determined by the EER criterion on the development set.
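The decision logic of the cascade reduces to a few lines; the following is our own illustrative sketch with hypothetical names, again assuming that higher scores indicate acceptance by each module.
\begin{verbatim}
def cascade_score(s_first, s_second, threshold, floor):
    # threshold: EER operating point of the first module on the
    # development set; floor: minimum score of the second module
    # on the development set (the epsilon of the cascade figure).
    if s_first >= threshold:   # module 1 accepts the trial
        return s_second        # pass through the raw second score
    return floor               # module 1 rejects: fixed minimum
\end{verbatim}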
\section{Experimental setup}
\label{sec3}
\subsection{Data Usage and Evaluation Metrics}
As requested by the organizers, all datasets we use for training and validation are the training and development sets of the ASVspoof2019 \cite{wang2020asvspoof} LA database and VoxCeleb2 \cite{nagrani2017voxceleb}. The ASVspoof2019 LA database consists of bonafide and spoofed audio. Although the database contains both speaker and spoofing labels, it is in general only used for anti-spoofing countermeasures due to its low number of speakers. The VoxCeleb2 database contains 1,128,246 audio files from 6,112 speakers and has been widely used for ASV training. However, it is difficult to directly train a multi-speaker TTS or VC system using only VoxCeleb2 \cite{Chung18b}. The official SASV evaluation trials consist of audio from the ASVspoof2019 LA evaluation partition, which contains logical access spoofing attacks unseen in the train and development partitions.
The SASV-EER \cite{jung2022sasv}, which represents the EER between target and both nontarget and spoof samples, is the primary metric. SPF-EER and SV-EER are adopted as secondary metrics \cite{jung2022sasv}.
\subsection{Domain Mismatch between ASVspoof2019 LA and VoxCeleb2}
Although the AASIST-based CM system has excellent performance on the ASVspoof2019 evaluation set, it performs poorly on VoxCeleb2 based on our experiments. Most audios in VoxCeleb2 are classified as spoofing ones. We summarize two reasons that may lead to this phenomenon.
\begin{enumerate}
\item Most audio in VoxCeleb2 contains various kinds of noise and has been coded and transmitted through various codecs and channels.
\item The CM model trained on the ASVspoof2019 LA dataset may learn prior information from the silent segments \cite{muller21_asvspoof}.
\end{enumerate}
The large domain mismatch between the ASVspoof2019 LA and VoxCeleb2 datasets makes it difficult to improve the performance of the CM system using the VoxCeleb2 dataset. Hence, we keep those audio files in VoxCeleb2 that are classified as bonafide by our CM system as the \textbf{Vox-sub} dataset, with approximately 20000 audio files in total.
\label{vox}
\subsection{Model setup}
\subsubsection{ASV subsystem}
For feature extraction, a logarithmic Mel-spectrogram is extracted by applying 80 Mel filters on the spectrogram computed over Hamming windows of 20ms shifted by 10ms. The on-the-fly data augmentation \cite{cai_on-the-fly} is employed to add additive background noise or convolutional reverberation noise to the time-domain waveform. The MUSAN \cite{musan} and RIR Noise \cite{RIR} datasets are used as noise sources and room impulse response functions, respectively. We apply amplification or speed change (pitch remains untouched) to audio signals to further diversify the training samples. Also, we apply speaker augmentation with speed perturbation \cite{speed_perturb_spk,dku_voxsrc20,sdsv21_qin}. We adopt a reduce-on-plateau learning rate (LR) scheduler with an initial LR of 0.1. The SGD optimizer is adopted to update the model parameters.
\subsubsection{CM subsystem}
In contrast to the baseline training strategy, our AASIST-SAP network receives random-length audio between 3 and 5 seconds as input. The initial learning rate is 0.001 with a reduce-on-plateau learning rate scheduler. The Adam optimizer is used to update the model weights. The embedding random sampling augmentation is only used during fine-tuning, with two generated embeddings per center. Since there are six boundary spoof embedding centers, there will be 12 generated embeddings per batch. The batch size is set to 64 in this phase. For the one-class confusion loss, $\lambda$ is set to 1 during training.
For fusing SV embedding into the CM model, considering the great mismatch mentioned in section 3.2, we adopt SV embeddings generated by the Resnet GSP model trained by Voxceleb2 and Voxsub as \textbf{SV-embd-V1} and \textbf{SV-embd-V2}, respectively.
\label{SVembd}
\section{Results and discussion}
\label{sec4}
\subsection{Results of the ASV Subsystem}
Table \ref{table1} reports the results of different speaker verification models. Our models achieve state-of-the-art results on the VoxCeleb1 original test set. The ResNet with GSP achieves the best single-model performance on the SASV dataset, which might be because the generalization of ResNet with statistics pooling is better on this ASVspoof dataset.
\begin{table}[htb]\centering \footnotesize
\caption{\label{table1} {\it The performances of different speaker verification subsystems on the VoxCeleb1 original test set and the SASV challenge dataset.
}}
\begin{tabular}{cccc}
\toprule
\multirow{2}*{\textbf{Model}} & \multirow{2}*{\textbf{Vox-O EER[\%]}} & \multicolumn{2}{c}{\textbf{SV-EER[\%]}} \\
\cmidrule(lr){3-4} & &Dev &Eval \\
\midrule
ECAPA (Baseline) & - & 1.86 & 1.64 \\
\midrule
ResNet GSP & 0.851 & \bf{0.135} & \bf{0.192} \\
SE-ResNet34 ASP & 0.776 & 0.404 & 0.410 \\
SimAM-ResNet34 ASP & 0.643 & 0.404 & 0.252 \\
ECAPA-TDNN & 0.734 & 0.225 & 0.228 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htb]\centering \footnotesize
\caption{\label{table2} {\it Comparison of different single CM systems based on SPF-EER used in SASV challenge.
The 19LA denotes using both train and dev sets of ASVspoof2019 for training.
The Vox-sub represents sub-bonafide audios selected from VoxCeleb2 as mentioned in Sec \ref{vox}.
The SV-embd-V1 and V2 are defined in Sec \ref{SVembd}.
}}
\begin{tabular}{cccc}
\toprule
\multirow{2}*{\textbf{Model}} & \multirow{2}*{\textbf{Data}} & \multicolumn{2}{c}{\textbf{SPF-EER[\%]}} \\
\cmidrule(lr){3-4} & &Dev & Eval \\
\midrule
AASIST(Baseline) & 19LA train & 0.07 & 0.67 \\
\midrule
AASIST & 19LA & 0.067 & 0.668 \\
AASIST-SAP & 19LA & 0.067 & 0.570 \\
AASIST-SAP+$ERSA$ & 19LA & 0.067 & 0.510 \\
AASIST-SAP+SV-embd-V1& 19LA & 0.058 & 4.320 \\
AASIST-SAP+SV-embd-V2& 19LA & 0.067 & 1.229 \\
AASIST-SAP & 19LA+Vox-sub & 0.049 & 1.564 \\
AASIST-SAP+$OCCL$ & 19LA+Vox-sub & \bf{0.000} & \bf{0.360} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}[!htb]\centering \footnotesize
\caption{\label{table3} {\it Performance of different systems evaluated in the SASV Challenge. Due to the large number of combinations, only selected combinations are listed. The $\sigma$ denotes sigmoid normalization and $\times$ denotes multiplication.
}}
\begin{tabular}{lllcccccc}
\toprule
\multirow{2}*{\textbf{ID}} & \multirow{2}*{\textbf{Model}} & \multirow{2}*{\textbf{Fusion}} & \multicolumn{2}{c}{\textbf{SV-EER[\%]}} & \multicolumn{2}{c}{\textbf{SPF-EER[\%]}} & \multicolumn{2}{c}{\textbf{SASV-EER[\%]}} \\
\cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} & & & Dev & Eval &Dev & Eval &Dev & Eval \\
\midrule
CM System \\
1 & AASIST(Baseline) & -& 46.01 & 49.24 & 0.07 & 0.67 & 15.86 & 24.38 \\
2 & AASIST-SAP+$ERSA$ & -&47.304 & 47.188 & 0.067 & 0.510 & 15.963 & 24.655 \\
3 & AASIST-SAP+$OCCL$ & -& 50.644 & 55.161 & \bf{0.000} & \bf{0.360} & 16.328 & 26.872 \\
\midrule
ASV System \\
4 & ECAPA-TDNN (Baseline)& -& 1.86 & 1.64 & 20.28 & 30.75 & 17.31 &23.84 \\
5 & ResNet34 GSP & -& \bf{0.135} & \bf{0.192} & 14.084 & 23.069 & 11.616 & 17.449 \\
6 & SE-ResNet34 ASP & -& 0.404 & 0.410 & 11.540 & 22.402 & 9.745 & 16.888 \\
7 & ECAPA-TDNN & -& 0.225 & 0.228 & 14.420 & 21.899 & 12.354 & 16.795 \\
8 & SimAM-ResNet34 ASP & -& 0.404 & 0.252 & 12.011 & 22.500 & 10.512 & 16.994 \\
\midrule
& Baseline 1 (official) & Sum & 32.89 & 35.33 & 0.07 & 0.67 & 13.06 & 19.31 \\
& Baseline 2 (official) & Back-end ensemble & 7.94 & 9.29 & 0.07 & 0.80 & 3.10 & 5.23 \\
\midrule
Score-fuse & ID 5+6+7 \& ID 1+2+3 & Sum & 19.694 & 23.706 & 0.000 & 0.186 & 8.630 & 13.892 \\
& ID 5+6+7 \& ID 1+2+3 & $\sigma$ and $\times$ & 0.202 & 0.317 & 0.000 & 0.186 & \bf{0.103} & \bf{0.279} \\
& ID 5+6+7 \& ID 1+2+3 & Bosaris & 0.134 & 0.298 & 0.009 & 0.577 & 0.067 & 0.487 \\
\midrule
Cascade & \multirow{2}*{ID 5 \& ID 1 } & Cascade-ASV-CM & 0.134 & 0.204 & 0.202 & 0.477& 0.202 & 0.428 \\
& & Cascade-CM-ASV & 0.202 & 0.335 & 0.067 & 0.684 & 0.149 & 0.503 \\
\\
& \multirow{2}*{ID 5 \& ID 3 } & Cascade-ASV-CM & 0.135 & 0.205 & 0.135 & 0.410 & 0.135 & 0.391 \\
& & Cascade-CM-ASV & 0.135 & 0.410 & 0.000 & 0.298 & 0.128 & 0.410 \\
\\
& \multirow{2}*{ID 8 \& ID 2 } & Cascade-ASV-CM & 0.404 & 0.390 & 0.404 & 0.260 & 0.404 & \bf{0.288} \\
& & Cascade-CM-ASV & 0.451 & 1.359 & 0.067 & 1.252 & 0.202 & 1.322 \\
\\
& \multirow{2}*{ID 5+6+7 \& ID 1+2+3} & Cascade-ASV-CM \bf{(submitted)} & 0.202 & 0.462 & 0.202 & 0.186 & 0.202 & \bf{0.209} \\
& & Cascade-CM-ASV & 0.173 & 0.242 & 0.000 & 0.230 & \bf{0.096} & 0.242 \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Results of the CM Subsystem}
Table \ref{table2} shows the anti-spoofing performance of different CM subsystems. It can be seen from the table that the AASIST-based model achieves a great performance improvement by replacing the max pooling layer with SAP. In addition, the model achieves a further generalizability improvement on the evaluation set by fine-tuning with the ERSA strategy.
It is worth mentioning that although we extracted the Vox-sub dataset, simply adding these bonafide samples to the CM training set does not improve the CM performance. However, after integrating the proposed OCCL loss function, the overall CM performance is further enhanced. This improvement may be attributed to the fact that this loss function makes the Euclidean distances between embeddings of bonafide audios in VoxCeleb2 and ASVSpoof2019 LA closer, and thus the bonafide embedding space is more compact.
Unfortunately, simply fusing the SV embedding into the CM system does not seem useful. This may still be due to the domain mismatch mentioned in section 3.2, as the CM performance with SV-embd-V2 is considerably better than that with SV-embd-V1.
\subsection{Results on the Combined System}
The results of score ensemble experiments are summarized in Table \ref{table3}. More detailed results can be found in \cite{sasv2022dku}.
As can be seen from the score fusion section of the table, the simple summation method performs poorly due to the differences among the score distributions of different subsystems. This problem can be effectively mitigated by normalizing the scores through the $sigmoid$ function and multiplying them together \cite{zhang2022new}. The optimal result in this part is also obtained by this method.
The cascaded systems section of the table shows the results of different cascading combinations. We note that the AASIST CM system in the baseline is highly complementary to the systems trained by ourselves. Furthermore, while the Cascade-CM-ASV approach performed better on the development set, the Cascade-ASV-CM approach generally performed better on the evaluation set, possibly because the development set has appeared in the training data of the CM systems. In other words, for unknown scenarios, the more generalized system with the lower EER is more suitable to be the first module making hard decisions. Finally, we submitted the results of the Cascade-ASV-CM method.
\section{Conclusion}
\label{sec5}
In this paper, we describe our submitted system for the 2022 SASV challenge. We mainly focus on the CM subsystem and propose an embedding random sampling fine-tuning strategy to improve performance. Besides, considering the large domain mismatch between the datasets, we propose the one-class confusion loss, which further improves the CM subsystem's performance. The final submitted cascaded system achieved 0.21\% EER on the SASV challenge evaluation set. In the future, we will try to collect a large-scale database with both speaker and spoofing labels available, so that we can explore more advanced joint learning approaches with sufficient data.
\section{Acknowledgements}
This research is funded in part by the National Natural Science Foundation of China (62171207), Science and Technology Program of Guangzhou City (202007030011). Many thanks for the computational resource provided by the Advanced Computing East China Sub-Center.
\bibliographystyle{IEEEtran}
\section{Introduction}
Astronomical observations allow information to be collected about the
distribution of matter in the universe. This distribution contains
structures on many different scales. Astrophysicists would like to
provide a theoretical account of how these structures formed.
In particular, cosmologists would like to do this for structures on
the largest scales which can be observed. This means for instance giving
an explanation of the way in which galaxies cluster. The most powerful
influence on the dynamics of the matter distribution on very large scales
is gravity. The most appropriate description of the gravitational field
in this context is given by the Einstein equations of general relativity.
It is also necessary to choose a model of the matter which generates the
gravitational field. A frequent choice for this is a perfect fluid
satisfying the Euler equations. Thus, from a mathematical point of view,
the basic object of study is the Einstein-Euler system describing the
evolution of a self-gravitating fluid. This is a system of quasilinear
hyperbolic equations.
The standard cosmological models are the Friedmann-Lemaitre-Robertson-Walker
(FLRW) models which are homogeneous and isotropic. This means in particular
that the unknowns in the Einstein-Euler system depend only on time and the
partial differential equations reduce to ordinary differential equations.
With appropriate assumptions on the fluid these ODE's can be solved explicitly
or, at least, the qualitative behaviour of their solutions can be determined
in great detail. When it comes to the study of inhomogeneous structures,
however, the FLRW models are by definition not sufficient. Since fully
inhomogeneous solutions of the Einstein-Euler system are difficult to
understand a typical strategy is to linearize the system about a
background FLRW model. Under favourable conditions the linearized
perturbations could give information about the evolution under the
Einstein-Euler system of initial data which are small but finite
perturbations of those for the FLRW background.
Linearization about a highly symmetric solution is a classical practice in
applied mathematics. For some examples see \cite{turing}, \cite{chandrasekhar}
and \cite{kellersegel}. It should be noted, however, that there is an unusual
feature in the case of the Einstein-Euler system which has to do with the
fact that these equations are invariant under diffeomorphisms. This is
related to the fact that the only thing that is of physical significance
are equivalence classes of solutions under diffeomorphisms. Since it is
not known how to develop PDE theory in a manifestly diffeomorphism-invariant
way this leads to difficulties. There is a corresponding equivalence relation
on linearized solutions. Different linearized solutions are related by the
linearizations of one-parameter families of diffeomorphisms, which are known
in the literature on cosmology as gauge transformations. In the end what is
interesting is not the vector space of solutions of the linearized equations
but its quotient by gauge transformations. It is useful to represent this
quotient space by a subspace. This is what is known in the literature on
cosmology as gauge-invariant perturbation theory. This subject would no doubt
benefit from closer mathematical scrutiny but that task will not be attempted
in the present paper.
Instead the following pragmatic approach will be adopted: take an equation
from the astrophysical literature on cosmological perturbation theory and
analyse the properties of its solutions. As a basic source the book of
Mukhanov \cite{mukhanov} will be used. The notation in the following will
generally agree with that of \cite{mukhanov}. It is standard to classify
cosmological perturbations into scalar, vector and tensor perturbations.
These terms will not be defined here. It should be noted that scalar
perturbations play a central role in the analysis of structure formation.
This motivates the fact that the results of this paper are concerned with
that case. After a suitable gauge choice scalar perturbations are
described by solutions of a scalar wave equation for a function $\Phi$
which corresponds, roughly speaking, to the Newtonian gravitational
potential. In order to get definite expressions for the Einstein-Euler
system and its linearization about an FLRW model it is necessary to
choose an equation of state $p=f(\epsilon)$ for the fluid. Here $\epsilon$
is the energy density of the fluid and $p$ its pressure. A case which
is particularly simple analytically is that of a linear equation of state
$p=w\epsilon$ where $w$ is a constant. For physical reasons $w$ is
chosen to belong to the interval $[0,1]$. In fact the condition $w\ge 0$
is necessary in order to make the Euler equations hyperbolic. The case
$w=0$, known as dust, is somewhat exceptional and does not always fit well
with the general arguments in the sequel. Since, however, dust frequently
comes up in the literature on cosmology it is important to include it. In
those cases where the general argument fails for dust this will be pointed
out.
For a linear equation of state as just described the equation for $\Phi$ is
\begin{equation}\label{basic}
\Phi''+\frac{6(1+w)}{1+3w}\frac1{\eta}\Phi'=w\Delta\Phi
\end{equation}
Here a prime stands for $\frac{d}{d\eta}$. The time coordinate $\eta$ belongs
to the interval $(0,\infty)$. The spatial variables, which will be denoted
collectively by $x$, are supposed to belong to the torus $T^3$. Thus
periodic boundary conditions are imposed. The Laplacian is that of a fixed
flat metric on the torus. Its expression in adapted coordinates agrees with
that for the usual Laplacian on ${\bf R}^3$. As a consequence of standard
theory for linear hyperbolic equations this equation has a unique solution
on the whole time interval $(0,\infty)$ for appropriate initial data given at
a fixed time $\eta=\eta_0>0$. These are the restrictions of $\Phi$ and
$\Phi'$ to $\eta=\eta_0$.
In the following, after some background and notation has been collected in
Sect. \ref{background}, the asymptotics of solutions of equation \eqref{basic}
is studied in the regimes $\eta\to 0$ and $\eta\to\infty$. Theorems and
proofs for the first of these cases are given in Sect.~\ref{asympsing}
(Theorems \ref{LinearSingularityExpansion} and \ref{LinearSingularityData})
and for the second in Sect.~\ref{asymplatetime}
(Theorem \ref{LinearExpandingThm}).
It is shown how all solutions can be parametrized by asymptotic data in either
of these regimes. These are alternatives to the usual parametrization of
solutions by Cauchy data. An interesting feature of the expanding direction
$\eta\to\infty$ is that the main part of the asymptotic data is a solution of
the flat space wave equation $W''=w\Delta W$. Many of these results can be
extended to more general equations of state. This is the subject of
Theorem \ref{GeneralSingularityExpansion} of Sect.~\ref{asympsing}
(limit $\eta\to 0$) and Sect.~5. It is found that for equations of state
with power law behaviour $p\sim\epsilon^{1+\sigma}$ at low density there
is a bifurcation with a fundamental change in the asymptotic behaviour at
$\sigma=\frac13$.
\section{Preliminaries}\label{background}
As outlined above, we study perturbations of FLRW cosmological models which
are spatially flat and have $T^3$ spatial topology. The spacetime being
perturbed, which we refer to as the background, is described by a metric of
the form
\begin{equation}\label{BackgroundMetric}
a^2\left(-d\eta^2 + dx^2\right)
\end{equation}
on $(0,\infty)\times T^3$. Here $dx^2$ indicates the flat metric on $T^3$ and
the scale factor $a=a(\eta)$ is a non-decreasing function of the conformal
time $\eta$. We use $x$ to indicate points on $T^3$. The signature used
here is the opposite of that used by Mukhanov \cite{mukhanov} but all the
equations required in the following are unaffected by this change.
We make use of the perfect fluid matter model, described by the pressure $p$
and energy density $\epsilon$ of the fluid. In order to specify the matter
model completely, one must provide an equation of state $p=f(\epsilon)$. Under
this ansatz, the Einstein-Euler equations reduce to a coupled system of ODEs
for $a$ and $\epsilon$:
\begin{align}
a''& = \frac{4\pi G}{3}\left(\epsilon -3f(\epsilon)\right)a^3
\label{FriedmannEqn}
\\
\epsilon' &=-3\mathcal{H}\left(\epsilon + f(\epsilon)\right).
\label{ContinuityEqn}
\end{align}
As mentioned in the introduction, $(\phantom{\Phi} )'$ indicates a derivative
with respect to $\eta$. Here $G$ is Newton's gravitational constant and
$\mathcal{H}$ is the conformal Hubble parameter, given by
$\mathcal{H} = a^{-1}a'$. We note the following useful relation
(known as the Hamiltonian constraint)
\begin{equation}\label{SecondFriedmann}
\mathcal{H}^2 = \frac{8\pi G}{3}a^2\epsilon.
\end{equation}
For a linear equation of state $f(\epsilon) = w\epsilon$, solutions $a(\eta)$
of \eqref{FriedmannEqn} are explicitly given by
\begin{equation}
\frac{a(\eta)}{a(\eta_0)} = \left(\frac{\eta}{\eta_0}\right)^{2/(1+3w)},
\end{equation}
for some arbitrarily fixed $\eta_0\in(0,\infty)$. As the scale factor $a$
vanishes as $\eta\to 0$, the spacetime develops a curvature singularity in
that limit, which is known as a ``big-bang'' type singularity and is viewed as
being in the past of $\eta_0$. Likewise the limit as $\eta\to\infty$ is
referred to as ``late times'' as it corresponds to the distant future of $\eta_0$.
Note that spacetimes described by these models are expanding, in the sense
that the scale factor is an increasing function of $\eta$.
Note also that, since $\epsilon'$ is negative, large values of $\eta$
correspond to small values of $\epsilon$ and vice-versa.
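As a quick consistency check of the power law above, write $a\propto\eta^\beta$. Then $\mathcal{H}=\beta/\eta$ and, by \eqref{SecondFriedmann}, $\epsilon\propto\eta^{-2\beta-2}$, so that \eqref{ContinuityEqn} with $f(\epsilon)=w\epsilon$ requires
\[
-\frac{2\beta+2}{\eta}=\frac{\epsilon'}{\epsilon}=-3\mathcal{H}(1+w)
=-\frac{3\beta(1+w)}{\eta},
\]
which is solved precisely by $\beta=\frac{2}{1+3w}$.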
We study behavior near the singularity and at late times for those
perturbations to the metric \eqref{BackgroundMetric} which are of the type
usually referred to as scalar perturbations. They satisfy evolution equations
obtained by linearizing the
Einstein equations about the FLRW background. For the perfect fluid matter
model all such perturbations can be described, up to gauge freedom, by a
single function $\Phi(\eta,x)$. Using a certain gauge, the
conformal-Newtonian gauge, the metric takes the form
\begin{equation}
a^2\left[-(1+2\lambda\Phi)d\eta^2 +(1-2\lambda\Phi)dx^2\right]
\end{equation}
up to an error which is quadratic in the expansion parameter $\lambda$.
The first order perturbation satisfies the linearized Einstein-Euler equations
provided
\begin{equation}\label{MainEquation}
\Phi'' + 3\left(1 + f'(\epsilon)\right)\mathcal{H}\Phi'
+3 \left(f'(\epsilon)-\frac{f(\epsilon)}
{\epsilon}\right)\mathcal{H}^2\Phi
- f'(\epsilon)\Delta\Phi=0,
\end{equation}
where $\Delta$ is the Laplacian for the flat metric on $T^3$.
For a derivation of this equation we refer the reader to \S 7.2 of
\cite{mukhanov}.
The corresponding perturbations to the energy density, denoted by
$\delta\epsilon$, are determined by
\begin{equation}\label{EnergyPerturbation}
\delta\epsilon = \frac{1}{4\pi G a^2}
\left(-3\mathcal{H} \Phi' - 3\mathcal{H}^2\Phi + \Delta\Phi \right)
\end{equation}
and thus can be computed once \eqref{MainEquation} is understood.
The quantity $f'(\epsilon)$ represents the square of the speed of sound for
the fluid. For physical reasons we require that $f'$ always take values in the
interval $[0,1]$ i.e., that the speed of sound be real and not exceed
the speed of light. A special case of particular interest is that of a linear
equation of state $p=w\epsilon$. In this situation the speed of sound is
constant and equation \eqref{MainEquation} reduces to \eqref{basic}. Before
the asymptotics of solutions of \eqref{MainEquation} can reasonably be
studied a prerequisite is a theorem which guarantees global existence of
solutions on the interval $(0,\infty)$. In order to get this from the standard
theory of hyperbolic equations it is necessary to assume that $f'$ never
vanishes. In the following it is always assumed that this holds
except in the special case of dust which is discussed separately.
Our analysis below relies on establishing a number of energy-type estimates
for solutions to \eqref{MainEquation}. As the coefficients of this linear
equation depend only on $\eta$, any spatial derivative of $\Phi$ satisfies the
same equation. Thus any estimate we obtain for $\Phi$, $\Phi'$, or
$\nabla\Phi$ (the gradient of $\Phi$ with respect to the flat metric on
$T^3$) holds also for all spatial derivatives of those quantities. One may
then make use of the Sobolev embedding theorem in order to establish pointwise
estimates. We also make use of the Poincar\'e estimate which implies that
quantities having zero (spatial) mean value are controlled in $L^2$ by the
norm of their (spatial) gradient.
Each of these norms is defined on the $\eta$-constant ``spatial'' slices of
$(0,\infty)\times T^3$ with respect to the flat ($\eta$-independent) metric
induced on $T^3$ by viewing $T^3$ as a quotient of Euclidean space. All
integration on $T^3$ is done with respect to the corresponding volume element
which we suppress in our notation. We generally suppress dependence of
functions on the spatial variable $x$, except in situations where the
inclusion of such dependence provides additional clarity. When necessary, we
denote Cartesian coordinates on $T^3$ by $x = (x^i)$; the corresponding
derivatives are denoted $\partial_i$.
\section{Asymptotics in the approach to the singularity}\label{asympsing}
The purpose of this section is to analyse the asymptotics of solutions of
\eqref{basic} in the limit $\eta\to 0$ and to give some extensions of these
results to more general equations of state which need not be linear. Define
$\nu=\frac12\left(\frac{5+3w}{1+3w}\right)$. Note that $\nu$ belongs to the
interval $[1,5/2]$.
\begin{theorem}\label{LinearSingularityExpansion}
Let $\Phi(\eta)$ be a smooth solution of \eqref{basic} on
$(0,\infty)\times T^3$. Then there are coefficients $\Phi_{k,l}$ with
$k\ge -2\nu$ belonging to an increasing sequence of real numbers tending to
infinity and $l\in\{0,1\}$, smooth functions on $T^3$, such that the formal
series $\sum_k(\Phi_{k,0}+\Phi_{k,1}\log\eta)\eta^k$ is asymptotic to
$\Phi(\eta)$
in the limit $\eta\to 0$ in the sense of uniform convergence of the function
and its spatial derivatives of all orders. All coefficients can be expressed
as linear combinations of $\Phi_{-2\nu,0}$, $\Phi_{0,0}$ and their spatial
derivatives. If $\nu$ is not an integer then all coefficients with $l=1$
vanish. For any value of $w$ the coefficients $\Phi_{k,l}$ with $l=1$ and $k<0$
vanish.
In more detail, $\Phi_{k,0}$ may only be non-zero when $k$ is of the
form $-2\nu+2i$ or $2i$ for a non-negative integer $i$ while $\Phi_{k,1}$
may only be non-zero for $k$ of the form $2i$ with $i$ a non-negative
integer. These coefficients are related by the following equations:
\begin{equation}\label{consist1}
k(k+2\nu)\Phi_{k,0}=w\Delta\Phi_{k-2,0}-(2k+2\nu)\Phi_{k,1}
\end{equation}
and
\begin{equation}\label{consist2}
k(k+2\nu)\Phi_{k,1}=w\Delta\Phi_{k-2,1}.
\end{equation}
\end{theorem}
\begin{proof}
The basic tool which allows the solutions to be controlled
is provided by energy estimates. Let
\begin{equation}\label{energy}
E_1(\eta)=\frac12\int_{T^3} |\Phi'(\eta)|^2+w|\nabla\Phi(\eta)|^2.
\end{equation}
It satisfies the identity
\begin{equation}
\frac{d}{d\eta}\left[\eta^{2(2\nu+1)}E_1(\eta)\right]=(2\nu+1)\eta^{4\nu+1}\int_{T^3}
w|\nabla\Phi(\eta)|^2.
\end{equation}
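This identity follows from a short computation. Note first that the coefficient in \eqref{basic} is $\frac{6(1+w)}{1+3w}=2\nu+1$ for $\nu$ as defined above. Differentiating \eqref{energy}, substituting \eqref{basic} for $\Phi''$ and integrating by parts on $T^3$ gives
\[
E_1'(\eta)=\int_{T^3}\Phi'\Phi''+w\nabla\Phi\cdot\nabla\Phi'
=-\frac{2\nu+1}{\eta}\int_{T^3}|\Phi'|^2,
\]
and combining this with the derivative of the prefactor $\eta^{2(2\nu+1)}$ yields the right hand side above.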
Since the right hand side is manifestly non-negative it can be
concluded that if an initial time $\eta_0$ is given then
$\eta^{2(2\nu+1)}E_1(\eta)$ is bounded
for $\eta\le\eta_0$. Any spatial derivative of $\Phi$ satisfies the same
equation as $\Phi$. Thus corresponding bounds can be obtained for the $L^2$
norms of all spatial derivatives. Applying the Sobolev embedding theorem then
provides pointwise bounds for $\Phi$ and its spatial derivatives of all
orders in the past of a fixed Cauchy surface. These estimates can now
be put back into the equation to obtain further information about the
asymptotics. To do this it is convenient to write \eqref{basic} in the
form
\begin{equation}
\frac{d}{d\eta}\left[\eta^{2\nu+1}\Phi'(\eta)\right]=\eta^{2\nu+1}w\Delta\Phi(\eta).
\end{equation}
It can be deduced that
\begin{multline}\label{intformula}
\Phi'(\eta)=\eta^{-2\nu-1}\Big[\eta_0^{2\nu+1}\Phi'(\eta_0)
\\
-w\int_0^{\eta_0}\zeta^{2\nu+1}\Delta\Phi(\zeta)d\zeta
+w\int_0^\eta\zeta^{2\nu+1}\Delta\Phi(\zeta)
d\zeta\Big]
.\end{multline}
The bounds already obtained guarantee the convergence of the integrals.
This formula allows the asymptotic expansions to be derived inductively.
Using the fact that the second integral is $O(\eta^2)$ already gives a
one-term expansion for $\Phi'$ and this can be integrated to give
a one-term expansion for $\Phi$. Analogous expansions can be obtained for
all spatial derivatives of $\Phi$ in the same way using the corresponding
spatial derivatives of \eqref{intformula}. When an asymptotic expansion
with a finite number of explicit terms is substituted into the right hand
side of \eqref{intformula} an expansion for $\Phi'$ (and thus by integration
for $\Phi$) with additional explicit terms is obtained. If the last explicit
term in the input is a multiple of $\eta^p$ with $p<-2$ then there is one new
term in the output and it is a multiple of $\eta^{p+2}$. If the last explicit
term is a multiple of $\eta^{-2}$ there is one new term and it is a multiple
of $\log \eta$. If the last explicit term is a multiple of $\log \eta$ then
there are two new terms, one a multiple of $\eta^2\log \eta$ and one a
constant. If the last explicit term is $\eta^p$ or $\eta^p\log \eta$ with
$p>-2$ then there is one new term and it is a multiple of $\eta^{p+2}$ or
$\eta^{p+2}\log \eta$ respectively. These statements rely on the fact that
when any of the terms in the asymptotic expansion is substituted into the
last integral in \eqref{intformula} the power $-1$ never arises. These
remarks suffice to prove the first part of the theorem. The resulting series
is by construction a formal series solution of the original equation.
Comparing coefficients gives the rest of the theorem.
\end{proof}
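To illustrate the recursion, suppose that $\nu$ is not an integer, so that all logarithmic coefficients vanish. Then \eqref{consist1} with $k=2$ expresses the first correction at the regular end of the expansion directly in terms of the data, $\Phi_{2,0}=\frac{w\Delta\Phi_{0,0}}{2(2+2\nu)}$, while $k=-2\nu+2$ gives $\Phi_{-2\nu+2,0}=\frac{w\Delta\Phi_{-2\nu,0}}{2(2-2\nu)}$ at the singular end.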
Note that the only two values of $w$ in the range of interest where
logarithmic terms occur in the expansions of the theorem are $w=\frac19$
and $w=1$ corresponding to $\nu=2$ and $\nu=1$ respectively. The two cases
of most physical interest, $w=0$ (dust) and $w=\frac13$ (radiation), are
free of logarithms. In the case $w=0$ most of the expansion coefficients
vanish and the two non-vanishing terms define an explicit solution which is
a linear combination of two powers of $\eta$.
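These structural features can also be read off from explicit solutions. For a single Fourier mode $\Phi(\eta,x)=\hat\Phi(\eta)e^{i\xi\cdot x}$ with $w>0$, equation \eqref{basic} becomes
\[
\hat\Phi''+\frac{2\nu+1}{\eta}\hat\Phi'+w|\xi|^2\hat\Phi=0,
\]
a Bessel-type equation whose solutions are $\eta^{-\nu}Z_\nu(\sqrt{w}|\xi|\eta)$ with $Z_\nu$ a Bessel function of order $\nu$. The behaviours $\eta^{-2\nu}$ and $\eta^0$ as $\eta\to 0$ correspond to the choices $J_{-\nu}$ and $J_\nu$, while the logarithms at the integer values $\nu=1,2$ come from the second solution $Y_\nu$. For dust ($w=0$, $\nu=\frac52$) the equation degenerates to the ODE $\Phi''+6\eta^{-1}\Phi'=0$ with general solution $\Phi=\Phi_{0,0}+\Phi_{-5,0}\eta^{-5}$, which is the explicit solution referred to above.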
The relative density perturbation is given by
\begin{equation}\label{densitypert}
\frac{\delta\epsilon}{\epsilon}=
-2\Phi-2{\cal H}^{-1}\Phi'+\frac23{\cal H}^{-2}\Delta\Phi.
\end{equation}
Now ${\cal H}=\frac{2}{(1+3w)\eta}$. Substituting this relation and the
asymptotic expansion for $\Phi$ into the expression for the density
perturbation gives:
\begin{multline}\label{densitypert2}
\frac{\delta\epsilon}{\epsilon}=\sum_k\left[-(k(1+3w)+2)\Phi_{k,0}
-(1+3w)\Phi_{k,1}+\frac16(1+3w)^2\Delta\Phi_{k-2,0}\right. \\
\left.+\left(-(k(1+3w)+2)\Phi_{k,1}+\frac16(1+3w)^2\Delta\Phi_{k-2,1}\right)
\log\eta\right]\eta^k
\end{multline}
The relations in Theorem \ref{LinearSingularityExpansion} place no
restrictions on the coefficients
$\Phi_{-2\nu,0}$ and $\Phi_{0,0}$ and so it is natural to ask if these can
be prescribed freely. In other words, if two smooth functions on $T^3$
are given, is there a smooth solution of the equations in
whose asymptotic expansion for $\eta\to 0$ precisely these functions
occur as the coefficients $\Phi_{-2\nu,0}$ and $\Phi_{0,0}$? The next
theorem answers this question in the affirmative. Since the proof
is closely analogous to arguments which are already in the literature
it will only be sketched.
\begin{theorem}\label{LinearSingularityData}
Let $\Psi_1$ and $\Psi_2$ be smooth functions on $T^3$.
Then there exists a unique solution of \eqref{basic} of the type considered
in Theorem \ref{LinearSingularityExpansion} with $\Phi_{-2\nu,0}=\Psi_1$ and
$\Phi_{0,0}=\Psi_2$.
\end{theorem}
\begin{proof}[Proof (sketch)]
The proof of this theorem uses Fuchsian techniques. It
implements the strategy applied in \cite{rendall00} to prove theorems on the
existence of solutions of the vacuum Einstein equations belonging to the Gowdy
class with prescribed singularity structure. In the present situation some
simplifications arise in comparison to the argument for Gowdy due to the fact
that the equation being considered is linear. The procedure is to first
treat the case of analytic data and then use the resulting analytic
solutions to handle the smooth case. To reduce the equation to
Fuchsian form the following new variables are introduced. First
define a function $v(\eta,x)$ by the relation
\begin{equation}\label{expansion}
\Phi(\eta)=\Psi_1\eta^{-2\nu}+\sum_{-2\nu<k<2,\,k\neq 0}\Phi_{k,0}\eta^k+\Phi_{0,1}\log\eta
+\Psi_2+v(\eta).
\end{equation}
Here it is assumed that the consistency relations \eqref{consist1} hold for
$-2\nu\le k<2$. As a consequence of these relations and the original
equation, $v$ satisfies
\begin{equation}\label{firstfuch}
v''+\frac{2\nu+1}{\eta}v'-w\Delta v=w\Delta\Phi_{0,1}\log\eta+w\Delta\Psi_2
+w\sum_{0<k<2}\Delta\Phi_{k,0}\eta^k
\end{equation}
Note that the last sum will contain one non-vanishing term for $\nu$ not an
integer and none for $\nu$ an integer. Denote the right hand side of
\eqref{firstfuch} by $Q$. This equation can be reduced to a first order
system by introducing new variables $v^0=\eta v'$ and $v^i=\eta\partial_i v$.
Let $V$ be the vector-valued unknown with components $(v,v^0,v^i)$. Then the
first order system is
\begin{equation}
\eta\partial_\eta V+NV=\eta^\zeta f(\eta,V,DV)
\end{equation}
where
\begin{equation}
N=\left[\begin{array}{ccc}
0 & -1 & 0\\0 & 2\nu & 0\\ 0 & 0 & 0\end{array}\right]
\ \ \ {\rm and}\ \ \
f=\left[\begin{array}{c}
0\\
\eta^{1-\zeta}w\partial_i v^i+\eta^{2-\zeta}Q\\
\eta^{1-\zeta}\partial_i(v^0+v)\end{array}\right].
\end{equation}
Here $\zeta$ is any positive real number less than one and $DV$ denotes
the collection of spatial derivatives of $V$.
It will be shown that this equation has a unique solution $v$ which converges
to zero as $\eta\to 0$. Initially we assume that the functions
$\Psi_1$ and $\Psi_2$ are analytic. Then results
proved in \cite{kr} can be applied. See also section 4 of \cite{ar} for some
further information on these ideas. One of the hypotheses required
is that $f$ is regular in the analytic sense defined there. What this means
is that $f$ and all its derivatives with respect to any argument other
than $\eta$ are real analytic for $\eta>0$ and extend continuously to
$\eta=0$. The other hypothesis is that the matrix exponential
$\sigma^N$ should be uniformly bounded for all $0<\sigma<1$. This follows
from the fact that $N$ is diagonalizable with non-negative eigenvalues.
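Indeed, $N$ has eigenvalues $0$ (with multiplicity two, eigenvectors $(1,0,0)$ and $(0,0,1)$) and $2\nu>0$ (eigenvector $(1,-2\nu,0)$), so in a basis of eigenvectors $\sigma^N$ is diagonal with entries $\sigma^0=1$ and $\sigma^{2\nu}\leq 1$ for $0<\sigma<1$.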
To extend this result to the smooth case more work is necessary. The basic
idea is to approximate the smooth functions $\Psi_1$ and $\Psi_2$ by
sequences of analytic functions $(\Psi_1)_n$ and $(\Psi_2)_n$, apply the
analytic existence theorem just discussed to get a sequence of solutions
$V_n$ of the Fuchsian system and then show that $V_n$ tends to a limit
$V$ as $n\to\infty$. The function $V$ is then the solution of the problem
with smooth data. To show the convergence of $V_n$ suitable estimates
are required and in order to obtain these the Fuchsian equation is written
in an alternative form which is symmetric hyperbolic. This rewriting is
only possible for $w\ne 0$ but for $w=0$ the system, being an ODE, is
already symmetric hyperbolic and so the extra step is not required.
In general a simplification of the system is achieved by introducing a new
time variable by $t=\eta^\zeta$ and rescaling $f$ by a factor $\zeta^{-1}$.
Then the system can be written as
\begin{equation}
tA^0\partial_t V+tA^j\partial_jV+MV=tg(t,V,DV)
\end{equation}
where
\begin{equation}
M=\left[\begin{array}{ccc}
0 & -1 & 0\\0 & \frac{2\nu}{\zeta w} & 0\\ 0 & 0 &
-\frac{1+\zeta}{\zeta}I\end{array}\right]
\ \ \ , \ \ \
g=\left[\begin{array}{c}
0\\
t^{\frac{2-\zeta}{\zeta}}Q\\
0\end{array}\right]
\end{equation}
and the other coefficient matrices are given by $A^0={\rm diag}(1,\frac1w,I)$
and
\begin{equation}
A^j=\left[\begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & -\zeta^{-1}t^{\frac{1-\zeta}{\zeta}}e_j\\
0 & -\zeta^{-1}t^{\frac{1-\zeta}{\zeta}}e_j & 0
\end{array}\right]
\end{equation}
with $e_j$ the $j$th standard basis vector in $R^3$. This is a symmetric
hyperbolic system. A disadvantage is that in passing from $N$ to $M$
positivity is lost.
The fact that $M$ has a negative eigenvalue can be overcome by
subtracting an approximate solution from $v$ to obtain a new unknown.
Expressing the equation in terms of the new unknown leads to a system
which is similar to that for $v$ but with $M$ replaced by $M+nI$ for an
integer $n$. For $n$ sufficiently large this means that the replacement
for $M$ is positive definite. With this choice the system is both
in Fuchsian form and symmetric hyperbolic. The necessary approximate
solution can be taken to be a formal solution of sufficiently high order
as introduced in section 2 of \cite{rendall00}. The fact that the system
is symmetric hyperbolic leads to energy estimates which can be used to
prove the convergence of the sequence of analytic solutions to a solution
corresponding to the smooth initial data, thus completing the proof of
the existence part of the theorem. Uniqueness can be proved using an
energy estimate as has been worked out in \cite{rendall00}.
\end{proof}
It would presumably be possible to extend the above results to the case that
the data are only assumed to belong to a suitable Sobolev space. An
alternative approach to doing so would be try to apply ideas in the paper
\cite{kichenassamy} of Kichenassamy.
The proofs just presented have been strongly influenced by work on
Gowdy spacetimes. For a special class of these, the polarized
Gowdy spacetimes, the basic field equation is $P_{tt}+t^{-1}P_t=P_{xx}$.
Evidently this is closely related to \eqref{basic} although they
are not identical for any choice of $w$, even if attention is
restricted to solutions of \eqref{basic} depending on only one space
variable. The energy arguments above were inspired by those applied to the
polarized Gowdy equation in \cite{im}. The following analogue
of Theorem \ref{LinearSingularityData} is a special case of a result in \cite{rendall00}. If smooth
periodic functions $k(x)$ and $\omega(x)$ are given with $k$ everywhere
positive there is a smooth solution of the polarized Gowdy equations
which satisfies
\begin{equation}
P(t,x)=k(x)\log t+\omega (x)+o(1)
\end{equation}
as $t\to 0$. It is plausible that the positivity restriction on $k$, while
very important for general (non-polarized) Gowdy spacetimes, should be
irrelevant in the polarized case. It turns out that following the arguments
used above to analyse \eqref{basic} allows this intuition to be proved
correct.
One way of attempting to reduce the polarized Gowdy equation to Fuchsian
form is to mimic \eqref{expansion} and write $P=k\log t +\omega+v$. This
fails because the analogue of the matrix $N$ has $\nu$ replaced by zero.
Thus the matrix has all eigenvalues zero and includes a non-trivial Jordan
block. To access the Fuchsian theory in the analytic case the
expansion for $P$ may be replaced by
\begin{equation}
P=k\log t +\omega+t^\delta v
\end{equation}
for a small positive $\delta$. With this modification the reduction procedure
used earlier for \eqref{basic} gives a Fuchsian system here as well. It can be concluded that
$k$ and $\omega$ can be prescribed in the case that they are analytic.
Once this has been achieved the smooth case can be handled just as in
the proof of Theorem 2.
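To see in outline why this works, the following formal computation may help
(it is a sketch, not part of the proof). Writing $D=t\partial_t$ and inserting
the modified ansatz into the polarized Gowdy equation, the contribution of
$k\log t+\omega$ drops out of the left hand side, since it is annihilated by
$\partial_t^2+t^{-1}\partial_t$, and what remains is
\begin{equation}
(D+\delta)^2v-t^2v_{xx}=t^{2-\delta}\left(k_{xx}\log t+\omega_{xx}\right).
\end{equation}
For $\delta=0$ the operator on the left degenerates to $D^2$, which is
precisely the Jordan block problem just described, while for $\delta>0$ the
relevant eigenvalue is $\delta>0$ and the source on the right tends to zero
as $t\to 0$ for any $\delta<2$. This is the structure required by the
Fuchsian theory.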
It will now be shown that some of the results which have been proved for a
linear equation of state can be extended to more general equations of state.
In the discussion which follows it will be convenient to exclude the case
of a linear equation of state which has been treated already. This in
particular excludes dust so that by our general assumptions $f'$ never
vanishes. In this case we consider solutions to \eqref{MainEquation} rather than
\eqref{basic}.
Choose an initial time $\eta_0$ and for a given background solution let
$\epsilon (\eta_0)=\epsilon_0$.
From the condition that $f'(\epsilon)\leq 1$ it follows that
\begin{equation}
\Lambda:=\sup_{(\epsilon_0,\infty)}\left|f'(\epsilon)-\frac{f(\epsilon)}
{\epsilon}\right|^{1/2}
\end{equation}
is strictly positive and finite. It will be assumed in addition that the
equation of state satisfies the condition
\begin{equation}\label{secondderiv}
\sup_{(\epsilon_0,\infty)}\left|\left(\frac{\epsilon+f(\epsilon)}
{f'(\epsilon)}\right)\frac{d^2 f}{d\epsilon^2}\right|<\infty.
\end{equation}
Using the fact that $\Lambda>0$ it follows that there exists a positive
number $\lambda$ satisfying the following three inequalities:
\begin{equation}\label{ineq1}
2\lambda\frac{df}{d\epsilon}\ge 3(\epsilon+f(\epsilon))
\frac{d^2 f}{d\epsilon^2},
\end{equation}
\begin{multline}\label{ineq2}
4\lambda^2-2\left[6\left(1+f'(\epsilon)\right)+\left(1+\frac{3f(\epsilon)}
{\epsilon}
\right)\right]\lambda \\
+6\left(1+f'(\epsilon)\right)\left(1+\frac{3f(\epsilon)}{\epsilon}\right)
- \Lambda^{-2}\left|\Lambda^{2}-3\left(f'(\epsilon)-\frac{f(\epsilon)}
{\epsilon}\right)
\right|^2\ge 0
\end{multline}
and
\begin{equation}\label{ineq3}
\lambda\ge 3(1+f'(\epsilon)).
\end{equation}
That \eqref{ineq1} can be satisfied follows from \eqref{secondderiv}. The
fact that $f'(\epsilon)$ and $f(\epsilon)/\epsilon$ are bounded means
that the first term in the expression on the left hand side of \eqref{ineq2}
dominates the other terms for $\lambda$ sufficiently large and so the second
condition on $\lambda$ can also be satisfied. The constant $\lambda$ can be
chosen to satisfy \eqref{ineq3} since the right hand side of that inequality
is bounded. Note for comparison that for a linear equation
of state $\Lambda=0$. In that case $\lambda$ can be taken to be
the larger root of the expression obtained from the left hand side of
\eqref{ineq2} by omitting the term containing $\Lambda$. This root is $3(1+w)$.
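To check this, set $f'(\epsilon)=f(\epsilon)/\epsilon=w$ in \eqref{ineq2}
and drop the term containing $\Lambda$; the left hand side then factors as
\begin{equation}
\left(2\lambda-6(1+w)\right)\left(2\lambda-(1+3w)\right),
\end{equation}
with roots $3(1+w)$ and $\frac12(1+3w)$, the first of which is always the
larger.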
Define the following generalization of the energy functional \eqref{energy}:
\begin{equation}
E_2(\eta)=\frac12\int_{T^3} |\Phi'(\eta)|^2+f'(\epsilon)|\nabla\Phi(\eta)|^2
+\Lambda^2 {\cal H}^2|\Phi(\eta)|^2.
\end{equation}
(Note that we suppress the dependence of $\epsilon$ and $\mathcal{H}$ on
$\eta$.)
A computation shows that if $a$ denotes the scale factor, then due to the
inequalities \eqref{ineq1}-\eqref{ineq3}
\begin{equation}\label{energynonlin}
\frac{d}{d\eta}\left[a^{2\lambda}E_2(\eta)\right]\ge 0.
\end{equation}
In more detail, computing the time derivative of $a^{2\lambda}E_2$ and using
equation \eqref{MainEquation} along with the equations satisfied by the
background
quantities $\epsilon$ and ${\cal H}$ gives an integral where the
integrand is a sum of terms each of which has a factor $\Phi^2$,
$|\Phi'|^2$, $\Phi\Phi'$ or $|\nabla\Phi|^2$. The aim is to show that
the sum of these terms is non-negative. To do this it is first assumed that
the coefficient of $|\nabla\Phi|^2$ is non-negative. This leads to the
condition \eqref{ineq1}. Next it is shown that the quadratic form in
$\Phi$ and $\Phi'$ is positive semidefinite. This can be done by using
the inequality
\begin{equation}
|\Lambda {\cal H}\Phi\Phi'|\le
\frac{\delta}{2}\Lambda^2 {\cal H}^2\Phi^2 +\frac{1}{2\delta}(\Phi')^2,
\end{equation}
which holds for any $\delta>0$, to estimate the quadratic form from below by
the following sum of a term containing $\Phi^2$ and one containing $|\Phi'|^2$:
\begin{multline}
\frac12\Lambda^2{\cal H}^2\left[2\lambda-\left(1+\frac{3f(\epsilon)}
{\epsilon}\right)
-\delta \Lambda^{-1} \left|\Lambda^2-3\left(f'(\epsilon) -\frac{f(\epsilon)}
{\epsilon}\right)\right|\right]\Phi^2
\\
+\frac12\left[2\lambda-6(1+f'(\epsilon))
-\frac{\Lambda^{-1}}{\delta}\left|\Lambda^2-{3}\left(f'(\epsilon)
-\frac{f(\epsilon)}{\epsilon}\right)\right|\right]|\Phi'|^2
\end{multline}
It
remains to ensure that the coefficients of these terms are non-negative and
this follows from \eqref{ineq2} and \eqref{ineq3}, choosing $\delta$
sufficiently small. It can be concluded from
\eqref{energynonlin}
that $E_2(\eta)=O(a(\eta)^{-2\lambda})$ as $\eta\to 0$. As in the case
of a linear equation of state, corresponding estimates hold for spatial
derivatives and pointwise estimates follow by Sobolev embedding. An integral
formula for $\Phi'$ can be obtained as in the case of a linear equation of
state. It reads (with some arguments suppressed; recall $\epsilon$ and
$\mathcal{H}$ depend on $\eta$)
\begin{multline}\label{intformula2}
\Phi'(\eta)=\left(f(\epsilon)+\epsilon\right)
\left[\frac{\Phi'(\eta_0)}{f(\epsilon_0)+\epsilon_0}\right.
\\
\left.-\int_{\eta_0}^\eta\frac{1}{f(\epsilon)+\epsilon}
\left(f'(\epsilon)\Delta\Phi+3\left(f'(\epsilon)-\frac{f(\epsilon)}
{\epsilon}\right)\mathcal{H}^2\Phi\right)
\right]
\end{multline}
If no further assumptions are made on the equation of state then using
the known boundedness statements and repeatedly substituting into the
right hand side of \eqref{intformula2} would lead to unwieldy expressions
involving iterated integrals. Simpler results can be obtained if it is assumed
that in the limit $\epsilon\to\infty$ the function $f$ is linear in leading
order with lower powers as corrections. In other words, assume that
$f$ admits an asymptotic expansion of the form
\begin{equation}\label{eqnofstate}
f(\epsilon)\sim w\epsilon+\sum_{j=1}^\infty f_j\epsilon^{a_j}
\end{equation}
as $\epsilon\to\infty$. Here the $f_j$ are constants while $\{a_j\}$ is a
decreasing sequence of real numbers all of which are less than one and which
tend to $-\infty$ as $j\to\infty$. Assume further that the relation obtained
by differentiating this expansion term by term any number of times is also a
valid asymptotic expansion. To have a concrete example, consider the
polytropic equation of state which is given parametrically by the relations
\begin{equation}\label{polytropic}
\epsilon=m+Knm^{\frac{n+1}{n}},\qquad p=Km^{\frac{n+1}{n}}
\end{equation}
with constants $K$ and $n$ satisfying $0<K<1$ and $n>1$. In this case
the asymptotic expansion is of the form
\begin{equation}\label{stateexpansion}
f(\epsilon)=n^{-1}\epsilon-n^{-1}(Kn)^{-\frac{n}{n+1}}\epsilon^{\frac{n}{n+1}}
+\ldots
\end{equation}
Returning to the more general case \eqref{eqnofstate}, define a quantity
$m$ by
\begin{equation}\label{massdensity}
m(\epsilon)=\exp\left\{\int_1^\epsilon (\xi+f(\xi))^{-1}d\xi \right\}
\end{equation}
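For orientation, for an exactly linear equation of state $f(\epsilon)=w\epsilon$
the integral can be evaluated in closed form,
\begin{equation}
m(\epsilon)=\exp\left\{\int_1^\epsilon \frac{d\xi}{(1+w)\xi}\right\}
=\epsilon^{\frac{1}{w+1}},
\end{equation}
in agreement with the leading term quoted below for the general case.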
Substituting the asymptotic expression \eqref{eqnofstate} into
\eqref{massdensity} gives a corresponding asymptotic expansion for the
function $m(\epsilon)$ as a sum of powers of $\epsilon$ with the leading
term being proportional to $\epsilon^{\frac{1}{w+1}}$. It follows from the
continuity equation \eqref{ContinuityEqn} for the fluid that $m$ is
proportional to $a^{-3}$. This
leads to an asymptotic expansion for $\epsilon$ in terms of $a$. The equation
\eqref{SecondFriedmann}
implies that
$a'=\sqrt{8\pi G/3}\epsilon^{1/2}a^2$; substituting for
$\epsilon$ in terms of $a$ gives rise to a relation which can be
integrated to give an asymptotic expansion for $a$ in terms of $\eta$
in the limit $\eta\to 0$. The leading term is proportional to
$\eta^{\frac{2}{3w+1}}$. Substituting this back in leads to an asymptotic
expansion for $a'$ from which an asymptotic expansion for ${\cal H}$ can be
obtained. An asymptotic expression for $\epsilon$ in terms of $\eta$ can also
be derived. Thus in the end there are expansions for all the important
quantities in the background solution in terms of $\eta$. In all cases
the leading term in the expansion agrees with that in the case of a linear
equation of state. The result is an integral equation which can be
written in the form
\begin{eqnarray}\label{intformula3}
&&\Phi' (\eta)=h_1 (\eta)\left[C-
\int_0^{\eta_0} \left(h_2(\zeta)\Delta\Phi (\zeta)
-h_3(\zeta)\Phi (\zeta)\right)d\zeta\right.\nonumber \\
&&\left.+\int_0^\eta \left(h_2(\zeta)\Delta\Phi (\zeta)
+h_3(\zeta)\Phi (\zeta)\right)d\zeta\right]
\end{eqnarray}
where $C$ is a constant depending only on the data at time $\eta_0$
and asymptotic expansions are available for the
functions $h_1$, $h_2$ and $h_3$. The leading terms in $h_1$ and $h_2$
are constant multiples of the corresponding powers of $\eta$ for a linear
equation of state. To see the leading order behaviour of $h_3$ recall that
in \eqref{intformula2} the coefficient of $\Phi$ in the integrand is
\begin{equation}\label{evolH}
3\left(f'(\epsilon)-\frac{f(\epsilon)}{\epsilon}\right){\cal H}^2.
\end{equation}
Hence if $f_j$ is the first non-vanishing coefficient in the expansion
\eqref{eqnofstate} then the leading order power in $h_3$ is less
than that in $h_2$ by $\alpha=2-\frac{6(1+w)(1-a_j)}{1+3w}$. To obtain
estimates close to $\eta=0$ the estimate for the
energy can be applied starting from $\eta$ very small. In other words,
$\epsilon_0$ can be chosen as large as desired. Then all the coefficients in
the left hand side of \eqref{ineq2} not involving $\Lambda$ are as close as
desired to those for the corresponding linear equation of state.
Since $\Lambda$ can then be made arbitrarily small, the coefficient involving $\Lambda$ is
also arbitrarily small. It follows that $\lambda$ can be chosen to
have any value strictly greater than $3(1+w)$. Hence $E_2$ can be bounded
by any power greater than the power in the corresponding linear case.
This is enough to proceed as in the proof of Theorem 1 to
obtain an asymptotic expansion for $\Phi$ where each individual term is
a constant multiple of an expression of the form $\eta^k(\log\eta)^l$
with $l=0$ or $l=1$ and the leading term is just as in Theorem 1 with the
corresponding value of $w$. The key thing that makes this work is that
$\alpha<2$ so that no logarithms are generated when evaluating the
integral in \eqref{intformula3} in the course of the iteration.
The results of this discussion can be summed up as follows.
\begin{theorem}\label{GeneralSingularityExpansion}
Let $\Phi$ be a smooth solution of \eqref{MainEquation} on
$(0,\infty)\times T^3$. Suppose that the equation of state has an asymptotic
expansion of the form \eqref{eqnofstate}. Then there are coefficients
$\Phi_{k,l}$, smooth functions on $T^3$, with $k\ge -2\nu$ belonging to an
increasing sequence of real numbers tending to infinity and $l\in\{0,1\}$,
such that the formal series $\sum_{k,l}\Phi_{k,l}(\log\eta)^l\eta^k$ is
asymptotic to $\Phi$ in the
limit $\eta\to 0$. All coefficients in the expansion are determined uniquely
by $\Phi_{-2\nu,0}$ and $\Phi_{0,0}$.
\end{theorem}
\section{Late-time asymptotics for a linear equation of state}
\label{asymplatetime}
In this section information is obtained about the asymptotics of
solutions of equation \eqref{basic} in the limit $\eta\to\infty$;
some extensions of these results to more general equations of state are
derived in Sect. \ref{asymplatetimegen}. Once again energy estimates play
a fundamental role.
In this case it is convenient to treat homogeneous solutions separately.
By a homogeneous solution we mean one which does not depend on the spatial
coordinates. These can be characterized as the solutions whose initial
data on a given spacelike hypersurface do not depend on the spatial
coordinates. For this class of solutions equation \eqref{basic} can be
solved explicitly with the result that $\Phi=A+B\eta^{-2\nu}$ for constants
$A$ and $B$. A general solution can be written as the sum of a
homogeneous solution and a solution such that $\Phi$ has zero mean
on any hypersurface of constant conformal time. Call solutions of the latter
type zero-mean solutions. Then in order to determine the late-time asymptotics
for general solutions it suffices to do so for zero-mean solutions. In
this case define $\psi(\eta)=\eta^{\nu+\frac12}\Phi(\eta)$. Then $\psi$
satisfies the equation
\begin{equation}\label{evolpsi}
\psi''=w\Delta\psi+\left(\nu^2-\frac14\right)\eta^{-2}\psi
\end{equation}
Define an energy by
\begin{equation}\label{energy2}
E_3(\eta)=\frac12\int_{T^3} |\psi'(\eta)|^2+w|\nabla\psi(\eta)|^2.
\end{equation}
Then
\begin{equation}
E_3'(\eta)=\left(\nu^2-\frac14\right)\eta^{-2}\int_{T^3}
\psi(\eta)\psi'(\eta).
\end{equation}
The integral on the right hand side of this equation can be bounded, using
the Cauchy-Schwarz inequality, in terms of the $L^2$ norms of $\psi'$ and
$\psi$. The first of these can be bounded in terms of the energy and, since
the mean value of $\psi$ is zero, the Poincar\'e inequality on the torus
shows that the same is true of the second. Thus
$E_3'(\eta)\le C\eta^{-2} E_3(\eta)$ for a constant
$C$. By
Gronwall's inequality it follows that $E_3$ is globally bounded in the
future. These
arguments apply equally well to spatial derivatives of $\psi$ of any order.
By the Sobolev embedding theorem it can be concluded that $\psi$ and its
spatial derivatives of any order are bounded. The energy bounds and the
basic equation then imply that all spacetime derivatives of any order are
uniformly bounded in time.
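For definiteness, the Gronwall step used above reads
\begin{equation}
E_3(\eta)\le E_3(\eta_0)\exp\left(C\int_{\eta_0}^\eta\zeta^{-2}d\zeta\right)
\le E_3(\eta_0)\,e^{C/\eta_0},
\end{equation}
and the same estimate applies to the energies controlling the spatial
derivatives.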
Let $\eta_j$ be a sequence of times tending to infinity and consider the
translates defined by $\psi_j(\eta)=\psi(\eta+\eta_j)$. The sequence
$\psi_j$ satisfies uniform $C^\infty$ bounds. Consider the restriction
of this sequence to an interval $[\eta_0,\eta_1]$. By the Arzel\`a-Ascoli
theorem the sequence of restrictions has a uniformly convergent subsequence.
By passing to further subsequences and diagonalization it can be shown
that the $\psi_j$ and their spacetime derivatives of all orders converge uniformly
on compact subsets to a limit $W$. Passing to the limit in the evolution
equation for $\psi$ along one of these sequences shows that $W$ satisfies
the flat-space wave equation $W''=w\Delta W$. Note that \emph{a priori} the
function $W$ could depend on the sequence of times chosen. This issue is
examined more closely below.
Given a smooth solution of \eqref{evolpsi} it is possible to
do a Fourier transform in space to get the equation
\begin{equation}\label{mode}
\hat\psi''=-w|k|^2\hat\psi+\left(\nu^2-\frac14\right)\eta^{-2}\hat\psi
\end{equation}
which is referred to below as the mode equation. Here $k$ is a vector. The
restriction to zero-mean solutions implies that the case $k=0$ of
\eqref{mode} can be ignored.
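It may be noted in passing that for $w>0$ and $k\ne 0$ equation \eqref{mode}
can be solved exactly: it is a Bessel equation in disguise, with solutions
\begin{equation}
\hat\psi(\eta)=\sqrt{\eta}\,\mathcal{C}_\nu(\sqrt{w}|k|\eta)
\end{equation}
where $\mathcal{C}_\nu$ denotes any Bessel function of order $\nu$, and the
standard large-argument asymptotics of the Bessel functions reproduce the
statement of the lemma below. The elementary argument which follows is
nevertheless preferable, since it generalizes to the non-linear equations of
state treated later.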
\begin{lemma}\label{ModeLemma}
Any solution $\hat\psi$ of equation \eqref{mode} has an asymptotic
expansion of the form
\begin{equation}
\hat\psi (\eta)=\bar W_k\cos(\sqrt{w}|k|(\eta-\bar\eta_k))+O(\eta^{-1}),
\end{equation}
for constants $\bar\eta_k$ and $\bar W_k$, in the limit $\eta\to\infty$.
\end{lemma}
\begin{proof}
To prove the lemma it is convenient to introduce polar coordinates
associated to the variables $\hat\psi$ and $\frac{1}{\sqrt{w}|k|}\hat\psi'$.
Thus
$\hat\psi=r\cos\theta$ and $\frac{1}{\sqrt{w}|k|}\hat\psi'=r\sin\theta$. This
leads
to the equations:
\begin{eqnarray}
r'&=&\frac{1}{\sqrt{w}|k|}\left(\nu^2-\frac14\right)r\eta^{-2}
\sin\theta\cos\theta
\label{evolr} \\
\theta'&=&-\sqrt{w}|k|+\frac{1}{\sqrt{w}|k|}\left(\nu^2-\frac14\right)
\eta^{-2}\cos^2\theta
\label{evoltheta}
\end{eqnarray}
It follows from \eqref{evoltheta} that
\begin{equation}\label{asymptheta}
\theta (\eta)=-\sqrt{w}|k|(\eta-\bar\eta_k)+O(\eta^{-1})
\end{equation}
for a constant $\bar\eta_k$. From \eqref{evolr} it follows that
\begin{equation}\label{asympr}
r(\eta)=\bar W_k (1+O(\eta^{-1}))
\end{equation}
for a constant $\bar W_k$. As a consequence of \eqref{asymptheta} we have
\begin{equation}
\cos (\theta(\eta))=\cos(\sqrt{w}|k|(\eta-\bar\eta_k))+O(\eta^{-1}).
\end{equation}
Together with \eqref{asympr} this gives the conclusion of the lemma.
\end{proof}
Consider a zero-mean solution of the type considered before. Let a function
$W$ be defined by taking the sequence $\eta_j$ used above to consist of
integer multiples of $2\pi$. We now show that the function $\psi-W$
tends to zero as $\eta\to\infty$. In order to do this it suffices to show
that it does so along a subsequence of an arbitrary sequence of values
$\zeta_j$ of $\eta$ tending to infinity. By passing to a subsequence as
before it can be arranged that the translates by the amounts $\zeta_j$
converge uniformly on compact subsets
as $j\to\infty$. Call the limit $Y$. The aim is to prove that $Y=0$. If
not there must be some mode $\hat Y$ which is non-zero. It can be obtained
as the limit of some $\hat\psi-\hat W$. From Lemma \ref{ModeLemma} it can be
seen that
$\hat W=\bar W_k\cos (\sqrt{w}|k|(\eta-\bar\eta_k))$. Hence
$\hat\psi-\hat W=O(\eta^{-1})$
and so $\hat Y=0$, a contradiction. Convergence of derivatives can be
obtained in a corresponding way. Thus any solution can be written as
$\Phi(\eta,x)=\eta^{-\nu-\frac12}(W(\eta,x)+o(1))$. A similar result for
the polarized Gowdy equation with a sharper estimate on the error term
was proved in \cite{jurke}.
A late-time asymptotic expansion has now been derived which involves a
solution $W$ of the flat-space wave equation. Comparing with the results
on parametrizing solutions by the coefficients in an asymptotic expansion
near the singularity it is natural to ask if the function $W$ can be
prescribed freely. It will now be shown that this is the case by
following the proof of an analogous result for the polarized Gowdy
equation due to Ringstr\"om \cite{ringstrom05}. Write an arbitrary
zero-mean solution in the form
\begin{equation}
\Phi(\eta,x)=\eta^{-\nu-\frac12}W(\eta,x)+\omega (\eta,x).
\end{equation}
Then $\omega$ satisfies the equation
\begin{equation}
\omega''+\eta^{-1}\omega'-w\Delta\omega=\left(\nu^2-\frac14\right)
\eta^{-\nu-\frac52}W.
\end{equation}
Define
\begin{equation}
H(\eta)=\frac12 \int_{T^3} |\omega'(\eta)|^2+w|\nabla\omega(\eta)|^2
\end{equation}
and
\begin{equation}
\Gamma(\eta)=\frac{1}{2\eta}\int_{T^3} \omega(\eta)\omega'(\eta).
\end{equation}
The aim is to study late times and attention will be restricted to the
region where $\eta\ge w^{-1}$. At this point it is necessary to assume
that $w>0$. The following inequalities show the equivalence of $H$ and
$H+\Gamma$ as norms of $(\omega',\nabla\omega)$:
\begin{equation}
|\Gamma(\eta)|\le\frac{1}{2w\eta}H(\eta)\ \ \ ,\ \ \frac12 H\le H
+\Gamma\le\frac32 H.
\end{equation}
Now
\begin{multline}
\frac{d}{d\eta}\left[H+\Gamma\right]=-\frac{1}{\eta}(H+\Gamma)-\frac{4\nu+3}{2\eta}\Gamma
\\
+\left(\nu^2-\frac14\right)\eta^{-\nu-\frac52}\int_{T^3}\omega'W
+\frac12\left(\nu^2-\frac14\right)\eta^{-\nu-\frac72}\int_{T^3}\omega W.
\end{multline}
Using the equivalence of $H+\Gamma$ and $H$ this can be used to derive
the following differential inequality
\begin{multline}
\frac{d}{d\eta}\left[H+\Gamma\right]\ge -\left(\frac{1}{\eta}+\frac{4\nu+3}{2w\eta^2}\right)
\left(H+\Gamma\right)
\\
-\eta^{-\nu-\frac52}\|W \|_{L^2}\left(\nu^2-\frac14\right)
\left(\frac{2 +\sqrt{w}}{\sqrt{2}}\right)\left(H+\Gamma\right)^{1/2}
\end{multline}
By analogy with equation (16) of \cite{ringstrom05} define
\begin{equation}
E_4(\eta)=\eta e^{-\frac{4\nu+3}{2\eta w}}(H(\eta)+\Gamma(\eta)).
\end{equation}
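The prefactor is precisely the integrating factor of the linear term in the
preceding differential inequality, since
\begin{equation}
\frac{d}{d\eta}\ln\left(\eta e^{-\frac{4\nu+3}{2\eta w}}\right)
=\frac{1}{\eta}+\frac{4\nu+3}{2w\eta^2}.
\end{equation}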
This quantity satisfies an inequality of the form
\begin{equation}
E_4'(\eta)\ge -C\eta^{-\nu-2}\|W(\eta)\|_{L^2}E_4(\eta)^{1/2}
\end{equation}
for a positive constant $C$ depending on $w$. Since $\eta^{-\nu-2}$ is
integrable at infinity this inequality can be used in just the same way
as the corresponding inequality in \cite{ringstrom05}. In this way it
can be proved that given a solution $W$ of the flat space wave equation
there is a corresponding solution $\Phi$ of \eqref{basic}. It follows from
the proof that $E_4(\eta)=O(\eta^{-2\nu-2})$. Hence $H(\eta)=O(\eta^{-2\nu-3})$
and the solution decays like $\eta^{-3/2-\nu}$.
The information obtained concerning the asymptotics of the solutions
constructed starting from a solution $W$ of the wave equation is stronger
than what was proved about general solutions of \eqref{basic} up to this
point. This can be improved on as follows. Given a solution $\Phi$ of
\eqref{basic} a solution $W$ of the flat space wave equation is obtained.
From there a solution $\tilde\Phi$ of \eqref{basic} is obtained with stronger
information on the asymptotics. The aim is now to show that $\tilde\Phi=\Phi$.
To do this it is enough to show that each Fourier mode agrees. This means
showing that a solution $\hat\psi$ of \eqref{mode} vanishes if it tends to
zero as $\eta\to\infty$. That the latter statement holds follows easily
from \eqref{evolr}. What has been proved can be summed up in the following
theorem.
\begin{theorem}\label{LinearExpandingThm}
Let $\Phi$ be a global smooth solution of \eqref{basic}. Then
there exist constants $A$ and $B$ and a smooth solution $W$ of the equation
$W''=w\Delta W$ with zero spatial average such that
\begin{equation}
\Phi(\eta,x)=A+W(\eta,x)\eta^{-\nu-\frac12}+B\eta^{-2\nu}+O(\eta^{-\nu-\frac32})
\end{equation}
This asymptotic expansion may be differentiated term by term in space as
often as desired.
\end{theorem}
\noindent
Note that the third explicit term in this asymptotic expansion is
often no larger than the error term. The function $W$ can be prescribed
freely.
\section{Late-time asymptotics for a general equation of state}
\label{asymplatetimegen}
It will now be investigated how the results of the previous section can be
extended to the case of a more general equation of state. The class of
equations of state which will be treated is defined by requiring that they
admit an asymptotic expansion of the form
\begin{equation}\label{eqnofstate2}
f(\epsilon)\sim w\epsilon+\sum_{j=1}^\infty f_j\epsilon^{a_j}
\end{equation}
for $\epsilon\to 0$. Here $w\ge 0$, the coefficients $a_j$ are all greater
than one and form an increasing sequence. To ensure the positivity of $f'$
it is assumed that if $w=0$ the coefficient $f_1$ is positive. This form
of the equation of state may be compared with that of (\ref{eqnofstate}).
It is further assumed that this expansion retains its validity when
differentiated term by term as often as desired. An example is given by the
polytropic equation of state (\ref{polytropic}). In that case $w=0$, $f_1=K$
and $a_1=\frac{n+1}{n}$. With this assumption information can be obtained on
the leading order asymptotics of the background solution as $\eta\to\infty$.
To simplify the notation define $\sigma=a_1-1$. It is convenient to use the
mass density once more, writing (\ref{massdensity}) in the equivalent form
\begin{equation}\label{massdensity2}
m(\epsilon)=\exp\left\{-\int_\epsilon^1 (\xi+f(\xi))^{-1}d\xi \right\}
\end{equation}
Then $m(\epsilon)$ has an expansion about $\epsilon=0$ where the leading term
is proportional to $\epsilon^{\frac{1}{w+1}}$. In particular, when $w=0$ the
leading term is linear. Using the fact that $m$ is proportional to $a^{-3}$
for any equation of state leads to an asymptotic expansion for $\epsilon$ in
terms of $a$. Putting this information into (\ref{SecondFriedmann}) shows that
$a(\eta)$ has an expansion in the limit $\eta\to\infty$ with the leading
term proportional to $\eta^{\frac{2}{3w+1}}$. Finally it follows that
$\epsilon$ and ${\cal H}$ have expansions with leading terms proportional
to $\eta^{-\frac{6(1+w)}{1+3w}}$ and $\eta^{-1}$ respectively. With the leading
asymptotics of the background solution having been determined it is possible
to derive asymptotics for the coefficients in the equation for $\Phi$.
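(For $w=0$, the case of most interest below, these leading terms read
$a\propto\eta^{2}$, $\epsilon\propto\eta^{-6}$ and ${\cal H}\propto\eta^{-1}$.)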
As in the case of a linear equation of state it is convenient to treat
homogeneous and zero-mean solutions separately. The homogeneous solutions
will be analysed first. This leads to consideration of the equation obtained
from (\ref{MainEquation}) by omitting the term containing spatial derivatives.
It is convenient here to exclude the case of a linear equation of state
which was previously analysed so as to ensure that $\sigma$ is defined
uniquely in terms of the equation of state. The coefficients satisfy:
\begin{equation}
3(1+f'(\epsilon))=3(1+w+(\sigma+1)f_1\eta^{-\beta})+o(\eta^{-\beta})
\end{equation}
and
\begin{equation}
3\left(f'(\epsilon)-\frac{f(\epsilon)}{\epsilon}\right)
=3f_1\sigma\eta^{-\beta}+o(\eta^{-\beta})
\end{equation}
where $\beta=\frac{6\sigma(1+w)}{1+3w}$ is the power with which $\epsilon^\sigma$ decays in conformal time, $\epsilon$ being proportional to $\eta^{-\frac{6(1+w)}{1+3w}}$ in leading order. Define
\begin{equation}
F=\frac12\Phi'^2+\alpha\eta^{-2-\beta}\Phi^2
\end{equation}
where $\alpha$ is a positive constant which needs to be chosen appropriately
in what follows. Computing the derivative of $F$ with respect to
$\eta$ and using the equation gives a sum of terms involving $\Phi'^2$,
$\Phi^2$ and $\Phi\Phi'$. The aim is to show that $F$ is bounded and to do
this it suffices to consider arbitrarily late times. The leading order terms
in the coefficients of $\Phi'^2$ and $\Phi^2$ are
$-\frac{6(1+w)}{1+3w}\eta^{-1}$ and
$-A\eta^{-3-\beta}$ respectively, where $A$ is a positive constant. The
coefficient of $\Phi\Phi'$ has a leading term proportional to $\eta^{-2-\beta}$
for a general choice of $\alpha$. However if $\alpha$ is chosen to be half the
coefficient of the leading order term in the expansion of the coefficient of
$\Phi$ in (\ref{MainEquation}) then a cancellation occurs and the coefficient
becomes $o(\eta^{-2-\beta})$. This choice is made here. The aim is to show that
the term containing $\Phi\Phi'$ can be absorbed by the sum of the other two so
as to leave a non-positive remainder. To do this
the inequality
\begin{equation}
|\eta^{-2-\beta}\Phi\Phi'|\le\frac12 (\eta^{-1}\Phi'^2
+\eta^{-3-2\beta}\Phi^2)
\end{equation}
is used. The powers of $\eta$ which arise from this inequality match those
in the leading order terms in the coefficients of the manifestly negative
terms in the expression for the derivative of $F$ with respect to $\eta$.
Thus at late times the cross-term can be absorbed in the terms with the
desired sign. The conclusion is that $F$ is bounded. In fact this can be
improved somewhat. The derivative of $F$ can be estimated from above by
$-2\gamma \eta^{-1}F$ for any positive constant $\gamma<2\nu+1$.
This means that $\Phi'$ decays like $\eta^{-\gamma}$. It can be concluded that
$\Phi$ is bounded. From the evolution equation for $\Phi$ and the boundedness
statements already
obtained it follows that $(\eta^{2\nu+1}\Phi')'$ is integrable. Thus
$\Phi=A+B\eta^{-2\nu}+\ldots$ for constants $A$ and $B$ and the leading order
behaviour is as in the case of a linear equation of state.
It turns out to be useful for the analysis of the zero-mean solutions in the
expanding direction to introduce a new time variable $\tau$ satisfying the
relation $d\tau/d\eta=\sqrt{f'(\epsilon)}$. Substituting the asymptotics of
$f'(\epsilon)$ in terms of $\eta$ into this provides an asymptotic
expansion for $\tau$ in terms of $\eta$. For $w>0$ a linear relation is
obtained in leading order while for $w=0$ and $\sigma\ne\frac13$ the
expansion reads
\begin{equation}\label{taudef}
\tau=C_1\eta^{1-3\sigma}+\tau_\infty+\ldots
\end{equation}
for constants $C_1$ and $\tau_\infty$. Note that the second term in this
expansion is only smaller than the first for $\sigma<\frac13$. For $w=0$ and
$\sigma=\frac13$ the power in this expression gets replaced by $\log\eta$.
From these facts it can be seen that $\tau\to\infty$ for $\eta\to\infty$ when
$w>0$ or when $w=0$ and $\sigma\le\frac13$. In contrast $\tau$ tends to the
finite limit $\tau_\infty$ for $\eta\to\infty$ when $w=0$ and
$\sigma>\frac13$. This is a symptom of a bifurcation where the asymptotics of
the linearized solution undergoes a major change. For convenience we say that
the dynamics for an equation of state with an asymptotic expansion of the form
(\ref{eqnofstate2}) is underdamped if $w>0$ or $\sigma<\frac13$, critical if
$w=0$ and $\sigma=\frac13$ and overdamped if $w=0$ and $\sigma>\frac13$.
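For example, for the polytropic equation of state (\ref{polytropic}) one has
\begin{equation}
\sigma=a_1-1=\frac{1}{n},
\end{equation}
so that the dynamics is underdamped for $n>3$, critical for $n=3$ and
overdamped for $1<n<3$.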
Next the late-time behaviour will be analysed for zero-mean solutions with
an equation of state corresponding to underdamped dynamics. The first step is
to introduce the time variable $\tau$ into
(\ref{MainEquation}) with the result:
\begin{equation}\label{MainEquationtau}
\Phi_{\tau\tau}+3Z\tilde{\cal H}\Phi_\tau
+3Y\tilde{\cal H}^2\Phi-\Delta\Phi=0
\end{equation}
where
\begin{eqnarray}
&&Y=f'(\epsilon)-\frac{f(\epsilon)}{\epsilon}, \\
&&Z=1+f'(\epsilon)-\frac12\frac{(\epsilon
+f(\epsilon))f''(\epsilon)}{f'(\epsilon)}
\end{eqnarray}
and
$\tilde{\cal H}=a^{-1}a_\tau$. Derivatives with respect to $\tau$ are denoted
by subscripts. Next the term containing $\Phi_\tau$ will
be eliminated by multiplying $\Phi$ by a suitable factor $\Omega^{-1}$. Choose
$\Omega$ to satisfy
\begin{equation}
\frac{\Omega_\tau}{\Omega}=-\frac32 Z\tilde{\cal H}
\end{equation}
For all three types the behaviour of $\Omega$ as a function of $a$
in the limit $\epsilon\to 0$ can be determined. The result is that the
leading order term in $\Omega$ is proportional to $a^{-\frac32(1+w)}$ for $w>0$
and proportional to $a^{-\frac32(1-\frac{\sigma}{2})}$ for $w=0$. The function
$\Psi=\Omega^{-1}\Phi$ satisfies an equation of the form
\begin{equation}\label{taudynamics}
\Psi_{\tau\tau}=A(\epsilon)\tilde{\cal H}^2\Psi+\Delta\Psi
\end{equation}
where $A(\epsilon)$ is
a rational function of $\epsilon$, $f(\epsilon)$, $f'(\epsilon)$, $f''(\epsilon)$ and
$f'''(\epsilon)$. Under the given assumptions on the equation of state it
is bounded. Proving this requires examining many terms but is routine. For
example the only term containing the third derivative of $f$ is
$\frac{3(\epsilon+f(\epsilon))^2f'''(\epsilon)}{2f'(\epsilon)}$. The leading
order terms in the asymptotic expansions of numerator and denominator are
both proportional to $\epsilon^\sigma$. Note also that the leading order
term in the expansion for $\tilde{\cal H}$ is proportional to $\tau^{-1}$ for
any $\sigma<\frac13$. Note for comparison that $\tilde{\cal H}$ tends to a
constant value as $\tau\to\infty$ in the case $\sigma=\frac13$.
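To check the statement concerning $\tilde{\cal H}$ in the case $w=0$, recall
that $d\tau/d\eta=\sqrt{f'(\epsilon)}$, so that
$\tilde{\cal H}={\cal H}/\sqrt{f'(\epsilon)}$, with ${\cal H}\propto\eta^{-1}$
and $\sqrt{f'(\epsilon)}\propto\eta^{-3\sigma}$. Hence
\begin{equation}
\tilde{\cal H}\propto\eta^{3\sigma-1}\propto\tau^{-1}
\end{equation}
by \eqref{taudef}. For $w>0$ the same conclusion holds, since $\tau$ is
proportional to $\eta$ in leading order.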
Define an energy by
\begin{equation}
E_5 (\tau)=\frac12\int\Psi_\tau^2+|\nabla\Psi|^2.
\end{equation}
Then using the same techniques as in previous energy estimates shows that
there is a constant $C$ such that
\begin{equation}
\frac{dE_5}{d\tau}\le C|A|\tilde{\cal H}^2E_5
\end{equation}
Using the information available concerning $A$ and $\tilde{\cal H}$ shows
that $E_5$ is bounded in the future. Taking derivatives of the equation and
using the same arguments as in previous cases shows that $\Psi$ and its
derivatives of all orders with respect to $x$ and $\tau$ are bounded. It
follows that any sequence of translates $\Psi(\tau+\tau_n)$ for a sequence
$\tau_n$ tending to infinity has a subsequence which converges on
compact subsets to a limit $W$.
Doing a Fourier transform of the equation (\ref{taudynamics}) in space leads
to the mode equation
\begin{equation}
\hat\Psi_{\tau\tau}=-|k|^2\hat\Psi+A\tilde{\cal H}^2\hat\Psi
\end{equation}
Introducing polar coordinates in the
$(\hat\Psi,\frac{\hat\Psi_\tau}{|k|})$-plane leads to the
system
\begin{eqnarray}
\frac{dr}{d\tau}&=&\frac{1}{|k|}Ar\tilde{\cal H}^{2}
\sin\theta\cos\theta
\label{evolr2} \\
\frac{d\theta}{d\tau}&=&-|k|+\frac{1}{|k|}A
\tilde{\cal H}^{2}\cos^2\theta
\label{evoltheta2}
\end{eqnarray}
This implies that
\begin{equation}
\theta (\tau)=-|k|(\tau-\bar\tau_k)+O(\tau^{-1})
\end{equation}
and
\begin{equation}
r(\tau)=\bar W_k (1+O(\tau^{-1}))
\end{equation}
for some constants $\bar\tau_k$ and $\bar W_k$. Arguing as in the case of a
linear equation of state leads to the relation $\Psi (\tau,x)=W(\tau,x)+o(1)$
where $W$ is a solution of the equation $W_{\tau\tau}=\Delta W$. Using the
form of the leading order term in $\Omega$ as a function of $a$, it can be
shown that the leading order term in $\Omega$ as a function of $\tau$ is
given by $\tau^{-\frac{3(1+w)}{1+3w}}=\tau^{-\nu-\frac12}$ and
$\tau^{-\frac{3(1-\frac{\sigma}{2})}{(1-3\sigma)}}$ in the cases $w>0$ and
$w=0$ respectively. Note that the first of these reproduces the result in the
case of a linear equation of state. It does not seem to be possible to write
the expansion directly in terms of $\eta$ in such
a way that it gives more insight than the expression in terms of $\tau$.
The leading asymptotics of a zero-mean solution is obtained by taking a
solution of the flat space wave equation, distorting the time variable by
a diffeomorphism and multiplying by a power of the time coordinate which has
been explicitly computed.
Consider next the case $w=0$, $\sigma>\frac13$ (overdamped case).
The time coordinate $\tau$ tends to the finite limit $\tau_\infty$ as
$\eta\to\infty$.
Define $G=\tilde{\cal H}^{-2}\partial_\tau\tilde{\cal H}$. The function $G$
tends to the limit $\frac32(\sigma-\frac13)$ as $\tau\to\tau_\infty$. Let
\begin{equation}
E_6=\int \Phi_\tau^2+|\nabla\Phi|^2+\Lambda^2\tilde{\cal H}^2\Phi^2
\end{equation}
for a constant $\Lambda$ which remains to be chosen. For a constant $\lambda$
computing $\partial_\tau (a^{2\lambda}E_6)$ gives rise to a sum of expressions
containing $\Phi^2$, $\Phi_\tau^2$, $\Phi\Phi_\tau$ and $|\nabla\Phi|^2$.
Using the inequality
\begin{equation}
\tilde{\cal H}^2\Phi_\tau\Phi\le \frac{1}{2\delta}\tilde{\cal H}\Phi_\tau^2
+\frac{\delta}{2}\tilde{\cal H}^3\Phi^2
\end{equation}
leads to an inequality where the term involving $\Phi\Phi_\tau$ has been
eliminated. To obtain some control on the energy by means of the inequality
the coefficients $\Lambda$ and $\lambda$ should be chosen in such a way that
all terms on the right hand side are manifestly non-positive. The conditions
for this to happen are the inequalities $\lambda\le 0$,
\begin{equation}\label{ineq4}
\frac{1}{2\delta}|\Lambda^2-3Y|\le 3Z-\lambda
\end{equation}
and
\begin{equation}\label{ineq5}
\frac{\delta}{2}|\Lambda^2-3Y|\le -\Lambda^2(\lambda+G)
\end{equation}
Note that these inequalities imply in particular that $\lambda<0$.
Consider now the limit $\tau\to\tau_\infty$ where $Y$ behaves asymptotically
like $f_1\sigma\epsilon^\sigma$ and $Z\to 1-\frac12\sigma$. In this
limit the inequality (\ref{ineq5}) reduces to
$\lambda\le -\frac32 (\sigma-\frac13)-\frac{\delta}{2}$. Suppose therefore
that $\lambda<-\frac32 (\sigma-\frac13)$. Then by choosing
$\delta$ sufficiently small it can be arranged that the limiting inequality
is satisfied. In the limit the inequality (\ref{ineq4}) reduces to
$\frac{\Lambda^2}{2\delta}\le 3-\frac32\sigma-\lambda$. Choose
$\Lambda$ so that this inequality is satisfied strictly. With these choices
both inequalities are satisfied strictly in the limit. For $\tau$ sufficiently
close to $\tau_\infty$ the coefficients in (\ref{ineq4}) and (\ref{ineq5})
are as close as desired to their limiting values. Making them close enough
ensures that these two inequalities continue to be satisfied. It follows
that with these choices of the parameters $\partial_\tau (a^{2\lambda}E_6)$
is non-positive at late times. It can be concluded that $E_6=O(a^{-2\lambda})$.
This gives a limit on the growth rate of $E_6$ in terms of that of the scale
factor. As in previous cases corresponding estimates can be obtained for
derivatives and as a consequence pointwise estimates derived. It follows
that $\Phi=O(a^{-\lambda}\tilde{\cal H}^{-1})$. From what is known about the
background solution it follows that $\tilde{\cal H}$ is proportional to
$a^{\frac32 (\sigma-\frac13)}$. Thus if $\rho=-\frac32 (\sigma-\frac13)-\lambda$
then $\Phi=O(a^\rho)$. This power is positive but may be made as small as
desired by choosing $\lambda$ suitably. By the usual methods similar bounds
can be obtained for spatial derivatives of $\Phi$.
To get more information about the asymptotics as $\tau\to\tau_\infty$ it is
convenient to rewrite the equation in terms of the new time variable
$s=\tau_\infty-\tau$. The resulting equation is
\begin{equation}\label{MainEquations}
\Phi_{ss}-3Z\tilde{\cal H}\Phi_s
+3Y\tilde{\cal H}^2\Phi-\Delta\Phi=0
\end{equation}
As $s\to 0$ the coefficient $Z$ tends to $1-\frac{\sigma}2$ while
$\tilde {\cal H}$ and $Y\tilde {\cal H}^2$ are proportional in leading order
to $s^{-1}$ and $s^{\frac{2}{3(\sigma-1/3)}}$ respectively. The last exponent is
positive for any $\sigma>1/3$ so that the corresponding coefficient tends to
zero as $s\to 0$. Let $B$ be a positive solution of the equation
$\frac{dB}{ds}=-3Z\tilde {\cal H} B$. Then (\ref{MainEquations}) implies
the following integral equation:
\begin{equation}
\Phi_s=\frac{1}{B}\left(\bar\Phi_1+\int_0^s B(-3Y\tilde{\cal H}^2\Phi
+\Delta\Phi)\right)
\end{equation}
for a function $\bar\Phi_1(x)$. Here the fact has been used that the integral
occurring in this equation converges. This follows from the fact that in
leading order $B$ is proportional to $s^{-\frac{\sigma-2}{\sigma-1/3}}$ and the
bounds already obtained for $\Phi$ and its derivatives. When $B^{-1}$ diverges
faster than $s^{-1}$ in the limit $s\to 0$, which happens for
$\sigma<\frac76$, the known bounds on $\Phi$ imply that $\bar\Phi_1=0$. Hence
$\Phi$ is bounded in the limit $s\to 0$ in that case. When $\sigma>\frac76$ it
can also be concluded that $\Phi$ is bounded. For $\sigma=\frac76$ a
logarithmic divergence of $\Phi$ is not ruled out. In all cases the integral
equation can be used to obtain an asymptotic expansion for $\Phi$.
Schematically this expansion is of the form
\begin{equation}
\Phi (\eta,x)=\sum_i \Phi_i(x)\zeta_i (\eta)
\end{equation}
for some functions $\zeta_i$ with $\zeta_{i+1}(\eta)=O(\zeta_i(\eta))$ for
each $i$. This is very different from the expansion in the limit
$\eta\to\infty$ obtained when $w>0$ or $\sigma<\frac13$. In the present case,
scaling the solution by a suitable function of $\eta$ gives a result which
converges to a function of $x$ as $\eta\to\infty$. In the other case a
similar rescaling can lead to a profile which moves around the torus with
constant velocity. (In general it leads to a superposition of profiles of
this kind.) In the latter case there are waves which continue to propagate at
arbitrarily late times. In the case $\sigma>\frac13$ the waves \lq freeze\rq.
This is reminiscent of the late-time asymptotics of the gravitational field in
spacetimes with positive cosmological constant (cf. \cite{rendall04}).
To make this argument more concrete consider the special case where the
equation of state is $f(\epsilon)=f_1\epsilon^{\sigma+1}$ for some $\sigma$
between $\frac13$ and $\frac76$. Using the convergence of the integral it
follows immediately that $\Phi(s,x)=\Phi_0(x)+O(s^2)$ for some function
$\Phi_0$. Putting this information back into the integral equation gives
$\Phi(s,x)=\Phi_0 (x)+\frac{\sigma-1/3}{4-2\sigma}\Delta\Phi_0 (x)s^2
+\dots$.
Consider finally the case $w=0$, $\sigma=\frac13$ (critical case). Then
$\eta=\eta_0e^{\frac{\tau}{C_1}}+\ldots$ where $\eta_0$ is a constant and $C_1$
corresponds to the constant appearing in (\ref{taudef}). The arguments leading
to the estimate $\Phi=O(\tau^\rho)$ can be carried out as in the case
$\sigma>\frac13$. The only difference is that the limit $\tau\to\tau_\infty$
is replaced by $\tau\to\infty$. In the case $\sigma=\frac13$ the quantity
$\tilde{\cal H}$ tends to a constant for $\tau\to\infty$ and
$Y\tilde{\cal H}^2$ is proportional to $e^{-\frac{2}{C_1}\tau}$ in leading order.
A quantity $B$ can be introduced as before and an integral equation obtained.
In this case $B$ is a decaying exponential. Unfortunately it does not seem to
be possible to use this integral equation to refine the asymptotics in this
case and this matter will not be pursued further here.
\vskip 10pt
\noindent
{\it Acknowledgements}
\noindent
The authors gratefully acknowledge the hospitality and financial support
of the Mittag-Leffler Institute where part of this work was carried out.
\section{Introduction}
Thermodynamics and statistical mechanics are powerful and vastly general tools. But their usual formulation works only in the non-general-relativistic limit. Can they be extended to fully general relativistic systems?
The problem can be posed in physical terms: we do not know the position of each molecule of a gas, or the value of the electromagnetic field at each point in a hot cavity, as these fluctuate thermally, but we can give a statistical description of their properties. For the same reason, we do not know the \emph{exact} value of the gravitational field, which is to say the exact form of the spacetime geometry around us, since nothing forbids it from fluctuating like any other field to which it is coupled. Is there a theoretical tool for describing these fluctuations?
The problem should not be confused with thermodynamics and statistical mechanics on curved spacetime. The difference is the same as the distinction between the dynamics of matter on a given curved geometry versus the dynamics of geometry itself, or the dynamics of charged particles versus the dynamics of the electromagnetic field. Thermodynamics on curved spacetime is well understood (see the classic \cite{Tolman}) and statistical mechanics on curved spacetimes is an interesting domain (for a recent intriguing perspective see \cite{Smerlak:2011yc}). The problem is also distinct from ``stochastic gravity'' \cite{Hu:1989db,Hu:2003qn}, where metric fluctuations are generated by an Einstein-Langevin equation and related to semiclassical effects of quantum theory. Here, instead, the problem is just the thermal behavior of conventional gravity.\footnote{One may ask whether equilibrium can ever be reached, given the gravitational instabilities and long thermalization times. The question is legitimate, but it does not authorize us to evade the issue of what equilibrium means: for the question itself to even make sense, and because we are always concerned only with approximate equilibrium in nature, gravity or not.}
A number of puzzling relations between gravity and thermodynamics (or gravity, thermodynamics and quantum theory) have been extensively discussed in the literature \cite{Bekenstein:1974ax,Bekenstein:1973ur,Hawking2,Bardeen:1973gs,Wald:1994uq,Padmanabhan:2011zz,Padmanabhan:2003gd,Jacobson:2003vx,Jacobson:2003wv,Carlip:2012ff}. Among the most intriguing are probably Jacobson's celebrated derivation of the Einstein equations from the entropy-area relation \cite{Jacobson:1995ab,Jacobson:2012yt}, and Penrose Weil-curvature hypothesis \cite{Penrose:1979fk,Penrose:2006zz}. These are very suggestive, but perhaps their significance cannot be evaluated until we better understand standard general covariant thermodynamics.
One avenue for addressing the problem is perturbation theory. Another is restricting to asymptotic flatness and observables at infinity \cite{York:1986it,Brown:1992bq,Brown:1989fa}. Although useful in specific contexts, these roads are incomplete, because they miss the core issue: understanding if temperature has a meaning in the bulk of spacetime in a strong field regime. What do we mean when we say that near a cosmological singularity temperature is high? For the moment we do not have a definition of temperature that makes sense where the metric might fluctuate widely.
A step towards general covariant statistical mechanics was taken in \cite{Rovelli:1993ys,Rovelli:1993zz} and extended to quantum field theory in \cite{Connes:1994hv}. The notion introduced in these papers is \emph{thermal time}. This is meant to address the basic difficulty of general relativistic statistical mechanics: in a generally covariant theory, dynamics is given relationally rather than in terms of evolution in physical time\footnote{For a discussion of this crucial point see the Appendix and Chapter 3 of \cite{Rovelli:2004fk}, in particular Section 3.2.4.}; consequently the canonical hamiltonian vanishes, and without a hamiltonian $H$ it is difficult to even start doing statistical physics. The idea of thermal time is to reinterpret the relation between Gibbs states ($\rho\propto e^{-\beta H}$) and time flow (generated by $H$): instead of viewing the Gibbs states as determined by the time flow, observe that any generic state generates its own time flow. The time with respect to which a covariant state is in equilibrium can therefore be read out from the state itself. The root of the temporal structure is thus coded in the noncommutativity of the Poisson or quantum algebra \cite{albook,Connes:1994hv}.
Since any state is stationary with respect to its own flow, the problem left open is characterizing the states that are in \emph{physical} equilibrium. Here we consider a solution: equilibrium states are those whose thermal time is a flow in spacetime.\footnote{This problem is considered also in \cite{Martinetti:2002sz,Longo:2009mn,Buchholz:1998pv}.} These, we suggest, are the proper generalization of Gibbs states to the general covariant context.
This step allows temperature to be defined, following the intuition in \cite{Martinetti:2002sz,Rovelli:2010mv}: the temperature measured by a local clock is the ratio between thermal time and proper time. This yields immediately the Tolman-Ehrenfest law \cite{Tolman:1930zz, TolmanEhrenfest}, which correctly governs equilibrium temperature in gravity. Entropy and free energy can be defined and we obtain the full basis of generally covariant thermodynamics. The construction extends to the quantum theory.
The result is a tentative set of equations that generalize conventional thermodynamics and statistical mechanics to classical and quantum general-covariant systems.
\vspace{.2cm}
\centerline{---}
\vspace{.2cm}
We use units where the Boltzmann constant $k$ and the Planck constant $\hbar$ are set to unity. We have tried to keep the main text brief, confining background material to a detailed Appendix. The reader is urged to start from the Appendix unless the language and the background ideas of the text are already familiar. Equations in the paper are to be understood locally in phase space, namely on a chart where suitable regularity conditions are satisfied to avoid singular or degenerate behavior. A finer analysis will make sense after the basic conceptual structure is clear.
\section{General covariant Gibbs states}
\subsection{Thermal time}
Let $\cal E$ be a symplectic space, whose physical interpretation is the extended phase space of a general covariant theory (see the Appendix for notation and details). Let $\cal C$ be a submanifold of $\cal E$, representing the surface where the constraints of the theory (which code the full dynamics) are satisfied. The symplectic form $\sigma$ of $\cal E$ induces a presymplectic structure on $\cal C$, whose null directions can be integrated to define the gauge orbits $o$. The space $\Gamma$ of these gauge orbits, which is the physical phase space of the theory, is again a symplectic space, with symplectic form $\omega$. It is in 1-to-1 correspondence with the space of the solutions of the field equations, modulo gauges. A statistical state $\rho$ is a positive function on $\Gamma$ normalized with respect to the Liouville measure
\begin{equation}
\int_\Gamma \rho = 1.
\label{norm}
\end{equation}
The hamiltonian vector field $X$ defined by
\begin{equation}
\rho \ \omega(X) = d\rho
\label{X}
\end{equation}
generates a flow $\alpha_\tau$ in $\Gamma$ called the thermal flow; its generator
\begin{equation}
h=-\ln \rho
\label{equi}
\end{equation}
is called the thermal hamiltonian and the flow parameter $\tau$ is called thermal time\footnote{So defined, $\tau$ has the dimensions of an action, as it is conjugate to a dimensionless quantity. It can be made dimensionless by multiplying the r.h.s.\,of \eqref{X} and \eqref{equi} by $\hbar$. This is a bit artificial in the classical theory, but will be natural in the quantum theory.} \cite{Rovelli:1993ys}.
\subsection{Local thermal time}
Consider a general covariant theory that includes general relativity\footnote{We systematically disregard at this stage the difficulty of defining the Liouville measure that defines the integral \eqref{norm} in the case of field a theory. This is because the issue should properly be addressed in the quantum context, where I will be a bit more precise.}, and assume physical 3d space $\Sigma$ to be compact, with the $S_3$ topology. The space $\cal E$ can be coordinatized by the 3d Riemann metric tensor $q$ of $\Sigma$, the matter fields $\varphi$, and their respective conjugate momenta $(p,\pi)$; these quantities are fields on $\Sigma$, namely functions from $\Sigma$ to a target space $(q,\varphi,p,\pi):\Sigma\to V$. An orbit $o$ determines a solution of the field equations and therefore in particular a pseudo-riemannian manifold $(M,g)_{\!o}$. A point in $o$ determines a spacelike Cauchy surface $\phi:S_3\to (M,g)_{\!o}$, having the given induced metric $q$ and extrinsic curvature $p$. In particular, a foliation $\phi_\tau:S_3\to (M,g)_{\!o}, \tau\in R$ of $(M,g)_{\!o}$ corresponds to a line on the orbit.
Consider now a real function $\tilde T$ on $V$. This determines a local function (which we indicate with the same letter) on ${\cal E}$, namely a map $\tilde T:{\cal E}\times\Sigma\to R$ given by $\tilde T((q,p,\varphi,\pi),{\mathbf x} )=\tilde T(q({\mathbf x} ),p({\mathbf x} ),\varphi({\mathbf x} ),\pi({\mathbf x} )), {\mathbf x} \in\Sigma$. The coordinate $\tilde T({\mathbf x})$ on $\cal E$ plays the role of ``multi-fingered time'' in what follows. If the equation
\begin{equation}
\tilde T({\mathbf x} )=\tau, \hspace{6em} \tau\in R
\end{equation}
defines a foliation of $(M,g)_{\!o}$ (on a given region of phase space) we say that $\tilde T({\mathbf x} )$ is a ``local time''. The parameter of the foliation then defines a time coordinate $\tau:(M,g)_{\!o}\to R$ on spacetime. The simplest example is if the matter fields include a scalar field $\tilde T$ that grows monotonically in spacetime (for the given region of phase space): then the value of the field defines a time coordinate.
If there are canonical coordinates $\tilde T$ and $Q^i$ on ${\cal E}$, with respective momenta $P_{\tilde T}$ and $P_i$, such that
\begin{equation}
P_{\tilde T}({\mathbf x} )=-h(Q^i({\mathbf x} ),P_i({\mathbf x} ))
\label{form}
\end{equation}
on $\cal C$, then $\tilde T({\mathbf x} )$ defines a deparametrization of the theory in the following sense: the hamiltonian
\begin{equation}
h=\int d^3{\mathbf x} \ h(Q^i({\mathbf x} ),P_i({\mathbf x} ))
\label{h}
\end{equation}
evolves geometry and matter fields along the foliation $\phi_\tau$. Notice that $h$ is constant along the orbits it generates. We can therefore associate its value to each $o$ and obtain in this manner a function $h$ on $\Gamma$. (Weaker cases are also of interest, in particular the case relevant in cosmology where
\begin{equation}
P_{\tilde T}({\mathbf x} )=-f(\tilde T({\mathbf x} ))\ h(Q^i({\mathbf x} ),P_i({\mathbf x} ));
\label{form2}
\end{equation}
which describes a system with temperature varying in time: see \cite{Rovelli:1993zz}.)
Let us now come to the first main notion that we introduce in this paper. We say that a statistical state $\rho$ on $\Gamma$ is a ``Gibbs state'' if there is a local time $\tilde T({\mathbf x} )$ with a local hamiltonian $h$ of the form \eqref{h} (or \eqref{form2}) satisfying \eqref{equi} up to an additive constant\footnote{The constant has no effect on the dynamics and we set it to zero by redefining $P_{\tilde T}$.}.
If this is the case, the thermal time $\tau$ generated by $\rho$ is precisely the foliation time $\tau$, and therefore thermal time has a geometrical interpretation as a flow in spacetime.
\subsection{Nonrelativistic limit}
The definition above is a generalization of the conventional definition of Gibbs states. To see this, recall that for a Hamiltonian system with phase space $\Gamma_0$, canonical coordinates $(q,p)$ and Hamiltonian $H=H(q,p)$, a Gibbs state is a state of the form $\rho_\beta=Z^{-1}(\beta)\, e^{-\beta H}$ with $Z(\beta)\equiv\int_{\Gamma_0}e^{-\beta H}$. The general covariant formulation of this system is defined on the extended phase space ${\cal E}$ with canonical coordinates $(t,p_t,q,p)$ and the constraint $C=p_t+H(q,p)$. The constraint surface is coordinatized by $(t,q,p)$ and the orbits are given by $(t,q(t),p(t))$ where $q(t)$ and $p(t)$ are the solutions of the Hamilton equations. The space $\Gamma$ of these orbits is isomorphic to $\Gamma_0$ (but not canonically so, until a $t=t_0$ is chosen) via $(q,p)=(q(t_0),p(t_0))$.
A time function on ${\cal E}$ is provided by $\tau=t/\beta$, whose conjugate momentum is $p_\tau=\beta p_t$, which satisfies the requirement that the constraint can be expressed in the form \eqref{form}, namely $p_\tau=-h(q,p)$, where $h=\beta H$. The hamiltonian, being constant on each orbit, is well defined on $\Gamma$, therefore $\rho_\beta$ is a function on $\Gamma$, namely it is a statistical state in the covariant sense. It is immediate to see that it satisfies \eqref{equi}. In other words, the Gibbs state picks out the coordinate $t$ from $\cal E$, where this was confounded with the other variables.
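The identification of the two flows can also be read off directly from the
generators: for any observable $A$,
\begin{equation}
\frac{dA}{d\tau}=\{A,h\}=\beta\{A,H\}=\beta\,\frac{dA}{dt},
\end{equation}
so an interval $\Delta\tau$ of thermal time corresponds to an interval
$\Delta t=\beta\,\Delta\tau$ of physical time.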
Observe now that the temperature $T\equiv\frac1{\beta}$ is equal to the ratio
\begin{equation}
T=\frac\tau{t}
\label{ttt}
\end{equation}
between the thermal time $\tau$, namely the parameter of the evolution generated by the logarithm of the Gibbs state, and the physical time $t$. This characterization of temperature can be extended to general covariant systems.
\subsection{Mean values, mean geometry\\ and local temperature}
Consider a family $\cal A$ of functions $A$ on $\Gamma$. Let the mean value of $A$ on the state $\rho$ be
\begin{equation}
\bar A = \int_\Gamma A\,\rho.
\end{equation}
The thermal time flow $\alpha_\tau$ acts on these functions by $A(\tau)(s)=\alpha_\tau(A)(s)=A(\alpha_{-\tau}(s)), s\in \Gamma$, which satisfies $dA/d\tau=\{A,h\}$.
Since $\rho$ is clearly invariant under the flow, so are the mean values, but
\begin{equation}
f_{AB}(\tau) = \int_\Gamma A(\tau)B\,\rho.
\end{equation}
is in general a non trivial function and describes temporal correlations in the state. Define the \emph{mean geometry} $\bar g$ (if it exists) of a state $\rho$ for an observable family $\cal A$ as a spacetime $(M,\bar g)$ with a foliation $\phi_\tau$ such that
\begin{equation}
\bar A(\tau)=A(\phi^{-1}_\tau(\bar g)).
\end{equation}
Since $\bar A(\tau)$ is $\tau$ independent, it follows that $(M,\bar g)$ is stationary under the flow defined by $\phi_\tau$. Therefore $\xi=\frac{\partial}{\partial\tau}$ is a timelike Killing field on $(M,\bar g)$. The norm of $\xi$ is $ds/d\tau$ namely the ratio between the local flow of proper time and thermal time. The equivalence principle therefore compels us to define the local temperature by the local version of \eqref{ttt}, namely
\begin{equation}
\hspace{6em} T(x) = |\xi(x)|^{-1}, \hspace{4em} x\in M
\end{equation}
from which the Tolman-Ehrenfest law \cite{Tolman:1930zz,TolmanEhrenfest}
\begin{equation}
T(x)|\xi(x)|=constant
\label{tolman}
\end{equation}
that governs the spacetime variation of temperature at equilibrium in gravity, follows immediately.\footnote{A suggestion in this direction was in \cite{Martinetti:2002sz}. The intriguing relation between \eqref{ttt} and the Tolman law was pointed out in \cite{Rovelli:2010mv}.} In stationary coordinates $(\tau, {\mathbf x} )$, the temperature is the inverse of the Lapse function, since $ds^2=N^2 d\tau^2$.
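As a simple illustration of \eqref{tolman} (a standard one, recalled here
only for orientation): along the orbits of the Killing field one has
$ds=N({\mathbf x})\,d\tau$, so that
\begin{equation}
T({\mathbf x})=\frac{T_0}{N({\mathbf x})}
\end{equation}
for a constant $T_0$: the equilibrium temperature is higher where the lapse
is smaller, that is, deeper in the gravitational potential. For instance,
outside a spherical star $N=(1-2GM/r)^{1/2}$ and the equilibrium temperature
measured by static observers grows as the star is approached.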
\subsection{Partition function \\ and global temperature}
If $\rho$ is a Gibbs state, we can obtain another Gibbs state by exponentiating it with a constant $\beta$ and multiplying it by a $\beta$ dependent factor that preserves the normalization: $\rho_\beta=Z^{-1}(\beta)\, \rho^\beta$. The effect of this exponentiation is to scale the thermal time globally, and therefore to scale the temperature globally. Therefore the global temperature is defined with respect to a reference Gibbs state. Having a one-parameter family of Gibbs states allows us to define the partition function
\begin{equation}
Z(\beta) = \int_\Gamma \rho^\beta.
\end{equation}
The entropy of the state can be obtained as usual from
\begin{equation}
S(\beta) = - \int_\Gamma \rho_\beta \ln\rho_\beta
\end{equation}
and from this we can derive in a few steps the standard thermodynamical relation
\begin{eqnarray}
S &=&- \int_\Gamma \rho_\beta (\beta \ln \rho-\ln Z)\nonumber \\
&=&\beta E+\ln Z
\end{eqnarray}
where $E$ is the mean value of the energy $h\!=\!-\ln\rho$. The global (inverse) temperature $\beta$ of the state should not be confused with the local temperature $T(x)$, which is space-dependent. Also, the local temperature is defined directly by a single statistical state (if a mean geometry exists), while the global temperature is only defined relative to another Gibbs state taken as reference.
In the following section, we extend this structure to the quantum theory.
\subsection{Quantum theory}
Let $\cal K$ be the unconstrained Hilbert space of a general covariant theory and $\cal H$ its physical Hilbert space (the ``space of solutions of the Wheeler--DeWitt equation"). General covariant quantum mechanics is well defined by these structures (see the Appendix and, in more detail, Section 5.2 of \cite{Rovelli:2004fk}).
A quantum statistical state is a trace-class operator $\rho$ on $\cal H$ such that ${\rm tr}\rho=1$. Its entropy is $S= -{\rm tr}[\rho\ln\rho]$. Let $\cal A$ be an observable algebra formed by self-adjoint operators $A$ on $\cal H$. Then $\rho$ defines a state on this algebra by
\begin{equation}
\rho(A)={\rm tr}[A\rho]
\end{equation}
and\footnote{Taking $\cal A$ to be a von Neumann algebra, namely a ${}^*$-algebra of bounded operators closed in the weak operator topology and including the identity.} the Tomita theorem provides a flow $\alpha_\tau:{\cal A}\to{\cal A}$ on the observable algebra. This is the thermal-time flow in the quantum theory \cite{Connes:1994hv}. If there is a \emph{local} hamiltonian $h$ and a (now dimensionless) conjugate ``time" observable $\tau$ that in the classical theory reduces to the quantities defined in the previous section, and generates an evolution
\begin{equation}
\alpha_\tau(A)=e^{\frac{i}{\hbar} h \tau}A\, e^{-\frac{i}{\hbar} h \tau},
\end{equation}
then we say that $\rho$ is a Gibbs state.\footnote{Since space is compact, the usual difficulty of hamiltonian quantum field theory with thermal states which historically gave rise to algebraic quantum field theory, is not there, since energy does not diverge on thermal states.} The Tomita flow of $\rho$ satisfies the KMS condition (see, for instance, \cite{Haag:1992hx})
\begin{equation}
f_{AB}(\tau)=f_{BA}(-\tau+2\pi i) \label{KMS}
\end{equation}
for any two observables $A$ and $B$, where
\begin{equation}
f_{AB}(\tau)= \rho(\alpha_\tau(A)B).
\end{equation}
A thermal state $\rho_\beta=Z^{-1}(\beta)\,\rho^{\beta/2\pi}$ satisfies the KMS condition
\begin{equation}
f_{AB}(\tau)=f_{BA}(-\tau+i\beta) \label{KMS2}
\end{equation}
with respect to the flow generated by $\rho$.
The notion of mean geometry can be extended to the quantum theory\footnote{The idea of mean geometry is implicit in contexts where covariant quantum states of gravity are associated to a classical geometry \cite{Ashtekar:1992tm,Iwasaki:1992qy,Sahlmann:2001nv,Bianchi:2009ky,Conrady:2008ea,Livine:2006it}.}
by defining $(M,\bar g, \phi_\tau)$ (if it exists) as the mean geometry of the state $\rho$ with respect to a given observable algebra $\cal A$ if
\begin{equation}
\bar A(\tau)\equiv \rho(\alpha_\tau(A)) =A(\phi^{-1}_\tau(\bar g)).
\end{equation}
The local temperature $T(x)$ is defined by the norm of the Killing field of the mean geometry, and is therefore a semiclassical concept. Restoring physical units, local temperature is given on the mean geometry by
\begin{equation}
T(x) = \frac{\hbar}{k}\ \frac{d\tau }{ds} ,
\label{temperature}
\end{equation}
where $\hbar$ is the Planck constant and $k$ is the Boltzmann constant.
Notice that \eqref{temperature} gives the Unruh temperature \cite{Unruh:1976db} of a quantum field theory on Minkowski space, if $ds$ is the proper time along the accelerated observer trajectory and $\tau$ is the dimensionless parameter of the Bisognano--Wichmann flow $U(\tau)=e^{i\tau K/2\pi}$, where $K$ is the boost generator, which is the Tomita flow of the vacuum state on the Rindler-wedge observables \cite{Bisognano:1975fk,Haag:1992hx}.
This suggests that the Unruh effect should affect the local temperature of an observer accelerated on a mean geometry, also in the context of the full generally-covariant statistical mechanics of the gravitational field. If a mean geometry has a Killing horizon, where the norm of $\xi$ becomes singular, then the local temperature \eqref{temperature} diverges on the horizon. The divergence of the temperature is a high-energy, namely a short-distance phenomenon, therefore we can consider it in a region of spacetime small with respect to the local curvature of the mean geometry, namely as a locally flat-space phenomenon. As such, it must be determined by the Unruh temperature. An explicit example of a statistical state where this happens has been discussed in \cite{Bianchi:2012ui,BianchiStoccolma}.
An Unruh temperature in the vicinity of the horizon of a black hole is red-shifted by the Tolman relation \eqref{tolman} precisely to Hawking's black hole temperature at infinity.
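To see this quantitatively, recall the standard near-horizon estimates (these use only flat-space Unruh physics and the Tolman law, not any further input from the formalism developed here). At proper distance $l$ from a Killing horizon with surface gravity $\kappa$, a stationary observer has acceleration $a\simeq 1/l$ and the norm of the Killing field, normalized to unity at infinity, is $|\xi|\simeq\kappa l$. The locally measured Unruh temperature and its Tolman redshift to infinity are then
\begin{equation}
T(l)=\frac{\hbar}{2\pi k\, l}\,,
\hspace{2em}
T_\infty=T(l)\,|\xi(l)|=\frac{\hbar\kappa}{2\pi k}\,,
\end{equation}
which is Hawking's temperature.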
\section{Conclusion}
We have extended the machinery of statistical thermodynamics to the general covariant context. The new concepts with respect to conventional statistical mechanics are:
\begin{enumerate}
\item The statistical state is defined on the space of the solutions of the field equations.
\item Each statistical state defines a preferred time flow, called thermal time.
\item A statistical state whose thermal time flow has a geometrical interpretation, in the sense that it can be reinterpreted as evolution with respect to a local internal time, defines a generalized Gibbs state, with properties similar to the conventional equilibrium states.
\item For such states, it is possible to define the relative global temperature between two states.
\item A mean geometry is a stationary classical geometry with a timelike Killing field and a time foliation, such that the values of a suitable family of observables reproduce the statistical expectation values of these observables in the statistical ensemble.
\item If a mean geometry exists, a local temperature is defined. Local temperature is the ratio between proper time and thermal time on the mean geometry:
\begin{equation}
T(x) = \frac{\hbar}{k}\ \frac{d\tau }{ds} .
\label{temperature2}
\end{equation}
It yields immediately the Tolman law.
\end{enumerate}
This construction reduces to conventional thermodynamics for conventional Hamiltonian systems rewritten in a parametrized language.
Examples, the extension of the formalism to the boundary formalism \cite{Oeckl:2003vu,Rovelli:2002ef,Rovelli:2003ev} (the natural language for quantum field theory in the generally covariant context), and applications to horizon thermodynamics, in particular to the local framework defined in \cite{Frodden:2011eb} and to the derivation of black hole entropy in loop quantum gravity in \cite{Bianchi:2012ui}, will be considered elsewhere.
\centerline{---}
I thank Alejandro Perez for the crucial suggestion to focus on the locality of the hamiltonian, Ed Wilson-Ewing for pointing out the relevance for cosmology of the weaker notion of equilibrium, captured by \eqref{form2}, Eugenio Bianchi for numerous discussions on this subject, Simone Speziale and Pierre Martinetti for several helpful comments.
\vspace{1cm}
\section*{Appendix}
\subsection{Classical theory}
\subsubsection{Mechanics}
A conventional hamiltonian system is defined by a $2N$ dimensional phase space $\Gamma_0$ and a hamiltonian $H$. The phase space is a symplectic space, namely a manifold equipped with a non-singular closed symplectic two-form $\omega$. Locally, we can always choose coordinates $(q^i,p_i)$ on $\Gamma_0$ such that
\begin{equation}
\omega=dq^i\wedge dp_i
\end{equation}
(summation understood). Having a symplectic two-form is the same as having Poisson brackets. $H$ is a scalar function on $\Gamma_0$. Every function $f$ on a symplectic space determines a vector field $X_f$ on the space, defined by
\begin{equation}
\omega(X_f)=-df,
\end{equation}
where the l.h.s.\ is the action of a differential two-form on a vector, which gives a one-form, and the r.h.s.\ is the differential of $f$. In turn, a vector field defines a flow $\alpha_t:\Gamma_0\to\Gamma_0, t\in R$, namely a continuous one-parameter group of automorphisms of $\Gamma_0$ into itself, related to $X_f$ by
\begin{equation}
\left.\frac{d\alpha_t}{dt}\right|_{t=0}=X_f.
\end{equation}
The Poisson bracket between two functions $A$ and $B$ on $\Gamma_0$ is defined by
\begin{equation}
\{A,B\}= X_B(A)=-X_A(B).
\end{equation}
The flow of the hamiltonian is the time flow, namely the evolution in time of each point of $\Gamma_0$. Explicitly, the hamiltonian vector field of $H$ is easily seen to be
\begin{equation}
X=\frac{\partial H}{\partial p_i} \frac{\partial}{\partial q^i} -\frac{\partial H}{\partial q^i} \frac{\partial}{\partial p_i},
\end{equation}
so that the time flow is determined by the Hamilton equations
\begin{equation}
\frac{dq^i(t)}{dt}=\frac{\partial H}{\partial p_i} , \hspace{2em} \frac{dp_i(t)}{dt}=-\frac{\partial H}{\partial q^i}.
\end{equation}
which show that this geometric construction is equivalent to hamiltonian mechanics. An observable $A$ is a real function on $\Gamma_0$. The time evolution of an observable is defined by $A(t)=A\circ \alpha_t$ and satisfies
\begin{equation}
\frac{dA(t)}{dt}=\{A,H\}.
\end{equation}
Let $\Gamma$ be the space of the solutions of the equations of motion $(q^i(t),p_i(t))$. This is a finite dimensional space which is isomorphic to $\Gamma_0$, but not canonically isomorphic. A specific isomorphism is obtained by choosing a value $t_0$ for the time parameter $t$. Then the isomorphism between $\Gamma$ and $\Gamma_0$ is given by $(q^i,p_i)=(q^i(t_0),p_i(t_0))$. Thanks to this isomorphism, $\Gamma$ has a symplectic structure as well (independent from $t_0$).
For instance, the solutions of the dynamics of a harmonic oscillator have the form $(q(t)=A\sin(\omega t+\phi), \ \ p(t)=m\omega A\cos(\omega t+\phi))$. Therefore $\Gamma$ is coordinatized by $A$ and $\phi$. A map between $\Gamma$ and $\Gamma_0$ is obtained by choosing for instance $t=0$, which gives $(q=A\sin\phi,\ \ p=m\omega A\cos\phi)$, and therefore the symplectic form on $\Gamma$ is
\begin{equation}
\sigma=-m\omega\ A\, dA\wedge d\phi.
\end{equation}
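For completeness, the intermediate step of this computation is
\begin{eqnarray}
dq\wedge dp &=& (\sin\phi\, dA + A\cos\phi\, d\phi)\wedge m\omega\,(\cos\phi\, dA - A\sin\phi\, d\phi)\nonumber\\
&=& -m\omega A\,(\sin^2\phi+\cos^2\phi)\, dA\wedge d\phi = -m\omega A\, dA\wedge d\phi\,.
\end{eqnarray}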
An equivalent formulation of the dynamics, called the presymplectic formulation, can be given on the $(2N+1)$-dimensional space ${\cal C}=\Gamma_0\times R$, with local coordinates $(q^i,p_i, t)$ equipped with the two-form
\begin{equation}
\omega'=dq^i\wedge dp_i-dH(q^i,p_i)\wedge dt.
\end{equation}
This two-form is degenerate, namely it has a null direction (since the space is odd dimensional). That is, there exists a vector field $X'$, determined up to scaling, such that
\begin{equation}
\omega'(X')=0.
\end{equation}
It is immediate to see that this vector field is proportional to
\begin{equation}
X'= \frac{\partial}{\partial t} + X
\end{equation}
and its integral lines (called the orbits of $\omega'$) are precisely (the graphs of the) physical motions $(t,q^i(t),p_i(t))$. Let $\Gamma$ be the space of these orbits and $\pi$ the projection that sends each point of $\cal C$ to the orbit to which it belongs. $\Gamma$ carries a symplectic two-form $\sigma$ uniquely characterized by the fact that its pull back to $\cal C$ by $\pi$ is $\omega'$. (A pull back is degenerate in the directions of the orbits.) The symplectic space $(\Gamma,\sigma)$ is clearly the same as the one constructed above.
The equivalence between the conventional hamiltonian and the presymplectic formulation is almost complete. The reason for the ``almost" is subtle, interesting, and at the core of the problem discussed in this paper. Given a hamiltonian system $(\Gamma_0, \omega, H)$, we can immediately construct its corresponding presymplectic formulation $({\cal C}, \omega')$. But the opposite is not true, since to do so we need to know which one of the variables on $\cal C$ is the time variable. In other words, the presymplectic formulation leads to the same relations between the variables $(t,q^i,p_i)$ as the hamiltonian one, but without specifying which of these variables is to be recognized as the time variable. The difference is the same as the difference between giving a function $y(x)$ or its parametrized form $(y(s),x(s))$: in the first case $x$ is singled out as the independent variable, in the second case it is not.
\subsubsection{General covariant mechanics}
Systems like general relativity, or a single free relativistic particle, are defined in the covariant language by a Lagrangian that leads to a vanishing canonical Hamiltonian. Equivalently, they are defined by equations of motion that are gauge invariant under a re-parametrization of the evolution coordinate. The Legendre transform of the Lagrangian of these systems defines a phase space with constraints, and the dynamics is coded in the constraints. Let $\cal E$ denote this phase space (to distinguish it from the phase space of a conventional system, since it has a different physical interpretation) and let $\cal C$ denote the subspace of $\cal E$ where the constraints vanish. $\cal E$ is a symplectic space with symplectic form $\omega$. Its restriction to $\cal C$ is a presymplectic two-form $\omega'$ (the pull back of $\omega$ under the embedding $i$ of $\cal C$ in $\cal E$), which is degenerate in the directions of the hamiltonian vector fields of the constraints themselves. The space of the orbits $\Gamma$ is again a symplectic space carrying a symplectic two-form $\sigma$, uniquely characterized by
\begin{equation}
i^*\omega = \omega' = \pi^* \sigma
\end{equation}
where
\begin{equation}
{\cal E} \stackrel{i}{\longleftarrow} {\cal C} \stackrel{\pi}\longrightarrow \Gamma.
\end{equation}
The presymplectic constraint surface $({\cal C},\omega')$ defines the dynamics precisely as in the presymplectic formulation of the hamiltonian dynamics described above. Notice that it defines all the physical correlations among dynamical variables, without specifying one of these as the independent time variable. The distinctive feature of general covariant systems is therefore to define dynamics as a ``democratic" correlation between variables instead of as evolution with respect to a singled out independent variable.
A simple example is provided by the dynamics of a free relativistic particle. The extended phase space $\cal E$ is 8-dimensional, with coordinates $(x^\mu,p_\mu)$ and $\omega=dx^\mu\wedge dp_\mu$. The constraint surface $\cal C$ is given by $p^2=m^2$. The orbits are given by
\begin{equation}
x^\mu(\tau)=\frac{p^\mu}m \tau + x_o^\mu.
\end{equation}
The space of these orbits is six dimensional. Each orbit determines a correlation between observables. For instance, it determines the relation between different coordinates on Minkowski space. Notice that all this is Lorentz invariant. Notice also that this canonical formulation never specifies one particular Lorentz time as the preferred one. To obtain a conventional Hamiltonian formulation we have instead to select a Lorentz frame and choose one variable, say $x^0$ (as opposed to $\tilde x^0=\Lambda^0_\mu x^\mu$ where $\Lambda$ is a Lorentz matrix), as the time variable. Then this determines a Hamiltonian $H=\sqrt{\vec p^2+m^2}$, which generates the same motions, but in a non-manifestly Lorentz invariant language.
Notice that any Gibbs state
\begin{equation}
\rho\sim e^{-\beta H}=e^{-\beta \sqrt{\vec p^2+m^2}}
\end{equation}
breaks Lorentz invariance and selects a preferred Lorentz time. Physically, this is the specific Lorentz-time flow with respect to which a given gas of relativistic particles is in equilibrium.
Notice that it is somewhat misleading to state that the full dynamics of a generally covariant system is entirely captured by the physical phase space $\Gamma$ and all functions on $\Gamma$, because this would be like saying that the dynamics of a harmonic oscillator is captured by writing down the phase space coordinatized by $A$ and $\phi$, and all functions of $A$ and $\phi$. If we do so, we lose track of the fact that the harmonic oscillator is characterized by the oscillating variable $q(t)$! The dynamics of a generally covariant system is not just described by $\Gamma$ and the family of all functions on $\Gamma$. We also need to give explicitly the embedding of each orbit in $\cal C$ or, equivalently, in $\cal E$. In the case of the relativistic particle, for instance, the dynamics is not just the specification that the physical space is six dimensional: it is also the information that each point of this space determines a timelike line in Minkowski space, namely a correlation between quantities on $\cal C$. In this context, such quantities are called ``partial observables" \cite{Rovelli:2001bz}.
\subsubsection{Statistical mechanics}
The symplectic form defines a volume-form on $\Gamma_0$, obtained by taking $N$ times the wedge product of $\omega$ with itself. This defines an integral on $\Gamma_0$, which we indicate simply without measure notation. A statistical state is a real non-negative function $\rho$ on $\Gamma_0$ normalized as
\begin{equation}
\int \rho = 1.
\end{equation}
Its entropy is defined by the Shannon expression
\begin{equation}
S=-\int \rho \ln\rho.
\end{equation}
The mean value of an observable $A$ in the state $\rho$ is defined by
\begin{equation}
\bar A = \int A \rho.
\end{equation}
The mean value of $A(t)$ can be equally obtained as the mean value of $A$ on the state $\rho(t)$ which satisfies
\begin{equation}
\frac{d\rho(t)}{dt} = \{\rho,H\}.
\end{equation}
An equilibrium Gibbs state is a particular statistical state of the form
\begin{equation}
\rho \propto e^{-\beta H}
\label{gs}
\end{equation}
where $\beta=1/kT$ is a positive real number and $T$ is the temperature. It is immediately clear that a Gibbs state is time independent and the mean value of all observables in a Gibbs state are time independent. Nontrivial time correlations can nevertheless be defined from quantities like
\begin{equation}
f_{AB}(t) = \int A(t)B(0) \rho.
\end{equation}
The proportionality factor in \eqref{gs} is determined by the normalization condition:
\begin{equation}
\rho =\frac{1}{Z(\beta)} e^{-\beta H}
\end{equation}
where
\begin{equation}
Z(\beta)=\int e^{-\beta H} = e^{-\beta F}
\end{equation}
is called the partition function, and $F$ is called the free energy. It follows immediately from the definitions and a short calculation that the mean value $E$ of the energy is given by
\begin{equation}
E=-\frac{d \ln Z}{d\beta}
\end{equation}
and
\begin{equation}
S=\beta E - \beta F.
\label{thermorel}
\end{equation}
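The first of these relations follows in one line from the definition of $Z$, and the second by inserting $\rho_\beta=Z^{-1}e^{-\beta H}$ into the entropy:
\begin{equation}
-\frac{d\ln Z}{d\beta}=\frac1Z\int H\,e^{-\beta H}=E,
\hspace{2em}
S=-\int\rho_\beta\,(-\beta H-\ln Z)=\beta E+\ln Z=\beta E-\beta F\,.
\end{equation}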
These are the basic thermodynamical relations for the Gibbs states.
\subsubsection{General Covariant Statistical Mechanics}
Here we condense the results of this paper. A statistical state is a normalized positive function on the physical state space. It determines a thermal flow with generator $X$ defined by
\begin{equation}
\rho\ \omega(X)=d\rho.
\end{equation}
The generator of this flow is the (state dependent) thermal hamiltonian $h=-\ln\rho$ and the thermal time $\tau$ is the parameter of this flow. For a conventional Gibbs state in a non-generally-covariant system, temperature is the ratio between thermal time and geometrical time.
In a gravitational field theory, if $h$ is local, then it defines a flow in spacetime, and a preferred foliation of the mean geometry. The local temperature, which satisfies the Tolman relation, is the local ratio between the spacetime flow and proper time.
\subsection{Quantum theory}
\subsubsection{Quantum Mechanics}
A conventional quantum system is defined by a Hilbert space $\cal H$ and a family $\cal A$ of observables $A$, self-adjoint operators on $\cal H$, which in particular includes a Hamiltonian $H$. The Hamiltonian generates a unitary flow on $\cal H$ by the one-parameter group of unitary transformations $U(t)=e^{-iHt}$ in the Schr\"odinger picture and a flow on the observables by $A(t)=U(-t)AU(t)$ in the Heisenberg picture. In the Schr\"odinger picture, the Hilbert space $\cal H$ corresponds to the phase space $\Gamma_0$ at a given time; while in the Heisenberg picture the Hilbert space $\cal H$ corresponds to the phase space $\Gamma$ of the solutions of the equations of motion. The expectation value of an observable in the state $\psi\in{\cal H}$ is given by $\bar A=\langle \psi| A |\psi\rangle$, or equivalently by
\begin{equation}
\bar A={\rm tr}[A\rho]
\label{meanv}
\end{equation}
where
\begin{equation}
\rho=|\psi\rangle\langle\psi|.
\label{pure}
\end{equation}
The eigenvalues of $A$ determine the quantization, namely the possible outcomes of a measurement of $A$, and transition probabilities between such measurement outcomes are determined by the matrix elements of $U(t)$ in the observable's eigenbasis.
\subsubsection{Quantum Statistical Mechanics}
A statistical state $\rho$ is a trace-class operator on $\cal H$ normalized by
\begin{equation}
{\rm tr}[\rho]=1.
\end{equation}
The mean value of an observable in such a state is still given by \eqref{meanv}. The states of the form \eqref{pure} satisfy $\rho^2=\rho$, are called ``pure", and their conventional physical interpretation is that the probabilistic nature of the uncertainty in the predictions derived from them is not due to our ignorance, but to irreducible intrinsic quantum uncertainty. The von Neumann entropy of the state $\rho$
\begin{equation}
S=-{\rm tr}[\rho\ln\rho]
\end{equation}
vanishes on pure states. A Gibbs state is a state of the form $\rho\propto e^{-\beta H}$. The partition function is the inverse of its normalization, namely
\begin{equation}
Z(\beta)={\rm tr}[e^{-\beta H}].
\end{equation}
Again the basic thermodynamical relation \eqref{thermorel} follows in a few steps from these definitions.
\subsubsection{General Covariant Quantum Mechanics}
A generally covariant quantum system is defined by an extended Hilbert space $\cal K$, a (possibly generalized\footnote{Namely a subspace of $\overline{\cal S}$ in a Gelfand triple $\overline{\cal S}\supset{\cal K}\supset{\cal S}$.}) subspace $\cal H$, the ``space of solutions of the Wheeler--DeWitt equation", and a family of observables $A,B$ on $\cal K$ called ``partial observables".
The eigenvalues of the partial observables determine the quantization, namely the possible outcomes of a measurement \cite{Rovelli:1992vv,Rovelli:1994ge,Rovelli:2007ep}, and transition probabilities between such measurements' outcomes are determined by the matrix elements
\begin{equation}
\langle q |P| q' \rangle
\end{equation}
of the (generalized) projection
\begin{equation}
P:{\cal K}\to{\cal H},
\end{equation}
in the observables' eigenbases $|q\rangle$ in $\cal K$ (see \cite{Rovelli:2004fk}, Chapter 5 and \cite{Rovelli:2011mf}). A specific example of a definition of these transition amplitudes, finite to all orders, is provided by covariant loop quantum gravity \cite{Rovelli:2011eq}. The quantum mechanics of generally covariant systems can therefore be well defined without the need of specifying a time variable.
\subsubsection{General Covariant Statistical Quantum Mechanics}
The thermal-time flow of a generally covariant statistical quantum state $\rho$ is defined by its Tomita flow. This can be constructed as follows. The expectation values of a statistical state $\rho$ on the algebra $\cal A$ of the gauge-invariant observables $a$ define a state on this algebra. Assuming $\cal A$ to be a $C^*$-algebra, the GNS construction defines a Hilbert space $\cal H$ where observables are represented by operators and $\rho$ is represented by a vector $\psi$ (even though $\rho$ is a statistical state). Let then $S$ be the operator defined by $Sa\psi=a^*\psi$. It is always possible to write $S$ in the form $S=Je^{h/2}$, where $J$ is antiunitary and $e^{h}$ is self-adjoint. The Tomita flow on the algebra is then defined by
\begin{equation}
\alpha_t a = e^{-ith} a\; e^{ith}
\end{equation}
and the Tomita theorem states that this is a one-parameter group of automorphisms of the algebra.
To understand what is going on, start from a normal quantum field theory. Pure states are vectors in Fock space. Mixed states are density matrices, namely trace class operators $\rho$ on Fock space. These form a Hilbert space, which we can call $\cal H$: notice that a statistical state $\rho$ is now represented by a \emph{vector} in this Hilbert space, for which a convenient notation is $|\rho\rangle$. If $a$ is an observable on Fock space, we can represent it on $\cal H$ as $a|\rho\rangle=|a\rho\rangle$, which is again trace class. If $\rho$ is a Gibbs state for a Hamiltonian $H$ at inverse temperature $\beta$, namely $\rho=e^{-\beta H}$, then a straightforward calculation shows that $J| k \rangle= | k^* \rangle$ and $e^{h}|k\rangle=|e^{-\beta H}k\, e^{\beta H}\rangle$ satisfy the definition of $S$. Therefore the Tomita flow of the Gibbs state is precisely the time flow scaled by the temperature: $ \alpha_t a = e^{-it(\beta H)} a\, e^{it(\beta H)}$. In other words, the Tomita relation between a state and a flow is the quantum field theoretical version of the classical relation between a state on phase space and its Hamiltonian flow. The operator $J$ flips creation and annihilation operators of the quanta over the thermal state, and therefore codes the split between positive and negative frequencies. For a more detailed discussion, see \cite{Connes:1994hv}. Time flow is fully coded into the statistical state. The local relation between thermal time $d\tau$, proper time $ds$ and temperature $T$ is given by equation \eqref{temperature2}.
\vspace{2mm}
\centerline{---}
\vspace{2mm}
Thanks to Hal Haggard for a careful reading of the manuscript and useful comments.
\newpage
In recent decades, there have been significant contributions to the automation of program analysis tasks. Neural networks have frequently been used for detection or generation tasks in programming language processing, similar to natural language processing. They have been employed in detecting software defects, as well as in the prediction of errors using software metrics \cite{jayanthi2018software}.
\newline
\\
There have been some state-of-the-art results achieved by generative adversarial networks and neural machine translation systems on language translation tasks in Natural Language Processing (NLP), which led to the deployment of these systems on error correction tasks in programming source code. Neural Machine Translation (NMT) in \cite{ahmed2018compilation} builds and trains a single neural network model using a labelled set of paired examples and translates directly from the input. Such systems are end-to-end in the sense that they learn to map source text directly to the corresponding target text, using an encoder-decoder approach (usually made up of Recurrent Neural Network (RNN) units) in which the encoder consumes sequences of source text and the decoder emits sequences of target text. Generative Adversarial Networks (GAN) train a neural network model to predict security vulnerabilities in \cite{harer2018learning} without requiring a paired set of a source domain containing buggy code and a target domain consisting of bug-free code, i.e., without a bijective mapping between the two. Generally, with such paired or unpaired example sets, the neural network model is trained on the set of positive examples: the mapping takes place between target sequences, made up of positive examples that are bug-free and compile without errors, and source sequences consisting of negative examples that contain bugs which make compilation fail. In sequence-to-sequence learning systems such as NMT, it is straightforward to train the model on a labelled set of paired examples. But consider the scenario where there are no paired examples at all; to put it in a concrete context, consider a real-world setting where common syntax or semantic errors are committed by freshers or novice programmers working in a software company, or in student submissions of programming assignments in coding competitions, and there are no positive, bug-free reference examples. Then learning becomes difficult for neural network approaches and neural machine translation models.
\\
\\
In our model, there are no positive examples to train on; the focus is only on the negative, buggy examples, specifically the common semantic error caused by undeclared variables, which is often committed and goes unnoticed by novice programmers. The model is instead trained on the structural and semantic elements, that is, the non-terminal and terminal nodes of the program source code captured from the abstract syntax tree representation. The type of the undeclared variables is also inferred by performing type binding using the semantic elements of the AST representation, which provide the type information of the variables, thereby saving the compiler's time in performing the type binding of those undeclared variables. The comprehensive information on ASTs, the motivation for Long Short-Term Memory (LSTM), a detailed view of the training approach and implementation, the generation approach, and the different scenarios in which undeclared variables arise together with the possible cases of type inference for them will be discussed in the upcoming sections of the paper.
\section{Related Work}
ASTs are the static intermediate model of a program's source code, as discussed in \cite{4299919}: the compiler's analytic front-end parses the source code and constructs the AST model, eventually passing it to the compiler's synthetic back-end to produce assembly code for a specific machine; the AST is also used for program analysis/transformation. Low-level concrete syntax tree representations often have a complex structure, and it is difficult to characterize their semantics, especially in poorly understood domains; \cite{wile1997abstract} therefore describes a transformation process to obtain a good abstract syntax representation from a low-level concrete specification, on which modern language processing tools rely for their ability to analyze, simulate and synthesize programs easily.
\newline
\\
In the past recent decades, deep learning has achieved considerable success in the text domain, notably in neural machine translation, and has been used successfully for programming language processing tasks as well. Recent works that have achieved empirical success with neural machine translation include dialogue response generation, summarization and text generation tasks, as explained in detail in \cite{kosovan2017dialogue}. NMT is also very useful for translation from one language to another, as in \cite{choudhary2018neural}, where an NMT encoder-decoder model was implemented using word embeddings along with the byte-pair encoding method to develop an efficient translation mechanism from English to Tamil. The performance of such text generation tasks is enhanced with the attention mechanism of \cite{bahdanau2014neural}, which automatically searches for the context of a source text that is relevant for the prediction of target words, and with the ensemble of global and local attention mechanisms of \cite{luong2015effective}, which improves performance further.
\\
\\
Natural language generation, which has a discrete output space, has also been implemented with generative adversarial networks (introduced by \cite{goodfellow2014generative}) for generating sentences from context-free grammars and probabilistic grammars, as shown in \cite{rajeswar2017adversarial}. The quality of the text generation was improved by conditioning information on generated samples to produce realistic, natural-looking sentences compared to maximum likelihood training procedures. Text generation has also shown encouraging results in \cite{guo2018long}, in which the discriminator of the GAN leaks its own high-level features to the generator at each of the generator steps, thus making a scalar guiding signal available continuously throughout the generative process.
\\
\\
In \cite{liu2016neural}, prediction of the next tokens in source code is implemented using LSTM neural networks, where the model is trained to learn the associated subsequent nodes for code completion, given a partial AST containing the left subset of nodes or semantic features with respect to a subtree. The efficiency of ASTs in extracting tokens and comparing source codes based on them, and the use of deep learning in classifying duplicate/clone code, can be helpful in software code maintenance, as seen in \cite{li2017cclearner}: maintaining duplicated code for reuse in order to improve programming productivity becomes a burden when bug fixes and program modifications introduce inconsistencies with the original code at multiple locations.
\\
\\
The significance of the AST representation of source code can further be noted in \cite{dam2017automatic}, where LSTM neural networks are leveraged to capture the long contextual relationships between semantic features and identify related code elements, in order to predict the software vulnerabilities that cause a security threat or make the program buggy. A GAN approach is used in \cite{harer2018learning} for repairing vulnerabilities in source code without any paired examples or bijections, by mapping from the buggy source domain to the bug-free target domain and training the discriminator using the loss between real examples of the desired output and the NMT-generated outputs of the generator.
\\
Syntax errors pose a threat, as they make compilation fail. Recent techniques address them: in \cite{ahmed2018compilation}, an RNN model is learnt on syntactically correct, executing student programming course submissions to model all valid token sequences, and a prefix token sequence, running from the beginning of the program up to the error location, is used to predict the following sequence, which automates the repair of errors at the corresponding locations in the code. Sequence-to-sequence NMT with an attention mechanism is learned iteratively to repair syntax errors in \cite{gupta2017deepfix}, using a tokenized vector representation of the program to predict the erroneous program locations and the fixing statement, without using any external compiler tools or any AST representation. Real-time feedback on compile-time syntax errors is given to students enrolled in beginner-level programming assignments in \cite{bhatia2016automated}, where an RNN is used to predict the target lines from syntactically correct submissions given the source error lines from wrong submissions, and an abstract version of the top-ranked suggested fix is presented as feedback.
\begin{figure}
\begin{CenteredBox}
\begin{lstlisting}[xleftmargin=.1\textwidth,linebackgroundcolor={%
\ifnum\value{lstnumber}=7
\color{red!40}
\fi},linebackgroundwidth=18em,numbersep=25pt, basicstyle=\small]
<@\textcolor{blue}{\#include}@> <stdio.h>
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> i,max,j,n,m,y;
scanf(
<@\textcolor{blue}{for}@>(i=1;i<=n;i++){
s=0;
<@\textcolor{blue}{for}@>(j=1;j<=m;j++){
scanf(
s=s+j;
}
<@\textcolor{blue}{if}@>(max<s)
max=s;
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{CenteredBox}
\caption{An example in which the undeclared variable "s", used frequently in the program, is caught by the compiler}
\end{figure}
\noindent
\section{Approach}
This section covers some examples of undeclared variables and also the importance of semantic analysis to determine the type information of those undeclared variables. We introduce the Abstract Syntax Trees (AST) that is used as the input, deployment of the LSTM RNN for training the deep learning model, semantic analysis determining the types of undeclared variables, and the generation approach by performing the serialization/deserialization of the AST in order to get back the clean and bug-free source code.
\subsection{Motivating Examples}
The most frequent semantic error that goes unnoticed by novice programmers is the undeclared variable. The error arises when a variable is used without having been declared; another common cause is a spelling mistake, which makes the misspelled name's first occurrence in the program a use rather than a declaration.
\begin{figure}
\begin{CenteredBox}
\begin{lstlisting}[xleftmargin=.1\textwidth,linebackgroundcolor={%
\ifnum\value{lstnumber}=16
\color{red!40}
\fi},linebackgroundwidth=19.5em,numbersep=18pt, basicstyle=\small]
<@\textcolor{blue}{\#include}@> <stdio.h>
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> n,m;
<@\textcolor{blue}{int}@> i,j;
<@\textcolor{blue}{int}@> a[20];
<@\textcolor{blue}{int}@> sum=0;
scanf(
<@\textcolor{blue}{for}@>(i=0;i<n;i++){
<@\textcolor{blue}{for}@>(j=0;j<m;j++){
scanf(
sum=sum+a[j];
}
printf(
i++;
J++;
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{CenteredBox}
\caption{An example of compiler error caused by an undeclared variable "J" that is used once in the program}
\end{figure}
The main challenge lies in determining whether the variable is an identifier, an array, a pointer or a pointer-to-pointer, and also in inferring the type of the variable: whether it is an integer, float, character, double, long int and so on. The C99 standard\footnote{www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf} removes the implicit integer rule, which states that a variable declared without an explicit data type is assumed to be an integer and which was previously part of the C89 standard\footnote{https://www.pdf-archive.com/2014/10/02/ansi-iso-9899-1990-1/ansi-iso-9899-1990-1.pdf}. Therefore, there is a need to determine the variables that are undeclared along with their types, else the compiler will throw an error.
\\
\subsection{Abstract Syntax Trees}
Generally, any programming language, whether statically or dynamically typed, follows an unambiguous context-free grammar, in which no string has more than one leftmost (or rightmost) derivation; more precisely, there is always a unique parse tree for each string of the language generated by the grammar.
\begin{figure*}
\begin{tikzpicture}[scale=0.69,sibling distance=10pt]
\tikzset{every tree node/.style={align=center,anchor=north}}
\tikzset{level 1/.style={sibling distance=30mm},level 2/.style={sibling distance=10mm}}
\Tree [.\node[draw,color=black]{FuncDef}; [.\node[draw,color=black] {FuncDecl}; [.\node[draw,color=black]{Decl}; \node[draw,dashed,color=red]{\textit{hanoi}}; ] [.\node[draw,color=black] {TypeDecl}; \node[draw,dashed,color=red]{\textit{hanoi}}; ] [.\node[draw,color=black] {IdentifierType}; \node[draw,dashed,color=red]{\textit{float}}; ] [.\node[draw,color=black] {Compound}; [.\node[draw,color=black] {Decl}; \node[draw,dashed,color=red]{\textit{towers}}; ] [.\node[draw,color=black] {TypeDecl}; \node[draw,dashed,color=red]{\textit{towers}}; ] [.\node[draw,color=black] {IdentifierType}; \node[draw,dashed,color=red]{\textit{float}}; ] [.\node[draw,color=black] {FuncCall}; [.\node[draw,color=black] {ID}; \node[draw,dashed,color=red]{\textit{scanf}}; ] [.\node[draw,color=black] {ExprList}; [.\node[draw,color=black]{Constant}; \node[draw,dashed,color=red]{\textit{"\%f"}}; ] [.\node[draw,color=black]{UnaryOp:\&}; [.\node[draw,color=black]{ID}; \node[draw,dashed,color=red]{\textit{towers}}; ]] ] ] [.\node[draw,color=black] {Assignment:=}; [.\node[draw,color=black] {ID}; \node[draw,dashed,color=red]{\textit{towers}}; ] [.\node[draw,color=black] {BinaryOp:+}; [.\node[draw,color=black] {ID}; \node[draw,dashed,color=red]{\textit{towers}}; ] [.\node[draw,color=black] {Constant}; \node[draw,dashed,color=red]{\textit{999.99}}; ] ] ] ] ] ]
\end{tikzpicture}
\caption{An example illustration of Abstract Syntax Trees(AST) of C program with non-terminals at the root as well as internal nodes and terminals at the leaf nodes of the tree.}
\end{figure*}
Parse trees are formed from a concrete context-free grammar and are not suitable for performing syntax or semantic analysis due to their complex representation. The Abstract Syntax Tree, by contrast, is a derivation tree following an abstract grammar and is used as an input for syntax/semantic analysis at compile time. It is a rooted tree representation of the abstract syntactic structure of a programming language construct, where the non-terminals form the non-leaf nodes and the terminals form the children nodes. Some of the non-terminals in the C language are \texttt{Decl, IdentifierType, For, TypeDecl, ExprList, FuncDef, ArrayDecl} etc. Terminal nodes include any string literals, numerical literals, variable names, operators, keywords etc. Figure 3 shows an example AST representation, where a node enclosed in a black rectangular box depicts a non-terminal node (e.g., \texttt{FuncDef}) and a node enclosed in a red dashed box depicts a terminal node or token of the source code.
\subsection{Model}
This subsection covers the basics of LSTM-RNNs and the prediction model that is subsequently used for the generation approach. \\ \\
Long Short-Term Memory (LSTM) recurrent neural networks have special memory units in the form of self-loops that produce paths along which information can be maintained for long durations of time. LSTM is preferred over the vanilla RNN because the former tends to avoid the vanishing or exploding gradient problem that occurs when trying to learn long-term dependencies and store them in memory cells during backpropagation. This problem occurs when many deep layers with saturating activation functions like the sigmoid are used for training: the sigmoid squashes a region of input space into an output space between 0 and 1, so even a large change in the input region produces an almost negligible change in the output region, thereby making the gradients/error signals of a long-term interaction vanishingly small. Further, vanilla RNNs are affected by an information morphing problem, in which information contained in a prior state is lost due to the non-linearities between the input and output spaces. LSTM avoids this problem by ensuring a constant unit activation on the cell state and uses gates to control the information flow between the memory cell and the outside layers without interference. LSTM uses three gates, namely the forget gate, input gate and output gate layers. The forget gate layer decides which information is to be kept or erased from the LSTM cell state, where the decision is made by a sigmoid layer outputting a number between 0 and 1.
\begin{equation}
f_i^{(t)} = \sigma\Big(b_i^{f} + \sum\limits_{j} U_{i,j}^{f}x_j^{(t)} + \sum\limits_{j} W_{i,j}^{f}h_j^{(t-1)} \Big)
\end{equation}
where $x^{(t)}$ is the input vector at the current timestep $t$, $h^{(t)}$ is the current hidden layer vector at timestep $t$, and $b^{f}$, $U^{f}$, $W^{f}$ are the bias units, input weights and recurrent weights of the forget gate units $f_i^{(t)}$.
\\
The input gate layer controls the flow of new information that is being stored in the LSTM cell state $s_i^{(t)}$, conditioned with a self-loop weight $f_i^{(t)}$.
\begin{equation}
g_i^{(t)} = \sigma\Big(b_i^{g} + \sum\limits_{j} U_{i,j}^{g}x_j^{(t)} + \sum\limits_{j} W_{i,j}^{g}h_j^{(t-1)} \Big)
\end{equation}
where $x^{(t)}$ is the input vector at the current timestep $t$, $h^{(t)}$ is the current hidden layer vector at timestep $t$, and $b^{g}$, $U^{g}$, $W^{g}$ are the bias units, input weights and recurrent weights of the input gate units $g_i^{(t)}$.
\begin{equation}
s_i^{(t)} = f_i^{(t)}s_i^{(t-1)} + g_i^{(t)} \sigma\Big(b_i + \sum\limits_{j} U_{i,j}x_j^{(t)} + \sum\limits_{j} W_{i,j}h_j^{(t-1)} \Big)
\end{equation}
where the parameters $W$, $U$ and $b$ represent the recurrent weights, input weights and bias units present in an LSTM cell. The output gate layer in the memory cell decides which pieces of information are going to be output from the LSTM cell state. This is done by passing the cell state through a tanh layer and eventually multiplying by the sigmoid of the output gate.
\begin{equation}
q_i^{(t)} = \sigma\Big(b_i^{o} + \sum\limits_{j} U_{i,j}^{o}x_j^{(t)} + \sum\limits_{j} W_{i,j}^{o}h_j^{(t-1)} \Big)
\end{equation}
where $b^{o}$, $U^{o}$, $W^{o}$ are the parametric units of the output gate $q_i^{(t)}$, representing the bias units, input weights and recurrent weights.
The output hidden state $h_i^{(t)}$ is obtained from the output gate $q_i^{(t)}$ as follows:
\begin{equation}
h_i^{(t)}= \tanh(s_i^{(t)})\,q_i^{(t)}
\end{equation}
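As a compact illustration of equations (1)--(5), the following is a minimal NumPy sketch of a single LSTM step. It follows the equations above literally (including the sigmoid in the cell update of equation (3)); the parameter names are our own and this is not the training implementation used in our experiments.
\begin{lstlisting}[language=Python, numbers=none, basicstyle=\small]
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, s_prev, p):
    # p holds bias units b*, input weights U* and recurrent
    # weights W* for the forget (f), input (g) and output (o)
    # gates and for the cell update, as in equations (1)-(5).
    f = sigmoid(p["bf"] + p["Uf"] @ x_t + p["Wf"] @ h_prev)  # eq. (1)
    g = sigmoid(p["bg"] + p["Ug"] @ x_t + p["Wg"] @ h_prev)  # eq. (2)
    s = f * s_prev + g * sigmoid(
        p["b"] + p["U"] @ x_t + p["W"] @ h_prev)             # eq. (3)
    q = sigmoid(p["bo"] + p["Uo"] @ x_t + p["Wo"] @ h_prev)  # eq. (4)
    h = np.tanh(s) * q                                       # eq. (5)
    return h, s
\end{lstlisting}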
In our prediction model, we use the non-terminals and terminals obtained from the AST as the input. The main idea behind our model is to supply a declaration for any identifier that is used in a C program. Here, an identifier in our context excludes keywords and includes only alphanumeric variables. This in turn solves the complex problem of automatically fixing undeclared variables.\\
\subsubsection{Declaration Classification and Prediction}
\texttt{ID} is the non-terminal node of the AST that represents an identifier, excluding keywords as mentioned above. \texttt{Decl} is the non-terminal node of the AST representation that represents the declaration of an identifier, where its terminal node is the corresponding identifier itself. Similarly, \texttt{TypeDecl} and \texttt{IdentifierType} are the semantic elements used to represent the type specifier information of the identifier, where the identifier and its type are their corresponding terminals respectively.
\\
\\
For the classification, each pair of the non-terminal node \texttt{ID} and a terminal alphanumeric identifier (usually a variable) is augmented with the pairs of \texttt{Decl} and the respective identifier, \texttt{TypeDecl} and the identifier, and \texttt{IdentifierType} and a generalized "type" referring to the corresponding type of that identifier variable, so that they can be used for backsubstitution, which will be explained later in the generation approach. After classification, the LSTM model is used to predict the \texttt{Decl}, \texttt{TypeDecl} and \texttt{IdentifierType} information for any alphanumeric identifier variable occurring with its corresponding non-terminal node \texttt{ID}. The classification model is sequential, with the embedding layer, LSTM layer and softmax layer stacked on top of each other as shown in Figure 4 and detailed below in this section. \\
\subsubsection{Embedding Layer of Non-Terminal and Terminal Nodes}
In our model, we do not use any pre-trained embeddings; the embedding is instead trained jointly with the model. The sequences of input tokens are a combination of non-terminal and terminal nodes, where in our model we consider only the 4 non-terminals \texttt{ID, Decl, TypeDecl, IdentifierType} and all alphanumeric identifier variables as the terminals. These input tokens are formed by concatenating the individual string encodings of the non-terminal and terminal node vocabularies (discussed in the evaluation section); the embedding is then computed on the resulting integer encodings (converted back from strings) for training the model. The embeddings are computed as follows:
\begin{equation}
E_i= A\,\mathrm{concat}(N_i, T_i)
\end{equation}
where $A$ is a $K\times V_{N,T}$ matrix, $K$ is the embedding vector size and $V_{N,T}$ is the vocabulary size formed by the concatenated encodings of non-terminal and terminal nodes.
\\
\subsubsection{LSTM Layer}
The sequences of embedded tokens are passed on to the LSTM layer containing LSTM memory cells, where each cell state stores information of the previous state, controlled by the forget gate, input gate and output gate layers as given above in equations (1), (2), (3), (4). Each LSTM cell takes as inputs its previous LSTM cell's hidden state
\begin{figure*}
\begin{center}
\includegraphics[width=320pt,height=220pt]{model_diag.PNG}
\caption{Illustration of approach showing concatenation of non-terminal and terminal node embeddings extracted from AST being used as the inputs to LSTM model for the sequence classification and prediction.}
\end{center}
\end{figure*}
$h_{i-1}$ and the state information $s_i$, as well as the input tokens, and outputs the hidden state $h_i$ of the LSTM cell as in equation (5); the LSTM layers can be seen in Figure 4.
\\
\subsubsection{Dense Softmax Activation Layer}
The hidden state output by the last LSTM memory cell of the LSTM layer is passed to the softmax activation layer to predict the sequences of non-terminals \texttt{Decl}, \texttt{TypeDecl} and \texttt{IdentifierType} given the non-terminal \texttt{ID}. The predicted output sequences at timestep $t$ (the fixed input sequence length) are represented by $\hat{y}(t)$ and formulated as:
\begin{equation}
\hat{y}(t) = \mathrm{softmax}(b_{N,T}+ M_{N,T}h^{(t)})
\end{equation}
where $b_{N,T}$ is the bias unit of the softmax layer, a $V_{N,T}$-dimensional vector, $M_{N,T}$ is a weight matrix of size $K \times V_{N,T}$ and $h^{(t)}$ is the hidden state of the LSTM cells at each timestep $t$.
\section{Evaluation and Results}
\subsection{Dataset}
The dataset used in this approach is Prutor\footnote{https://www.cse.iitk.ac.in/users/karkare/prutor/prutor-deepfix-09-12-2017.zip}, a database of student coding submissions for university programming assignments. It contains a set of 53478 C programs, out of which 6978 are erroneous programs containing multi-line and single-line syntax as well as semantic errors. Out of the 6978 programs, 1059 contain only undeclared variable errors, which are the main focus of our evaluation.
\subsection{Preprocessing and Training Details}
We use pycparser\footnote{https://pypi.org/project/pycparser/}, which acts as a front-end of the C compiler, to parse source code of the C language in Python. ASTs are obtained as output for the source code after the parsing stage and are stored in text files.
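For concreteness, the parsing step amounts to the following sketch (the file paths are illustrative):
\begin{lstlisting}[language=Python, numbers=none, basicstyle=\small]
from pycparser import parse_file

# Parse a C file into its AST (running the C preprocessor first)
# and dump the tree to a text file; 'prog.c' is an example path.
ast = parse_file("prog.c", use_cpp=True)
with open("prog_ast.txt", "w") as out:
    ast.show(buf=out)
\end{lstlisting}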
\newline
\\
The source code is preprocessed into tokens that represent the terminal nodes of its corresponding AST. Since the set of tokens is discrete and in textual form, it must be encoded into sequences of numerical vectors to be used for training the model. Additionally, there is a fixed set of 47 non-terminals in the C language, encoded as in Figure 5. The terminals can be keywords, strings, data types, integers or floating point numbers. The terminals of the above-mentioned categories are encoded separately, each in a specific range of numbers. The data for training is then prepared by concatenating the encodings of non-terminals and terminals together: the individual integer encodings are first converted to strings and, after concatenation, converted back to integers. For example, the non-terminal \texttt{IdentifierType} is encoded as 9 in the dictionary of non-terminals and converted to '9'; if the terminal is a data type like int, float, long etc., it is encoded as 111111, referring to a generalized 'type', and converted to '111111'. The concatenations of the non-terminals and terminals are mapped accordingly and stored in a separate vocabulary. This vocabulary set is used in training the model. A one-hot encoding approach is used to perform categorical multi-class classification, representing the elements of the vocabulary as vectors of vocabulary size containing a 1 at the corresponding index and 0 elsewhere.
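To make the encoding concrete, the following is a minimal sketch of the concatenation scheme just described. The code 9 for \texttt{IdentifierType} and the generalized type code 111111 are taken from the text; the remaining dictionary entries are illustrative placeholders, not the exact values of our vocabulary.
\begin{lstlisting}[language=Python, numbers=none, basicstyle=\small]
# Excerpt of the non-terminal dictionary; only IdentifierType=9 is
# quoted in the text, the other codes here are placeholders.
NONTERMINALS = {"ID": 20, "Decl": 5, "TypeDecl": 40, "IdentifierType": 9}
DATA_TYPES = {"int", "float", "char", "double", "long"}

def encode_pair(nonterminal, terminal, terminal_dict):
    # Data-type terminals collapse to the generalized 'type' code.
    t_code = 111111 if terminal in DATA_TYPES else terminal_dict[terminal]
    # Concatenate the two integer codes as strings, then read the
    # result back as a single integer, as described above.
    return int(str(NONTERMINALS[nonterminal]) + str(t_code))
\end{lstlisting}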
\newline
\\
\textbf{Training Details:}\\
The training experiment is performed using an embedding dimension of size 512 and two LSTM layers, each with 512 hidden units and a dropout of 0.5. The input sequence length is 1, the batch size is 3, the vocabulary size is 583 and the total number of sequences is 2319. A dense layer forms a fully connected layer, in which each input node is connected with every hidden and output node. The activation function used in our model is the softmax function, because of its suitability for multi-class classification problems compared to the sigmoid and ReLU: the outputs of the softmax form a categorical probability distribution, summing to 1 and lying between 0 and 1. The total number of units of the dense layer is equal to the vocabulary size. The vocabulary formed from the concatenation is split 80/20 into training and test data, i.e., the test data size is 0.2. The loss function used is the categorical cross-entropy function. The optimizer used is RMSprop with a learning rate of 0.01, as it handles non-local extremum points better and has a constant initial global learning rate compared to optimizers such as stochastic gradient descent.
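The architecture just described translates into the following Keras-style sketch; the text does not name a framework, so this is illustrative rather than the exact training script, with the hyperparameters as reported above.
\begin{lstlisting}[language=Python, numbers=none, basicstyle=\small]
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.optimizers import RMSprop

VOCAB_SIZE = 583   # concatenated non-terminal/terminal vocabulary
SEQ_LEN = 1        # input sequence length used in the experiment

model = Sequential()
model.add(Embedding(VOCAB_SIZE, 512, input_length=SEQ_LEN))
model.add(LSTM(512, dropout=0.5, return_sequences=True))
model.add(LSTM(512, dropout=0.5))
model.add(Dense(VOCAB_SIZE, activation="softmax"))

model.compile(loss="categorical_crossentropy",
              optimizer=RMSprop(lr=0.01))
# model.fit(X, y, batch_size=3, validation_split=0.2)
\end{lstlisting}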
\begin{figure}
\includegraphics[width=\linewidth,height=120pt]{non_terminal_vocab.PNG}
\caption{The vocabulary of all non-terminal nodes of AST for C programs}
\end{figure}
\subsection{Generation Approach}
During generation, one-hot encoding is used to represent new unseen sequences of the non-terminal \texttt{ID} and a terminal variable as a categorical distribution, and the output class is the sequence of non-terminals \texttt{Decl, TypeDecl, IdentifierType} along with the corresponding terminal variables.
\subsubsection{AST Transformation}
The program fix is carried out through an AST transformation, performed by augmenting each program's AST syntactic structure with the predicted output sequences for the terminal variables associated with the corresponding non-terminal node \texttt{ID}. This augmentation is carried out on each source code by checking the declarations of the variables against the vocabulary set of concatenated encodings of non-terminal and terminal nodes created previously, using the predicted output sequence of the non-terminal \texttt{Decl} and the associated terminal variable. If a predicted output sequence does not match any declared variable present in the source code, then the output sequence \texttt{Decl} containing the particular terminal variable is augmented into the original AST structure of the code through serialization and deserialization, as described below.
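The check itself is simple set arithmetic once the \texttt{ID} and \texttt{Decl} pairs of a program have been collected; a sketch (the helper names are ours):
\begin{lstlisting}[language=Python, numbers=none, basicstyle=\small]
def undeclared_variables(used_ids, declared_ids):
    # used_ids: terminal variables appearing under an ID node;
    # declared_ids: terminal variables appearing under a Decl node.
    # Every used-but-undeclared variable triggers the augmentation
    # of a predicted Decl subtree into the program's AST.
    return sorted(set(used_ids) - set(declared_ids))
\end{lstlisting}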
\begin{figure*}
\begin{lstlisting}[xleftmargin=.3\textwidth, xrightmargin=.3\textwidth, basicstyle=\small,numbers=none]
{
"_nodetype": "Decl",
"bitsize": null,
"funcspec": [],
"init": null,
"name": "j",
"quals": [],
"storage": [],
"type": {
"_nodetype": "TypeDecl",
"declname": "j",
"quals": [],
"type": {
"_nodetype": "IdentifierType",
"names": [
"int"
]
}
}
}
\end{lstlisting}
\caption{Example demo of JSON object containing Decl, TypeDecl and IdentifierType nodes.}
\end{figure*}
\noindent
\\
\subsubsection{Serialization and Deserialization}
Serialization is implemented using pycparser by recursively traversing the AST (whose nodes are Python \texttt{Node} objects obtained by parsing the source code with pycparser) and transforming it into a dictionary representation, which is then serialized into a JSON object; pycparser can deserialize this JSON object back to a dictionary and consequently back to AST \texttt{Node} objects. An example JSON representation of an AST node object is shown in Figure 6, where the \texttt{\_nodetype} key refers to the different types of non-terminal nodes such as \texttt{Decl, ArrayDecl, TypeDecl, IdentifierType}. The \texttt{TypeDecl} and \texttt{ArrayDecl} are child nodes of \texttt{Decl}, and \texttt{IdentifierType} is the child node of the intermediate non-terminal \texttt{TypeDecl} node. The key \texttt{name} refers to the terminal variables, \texttt{type} refers to the datatypes and \texttt{coord} refers to the \texttt{Coord} node of the AST, indicating the location of the object in the corresponding source code.
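A condensed sketch of this serialization step is shown below; it follows the pattern of pycparser's bundled \texttt{c\_json.py} example, but omits the handling of coordinates and list-valued children that the full implementation needs.
\begin{lstlisting}[language=Python, numbers=none, basicstyle=\small]
import json

def ast_to_dict(node):
    # '_nodetype' mirrors the key shown in Figure 6.
    result = {"_nodetype": node.__class__.__name__}
    for attr in node.attr_names:   # e.g. 'name', 'declname', 'names'
        result[attr] = getattr(node, attr)
    for child_name, child in node.children():
        result[child_name] = ast_to_dict(child)
    return result

# json.dumps(ast_to_dict(ast), indent=2) yields objects like the one
# in Figure 6; json.loads plus a reverse walk rebuilds Node objects.
\end{lstlisting}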
\\
\\
The serialization and deserialization are carried out whenever an AST transformation is performed, as described above, so that the transformation is applied consistently without corrupting the program, thereby preserving the integrity of the source code.
\subsubsection{Pre-Compile Time Type Binding and Analysis Results}
After determining and performing the augmentation of the undeclared terminal variables, the type of each undeclared variable is determined before compiling the program and is augmented to the original AST structure. The type binding is performed by determining the type of an \texttt{lvalue} from its \texttt{rvalue} in an assignment statement, or by finding the type of an undeclared variable from its neighboring variables in an expression of the program.
\\
\\
The type of an undeclared variable is determined according to the following cases:
\\
\\
\textbf{Case 1:} When the \texttt{rvalue} is a constant, whose non-terminal is \texttt{Constant} and whose type is integer, the \texttt{lvalue}, whose non-terminal is \texttt{ID}, is assigned \texttt{integer}. As seen in Figure 7, in the assignment statement \textbf{i=0} on the left side of the figure, "i" is undeclared and is assigned \texttt{integer}.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth,linebackgroundcolor={%
\ifnum\value{lstnumber}=6
\color{red!40}
\fi},linebackgroundwidth=14em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> k,n,x,a[100];
scanf(
scanf(
<@\textcolor{blue}{for}@>(i=0;i<n;i++)
scanf(
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=14em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> k;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> x;
<@\textcolor{blue}{int}@> a[100];
scanf(
scanf(
<@\textcolor{blue}{for}@> (i = 0; i < n; i++)
scanf(
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Case 1 demonstrating the location of the error in the for loop statement involving the undeclared identifier "i", and its fix}
\noindent
\\
\textbf{Case 2:} In this case, if the \texttt{rvalue} is an identifier that refers to an array element, whose non-terminal is \texttt{ID} and whose non-terminal parent is \texttt{ArrayRef}, then the \texttt{lvalue} terminal variable with non-terminal \texttt{ID} is assigned the type of the \texttt{rvalue} element. In Figure 8, "c" is undeclared and is assigned the \texttt{integer} type of the array "n[1000]" in the statement \textbf{c=n[i]}.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth,linebackgroundcolor={%
\ifnum\value{lstnumber}=13
\color{red!40}
\fi},linebackgroundwidth=14em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> n[1000],a[500],nm,i,j,ln,flag=0;
scanf(
scanf(
<@\textcolor{blue}{for}@>(i=0;i<500;i++)
{
a[i]=0;
}
<@\textcolor{blue}{for}@>(i=0;i<nm;i++)
{
scanf(
c=n[i];
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=14.5em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> c;
<@\textcolor{blue}{int}@> n[1000];
<@\textcolor{blue}{int}@> a[500];
<@\textcolor{blue}{int}@> nm;
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{int}@> ln;
<@\textcolor{blue}{int}@> flag = 0;
scanf(
scanf(
<@\textcolor{blue}{for}@> (i = 0; i < 500; i++)
{
a[i] = 0;
}
<@\textcolor{blue}{for}@> (i = 0; i < nm; i++)
{
scanf(
c = n[i];
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Case 2 indicating the error in an assignment statement between a variable and an array identifier}
\noindent
\\
\textbf{Case 3:} If the non-terminals of the \texttt{rvalue} and the \texttt{lvalue} are children nodes of the non-terminal \texttt{BinaryOp}, then the type of the \texttt{rvalue} is assigned as the type of the \texttt{lvalue}. On the left side of Figure 9, in the conditional expression statement \textbf{$count > max$}, the \texttt{BinaryOp} is "$>$", the \texttt{lvalue} "count" is undeclared with non-terminal \texttt{ID}, and the \texttt{rvalue} is "max" with non-terminal \texttt{ID}.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=18
\color{red!40}
\fi},linebackgroundwidth=15em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> n,i,j,max;
<@\textcolor{blue}{int}@> a[20];
<@\textcolor{blue}{for}@>(i=0;i<n;i++)
{
<@\textcolor{blue}{for}@>(j=i;j<n;j++)
{
<@\textcolor{blue}{if}@>(a[i]<a[j])
{
count=count+1;
}
}
<@\textcolor{blue}{if}@>(count>max){max=count;}
}
printf(
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth,linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=14.5em,numbersep=5pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> count;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{int}@> max;
<@\textcolor{blue}{int}@> a[20];
<@\textcolor{blue}{for}@> (i = 0; i < n; i++)
{
<@\textcolor{blue}{for}@> (j = i; j < n; j++)
{
<@\textcolor{blue}{if}@> (a[i] < a[j])
{
count = count + 1;
}
}
<@\textcolor{blue}{if}@> (count > max)
{
max = count;
}
}
printf(
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Case 3 illustrating the repair of the undeclared identifier "count" in a conditional expression statement}
\noindent
\\
\textbf{Case 4:} This case is similar to case 2 but deals with the assignment of a variable to another variable instead of an array element. On the left side of Figure 10, the \texttt{lvalue} variable "z" inside the \texttt{For} statement is undeclared; since its non-terminal node is \texttt{ID}, it is assigned the \texttt{integer} type of the variable "i", whose non-terminal is also \texttt{ID}.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=6
\color{red!40}
\fi},linebackgroundwidth=15em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main() {
<@\textcolor{blue}{int}@> n, i, j, k;
scanf(
<@\textcolor{blue}{for}@>(i=1; i<=n; i++)
{
<@\textcolor{blue}{for}@>(j=1,z=i;j<=i;j++,k--)
{
<@\textcolor{blue}{if}@>((
printf("*");
}
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=17.5em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> z;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{int}@> k;
scanf(
<@\textcolor{blue}{for}@> (i = 1; i <= n; i++)
{
<@\textcolor{blue}{for}@> (j = 1,z = i;j<=i;j++,k--)
{
<@\textcolor{blue}{if}@> ((k
printf("*");
}
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Case 4 illustrating the fix of the variable "z" in the for loop statement}
\noindent
\\
\textbf{Case 5:} This case deals with a binary operation involved in an assignment expression statement. In Figure 11, the terminal variable "t" is undeclared and is assigned the type of the terminal variable "summation", which is of type \texttt{double}. In the statement \textbf{summation = summation + t*delx}, there is a non-terminal \texttt{Assignment} whose children are the non-terminal \texttt{ID} with terminal variable "summation" and the node \texttt{BinaryOp:+} with children \texttt{ID:summation} and \texttt{BinaryOp:*}, the latter having children \texttt{ID:t} and \texttt{ID:delx}.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=8
\color{red!40}
\fi},linebackgroundwidth=17em,numbersep=5pt,basicstyle=\small]
<@\textcolor{blue}{double}@> sum(<@\textcolor{blue}{double}@> a, <@\textcolor{blue}{double}@> n, <@\textcolor{blue}{double}@> delx)
{
<@\textcolor{blue}{double}@> summation=0;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{for}@> (j=0;j<n;j++)
{<@\textcolor{blue}{double}@> x=a+j*delx;
<@\textcolor{blue}{double}@> r=fabs(f(x)-g(x));
summation=summation+t*delx;
}
<@\textcolor{blue}{return}@> summation;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=19em,numbersep=5pt,basicstyle=\small]
<@\textcolor{blue}{double}@> sum(<@\textcolor{blue}{double}@> a, <@\textcolor{blue}{double}@> n, <@\textcolor{blue}{double}@> delx)
{
<@\textcolor{blue}{double}@> t;
<@\textcolor{blue}{double}@> summation = 0;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{for}@> (j = 0; j < n; j++)
{
<@\textcolor{blue}{double}@> x = a + (j * delx);
<@\textcolor{blue}{double}@> r =fabs(f(x)-g(x));
summation =summation+(t * delx);
}
<@\textcolor{blue}{return}@> summation;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Case 5 demonstrating the error in a binary operation and the fix of the undeclared variable "t"}
\noindent
\\
\textbf{Case 6:} This case is the exact opposite of case 2: in Figure 12, the \texttt{lvalue} is an undeclared array identifier "b", whose non-terminal node is \texttt{ID} with parent node \texttt{ArrayRef}, and the \texttt{rvalue} is the terminal variable "count"; the undeclared \texttt{ID} node is therefore assigned "count"'s type, \texttt{integer}.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=15
\color{red!40}
\fi},linebackgroundwidth=15em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> i,j,n,k,count=0,max;
scanf(
<@\textcolor{blue}{int}@> a[n];
<@\textcolor{blue}{for}@>(i=0;i<n;i++){
scanf(
}
<@\textcolor{blue}{for}@> (i=0;i<n;i++){
<@\textcolor{blue}{for}@> (j=i;j<n;j++){
<@\textcolor{blue}{if}@> (a[j]>a[i]){
count++;
}
}
b[i]=count;
count=0;
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=14.5em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> b[1000];
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> k;
<@\textcolor{blue}{int}@> count = 0;
<@\textcolor{blue}{int}@> max;
scanf(
<@\textcolor{blue}{int}@> a[n];
<@\textcolor{blue}{for}@> (i = 0; i < n; i++)
{
scanf(
}
<@\textcolor{blue}{for}@> (i = 0; i < n; i++)
{
<@\textcolor{blue}{for}@> (j = i; j < n; j++)
{
<@\textcolor{blue}{if}@> (a[j] > a[i])
{
count++;
}
}
b[i] = count;
count = 0;
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Illustration of case 6, with the red line marking the error in the assignment statement}
\noindent
\\
\textbf{Case 7:} This case is similar to case 5, but does not involve any assignment operation. A type can be assigned to a variable not only from an \texttt{lvalue} but also from its neighbouring variables in a binary operation. On the left side of Figure 13, the undeclared terminal variable "diff" takes part in the binary operations \texttt{BinaryOp:*} and \texttt{BinaryOp:+} with the terminal variable "key" and the constant "1", respectively, so "diff" is assigned the \texttt{integer} type.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=8
\color{red!40}
\fi},linebackgroundwidth=16em,numbersep=5pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{const double}@> E=0.000001;
<@\textcolor{blue}{double}@> a,b,inter,subarea=0;
<@\textcolor{blue}{int}@> n,key=0;
scanf(
inter=(b - a)/n;
<@\textcolor{blue}{while}@>(key<n&&diff*key+1< E)
{
subarea+=1;
key++;
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=20em,numbersep=5pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> diff;
<@\textcolor{blue}{const double}@> E = 0.000001;
<@\textcolor{blue}{double}@> a;
<@\textcolor{blue}{double}@> b;
<@\textcolor{blue}{double}@> inter;
<@\textcolor{blue}{double}@> subarea = 0;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> key = 0;
scanf(
inter = (b - a) / n;
<@\textcolor{blue}{while}@>((key < n)&&((diff*key)+ 1)<E))
{
subarea += 1;
key++;
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Demo of case 7 involving the error in the while loop statement}
\noindent
\\
\textbf{Case 8:} In this case, the type of a variable is bound from the type of a function call in a conditional expression. In Figure 14, the terminal variable "k" is undeclared in the \texttt{for} expression \textbf{k $>=$ hanoi(j)-1}; it takes part in the binary operation \texttt{BinaryOp:>=} with the function call \textbf{hanoi(j)}, whose terminal "hanoi" has non-terminal node \texttt{ID} with parent node \texttt{FuncCall}, so "k" is assigned the call's \texttt{integer} type.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=8
\color{red!40}
\fi},linebackgroundwidth=16em,numbersep=5pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main() {
<@\textcolor{blue}{int}@> t,i,n,j;
<@\textcolor{blue}{int}@> x;
scanf(
<@\textcolor{blue}{for}@>(i=1;i<t;i++)
{
scanf(
<@\textcolor{blue}{for}@>(j=0;k>=hanoi(j)-1;j++)
{
<@\textcolor{blue}{if}@>(hanoi(j)-1==k)
printf("yes");
<@\textcolor{blue}{else}@>
printf("no");
}
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=20.5em,numbersep=5pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> k;
<@\textcolor{blue}{int}@> t;
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{int}@> x;
scanf(
<@\textcolor{blue}{for}@> (i = 1; i < t; i++)
{
scanf(
<@\textcolor{blue}{for}@> (j = 0;k>=(hanoi(j) - 1);j++)
{
<@\textcolor{blue}{if}@> ((hanoi(j) - 1) == k)
printf("yes");
<@\textcolor{blue}{else}@>
printf("no");
}
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Case 8 indicating the undeclared "k" in the for loop statement}
\noindent
\\
\textbf{Case 9:} This case is similar to case 8; however, instead of a conditional expression with a binary operation, the type of a function call appearing as the \texttt{rvalue} is bound to the \texttt{lvalue} variable in an assignment expression statement with a binary operation. As seen in Figure 15, the variable "y" is undeclared in the assignment expression \textbf{y = tower(j)-1} and is assigned the type of the function call \textbf{tower(j)}, whose non-terminal node is \texttt{ID} with parent node \texttt{FuncCall}.
\\
\\
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.2\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=9
\color{red!40}
\fi},linebackgroundwidth=15em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{<@\textcolor{blue}{int}@> i,n,j,t;
scanf(
<@\textcolor{blue}{for}@>(i=1;i<=n;i++)
{
scanf(
<@\textcolor{blue}{for}@>(j=1;j<=200;j++)
{
y=tower(j)-1;
}
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill\vline\hfill
\begin{minipage}{.8\textwidth}
\begin{lstlisting}[xrightmargin=.5\textwidth, linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=16.5em,numbersep=10pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> y;
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{int}@> t;
scanf(
<@\textcolor{blue}{for}@> (i = 1; i <= n; i++)
{
scanf(
<@\textcolor{blue}{for}@> (j = 1; j <= 200; j++)
{
y = tower(j) - 1;
}
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{Case 9 demonstrating the undeclared identifier "y" in an assignment statement with a function call}
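Taken together, Cases 1--9 amount to a recursive type-inference pass over the \texttt{rvalue} expression. A minimal sketch, assuming pycparser's \texttt{c\_ast} node classes and an illustrative symbol table \texttt{symtab} mapping declared names to their types, is:
\begin{lstlisting}[numbers=none,basicstyle=\small]
from pycparser import c_ast

def infer_type(node, symtab):
    if isinstance(node, c_ast.Constant):  # Case 1: constant rvalue
        return node.type                  # e.g. 'int'
    if isinstance(node, c_ast.ID):        # Cases 2-4: identifiers
        return symtab.get(node.name)
    if isinstance(node, c_ast.ArrayRef):  # Cases 2, 6: a[i]
        return infer_type(node.name, symtab)
    if isinstance(node, c_ast.BinaryOp):  # Cases 3, 5, 7: operands
        return (infer_type(node.left, symtab)
                or infer_type(node.right, symtab))
    if isinstance(node, c_ast.FuncCall):  # Cases 8, 9: return type
        return symtab.get(node.name.name)
    return None
\end{lstlisting}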
\begin{table*}
\begin{tabular}{ | P{2cm} | P{1.7cm} | P{1.5cm}| P{1.7cm}| P{1.5cm}| P{1.7cm}| P{1.7cm}| P{1.2cm}| }
\cline{2-8}
\multicolumn{1}{c|}{} & \textbf{Identified} & \textbf{Not Identified} & \textbf{Correctly Identified (True Positive)} & \textbf{Wrongly Identified (False Positive)} & \textbf{Correctly Identified + Correct Type Inferred (Fixed)} & \textbf{Wrongly Identified + Wrong Type Inferred (Not Fixed)} & \textbf{Total} \\
\hline
\textbf{Undeclared Variables and Arrays} & 887(83.7\%) & 172 & 857(80.9\%) & 202 & 844(79.7\%) & 215 & 1059 \\
\hline
\textbf{Undeclared variables - Main function} & N/A & N/A & 566(99.1\%) & 5 & 560(98\%) & 11 & 571 \\
\hline
\textbf{Undeclared variables - Multiple functions} & N/A & N/A & 179(91.7\%) & 16 & 172(88.2\%) & 23 & 195 \\
\hline
\textbf{Undeclared Arrays - Main functions} & N/A & N/A & 90(96.8\%) & 3 & 90(96.8\%) & 3 & 93 \\
\hline
\textbf{Undeclared Arrays - Multiple functions} & N/A & N/A & 22(78.5\%) & 6 & 22(78.5\%) & 6 & 28 \\
\hline
\end{tabular}
\caption{\textbf{Analysis results of both the undeclared variables and arrays}}
\end{table*}
\noindent
\\
Table 1 shows the results of the analysis obtained after manually compiling the programs. The first row covers all 1059 programs containing undeclared variables and arrays: our approach located and identified the undeclared identifiers in 887 (83.7\%) of them, identified them correctly in 857 (80.9\%), and completed the repair, by correctly locating them as well as inferring and binding their types, in 844 (79.7\%). The "Identified" and "Not Identified" columns are marked not applicable (N/A) in the remaining rows, since those rows break down only the 887 programs in which undeclared identifiers were identified and located. The second row shows the results for programs with only undeclared variables and a single main function (571 of the 887). The third row shows the results for programs with undeclared variables and two or more functions including main (195 of the 887). The fourth row shows the results for programs whose errors are caused only by undeclared arrays and that have a single main function (93 of the 887). Finally, the last row shows the results for programs with undeclared arrays and two or more functions including main (28 of the 887). Table 2 summarizes, row by row, the various cases through which the type binding is performed before compile time, together with a brief description of each.
\noindent
\\
\\
\begin{table*}
\begin{tabular}{ | p{1.6cm} | p{14cm} | }
\hline
\textbf{Cases} & \textbf{Brief Description} \\
\hline
Case 1 & Assignment expression statement with a constant on the right-hand side of the expression \\
\hline
Case 2 & Assignment expression statement with an array identifier on the right-hand side and an identifier other than array identifier on the left-hand side of the expression \\
\hline
Case 3 & Conditional expression statement with a binary operation between the identifier variables \\
\hline
Case 4 & Assignment expression statement with an identifier other than array identifier on the right-hand side of the expression \\
\hline
Case 5 & Assignment expression statement with a binary operation between the identifier variables \\
\hline
Case 6 & Assignment expression statement with an identifier other than arrays on the right-hand side and an array identifier on the left-hand side of the expression \\
\hline
Case 7 & Binary operation between identifier variables in a loop expression statement \\
\hline
Case 8 & Conditional expression statement with a binary operation between an identifier and a function call expression \\
\hline
Case 9 & Assignment expression statement with a binary operation between an identifier and a function call expression\\
\hline
\end{tabular}
\caption{\textbf{Summary of Type Binding Case Description}}
\end{table*}
\begin{minipage}{.4\textwidth}
\begin{lstlisting}[xleftmargin=.1\textwidth, frame=tlrb,linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=15em,numbersep=8pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> J;
<@\textcolor{blue}{int}@> n;
<@\textcolor{blue}{int}@> i;
<@\textcolor{blue}{int}@> j;
<@\textcolor{blue}{int}@> flag = 0;
scanf(
<@\textcolor{blue}{int}@> a[51];
<@\textcolor{blue}{for}@> (i = 0; i < n; i++)
{
scanf(
}
<@\textcolor{blue}{for}@> (i = 0; i < n; i++)
{
<@\textcolor{blue}{for}@> (j = 0; j < n; J++)
{
<@\textcolor{blue}{if}@> (a[i] == a[j])
{
printf("YES");
flag = 1;
break;
}
}
}
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}\hfill
\begin{minipage}{.5\textwidth}
\begin{lstlisting}[frame=tlrb,linebackgroundcolor={%
\ifnum\value{lstnumber}=3
\color{green!40}
\fi},linebackgroundwidth=14.5em,numbersep=8pt,basicstyle=\small]
<@\textcolor{blue}{int}@> main()
{
<@\textcolor{blue}{int}@> l;
<@\textcolor{blue}{double}@> a;
<@\textcolor{blue}{double}@> b;
<@\textcolor{blue}{double}@> k;
<@\textcolor{blue}{double}@> p;
<@\textcolor{blue}{int}@> n;
scanf(
k = ((a - b) * 1.0) / n;
<@\textcolor{blue}{for}@> (l = 1; l <= n; l++)
{
<@\textcolor{blue}{if}@>((l * k)<(-1))
p+= k;
<@\textcolor{blue}{if}@>((l*k>=-1) && (l*k<= 1))
p=p + (((l * k)*(l * k))*k);
<@\textcolor{blue}{if}@>((l * k) > 1)
p=p + ((l * k)*(l * k))*(l*k)*k;
}
printf(
<@\textcolor{blue}{return}@> 0;
}
\end{lstlisting}
\end{minipage}
\captionof{figure}{The picture on the left illustrates a repair that caused an infinite loop, due to the variable "J" being incremented in the loop statement; the right side depicts a repair in which the type \texttt{int} was bound instead of \texttt{double}}
\section{Discussion}
There are a few limitations to our approach. Fixing undeclared variables that stem from imperceptible spelling mistakes, or from a variable that is used only once throughout the program, may cause the program to run into an infinite loop or lead to run-time errors. As seen on the left side of Figure 16, instead of incrementing the variable "j" inside the \texttt{for} loop, the programmer used "J", which causes the repaired program to run into an infinite loop. Another major limitation lies in the type binding approach: the right side of Figure 16 shows an example where the type has been wrongly bound to the variable "l". The variable "l" is assigned the constant "1" in the \texttt{for} loop expression, and it is also used in expressions inside the loop body; in the statement \textbf{if((l*k) $<$ -1)}, the variable "k" is of type \texttt{double} and "*" is a binary operation, so "l" should have been assigned the type \texttt{double}, but it is inferred as an \texttt{integer} type due to the former case.
\\
\\
We have seen that in our model a vocabulary in the form of a hash table is used for training. The purpose of training neural networks on the hash table is that they can recognize input patterns (keys) in the hash table and predict the corresponding sequences (values). Consider the case where an input pattern is not present in the hash table and we need to predict the sequence: a hash table would return null in this case, whereas the neural network gives the closest sequence prediction.
\\
\\
The benefit of our approach lies in the fact that our model could be used in real time as a tool in any C programming environment, in online or offline editors, for locating, reporting and repairing undeclared identifiers in C programs. Additionally, our model can be used when positive (bug-free, syntactically correct and executable) reference programs are lacking for the buggy source programs. Moreover, our type binding approach is applicable even to declared variables.
\\
\section{Conclusion and Future Work}
In this paper, we have examined different cases of one of the most common semantic errors: undeclared variables. We combined AST and LSTM approaches to extract a set of non-terminal and terminal nodes and to carry out the classification and prediction tasks for the undeclared variables. We have also shown the generation of clean, bug-free source programs by performing AST transformations together with serialization and deserialization between ASTs and JSON. Furthermore, we have coined a new term, Pre-Compile Time Type Binding, by which we fix the types of undeclared variables, binding their corresponding types before the program is handed to the compiler. With our approach, we correctly identified the undeclared identifier errors in 81$\%$ of the programs containing only such errors, and fixed them by binding their corresponding types in 80$\%$ of the programs.
\\
\\
In the future, we would like to perform automatic repair of further types of syntactic, semantic and logical errors. We also plan to extend the type binding to the limitation cases of Figure 16, and to implement a repair approach for the logical errors that arise, after the repair of syntactic and semantic errors, from variables used only once in the program or from spelling mistakes.
\bibliographystyle{unsrt}
From physical sciences to social sciences, many phenomena are modeled by noisy dynamical systems. In many such systems, several widely separated time scales are present. The system obtained in the homogenization limit, in which the fast time scales go to zero, is simpler than the original one, while often retaining the essential features of its dynamics \cite{majda2001mathematical, givon2004extracting,Pavliotis-TwoFast,Pavliotis}. On the other hand, the different fast time scales may compete and this competition is reflected in the homogenized equations.
Of particular interest is the model of a Brownian particle interacting with the environment \cite{nelson1967dynamical}. The usual model for such a system neglects memory effects, representing the interaction of the particle with the environment as a sum of an instantaneous damping force and a white noise. Although such an idealized model generally gives a good approximate description of the dynamics of the particle, there are situations where the memory effects play an important role, for instance when the particle is subject to a hydrodynamic interaction \cite{franosch2011resonances}, or when the particle is an atom embedded in a condensed-matter heat bath \cite{Groblacher2015}.
In this paper, we study a class of generalized Langevin equations (GLEs), with state-dependent drift and diffusion coefficients, driven by colored noise. They provide a realistic description of the dynamics of a classical Brownian particle in an inhomogeneous environment; their solutions are not Markov processes. We are interested in the limiting behavior of the particle when the characteristic time scales become small and in how competition of the time scales, as well as inhomogeneity of the environment, impact its limiting dynamics. The main mathematical result of this paper is Theorem \ref{general_result}, in which we derive the homogenization limit for a general class of non-Markovian systems. Special cases are studied in some details to obtain more explicit results. Their physical relevance is illustrated by an application to thermophoresis models.
The paper is organized as follows. In Section \ref{nmle}, we introduce and discuss a class of GLEs, as well as its two sub-classes, to be studied in this paper. In Section \ref{skrevisited}, we revisit the Smoluchowski-Kramers limit for a class of SDEs with state-dependent drift and diffusion coefficients, under a weaker assumption on the spectrum of the damping matrix than that used in earlier work \cite{hottovy2015smoluchowski}. Using this result of Section \ref{skrevisited}, we study homogenization for the GLEs in Section \ref{general_homog}. We specialize the study to the two sub-classes of models in Section \ref{homog}. In Section \ref{sec:thermophoresis}, we apply the results obtained in the previous sections to study the thermophoresis of a Brownian particle in a non-equilibrium heat bath. We end the paper by giving the conclusions and final remarks in Section \ref{conc}. The appendices provide some technical results used in the main paper, as well as physical motivation for the form of the GLEs studied here. In Appendix \ref{appA} we provide a variant of a (heuristic) derivation of the equations studied in this paper from Hamiltonian model of a particle interacting with a system of harmonic oscillators. Appendix \ref{proof_sketch} contains a sketch of the proof of Theorem \ref{skthm}.
\section{Generalized Langevin Equations (GLEs)} \label{nmle}
\subsection{GLEs as Non-Markovian Models} \label{nmleA}
We consider a class of non-Markovian Langevin equations, with state-dependent coefficients, that describe the dynamics of a particle moving in a force field and interacting with the environment. Let $\boldsymbol{x}_{t} \in \RR^{d}$, $t \geq 0$, be the position of the particle. The evolution of position, $\boldsymbol{x}_{t}$, is given by the solution to the following stochastic integro-differential equation (SIDE):
\begin{equation} \label{genle}
m \ddot{\boldsymbol{x}}_{t} = \boldsymbol{F}(\boldsymbol{x}_{t}) - \boldsymbol{g}(\boldsymbol{x}_{t}) \int_{0}^{t} \boldsymbol{\kappa}(t-s) \boldsymbol{h}(\boldsymbol{x}_{s}) \dot{\boldsymbol{x}}_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}_{t}) \boldsymbol{\xi}_{t},
\end{equation}
with the initial conditions (here the initial time is chosen to be $t=0$):
\begin{equation}
\boldsymbol{x}_{0} = \boldsymbol{x}, \ \ \dot{\boldsymbol{x}}_{0} = \boldsymbol{v}.
\end{equation}
The initial conditions $\boldsymbol{x}$ and $\boldsymbol{v}$ are random variables independent of the process $\{\boldsymbol{\xi}_{t}: \ t \geq 0\}$. Our motivation to study the SIDE \eqref{genle} is that study of microscopic dynamics leads naturally to equations of this form (see Appendix \ref{appA}).
Here and throughout the paper, overdot denotes derivative with respect to time $t$, the superscript $^*$ denotes conjugate transposition of matrices or vectors and $E$ denotes expectation. In the SIDE \eqref{genle}, $m > 0$ is the mass of the particle, the matrix-valued functions $\boldsymbol{g}: \RR^{d} \to \RR^{d \times q}$, $\boldsymbol{h} : \RR^{d} \to \RR^{q \times d}$ and $\boldsymbol{\sigma}: \RR^{r} \to \RR^{d \times r}$ are the state-dependent coefficients of the equation, and $\boldsymbol{F} :\RR^{d} \to \RR^{d}$ is a force field acting on the particle. Here $d$, $q$ and $r$ are, possibly distinct, positive integers. The second term on the right hand side of \eqref{genle} represents the drag experienced by the particle and the last term models the noise.
The matrix-valued function $\boldsymbol{\kappa}: \RR \to \RR^{q \times q}$ is a memory function which is {\it Bohl}, i.e. the matrix elements of $\boldsymbol{\kappa}(t)$ are finite linear combinations of the functions of the form $t^k e^{\alpha t} \cos(\omega t)$ and $t^k e^{\alpha t} \sin(\omega t)$, where $k$ is an integer and $\alpha$ and $\omega$ are real numbers. For properties of Bohl functions, we refer to Chapter 2 of \cite{trentelman2002control}. The noise process $\boldsymbol{\xi}_{t}$ is a $r$-dimensional mean zero stationary real-valued Gaussian vector process having a Bohl covariance function, $\boldsymbol{R}(t):=E \boldsymbol{\xi}_t \boldsymbol{\xi}_0^* = \boldsymbol{R}^*(-t) $, and, therefore, its spectral density, $\boldsymbol{S}(\omega) := \int_{-\infty}^{\infty} \boldsymbol{R}(t) e^{-i\omega t} dt$, is a rational function \cite{willems1980stochastic}.
The SIDE \eqref{genle} is a non-Markovian Langevin equation, since its solution at time $t$ depends on the entire past. Two of its terms are different than those in the usual Langevin equations. One of them is the drag term, which here involves an integral over the particle's past velocities with a memory kernel $\boldsymbol{\kappa}(t-s)$. It describes the state-dependent dissipation which comprises the back-action effects of the environment up to current time. The other term, involving a Gaussian colored noise $\boldsymbol{\xi}_{t}$, is a multiplicative noise term, also arising from interaction of the particle with the environment. Therefore, \eqref{genle} is a generalized Langevin equation (GLE), which in its most basic form was first introduced by Mori in \cite{mori1965transport} and subsequently used to model many systems in statistical physics \cite{Kubo_fd,toda2012statistical,goychuk2012viscoelastic}.
As remarked by van Kampen in \cite{van1998remarks}, ``Non-Markov is the rule, Markov is the exception". Therefore, it is not surprising that non-Markovian equations (including those of form \eqref{genle}) find numerous applications and thus have been studied widely in the mathematical, physical and engineering literature. See, for instance, \cite{luczka2005non, samorodnitsky1994stable} for surveys of non-Markovian processes, \cite{PhysRevB.89.134303,mckinley2009transient,adelman1976generalized} for physical applications and \cite{Ottobre} for asymptotic analysis.
Note that the Gaussian process $\boldsymbol{\xi}_t$ which drives the SIDE \eqref{genle} is not assumed to be Markov. The assumptions we made on its covariance will allow us to present it as a projection of a Markov process in a (typically higher-dimensional) space. This approach, which originated in stochastic control theory \cite{kalman1960new}, is called {\it stochastic realization}. We describe it in detail below.
Let $\boldsymbol{\Gamma}_1 \in \RR^{d_1 \times d_1}$, $\boldsymbol{M}_1 \in \RR^{d_1 \times d_1}$, $\boldsymbol{C}_1 \in \RR^{q \times d_1}$, $\boldsymbol{\Sigma}_1 \in \RR^{d_1 \times q_1}$, $\boldsymbol{\Gamma}_2 \in \RR^{d_2 \times d_2}$, $\boldsymbol{M}_2 \in \RR^{d_2 \times d_2}$, $\boldsymbol{C}_2 \in \RR^{r \times d_2}$, $\boldsymbol{\Sigma}_2 \in \RR^{d_2 \times q_2}$ be constant matrices, where $d_1,d_2,q_1,q_2$, $q$ and $r$ are positive integers.
In this paper, we study the class of SIDE \eqref{genle}, with the memory function defined in terms of the triple $(\boldsymbol{\Gamma}_1,\boldsymbol{M}_1,\boldsymbol{C}_1)$ of matrices as follows:
\begin{equation} \label{memory_realized}
\boldsymbol{\kappa}(t)=\boldsymbol{C}_1e^{-\boldsymbol{\Gamma_1}|t|}\boldsymbol{M}_1\boldsymbol{C}_1^*.
\end{equation}
The noise process is the mean zero, stationary Gaussian vector process, whose covariance will be expressed in terms of the triple $(\boldsymbol{\Gamma}_2,\boldsymbol{M}_2,\boldsymbol{C}_2)$. More precisely, we define it as:
\begin{equation} \label{noise}
\boldsymbol{\xi}_t = \boldsymbol{C}_2 \boldsymbol{\beta}_t,\end{equation}
where $\boldsymbol{\beta}_t$ is the solution to the It\^o SDE:
\begin{equation} \label{realize}
d\boldsymbol{\beta}_t = -\boldsymbol{\Gamma}_2\boldsymbol{\beta}_t dt + \boldsymbol{\Sigma}_2 d\boldsymbol{W}^{(q_2)}_t,
\end{equation}
with the initial condition, $\boldsymbol{\beta}_0$, normally distributed with zero mean and covariance $\boldsymbol{M}_2$. Here, $\boldsymbol{W}^{(q_2)}_t$ denotes a $q_2$-dimensional Wiener process and is independent of $\boldsymbol{\beta}$. Throughout the paper the dimension of the Wiener process will be specified by the superscript.
For $i=1,2$, the matrix $\boldsymbol{\Gamma}_i$ is {\it positive stable}, i.e. all its eigenvalues have positive real parts and $\boldsymbol{M}_i = \boldsymbol{M}_i^* > 0$ satisfies the following Lyapunov equation:
\begin{equation}
\boldsymbol{\Gamma}_i \boldsymbol{M}_i+\boldsymbol{M}_i \boldsymbol{\Gamma}_i^*=\boldsymbol{\Sigma}_i \boldsymbol{\Sigma}_i^*.
\end{equation}
It follows from positive stability of $\boldsymbol{\Gamma}_i$ that this equation indeed has a unique solution \cite{bellman1997introduction}.
The covariance matrix, $\boldsymbol{R}(t) \in \RR^{r \times r}$, of the noise process is therefore expressed in terms of the matrices $(\boldsymbol{\Gamma}_2,\boldsymbol{M}_2,\boldsymbol{C}_2)$ as follows:
\begin{equation} \label{cov}
\boldsymbol{R}(t)=\boldsymbol{C}_2e^{-\boldsymbol{\Gamma_2}|t|}\boldsymbol{M}_2\boldsymbol{C}_2^*,
\end{equation}
and therefore the triple $(\boldsymbol{\Gamma}_2,\boldsymbol{M}_2,\boldsymbol{C}_2)$ completely specifies the probability distribution of $\boldsymbol{\xi}_t$. It is worth mentioning that the triples that specify the memory function in \eqref{memory_realized} and the noise process in \eqref{noise} are only unique up to the following transformations:
\begin{equation} \label{transf_realize}
(\boldsymbol{\Gamma}'_i=\boldsymbol{T}_i \boldsymbol{\Gamma}_i \boldsymbol{T}^{-1}_i, \boldsymbol{M}_i' = \boldsymbol{T}_i \boldsymbol{M}_i \boldsymbol{T}_i^{*}, \boldsymbol{C}'_i = \boldsymbol{C}_i \boldsymbol{T}_i^{-1}),
\end{equation}
where $i=1,2$ and $\boldsymbol{T}_i$ is any invertible matrices of appropriate dimensions.
The triple $(\boldsymbol{\Gamma}_2,\boldsymbol{M}_2,\boldsymbol{C}_2)$ above is called a {\it (weak) stochastic realization} of the covariance matrix $\boldsymbol{R}(t)$ in the well established theory of stochastic realization, which is concerned with solving the inverse problem of stationary covariance generation (see \cite{lindquist1985realization,lindquist2015linear}). Any zero mean stationary Gaussian process, $\boldsymbol{\xi}'_t$, having a Bohl covariance function, can be realized as a projection of a Gaussian Markov process in the above way. Let us remark that Gaussian processes with Bohl covariance functions are precisely those with rational spectral density \cite{willems1980stochastic}.
Our approach allows us to consider the most general Gaussian noises that can be realized in a finite-dimensional state space in the above way (i.e. as a linear transformation of a Gaussian Markov process). In fact, the condition on the covariance function to have entries in the Bohl class is necessary and sufficient for solvability of the problem of stochastic realization of stationary Gaussian processes. We refer to the propositions and theorems on pages 303-308 of \cite{willems1980stochastic} for a brief exposition of stochastic realization problems.
\begin{remark} Physically, the choice of the matrices $\boldsymbol{\Gamma}_2,\boldsymbol{M}_2,\boldsymbol{C}_2$ specifies the characteristic time scales (eigenvalues of $\boldsymbol{\Gamma}_2^{-1}$) present in the environment, introduces the initial state of a stationary Markovian Gaussian noise and selects the parts of the prepared Markovian noise that are (partially) observed, respectively. In other words, we have assumed that the noise in the SIDE \eqref{genle} is realized or ``experimentally prepared" by the above triple of matrices.
\end{remark}
For our homogenization study of the equation \eqref{genle} we need the {\it effective damping constant},
\begin{equation} \label{eff_damping}
\boldsymbol{K}_1 := \int_0^{\infty} \boldsymbol{\kappa}(t) dt = \boldsymbol{C}_1 \boldsymbol{\Gamma}_1^{-1} \boldsymbol{M}_1 \boldsymbol{C}_1^* \in \RR^{q \times q},
\end{equation}
and the {\it effective diffusion constant},
\begin{equation} \label{eff_diff}
\boldsymbol{K}_2 := \int_0^{\infty} \boldsymbol{R}(t) dt = \boldsymbol{C}_2 \boldsymbol{\Gamma}_2^{-1} \boldsymbol{M}_2 \boldsymbol{C}_2^* \in \RR^{r \times r},
\end{equation}
to be invertible (see Section \ref{nmleB}). This is equivalent to the matrices $\boldsymbol{C}_i$ having full rank. Homogenization for a class of systems with vanishing effective damping and/or diffusion constant \cite{bao2005non} will be explored in our future work.
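As a numerical illustration, the Lyapunov equation for $\boldsymbol{M}_1$ and the effective damping constant $\boldsymbol{K}_1$ can be computed as in the following sketch, assuming SciPy; the matrices below are illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

Gamma1 = np.array([[2.0, 1.0], [0.0, 3.0]])  # positive stable
Sigma1 = np.array([[1.0, 0.0], [0.5, 1.0]])
C1     = np.array([[1.0, 0.0]])              # full rank (q = 1, d_1 = 2)

# Solve Gamma1 M1 + M1 Gamma1^* = Sigma1 Sigma1^* for M1
M1 = solve_continuous_lyapunov(Gamma1, Sigma1 @ Sigma1.T)

# Effective damping constant K1 = C1 Gamma1^{-1} M1 C1^*
K1 = C1 @ np.linalg.inv(Gamma1) @ M1 @ C1.T
assert np.linalg.matrix_rank(K1) == K1.shape[0]  # K1 is invertible
\end{verbatim}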
With the above definitions of memory kernel and noise process, the SIDE \eqref{genle} becomes:
\begin{equation} \label{genle_general}
m \ddot{\boldsymbol{x}}_{t} = \boldsymbol{F}(\boldsymbol{x}_{t}) - \boldsymbol{g}(\boldsymbol{x}_{t}) \int_{0}^{t} \boldsymbol{C}_1 e^{-\boldsymbol{\Gamma}_1(t-s)} \boldsymbol{M}_1\boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x}_{s}) \dot{\boldsymbol{x}}_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}_{t}) \boldsymbol{C}_2 \boldsymbol{\beta}_{t},
\end{equation}
where $\boldsymbol{\beta}_t$ is the solution to the SDE \eqref{realize}.
To illustrate the results of the general study in important special cases (which will also be used later in applications), we consider two representative classes of SIDE \eqref{genle}. The driving Gaussian colored noise is Markovian in the first class and non-Markovian in the second. We set $d=d_1=d_2=q_1=q_2=q=r$ in the following examples.
\begin{itemize}
\item[(i)] {\it Example of a SIDE driven by a Markovian colored noise.} The memory kernel is given by an exponential function, i.e.
\begin{equation}
\boldsymbol{\kappa}(t-s) = \boldsymbol{\kappa}_{1}(t-s) := \boldsymbol{A} e^{-\boldsymbol{A}|t-s|},\end{equation}
where $\boldsymbol{A} \in \RR^{d \times d}$ is a constant diagonal matrix with positive eigenvalues. The driving noise is the Ornstein-Uhlenbeck (OU) process, $\boldsymbol{\xi}_{t} = \boldsymbol{\eta}_{t} \in \RR^d$, i.e. a mean zero stationary Gaussian process which is the solution to the SDE:
\begin{equation} \label{ou}
d\boldsymbol{\eta}_{t} = -\boldsymbol{A} \boldsymbol{\eta}_{t} dt + \boldsymbol{A} d\boldsymbol{W}^{(d)}_{t}.
\end{equation}
In order for the process $\boldsymbol{\eta}_{t}$ to be stationary, the initial condition has to be distributed according to the (unique) stationary measure of the Markov process defined by the above equation, i.e. $\boldsymbol{\eta}_{0} = \boldsymbol{\eta}$ is normally distributed with zero mean and covariance $\boldsymbol{A}/2$. The mean and the covariance of $\boldsymbol{\eta}_{t}$ are given by:
\begin{equation} \label{o-u_stats}
E[\boldsymbol{\eta}_{t}] = 0, \ \ E[\boldsymbol{\eta}_{t} \boldsymbol{\eta}_{s}^{*}] = \frac{1}{2}\boldsymbol{\kappa}_{1}(t-s), \ \ s,t \geq 0.
\end{equation}
The resulting SIDE reads:
\begin{equation} \label{side2}
m \ddot{\boldsymbol{x}}_{t} = \boldsymbol{F}(\boldsymbol{x}_{t}) - \boldsymbol{g}(\boldsymbol{x}_{t})\int_{0}^{t} \boldsymbol{\kappa}_{1}(t-s) \boldsymbol{h}(\boldsymbol{x}_{s}) \dot{\boldsymbol{x}}_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}_{t}) \boldsymbol{\eta}_{t}.
\end{equation}
Let us note that Ornstein-Uhlenbeck processes are the only stationary, ergodic, Gaussian, Markov processes with continuous covariance functions \cite{pavliotis2014stochastic}. When all diagonal entries of $\boldsymbol{A}$ go to infinity, the OU process approaches the white noise. For details on OU processes, see for instance \cite{pavliotis2014stochastic} and Section 2 of \cite{hottovy2015small}. A short numerical sketch of this process is given right after this list.
\item[(ii)] {\it Example of a SIDE driven by a non-Markovian colored noise.} The memory kernel is given by an oscillatory function whose amplitude is exponentially decaying, i.e. $ \boldsymbol{\kappa}(t-s) := \boldsymbol{\kappa}_2(t-s)$, a diagonal matrix with the diagonal entries:
\begin{equation}\label{harmonic_memory}
(\boldsymbol{\kappa}_{2})_{ii}(t-s) :=
\begin{cases}
\frac{1}{\tau_{ii}} e^{-\omega_{ii}^2 \frac{|t-s|}{2 \tau_{ii}}}\left[\cos\left(\frac{\omega^0_{ii}}{\tau_{ii}} (t-s) \right) + \frac{\omega^1_{ii}}{2} \sin\left(\frac{\omega^0_{ii}}{\tau_{ii}} |t-s| \right) \right], & \text{if } |\omega_{ii}|<2 \\
\frac{1}{\tau_{ii}} e^{-\omega_{ii}^2 \frac{|t-s|}{2 \tau_{ii}}}\left[\cosh\left(\frac{\tilde{\omega}^0_{ii}}{\tau_{ii}}(t-s) \right) + \frac{\tilde{\omega}^1_{ii}}{2} \sinh\left(\frac{\tilde{\omega}^0_{ii}}{\tau_{ii}} |t-s| \right) \right], & \text{if } |\omega_{ii}|>2,
\end{cases}
\end{equation}
where, for $i=1,\dots,d$, $\tau_{ii}$ is a positive constant, $\omega_{ii}$ is a real constant, $\omega^0_{ii} := \omega_{ii}\sqrt{1-\omega_{ii}^2/4}$, $\tilde{\omega}^0_{ii} := \omega_{ii}\sqrt{\omega_{ii}^2/4-1}$, $\omega_{ii}^1 := \omega_{ii}/\sqrt{1-\omega_{ii}^2/4}$, and $\tilde{\omega}_{ii}^1 := \omega_{ii}/\sqrt{\omega_{ii}^2/4-1}$.
Let $\boldsymbol{\tau}$ be constant diagonal matrix with the positive eigenvalues $(\tau_{jj})_{j=1}^d$, $\boldsymbol{\Omega}$ be constant diagonal matrix with the real eigenvalues $(\omega_{jj})_{j=1}^{d}$, $\boldsymbol{\Omega}_{0}$ be constant $d \times d$ diagonal matrix with the eigenvalues $\omega_{jj}\sqrt{1-\omega_{jj}^2/4}$ (if $|\omega_{jj}| < 2$) and $i \omega_{jj}\sqrt{\omega_{jj}^2/4-1}$ (if $|\omega_{jj}|>2$), and $\boldsymbol{\Omega}_{1}$ be constant $d \times d$ diagonal matrix with the eigenvalues $\omega_{jj}/\sqrt{1-\omega_{jj}^2/4}$ (if $|\omega_{jj}|<2$) and $-i\omega_{jj}/\sqrt{\omega_{jj}^2/4-1}$ (if $|\omega_{jj}|>2$), where $i$ is the imaginary unit.
The driving noise is given by the harmonic noise process, $\boldsymbol{\xi}_{t}=\boldsymbol{h}_{t} \in \RR^d$, i.e. a mean zero stationary Gaussian process which is the solution to the SDE system:
\begin{align}
\boldsymbol{\tau} d\boldsymbol{h}_{t} &= \boldsymbol{u}_{t} dt, \label{unscaled_har1} \\
\boldsymbol{\tau} d\boldsymbol{u}_{t} &= -\boldsymbol{\Omega}^2 \boldsymbol{u}_{t} dt - \boldsymbol{\Omega}^2 \boldsymbol{h}_{t} dt + \boldsymbol{\Omega}^2 d\boldsymbol{W}^{(d)}_{t}, \label{unscaled_har2}
\end{align}
with the initial conditions, $\boldsymbol{h}_0$ and $\boldsymbol{u}_0$, distributed according to the (unique) stationary measure of the above SDE system. The mean and the covariance of $\boldsymbol{h}_{t}$ are given by:
\begin{equation}
E[\boldsymbol{h}_{t}] = \boldsymbol{0}, \ \ E[\boldsymbol{h}_{t} \boldsymbol{h}_{s}^{*}] = \frac{1}{2} \boldsymbol{\kappa}_{2}(t-s),\ \ s, t \geq 0.\end{equation}
Note that $\boldsymbol{h}_t$ is not a Markov process (but the process $(\boldsymbol{h}_t, \boldsymbol{u}_t)$ is).
The resulting SIDE reads:
\begin{equation} \label{side3}
m \ddot{\boldsymbol{x}}_{t} = \boldsymbol{F}(\boldsymbol{x}_{t}) - \boldsymbol{g}(\boldsymbol{x}_{t})\int_{0}^{t} \boldsymbol{\kappa}_{2}(t-s) \boldsymbol{h}(\boldsymbol{x}_{s}) \dot{\boldsymbol{x}}_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}_{t}) \boldsymbol{h}_{t}.
\end{equation}
The harmonic noise is an approximation of the white noise, smoother than the Ornstein-Uhlenbeck process. It can be shown that in the limit $\omega_{ii} \to \infty$ (for all $i$) the process $\boldsymbol{h}_{t}$ converges to the Ornstein-Uhlenbeck process whose $i$th component process has correlation time $\tau_{ii}$, whereas in the limit $\tau_{ii} \to 0$ (for all $i$) the process $\boldsymbol{h}_{t}$ converges to the white noise. For detailed properties of harmonic noise process, see for instance \cite{schimansky1990harmonic} or the Appendix in \cite{McDaniel14}. We remark that the harmonic noise is one of the simplest examples of a non-Markovian process and its use as the driving noise in the SIDE \eqref{genle} is a natural choice that models the environment as a bath of damped harmonic oscillators \cite{hanggi1993can}.
\end{itemize}
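The following is a minimal Euler--Maruyama sketch of the Ornstein-Uhlenbeck noise of example (i), assuming NumPy; the diagonal matrix $\boldsymbol{A}$ below is an illustrative choice, and the initial condition is drawn from the stationary Gaussian measure with covariance $\boldsymbol{A}/2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 4.0])                    # diagonal, positive eigenvalues
d, dt, nsteps = 2, 1e-3, 10_000

eta = rng.multivariate_normal(np.zeros(d), A / 2)  # stationary initial state
path = np.empty((nsteps, d))
for n in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal(d)
    eta = eta - A @ eta * dt + A @ dW              # d eta = -A eta dt + A dW
    path[n] = eta
\end{verbatim}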
\begin{remark} \label{dimofnoise} Note that in the SIDEs for the above two sub-classes, the dimension of the driving Wiener process, $\boldsymbol{W}_t^{(d)}$, is the same as that of the colored noise processes $\boldsymbol{\eta}_t$ and $\boldsymbol{h}_t$, as well as the processes, $\boldsymbol{x}_t$ and $\boldsymbol{v}_t$. One could as well consider realizing the noise processes using a driving Wiener process of different dimension. Our choice of working with the same dimensions is for the sake of convenience as it will help to simplify the exposition.
\end{remark}
\begin{remark}
Without loss of generality (due to \eqref{transf_realize}), we have taken $\boldsymbol{A}$ and $\boldsymbol{\Omega}$ to be diagonal.
\end{remark}
\begin{remark}
In cases of particular interest in statistical physics, the triples $(\boldsymbol{\Gamma}_i, \boldsymbol{M}_i, \boldsymbol{C}_i)$ coincide, up to the transformations in \eqref{transf_realize}, for $i=1,2$; $\boldsymbol{h} = \boldsymbol{g}^*$ and $\boldsymbol{g}$ is proportional to $\boldsymbol{\sigma}$, with the proportionality factor equals $k_B T$, where $k_B$ denotes the Boltzmann constant and $T > 0$ is temperature of the environment (see Appendix \ref{appA}). In this case, we have $d_1=d_2$ and $q=r$. In particular, for the two sub-classes above we have
\begin{equation} \label{fdt_sc1}
E[\boldsymbol{\eta}_{t}^{0} (\boldsymbol{\eta}_{s}^{0})^{*}] = \frac{1}{2} \boldsymbol{\kappa}_1(t-s)
\end{equation}
for the first sub-class and
\begin{equation}
E[\boldsymbol{h}_{t}^{0} (\boldsymbol{h}_{s}^{0})^{*}] = \frac{1}{2} \boldsymbol{\kappa}_{2}(t-s)
\end{equation}
for the second sub-class.
In such cases, the SIDEs describe a particle interacting with an equilibrium heat bath at a temperature $T$, whose dynamics satisfy the fluctuation-dissipation relation \cite{toda2012statistical,zwanzig2001nonequilibrium}.
\end{remark}
\subsection{Homogenization of SIDEs: Discussion and Statement of the Problem} \label{nmleB}
There are three characteristic time scales defining the non-Markovian dynamics described by the SIDE \eqref{genle}:
\begin{itemize}
\item[(i)] the inertial time scale, $\lambda_{m}$, proportional to $m$, whose physical significance is the relaxation time of the particle velocity process $\boldsymbol{v}_{t} := \dot{\boldsymbol{x}}_{t}$. The limit $\lambda_{m} \to 0$ is equivalent to the limit $m \to 0$;
\item[(ii)] the memory time scale, $\lambda_{\kappa}$, defined as the inverse rate of exponential decay of the memory kernel $\boldsymbol{\kappa}(t-s)$;
\item[(iii)] the noise correlation time scale, $\lambda_{\xi}$.
\end{itemize}
For the purpose of general multiscale analysis, we set $m = m_{0} \epsilon^{\mu}$, $\lambda_{\kappa} = \tau_{\kappa} \epsilon^{\theta}$ and $\lambda_{\xi} = \tau_{\xi} \epsilon^{\nu}$,
where $\epsilon > 0$ is a parameter which will be taken to zero, $m_0$, $\tau_\kappa$, $\tau_\xi$ are (fixed) proportionality constants, and $\mu, \theta, \nu$ are positive constants (exponents), specifying the orders at which the time scales $\lambda_{m}, \lambda_{\kappa}, \lambda_{\xi}$ vanish respectively. We consider a family of SIDEs, parametrized by $\epsilon$, with the inertial time scale $\lambda_m$ proportional to $m_0 \epsilon^\mu$, memory time scale $\lambda_{\kappa} = \tau_\kappa \epsilon^\theta$ and noise correlation time scale $\lambda_{\xi} = \tau_{\xi} \epsilon^{\nu}$, to be defined in the following.
We replace $m$ with $m_0 \epsilon^\mu$, $\boldsymbol{\Gamma}_1$ with $\boldsymbol{\Gamma}_1/(\tau_{\kappa} \epsilon^{\theta})$, $\boldsymbol{M}_1$ with $\boldsymbol{M}_1/(\tau_{\kappa} \epsilon^{\theta})$, and $\boldsymbol{x}_t$ with $\boldsymbol{x}_t^\epsilon$ in \eqref{genle_general}. Also, we substitute $\boldsymbol{\Gamma}_2$ with $\boldsymbol{\Gamma}_2/(\tau_{\xi} \epsilon^{\nu})$, $\boldsymbol{\Sigma}_2$ with $\boldsymbol{\Sigma}_2/(\tau_{\xi} \epsilon^{\nu})$, and $\boldsymbol{\beta}_t$ with $\boldsymbol{\beta}_t^\epsilon$ in \eqref{realize}. The SIDE \eqref{genle_general} then becomes:
\begin{equation} \label{general_side_rescaled}
m_0 \epsilon^{\mu} \ddot{\boldsymbol{x}}^{\epsilon}_{t} = \boldsymbol{F}(\boldsymbol{x}^\epsilon_{t}) - \frac{\boldsymbol{g}(\boldsymbol{x}^\epsilon_{t})}{\tau_{\kappa} \epsilon^{\theta}} \int_{0}^{t} \boldsymbol{C}_1e^{-\frac{\boldsymbol{\Gamma}_1}{\tau_{\kappa}\epsilon^{\theta}}(t-s)} \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x}^\epsilon_{s}) \dot{\boldsymbol{x}}^{\epsilon}_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{C}_2 \boldsymbol{\beta}^\epsilon_{t},
\end{equation}
with the initial conditions, $\boldsymbol{x}^\epsilon_0 = \boldsymbol{x}$ and $\boldsymbol{v}^\epsilon_0 = \boldsymbol{v}$, where $\boldsymbol{\beta}^\epsilon_t$ is a process, with correlation time $\tau_{\xi} \epsilon^\nu$, satisfying the SDE:
\begin{equation} \label{general_rescaled_ou}
d\boldsymbol{\beta}^\epsilon_t = -\frac{\boldsymbol{\Gamma}_2}{\tau_{\xi} \epsilon^\nu} \boldsymbol{\beta}^\epsilon_t dt + \frac{\boldsymbol{\Sigma}_2}{\tau_{\xi} \epsilon^\nu} d\boldsymbol{W}^{(q_2)}_t,
\end{equation}
with the initial condition, $\boldsymbol{\beta}^\epsilon_0$, normally distributed with zero mean and covariance of $\boldsymbol{M}_2/(\tau_{\xi} \epsilon^\nu)$.
We will also perform similar analysis on the two sub-classes of SIDE, in which case:
\begin{itemize}
\item[(i)] the SIDE \eqref{side2} becomes (with $m:=m_0 \epsilon^\mu$, $\boldsymbol{A}$ in the formula for $\boldsymbol{\kappa}_{1}$ replaced by $\boldsymbol{A}/(\tau_{\kappa} \epsilon^{\theta})$, $\boldsymbol{A}$ in $\eqref{ou}$ replaced by $\boldsymbol{A}/(\tau_{\eta}\epsilon^{\nu})$, $\boldsymbol{x}_t$ replaced by $\boldsymbol{x}_t^\epsilon$ and $\boldsymbol{\eta}_{t}$ replaced by $\boldsymbol{\eta}_{t}^\epsilon$):
\begin{equation} \label{goal2}
m_{0} \epsilon^{\mu} \ddot{\boldsymbol{x}}^{\epsilon}_{t} = \boldsymbol{F}(\boldsymbol{x}^\epsilon_{t}) - \frac{\boldsymbol{g}(\boldsymbol{x}^\epsilon_{t})}{\tau_{\kappa} \epsilon^{\theta}} \int_{0}^{t} \boldsymbol{A} e^{-\frac{\boldsymbol{A}}{\tau_{\kappa} \epsilon^{\theta}} (t-s)} \boldsymbol{h}(\boldsymbol{x}^\epsilon_{s}) \dot{\boldsymbol{x}}^{\epsilon}_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{\eta}^\epsilon_{t},
\end{equation}
where $\boldsymbol{\eta}^\epsilon_{t}$ is the Ornstein-Uhlenbeck process with the correlation time $\tau_{\eta} \epsilon^{\nu}$, i.e. it is a process satisfying the SDE:
\begin{equation} \label{rescaled-ou}
d\boldsymbol{\eta}^\epsilon_{t} = -\frac{\boldsymbol{A}}{\tau_{\eta} \epsilon^{\nu}} \boldsymbol{\eta}^\epsilon_{t} dt + \frac{\boldsymbol{A}}{\tau_{\eta} \epsilon^{\nu}} d \boldsymbol{W}^{(d)}_{t}.
\end{equation}
\item[(ii)] the SIDE \eqref{side3} becomes (with $m:=m_0 \epsilon^\mu$, $\tau_{ii} := \tau_{\kappa} \epsilon^{\theta}$ in the formula for ($\boldsymbol{\kappa}_{2})_{ii}$ in \eqref{harmonic_memory}, $\boldsymbol{x}_t$ replaced by $\boldsymbol{x}^\epsilon_t$, $\boldsymbol{h}_{t}$, $\boldsymbol{u}_{t}$ replaced by $\boldsymbol{h}^\epsilon_{t}$, $\boldsymbol{u}^\epsilon_{t}$ respectively and $\boldsymbol{\tau} := \tau_{h} \epsilon^{\nu} \boldsymbol{I}$ in $\eqref{unscaled_har1}$-$\eqref{unscaled_har2}$):
\begin{align}
&m_{0} \epsilon^{\mu} \ddot{\boldsymbol{x}}^\epsilon_{t} \nonumber \\
&= \boldsymbol{F}(\boldsymbol{x}^\epsilon_{t})
- \frac{\boldsymbol{g}(\boldsymbol{x}^\epsilon_{t})}{\tau_{\kappa} \epsilon^{\theta}} \int_{0}^{t} e^{-\boldsymbol{\Omega}^2\frac{(t-s)}{2\tau_{\kappa}\epsilon^{\theta}}}\left[\cos\left(\frac{\boldsymbol{\Omega}_{0}}{\tau_{\kappa}\epsilon^{\theta}}(t-s) \right) + \frac{\boldsymbol{\Omega}_{1}}{2} \sin\left(\frac{\boldsymbol{\Omega}_{0}}{\tau_{\kappa}\epsilon^{\theta}}(t-s) \right) \right] \boldsymbol{h}(\boldsymbol{x}^\epsilon_{s}) \dot{\boldsymbol{x}}^{\epsilon}_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{h}^\epsilon_{t}, \label{goal3}
\end{align}
where $\boldsymbol{h}^\epsilon_{t}$ is the harmonic noise process with the correlation time $\tau_{h} \epsilon^{\nu}$, i.e. it is a process satisfying the SDE system:
\begin{align}
d\boldsymbol{h}^\epsilon_{t} &= \frac{1}{\tau_{h}\epsilon^{\nu}} \boldsymbol{u}^\epsilon_{t} dt, \label{rescaled_h1} \\
d\boldsymbol{u}^\epsilon_{t} &= -\frac{\boldsymbol{\Omega}^2}{\tau_{h} \epsilon^{\nu}} \boldsymbol{u}^\epsilon_{t} dt - \frac{\boldsymbol{\Omega}^2}{\tau_{h} \epsilon^{\nu}} \boldsymbol{h}^\epsilon_{t} dt + \frac{\boldsymbol{\Omega}^2}{\tau_{h} \epsilon^{\nu}} d\boldsymbol{W}^{(d)}_{t}. \label{rescaled_h2}
\end{align}
Both SIDEs have the initial conditions $\boldsymbol{x}^\epsilon_{0} = \boldsymbol{x}, \ \dot{\boldsymbol{x}}^\epsilon_{0} = \boldsymbol{v}$. The initial conditions $\boldsymbol{\eta}^\epsilon_{0}$ (respectively, $\boldsymbol{h}^\epsilon_{0}$ and $\boldsymbol{u}^\epsilon_{0}$) are distributed according to the stationary measure of the SDE that the process $\boldsymbol{\eta}^\epsilon_{t}$ (respectively, $\boldsymbol{h}^\epsilon_{t}$ and $\boldsymbol{u}^\epsilon_{t}$) satisfies.
\end{itemize}
In this paper we set $\mu = \theta = \nu$, which is the case when all the characteristic time scales are of comparable magnitude in the limit as $\epsilon \to 0$. Our main goal is to derive a limiting equation for the (slow) $\boldsymbol{x}^\epsilon$-component of the process solving the equations \eqref{general_side_rescaled}-\eqref{general_rescaled_ou}, including the special cases $\eqref{goal2}$-$\eqref{rescaled-ou}$ and $\eqref{goal3}$-$\eqref{rescaled_h2}$, in the limit as $\epsilon \to 0$, in a strong pathwise sense.
We explain the motivation behind the above rescalings. The rescaling of the memory kernels $\boldsymbol{\kappa}(t-s)$, $\boldsymbol{\kappa}_{1}(t-s)$, $\boldsymbol{\kappa}_{2}(t-s)$ is such that in the limit $\epsilon \to 0$ the rescaled memory kernels converge formally to $\boldsymbol{K}_1 \delta(t)$, where $\delta(t)$ is the Dirac delta function and $\boldsymbol{K}_1$ is the effective damping constant defined in \eqref{eff_damping}. On the other hand, the noise processes $\boldsymbol{\xi}^\epsilon_t = \boldsymbol{C}_2 \boldsymbol{\beta}^\epsilon_{t}$, $\boldsymbol{\eta}^\epsilon_{t}$ and $\boldsymbol{h}^\epsilon_{t}$ converge to white noise processes in the limit $\epsilon \to 0$.
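These formal statements are easy to probe numerically. The following Python script is an illustration only and plays no role in the analysis; it treats the scalar case $d = d_1 = q_1 = 1$ with $\boldsymbol{C}_1 = 1$ and $\boldsymbol{\Gamma}_1 = \boldsymbol{M}_1 = \alpha$, so that $\boldsymbol{K}_1 = 1$, and all parameter values in it are arbitrary. Part (a) integrates the rescaled kernel against a smooth test function; part (b) checks that the time integral of the rescaled Ornstein-Uhlenbeck noise over $[0,1]$ has variance close to $1$, as a Brownian increment would.
\begin{verbatim}
import numpy as np

# Illustration only: scalar case with C_1 = 1 and Gamma_1 = M_1 = alpha,
# so that K_1 = C_1 Gamma_1^{-1} M_1 C_1^* = 1.  Parameter values arbitrary.
rng = np.random.default_rng(1)
alpha, tau = 2.0, 0.5

# (a) the rescaled kernel (alpha/(tau*eps)) exp(-alpha*t/(tau*eps)) acts on
#     a smooth test function phi like K_1*delta(t): the integral -> phi(0).
phi = lambda t: np.cos(3.0 * t) * np.exp(-t)          # phi(0) = 1
for eps in [1.0, 0.1, 0.01]:
    t, dt = np.linspace(0.0, 30.0 * tau * eps / alpha, 100_001, retstep=True)
    kernel = (alpha / (tau * eps)) * np.exp(-alpha * t / (tau * eps))
    print(f"eps = {eps}: integral = {np.sum(kernel * phi(t)) * dt:.4f}")

# (b) the time integral of the rescaled OU noise over [0,1] behaves like a
#     Brownian increment: its variance across paths tends to 1 as eps -> 0.
T, dt, M = 1.0, 1e-4, 5000                            # M independent paths
for eps in [1.0, 0.1, 0.01]:
    c = alpha / (tau * eps)
    eta = rng.normal(0.0, np.sqrt(c / 2.0), size=M)   # stationary initial law
    integral = np.zeros(M)
    for _ in range(int(T / dt)):                      # Euler-Maruyama steps
        integral += eta * dt
        eta += -c * eta * dt + c * rng.normal(0.0, np.sqrt(dt), size=M)
    print(f"eps = {eps}: Var[int_0^1 eta dt] = {integral.var():.4f}")
\end{verbatim}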
\section{Smoluchowski-Kramers Limit of SDEs Revisited} \label{skrevisited}
Let $(\boldsymbol{x}_{t}^m, \boldsymbol{v}^m_{t}) \in \RR^{n} \times \RR^n$, where $t \in [0,T]$, be a family of solutions (parametrized by a positive constant $m$) to the following SDEs:
\begin{align}
d\boldsymbol{x}^m_{t} &= \boldsymbol{v}^m_{t} dt, \label{gsk1} \\
m d\boldsymbol{v}^m_{t} &= \boldsymbol{F}(\boldsymbol{x}^m_{t}) dt -\boldsymbol{\gamma}(\boldsymbol{x}^m_{t}) \boldsymbol{v}^m_{t} dt + \boldsymbol{\sigma}(\boldsymbol{x}^m_{t}) d\boldsymbol{W}^{(k)}_{t}.\label{gsk2}
\end{align}
In the SDEs above, $m > 0$ is the mass of the particle, $\boldsymbol{F}: \RR^{n} \to \RR^{n}$, $\boldsymbol{\gamma}: \RR^{n} \to \RR^{n \times n}$, $\boldsymbol{\sigma}: \RR^{n} \to \RR^{n \times k}$, and $\boldsymbol{W}^{(k)}$ is a $k$-dimensional Wiener process on the filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_{t}, \mathbb{P})$ satisfying the usual conditions \citep{karatzas2012Brownian}.
The initial conditions are given by $\boldsymbol{x}^m_{0} = \boldsymbol{x}, \ \boldsymbol{v}^m_{0} = \boldsymbol{v}^m$. The above SDE system models diffusive phenomena in cases where the damping coefficient $\boldsymbol{\gamma}$ and diffusion coefficient $\boldsymbol{\sigma}$ are state-dependent.
The Smoluchowski-Kramers limit (or the small mass limit) of the system \eqref{gsk1}-\eqref{gsk2} has been studied in \citep{hottovy2015smoluchowski, Hottovy12, Herzog2016, birrell2017small,birrell2017homogenization}. The main result in \citep{birrell2017homogenization} says that, under certain assumptions, the $\boldsymbol{x}^m$-component of the solution to \eqref{gsk1}-\eqref{gsk2} converges (in a strong pathwise sense), as $m \to 0$, to the solution of a homogenized SDE that contains in particular the so-called noise-induced drift, that was not present in the pre-limit SDEs (see Theorem \ref{skthm} for a precise statement). The presence of such noise-induced drift in the homogenized equation is a consequence of the state-dependence of the damping coefficient (and therefore also the diffusion coefficient if the system satisfies a fluctuation-dissipation relation). For an overview of the noise-induced drift phenomena, we refer to the review article \citep{volpe2016effective}.
In all the works mentioned previously, the spectral assumption made on the matrix $\boldsymbol{\gamma}$ was that the symmetrized damping matrix $\frac{1}{2}(\boldsymbol{\gamma} + \boldsymbol{\gamma}^{*})$ is uniformly positive definite (i.e. its smallest eigenvalue is positive uniformly in $\boldsymbol{x}$). The same results can be obtained under the weaker assumption that the matrix $\boldsymbol{\gamma}$ is {\it uniformly positive stable}, i.e. all real parts of the eigenvalues of $\boldsymbol{\gamma}$ are positive uniformly in $\boldsymbol{x}$ \citep{horn1994topics}. \\
\noindent {\bf Notation.} Here and in the following, we use Einstein summation convention on repeated indices. The Euclidean norm of a vector $\boldsymbol{w}$ is denoted by $| \boldsymbol{w} |$ and the (induced operator) norm of a matrix $\boldsymbol{A}$ by $\| \boldsymbol{A} \|$. For an $\RR^{n_2 \times n_3}$-valued function $\boldsymbol{f}(\boldsymbol{y}):=([f]_{jk}(\boldsymbol{y}))_{j=1,\dots,n_2; k=1,\dots, n_3}$, $\boldsymbol{y} := ([y]_1, \dots, [y]_{n_1}) \in \RR^{n_1}$, we denote by $(\boldsymbol{f})_{\boldsymbol{y}}(\boldsymbol{y})$ the $n_1 n_2 \times n_3$ matrix:
\begin{equation}
(\boldsymbol{f})_{\boldsymbol{y}}(\boldsymbol{y}) = (\boldsymbol{\nabla}_{\boldsymbol{y}}[f]_{jk}(\boldsymbol{y}))_{j=1,\dots, n_2; k=1,\dots,n_3},
\end{equation}
where $\boldsymbol{\nabla}_{\boldsymbol{y}}[f]_{jk}(\boldsymbol{y})$ denotes the gradient vector $(\frac{\partial [f]_{jk}(\boldsymbol{y})}{\partial [y]_1}, \dots, \frac{\partial [f]_{jk}(\boldsymbol{y})}{\partial [y]_{n_1}}) \in \RR^{n_1}$ for every $j,k$.
The symbol $\mathbb{E}$ denotes expectation with respect to $\mathbb{P}$. \\
We make the following assumptions.
\begin{ass} \label{a1} For every $\boldsymbol{x} \in \RR^n$, the functions $\boldsymbol{F}(\boldsymbol{x})$ and $\boldsymbol{\sigma}(\boldsymbol{x})$ are continuous, bounded and Lipschitz in $\boldsymbol{x}$, whereas the functions $\boldsymbol{\gamma}(\boldsymbol{x})$ and $(\boldsymbol{\gamma})_{\boldsymbol{x}}(\boldsymbol{x})$ are continuously differentiable, bounded and Lipschitz in $\boldsymbol{x}$. Moreover, $(\boldsymbol{\gamma})_{\boldsymbol{x} \boldsymbol{x}}(\boldsymbol{x})$ is bounded for every $\boldsymbol{x} \in \RR^n$.
\end{ass}
\begin{ass} \label{a2} The matrix $\boldsymbol{\gamma}$ is {\it uniformly positive stable}, i.e. all real parts of the eigenvalues of $\boldsymbol{\gamma}(\boldsymbol{x})$ are bounded below by some constant $2\kappa > 0$, uniformly in $\boldsymbol{x}\in \RR^n$.
\end{ass}
\begin{ass} \label{a3} The initial condition $\boldsymbol{x}^m_0 = \boldsymbol{x}$ is a random variable independent of $m$ and has finite moments of all orders, i.e. $\mathbb{E} |\boldsymbol{x}|^{p} < \infty$ for all $p > 0$. The initial condition $\boldsymbol{v}^m_0$ is a random variable that possibly depends on $m$ and we assume that for every $p>0$, $\mathbb{E} |m \boldsymbol{v}^m|^p = O(m^\alpha)$ as $m \to 0$, for some $\alpha \geq p/2$.
\end{ass}
\begin{ass} \label{a4} The global solutions, defined on $[0,T]$, to the pre-limit SDEs \eqref{gsk1}-\eqref{gsk2} and to the limiting SDE \eqref{sklim} a.s. exist and are unique for all $m > 0$ (i.e. there are no explosions).
\end{ass}
We now state the result.
\begin{theorem} \label{skthm} Suppose that the SDE system $\eqref{gsk1}$-$\eqref{gsk2}$ satisfies Assumptions $\ref{a1}$-$\ref{a4}$. Let $(\boldsymbol{x}^m_{t}, \boldsymbol{v}^m_{t}) \in \RR^{n} \times \RR^{n}$ be its solution, with the initial condition $(\boldsymbol{x}, \boldsymbol{v}^m)$. Let $\boldsymbol{X}_{t} \in \RR^{n}$ be the solution to the following It\^o SDE with initial position $\boldsymbol{X}_{0} = \boldsymbol{x}$:
\begin{equation} \label{sklim}
d\boldsymbol{X}_{t} = [\boldsymbol{\gamma}^{-1}(\boldsymbol{X}_{t}) \boldsymbol{F}(\boldsymbol{X}_{t}) + \boldsymbol{S}(\boldsymbol{X}_{t})] dt + \boldsymbol{\gamma}^{-1}(\boldsymbol{X}_{t}) \boldsymbol{\sigma}(\boldsymbol{X}_{t}) d \boldsymbol{W}^{(k)}_{t},
\end{equation}
where $\boldsymbol{S}(\boldsymbol{X}_{t})$ is the noise-induced drift whose $i$th component is given by
\begin{equation}
S_{i}(\boldsymbol{X}) = \frac{\partial}{\partial X_{l}}[(\gamma^{-1})_{ij}(\boldsymbol{X})] J_{jl}(\boldsymbol{X}), \ \ i,j,l = 1, \dots, n, \end{equation}
and $\boldsymbol{J}$ is the unique matrix solving the Lyapunov equation
\begin{equation} \label{lyp}
\boldsymbol{J} \boldsymbol{\gamma}^{*} + \boldsymbol{\gamma} \boldsymbol{J} = \boldsymbol{\sigma} \boldsymbol{\sigma}^{*}.
\end{equation}
Then the process $\boldsymbol{x}^m_{t}$ converges, as $m \to 0$, to the solution $\boldsymbol{X}_{t}$ of the It\^o SDE \eqref{sklim} in the following sense: for all finite $T>0$,
\begin{equation}
\sup_{t \in [0,T]} |\boldsymbol{x}_t^m - \boldsymbol{X}_t| \to 0
\end{equation} in probability, in the limit as $m \to 0$.
\end{theorem}
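As a minimal numerical sketch (with made-up coefficients; this is not the setting of any particular application), the limiting coefficients of Theorem \ref{skthm} can be evaluated pointwise by solving the Lyapunov equation \eqref{lyp} with SciPy and approximating the derivatives of $\boldsymbol{\gamma}^{-1}$ by central finite differences:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch only: hypothetical coefficients in dimension n = k = 2.
# S_i(X) = d/dX_l[(gamma^{-1})_{ij}(X)] * J_{jl}(X),
# where J solves  gamma J + J gamma^* = sigma sigma^*.

def gamma(x):
    # non-symmetric, but positive stable for every x (trace, det > 0)
    return np.array([[2.0 + np.sin(x[0]), 1.0],
                     [-1.0, 2.0 + x[1] ** 2]])

def sigma(x):
    return np.array([[1.0 + 0.5 * np.cos(x[1]), 0.0],
                     [0.3, 1.0]])

def noise_induced_drift(x, fd=1e-6):
    J = solve_continuous_lyapunov(gamma(x), sigma(x) @ sigma(x).T)
    S = np.zeros(len(x))
    for l in range(len(x)):          # central finite difference in X_l
        e = np.zeros(len(x)); e[l] = fd
        dginv = (np.linalg.inv(gamma(x + e))
                 - np.linalg.inv(gamma(x - e))) / (2.0 * fd)
        S += dginv @ J[:, l]         # sum over j of d(gamma^{-1})_{ij} J_{jl}
    return S

print(noise_induced_drift(np.array([0.3, -0.7])))
\end{verbatim}
In applications one would of course substitute the model's actual $\boldsymbol{\gamma}$ and $\boldsymbol{\sigma}$.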
We end this section with a few remarks concerning the statements in Theorem \ref{skthm}.
\begin{remark} \label{ass_rmk}
\hspace{5cm}
\begin{itemize}
\item[(i)] Because of the relaxed spectral assumption on $\boldsymbol{\gamma}$, a new idea has to be used to prove decay estimates for solutions of the velocity equation. Once this is done, Theorem \ref{skthm} can be proven using the technique of \cite{birrell2017homogenization} (note that Assumption \ref{a1} is essentially the same as the assumptions in Appendix A of \cite{birrell2017homogenization}, specialized to the present case). In Appendix \ref{proof_sketch} we give a sketch of the proof of Theorem \ref{skthm}, pointing out the necessary modifications. The reader is referred to \cite{birrell2017homogenization} for more details.
\item[(ii)] Our assumption on the initial variable $\boldsymbol{v}_0^m$ implies that the initial average kinetic energy, $K(\boldsymbol{v}^m) := \mathbb{E} \frac{1}{2} m |\boldsymbol{v}^m|^2$, does not blow up (but can possibly vanish) as $m \to 0$. This is analogous to Assumption 2.4 in \cite{birrell2017homogenization} and is more general than the one in \cite{hottovy2015smoluchowski}.
\item[(iii)] With slightly more work and additional assumptions, one could prove the statement in Assumption \ref{a4} from Assumptions \ref{a1}-\ref{a3}. However, such an existence and uniqueness result is not the focus of this paper and, therefore, we take it for granted in Assumption \ref{a4}.
\item[(iv)] We make no claim that Assumptions \ref{a2}-\ref{a4} are as weak or as general as possible. In particular, the boundedness assumption on the coefficients of the SDEs could be relaxed (for instance, using the techniques in \cite{Herzog2016}) and the initial condition $\boldsymbol{x}$ could have some dependence on $m$ (see, for instance, \cite{birrell2017homogenization}) at the cost of more technicalities.
The main focus of our revisit here is to point out that the result in \cite{hottovy2015smoluchowski} still holds with a relaxed spectral assumption on the matrix $\boldsymbol{\gamma}$ and with the initial condition $\boldsymbol{v}_0^m$ possibly dependent on $m$ -- this will be important for applications in later sections (see also Remark \ref{imp_rmk}).
\end{itemize}
\end{remark}
\section{Homogenization for Generalized Langevin Dynamics} \label{general_homog}
In this section, we study homogenization for the system of equations \eqref{general_side_rescaled}-\eqref{general_rescaled_ou} (with $\mu = \theta = \nu$) by taking the limit as $\epsilon \to 0$, under appropriate assumptions.
Without loss of generality, we set $\mu = \theta = \nu = 1$. We cast \eqref{general_side_rescaled}-\eqref{general_rescaled_ou} as the system of SDEs for the Markov process $(\boldsymbol{x}^\epsilon_{t}, \boldsymbol{v}^\epsilon_{t}, \boldsymbol{z}^\epsilon_{t}, \boldsymbol{y}^\epsilon_{t}, \boldsymbol{\zeta}^\epsilon_{t}, \boldsymbol{\beta}^\epsilon_{t})$ on the state space $\RR^{d}\times \RR^d \times \RR^{d_1} \times \RR^{d_1} \times \RR^{d_2} \times \RR^{d_2}$:
\begin{align}
d\boldsymbol{x}^\epsilon_{t} &= \boldsymbol{v}^\epsilon_{t} dt, \label{sdec1} \\
m_{0} \epsilon d\boldsymbol{v}^\epsilon_{t} &= - \boldsymbol{g}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{C}_1 \boldsymbol{y}^\epsilon_{t} dt + \boldsymbol{\sigma}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{C}_2 \boldsymbol{\beta}^\epsilon_{t} dt +\boldsymbol{F}(\boldsymbol{x}^\epsilon_{t})dt, \\
d \boldsymbol{z}^\epsilon_{t} &= \boldsymbol{y}^\epsilon_{t} dt, \\
\tau_{\kappa} \epsilon d\boldsymbol{y}^\epsilon_{t} &= -\boldsymbol{\Gamma}_1 \boldsymbol{y}^\epsilon_{t} dt + \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{v}^\epsilon_{t} dt, \\
d\boldsymbol{\zeta}^\epsilon_{t} &= \boldsymbol{\beta}^\epsilon_{t} dt, \\
\tau_{\xi} \epsilon d\boldsymbol{\beta}^\epsilon_{t} &= -\boldsymbol{\Gamma}_2 \boldsymbol{\beta}^\epsilon_{t} dt + \boldsymbol{\Sigma}_2 d\boldsymbol{W}^{(q_2)}_{t}, \label{sdec6}
\end{align}
where we have defined the auxiliary process
\begin{equation}
\boldsymbol{y}^\epsilon_{t} := \frac{1}{\tau_{\kappa} \epsilon} \int_{0}^{t} e^{-\frac{\boldsymbol{\Gamma}_1}{\tau_{\kappa} \epsilon}(t-s)} \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x}^\epsilon_{s}) \boldsymbol{v}^\epsilon_{s} ds.
\end{equation}
Here, the initial conditions $\boldsymbol{x}^\epsilon_0 = \boldsymbol{x}$, $\boldsymbol{v}^\epsilon_0 = \boldsymbol{v}$, $\boldsymbol{z}^\epsilon_0 = \boldsymbol{z}$ and $\boldsymbol{\zeta}^\epsilon_0 = \boldsymbol{\zeta}$ are random variables. Note that $\boldsymbol{y}^\epsilon_0 = \boldsymbol{0}$, and $\boldsymbol{\beta}^\epsilon_0$ is a zero mean Gaussian random variable with covariance $\boldsymbol{M}_2/(\tau_\xi \epsilon)$.
In the above, $\boldsymbol{W}^{(q_2)}_{t}$ is an $\RR^{q_2}$-valued Wiener process on the filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_{t}, \mathbb{P})$ satisfying the usual conditions \cite{karatzas2012Brownian}, and $\mathbb{E}$ denotes expectation with respect to $\mathbb{P}$.
We use the notation introduced in Section \ref{skrevisited} and make the following assumptions.
\begin{ass} \label{ass1} For every $\boldsymbol{x} \in \RR^{d}$, the vector-valued function $\boldsymbol{F}(\boldsymbol{x})$ is continuous, bounded and Lipschitz in $\boldsymbol{x}$, whereas the matrix-valued functions $\boldsymbol{g}(\boldsymbol{x})$, $\boldsymbol{h}(\boldsymbol{x})$, $\boldsymbol{\sigma}(\boldsymbol{x})$, $(\boldsymbol{g})_{\boldsymbol{x}}(\boldsymbol{x})$, $(\boldsymbol{h})_{\boldsymbol{x}}(\boldsymbol{x})$ and $(\boldsymbol{\sigma})_{\boldsymbol{x}}(\boldsymbol{x})$ are continuously differentiable, bounded and Lipschitz in $\boldsymbol{x}$. Moreover, $(\boldsymbol{g})_{\boldsymbol{x}\boldsymbol{x}}(\boldsymbol{x})$, $(\boldsymbol{h})_{\boldsymbol{x}\boldsymbol{x}}(\boldsymbol{x})$ and $(\boldsymbol{\sigma})_{\boldsymbol{x}\boldsymbol{x}}(\boldsymbol{x})$ are bounded for every $\boldsymbol{x} \in \RR^d$.
\end{ass}
\begin{ass} \label{ass2} The initial conditions $\boldsymbol{x}$, $\boldsymbol{v}$, $\boldsymbol{z}$, $\boldsymbol{\zeta}$ are random variables independent of $\epsilon$. We assume that they have finite moments of all orders, i.e. $\mathbb{E}|\boldsymbol{x}|^{p}, \ \mathbb{E}|\boldsymbol{v}|^{p}, \ \mathbb{E}|\boldsymbol{z}|^p, \ \mathbb{E}|\boldsymbol{\zeta}|^p < \infty$ for all $p>0$.
\end{ass}
\begin{ass} \label{ass4} There are no explosions, i.e. almost surely, for any value of the parameter $\epsilon$ there exists a unique solution on the compact time interval $[0,T]$ to the pre-limit equations \eqref{general_side_rescaled}-\eqref{general_rescaled_ou}, and also to the limiting equation \eqref{general_limitSDE}.
\end{ass}
The following convergence theorem is the main result of this paper. It provides a homogenized SDE for the particle's position in the limit as the inertial time scale, the memory time scale and the noise correlation time scale go to zero at the same rate in the case when the pre-limit dynamics are described by the family of equations \eqref{general_side_rescaled}-\eqref{general_rescaled_ou} (with $\mu = \theta = \nu = 1$), or equivalently by the SDEs \eqref{sdec1}-\eqref{sdec6}.
In the following, $(\boldsymbol{D})_{ij}$ denotes the $(i,j)$-entry of the matrix $\boldsymbol{D}$.
\begin{theorem} \label{general_result}
Let $\boldsymbol{x}^\epsilon_{t} \in \RR^{d}$ be the solution to the SDEs \eqref{sdec1}-\eqref{sdec6}. Suppose that Assumptions \ref{ass1}-\ref{ass4} are satisfied and the effective damping and diffusion (constant) matrices, $\boldsymbol{K}_1$, $\boldsymbol{K}_2$, defined in \eqref{eff_damping} and \eqref{eff_diff} respectively, are invertible. Moreover, we assume that for every $\boldsymbol{x} \in \RR^d$,
\begin{equation} \label{inv_cond}
\boldsymbol{B}_\lambda(\boldsymbol{x}) := \boldsymbol{I} + \boldsymbol{g}(\boldsymbol{x}) \tilde{\boldsymbol{\kappa}}(\lambda \tau_{\kappa}) \boldsymbol{h}(\boldsymbol{x})/(\lambda m_0)
\end{equation}
is invertible for all $\lambda$ in the right half plane $\{\lambda \in \CC: Re(\lambda)>0\}$, where $\tilde{\boldsymbol{\kappa}}(z) := \boldsymbol{C}_1(z\boldsymbol{I} + \boldsymbol{\Gamma}_1)^{-1}\boldsymbol{M}_1 \boldsymbol{C}_1^*$, for $z \in \CC$, is the Laplace transform of the memory function.
Denote $\boldsymbol{\theta}(\boldsymbol{X}) := \boldsymbol{g}(\boldsymbol{X})\boldsymbol{K}_1 \boldsymbol{h}(\boldsymbol{X}) \in \RR^{d \times d}$ for $\boldsymbol{X} \in \RR^d$. Then as $\epsilon \to 0$, the process $\boldsymbol{x}^\epsilon_{t}$ converges to the solution, $\boldsymbol{X}_{t}$, of the following It\^o SDE:
\begin{equation} \label{general_limitSDE}
d\boldsymbol{X}_{t} = \boldsymbol{S}(\boldsymbol{X}_{t}) dt + \boldsymbol{\theta}^{-1}(\boldsymbol{X}_t) \boldsymbol{F}(\boldsymbol{X}_{t}) dt + \boldsymbol{\theta}^{-1}(\boldsymbol{X}_t) \boldsymbol{\sigma}(\boldsymbol{X}_{t}) \boldsymbol{C}_2 \boldsymbol{\Gamma}_2^{-1} \boldsymbol{\Sigma}_2 d\boldsymbol{W}^{(q_2)}_{t},
\end{equation}
with $\boldsymbol{S}(\boldsymbol{X}_{t}) = \boldsymbol{S}^{(1)}(\boldsymbol{X}_{t}) + \boldsymbol{S}^{(2)}(\boldsymbol{X}_{t}) + \boldsymbol{S}^{(3)}(\boldsymbol{X}_{t}),$ where the $\boldsymbol{S}^{(k)}$ are the noise-induced drifts whose $i$th components are given by
\begin{align}
S^{(1)}_{i} &= m_{0} \frac{\partial}{\partial X_{l}}\left[(\boldsymbol{\theta}^{-1})_{ij}(\boldsymbol{X})\right] (\boldsymbol{J}_{11})_{jl}(\boldsymbol{X}), \ \
i,j,l = 1, \dots, d, \label{gen_nid1} \\
S^{(2)}_{i} &= -\tau_{\kappa} \frac{\partial}{\partial X_{l}}\left[(\boldsymbol{\theta}^{-1} \boldsymbol{g})_{ij}(\boldsymbol{X})\right] (\boldsymbol{C}_1 \boldsymbol{\Gamma}_1^{-1} \boldsymbol{J}_{21})_{jl}(\boldsymbol{X}), \ \ i,l = 1, \dots, d; \ j = 1,\dots,q, \label{gen_nid2} \\
S^{(3)}_{i} &= \tau_{\xi} \frac{\partial}{\partial X_{l}}\left[(\boldsymbol{\theta}^{-1}\boldsymbol{\sigma} )_{ij}(\boldsymbol{X}) \right] (\boldsymbol{C}_2 \boldsymbol{\Gamma}_2^{-1} \boldsymbol{J}_{31})_{jl}(\boldsymbol{X}), \ \ i,l = 1, \dots, d; \ j=1,\dots,r. \label{gen_nid3}
\end{align}
Here
$\boldsymbol{J}_{11} = \boldsymbol{J}_{11}^* \in \RR^{d\times d}$, $\boldsymbol{J}_{21}=\boldsymbol{J}_{12}^* \in \RR^{d_1 \times d}$ and $\boldsymbol{J}_{31} = \boldsymbol{J}_{13}^* \in \RR^{d_2 \times d}$ satisfy the following system of five matrix equations:
\begin{align}
\boldsymbol{g} \boldsymbol{C}_1 \boldsymbol{J}_{12}^* + \boldsymbol{J}_{12} \boldsymbol{C}_1^* \boldsymbol{g}^* &= \boldsymbol{\sigma} \boldsymbol{C}_2 \boldsymbol{J}_{13}^* + \boldsymbol{J}_{13} \boldsymbol{C}_2^* \boldsymbol{\sigma}^*, \label{gen_system} \\
m_0 \boldsymbol{J}_{11} \boldsymbol{h}^* \boldsymbol{C}_1 \boldsymbol{M}_1 + \tau_{\kappa} \boldsymbol{\sigma} \boldsymbol{C}_2 \boldsymbol{J}_{23}^* &= \tau_{\kappa} \boldsymbol{g} \boldsymbol{C}_1 \boldsymbol{J}_{22} + m_0 \boldsymbol{J}_{12} \boldsymbol{\Gamma}_1^*,\\
\tau_{\xi} \boldsymbol{g} \boldsymbol{C}_1 \boldsymbol{J}_{23} + m_0 \boldsymbol{J}_{13} \boldsymbol{\Gamma}_2^* &= \boldsymbol{\sigma} \boldsymbol{C}_2 \boldsymbol{M}_2, \\
\boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h} \boldsymbol{J}_{12} + \boldsymbol{J}_{12}^* \boldsymbol{h}^* \boldsymbol{C}_1 \boldsymbol{M}_1 &= \boldsymbol{\Gamma}_1 \boldsymbol{J}_{22} + \boldsymbol{J}_{22} \boldsymbol{\Gamma}_1^*, \\
\tau_{\xi} \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h} \boldsymbol{J}_{13} &= \tau_{\xi} \boldsymbol{\Gamma}_1 \boldsymbol{J}_{23} + \tau_{\kappa} \boldsymbol{J}_{23} \boldsymbol{\Gamma}_2^*. \label{gen_end}
\end{align}
The convergence is obtained in the following sense: for all finite $T>0$,
\begin{equation}
\sup_{t \in [0,T]}|\boldsymbol{x}_t^\epsilon - \boldsymbol{X}_t| \to 0
\end{equation}
in probability, in the limit as $\epsilon \to 0$.
\end{theorem}
\begin{remark}
Invertibility of the matrices $\boldsymbol{B}_\lambda(\boldsymbol{x})$ (the assumption \eqref{inv_cond}) is a technical condition which will be used in the proof of the theorem. We are going to verify it in the special cases and applications discussed later (see Corollary \ref{model3} and Corollary \ref{model4}). In particular, it follows from the stronger spectral condition that
$\boldsymbol{g}(\boldsymbol{x})\tilde{\boldsymbol{\kappa}}(\mu)\boldsymbol{h}(\boldsymbol{x})$ has no spectrum in the open left half plane for any $\mu$ with $Re(\mu) > 0$: in that case $-\lambda m_0$, which has negative real part, is never an eigenvalue of $\boldsymbol{g}(\boldsymbol{x})\tilde{\boldsymbol{\kappa}}(\lambda \tau_{\kappa})\boldsymbol{h}(\boldsymbol{x})$, so $\boldsymbol{B}_\lambda(\boldsymbol{x})$ is invertible. See also Remark \ref{role_of_gamma}.
\end{remark}
\begin{proof}
We denote $\boldsymbol{\hat{x}}^\epsilon_{t} := (\boldsymbol{x}^\epsilon_{t}, \boldsymbol{z}^\epsilon_{t}, \boldsymbol{\zeta}^\epsilon_{t}) \in \RR^{d+d_1+d_2}$ and $\boldsymbol{\hat{v}}^\epsilon_{t} := (\boldsymbol{v}^\epsilon_{t}, \boldsymbol{y}^\epsilon_{t}, \boldsymbol{\beta}^\epsilon_{t}) \in \RR^{d+d_1+d_2} $ and rewrite the above SDE system in the form $\eqref{gsk1}$-$\eqref{gsk2}$:
\begin{align}
d\boldsymbol{\hat{x}}^{\epsilon}_{t} &= \boldsymbol{\hat{v}}^{\epsilon}_{t} dt \label{gen_nsk1}, \\
\epsilon d\boldsymbol{\hat{v}}^\epsilon_{t} &= - \boldsymbol{\hat{\gamma}}(\boldsymbol{x}^\epsilon_{t}) \hat{\boldsymbol{v}}^\epsilon_{t} dt + \boldsymbol{\hat{F}}(\boldsymbol{x}^\epsilon_{t})dt + \boldsymbol{\hat{\sigma}} d\boldsymbol{W}^{(q_2)}_{t}, \label{gen_nsk2}
\end{align}
with \begin{equation}\boldsymbol{\hat{\gamma}}(\boldsymbol{x}^\epsilon_{t}) = \left[ \begin{array}{ccc}
\boldsymbol{0} & \frac{\boldsymbol{g}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{C}_1}{m_{0}} & -\frac{\boldsymbol{\sigma}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{C}_2}{m_{0}} \\
-\frac{\boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x}^\epsilon_{t}) }{\tau_{\kappa}} & \frac{\boldsymbol{\Gamma}_1}{\tau_{\kappa}} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \frac{\boldsymbol{\Gamma}_2}{\tau_{\xi}} \end{array} \right], \ \ \ \boldsymbol{\hat{F}}(\boldsymbol{x}^\epsilon_{t}) = \begin{bmatrix}
\frac{\boldsymbol{F}(\boldsymbol{x}^\epsilon_{t})}{m_{0}} \\
\boldsymbol{0} \\
\boldsymbol{0} \\
\end{bmatrix}, \ \ \boldsymbol{\hat{\sigma}}=
\left[ \begin{array}{c}
\boldsymbol{0} \\
\boldsymbol{0} \\
\frac{\boldsymbol{\Sigma}_2}{\tau_{\xi}} \end{array} \right], \end{equation}
where $\boldsymbol{\hat{\gamma}} \in \RR^{(d+d_1+d_2) \times (d+d_1+d_2)}$ is a 3 by 3 block matrix with each block a matrix of appropriate dimension; $\boldsymbol{\hat{F}} \in \RR^{d+d_1+d_2}$, $\boldsymbol{\hat{\sigma}} \in \RR^{(d+d_1+d_2)\times q_2}$ and the $\boldsymbol{0}$ appearing in $\boldsymbol{\hat{\gamma}}$, $\boldsymbol{\hat{F}}$ and $\boldsymbol{\hat{\sigma}}$ is a zero vector or matrix of appropriate dimension.
We now want to apply Theorem \ref{skthm} (with $m:=\epsilon$, $n:=d+d_1+d_2$, $\boldsymbol{\gamma}$ replaced by $\boldsymbol{\hat{\gamma}}$, $\boldsymbol{F}$ replaced by $\boldsymbol{\hat{F}}$, $\boldsymbol{\sigma}$ replaced by $\boldsymbol{\hat{\sigma}}$, etc.) to $\eqref{gen_nsk1}$-$\eqref{gen_nsk2}$.
It is straightforward to see that Assumption \ref{ass1} implies Assumption \ref{a1} and Assumption \ref{ass4} implies Assumption \ref{a4}.
To verify Assumption \ref{a3}, note again that $\boldsymbol{y}^\epsilon_0 = \boldsymbol{0}$ and so by Assumption \ref{ass2}, we only need to show that for every $p>0$, $\mathbb{E}|\epsilon \boldsymbol{\beta}^\epsilon_0|^p = O(\epsilon^\alpha)$ as $\epsilon \to 0$, for some constant $\alpha \geq p/2$. To show this, we use the fact that for a mean zero Gaussian random variable, $X \in \RR$, with variance $\sigma^2$,
\begin{equation}
\mathbb{E} |X|^p = \sigma^p \frac{2^{p/2} \Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}},
\end{equation}
for every $p>0$, where $\Gamma$ denotes the gamma function \cite{winkelbauer2012moments}. Applying this to $\boldsymbol{\beta}^\epsilon_0$, we obtain, for every $p>0$, $\mathbb{E} |\boldsymbol{\beta}_0^\epsilon|^p = O(1/\epsilon^{p/2})$ as $\epsilon \to 0$ and so $\mathbb{E}|\epsilon \boldsymbol{\beta}_0^\epsilon|^p = O(\epsilon^{p/2})$ as $\epsilon \to 0$. Therefore, Assumption \ref{a3} is verified.
It remains to verify Assumption \ref{a2}, i.e. that $\boldsymbol{\hat{\gamma}}$ is positive stable. Note that $\boldsymbol{\Gamma}_2$ is positive stable by assumption and the triangular-block structure of $\boldsymbol{\hat{\gamma}}$ implies that one only needs to verify that the upper left 2 by 2 block matrix of $\boldsymbol{\hat{\gamma}}$:
\begin{equation}
\boldsymbol{L}(\boldsymbol{x}) = \left[ \begin{array}{cc}
\boldsymbol{0} & \boldsymbol{g}(\boldsymbol{x}) \boldsymbol{C}_1/m_0 \\
-\boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x})/\tau_{\kappa} & \boldsymbol{\Gamma}_1/\tau_{\kappa} \end{array} \right]\end{equation}
is positive stable, where $\boldsymbol{x} \in \RR^d$.
We thus need to show that the resolvent set of $-\boldsymbol{L(\boldsymbol{x})}$, $\rho(-\boldsymbol{L}(\boldsymbol{x})):=\{\lambda \in \CC : (\lambda \boldsymbol{I} + \boldsymbol{L}(\boldsymbol{x}))^{-1} \text{ exists}\}$, contains the right half plane $\{\lambda \in \CC : Re(\lambda)>0\}$ for every $\boldsymbol{x} \in \RR^d$.
Let $\lambda \in \CC$ such that $Re(\lambda) > 0$. We will use the following formula for blockwise inversion of a block matrix: provided that $\boldsymbol{S}$ and $\boldsymbol{P}-\boldsymbol{Q}\boldsymbol{S}^{-1} \boldsymbol{R}$ are nonsingular, we have
\begin{equation}
\begin{bmatrix}
\boldsymbol{P} & \boldsymbol{Q} \\
\boldsymbol{R} & \boldsymbol{S} \\
\end{bmatrix}^{-1} = \begin{bmatrix}
(\boldsymbol{P}-\boldsymbol{Q}\boldsymbol{S}^{-1}\boldsymbol{R})^{-1} & -(\boldsymbol{P}-\boldsymbol{Q}\boldsymbol{S}^{-1}\boldsymbol{R})^{-1} \boldsymbol{Q} \boldsymbol{S}^{-1} \\
-\boldsymbol{S}^{-1}\boldsymbol{R}(\boldsymbol{P}-\boldsymbol{Q}\boldsymbol{S}^{-1}\boldsymbol{R})^{-1} & \boldsymbol{S}^{-1} + \boldsymbol{S}^{-1}\boldsymbol{R}(\boldsymbol{P}-\boldsymbol{Q}\boldsymbol{S}^{-1}\boldsymbol{R})^{-1} \boldsymbol{Q} \boldsymbol{S}^{-1}\\
\end{bmatrix} ,\end{equation}
where $\boldsymbol{P}$, $\boldsymbol{Q}$, $\boldsymbol{R}$, $\boldsymbol{S}$ are matrix sub-blocks of arbitrary dimension.
Since the matrices $\boldsymbol{A}_{\lambda} := \boldsymbol{\Gamma}_1/\tau_{\kappa} + \lambda \boldsymbol{I}$ and $\boldsymbol{B}_{\lambda}(\boldsymbol{x}) := \boldsymbol{I} + \boldsymbol{g}(\boldsymbol{x}) \tilde{\boldsymbol{\kappa}}(\lambda \tau_{\kappa}) \boldsymbol{h}(\boldsymbol{x})/(\lambda m_0)$ are invertible for all $\lambda$ in the right half plane (the former because $\boldsymbol{\Gamma}_1$ is positive stable, the latter by assumption), $\lambda \boldsymbol{I} + \boldsymbol{L}(\boldsymbol{x})$ is indeed invertible for every $\boldsymbol{x}$ and in fact, using the above formula for the inverse of a block matrix, we have:
\begin{equation}
(\lambda \boldsymbol{I} + \boldsymbol{L}(\boldsymbol{x}))^{-1} = \left[ \begin{array}{cc}
\boldsymbol{B}_{\lambda}^{-1}(\boldsymbol{x})/\lambda & -\boldsymbol{B}_{\lambda}^{-1}(\boldsymbol{x}) \boldsymbol{g}(\boldsymbol{x}) \boldsymbol{C}_1 \boldsymbol{A}_{\lambda}^{-1}/(\lambda m_0) \\
\boldsymbol{A}_{\lambda}^{-1} \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x}) \boldsymbol{B}_{\lambda}^{-1}(\boldsymbol{x})/(\lambda \tau_{\kappa}) & \ \boldsymbol{A}_{\lambda}^{-1} (\boldsymbol{I} - \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}(\boldsymbol{x})\boldsymbol{B}_{\lambda}^{-1}(\boldsymbol{x}) \boldsymbol{g}(\boldsymbol{x}) \boldsymbol{C}_1 \boldsymbol{A}_{\lambda}^{-1}/(\lambda m_0 \tau_{\kappa})) \end{array} \right].
\end{equation}
Therefore, $\boldsymbol{\hat{\gamma}}$ is invertible and one can compute:
\begin{equation}
\boldsymbol{\hat{\gamma}}^{-1} = \left[ \begin{array}{ccc}
m_0 \boldsymbol{\theta}^{-1} & -\tau_{\kappa} \boldsymbol{\theta}^{-1}\boldsymbol{g} \boldsymbol{C}_1 \boldsymbol{\Gamma}_1^{-1} & \tau_{\xi} \boldsymbol{\theta}^{-1} \boldsymbol{\sigma} \boldsymbol{C}_2 \boldsymbol{\Gamma}_2^{-1} \\
m_{0} \boldsymbol{\Gamma}_1^{-1} \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h} \boldsymbol{\theta}^{-1} & \ \tau_{\kappa} \boldsymbol{\Gamma}_1^{-1}(\boldsymbol{I}- \boldsymbol{M}_1 \boldsymbol{C}_1^* \boldsymbol{h}\boldsymbol{\theta}^{-1} \boldsymbol{g} \boldsymbol{C}_1 \boldsymbol{\Gamma}_1^{-1}) & \ \ \tau_{\xi}\boldsymbol{\Gamma}_1^{-1}\boldsymbol{M}_1\boldsymbol{C}_1^* \boldsymbol{h} \boldsymbol{\theta}^{-1} \boldsymbol{\sigma}\boldsymbol{C}_2 \boldsymbol{\Gamma}_2^{-1} \\
\boldsymbol{0} & \boldsymbol{0} & \tau_{\xi} \boldsymbol{\Gamma}_2^{-1} \end{array} \right],\end{equation}
where $\boldsymbol{\theta} = \boldsymbol{g} \boldsymbol{K}_1 \boldsymbol{h}$.
The result follows by applying Theorem \ref{skthm} to the SDE system \eqref{gen_nsk1}-\eqref{gen_nsk2}. In particular, rewriting the resulting Lyapunov equation \eqref{lyp} gives the system of matrix equations \eqref{gen_system}-\eqref{gen_end}.
\end{proof}
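The reduction used in the proof can also be checked numerically. The following sketch treats a scalar instance ($d = d_1 = d_2 = q_1 = q_2 = 1$; all parameter values are made up, and $\boldsymbol{M}_2$ is chosen to satisfy $\boldsymbol{\Gamma}_2 \boldsymbol{M}_2 + \boldsymbol{M}_2 \boldsymbol{\Gamma}_2^* = \boldsymbol{\Sigma}_2 \boldsymbol{\Sigma}_2^*$, consistent with its role as a stationary covariance): it assembles $\boldsymbol{\hat{\gamma}}$ and $\boldsymbol{\hat{\sigma}}$, solves the Lyapunov equation of Theorem \ref{skthm}, and confirms that the residuals of \eqref{gen_system}-\eqref{gen_end} vanish up to rounding.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

m0, tau_k, tau_x = 1.0, 0.5, 0.3   # made-up m_0, tau_kappa, tau_xi
g, h, sig = 1.2, 0.8, 0.9          # g(x), h(x), sigma(x) at some fixed x
G1, M1, C1 = 2.0, 2.0, 1.0         # Gamma_1, M_1, C_1
G2, S2, C2 = 1.5, 1.5, 1.0         # Gamma_2, Sigma_2, C_2
M2 = S2**2 / (2.0 * G2)            # stationary covariance of the beta-equation

gam = np.array([[0.0, g * C1 / m0, -sig * C2 / m0],
                [-M1 * C1 * h / tau_k, G1 / tau_k, 0.0],
                [0.0, 0.0, G2 / tau_x]])
shat = np.array([[0.0], [0.0], [S2 / tau_x]])
J = solve_continuous_lyapunov(gam, shat @ shat.T)  # J gam^* + gam J = shat shat^*
J11, J12, J13, J22, J23 = J[0, 0], J[0, 1], J[0, 2], J[1, 1], J[1, 2]

print("residuals of the five matrix equations:")
print(2 * g * C1 * J12 - 2 * sig * C2 * J13)
print(m0 * J11 * h * C1 * M1 + tau_k * sig * C2 * J23
      - tau_k * g * C1 * J22 - m0 * J12 * G1)
print(tau_x * g * C1 * J23 + m0 * J13 * G2 - sig * C2 * M2)
print(2 * M1 * C1 * h * J12 - 2 * G1 * J22)
print(tau_x * M1 * C1 * h * J13 - tau_x * G1 * J23 - tau_k * J23 * G2)
\end{verbatim}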
\begin{remark} \label{role_of_gamma}
In the above proof, the condition of invertibility of $\boldsymbol{B}_\lambda(\boldsymbol{x})$ is only used to guarantee the positive stability of the matrix $\hat{\boldsymbol{\gamma}}$. Therefore, the conclusion of the theorem holds also when the latter can be established in another way. This can indeed be done in a number of concrete examples.
\end{remark}
\begin{remark} \label{mark} Our SIDEs belong to a special class of non-Markovian equations, the so-called {\it quasi-Markovian Langevin equations} \cite{eckmann1999non}. For these equations one can introduce a finite number of auxiliary variables in such a way that the evolution of the particle's position and velocity, together with these auxiliary variables, is described by a usual SDE system and is thus Markovian. We remark that such a ``Markovianization'' procedure works here because the colored noise can be generated by a linear system of SDEs and the memory kernel satisfies a linear system of ordinary differential equations---both with constant coefficients. If, on the other hand, the memory kernel decays as a power law, then there is no finite-dimensional extension of the state space which would make the solution process Markovian \cite{luczka2005non}.
\end{remark}
The following corollary uses a linear change of variables in a given SIDE to arrive at an alternative form of the corresponding homogenized SDE of the form \eqref{general_limitSDE}.
\begin{corollary} \label{class_thm}
For $i=1,2$, let $\boldsymbol{T}_i$ be an arbitrary constant invertible $d_i \times d_i$ matrix, where $d_1,d_2$ are positive integers. For $t \ge 0$, denote $\boldsymbol{\Gamma}'_i=\boldsymbol{T}_i \boldsymbol{\Gamma}_i \boldsymbol{T}^{-1}_i$, $\boldsymbol{M}_i' = \boldsymbol{T}_i \boldsymbol{M}_i \boldsymbol{T}_i^{*}$, $\boldsymbol{C}'_i =\boldsymbol{C}_i \boldsymbol{T}_i^{-1}$, $(\boldsymbol{\beta}^\epsilon_t)'=\boldsymbol{T}_2 \boldsymbol{\beta}^\epsilon_t$, $\boldsymbol{\Sigma}_i' = \boldsymbol{T}_i \boldsymbol{\Sigma}_i$ and consider the equations:
\begin{align}
m_0 \epsilon^{\mu} \ddot{\boldsymbol{x}}^\epsilon_{t} &= \boldsymbol{F}(\boldsymbol{x}^\epsilon_{t}) - \frac{\boldsymbol{g}(\boldsymbol{x}^\epsilon_{t})}{\tau_{\kappa} \epsilon^{\theta}} \int_{0}^{t} \boldsymbol{C}'_1e^{-\frac{\boldsymbol{\Gamma}'_1}{\tau_{\kappa}\epsilon^{\theta}}(t-s)} \boldsymbol{M}'_1 (\boldsymbol{C}'_1)^* \boldsymbol{h}(\boldsymbol{x}^\epsilon_{s}) \dot{\boldsymbol{x}}^\epsilon_{s} ds + \boldsymbol{\sigma}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{C}'_2 (\boldsymbol{\beta}^\epsilon_{t})', \label{class_side_rescaled} \\
\tau_{\xi}\epsilon^\nu d(\boldsymbol{\beta}^\epsilon_t)' &= -\boldsymbol{\Gamma}'_2 (\boldsymbol{\beta}^\epsilon_t)' dt + \boldsymbol{\Sigma}'_2 d\boldsymbol{W}'_t, \label{class_rescaled_ou}
\end{align}
where $\boldsymbol{W}'_t$ is a $q_2$-dimensional Wiener process and the initial condition, $(\boldsymbol{\beta}^\epsilon_0)'$, is normally distributed with zero mean and covariance of $\boldsymbol{M}'_2/(\tau_{\xi} \epsilon^\nu)$.
Suppose that Assumptions \ref{ass1}-\ref{ass4} are satisfied and the effective damping and diffusion constants, $\boldsymbol{K}'_i = \boldsymbol{C}'_i (\boldsymbol{\Gamma}_i')^{-1} \boldsymbol{M}_i' (\boldsymbol{C}_i')^{*} = \boldsymbol{K}_i$ ($i=1,2$), are invertible. Moreover, we assume that $\boldsymbol{I} + \boldsymbol{g}(\boldsymbol{x}) \tilde{\boldsymbol{\kappa}'}(\lambda \tau_{\kappa}) \boldsymbol{h}(\boldsymbol{x})/\lambda m_0$ is invertible for all $\lambda$ in the right half plane $\{\lambda \in \CC: Re(\lambda)>0\}$ and $\boldsymbol{x} \in \RR^d$, where $\tilde{\boldsymbol{\kappa}'}(z) := \boldsymbol{C}'_1(z\boldsymbol{I} + \boldsymbol{\Gamma}'_1)^{-1}\boldsymbol{M}'_1 (\boldsymbol{C}'_1)^* = \boldsymbol{\tilde{\kappa}}(z)$ for $z \in \CC$.
Let $\mu = \theta = \nu$ in \eqref{class_side_rescaled}-\eqref{class_rescaled_ou}. Then as $\epsilon \to 0$, the process $\boldsymbol{x}^\epsilon_t$ converges, in the same sense as in Theorem \ref{general_result}, to $\boldsymbol{X}_t$, where $\boldsymbol{X}_{t}$ is the solution of the SDE \eqref{general_limitSDE} with the $\boldsymbol{C}_i$, $\boldsymbol{\Gamma}_i$, $\boldsymbol{M}_i$, $\boldsymbol{\Sigma}_i$ replaced by $\boldsymbol{C}'_i$, $\boldsymbol{\Gamma}'_i$, $\boldsymbol{M}'_i$, $\boldsymbol{\Sigma}'_i$ respectively, and the driving Wiener process $\boldsymbol{W}^{(q_2)}_t$ replaced by $\boldsymbol{W}_t'$.
\end{corollary}
Corollary \ref{class_thm} is an easy consequence of Theorem \ref{general_result}. \\
Next, we discuss a particular, but very important, case when a {\it fluctuation-dissipation relation} holds. This is, for instance, the case when the pre-limit dynamics are (heuristically) derived from Hamiltonian dynamics (see Appendix \ref{appA}). We will further explore similar cases of fluctuation-dissipation relations for the two sub-classes.
\begin{corollary} \label{gen_fdt}
Let $\boldsymbol{x}^\epsilon_{t} \in \RR^{d}$ be the solution to the SDEs \eqref{sdec1}-\eqref{sdec6}. Suppose that the assumptions of Theorem \ref{general_result} hold. Moreover, we assume that:
\begin{equation} \label{fdt_con1}
\tau_{\kappa} = \tau_{\xi} = \tau, \ \ \boldsymbol{\sigma} = \boldsymbol{g}, \ \ \boldsymbol{h} = \boldsymbol{g}^*, \end{equation} where $\tau$ is a positive constant, and
\begin{equation} \label{fdt_con2}
\boldsymbol{C}_1 = \boldsymbol{C}_2 := \boldsymbol{C}, \ \ \boldsymbol{\Gamma}_1 = \boldsymbol{\Gamma}_2 := \boldsymbol{\Gamma}, \ \ \boldsymbol{M}_1 = \boldsymbol{M}_2 := \boldsymbol{M}, \ \ \boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 := \boldsymbol{\Sigma},
\end{equation}
(so that $q=r$ and $d_1 =d_2$). Denote $\boldsymbol{K} := \boldsymbol{C} \boldsymbol{\Gamma}^{-1} \boldsymbol{M} \boldsymbol{C}^*$.
Then as $\epsilon \to 0$, the process $\boldsymbol{x}^\epsilon_{t}$ converges to the solution, $\boldsymbol{X}_{t}$, of the following It\^o SDE:
\begin{equation} \label{fdtcase_limitSDE}
d\boldsymbol{X}_{t} = \boldsymbol{S}(\boldsymbol{X}_{t}) dt + [\boldsymbol{g}(\boldsymbol{X}_t) \boldsymbol{K} \boldsymbol{g}^*(\boldsymbol{X}_t)]^{-1} \boldsymbol{F}(\boldsymbol{X}_{t}) dt + [\boldsymbol{g}(\boldsymbol{X}_t) \boldsymbol{K} \boldsymbol{g}^*(\boldsymbol{X}_t)]^{-1} \boldsymbol{g}(\boldsymbol{X}_t) \boldsymbol{C} \boldsymbol{\Gamma}^{-1} \boldsymbol{\Sigma} d\boldsymbol{W}^{(q_2)}_{t},
\end{equation}
where $\boldsymbol{S}(\boldsymbol{X}_t)$ is the noise-induced drift whose $i$th component is given by
\begin{equation}
S_i(\boldsymbol{X}) = m_0 \frac{\partial}{\partial X_{l}}\left[ ((\boldsymbol{g} \boldsymbol{K}\boldsymbol{g}^*)^{-1})_{ij}(\boldsymbol{X})\right] (\boldsymbol{J}_{11})_{jl}(\boldsymbol{X}), \ \
i,j,l = 1, \dots, d, \end{equation}
where $\boldsymbol{J}_{11}$ solves the following system of three matrix equations:
\begin{align}
m_0 \boldsymbol{J}_{11} \boldsymbol{g} \boldsymbol{C} \boldsymbol{M} + \tau \boldsymbol{g} \boldsymbol{C}(\boldsymbol{J}_{23} + \boldsymbol{J}_{23}^*) &= \tau \boldsymbol{g} \boldsymbol{C} \boldsymbol{J}_{22}+\boldsymbol{g} \boldsymbol{C} \boldsymbol{M}, \label{spec1} \\
\boldsymbol{M} \boldsymbol{C}^* \boldsymbol{g}^* \boldsymbol{g} \boldsymbol{C} \boldsymbol{M} (\boldsymbol{\Gamma}^{-1})^* &= \tau \boldsymbol{M} \boldsymbol{C}^* \boldsymbol{g}^* \boldsymbol{g} \boldsymbol{C} \boldsymbol{J}_{23} (\boldsymbol{\Gamma}^{-1})^* + m_0 (\boldsymbol{\Gamma} \boldsymbol{J}_{23} + \boldsymbol{J}_{23} \boldsymbol{\Gamma}^*), \\
\boldsymbol{M} \boldsymbol{C}^* \boldsymbol{g}^*\boldsymbol{g} \boldsymbol{C} \boldsymbol{M} (\boldsymbol{\Gamma}^{-1})^* + \boldsymbol{\Gamma}^{-1} \boldsymbol{M} \boldsymbol{C}^* \boldsymbol{g}^* \boldsymbol{g} \boldsymbol{C} \boldsymbol{M} &= \tau(\boldsymbol{M} \boldsymbol{C}^* \boldsymbol{g}^* \boldsymbol{\Gamma}^{-1} \boldsymbol{J}_{23}^* \boldsymbol{C}^* \boldsymbol{g}^* + \boldsymbol{\Gamma}^{-1} \boldsymbol{J}_{23}^* \boldsymbol{C}^* \boldsymbol{g}^* \boldsymbol{g} \boldsymbol{C} \boldsymbol{M}) \nonumber \\
&\hspace{1cm} + m_0 (\boldsymbol{\Gamma} \boldsymbol{J}_{22} + \boldsymbol{J}_{22} \boldsymbol{\Gamma}^*). \label{spec3}
\end{align}
The convergence is obtained in the same sense as in Theorem \ref{general_result}.
\end{corollary}
Equations \eqref{fdt_con1}-\eqref{fdt_con2} are a form of the fluctuation-dissipation relation familiar from non-equilibrium statistical mechanics \cite{toda2012statistical}. Since stationary measures of systems satisfying fluctuation-dissipation relations are equilibria of the underlying dynamics, this result is relevant for describing equilibrium properties of such systems in the small mass limit.
\begin{remark}
Therefore, if the fluctuation-dissipation relation holds, the noise-induced drift in the limiting SDE reduces to {\it a single term} (later we will see how this term simplifies in some special cases). This result may have interesting implications for nanoscale systems in equilibrium.
We remark that the conditions for the fluctuation-dissipation relation in Corollary \ref{gen_fdt} can be written in other equivalent forms, up to the transformations in \eqref{transf_realize} and multiplication by a constant.
\end{remark}
\begin{proof}
The above corollary follows by applying Theorem \ref{general_result}. Indeed, by the assumptions of the corollary, \eqref{gen_system} simplifies to:
\begin{equation}
\boldsymbol{g} \boldsymbol{C} (\boldsymbol{J}_{12}-\boldsymbol{J}_{13})^* + (\boldsymbol{J}_{12}-\boldsymbol{J}_{13}) (\boldsymbol{g} \boldsymbol{C})^* = \boldsymbol{0}.
\end{equation}
This implies that $\boldsymbol{J}_{12} = \boldsymbol{J}_{13}$ and, therefore, $\boldsymbol{S}^{(2)}$ and $\boldsymbol{S}^{(3)}$ cancel. Rewriting the resulting system of matrix equations in \eqref{gen_system}-\eqref{gen_end} gives \eqref{spec1}-\eqref{spec3}.
\end{proof}
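The cancellation of $\boldsymbol{S}^{(2)}$ and $\boldsymbol{S}^{(3)}$ can be observed directly in a scalar example. In the following sketch (all parameter values are made up) the conditions \eqref{fdt_con1}-\eqref{fdt_con2} are imposed, and the Lyapunov solution indeed satisfies $J_{12} = J_{13}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Scalar fluctuation-dissipation check: sigma = g, h = g^*, common
# (Gamma, M, C, Sigma) and a common time scale tau.  Values made up.
m0, tau = 1.0, 0.4
g = 1.3                                 # g(x) at some fixed x
Gam, C, Sig = 2.0, 1.0, 1.7             # Gamma, C, Sigma
M = Sig**2 / (2.0 * Gam)                # stationary covariance of the noise

gam = np.array([[0.0, g * C / m0, -g * C / m0],
                [-M * C * g / tau, Gam / tau, 0.0],
                [0.0, 0.0, Gam / tau]])
shat = np.array([[0.0], [0.0], [Sig / tau]])
J = solve_continuous_lyapunov(gam, shat @ shat.T)
print("J_12 =", J[0, 1], "  J_13 =", J[0, 2])   # the two agree
\end{verbatim}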
\section{Homogenization for Models of the Two Sub-Classes} \label{homog}
We now return to the two sub-classes of SIDEs \eqref{genle_general} introduced in Section \ref{nmle}. In this section, we study the effective dynamics described by SIDEs \eqref{goal2} and \eqref{goal3} in the limit as $\epsilon \to 0$. By specializing to these two sub-classes, the convergence result of Theorem \ref{general_result}, in particular the expressions in \eqref{general_limitSDE}-\eqref{gen_end}, can be made more explicit under certain assumptions on the matrix-valued coefficients and therefore the limiting equation obtained may be useful for modeling purposes.
\subsection{SIDEs Driven by a Markovian Colored Noise}
The following convergence result provides a homogenized SDE for the particle's position in the limit as the inertial time scale, the memory time scale and the noise correlation time scale vanish at the same rate in the case when the pre-limit dynamics are driven by an Ornstein-Uhlenbeck noise.
\begin{corollary} \label{model3}
Let $d=d_1=d_2=q_1=q_2=q=r$. We set, in the SDEs \eqref{sdec1}-\eqref{sdec6}: $\boldsymbol{\beta}^\epsilon_t = \boldsymbol{\eta}^\epsilon_t$, $\tau_{\xi} = \tau_{\eta}$, $\boldsymbol{W}_t^{(q_2)} = \boldsymbol{W}^{(d)}_t := \boldsymbol{W}_t$ and
\begin{equation}
(\boldsymbol{\Gamma}_1, \boldsymbol{M}_1, \boldsymbol{C}_1) = (\boldsymbol{A}, \boldsymbol{A}, \boldsymbol{I}), \ \ (\boldsymbol{\Gamma}_2, \boldsymbol{M}_2, \boldsymbol{C}_2) = (\boldsymbol{A}, \boldsymbol{A}/2, \boldsymbol{I}),\end{equation}
to obtain SDEs equivalent to equations \eqref{goal2}-\eqref{rescaled-ou} with $\mu = \theta = \nu = 1$. Let $\boldsymbol{x}^\epsilon_{t} \in \RR^{d}$ be the solution to these equations, with the matrices $\boldsymbol{g}(\boldsymbol{x})$ and $\boldsymbol{h}(\boldsymbol{x})$ positive definite for every $\boldsymbol{x} \in \RR^d$. Suppose that Assumptions \ref{ass1}-\ref{ass4} are satisfied and, moreover, that $\boldsymbol{g}(\boldsymbol{x})$, $\boldsymbol{h}(\boldsymbol{x})$ and the diagonal matrix $\boldsymbol{A}$ commute.
Then as $\epsilon \to 0$, the process $\boldsymbol{x}^\epsilon_{t}$ converges to the solution, $\boldsymbol{X}_{t}$, of the following It\^o SDE:
\begin{equation} \label{limitSDE}
d\boldsymbol{X}_{t} = \boldsymbol{S}(\boldsymbol{X}_{t}) dt + (\boldsymbol{g} \boldsymbol{h})^{-1}(\boldsymbol{X}_{t})\boldsymbol{F}(\boldsymbol{X}_{t}) dt + (\boldsymbol{g}\boldsymbol{h})^{-1}(\boldsymbol{X}_{t}) \boldsymbol{\sigma}(\boldsymbol{X}_{t}) d\boldsymbol{W}_{t},
\end{equation}
with $\boldsymbol{S}(\boldsymbol{X}_{t}) = \boldsymbol{S}^{(1)}(\boldsymbol{X}_{t}) + \boldsymbol{S}^{(2)}(\boldsymbol{X}_{t}) + \boldsymbol{S}^{(3)}(\boldsymbol{X}_{t}),$ where the $\boldsymbol{S}^{(k)}$ are the noise-induced drifts whose $i$th components are given by
\begin{align}
S^{(1)}_{i}(\boldsymbol{X}) &= m_{0} \frac{\partial}{\partial X_{l}}[((\boldsymbol{g}\boldsymbol{h})^{-1})_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{11})_{jl}(\boldsymbol{X}), \ \
i,j,l = 1, \dots, d, \label{nid1} \\
S^{(2)}_{i}(\boldsymbol{X}) &= -\tau_{\kappa} \frac{\partial}{\partial X_{l}}[((\boldsymbol{A} \boldsymbol{h})^{-1})_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{21})_{jl}(\boldsymbol{X}), \ \ i,j,l = 1, \dots, d, \label{nid2} \\
S^{(3)}_{i}(\boldsymbol{X}) &= \tau_{\eta} \frac{\partial}{\partial X_{l}}[((\boldsymbol{g}\boldsymbol{h})^{-1} \boldsymbol{\sigma} \boldsymbol{A}^{-1} )_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{31})_{jl}(\boldsymbol{X}), \ \ i,j,l = 1, \dots, d. \label{nid3}
\end{align}
Here
$\boldsymbol{J}_{11} = \boldsymbol{J}_{11}^*$, $\boldsymbol{J}_{21}=\boldsymbol{J}_{12}^*$ and $\boldsymbol{J}_{31} = \boldsymbol{J}_{13}^*$ are $d$ by $d$ block matrices satisfying the following system of matrix equations:
\begin{align}
\tau_{\eta} \boldsymbol{g} \boldsymbol{J}_{23} + m_0 \boldsymbol{J}_{13} \boldsymbol{A} &= \boldsymbol{\sigma} \boldsymbol{A}/2, \label{subclass1_start} \\
\tau_{\eta} \boldsymbol{A} \boldsymbol{h} \boldsymbol{J}_{13} &= \tau_{\eta} \boldsymbol{A} \boldsymbol{J}_{23} + \tau_{\kappa} \boldsymbol{J}_{23} \boldsymbol{A}, \\
\boldsymbol{A} \boldsymbol{h} \boldsymbol{J}_{12} + \boldsymbol{J}_{12}^* \boldsymbol{h} \boldsymbol{A} &= \boldsymbol{A} \boldsymbol{J}_{22} + \boldsymbol{J}_{22} \boldsymbol{A}, \\
\boldsymbol{g} \boldsymbol{J}_{12}^* + \boldsymbol{J}_{12} \boldsymbol{g} &= \boldsymbol{\sigma}\boldsymbol{J}_{13}^* + \boldsymbol{J}_{13} \boldsymbol{\sigma}^*, \\
m_0 \boldsymbol{J}_{11} \boldsymbol{h} \boldsymbol{A} + \tau_{\kappa} \boldsymbol{\sigma} \boldsymbol{J}_{23}^* &= \tau_{\kappa} \boldsymbol{g} \boldsymbol{J}_{22} + m_0 \boldsymbol{J}_{12} \boldsymbol{A}. \label{subclass1_end}
\end{align}
The convergence is obtained in the same sense as in Theorem \ref{general_result}.
\end{corollary}
\begin{proof} We will apply Theorem \ref{general_result}. Since $\boldsymbol{K}_1=2 \boldsymbol{K}_2 = \boldsymbol{I}$, both effective matrices are clearly invertible. Also, being positive definite, $\boldsymbol{g}$ and $\boldsymbol{h}$ are invertible.
Since $\boldsymbol{g}$, $\boldsymbol{h}$ and $\boldsymbol{A}$ are positive definite and commuting matrices, the matrix $\boldsymbol{B}_{\lambda}(\boldsymbol{x})$, defined in \eqref{inv_cond}, is invertible for all $\lambda$ such that $Re(\lambda) > 0$. Indeed, in this case $\boldsymbol{B}_{\lambda}(\boldsymbol{x}) = \boldsymbol{I} + \boldsymbol{g}(\boldsymbol{x})(\lambda \tau_{\kappa} \boldsymbol{I} + \boldsymbol{A})^{-1} \boldsymbol{A} \boldsymbol{h}(\boldsymbol{x})/(\lambda m_0)$. Since $\boldsymbol{g}$, $\boldsymbol{h}$ and $\boldsymbol{A}$ are positive definite and commuting, they have positive eigenvalues and can be simultaneously diagonalized; denoting by $g_i$, $h_i$, $a_i > 0$ their eigenvalues with respect to a common eigenbasis, the eigenvalues of $\boldsymbol{B}_{\lambda}(\boldsymbol{x})$ are of the form $1 + g_i h_i a_i/(\lambda m_0 (\lambda \tau_{\kappa} + a_i))$. For $Re(\lambda) > 0$ the argument of $\lambda(\lambda \tau_{\kappa} + a_i)$ lies in $(-\pi, \pi)$, so this product is never a negative real number and no such eigenvalue can vanish. Therefore, all the eigenvalues of $\boldsymbol{B}_{\lambda}(\boldsymbol{x})$ are nonzero for every $\lambda$ with $Re(\lambda) > 0$ and $\boldsymbol{x} \in \RR^d$, and the invertibility condition is verified. Therefore, the block matrix:
\begin{equation} \label{nsk2}
\boldsymbol{\hat{\gamma}}(\boldsymbol{x}^\epsilon_{t}) = \left[ \begin{array}{ccc} \boldsymbol{0} & \frac{\boldsymbol{g}(\boldsymbol{x}^\epsilon_{t})}{m_{0}} & -\frac{\boldsymbol{\sigma}(\boldsymbol{x}^\epsilon_{t})}{m_{0}} \\ -\frac{\boldsymbol{A} \boldsymbol{h}(\boldsymbol{x}^\epsilon_{t}) }{\tau_{\kappa}} & \frac{\boldsymbol{A}}{\tau_{\kappa}} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \frac{\boldsymbol{A}}{\tau_{\eta}} \end{array} \right],
\end{equation}
is positive stable (see Remark \ref{role_of_gamma}).
The result then follows by applying Theorem \ref{general_result}.
\end{proof}
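Positive stability of \eqref{nsk2} at a fixed point is also easy to confirm numerically. The following sketch (made-up commuting positive definite diagonal matrices in dimension $d = 2$) simply checks that the spectrum of $\boldsymbol{\hat{\gamma}}$ lies in the open right half plane:
\begin{verbatim}
import numpy as np

# Made-up commuting positive definite diagonal g, h, A with d = 2.
m0, tau_k, tau_e = 1.0, 0.5, 0.3
g = np.diag([1.2, 0.7]); h = np.diag([0.9, 1.5])
sig = np.diag([1.0, 0.8]); A = np.diag([2.0, 3.0])
Z = np.zeros((2, 2))

gam_hat = np.block([[Z, g / m0, -sig / m0],
                    [-(A @ h) / tau_k, A / tau_k, Z],
                    [Z, Z, A / tau_e]])
# positive stable <=> every eigenvalue has positive real part
print("min Re(spec) =", np.linalg.eigvals(gam_hat).real.min())
\end{verbatim}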
For special one-dimensional systems, the form of the limiting equation can be made even more explicit.
\begin{corollary} \label{1dcase}
In the one-dimensional case, we drop the boldface and write $\boldsymbol{X}_{t} := X_{t} \in \RR, \ \boldsymbol{g}(\boldsymbol{X}) := g(X),$ with $g: \RR \to \RR$, etc. We assume that $h = g$ and $\boldsymbol{A} := \alpha > 0$ is a constant. The homogenized equation is given by:
\begin{equation} \label{onedlimitSDE}
dX_{t} = S(X_{t}) dt + g^{-2}(X_{t}) F(X_{t}) dt + g^{-2}(X_{t}) \sigma(X_{t}) dW_{t},
\end{equation}
with $S(X_{t}) = S^{(1)}(X_{t}) + S^{(2)}(X_{t}) + S^{(3)}(X_{t}),$ where the noise-induced drift terms $S^{(k)}(X_{t})$ have the following explicit expressions that depend on the parameters $m_{0}, \tau_{\kappa}$ and $\tau_{\eta}$:
\begin{align}
S^{(1)}(X_{t}) &= \left(\frac{1}{g^2(X_{t})} \right)' \frac{ \sigma(X_{t})^2}{2 g^2(X_{t})} \left[\frac{\tau_{\kappa}^2 g^2(X_{t})+m_{0}\alpha (\tau_{\kappa}+\tau_{\eta})}{\tau_{\eta}^2 g^2(X_{t})+m_{0}\alpha (\tau_{\kappa}+\tau_{\eta})} \right], \label{86} \\
S^{(2)}(X_{t}) &= - \left(\frac{1}{g(X_{t})} \right)' \frac{ \sigma(X_{t})^2 \tau_{\kappa}(\tau_{\kappa}+\tau_{\eta})}{2g(X_{t})[\tau_{\eta}^2 g^2(X_{t})+m_{0}\alpha(\tau_{\kappa}+\tau_{\eta})]}, \label{87} \\
S^{(3)}(X_{t}) &= \left(\frac{\sigma(X_{t})}{g^2(X_{t})} \right)' \frac{ \sigma(X_{t})\tau_{\eta} (\tau_{\kappa}+\tau_{\eta})}{2[\tau_{\eta}^2 g^2(X_{t})+m_{0}\alpha (\tau_{\kappa}+\tau_{\eta})]}, \label{88}
\end{align}
where the prime $'$ denotes derivative with respect to $X_t$.
\end{corollary}
\begin{proof}
With $\boldsymbol{x}^\epsilon_{t} := (x^\epsilon_{t}, z^\epsilon_{t}, \zeta^\epsilon_{t}) \in \RR^{3}$ and $\boldsymbol{v}^\epsilon_{t} := (v^\epsilon_{t}, y^\epsilon_{t}, \eta^\epsilon_{t}) \in \RR^{3}$, SDEs \eqref{gen_nsk1}-\eqref{gen_nsk2} become:
\begin{align}
d\boldsymbol{x}^\epsilon_{t} &= \boldsymbol{v}^\epsilon_{t} dt, \\
\epsilon d\boldsymbol{v}^\epsilon_{t} &= - \boldsymbol{\gamma}(\boldsymbol{x}^\epsilon_{t}) \boldsymbol{v}^\epsilon_{t} dt + \boldsymbol{F}(\boldsymbol{x}^\epsilon_{t})dt + \boldsymbol{\sigma} d\boldsymbol{W}_{t},
\end{align}
where
\begin{equation} \label{why}
\boldsymbol{\gamma}(\boldsymbol{x}^\epsilon_{t}) = \left[ \begin{array}{ccc}
0 & \frac{g(x^\epsilon_{t})}{m_{0}} & -\frac{\sigma(x^\epsilon_{t})}{m_{0}} \\
-\frac{\alpha}{\tau_{\kappa}} g(x^\epsilon_{t}) & \frac{\alpha}{\tau_{\kappa}} & 0 \\
0 & 0 & \frac{\alpha}{\tau_{\eta}} \end{array} \right], \ \ \boldsymbol{F}(\boldsymbol{x}^\epsilon_{t}) = \begin{bmatrix}
\frac{F(x^\epsilon_{t})}{m_{0}} \\
0\\
0 \\
\end{bmatrix}, \ \ \boldsymbol{\sigma} =
\left[ \begin{array}{c}
0 \\
0 \\
\frac{\alpha}{\tau_{\eta}} \end{array} \right]. \end{equation}
It follows from Corollary \ref{model3} that the matrix $\boldsymbol{\gamma}$ is positive stable; one can also calculate its eigenvalues explicitly and see that their real parts are positive.
The eigenvalues of $\boldsymbol{\gamma}$ are
\begin{equation} \frac{\alpha}{\tau_{\eta}}, \ \frac{\alpha}{2 \tau_{\kappa}} \pm \frac{1}{2} \sqrt{\frac{\alpha^2 m_{0}-4 \alpha g(x^\epsilon_{t})^2 \tau_{\kappa}}{m_{0} \tau_{\kappa}^2}}, \end{equation}
and so their real parts are indeed positive.
On the other hand, the solution, $\boldsymbol{J} \in \RR^{3 \times 3}$, to the Lyapunov equation,
\begin{equation}
\boldsymbol{\gamma} \boldsymbol{J} + \boldsymbol{J} \boldsymbol{\gamma}^{*} = \boldsymbol{\sigma} \boldsymbol{\sigma}^{*},\end{equation}
can be computed (using Mathematica$\textsuperscript{\textregistered}$) to be:
\begin{equation}
\boldsymbol{J} = \left[ \begin{array}{ccc}
\frac{ \sigma^2}{2m_{0} g^2}\left[ \frac{\tau_{\kappa}^2 g^2 + m_{0} \alpha (\tau_{\kappa}+\tau_{\eta}) }{\tau_{\eta}^2 g^2 + m_{0} \alpha (\tau_{\kappa}+\tau_{\eta})}\right] & \frac{\alpha \sigma^2(\tau_{\kappa}+\tau_{\eta}) }{2g(\tau_{\eta}^2 g^2+m_{0} \alpha(\tau_{\kappa}+\tau_{\eta}))}
& \frac{\alpha \sigma(\tau_{\kappa}+\tau_{\eta})}{2(\tau_{\eta}^2 g^2 + m_{0} \alpha(\tau_{\kappa}+\tau_{\eta}))} \\
\frac{\alpha \sigma^2(\tau_{\kappa}+\tau_{\eta}) }{2g(\tau_{\eta}^2 g^2+m_{0} \alpha(\tau_{\kappa}+\tau_{\eta}))}
& \frac{\alpha \sigma^2 (\tau_{\kappa}+\tau_{\eta})}{2(\tau_{\eta}^2 g^2 +m_{0} \alpha(\tau_{\kappa}+\tau_{\eta}))} & \frac{\tau_{\eta} \alpha \sigma g}{2 (\tau_{\eta}^2 g^2+ m_{0} \alpha(\tau_{\kappa}+\tau_{\eta}))} \\
\frac{\alpha \sigma(\tau_{\kappa}+\tau_{\eta})}{2(\tau_{\eta}^2 g^2 + m_{0} \alpha(\tau_{\kappa}+\tau_{\eta}))}
& \frac{\tau_{\eta} \alpha \sigma g}{2 (\tau_{\eta}^2 g^2 + m_{0} \alpha(\tau_{\kappa}+\tau_{\eta}))} & \frac{\alpha}{2 \tau_{\eta}}
\end{array} \right].\end{equation} The result then follows from Corollary \ref{model3}.
\end{proof}
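The symbolic computation can be cross-checked numerically by solving the $3 \times 3$ Lyapunov equation at arbitrary (made-up) parameter values and comparing with the closed-form entries above; for instance:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Cross-check of the closed-form J_11 at made-up parameter values.
m0, tau_k, tau_e, alpha = 1.0, 0.5, 0.3, 2.0
g, sig = 1.4, 0.9                          # g(x), sigma(x) at a fixed x

gam = np.array([[0.0, g / m0, -sig / m0],
                [-alpha * g / tau_k, alpha / tau_k, 0.0],
                [0.0, 0.0, alpha / tau_e]])
shat = np.array([[0.0], [0.0], [alpha / tau_e]])
J = solve_continuous_lyapunov(gam, shat @ shat.T)

J11_closed = (sig**2 / (2 * m0 * g**2)) * (
    (tau_k**2 * g**2 + m0 * alpha * (tau_k + tau_e))
    / (tau_e**2 * g**2 + m0 * alpha * (tau_k + tau_e)))
print(J[0, 0], J11_closed)                 # the two values agree
\end{verbatim}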
\begin{remark} \label{imp_rmk}
Note that here the matrix $\boldsymbol{\gamma}$ in \eqref{why} is not symmetric and the smallest eigenvalue of its symmetric part can be negative. Moreover, the initial condition $\boldsymbol{v}^\epsilon_0$ depends on $\epsilon$ through the component $\eta^\epsilon_0$ (which is a zero mean Gaussian random variable with variance $\alpha/(2\tau_{\eta}\epsilon)$). Thus, we cannot apply the main results in \cite{hottovy2015smoluchowski} to obtain the convergence result. This is our main motivation to revisit the Smoluchowski-Kramers limit of SDEs in Section \ref{skrevisited} under a weakened spectral assumption on the matrix $\boldsymbol{\gamma}$ (or $\boldsymbol{\hat{\gamma}}$ in the multidimensional case) and a relaxed assumption concerning the $\epsilon$ dependence of $\boldsymbol{v}^\epsilon_0$ (or $\hat{\boldsymbol{v}}^\epsilon_0$ in the multidimensional case).
\end{remark}
\begin{remark} \label{fdt_impcase} In the important case when the fluctuation-dissipation relation (i.e. $\tau_{\kappa} = \tau_{\eta}$, $h = g$ and $g$ is proportional to $\sigma$) holds for the one-dimensional models of the first sub-class, the correction drift terms $S^{(2)}$ and $S^{(3)}$ cancel each other and the resulting (single) noise-induced drift term coincides with that obtained in the limit as $m \to 0$ of the systems with no memory, driven by a white noise to which Theorem \ref{skthm} applies directly! However, when the relation fails, we obtain three different drift corrections induced by vanishing of all time scales. Again, the presence of these correction terms may have significant consequences for the dynamics of the systems (see Section \ref{sec:thermophoresis}).
\end{remark}
\subsection{SIDEs Driven by a Non-Markovian Colored Noise}
The following corollary provides a homogenized SDE for the particle's position in the limit in which the inertial time scale, the memory time scale and the noise correlation time scale vanish at the same rate, in the case when the pre-limit dynamics are driven by the harmonic noise. We emphasize that in this case the original system is driven by a noise which is not a Markov process.
\begin{corollary} \label{model4}
Let $d=d_1=d_2=q_1=q_2=q=r$. We set, in the SDEs \eqref{sdec1}-\eqref{sdec6}: $\tau_{\xi} = \tau_{h}$, $\boldsymbol{W}_t^{(q_2)} = \boldsymbol{W}^{(d)}_t = \boldsymbol{W}_t$ and
\begin{equation}
\boldsymbol{\Gamma}_2 =
\begin{bmatrix}
\boldsymbol{0} & -\boldsymbol{I} \\
\boldsymbol{\Omega}^2 & \boldsymbol{\Omega}^2
\end{bmatrix}, \ \ \
\boldsymbol{\Gamma}_1 = \frac{1}{2}
\begin{bmatrix}
\boldsymbol{\Omega}^2 & 4\boldsymbol{I}-\boldsymbol{\Omega}^2 \\
-\boldsymbol{\Omega}^2 & \boldsymbol{\Omega}^2
\end{bmatrix} =: \boldsymbol{T} \boldsymbol{\Gamma}_2 \boldsymbol{T}^{-1}, \end{equation}
\begin{equation}
\boldsymbol{M}_2 =
\frac{1}{2} \begin{bmatrix}
\boldsymbol{I} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{\Omega}^2
\end{bmatrix}, \ \ \ \boldsymbol{M}_1 = 2\boldsymbol{T} \boldsymbol{M}_2 \boldsymbol{T}^*, \end{equation}
\begin{equation} \boldsymbol{C}_2 = [\boldsymbol{I} \ \ \boldsymbol{0}], \ \ \boldsymbol{C}_1 = \boldsymbol{C}_2 \boldsymbol{T}^{-1}, \end{equation}
to obtain SDEs equivalent to equations \eqref{goal3}-\eqref{rescaled_h2} with $\mu = \theta = \nu = 1$. Let $\boldsymbol{x}^\epsilon_{t} \in \RR^{d}$ be the solution to the SDEs \eqref{sdec1}-\eqref{sdec6}, with the matrices $\boldsymbol{g}(\boldsymbol{x})$ and $\boldsymbol{h}(\boldsymbol{x})$ positive definite for every $\boldsymbol{x} \in \RR^d$. Suppose that Assumptions \ref{ass1}-\ref{ass4} are satisfied and, moreover, that $\boldsymbol{g}(\boldsymbol{x})$, $\boldsymbol{h}(\boldsymbol{x})$ and the diagonal matrix $\boldsymbol{\Omega}^2$ commute.
Then as $\epsilon \to 0$, the process $\boldsymbol{x}^\epsilon_{t}$ converges to the solution, $\boldsymbol{X}_{t}$, of the following It\^o SDE
\begin{equation} \label{h_limitSDE}
d\boldsymbol{X}_{t} = \boldsymbol{S}(\boldsymbol{X}_{t}) dt + (\boldsymbol{g}\boldsymbol{h})^{-1}(\boldsymbol{X}_{t})\boldsymbol{F}(\boldsymbol{X}_{t}) dt + (\boldsymbol{g} \boldsymbol{h})^{-1}(\boldsymbol{X}_{t}) \boldsymbol{\sigma}(\boldsymbol{X}_{t}) d\boldsymbol{W}_{t},
\end{equation}
with $\boldsymbol{S}(\boldsymbol{X}_{t}) = \boldsymbol{S}^{(1)}(\boldsymbol{X}_{t}) + \boldsymbol{S}^{(2)}(\boldsymbol{X}_{t}) + \boldsymbol{S}^{(3)}(\boldsymbol{X}_{t}),$ where the $\boldsymbol{S}^{(k)}$ are the noise-induced drift terms whose $i$th components are given by the expressions
\begin{align}
S^{(1)}_{i}(\boldsymbol{X}) &= m_{0} \frac{\partial}{\partial X_{l}}[((\boldsymbol{g}\boldsymbol{h})^{-1})_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{11})_{jl}(\boldsymbol{X}), \label{hnid1} \\
S^{(2)}_{i}(\boldsymbol{X}) &= -\tau_{\kappa}\left( \frac{\partial}{\partial X_{l}}[( \boldsymbol{h}^{-1})_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{21})_{jl}(\boldsymbol{X}) + \frac{\partial}{\partial X_{l}}[( \boldsymbol{h}^{-1} (\boldsymbol{I}-2\boldsymbol{\Omega}^{-2}))_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{31})_{jl}(\boldsymbol{X}) \right), \label{hnid2} \\
S^{(3)}_{i}(\boldsymbol{X}) &= \tau_{h} \left(\frac{\partial}{\partial X_{l}}[((\boldsymbol{g}\boldsymbol{h})^{-1} \boldsymbol{\sigma})_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{41})_{jl}(\boldsymbol{X}) + \frac{\partial}{\partial X_{l}}[((\boldsymbol{g}\boldsymbol{h})^{-1} \boldsymbol{\sigma} \boldsymbol{\Omega}^{-2} )_{ij}(\boldsymbol{X})] (\boldsymbol{J}_{51})_{jl}(\boldsymbol{X}) \right), \ \ \label{hnid3}
\end{align}
where $i,j,l = 1, \dots, d$.
In the above,
\begin{equation} \boldsymbol{\hat{J}} := \begin{bmatrix}
\boldsymbol{J}_{11} & \dots & \boldsymbol{J}_{15} \\
\vdots & \ddots & \vdots \\
\boldsymbol{J}_{51} & \dots & \boldsymbol{J}_{55}
\end{bmatrix} \in \RR^{5d \times 5d}, \ \text{ with } \boldsymbol{J}_{kl} \in \RR^{d \times d}, \ \ k,l = 1,\dots,5,\end{equation} is the block matrix solving the Lyapunov equation \begin{equation} \boldsymbol{\hat{J}} \boldsymbol{\hat{\gamma}}^{*} + \boldsymbol{\hat{\gamma}} \boldsymbol{\hat{J}} = \boldsymbol{\hat{\sigma}} \boldsymbol{\hat{\sigma}}^{*}, \end{equation}
where
\begin{equation} \label{hcase2}
\boldsymbol{\hat{\gamma}} = \left[ \begin{array}{ccccc}
\boldsymbol{0} & \frac{\boldsymbol{g}(\boldsymbol{X}) }{m_{0}} & \frac{\boldsymbol{g}(\boldsymbol{X})}{m_{0}} & -\frac{\boldsymbol{\sigma}(\boldsymbol{X}) }{m_{0}} & \boldsymbol{0} \\
-\frac{\boldsymbol{h}(\boldsymbol{X}) }{\tau_{\kappa}} & \frac{\boldsymbol{\Omega}^2}{2 \tau_{\kappa}} & \frac{2 \boldsymbol{I}}{\tau_{\kappa}} - \frac{\boldsymbol{\Omega}^2}{2 \tau_{\kappa}} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & -\frac{\boldsymbol{\Omega}^2}{2 \tau_{\kappa}} & \frac{\boldsymbol{\Omega}^2}{2 \tau_{\kappa}} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & -\frac{1}{\tau_{h}} \boldsymbol{I} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \frac{\boldsymbol{\Omega}^2}{\tau_{h}} & \frac{\boldsymbol{\Omega}^2}{\tau_{h}} \end{array} \right], \ \ \ \
\boldsymbol{\hat{\sigma}} =
\left[ \begin{array}{c}
\boldsymbol{0} \\
\boldsymbol{0} \\
\boldsymbol{0} \\
\boldsymbol{0} \\
\frac{\boldsymbol{\Omega}^2}{\tau_{h}} \end{array} \right]. \end{equation}
In the above, $\boldsymbol{\hat{\gamma}} \in \RR^{5d \times 5d}$ is a $5 \times 5$ block matrix with each block an $\RR^{d \times d}$-valued matrix, $ \boldsymbol{\hat{\sigma}} \in \RR^{5d \times d}$ is a $5 \times 1$ block matrix with each block an $\RR^{d \times d}$-valued matrix, $\boldsymbol{I}$ is the $d \times d$ identity matrix, $\boldsymbol{0}$ in $\boldsymbol{\hat{\gamma}}$ and $\boldsymbol{\hat{\sigma}}$ denotes the $d \times d$ zero matrix, and $\boldsymbol{W}$ is a $d$-dimensional Wiener process.
The convergence is obtained in the same sense as in Theorem \ref{general_result}.
\end{corollary}
Note that the oscillatory nature of the covariance function of the harmonic noise in the pre-limit SIDE makes the noise-induced drift in the resulting limiting SDE more complicated (there are more terms) compared to the case of the OU process in the first sub-class. Therefore, we write the system of matrix equations that the $\boldsymbol{J}_{kl}$ satisfy in the form of a matrix Lyapunov equation in Corollary \ref{model4}, without breaking it up into equations for individual blocks. This could of course be done, leading to a (more complicated) analog of \eqref{subclass1_start}-\eqref{subclass1_end}. The proof of Corollary \ref{model4} is essentially identical to that of Corollary \ref{model3}, so we omit it.
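For concreteness, the following minimal numerical sketch (not part of the original analysis; all parameter values are illustrative assumptions) assembles $\boldsymbol{\hat{\gamma}}$ and $\boldsymbol{\hat{\sigma}}$ for $d=1$ at a fixed point and solves the Lyapunov equation with SciPy, whose convention $\boldsymbol{A}\boldsymbol{X} + \boldsymbol{X}\boldsymbol{A}^{*} = \boldsymbol{Q}$ coincides with the equation above for real matrices:
\begin{verbatim}
# Minimal numerical sketch (illustrative parameter values only):
# solve  J gamma^T + gamma J = sigma_hat sigma_hat^T  for d = 1
# and read off the blocks J_{11}, ..., J_{51} entering the drift.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

m0, tau_k, tau_h, Omega = 1.0, 0.5, 0.3, 1.2   # hypothetical values
g = h = 2.0                                    # g(X) = h(X) at the point X
sig = 0.7                                      # sigma(X)
O2 = Omega**2

gamma = np.array([
    [0.0,      g/m0,          g/m0,                   -sig/m0,  0.0],
    [-h/tau_k, O2/(2*tau_k),  2/tau_k - O2/(2*tau_k),  0.0,     0.0],
    [0.0,     -O2/(2*tau_k),  O2/(2*tau_k),            0.0,     0.0],
    [0.0,      0.0,           0.0,                     0.0,    -1/tau_h],
    [0.0,      0.0,           0.0,                     O2/tau_h, O2/tau_h]])
sigma_hat = np.array([[0.0], [0.0], [0.0], [0.0], [O2/tau_h]])

# SciPy solves A X + X A^T = Q; with A = gamma this mirrors the
# Lyapunov equation above (assumes gamma admits a unique solution).
J = solve_continuous_lyapunov(gamma, sigma_hat @ sigma_hat.T)
print(J[:, 0])   # J_{11}, J_{21}, J_{31}, J_{41}, J_{51}
\end{verbatim}
The first column of the solution contains the blocks $\boldsymbol{J}_{11},\dots,\boldsymbol{J}_{51}$ that enter the noise-induced drift \eqref{hnid1}-\eqref{hnid3}.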
Again, for special one-dimensional systems, we are going to make the result more explicit.
\begin{corollary} \label{h_1dcase}
In the one-dimensional case, we drop the boldface and write $\boldsymbol{X}_{t} := X_{t} \in \RR, \ \boldsymbol{g}(\boldsymbol{x}) := g(x),$ with $g: \RR \to \RR$, etc. We assume that $h=g$ and that $\boldsymbol{\Omega} := \Omega$ is a real constant. The homogenized equation is given by:
\begin{equation} \label{onedlimitSDE}
dX_{t} = S(X_{t}) dt + g^{-2}(X_{t}) F(X_{t}) dt + g^{-2}(X_{t}) \sigma(X_{t}) dW_{t},
\end{equation}
with $S(X_{t}) = S^{(1)}(X_{t}) + S^{(2)}(X_{t}) + S^{(3)}(X_{t}),$ where the noise-induced drift terms $S^{(k)}(X_{t})$ have the following explicit expressions (computed using Mathematica$\textsuperscript{\textregistered}$), which depend on the parameters $m_{0}$, $\tau_{\kappa}$, $\tau_{h}$ and $\Omega$:
\begin{align}
S^{(1)}(X) &= m_{0}\left(\frac{1}{g^2(X)}\right)'J_{11}(X),\\
S^{(2)}(X) &= -\tau_{\kappa}\left(\frac{1}{g(X)}\right)' \left(J_{21}(X)+\left(1-\frac{2}{\Omega^2} \right) J_{31}(X) \right), \\
S^{(3)}(X) &= \tau_{h} \left( \frac{\sigma(X)}{g^2(X)}\right)' \left(J_{41}(X)+\frac{1}{\Omega^2} J_{51}(X) \right),
\end{align}
where the prime $'$ denotes the derivative with respect to $X$ and the $J_{kl}(X)$ are given by:
\begin{align}
J_{11}(X) &= \frac{\sigma^2}{2m_{0} g^2 R(X)} \bigg( g^4 \tau_{\kappa}^4(\tau_{\kappa}^2+\tau_{\kappa} \tau_{h}\Omega^2 + \tau_{h}^2 \Omega^2) + m_{0}^2 \Omega^4 (\tau_{\kappa}+\tau_{h})^2 (\tau_{\kappa}^2 + \tau_{h}^2 \nonumber \\
&\ \ \ \ \ \ + \tau_{\kappa} \tau_{h} (\Omega^2-2)) + m_{0} \Omega^2 g^2 (\tau_{\kappa} + \tau_{h}) [\tau_{h}^4 + \tau_{\kappa}^2\tau_{h}^2 (\Omega^2-2)+\tau_{\kappa}^4(\Omega^2-1) \nonumber \\
&\ \ \ \ \ \ + \tau_{\kappa}^3 \tau_{h}(2-3\Omega^2+\Omega^4)] \bigg), \\
J_{21}(X) &= \frac{\sigma^2(\tau_{\kappa} + \tau_{h}) \Omega^2}{4g R(X)} \bigg( m_{0}\Omega^2 (\tau_{\kappa} + \tau_{h})(\tau_{\kappa}^2 + \tau_{h}^2 + \tau_{\kappa} \tau_{h} (\Omega^2-2)) \nonumber \\
&\ \ \ + g^2(\tau_{\kappa}^4+\tau_{\kappa}^2 \tau_{h}^2 + \tau_{h}^4+\tau_{\kappa}^3 \tau_{h}(\Omega^2-1)) \bigg), \\
J_{31}(X) &= -\frac{\sigma^2(\tau_{\kappa} + \tau_{h}) \Omega^2}{4g R(X)} \bigg(-m_{0}\Omega^2 (\tau_{\kappa} + \tau_{h})(\tau_{\kappa}^2 + \tau_{h}^2 + \tau_{\kappa} \tau_{h} (\Omega^2-2)) \nonumber \\
&\ \ \ + g^2(\tau_{\kappa}^4+\tau_{\kappa}^2 \tau_{h}^2 - \tau_{h}^4+\tau_{\kappa}^3 \tau_{h}(\Omega^2-1)) \bigg), \\
J_{41}(X) &= \frac{1}{2} \bigg(\sigma \Omega^2 (\tau_{\kappa}+\tau_{h}) [g^2 \tau_{h}^4 + m_{0} \Omega^2 (\tau_{\kappa}+\tau_{h})(\tau_{\kappa}^2+\tau_{h}^2+\tau_{\kappa}\tau_{h}(\Omega^2-2))] \bigg), \\
J_{51}(X) &= -\frac{1}{2} \bigg(\sigma \Omega^2 (\tau_{\kappa}+\tau_{h}) [m_{0} \Omega^2 (\tau_{\kappa}+\tau_{h})(\tau_{\kappa}^2+\tau_{h}^2+\tau_{\kappa}\tau_{h}(\Omega^2-2)) \nonumber \\
&\ \ \ \ \ -g^2\tau_{\kappa}\tau_{h}^2(\tau_{\kappa}+\tau_{h}(\Omega^2-1))] \bigg),
\end{align}
where $g=g(X)$, $\sigma = \sigma(X)$ and
\begin{align}
R(X) &= g^4 \tau_{h}^4 (\tau_{\kappa}^2 + \tau_{\kappa} \tau_{h} \Omega^2+\tau_{h}^2 \Omega^2) + m_{0}^2 \Omega^4 (\tau_{\kappa}+\tau_{h})^2(\tau_{\kappa}^2+\tau_{h}^2+\tau_{\kappa} \tau_{h} (\Omega^2-2)) \nonumber \\
&\ \ \ \ +g^2m_{0}\tau_{h}^2 \Omega^2[\tau_{h}^3\Omega^2+\tau_{\kappa}^3(\Omega^2-2)+\tau_{\kappa}^2 \tau_{h}\Omega^2(\Omega^2-2)+\tau_{\kappa}\tau_{h}^2(2-2\Omega^2+\Omega^4)].
\end{align}
\end{corollary}
Note that if we send $\Omega \to \infty$ in the expressions for the $S^{(i)}(X)$ $(i=1,2,3)$ above, we recover the corresponding expressions given in Corollary \ref{1dcase} (with $\alpha =1$). This is not surprising, since in this limit the harmonic noise becomes an OU process (with $\alpha = 1$).
Moreover, when $\tau_{\kappa} = \tau_{h} = \tau$, the noise-induced drift becomes $S(X) = S^{(1)}(X)+S^{(2)}(X)+S^{(3)}(X),$ where
\begin{align}
S^{(1)} &= \frac{1}{2}\left(\frac{1}{g^2}\right)'\frac{\sigma^2}{g^2},\\
S^{(2)} &= -\frac{2\tau \Omega^2 \sigma^2}{g} \left(\frac{1}{g}\right)' \left(\frac{g^2 \tau + m_{0}\Omega^2(\Omega^2-1)}{4 m_{0}^2 \Omega^6+2 g^2 m_{0} \tau \Omega^4(\Omega^2-1)+g^4 \tau^2 (1+2\Omega^2)} \right), \\
S^{(3)} &= 2 \tau \Omega^2 \sigma \left( \frac{\sigma}{g^2}\right)' \left(\frac{g^2 \tau + m_{0}\Omega^2(\Omega^2-1)}{4 m_{0}^2 \Omega^6+2 g^2 m_{0} \tau \Omega^4(\Omega^2-1)+g^4 \tau^2 (1+2\Omega^2)} \right).
\end{align}
Again, in the case when the fluctuation-dissipation relation holds, we see that the noise-induced drift coincides with the one obtained in the limit as $m \to 0$ of the Markovian model in Section III.
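This limit can be cross-checked symbolically. The sketch below (SymPy; a verification aid, not part of the derivation) confirms that the rational factor common to $S^{(2)}$ and $S^{(3)}$ above, multiplied by its $\Omega^2$ prefactor, tends to a finite limit as $\Omega \to \infty$:
\begin{verbatim}
# Symbolic sketch: the Omega -> infinity limit of the common factor
# (times the Omega^2 prefactor) in S^(2) and S^(3) above is finite,
# equal to 1/(4*m_0 + 2*g**2*tau).
import sympy as sp

m0, tau, Om, g = sp.symbols('m_0 tau Omega g', positive=True)
factor = (g**2*tau + m0*Om**2*(Om**2 - 1)) / (
    4*m0**2*Om**6 + 2*g**2*m0*tau*Om**4*(Om**2 - 1)
    + g**4*tau**2*(1 + 2*Om**2))

print(sp.limit(Om**2*factor, Om, sp.oo))
\end{verbatim}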
\section{Application to the Study of Thermophoresis} \label{sec:thermophoresis}
\subsection{Introduction}
We revisit the dynamics of a free Brownian particle immersed in a heat bath where a temperature gradient is present. This was previously studied in \cite{Hottovy2012a}. It was found there that the particle experiences a drift in response to the temperature gradient, due to the interplay between the inertial time scale and the noise correlation time scale. This phenomenon is called {\it thermophoresis}. We refer to \cite{Hottovy2012a,piazza2008thermophoresis} and the references therein for further descriptions of this phenomenon, including references to experiments.
Here, we will study the dynamics of the particle in a non-equilibrium heat bath, where a {\it generalized fluctuation-dissipation relation} holds, in which both the diffusion coefficient and the temperature of the heat bath vary with the position. In contrast to \cite{Hottovy2012a}, we also take into account the memory time scale (in addition to the inertial time scale and the noise correlation time scale) and model the position of the particle as the solution to a SIDE of the form \eqref{genle_general}. Unlike the model used in \cite{Hottovy2012a}, the present model can be derived heuristically from microscopic dynamics by an argument very similar to that of Appendix \ref{appA}.
For a spherical particle of radius $R$ immersed in a fluid of viscosity $\mu$, which in general is a function of the temperature $T = T(x)$ (and thus depends on $x$ as well), the friction (or damping) coefficient $\gamma$ satisfies the Stokes law \cite{toda2012statistical}:
\begin{equation} \label{stokes}
\gamma(x) = 6 \pi \mu(T) R.
\end{equation}
On the other hand, the damping coefficient $\gamma(x)$ and the noise coefficient $\sigma(x)$ are expressed in terms of the diffusion coefficient $D(x)$ and the temperature $T(x)$ as follows:
\begin{equation} \label{coeff_fd}
\gamma(x) = \frac{k_{B}T(x)}{D(x)}, \ \ \sigma(x) = \frac{k_{B}T(x) \sqrt{2}}{\sqrt{D(x)}}.\end{equation}
In the following, we study two one-dimensional non-Markovian models of thermophoresis. The first model is driven by a Markovian colored noise and the second model by a non-Markovian one.
\subsection{A Thermophoresis Model with Ornstein-Uhlenbeck Noise}
In this section we model the evolution of the position, $x_{t} \in \RR$, of a particle by the following SIDE:
\begin{equation} \label{thermo}
m \ddot{x}_{t} = - \sqrt{\gamma(x_{t})} \int_{0}^{t} \alpha e^{-\alpha(t-s)} \sqrt{\gamma(x_{s})} \dot{x}_{s} ds + \sigma(x_{t}) \eta_{t},
\end{equation}
where $\eta_{t}$ is a stationary process satisfying the SDE:
\begin{equation} \label{outhermo}
d\eta_{t} = -\alpha \eta_{t} dt + \alpha dW_{t}.
\end{equation}
The above equations are obtained by setting $d=1$, $\boldsymbol{F} = 0$, $\boldsymbol{h} = \boldsymbol{g} = g := \sqrt{\gamma}$, $\boldsymbol{\sigma} = \sigma$ in $\eqref{side2}$ and $\boldsymbol{A} = \alpha$ in $\eqref{ou}$, where $\gamma$ and $\sigma$ are given by \eqref{coeff_fd}. Note that the noise correlation function is proportional to the memory kernel in the SIDE \eqref{thermo}, i.e.
\begin{equation} E[\eta_{t} \eta_{s}] = \frac{\alpha}{2} e^{-\alpha|t-s|} = \frac{1}{2}\kappa_{1}(t-s), \ s,t \geq 0\end{equation}
as in \eqref{fdt_sc1}. Together with \eqref{coeff_fd}, this implies that \eqref{thermo} satisfies the generalized fluctuation-dissipation relation (see the statement of Corollary \ref{gen_fdt} and Remark \ref{fdt_impcase}). Note also that $g$ is a constant multiple of $\sigma$ if and only if $T$ is position-independent.
We now consider the effective dynamics of the particle in the limit when all three characteristic time scales vanish at the same rate. In the following, the prime $'$ denotes the derivative with respect to the argument of the function.
\begin{corollary} \label{cor1_thermo}
Let $\epsilon > 0$ be a small parameter and let the particle's position, $x^\epsilon_t \in \RR$ ($t \geq 0$), satisfy the following rescaled version of \eqref{thermo}-\eqref{outhermo}:
\begin{align}
dx^\epsilon_t &= v^\epsilon_t dt, \label{rescaled_m1_thermo0} \\
m_0 \epsilon d v^\epsilon_{t} &= \sigma(x^\epsilon_{t}) \eta^\epsilon_{t} dt - \sqrt{\gamma(x^\epsilon_{t})} \left( \int_{0}^{t} \frac{\alpha}{\tau \epsilon} e^{-\frac{\alpha}{\tau \epsilon}\left(t-s\right)} \sqrt{\gamma(x^\epsilon_{s})} v^\epsilon_{s} ds \right) dt, \label{rescaled_m1_thermo1} \\
\tau \epsilon d\eta^\epsilon_t &= -\alpha \eta^\epsilon_t dt + \alpha dW_t, \label{rescaled_m1_thermo2}
\end{align}
where $m_0$, $\alpha$, $\tau$ are positive constants, and $(W_t)$ is a one-dimensional Wiener process. The initial conditions are random variables $x^\epsilon_0 = x$, $v^\epsilon_0 = v$, independent of $\epsilon$ and (statistically) independent of $(W_t)$, and $\eta^\epsilon_0$ is distributed according to the invariant distribution of the SDE \eqref{rescaled_m1_thermo2}.
Assume that the assumptions of Corollary \ref{model3} are satisfied (in particular, $\gamma(x) > 0$ for every $x \in \RR$). Then, in the limit as $\epsilon \to 0$, $x^\epsilon_t$ converges (in the same sense as in Corollary \ref{model3}) to the process $X_{t} \in \RR$, satisfying the SDE:
\begin{equation} \label{limitthermo}
dX_{t} = b_{1}(X_{t}) dt + \sqrt{2 D(X_{t})} dW_{t},
\end{equation}
with the noise-induced drift, $b_{1}(X) = S^{(1)}(X) + S^{(2)}(X) + S^{(3)}(X)$, where
\begin{align}
S^{(1)}(X) &= D'(X)-\frac{D(X)T'(X)}{T(X)},\\
S^{(2)}(X) &= \left[-\frac{k_{B}T(X)D'(X)}{D(X)}+ k_{B}T'(X) \right] \cdot \left[\frac{ \tau D(X)}{\tau k_{B}T(X)+2m_{0}\alpha D(X)} \right], \\
S^{(3)}(X) &= \left[\frac{k_{B}T(X)D'(X)}{D(X)} \right] \cdot \left[\frac{ \tau D(X)}{\tau k_{B}T(X)+2m_{0}\alpha D(X)} \right].
\end{align}
\end{corollary}
\begin{proof}
The corollary follows from Corollary \ref{1dcase}. In particular, the expressions for $S^{(1)}$, $S^{(2)}$ and $S^{(3)}$ follow from applying Corollary \ref{1dcase} to the present system (see \eqref{86}-\eqref{88}).
\end{proof}
We give some remarks and discussions of the contents of Corollary \ref{cor1_thermo} before we end this subsection.
\begin{remark}
We see that in this case a part of $S^{(2)}$ cancels $S^{(3)}$ and therefore the noise-induced drift simplifies to:
\begin{equation} \label{thermodrift}
b_{1}(X) = D'(X)-\frac{2m_{0}\alpha D^2(X)}{\tau k_{B}T(X)+2m_{0}\alpha D(X)}\frac{T'(X)}{T(X)}.
\end{equation}
Using the Stokes law \eqref{stokes} which gives \begin{equation}D(X) = \frac{k_{B}}{6 \pi R} \frac{T(X)}{\mu(T)}, \end{equation}
where $\mu(T) = \mu(T(X))$, we have
\begin{equation} \label{thermodrift2}
b_{1}(X) = k_{B} T'(X) \left( \frac{\tau}{2(\alpha m_{0} + 3 \pi R \tau \mu(T))} - \frac{\mu'(T) T(X)}{6 \pi R \mu^2(T)} \right).
\end{equation}
Equation \eqref{thermodrift} gives the thermophoretic drift in the limit when the three characteristic time scales vanish. Since it arises in the absence of an external force acting on the particle, it is a ``spurious drift'' caused by the presence of the temperature gradient and the state-dependence of the diffusion coefficient. Compared to eqn. (101) in \cite{hottovy2015smoluchowski}, the drift term derived here contains a correction term due to the temperature profile.
\end{remark}
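For orientation, the drift \eqref{thermodrift} is easy to evaluate numerically; the sketch below uses hypothetical $T(X)$ and $D(X)$ profiles and parameter values chosen purely for illustration:
\begin{verbatim}
# Sketch: evaluate b_1(X) of eq. (thermodrift) for hypothetical
# D(X) and T(X) profiles (all numerical values are illustrative).
import numpy as np

kB, m0, alpha, tau = 1.380649e-23, 1e-15, 1.0, 1e-15
T = lambda x: 300.0 + 10.0*x            # temperature profile [K]
D = lambda x: 1e-12*(1.0 + 0.1*x)       # diffusion coefficient [m^2/s]

def b1(x, dx=1e-6):
    Dp = (D(x + dx) - D(x - dx))/(2*dx)           # D'(X)
    Tp = (T(x + dx) - T(x - dx))/(2*dx)           # T'(X)
    return Dp - (2*m0*alpha*D(x)**2
                 / (tau*kB*T(x) + 2*m0*alpha*D(x)))*Tp/T(x)

print(b1(0.0))
\end{verbatim}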
\noindent {\bf Discussion.} We discuss some physical implications of the thermophoretic drift given in $\eqref{thermodrift}$. As discussed in \cite{Hottovy2012a}, the sign of $b_{1}(X)$ determines the direction in which
the particle is expected to travel. The particle will eventually reach some boundaries, which can be either absorbing or reflecting. We are going to consider the case of reflecting boundaries. The position of the particle reaches a steady-state distribution $\rho_{\infty}(X)$ in the limit $t \to \infty$. Assuming that the particle is confined to the interval $(a,b)$, $a<b$, one can compute the stationary density:
\begin{equation} \label{stat_den}
\rho_{\infty}(X) = C \exp{\left(-\int_{a}^{X} \frac{2\alpha}{r \gamma(y) + 2\alpha} \frac{T'(y)}{T(y)} dy \right)},
\end{equation}
where in terms of the original parameters of the model, $r := \tau/m_{0} > 0$, and $C$ is a normalizing constant. In particular, in the absence of a temperature gradient ($T'(y) = 0$), the particle is equally likely to be found anywhere in $(a,b)$, whereas when a temperature gradient is present, the distribution of the particle's position is not uniform. In the limit $r \to \infty$, the particle's position is again distributed uniformly on $(a,b)$. On the other hand, in the limit $r \to 0$ the stationary density is inversely proportional to the temperature, i.e. $\rho_{\infty}(X) = \tilde{C} T(X)^{-1},$ where $\tilde{C}$ is a normalizing constant. Thus, the particle is more likely to be found in the colder region. In the special case when $D(X)$ is proportional to $T(X)$, so that $\gamma$ is independent of $X$, we have \begin{equation}\rho_{\infty}(X) = \tilde{C} T(X)^{-\frac{2\alpha}{2\alpha+r \gamma}},\end{equation}
where $\tilde{C}$ is a normalizing constant, so the particle is more likely to be found in the colder region, with the likelihood decreasing as $r$ increases.
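The stationary density \eqref{stat_den} can also be evaluated by direct quadrature; the following sketch (again with hypothetical profiles and parameter values) computes the exponent numerically and fixes the constant $C$ by normalization:
\begin{verbatim}
# Sketch: stationary density rho_infty on (a, b) by quadrature of
# the exponent in eq. (stat_den) (hypothetical profiles/parameters).
import numpy as np
from scipy.integrate import quad

kB, alpha, r = 1.380649e-23, 1.0, 1.0e3   # r = tau/m0
T  = lambda y: 300.0 + 10.0*y
Tp = lambda y: 10.0
D  = lambda y: 1e-12*(1.0 + 0.1*y)
gam = lambda y: kB*T(y)/D(y)              # gamma from eq. (coeff_fd)

a, b = 0.0, 1.0
expo = lambda X: quad(
    lambda y: 2*alpha/(r*gam(y) + 2*alpha)*Tp(y)/T(y), a, X)[0]

xs = np.linspace(a, b, 201)
rho = np.exp([-expo(x) for x in xs])
rho /= rho.sum()*(xs[1] - xs[0])          # fix the constant C numerically
\end{verbatim}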
Next, we are going to study the sign of the thermophoretic drift directly using \eqref{thermodrift} (this is in contrast to the approach in \cite{Hottovy2012a}, where $\mu(T)$ is expanded around a fixed temperature). We find that $b_{1}(X) > 0$ if and only if $r > r_c$, where $r_c$ is the critical value of the ratio $\tau/m_0$, given by:
\begin{equation}
r_c = \frac{\alpha}{3 \pi R \mu(T)} \left(\frac{\mu'(T) T(X)}{\mu(T) -\mu'(T)T(X)} \right), \end{equation}
where $\mu(T) = \mu(T(X))$ is obtained from the Stokes law.
For $r = r_c$, the stationary density \eqref{stat_den} reduces to: \begin{equation}\rho^{c}_{\infty}(X) = C \frac{\mu(T(X))}{T(X)}, \end{equation} where $C$ is a normalizing constant.
Importantly, note that the drift vanishes identically (and thus cannot change sign) if $T$ is independent of $X$.
Finally, we discuss a special case. When $\mu(T)=\mu_{0} > 0$ is a constant (so that $\gamma(X)$ is a constant), the thermophoretic drift is given by:
\begin{equation}b_{1}(X) = \frac{k_{B}T'(X)}{6 \pi R \mu_{0}} \left[1 - \frac{\alpha}{\alpha + 3 \pi r R \mu_{0}} \right].\end{equation} In agreement with the result in \cite{Hottovy2012a}, $b_{1}(X)$ has the same sign as $T'(X)$, leading to
a flow towards the hotter region. The steady-state density is \begin{equation}\rho_{\infty}(X) = C T(X)^{-\frac{\alpha}{\alpha + 3 \pi r R \mu_{0}}},\end{equation} where $C$ is a normalizing constant, and the particle is more likely to be found in the colder region for all $r > 0$, even though the thermophoretic drift actually directs the particle towards the hotter regions. This effect is in agreement with experiments, and is explained by the presence of reflecting boundary conditions.
\subsection{A Thermophoresis Model with Non-Markovian (Harmonic) Noise}
We repeat the analysis of the previous subsection in the case when the colored noise is a harmonic noise. We set $d=1$, $\boldsymbol{F} = 0$, $\boldsymbol{h} = \boldsymbol{g} = g := \sqrt{\gamma}$, $\boldsymbol{\sigma} = \sigma$, $\boldsymbol{\Omega} = \Omega$, $\boldsymbol{\Omega}_0 = \Omega_0 := \Omega \sqrt{1-\Omega^2/4}$, $\boldsymbol{\Omega}_1 = \Omega_1 := \Omega/\sqrt{1-\Omega^2/4}$ (where $|\Omega|<2$) in the SIDE $\eqref{side3}$ and study the effective dynamics of the resulting system as before. The case where $|\Omega|>2$ can be studied analogously. The following result follows from Corollary \ref{h_1dcase}.
\begin{corollary} \label{cor_thermo_2} Let $\epsilon > 0$ be a small parameter and the particle's position, $x^\epsilon_t \in \RR \ (t \geq 0)$, satisfy the following rescaled SDEs:
\begin{align}
dx^\epsilon_t &= v^\epsilon_t dt, \label{rescaled_thermo1} \\
m_0 \epsilon dv^\epsilon_{t} &= -\frac{\sqrt{\gamma(x^\epsilon_{t})}}{\tau \epsilon} \left( \int_{0}^{t} e^{-\frac{\Omega^2}{2\tau \epsilon}\left(t-s\right)}\left[\cos\left(\frac{\Omega_{0}}{\tau \epsilon}(t-s) \right) + \frac{\Omega_{1}}{2} \sin\left(\frac{\Omega_{0}}{\tau \epsilon}(t-s) \right) \right] \sqrt{\gamma(x^\epsilon_s)} v^\epsilon_{s} ds \right) dt \nonumber \\
&\ \ \ \ \ + \sigma(x^\epsilon_{t}) h^\epsilon_{t} dt, \label{rescaled_thermo2} \\
\tau \epsilon dh^\epsilon_t &= u^\epsilon_t dt, \label{rescaled_thermo3} \\
\tau \epsilon du^\epsilon_t &= -\Omega^2 u^\epsilon_t dt - \Omega^2 h^\epsilon_t dt + \Omega^2 dW_t, \label{rescaled_thermo4}
\end{align}
where $m_0$ and $\tau$ are positive constants, $\Omega$, $\Omega_0$ and $\Omega_1$ are constants defined as before, and $(W_t)$ is a one-dimensional Wiener process. The initial conditions are given by the random variables $x^\epsilon_0 = x$, $v^\epsilon_0 = v$, independent of $\epsilon$, and $(h^\epsilon_0, u^\epsilon_0)$ are distributed according to the invariant measure of the SDEs \eqref{rescaled_thermo3}-\eqref{rescaled_thermo4}.
Assume that the assumptions in Corollary \ref{model4} are satisfied. Then, in the limit as $\epsilon \to 0$, the process $x^\epsilon_t$ converges (in the same sense as Corollary \ref{model4}) to the process $X_{t} \in \RR$, satisfying the SDE:
\begin{equation}dX_{t} = b_{2}(X_{t}) dt + \sqrt{2D(X_{t})} dW_{t}, \end{equation}
where the noise-induced drift term is given by:
\begin{align}
b_{2}(X) &= D'(X) \\
&\ \ \ - \frac{(4m_{0}^2 \Omega^6 D^2(X)+\tau^2 (k_{B}T(X))^2)D(X)}{4m_{0}^2 \Omega^6 D^2(X)+2k_{B}T(X)m_{0}\tau \Omega^4(\Omega^2-1)D(X)+\tau^2(1+2\Omega^2)(k_{B}T(X))^2}\frac{T'(X)}{T(X)}.
\end{align}
\end{corollary}
We next discuss the contents of Corollary \ref{cor_thermo_2}.
\begin{remark}
Note that $b_{2}(X)$ differs from the previously obtained $b_{1}(X)$; however, $b_{2}(X) \to b_{1}(X)$ (with $\alpha = 1$ in the expression for $b_{1}(X)$) in the limit $\Omega \to \infty$.
\end{remark}
\noindent {\bf Discussion.}
In the case of reflecting boundaries, the stationary distribution of the particle's position is
\begin{align}
&\rho_{\infty}(X) \nonumber \\
&= C \exp{\left(-\int_{a}^{X} \frac{D(y)(4\Omega^6D^2(y)+r^2 (k_{B}T(y))^2)}{4 \Omega^6 D^2(y)+2r\Omega^4(\Omega^2-1)D(y)k_{B}T(y)+r^2(1+2\Omega^2)(k_{B}T(y))^2 }\frac{T'(y)}{T(y)} dy \right)}, \end{align}
where $r := \tau/m_{0} > 0$ and $C$ is a normalizing constant. Similarly to the previous model, in the absence of a temperature gradient (i.e. when $T$ is a constant), the particle is equally likely to be found anywhere in $(a,b)$. When a temperature gradient is present, the distribution of the particle's position is not uniform. However, in contrast to the previous model, in the limit $r \to \infty$ the particle is not distributed uniformly on $(a,b)$, and in the limit $r \to 0$ the stationary density is no longer inversely proportional to the temperature. Both distributions depend on the diffusion coefficient $D(X)$ as well as on the temperature profile $T(X)$.
We can also study the sign of the thermophoretic drift. In this case there can be up to two critical ratios,
$r_{c}$, at which $b_{2}(X)$ changes sign, as the equation $b_{2}(X) = 0$ is a quadratic equation in $r$. In the special case when $\mu(T)=\mu_{0} > 0$ is a constant (and thus so is $\gamma(X)$), the thermophoretic drift is given by:
\begin{equation}b_{2}(X) = \frac{k_{B}T'(X)}{6 \pi R \mu_{0}} \left[1 - \frac{\Omega^6+9\pi^2R^2r^2\mu^{2}_{0}}{\Omega^6+3 \pi R r \Omega^4(\Omega^2-1)\mu_{0} +9\pi^2 R^2 r^2(1+2\Omega^2)\mu^2_{0}} \right].\end{equation} In contrast to the result in the previous model, $b_{2}(X)$ has the same sign as $T'(X)$ provided that \begin{equation}r > \frac{\Omega^2(1-\Omega^2)}{6\pi R \mu_{0}}.\end{equation} Thus, $b_{2}(X)$ and $T'(X)$ do not share the same sign for all $r>0$ unless $|\Omega| \geq 1$. Therefore, according to this model, the presence of a temperature gradient allows us to tune the parameters $(m_{0}, \tau, \Omega)$ to control the direction in which the particle travels.
The steady-state density in this case is \begin{equation}\rho_{\infty}(X) = C T(X)^{-\frac{\Omega^6+9\pi^2R^2r^2\mu^{2}_{0}}{\Omega^6+3 \pi R r \Omega^4(\Omega^2-1)\mu_{0} +9\pi^2 R^2 r^2(1+2\Omega^2)\mu^2_{0}}},\end{equation} where $C$ is a normalizing constant. The particle is more likely to be found in the colder region for all $r>0$ if $|\Omega| \geq 1$, whereas this need not be true for all $r>0$ if $|\Omega| < 1$.
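The sign condition above admits a quick symbolic cross-check: writing $E(r)$ for the bracketed ratio in the expression for $b_{2}(X)$, the SymPy sketch below factors the numerator of $1-E(r)$, whose sign is controlled by $\Omega^2(\Omega^2-1)+6\pi R r \mu_{0}$:
\begin{verbatim}
# Symbolic sketch: factor the numerator of 1 - E(r), E(r) being the
# bracketed ratio above, to expose the quoted sign condition.
import sympy as sp

r, R, mu0, Om = sp.symbols('r R mu_0 Omega', positive=True)
num = Om**6 + 9*sp.pi**2*R**2*r**2*mu0**2
den = (Om**6 + 3*sp.pi*R*r*Om**4*(Om**2 - 1)*mu0
       + 9*sp.pi**2*R**2*r**2*(1 + 2*Om**2)*mu0**2)

# (den - num) = 3*pi*R*r*mu_0*Omega^2*(Omega^2*(Omega^2-1)+6*pi*R*r*mu_0)
print(sp.factor(den - num))
\end{verbatim}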
\section{Conclusions and Final Remarks} \label{conc}
We have studied homogenization of a class of GLEs in the limit when three characteristic time scales, i.e. the inertial time, the characteristic memory time in the damping term, and the correlation time of the colored noise driving the equations, vanish at the same rate. We have derived effective equations, which are simpler in three respects:
\begin{itemize}
\item[1.] The velocity variables have been homogenized. As a result, the number of degrees of freedom is reduced and there are no fast variables left.
\item[2.] The equations become regular SDEs, since the memory time has been taken to zero.
\item[3.] The system is driven by white noise.
\end{itemize}
Importantly, {\it noise-induced drifts} are present in the limiting equations, resulting from the dependence of the coefficients of the original model on the state of the system. We have applied the general results to a study of thermophoretic drift, correcting the formulae obtained in an earlier work \cite{Hottovy2012a}. In systems satisfying a fluctuation-dissipation relation, the noise-induced drifts in the limiting SDEs for the particle's position reduce to a single term, and in special cases the limiting SDEs coincide with those of \cite{hottovy2015smoluchowski}. However, in the more general case, new terms appear, absent in the case without memory. To prove the main theorem, we have employed the main result of \cite{hottovy2015smoluchowski}, proven here in a different version under a relaxed assumption on the damping matrix and the initial conditions.
Homogenization of other specific non-Markovian models can also be studied using the methods of this paper. An example is a system with an exponentially decaying memory kernel, driven by white noise, in the limit as the inertial and memory time scales vanish at the same rate. In this case the noise-induced drift in the limiting equation will consist of two terms, not three, as in the case studied here.
The colored noises considered in this paper have correlations decaying exponentially (short-range memory). It would be interesting to study cases where the GLE is driven by other colored noises such as fractional Gaussian noises, with covariances decaying as a power law, relevant for modeling anomalous diffusion phenomena in fields ranging from biology to finance \cite{kou2008stochastic}. As mentioned in Section 2, we will explore homogenization for GLEs with vanishing effective damping and diffusion constant in a future work.
\begin{acknowledgements}
The authors were partially supported by NSF grant DMS-1615045. S. Lim is grateful for the support provided by the Michael Tabor Fellowship from the Program in Applied Mathematics at the University of Arizona during the academic year 2017-2018. The authors learned the method of introducing additional variables to eliminate the memory term from E. Vanden-Eijnden. They would like to thank Maciej Lewenstein for insightful discussion on the GLEs.
\end{acknowledgements}
\bibliographystyle{spmpsci}
The long and (almost) uninterrupted observations of the {\it Kepler}
space telescope allow us to investigate moderate or small amplitude
brightness variations with long periods.
These studies generally need a lot of telescope time and high precision
simultaneously; therefore space photometry is ideal for them.
Such an interesting phenomenon is the Blazhko effect \citep{Bla07}
of RR\,Lyrae stars, the long period amplitude (AM) and frequency
modulation (FM) of the observed light curves.
Formerly, the effect was defined as the presence of (at least) one of these two
types of modulations; however, recent investigations have always found
both of them simultaneously. The typical pulsation (light variation) period
of a fundamental mode pulsating RR\,Lyrae (RRab) star is about half a day
with 0.5-1~mag amplitude, while the Blazhko modulation time-scale is
generally 10-1000 times longer and its amplitude is around a few tenths
of magnitudes or smaller.
These characteristics hampered the investigation of the Blazhko effect in the past.
Now, the long time coverage, the high duty cycle and precise observations of
{\it Kepler} allow us to address questions such as how regular the
Blazhko effect is and how frequent multiperiodic Blazhko stars are.
A better understanding of the Blazhko phenomenon
is important, because the effect is frequent
(the incidence of the Blazhko effect among
RR\,Lyrae stars is high: 30 -- 50\% depending on the sample
considered) and its physical origin -- despite the serious
efforts of the last hundred years -- is still unknown.
As of today, two competing explanations of the Blazhko effect
have survived among the many previously suggested ones (see \citealt{Kol12}
or \citealt{Kov09} for reviews). Stothers' idea
\citep{Sto06} explains the effect through the influence
of local transient magnetic fields excited by the turbulent convection.
The first quantitative tests of this idea found
serious inconsistencies between theory and observations
\citep{Smolec11, Mol12}.
\cite{Bu11} suggested a model where the modulation is caused
by a resonant coupling between a low order radial (typically the fundamental)
mode and a high order radial (so-called strange) mode.
This model is based on the amplitude equation formalism, but
has not yet been tested by hydrodynamic computations.
Both of these theories can potentially predict variable Blazhko cycles.
In the first case the variation could be
quasi-periodic with a stochastic nature,
while in the second case it can be regular:
single or multiperiodic, or chaotic.
\section{Data}\label{data}
The main characteristics of the {\it Kepler Mission} are described in
\cite{Koch10} and \cite{Jen10a, Jen10b}. Technical details can be
found in these handbooks: \citet{KIH,DPH,KDCH}. We summarize here
only those basic facts about the instrument that proved to be important
in our analysis.
The space telescope orbits the Sun and observed a single
fixed field in the Cygnus/Lyra region continuously. To ensure
optimal illumination of its solar cells, the spacecraft was
rolled by 90 degrees four times a year.
As a consequence, each target star's light is collected on
four different CCDs according to the quarterly positions. This
implies possible systematic differences among data from
different quarters.
When we look at the flux variation curves of an RR\,Lyrae star,
the zero points and amplitudes are
evidently different from quarter to quarter for most of the stars
(see Fig.~\ref{llc_ny} for an example).
Due to the limited telemetric capacity,
only small areas around each selected target star were downloaded.
We will refer to these areas in this paper as `stamps'. Within these stamps,
`optimal apertures' (in {\it Kepler} jargon) were selected from
the 1024 pre-defined ones for each star and quarter separately.
The photometry done on these apertures defines the {\it Kepler} flux variation curves.
These apertures are, however, optimal only if the light variation of the target
is less than about a tenth of a magnitude \citep{DPH}.
Since the total amplitude of pulsation for {\it Kepler} RR\,Lyrae stars
is between 0.47 and 1.1 mag \citep{Nemec13} (hereafter N13),
these pre-defined apertures are no longer optimal: a significant
fraction of the flux falls outside these apertures.
This effect, together with the fact that even
the apertures may differ from quarter to quarter for a given star, explains
most of the differences between the amplitudes and average fluxes
belonging to different quarters.
\subsection{The Sample}
\begin{figure}
\includegraphics[width=8.5cm]{llc_ny}
\caption[]{
Top panel: `optimal aperture'
flux variation curve of V783\,Cyg ({\it Kepler} archive).
Middle panel: the flux variation curve prepared by using the best tailor-made
aperture. Bottom panel: scaled, shifted and detrended curve. (For better visibility,
only the first six quarters are plotted.)
} \label{llc_ny}
\end{figure}
\begin{figure*}
\includegraphics[width=8cm]{Q1_global}
\includegraphics[width=8cm]{Q1_local}
\caption[]{
Construction of the tailor-made apertures.
The entire pixel mask around \object{V783 Cyg} (KIC~5559631) in the
first quarter. Grey pixels: elements of the `optimal apertures',
white pixels: all other downloaded pixels (`stamp').
We plotted the Q1 time series of each pixel individually.
On the left hand side all pixel light curves are scaled between zero and the
maximum flux. On the right hand side all pixel
light curves are scaled individually.
We collected all those pixels that show the signal of the star
and omitted those ones
which include noise or background sources only.
} \label{Q1_map}
\end{figure*}
We assembled our Blazhko star sample in the {\it Kepler} field in the following way.
The sample of \cite{Benko10} contains fourteen stars.
One of them -- \object{V349 Lyr} (KIC~7176080) -- proved to be
a non-modulated star, as suggested by \cite{Nemec11}.
This finding has been confirmed by the present work by
checking its pixel photometric data. We also include
three additional stars that were analyzed by N13.
The extremely small Blazhko effect of V838\,Cyg (KIC~10789273)
was revealed by N13, while the (Blazhko)
RR\,Lyrae nature of \object{KIC 7257008} and \object{KIC 9973633} was discovered by
the ASAS survey (\citealt{Poj97, Poj02}; Szab\'o et al. in prep.)
while the {\it Kepler} measurements were already in progress. That is the reason
why we have data on these two targets from Q10 onwards only.
The other 13 stars were pre-selected by the {\it Kepler}
Asteroseismic Science Consortium\footnote{\url{http://astro.phys.au.dk/KASC/}} (KASC)
and were observed during the whole mission. N13 briefly noted two additional
Blazhko candidates; however, both of those stars are
faint and merge with neighboring bright sources.
Because of the serious problems concerning the
separation of their signals from their close companions
we omitted them from our investigations. (They will be discussed in a forthcoming paper.)
\object{RR Lyr} itself is also in the {\it Kepler} field, but
its image is always saturated. Therefore, recovering its original
signal requires extra caution and
special techniques (e.g. custom apertures).
Many successful efforts have been made in this direction
\citep{Kolenberg10, Kolenberg11, Molnar12}.
We will only refer to those results on RR\,Lyr.
Our final sample consists of fifteen Blazhko stars.
The exposure time of the {\it Kepler} camera is 6.02~s with 0.52~s readout time.
The long cadence (LC) data result from 270 exposures co-added with a total 1766~s
integration time. Since we concentrate on long period effects, we generally
used these LC data only, especially as the time-span of the short cadence
data (SC: 9 frames co-added, 58.85~s) is usually no more than a
single quarter.
The commissioning phase data (Q0), taken between 2009
May 2 and 11 (9.7\,d), included only one Blazhko star: \object{KIC 11125706}.
The observations of other targets began with Q1
on 2009 May 13. Here we analyzed LC data
to the end of the last full quarter (Q16, till 2013 Apr 8).
The total lengths of the data covers 3.9 years.
The CCD module No.~3 failed during Q4 (on 2010 January 12); the targets
located on this module have quarter-long gaps in their time series data.
Six stars of our fifteen-element sample suffer from this defect
(see Tab.~\ref{Blazhko_stars} and Fig.~\ref{zoo}).
The combined number of data points for a given star is
between 19\,249 (KIC~9973633) and 61\,351 (V783\,Cyg);
the typical value is about 50\,000-60\,000.
Data are public and can be downloaded from the web page of
MAST\footnote{\url{http://archive.stsci.edu/kepler/}}.
\subsection{Data Processing}\label{data_processing}
\begin{figure*}
\includegraphics[width=17cm]{llc_2}
\caption[]{
Flux variation curves from the data processing.
The figure shows curves of V783\,Cyg from Q2-Q4.
From left to the right: the archived {\it Kepler} data,
fluxes extracted from the best tailor-made apertures, and
final rectified data, respectively. The arrows illustrate the internal trends.
$A^{\mathrm T}_2(j)$ and $A^{\mathrm T}_4(k)$ are the total pulsation
amplitude of the archived {\it Kepler} Q2 and Q4 data at the $j$th and $k$th pulsation
cycles, respectively. These amplitudes are also shown in the tailor-made data plot for
comparison.
} \label{llc_2}
\end{figure*}
In this subsection we summarize the main steps performed before our analysis.
The {\it Kepler} data are available for each source in two forms: (1) as
{\it photometric time series}: flux variation curves (flux vs. time) prepared
from the pre-defined optimal apertures, and (2) as {\it image time series}
(image of the stamp vs. time). The latter data sets are often referred to as `pixel data'.
For the above mentioned reasons we used these
pixel data. After we downloaded them from the MAST web page,
PyKE\footnote{\url{http://keplergo.arc.nasa.gov/PyKE.shtml}}
routines provided by the Kepler Guest Observer Office were used to extract
the flux variation curves of each pixel in the stamps of a given star.
Since the pixel files before file version 5.0 have a time stamp
error, we corrected it by using the PyKE {\tt keptimefix} tool.
\begin{table*}
\begin{center}
\caption{Sample from a rectified data file}
\label{sample_data}
\footnotesize{
\begin{tabular}{rcrccrr}
\tableline\tableline
No & Time & Flux & Zero point offset
& Scaling factor & Corrected flux & Corrected K$_{\mathrm p}$\\
& (BJD$-$2454833) & (e$^{-}$s$^{-1}$) & (e$^{-}$s$^{-1}$) &
& (e$^{-}$s$^{-1}$) & (mag) \\
\tableline
1 & 131.5123241 & 5322.6 & $-400.00$ & 1.000 & 5402.03436789 & 0.39793763 \\
2 & 131.5327588 & 5393.9 & $-400.00$ & 1.000 & 5473.33140201 & 0.38370163 \\
3 & 131.5531934 & 5496.7 & $-400.00$ & 1.000 & 5576.12843615 & 0.36349907 \\
4 & 131.5736279 & 5498.1 & $-400.00$ & 1.000 & 5577.52547030 & 0.36322708 \\
5 & 131.5940625 & 5488.2 & $-400.00$ & 1.000 & 5567.62250444 & 0.36515654 \\
6 & 131.6144972 & 5571.6 & $-400.00$ & 1.000 & 5651.01953856 & 0.34901397 \\
7 & 131.6349317 & 6347.2 & $-400.00$ & 1.000 & 6426.61657271 & 0.20937502 \\
8 & 131.6553663 & 8478.6 & $-400.00$ & 1.000 & 8558.01360685 & $-0.10160144$ \\
9 & 131.6758010 & 14437.0 & $-400.00$ & 1.000 & 14516.41064097 & $-0.67531712$ \\
10 & 131.6962356 & 15817.5 & $-400.00$ & 1.000 & 15896.90767511 & $-0.77395064$ \\
$\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\
\tableline
\end{tabular}
}
\tablecomments{The first ten data lines from the file of \object{V2178 Cyg} ({\tt kplr003864443.tailor-made.dat}).
The columns contain serial numbers, baricentric Julian dates, flux extracted from the tailor-made
aperture, zero point offsets, scaling factors (1.0 = no scaling), stitched (shifted, scaled and trend filtered) flux and
their transformation into the K$_{\mathrm p}$ magnitude scale, respectively. See the text for the details.
}
\end{center}
\end{table*}
We investigated the flux variation curves of all individual pixels separately.
We illustrate this process in Fig.~\ref{Q1_map}.
Here the stamp of Q1 around the star V783\,Cyg (KIC~5559631)
is plotted on two different scales.
The gray pixels symbolize the elements of the pre-defined `optimal' aperture.
We plotted the Q1 time series of each pixel individually. On the left panel
all pixel flux variation curves are scaled between zero and the
maximum flux attained by the brightest pixel in the stamp. This option
shows the relative contribution to the archived flux variation curve by each
pixel.
The total flux of the star comes obviously from a few pixels only.
On the right panel all individual pixel flux variation curves are
scaled separately between their minima and maxima
to ensure the largest dynamic range for each pixel.
This map reveals that there is some flux from outside of the
original (`optimal') aperture.
Aiming at apertures, defined for each star and quarter separately,
that include the total flux,
we built the `tailor-made' apertures in the following way:
if the flux variation curve of a pixel showed the signal of the given variable star
-- that is the main pulsation period is detectable
($A(f_0) \sim 3\sigma$) in the Fourier spectrum --
we added the pixel in question to our tailor-made aperture,
otherwise we dropped it.
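A schematic version of this selection rule is sketched below; this is not the actual PyKE-based pipeline, and estimating the noise from the residual scatter is a simplifying assumption made only for illustration:
\begin{verbatim}
# Schematic pixel-selection rule (not the actual pipeline): keep a
# pixel if the fitted amplitude at the known pulsation frequency f0
# exceeds ~3 sigma; the residual scatter serves as the noise proxy.
import numpy as np

def pixel_in_aperture(t, flux, f0, n_sigma=3.0):
    # least-squares fit of a*sin(2 pi f0 t) + b*cos(2 pi f0 t) + c
    X = np.column_stack([np.sin(2*np.pi*f0*t),
                         np.cos(2*np.pi*f0*t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, flux, rcond=None)
    amp = np.hypot(coef[0], coef[1])
    noise = np.std(flux - X @ coef)
    return amp > n_sigma*noise

# hypothetical usage on a dict of per-pixel light curves:
# aperture = [(i, j) for (i, j), f in pixel_curves.items()
#             if pixel_in_aperture(t, f, f0)]
\end{verbatim}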
\begin{figure*}
\includegraphics[width=18.5cm]{zoo_uj}
\caption[]{
The gallery of {\it Kepler} Blazhko stars. The figure shows the complete
light curves of fifteen stars observed with long-cadence (LC) sampling during the
periods Q0 through Q16. The light curves are ordered by the primary Blazhko period
from the longest (top left) to the shortest (bottom right) ones.
$^{*}$For better visibility the scale of KIC\,11125706 is
increased by a factor of 1.5.
} \label{zoo}
\end{figure*}
By summing up the flux of all pixels in the tailor-made apertures
we obtained raw flux variation curves.
At first sight these time series and
the {\it Kepler} flux variation curves
do not differ too much (see Fig.~\ref{llc_ny} for an example).
Nevertheless, the difference between the flux values of the
archived (`optimal' aperture) data
and our (`tailor-made' aperture) fluxes is considerable: about 1-5 per cent.
The exact value differs from star to star and quarter to quarter.
We note that such a comparison requires shifting the quarters
pairwise to a common zero point.
The main differences between the archived and our flux variation curves are that (i)
the total pulsation amplitudes $A^{\mathrm T}_i(n)$ increase for all quarters
($A'^{\mathrm T}_i(n) > A^{\mathrm T}_i(n)$; Fig.~\ref{llc_2}) indicating the
flux loss in the archived data.
Here the amplitudes $A^{\mathrm T}_i(n)$, $A'^{\mathrm T}_i(n)$ are the
total pulsation amplitude (maximal flux $-$ minimal flux) of the $n$th pulsation
cycle in the $i$th quarter $i,n=1,2,\dots$ for optimal and tailor-made aperture data,
respectively. (The superscript T stands for the word `total'.)
(ii) The internal trends within quarters (see arrows in Fig.~\ref{llc_2})
decrease, suggesting that these trends originate from the small drift and differential velocity
aberration of the telescope; and
(iii) the difference of the total pulsation amplitudes
between the consecutive quarters
$\Delta A^{\mathrm T}_{i,i+1}=\vert A^{\mathrm T}_i(l) - A^{\mathrm T}_{i+1}(1) \vert$
decrease: $\Delta A^{\mathrm T}_{i,i+1} > \Delta A'^{\mathrm T}_{i,i+1}$,
(the index $l$ denotes the last pulsation cycle in the $i$th quarter).
In an optimal case -- if the tailor-made apertures capture all the flux --
these total pulsation amplitude differences would practically disappear and
only zero point shifts would remain between quarters.
Initially we hoped that we could define tailor-made apertures
including the total flux for all stars, i.e. that
the different quarters could be joined smoothly by simple zero point shifts.
We found such apertures for only nine Blazhko stars.
Six of our stars, however, show total pulsation amplitude
differences between quarters for all possible apertures.
(For the list of the individual stars see Table~\ref{Blazhko_stars}.)
In these cases the downloaded stamps seemed to be
too small. The right panel in Fig.~\ref{Q1_map} demonstrates
such a situation well: the top pixel row and the right-most column
contain the signal of the variable star, while e.g. the bottom row
does not.
\begin{figure*}
\includegraphics[width=18cm]{anal_A}
\includegraphics[width=18cm]{anal_B}
\caption[]{
Schematic overview of the Fourier analysis of the light curves.
Top panels (signed by A): Fourier amplitude spectra, bottom panels
(signed by B) spectra after we pre-whitened the data with the main
pulsation frequency $f_0$ and its significant harmonics $kf_0$, $k=1, 2, \dots$.
Panels with indexed letters:
zooms of the spectra: (1) the low frequency range, (2) the surroundings of $f_0$,
(3) additional frequencies between $f_0$ and $2f_0$, (4) high frequency range (around $9f_0$).
Colored boxes in the middle panels show the approximate positions and
sizes of the small (zoomed) panels.
} \label{analysis}
\end{figure*}
How can we correct the flux variation curve of such stars?
Simple zero point shifts
do not result in continuous light curves; however,
we must assume that the light curve of an RR\,Lyrae
star is continuous and smooth. To this end,
we have to scale the flux values to join the quarters properly.
Since, at the beginning of the mission, Q4 was the most stable quarter
(see fig.~10 in \citealt{KDCH}),
we chose its fluxes as a reference for all stars.
We defined scaling factor and zero point offset pairs
for each quarter separately so that the transformed flux values
of quarters can be stitched smoothly.
These transformations are neither exact nor unique. An
additional difficulty arises when a quarter of data is missing.
In those cases,
since the stamps of a star were generally fixed for identical telescope
rolls (viz. settings for Q1=Q5=Q9,$\dots$; Q2=Q6=Q10,$\dots$ etc.),
we used the scaling factor and zero
point offset of the previous quarter at the same telescope position:
e.g. if Q8 data are missing, we use the parameters of Q5 for Q9.
It must be kept in mind that this procedure may influence
the final result especially when we investigate amplitude
changes.
The flux values of each quarter
were (1) shifted with zero point offsets and (2) multiplied by scaling factors.
Finally, (3) any long time-scale trends were removed
from the flux variation curves by
a trend filtering algorithm prepared for CoRoT RR\,Lyrae
data\footnote{\url{http://www.konkoly.hu/HAG/Science/index.html}} and (4)
fluxes were transformed into a magnitude scale, where the averaged magnitude
of each star was set to zero. We have to emphasize that due to the logarithmic
nature of the magnitude scale all corrections and transformations should be
performed on the flux data. The measured fluxes fell between
1190 and 350\,500~e$^{-}$s$^{-1}$, which yields estimated errors of
$7.2\times 10^{-4}$ and $4.2\times 10^{-5}$\,mag
for an individual data point, respectively.
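The arithmetic of steps (1)-(4) can be summarized by the following sketch, in which the quarterly offsets and scaling factors are assumed to be given and a simple running median stands in for the actual CoRoT trend-filtering algorithm:
\begin{verbatim}
# Sketch of steps (1)-(4); offsets/scales per quarter assumed given,
# and a running median stands in for the actual trend filter.
import numpy as np

def stitch_quarter(flux, offset, scale):
    return (flux + offset)*scale          # steps (1) and (2)

def detrend(flux, window=2001):           # step (3), placeholder
    pad = window//2
    padded = np.pad(flux, pad, mode='edge')
    trend = np.array([np.median(padded[i:i + window])
                      for i in range(len(flux))])
    return flux*np.mean(trend)/trend      # multiplicative correction

def to_magnitude(flux):                   # step (4)
    mag = -2.5*np.log10(flux)
    return mag - mag.mean()               # mean magnitude set to zero
\end{verbatim}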
Corrected time series data on the tailor-made apertures are available
both in flux and magnitude scales
in electronic format\footnote{See the web page of this journal: \url{http://} \\
or \url{http://www.konkoly.hu/KIK/data.html}}.
The fluxes extracted from the tailor-made aperture
without trend filtering, applied scaling factors and offset values are
also given in the data set (for a sample see Table~\ref{sample_data}).
\section{Analysis and Results}
\begin{figure}
\includegraphics[width=8.5cm]{fr_add}
\caption[]{
Structure of
the additional frequencies. Here we plotted the spectra between
the main frequencies and their first harmonics
($f_0 + 0.2 < f < 2f_0 - 0.2$). All figures show residual spectra
after consecutive pre-whitening steps with the harmonics and their
numerous (3-10) significant side peaks. The yellow stripes indicate
the expected position of the radial first overtone ($f_1$), period doubling
($1.5f_0$) and radial second overtone ($f_2$) frequencies, respectively.
} \label{fr_add}
\end{figure}
\begin{figure*}
\includegraphics[width=17cm]{o-c}
\caption[]{The O$-$C diagrams of the analyzed Blazhko RR\,Lyrae stars.
The diagrams are constructed from interpolated times of pulsation maxima.
The basic epochs are the times of the first maximum for all stars.
The primary AM Blazhko period decreases from top to bottom.
The LC data of V838\,Cyg are not suitable for constructing the O$-$C diagram
(for the details see Sec.~\ref{v838cyg}),
so it is not shown here.
} \label{o-c_all}
\end{figure*}
\begin{figure*}
\includegraphics[width=17cm]{fr_low}
\caption[]{Properties of amplitude and frequency modulations.
Left panels: low frequency range of the Fourier
spectra of the light curve (Fig.~\ref{zoo}).
Blue vertical lines show
the location of the common instrumental frequencies:
$f_{\mathrm K}/2$ (continuous), $f_{\mathrm K}$ (dotted),
$2f_{\mathrm K}$ (short dashed),
$4f_{\mathrm K}=f_{\mathrm Q}$ (long dashed), respectively.
Right panels: Fourier spectra of the O$-$C diagrams
(Fig.~\ref{o-c_all}).
For both cases the primary ($f_{\mathrm B}$) and secondary modulation
frequencies ($f_{\mathrm S}$) and their harmonics are marked.
Possible linear combinations are shown in Figs.~\ref{V450_Lyr} and \ref{V366_Lyr}.
} \label{fr_low}
\end{figure*}
\subsection{General Overview}
In the course of this study we mainly used two methods. One of them is
the Fourier analysis of the light curves that were pre-conditioned by
the described process and shown in Fig.~\ref{zoo}.
The second method is the analysis of O$-$C (observed minus calculated) diagrams.
In some cases other tools were used, as well. These are described later
when we discuss the relevant objects.
This paper uses the following notation conventions: numbers in lower indices denote
the radial pulsation orders (viz. 0 = fundamental mode, 1 = first overtone mode, etc.).
Lower indices B and S indicate the primary (Blazhko) and secondary modulations, respectively.
Upper indices denote the detected frequencies before identification.
Throughout this paper
the numerical values (frequencies, amplitudes, etc.) are written with the number of
significant digits plus one digit.
\subsubsection{Fourier Analysis of the Light Curves}\label{fr_anal}
The software packages {\sc MuFrAn} \citep{Kol90} and {\sc Period04} \citep{LB05}
were used for the Fourier analysis. These program packages
-- together with {\sc SigSpec} \citep{Re07} --
were tested for {\it Kepler} Blazhko stars in the
past \citep{Benko10}. Since all of them provided similar spectra with the same
frequencies, amplitudes and phases, we can use the one which
best fits our purposes. In this work our primary tool was {\sc MuFrAn},
but e.g. the frequency errors
and signal-to-noise ratios (S/N) were determined with {\sc Period04}.
Here we describe the general features of our Fourier analysis.
For an illustration see Fig.~\ref{analysis}.
The highest peaks in the Fourier spectra are always the main pulsation
frequencies ($f_0$) and their harmonics ($kf_0$, where $k=1, 2,\dots$)
(see panel A in Fig.~\ref{analysis}).
The Nyquist frequency for the {\it Kepler} LC data is $f_{\mathrm N}=24.46$~d$^{-1}$.
Up to this limit frequency we detected 9-15 significant
harmonics depending on the pulsation frequency.
When we pre-whiten the data with these frequencies, we get Fourier spectra dominated
by the side peaks (Fig.~\ref{analysis}B).
The harmonics (including the main frequency) are surrounded by the
side peaks caused by the Blazhko modulation ($kf_0\pm lf_{\rm B}$, where $k, l=1, 2,\dots$).
Side peaks of triplets ($l=1$) can always be seen (panel B$_2$ in Fig.~\ref{analysis})
and in some cases higher order multiplets ($l>1$) are also detectable (panel B$_4$).
The higher order multiplets tend to appear around the higher order
harmonics which indicates the frequency modulation \citep{Benko11}.
After we pre-whitened the data with a set of side frequencies,
it became evident that
the side peaks sometimes consist of double or even multiple peaks.
If we measure the spacing between these double peaks we find that
the frequency difference corresponds to the {\it Kepler} year ($P_{\rm K}=372.5$~days).
More precisely, these frequencies can be described as $kf_0\pm f'$,
where $f'$ is one of the followings: $0.5f_{\rm K}$, $f_{\rm K}$, $2f_{\rm K}$,
$4f_{\rm K}=f_{\rm Q}$. Here $f_{\rm K}=1/P_{\rm K}$ and
$f_{\rm Q}=1/P_{\rm Q}$, where $P_{\rm Q}$ is the characteristic length of a quarter.
Thus these frequencies are caused by residual problems
of the quarter stitching, by missing quarters, or
sometimes by the instrumental
amplitude variation with $P_{\rm K}$ recently discovered by \cite{Banyai13}.
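A generic pre-whitening scheme of the kind used throughout this section can be sketched as follows (a standard technique; {\sc MuFrAn} and {\sc Period04} were the tools actually employed, and the input signal below is synthetic):
\begin{verbatim}
# Generic pre-whitening sketch on a synthetic signal (f0 and one
# Blazhko side peak); MuFrAn/Period04 were the actual tools used.
import numpy as np
from astropy.timeseries import LombScargle

t = np.arange(0.0, 1400.0, 0.0204)      # ~29.4 min LC sampling [d]
f0, fB = 1.66, 1.0/55.0                 # hypothetical frequencies
mag = 0.3*np.sin(2*np.pi*f0*t) + 0.02*np.sin(2*np.pi*(f0 + fB)*t)

def prewhiten(t, y, freqs):
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef                 # residual after removing freqs

resid = prewhiten(t, mag, [k*f0 for k in range(1, 10)])
freq = np.linspace(0.002, 24.46, 100000)   # up to the LC Nyquist limit
power = LombScargle(t, resid).power(freq)  # injected peak at f0+fB survives
\end{verbatim}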
In the low frequency ranges (panel B$_1$ in Fig.~\ref{analysis}
and Fig.~\ref{fr_low}) we generally find the modulation
frequencies $f_{\rm B}$, frequencies connected to the {\it Kepler} year,
and some other instrumental peaks.
In many cases we also find the harmonics of the Blazhko frequencies
to be significant. This implies that the amplitude modulation is fairly
non-sinusoidal, which is evident from the envelopes of the corresponding light curves
in Fig.~\ref{zoo}. It is surprising that in
many cases more than one modulation peak can be found (see also B$_1$ in Fig.~\ref{analysis}).
These secondary modulation frequencies sometimes mimic
harmonics of the primary modulation
frequencies, but the O$-$C analysis (see Sec.~\ref{O-C_anal})
helped us to distinguish multiple modulation from non-sinusoidal
modulation. The ratios between different Blazhko frequencies
are often close to small integer numbers, which may suggest a
resonance at work.
Although this paper focuses on the long time-scale variations, we
briefly discuss the frequency ranges between
the harmonics, since the spectra after pre-whitening with
the modulation frequencies and the side peaks show
specific structures (see panel B$_3$ in Fig.~\ref{analysis}).
Four stars (\object{V353 Lyr}, \object{V1104 Cyg}, KIC~11125706 and V783\,Cyg)
do not show any significant peaks in these frequency ranges,
but the remaining eleven stars do (Fig.~\ref{fr_add}).
Three well separated forests of peaks can be
identified, which appear in the individual stars in different combinations.
The middle ones belong to the period doubling (PD)
phenomenon \citep{Kolenberg10,Szabo10,Kollath11}.
The half-integer frequencies (HIFs: $0.5f_0, 1.5f_0,\dots$),
their side peaks and a number of linear combination
frequencies can be detected. If we determine the
frequency ratio of these HIF peaks we find that their values
frequently differ from the exact half-integer ratios. The
explanation is a combination of mathematical, physical and sampling
effects \citep{Szabo10}.
The frequencies located between the HIFs and $2f_0$ harmonics belong
to the second radial overtone ($f_2$), while
peaks between $f_0$ and HIFs are identified as
the frequency of the first radial overtone mode ($f_1$).
The explanation of the
huge number of surrounding peaks around all three cases
is mathematical: the amplitude of the additional frequencies
for both the PD effect and overtone modes
changes in time (see e.g. \citealt{Benko10, Szabo10, Szabo14, Guggenberger12}).
Such a variable signal results in a forest
of peaks in the Fourier spectra, as was shown by \cite{Szabo10}.
\subsubsection{Analysis of the O$-$C Diagram}\label{O-C_anal}
The FM part of the Blazhko effect can be
separated if we study the effect in the time domain.
Since the AM and FM are
definitely connected to each other, such an investigation
shows different aspects of the same phenomenon.
A practical advantage of this approach is that the timing
measurements are almost free of instrumental problems, contrary to
the brightness measurements, which were discussed in Sec.~\ref{data}.
There are numerous ways of following frequency/period variations,
from the traditional O$-$C diagram \citep{Sterken05} to the
analytic signal method \citep{Kollath02}; alternatively,
one can transform them to phase variations as done by, e.g., N13.
We have chosen here the O$-$C diagram analysis as a simple and clear
method. Although O$-$C diagrams were widely used for investigating
RR\,Lyrae stars for many decades, the first diagrams that show the
period variations due to the Blazhko effect were published only
recently \citep{Chadid10, Guggenberger12},
when the continuous space-based data became available.
As \cite{Sterken05} defined ``O$-$C stands for O[bserved] minus C[alculated]:
... it involves the evaluation and interpretation
of the discord between the measure of an observable event and its
predicted or foretold value.'' In our case we chose
the times of maxima of the pulsation as the ``observable event''.
For the determination of the observed maximum times (`O' values)
we used 7-9th order polynomial or spline fits around the maximum brightness of each
pulsation cycle. The initial epochs ($E_0$) were always the time of the first
maximum for each star. The `C' (calculated) maximum times were determined from these
epochs and the averaged pulsation periods ($P_0$): C $= E_0 + E P_0$, where
$E=1, 2, \dots$ is the cycle number.
Gaps in the observed light curves often resulted in interpolation errors and
consequently deviant points in the constructed O$-$C curves. We removed these points with
the {\tt time string plot} tool of Period04. The selection criterion for the wrong points was that
they deviate from the smooth fit of the curves by more than $3\sigma$, where $\sigma$
indicates the standard deviation of the fit.
The obtained curves are plotted in Fig.~\ref{o-c_all}. The accuracy of an individual O$-$C value is
about 1 minute.
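The construction of the O$-$C values can be summarized by the sketch below (synthetic input for self-containedness; the actual analysis used 7-9th order polynomial or spline fits and the outlier filtering described above):
\begin{verbatim}
# Sketch of the O-C construction (synthetic light curve shown here;
# real maximum times come from the stitched Kepler data).
import numpy as np

P0 = 0.52                                  # mean pulsation period [d]
t = np.arange(0.0, 100.0, 0.0204)
mag = -np.cos(2*np.pi*t/P0)                # maximum light = mag minimum

def maxima_times(t, mag, P0, order=7, half_window=0.1):
    o = []
    for E in range(int((t.max() - t.min())/P0)):
        cyc = (t >= t.min() + E*P0) & (t < t.min() + (E + 1)*P0)
        if cyc.sum() < 20:                 # gap in the data: skip cycle
            continue
        tg = t[cyc][np.argmin(mag[cyc])]   # rough estimate of maximum
        w = np.abs(t - tg) < half_window
        c = np.polyfit(t[w] - tg, mag[w], order)
        r = np.roots(np.polyder(c))
        r = r[np.isreal(r)].real
        r = r[np.abs(r) < half_window]
        if r.size:                         # brightest extremum of the fit
            o.append(tg + r[np.argmin(np.polyval(c, r))])
    return np.asarray(o)

O = maxima_times(t, mag, P0)
E = np.round((O - O[0])/P0)                # cycle numbers
OC = O - (O[0] + E*P0)                     # O - C values
\end{verbatim}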
The frequency content of the O$-$C diagrams was extracted again
by Fourier analysis. The corresponding spectra are shown
in Fig.~\ref{fr_low}, where we compare them with the low frequency range of
the light curve Fourier spectra. Generally speaking, the structures of these
two types of spectra are similar, but the O$-$C spectra are cleaner.
Here no instrumental peaks can be detected, and in the case of
multiply modulated stars the linear combination
frequencies are also more numerous and significant.
These linear combination frequencies demonstrate
that both modulations belong to the same star
(and not to a background source) and reveal the nonlinear coupling
between the different modulations. It is noteworthy that the frequencies of
the AM and FM are always the same within their errors.
\begin{table*}
\begin{center}
\caption{Blazhko periods and amplitudes from different methods}
\label{Blazhko_ampl}
\footnotesize{
\begin{tabular}{*6{l@{\hspace{5pt}}}}
\tableline\tableline
Name & $P^{\mathrm{(s)}}_i$ & $P^{\mathrm{AM}}_i$ & $A(f^{\mathrm{AM}}_i)$
& $P^{\mathrm{FM}}_i$ & $A(f^{\mathrm{FM}}_i)$ \\
& [d] & [d] & [mmag] & [d] & [min] \\
\tableline
V2178\,Cyg & $207\pm15$ & $216\pm2$ & $14\pm3.7$ & $215.9\pm0.35$ & $43.6\pm0.9$ \\
& $168.8\pm1.1$ & & & $166.2\pm2.4$ & $18.6\pm1.2$ \\
V808\,Cyg & $92.14\pm0.06$ & $92.18\pm0.39$ & $5.4\pm0.7$ & $92.16\pm0.01$ & $30.6\pm0.1$ \\
V783\,Cyg & $27.666\pm0.001$ & $27.73\pm0.39$ & $1.5\pm1.2$ & $27.667\pm0.005$ & $2.6\pm0.05$ \\
V354\,Lyr & $807\pm16$ & & & $891\pm4$ & $36.9\pm0.5$ \\
V445\,Lyr & $54.7\pm0.5$ & $54.80\pm0.3$ & $4.3\pm1.2$ & $55.04\pm0.04$ & $38.7\pm1.5$ \\
& $146.4\pm0.8$ & & & $147.4\pm0.7$ & $21.8\pm1.7$ \\
KIC~7257008& $39.51\pm0.05$ & $39.7\pm0.4$ & $4.2\pm1.9$ & $39.72\pm0.02$ & $26.3\pm0.4$ \\
V355\,Lyr & $31.06\pm0.1$ & $31.04\pm0.08$ & $8.4\pm1.8$ & $30.99\pm0.02$ & $2.3\pm0.1$ \\
& $16.243\pm0.007$ & $16.25\pm0.1$ & $1.2\pm1.7$ & $16.229\pm0.003$ & $5.2\pm0.1$ \\
V450\,Lyr & $123.7\pm0.4$ & $123.0\pm1$ & $6.5\pm1.4$ & $124.8\pm0.3$ & $5.7\pm0.3$ \\
& $81.0\pm0.6$ & $80.4\pm0.8$ & $4.3\pm1.5$ & $80.1\pm0.1$ & $6.7\pm0.3$ \\
V353\,Lyr & $71.70\pm0.04$ & $72.1\pm1.5$ & $2.3\pm1.3$ & $71.68\pm0.02$ & $7.17\pm0.07$ \\
& $133.1\pm0.4$ & & & $131.3\pm0.3$ & $1.64\pm0.08$ \\
V366\,Lyr & $62.90\pm0.01$ & $62.87\pm0.4$ & $5.0\pm1.4$ & $62.77\pm0.05$ & $1.66\pm0.06$ \\
& $29.29\pm0.01$ & $29.28\pm0.3$ & $2.1\pm1.4$ & $29.295\pm0.007$ & $2.58\pm0.06$ \\
V360\,Lyr & $52.10\pm0.01$ & $51.88\pm0.5$ & $2.9\pm1.2$ & $52.11\pm0.015$ & $12.9\pm0.2$ \\
& $21.041\pm0.008$ & $21.09\pm0.15$ & $1.3\pm1.2$ & $21.073\pm0.005$ & $6.0\pm0.2$ \\
KIC~9973633& $67.11\pm0.08$ & & & $67.30\pm0.07$ & $8.2\pm0.2$ \\
& $27.13\pm0.06$ & & & $27.21\pm0.15$ & $1.6\pm0.4$ \\
V838\,Cyg & $59.5\pm0.1$ & $59.8\pm3$ & $0.6\pm2$ & & \\
KIC~11125706& $40.21\pm0.02$ & & & $40.21\pm0.01$ & $1.66\pm0.03$ \\
& & & & $58.9\pm0.1$ & $0.27\pm0.03$ \\
V1104\,Cyg & $52.00\pm0.01$ & $52.08\pm0.2$ & $5.4\pm1.8$ & $51.99\pm0.02$ & $3.14\pm0.05$ \\
\tableline
\end{tabular}
}
\tablecomments{
$P_i$ and $A(f_i)$ denote the Blazhko periods and the amplitudes
of the modulation frequencies,
where $i$=B or S for the primary and
secondary Blazhko periods, respectively.
The upper indices denote the method of the calculation: (s): from
the side peaks around harmonics; AM: direct detection
in the light curve spectra; FM: from
the spectra of the O$-$C diagrams.
}
\end{center}
\end{table*}
\subsubsection{Calculated Parameters and Accuracies}
\begin{figure}
\includegraphics[width=9cm]{per_amp}
\caption[]{Blazhko period(s) vs. amplitude
of the AM frequency. For the plotted values see
Table~\ref{Blazhko_ampl}.
} \label{period.vs.amplitude}
\end{figure}
The analyzed Fourier spectra have well-defined
structures, and although the spectra of the light curves
contain hundreds of significant peaks, only a few of them belong to
independent frequencies. These are
the main pulsation frequency $f_0$, the Blazhko frequencies
($f_{\mathrm B}$ and $f_{\mathrm S}$) and the frequencies of the excited
additional radial overtone mode(s) ($f_1$, $f_2$ and/or
the strange mode $f_9$, which is responsible for the PD effect).
The error estimates for both the frequencies and the amplitudes were
obtained with the Monte Carlo simulation tool
of {\sc Period04}. We note that these errors are only a few percent higher
than the analytic error estimates \citep{Breger99}, because
we have almost continuous and uniformly sampled data sets.
The error estimation of the main pulsation frequency
yields 1.1-1.8$\times 10^{-7}$~d$^{-1}$ for the Q1-Q16 data sets,
and 4$\times 10^{-6}$~d$^{-1}$ for the Q10-Q16 data. These translate to
a 3-5$\times 10^{-8}$~d period uncertainty for the best data
(short period and long observing span)
and $10^{-6}$~d at worst.
The Rayleigh frequency resolution is 0.0007~d$^{-1}$ for the Q1-Q16 data sets,
and 0.0015~d$^{-1}$ for the Q10-Q16 data (for KIC~7257008 and KIC~9973633).
The frequencies never change due to a modulation
\citep{Benko11}, as opposed to the amplitudes, which are affected
by the FM. Consequently, our formal error estimates for the
main pulsation amplitudes (0.3-1~mmag) are lower limits only.
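These numbers follow from elementary relations; the following sketch
(with representative, assumed values for the time spans) reproduces
their orders of magnitude:
\begin{verbatim}
import numpy as np

# assumed, approximate time spans of the data sets [d]
T_long, T_short = 1460.0, 650.0        # Q1-Q16 and Q10-Q16 (illustrative)
rayleigh_long  = 1.0 / T_long          # ~0.0007 1/d
rayleigh_short = 1.0 / T_short         # ~0.0015 1/d

# period error from frequency error: P = 1/f  =>  sigma_P = sigma_f * P**2
f0, sigma_f = 2.0, 1.5e-7              # typical values [1/d]
sigma_P = sigma_f / f0**2              # ~4e-8 d
\end{verbatim}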
The Blazhko periods were determined in three different ways
(see Table~\ref{Blazhko_ampl}):
(i) from the averaged frequency differences of the first two triplets
(second column);
(ii) from the Blazhko frequencies themselves (column 3)
-- if they are detectable in the spectrum of the light curve --
and (iii) from the Fourier spectra of the O$-$C diagrams (column 5).
The latter two methods provide the AM and FM amplitudes,
which are shown in columns 4 and 6 of Table~\ref{Blazhko_ampl}.
We draw the reader's attention to an interesting phenomenon.
In Fig.~\ref{period.vs.amplitude} we
plot the Blazhko period ($P_i$) vs. the amplitude
of the AM frequency ($A(f^{\mathrm AM}_i)$), where
$i$=B or S. We find a trend: longer
Blazhko periods are associated with larger amplitudes and vice versa.
We cannot rule out that this is a small-sample effect;
however, some arguments contradict this scenario. The emptiness of the
long-period, small-amplitude part of the diagram can be explained by
an observational effect (it is difficult to distinguish between
small-amplitude, long-period stellar variations and
instrumental effects with similar time-scales even
in the {\it Kepler} data), but the lack of points in the short-period,
large-amplitude part cannot. Additionally, similar effects are
common in (hydro)dynamical systems: e.g. weakly dissipative systems
can be driven to high amplitudes only by perturbing forces acting
on long time-scales \citep{Molnar12}. The effect found here will be investigated in a separate study.
Basic parameters obtained from this analysis are summarized in
Table~\ref{Blazhko_stars}. The columns of the table show the ID numbers and
names of the stars, and the main pulsation periods and their Fourier amplitudes,
where the number of digits indicates the accuracy. Columns 5 and 6 contain the
modulation periods averaged from the values in Table~\ref{Blazhko_ampl}.
The last two columns of the table indicate the presence of additional frequencies,
instrumental problems and auxiliary information about the {\it Kepler}
observations.
\begin{table*}
\begin{center}
\caption{Basic properties of the {\it Kepler} Blazhko stars}
\label{Blazhko_stars}
\footnotesize{
\begin{tabular}{lrlc@{\hspace{5pt}}ccrll@{}}
\tableline\tableline
KIC & GCVS & $P_{\mathrm 0}$ & $A(f_0)$ & $P_{\mathrm B}$&
$P_{\rm S}$ & & Add. freq.\tablenotemark{a} & remarks\tablenotemark{b} \\
& & [d] & [mag] & [d] & [d] & & & \\
\tableline
3864443 & V2178\,Cyg & $0.4869470$ & 0.3156 & $213\pm5$ & $167.5\pm1.8$(?) & & F2, (PD) & m \\
4484128 & V808\,Cyg & $0.5478635$ & 0.2197 & $92.16\pm0.02$ & $\sim1000$ & & PD, (F2) & m \\
5559631 & V783\,Cyg & $0.6207001$ & 0.2630 & $27.6667\pm0.0005$ & & & & scal \\
6183128 & V354\,Lyr & $0.5616892$ & 0.2992 & $849\pm59$ & (?) & & F2, (PD, F') & scal, m \\
6186029 & V445\,Lyr & $0.5130907$ & 0.2102 & $54.83\pm0.04$ & $146.9\pm0.7$ & & PD, F1, F2 & m \\
7257008 & & $0.511787 $ & 0.2746 & $39.67\pm0.14$ & $>900$ & & PD, F2 & Q10-\\
7505345 & V355\,Lyr & $0.4736995$ & 0.3712 & $31.02\pm0.05$ & $16.24\pm0.01$ & & PD, F2 & \\
7671081 & V450\,Lyr & $0.5046198$ & 0.3110 & $123.8\pm0.9$ & $80.5\pm0.5$ & & F2 & scal \\
9001926 & V353\,Lyr & $0.5567997$ & 0.2842 & $71.8\pm0.3$ & $132.2\pm1.3$ & & & \\
9578833 & V366\,Lyr & $0.5270284$ & 0.2909 & $62.84\pm0.07$ & $29.29\pm0.01$ & & (F2) & \\
9697825 & V360\,Lyr & $0.5575755$ & 0.2572 & $52.03\pm0.14$ & $21.07\pm0.03$ & & F2, (PD) & \\
9973633 & & $0.510783$ & 0.2458 & $67.2\pm 0.1$ & $27.17\pm0.06$ & & PD, F2 & m, Q10- \\
10789273 & V838\,Cyg & $0.4802800$ & 0.3909 & $59.7\pm0.2$ & & & (F2, PD) & scal \\
11125706 & & $0.6132200$ & 0.1806 & $40.21\pm0.02$ & $58.9\pm0.1$ & & & m, scal \\
12155928 & V1104\,Cyg& $0.4363851$ & 0.3847 & $52.02\pm0.05$ & & & & scal \\
\tableline
\end{tabular}
}
\tablecomments{
$P_0$, $P_{\mathrm B}$ and $P_{\mathrm S}$ mean the pulsation, the primary and
secondary modulation periods, respectively. $A(f_0)$ is the Fourier amplitude of the main
pulsation frequency.
$^{\mathrm a}$
The pattern of additional frequencies: PD means period doubling;
F1 indicates first overtone frequency and its linear combination with
the fundamental one; F2 is as F1, but
with the second radial overtone; F$^\prime$ indicates frequencies with unidentified modes;
brackets indicate marginal effects.
$^{\mathrm b}$
scal=scaled, m=missing quarters, Q10- = data from Q10}
\end{center}
\end{table*}
\subsection{Analysis of Individual Stars}
\subsubsection{V2178\,Cyg = KIC~3864443}\label{v2178cyg}
\begin{figure}
\includegraphics[width=9cm]{V2178_Cyg}
\caption[]{
O$-$C analysis of V2178\,Cyg. Left panels
show the O$-$C diagram (top) and its residuals after we pre-whitened
the data with $f_{\mathrm B}$ and $2f_{\mathrm B}$ (middle)
and also with $f_{\mathrm S}$ (bottom). Right panels show
the Fourier spectra of the O$-$C data during this consecutive
pre-whitening process.
} \label{V2178_Cyg}
\end{figure}
This star was chosen by N13 as a representative
of the long-period Blazhko stars showing large AM and FM.
The envelope of the light curve in Fig.~\ref{zoo} shows
complicated amplitude changes suggesting multiperiodicity
and/or cycle-to-cycle variations of the Blazhko effect.
Unfortunately, the long time-scales of the variations
make quantification of this phenomenon impossible.
We detected 11 significant harmonics of the main pulsation
frequency ($f_0$) up to the Nyquist frequency.
The triplet structures around the harmonics are highly
asymmetric: $A(kf_0-f_{\mathrm B}) \gg A(kf_0+f_{\mathrm B})$
(see also fig.~3 in N13).
If we calculate the primary Blazhko frequency from the
averaged spacing of the side peaks we find $0.00482$~d$^{-1}$.
This value is in good agreement with
the highest amplitude peak in the low frequency range
($0.00462\pm 0.0001$~d$^{-1}$) so it can be identified with $f_{\mathrm B}$
(Fig.~\ref{fr_low}).
Due to the missing quarters the Fourier spectrum
contains numerous instrumental frequencies such as $f_{\mathrm Q}$,
$f_{\mathrm Q}\pm f_{\mathrm B}$ and their linear combinations with
the main frequency and its harmonics. The second largest
low frequency peak is at 0.002486~d$^{-1}$ which coincides
with $f_{\mathrm K}=0.002685$~d$^{-1}$ within the determination error (0.0001~d$^{-1}$).
After we subtracted the largest amplitude side peaks ($kf_0-f_{\mathrm B}$),
the other components of the triplets
($kf_0+f_{\mathrm B}$) and additional side peaks of
a possible secondary modulation ($kf_0-f_{\mathrm S}$) appeared in the
residual spectrum, where $f_{\mathrm S}=0.00593$~d$^{-1}$.
If $f_{\mathrm S}$ belongs to a secondary modulation,
the ratio of the two modulation frequencies would be 2:3; however,
$f_{\mathrm S}$ can also be interpreted as a linear combination:
$f_{\mathrm S}=f_{\mathrm Q}-f_{\mathrm B}$.
When we compute the Fourier spectrum of the O$-$C diagram (Fig.~\ref{V2178_Cyg})
we find $f_{\mathrm B}=0.00463$~d$^{-1}$ and $2f_{\mathrm B}$.
After pre-whitening with these frequencies, one significant
peak appears at 0.00602~d$^{-1}$ (S/N=24). If we remove this frequency as
well, the residual curve shows large-amplitude quasi-periodic oscillations,
but no further frequencies can be identified. Our conclusion is
that V2178\,Cyg shows a multiperiodic and/or quasi-periodic
Blazhko effect, but the long periods do not allow us to draw a final
conclusion.
Similarly to \cite{Benko10}, we found a group of peaks around
the frequency of the second radial overtone mode
(see also Fig.~\ref{fr_add}, $f_2=3.51478$~d$^{-1}$,
$P_0/P_2=0.584$; S/N~$\approx3$). The PD
phenomenon is marginal: the highest peak around $1.5f_0$
is $f^{(1)}=3.05804$~d$^{-1}$ ($f^{(1)}/f_0=1.49$; S/N~$\approx2$).
A third condensation of additional peaks can be seen around
$f^{(2)}=2.656875$~d$^{-1}$ (S/N~$\approx2$). Though some Blazhko stars
(e.g. RR\,Lyr, \object{V445 Lyr}) show the radial first overtone frequency
($f_1$) in this region, we could identify this peak
of V2178\,Cyg as the linear combination
$f^{(2)}=3f_0-f_2$ with high certainty, because the period ratio
$P^{(2)}/P_0=0.773$ is far from the canonical value of $P_1/P_0=0.744$.
Because this period ratio increases with increasing metallicity
(see e.g. fig.~8 in \citealt{Chadid10}),
the measured low metallicity of V2178\,Cyg ([Fe/H]$=-1.46$, N13)
also supports the linear combination explanation.
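Identifications of this kind recur throughout the paper, and in practice they
amount to searching for small integer combinations of the known frequencies
that match a candidate peak within a tolerance. The following helper is a
schematic of our own; since the peak condensations here are broad and of low
S/N, the tolerance is taken wider than the formal Rayleigh resolution:
\begin{verbatim}
import itertools
import numpy as np

def match_combinations(f_cand, base, tol, kmax=3):
    """List integer combinations sum_i k_i*f_i of the base
    frequencies matching a candidate peak within tol."""
    hits = []
    for ks in itertools.product(range(-kmax, kmax + 1), repeat=len(base)):
        if any(ks) and abs(float(np.dot(ks, base)) - f_cand) < tol:
            hits.append(ks)
    return hits

# illustration for V2178 Cyg (f0 = 1/P0, f2 from the text):
print(match_combinations(2.656875, [2.05361, 3.51478], tol=0.02))
# -> [(3, -1)], i.e. the combination 3*f0 - f2
\end{verbatim}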
\subsubsection{V808\,Cyg = KIC~4484128}\label{v808cyg}
\begin{figure}
\includegraphics[width=8.5cm]{V808_Cyg}
\caption[]{
New results from V808\,Cyg. A-C panels:
The pre-whitening process of the O$-$C diagram shown in
Fig.~\ref{o-c_all}. The harmonics
of the Blazhko frequency $f_{\mathrm B}$ are significant up
to the 4th order. A long time-scale variation is evident
because of the presence of
low frequency peaks ($f_{\mathrm S}$, $2f_{\mathrm S}$) and
side frequencies around the harmonics of $f_{\mathrm B}$.
Panel D: residual O$-$C curve containing only
$f_{\mathrm S}$ and $2f_{\mathrm S}$. Panel E: the second
radial overtone frequency ($f_2$) can be detected from the Q1-Q16 data.
} \label{V808_Cyg}
\end{figure}
The light curve of this star in Fig.~\ref{zoo} shows
two important features. First, the
envelope shape suggests a highly non-sinusoidal AM.
Second, the length of the Blazhko cycle is close to
the length of the observing quarters.
As a consequence of the first fact, we can detect two
significant harmonics, $2f_{\mathrm B}$ and $3f_{\mathrm B}$,
of the Blazhko frequency $f_{\mathrm B}=0.01085$~d$^{-1}$
(Fig.~\ref{fr_low}), and
multiplet side peaks ($kf_0\pm lf_{\mathrm B}$, where $l>1$)
are detectable as well. A slight cycle-to-cycle amplitude change
might be present, but
the quarter-long Blazhko period and the gaps together make
such an effect barely detectable.
The O$-$C diagram of \object{V808 Cyg} can be fitted well
with the Blazhko period and its three harmonics.
After we subtracted this four-frequency fit from the O$-$C data,
a definite structure could be detected in the residual spectrum
(panel B in Fig.~\ref{V808_Cyg}).
Side peaks appear at the positions of $f_{\mathrm B}$ and $3f_{\mathrm B}$.
These peaks define a secondary modulation
with the frequency $f_{\mathrm S}=0.0010$~d$^{-1}$.
The pre-whitened spectrum indeed shows two peaks at $f_{\mathrm S}$ and $2f_{\mathrm S}$
(panel C).
However, the possible period $P_{\mathrm S}\sim 1000$~d is comparable to
the total observing time span, so the O$-$C variation in panel D could
equally be secular.
V808\,Cyg shows the strongest known PD effect, which is why
the data taken during the first two quarters were investigated in detail
by \cite{Szabo10}. This main finding remains unchanged
when the time series is extended to Q16. The highest amplitude
HIF is at $1.5f_0$, namely $f^{(1)}=2.69770$~d$^{-1}$
($f^{(1)}/f_0=1.48$; S/N~$\approx 30$).
After applying a few-step pre-whitening process -- in which we subtract the
main pulsation frequency, its harmonics and some (6-10) significant
multiplets around the harmonics -- we found that the second
radial overtone mode (or a non-radial mode at the location of the
radial one), $f_2=3.09774$~d$^{-1}$ ($P_2/P_0$=0.589), is also excited
(panel E in Fig.~\ref{V808_Cyg}).
The amplitude of this frequency is much lower than the amplitudes
of the PD frequencies, which explains why previous investigations had
not discovered this mode.
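The `few-step pre-whitening' used here and throughout the paper amounts to
the iterative least-squares subtraction of sinusoids at fixed frequencies.
A schematic sketch of this standard step (our own illustration, not the
{\sc Period04} implementation):
\begin{verbatim}
import numpy as np

def prewhiten(time, mag, freqs):
    """Least-squares fit and subtraction of sinusoids at fixed
    frequencies (sin+cos columns give amplitudes and phases)."""
    cols = [np.ones_like(time)]
    for f in freqs:
        cols += [np.sin(2 * np.pi * f * time),
                 np.cos(2 * np.pi * f * time)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return mag - A @ coef
\end{verbatim}
Repeating this step with the main frequency, its harmonics, and the
strongest multiplets reproduces the residual spectra discussed in the text.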
\subsubsection{V783\,Cyg = KIC~5559631}\label{v783cyg}
\begin{figure*}
\includegraphics[width=17cm]{V354_Lyr}
\caption[]{
Additional peaks in the lower frequency range of
the pre-whitened Fourier spectrum of V354\,Lyr light curve.
} \label{V354_Lyr}
\end{figure*}
The Blazhko effect of V783\,Cyg seems to be
simple: a sinusoidal AM and FM,
visible both in the light curve (Fig.~\ref{zoo}) and in the O$-$C diagram
(Fig.~\ref{o-c_all}). By investigating these curves more carefully,
we can detect small differences between consecutive cycles.
When we pre-whiten the light curve data with the main pulsation
frequency and its 15 significant harmonics, we find clean,
symmetric triplet structures in the spectrum around the harmonics.
The star has the shortest Blazhko
cycle in our sample ($P_{\mathrm B}=27.67$~d).
The spectrum also contains the modulation frequency itself:
$f_{\mathrm B}=0.036058$~d$^{-1}$. After
subtracting the triplets, multiplet side frequencies appear
in the residual spectrum. By carrying out a few-step
pre-whitening process similar to that in Sec.~\ref{v808cyg}, we can eliminate all side peaks, and no
additional peaks emerge between the harmonics.
Fourier analysis of the O$-$C diagram provides $f_{\mathrm B}$
again (Fig.~\ref{fr_low}). When we subtract a fit with $f_{\mathrm B}$,
the residual O$-$C diagram shows a parabolic shape indicating
a period change. Fitting a quadratic function of the form
\[
\mathrm{O}-\mathrm{C}=\frac{1}{2} \frac{dP_0}{dt} \bar{P_0} E^2
\]
\citep{Sterken05}, where $\bar{P_0}$ denotes the averaged period and
$E$ the cycle number from the initial epoch, we find
a period increase: $dP_0/dt=1.02\times 10^{-9}\pm1.7\times 10^{-10}$~dd$^{-1}$.
That is $0.12\pm0.02$~dMy$^{-1}$, which agrees well with the value
of $0.088\pm0.023$~dMy$^{-1}$ found by \cite{Cross91} on the
basis of photometric observations between 1933 and 1990.
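This fit reduces to a quadratic regression of the O$-$C values against the
cycle number. A minimal sketch (the array names are ours):
\begin{verbatim}
import numpy as np

def period_change_rate(E, oc, P0):
    """dP0/dt [d/d] from O-C = 0.5*(dP0/dt)*P0*E**2 (quadratic term)."""
    a = np.polyfit(E, oc, 2)[0]     # coefficient of E**2 [d/cycle^2]
    return 2.0 * a / P0
\end{verbatim}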
The Fourier analyses of the light curve and the O$-$C diagram
are not sensitive to the slight cycle-to-cycle variation
of the Blazhko effect mentioned above.
The short Blazhko period and the uninterrupted {\it Kepler} data
make V783\,Cyg a good candidate for a dynamical analysis.
This study will be discussed in a separate paper by
Plachy et al. (in prep.). The preliminary results
suggest that the cycle-to-cycle
variations of the V783\,Cyg light curve and O$-$C
diagram have a chaotic nature.
\subsubsection{V354\,Lyr = KIC~6183128}\label{v354lyr}
The star has the longest Blazhko period in
the {\it Kepler} sample.
We find $P_{\mathrm B}=807$~d
($f_{\mathrm B}=0.00124$~d$^{-1}$) calculated from
the triplet spacing in the spectrum of the light curve.
At the same time, the highest peak in the low frequency range is
at $f_{\mathrm B}=0.00134$~d$^{-1}$, which yields
$P_{\mathrm B}=748$~d. The problem is that
the Blazhko period is inseparably close to the instrumental
period $2P_{\mathrm K}=745$~d.
The two observed Blazhko cycles in Fig.~\ref{zoo}
look different. The ascending branch of the first
cycle is steeper than that of the second one, while
the descending branch of the second cycle is the steeper one.
The different shapes of the two cycles
in the O$-$C diagram (Fig.~\ref{o-c_all}) strengthen our suspicion:
\object{V354 Lyr} shows multiperiodicity or cycle-to-cycle variations in the
Blazhko effect. The long Blazhko period
prevents us from quantifying this feature.
As already found by \cite{Benko10}, the Fourier
spectrum of V354~Lyr contains significant additional
peaks between the harmonics. The following frequencies
were reported between $f_0$ and $2f_0$ (with the notation of that paper):
$f'=2.0810$, $f''=2.4407$, $f'''=2.6513$ and $f_2=3.03935$~d$^{-1}$.
After we removed the main pulsation frequency, its significant
harmonics, and the highest amplitude side peaks around the harmonics,
our spectrum (Fig.~\ref{V354_Lyr}) also contains numerous additional
frequencies. The highest amplitude half-integer frequency peak is
now located at $f^{(1)}=2.648387$~d$^{-1}=1.5f_0$ ($f^{(1)}/f_0$=1.49).
Identifying the frequency of the radial second overtone mode is problematic,
because there is a double peak at that position:
$f^{(1)}_2=3.038671$~d$^{-1}$
($P^{(1)}_2/P_0=0.586$) and $f^{(2)}_2=2.999333$~d$^{-1}$
($P^{(2)}_2/P_0=0.593$). The spacing between these two frequencies
is 0.300~d$^{-1}$. Similar double peak structures can be seen
for many other frequencies.
The highest amplitude additional frequency, $f'=2.080672$~d$^{-1}$,
is mysterious. We have not found any frequencies at such a position for
other Blazhko stars (see also Fig.~\ref{fr_add}).
Neither known instrumental frequencies nor linear combinations
of instrumental and real frequencies give a peak at this position.
The spectral window function has a comb-like structure \citep{KIH},
but its peaks are far from $f'$ and their amplitudes
are very low ($\sim 0.004$ on a normalized scale).
So we can rule out $f'$ being an instrumental frequency.
We also checked whether this frequency comes
from this star or not. Using the flux data we searched for
its linear combination frequencies with the main pulsation frequency.
Both $f'+f_0=3.8613$~d$^{-1}$ and $f'-f_0=0.3003$~d$^{-1}$ are indeed
detectable, so we could rule out a background star as the
source of this frequency. The period ratio is $P'/P_0=0.855$.
\cite{Benko14} raised the possibility that this frequency
is a linear combination of the main pulsation frequency
$f_0$ and the radial first overtone mode frequency $f_1$, namely
$f'=(f_0+f_1)/2$. The problem with this interpretation
is that the spectrum shows significant peaks at neither $f_0+f_1$ nor $f_1$.
As we have seen, the spacing between $f^{(1)}_2$ and $f^{(2)}_2$ can
be given as $f' - f_0$, so either of them can be expressed as a linear
combination of $f_2$, $f_0$ and $f'$.
In Fig.~\ref{V354_Lyr} we identified $f_2=f^{(1)}_2$, therefore
$f^{(2)}_2=f_2+f_0 - f'$. Alternatively, if we identify $f_2=f^{(2)}_2$,
then $f^{(1)}_2=f_2+f' - f_0$.
We can find several other peaks with alternative identifications
depending on how we choose $f_2$.
The last frequency listed by \cite{Benko10} is $f''=2.4407$~d$^{-1}$.
Although this frequency is not significant in the spectrum
calculated from the Q1-Q16 data, half of it (1.220~d$^{-1}$) can be detected.
The latter frequency can easily be produced by $f_2-f_0$ if we identify
$f_2=f^{(2)}_2$. As we showed in \cite{Benko14}, the frequency combination
$2(f_2-f_0)$ could explain numerous previously unidentified peaks
in the spectra of some {\it CoRoT} and {\it Kepler} Blazhko stars.
So it is not surprising that we intermittently detect this
combination frequency, $2(f_2-f_0)=2.4407$~d$^{-1}$, in V354~Lyr as well.
As we mentioned in Sec.~\ref{fr_anal}, the amplitudes of all additional
frequencies (HIFs, overtones) change in time, and
these combination frequencies also seem to show a similar time dependence.
\subsubsection{V445\,Lyr = KIC~6186029}\label{v445lyr}
The light curve of the star shows strong
and complicated amplitude changes (Fig.~\ref{zoo}).
It was the subject of a detailed study by \cite{Guggenberger12}.
That paper used the data available at the time
(Q1-Q7), but its main statements remain unchanged in the
light of the more extended Q1-Q16 data set.
The heavily varying parameters, such as the different periods,
amplitudes and phases, result in slightly different
averaged values for these parameters compared to the
ones given by \cite{Guggenberger12}. We confirm
the existence of two modulation frequencies ($f_{\mathrm B}$,
$f_{\mathrm S}$) and four additional frequency patterns,
namely $f_2$, $f_1$, PD and $f_{\mathrm N}=2.763622$~d$^{-1}$.
For the latter we noted in \cite{Benko14} a possible interpretation
as the linear combination $f_{\mathrm N}=2(f_2-f_0)$.
\subsubsection{KIC~7257008}\label{kic7257008}
The variable nature of the star was discovered by the
ASAS survey (\citealt{Poj97, Poj02}; Szab\'o et al. in prep.).
Its {\it Kepler} data were investigated for
the first time by N13. The envelope of the
light curve in Fig.~\ref{zoo} suggests multiple modulation
behavior. We determined the Blazhko frequency both
from the side peak patterns and from the low frequency range of
the Fourier spectrum of the light curve:
$f_{\mathrm B}$=0.02528~d$^{-1}$. The harmonic $2f_{\mathrm B}$
is also significant, as a consequence of the non-sinusoidal nature
of the AM.
The FM is even more
non-sinusoidal: the Fourier spectrum of the O$-$C diagram (Fig.~\ref{fr_low})
contains five significant harmonics of the Blazhko frequency.
A small peak at the sub-harmonic
$f_{\mathrm B}/2=0.01234$~d$^{-1}$ (S/N=3.6) is also present.
After pre-whitening, the Fourier spectra of the upper envelope (maxima) curve
and of the O$-$C diagram retain double side peaks at the
locations of the Blazhko frequency and its harmonics.
The spacing between the side peaks is very narrow ($\sim 0.001$~d$^{-1}$),
which implies a secondary Blazhko period longer than the length of the
data set (Q10-Q16). The amplitudes and phases of the harmonics of
$f_{\mathrm B}$ changed during the observing term, producing
varying envelope and O$-$C curves. These changes were verified
with the amplitude and phase variation tool of {\sc Period04}.
\cite{Molnar14} found that the star shows the PD effect
($1.5f_0=2.871047$~d$^{-1}$) and contains the second overtone
pulsation ($f_2=3.329353$~d$^{-1}$) as well.
\subsubsection{V355\,Lyr = KIC~7505345}\label{v355lyr}
\begin{figure}
\includegraphics[width=4cm,angle=-90,trim=100 30 100 100]{7505345_o-c_fit}
\caption[]{
Residual O$-$C diagram of V355\,Lyr after we pre-whitened
the data with the frequencies $kf_{\mathrm B}$, ($k=1,2,3,5$),
and side peaks $f_{\mathrm B}\pm f_{\mathrm L}$.
} \label{7505345_o-c}
\end{figure}
The light curve of \object{V355 Lyr} in Fig.~\ref{zoo} suggests at
least two modulation periods. The longer-period amplitude
change shows about four cycles during the four-year observing
span, raising the possibility of an instrumental effect connected to the
Kepler year (372.5~d) \citep{Banyai13}. Indeed, we found two strong peaks
in the low frequency range of the spectrum which can be
identified as $f_{\mathrm K}=0.00266$ and $2f_{\mathrm K}=0.00533$~d$^{-1}$.
Other aspects, however, contradict such an explanation.
The Blazhko frequency emerges from the triplet
structures and is also directly detectable as $f_{\mathrm
B}=0.0322\pm0.005$~d$^{-1}$.
There is a detectable peak at $f=0.06154$~d$^{-1}$ (S/N=4.5) which
cannot be the harmonic $2f_{\mathrm B}$, because its difference
from the exact harmonic (0.00147~d$^{-1}$) is twice the
Rayleigh frequency resolution ($\approx0.0007$~d$^{-1}$).
So we may identify it as a possible secondary modulation
frequency ($f=f_{\mathrm S}$). If this is true, the two modulation
frequencies are in a nearly 1:2 ratio, causing the
observed beating phenomenon in the envelope curve.
At the same time, the sub-harmonic $f_{\mathrm B}/2$ is also significant.
A similar situation was first found by \cite{Sodor11}
in the case of the multiperiodic Blazhko star \object{CZ Lac}.
\cite{Jurcsik12} also detected the sub-harmonic of the Blazhko
frequency for \object{RZ Lyr}. As \cite{Sodor11} discussed, we could also identify
$f_{\mathrm B}/2$ as the primary modulation frequency. In that case,
instead of a sub-harmonic we would have a harmonic with a much higher amplitude than
the main modulation frequency, and the modulation curve would have a
rather unusual shape. For these reasons we prefer the sub-harmonic
identification.
In contrast to the amplitude relations mentioned above, the Fourier amplitude
$A(f_{\mathrm S})$ is higher than $A(f_{\mathrm B})$ in the spectrum of
the O$-$C curve.
In other words, $f_{\mathrm S}$ dominates the FM, while $f_{\mathrm B}$
dominates the AM.
Such an effect has never been detected before.
Linear combination peaks at
$f_{\mathrm S}\pm f_{\mathrm B}$
are also detectable. Additionally, the sub-harmonic of $f_{\mathrm B}$ cannot be seen, but
the sub-harmonic of $f_{\mathrm S}$, at $0.03083$~d$^{-1}$, can be detected.
One other significant peak is at $f^{(1)}=0.16316$~d$^{-1}$ (S/N=5.3).
This frequency is located close to $5f_{\mathrm B}=0.16150$~d$^{-1}$, but
the identification $f^{(1)}=5f_{\mathrm B}$ is ambiguous, because no other harmonics
are detectable. The spectrum of the light curve also shows
a marginal (S/N=2.4) peak at this position.
We suspect that it might be a third modulation frequency.
Pre-whitening the O$-$C curve with all the frequencies mentioned
above, we obtain the residual curve
in Fig.~\ref{7505345_o-c}. This residual
shows a sudden period change at about $E\approx1636$ (BJD$\approx2455778$) and a
less pronounced one around $E\approx2386$ (BJD$\approx2456133$). Nothing particular can be seen
in the light curve around these dates.
The higher frequency range of the time series is
dominated by the main pulsation frequency, its harmonics, and
their strong multiplet surroundings.
Beyond the clear PD effect ($1.5f_0=3.155484$~d$^{-1}$;
$f/f_0=1.495$) discussed by \cite{Szabo10},
the Fourier spectrum of the star also shows evidence for
second radial overtone pulsation (see Fig.~\ref{fr_add}).
This feature went undetected
in previous {\it Kepler} studies. The frequency $f_2=3.589528$~d$^{-1}$
($P_2/P_0=0.588$) is surrounded by well-separated side peaks.
\subsubsection{V450\,Lyr = KIC~7671081}\label{v450lyr}
\begin{figure}
\includegraphics[width=8.5cm]{V450_Lyr}
\caption[]{
Top: Zoom from the Fourier spectrum of the O$-$C diagram of V450\,Lyr.
Bottom: Residual O$-$C diagram after we pre-whitened
the data with the five frequencies marked in the top panel. The best-fitting parabola
(continuous line) suggests a very fast period increase.
} \label{V450_Lyr}
\end{figure}
The shape of the maxima curve of \object{V450 Lyr} suggests a strong beating
phenomenon between two modulation periods; however,
similar features have also been seen in other stars
(e.g. V355\,Lyr and KIC~7257008) which proved to be
instrumental effects. Accordingly, we carefully compared the
frequencies determined from the spectra of the light curve and the O$-$C diagram.
In the case of the light curve,
the largest low frequency peak, $f_{\mathrm K}$,
belongs to the Kepler year.
The next one is the modulation frequency $f_{\mathrm B}=0.00813$~d$^{-1}$
($A(f_{\mathrm B})$=6~mmag).
Its harmonic $2f_{\mathrm B}$ is also detectable. The third largest
amplitude peak is at the frequency $f_{\mathrm S}=0.01243$~d$^{-1}$.
Low significance peaks are seen at $f_{\mathrm B}+f_{\mathrm S}$
and $f_{\mathrm S}/2$
($A(f_{\mathrm B}+f_{\mathrm S})\approx A(f_{\mathrm S}/2)\approx2$~mmag).
A similar analysis of the spectrum of the O$-$C diagram (see top panel in
Fig.~\ref{V450_Lyr})
results in two independent frequencies, $f_{\mathrm B}$ and $f_{\mathrm S}$, and
some combinations of them ($f_{\mathrm S}-f_{\mathrm B}$,
$f_{\mathrm S}-2f_{\mathrm B}$, $f_{\mathrm S}/2$).
We can now state that $f_{\mathrm S}$ or $f_{\mathrm S}/2$ is a real
secondary modulation frequency (see the discussion of
the interpretation of the residual sub-harmonic
in Sec.~\ref{v355lyr}).
When we subtract a fit with all the mentioned frequencies,
the residual O$-$C diagram (bottom panel in Fig.~\ref{V450_Lyr}) shows a combination of
a remaining quasi-periodic signal and a parabolic shape indicating
a strong period change. Fitting a quadratic function, we find
a fast period increase, $dP_0/dt=2.4\times 10^{-8}$~dd$^{-1}$, which
cannot be caused by stellar evolution.
This phenomenon can be explained, e.g., as the sign of a third, longer-period
modulation or as a random walk caused by a quasi-periodic/chaotic
modulation.
By investigating the fine structure of the Fourier spectrum
between the harmonics of the main pulsation frequency,
\cite{Molnar14} recognized that V450\,Lyr pulsates in the
second radial overtone mode as well (see also Fig.~\ref{fr_add}).
If we identify the highest peak in this region, at 3.33670~d$^{-1}$,
with $f_2$, the period ratio is $P_2/P_0=0.594$, corresponding
to the canonical second overtone period ratio.
\subsubsection{V353\,Lyr = KIC~9001926}\label{v353lyr}
The AM of V353\,Lyr displays alternating
higher and lower amplitude Blazhko cycles (Fig.~\ref{zoo}).
The phenomenon reminds us of the PD effect, where the amplitudes of
consecutive pulsation cycles alternate. This effect suggests
two modulation frequencies in a nearly 1:2 ratio.
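This expectation can be illustrated with a toy model: adding a second AM
component at about half the Blazhko frequency makes consecutive modulation
cycles alternately stronger and weaker. A schematic sketch with arbitrary
amplitudes:
\begin{verbatim}
import numpy as np

t = np.arange(0.0, 300.0, 0.02)          # time [d]
fB, f1 = 0.01386, 0.00819                # the two detected frequencies
env = 1.0 + 0.10 * np.sin(2 * np.pi * fB * t) \
          + 0.05 * np.sin(2 * np.pi * f1 * t)
# consecutive ~72-d modulation cycles of 'env' alternate in height,
# mimicking the alternation seen in the light curve of V353 Lyr
\end{verbatim}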
The low frequency region of the Fourier spectrum of the data
is dominated by instrumental frequencies such as $f_{\mathrm K}$,
$f_{\mathrm K}/2$, $f_{\mathrm Q}$ and $f_{\mathrm Q}/2$.
If we remove these instrumental peaks, we find the two largest
non-instrumental ones: $f_{\mathrm B}$=0.01386~d$^{-1}$
and $f^{(1)}$=0.00819~d$^{-1}$. For these values we obtain
0.01394 and 0.00751~d$^{-1}$, respectively,
from the spacings of the side peaks.
The two frequencies are in a ratio of $\approx1:2$,
as predicted. Since we detected the sub-harmonic
of the Blazhko frequency for several stars, $f^{(1)}$ could also be a
sub-harmonic ($f^{(1)}=f_{\mathrm B}/2$) of $f_{\mathrm B}$.
The O$-$C diagram again shows alternating
maxima and minima (Fig.~\ref{o-c_all}). The Fourier spectrum
of this curve is much simpler than the spectrum of the original light curve data.
It contains $f_{\mathrm B}$=0.01395~d$^{-1}$ and its
harmonic $2f_{\mathrm B}$, $f^{(1)}=f_{\mathrm S}=0.00761$~d$^{-1}$, and an additional
discrete peak close to 0 representing a global long-term period change.
Removing these three frequencies, we obtain a residual spectrum which contains
two significant peaks at 0.01462~d$^{-1}$ (S/N=11.8)
and 0.02010~d$^{-1}$ (S/N=4). The most plausible identifications of
these peaks are $2f_{\mathrm S}$ and $f_{\mathrm B}+f_{\mathrm S}$, respectively.
These frequencies, especially the linear combination, contradict
the sub-harmonic scenario.
By removing all the mentioned frequencies,
we obtain a parabolic-shaped residual diagram (which can be seen in Fig.~\ref{o-c_all}).
The period decrease calculated from the best-fitting parabola
is $-8.4\times 10^{-9}$~dd$^{-1}$, which cannot be explained by an evolutionary
effect. It suggests hidden period(s) and/or chaotic variation.
To search for additional frequencies we
pre-whitened the data with the main frequency, its harmonics and
the largest 7-8 side peaks around the harmonics.
We have found neither the PD effect nor any
higher order radial overtone modes (Fig.~\ref{fr_add}).
\subsubsection{V366\,Lyr = KIC~9578833}\label{v366lyr}
\begin{figure}
\includegraphics[width=8.5cm]{V366_Lyr}
\caption[]{
The Fourier spectrum of the O$-$C diagram of V366\,Lyr.
Top panel: a possible identification of the peaks.
Bottom panel: spectrum after we pre-whitened the data with the
four most significant frequencies above.
} \label{V366_Lyr}
\end{figure}
The maxima of the light curve show a beating-like phenomenon (Fig.~\ref{zoo}).
Calculations from the multiplet structures and the determination
of the significant low frequencies yield the same results.
Besides the primary modulation
frequency $f_{\mathrm B}$=0.0159~d$^{-1}$, two additional
peaks can be detected at the frequencies
$f^{(1)}=0.03415$ and $f^{(2)}=0.03175$~d$^{-1}$ with nearly equal amplitudes.
The latter is the first harmonic of the Blazhko
frequency, but the former seems to belong to a
secondary modulation frequency. Both the primary
($kf_0\pm f_{\mathrm B}$) and the
secondary modulation side peaks ($kf_0\pm f_{\mathrm S}$)
can be detected.
The ratio of the two modulation frequencies is close to 1:2.
The O$-$C diagram (Fig.~\ref{o-c_all}) shows
a typical beating signal.
Its spectrum is surprising (Fig.~\ref{V366_Lyr}).
As expected, it contains
two close frequencies, at $f_{\mathrm B}$=0.015932~d$^{-1}$
and $f^{(3)}=0.01823$~d$^{-1}$, which must be responsible for
the beating phenomenon. Two other peaks are also visible.
One of them is the harmonic of the primary modulation
frequency ($2f_{\mathrm B}$), while the other is at the
position of $f^{(1)}=f_{\mathrm S}=0.03414$~d$^{-1}$.
In this framework we can identify $f^{(3)}=f_{\mathrm S}-f_{\mathrm B}$.
Strangely enough, the amplitude of $f_{\mathrm S}$
is 1.55 times higher (2.6 vs. 1.7~minutes) than that of $f_{\mathrm B}$.
In other words, the roles of the primary and secondary Blazhko
modulations are reversed. This is the second instance
of this new phenomenon (see also V355\,Lyr in Sec.~\ref{v355lyr}).
There is an alternative identification scenario for the detected
frequencies. If we assume $f^{(3)}$ to be the genuine secondary modulation,
then the amplitudes would be in the expected order,
$A(f_{\mathrm B})=1.7 > A(f_{\mathrm S})=1.1$~minutes;
however, $f^{(1)}$ would then have to be identified as $f_{\mathrm B}+f_{\mathrm S}$.
In this case, (i) the amplitude of the linear combination frequency
would be higher than that of either of its components. Moreover, since
$f^{(1)}$ can be detected directly in the spectrum of the light curve,
(ii) \object{V366 Lyr} would be the only case where a linear combination
frequency is detectable instead of the secondary modulation frequency.
For these two reasons (i and ii), we prefer the first scenario.
When we pre-whiten the O$-$C spectrum with the four frequencies discussed,
additional significant peaks appear at some
harmonic and linear combination frequencies, namely
at $2f_{\mathrm S}$, $f_{\mathrm S}+f_{\mathrm B}$ and
$2f_{\mathrm B}-kf_{\mathrm S}$ (bottom panel in Fig.~\ref{V366_Lyr}).
After subtracting all these significant frequencies,
the residual O$-$C diagram shows no further structure.
On the basis of the Q1-Q2 data, no additional
frequency pattern had been found for V366\,Lyr \citep{Benko10}.
The situation changes when we take into account the Q1-Q16
time span. The highest peak between the harmonics is
at the frequency $f^{(4)}=2.675799$~d$^{-1}$ (S/N=4.7, Fig.~\ref{fr_add}).
The period ratio is $P^{(4)}/P_0$=0.71. Frequencies with similar
period ratios were discovered in the CoRoT targets
\object{V1127 Aql} and \object{CoRoT 105288363}, and in the
{\it Kepler} stars V445\,Lyr and \object{V360 Lyr}
\citep{Chadid10, Guggenberger12, Benko10}.
These frequencies are the dominant additional
frequencies for only three stars: V1127\,Aql, V360\,Lyr and
V366\,Lyr.
The cited papers generally explain these
frequencies as the excitation of independent non-radial modes.
As we showed in \cite{Benko14}, all of these
frequencies can also be constructed by a linear
combination, $f^{(4)}=2(f_2-f_0)$. Here, in the case of V366\,Lyr,
we may identify the second overtone mode frequency
with the marginal peak at $f_2=3.227711$~d$^{-1}$ (S/N=2.7).
This formal mathematical description has its
own strengths and weaknesses. As opposed to the
non-radial explanation, it could be verified or refuted
using existing radial hydrodynamic codes.
It is hard to understand, however, why the linear combination $2(f_2-f_0)$
should be stronger than $f_2-f_0$.
\subsubsection{V360\,Lyr = KIC~9697825}\label{v360lyr}
\begin{figure}
\includegraphics[width=4cm,angle=-90,trim=100 30 100 100]{9697825_o-c_fit}
\caption[]{
Residual O$-$C diagram of V360\,Lyr after we pre-whitened
the data with all significant frequencies.
} \label{9697825_o-c}
\end{figure}
The maxima of the light curve of V360~Lyr in Fig.~\ref{zoo}
show a slight beating phenomenon.
The Fourier spectrum contains rich multiplet structures around
the harmonics of the main pulsation frequency.
The largest side peaks indicate the following frequencies:
$f_{\mathrm B}=0.01919$,
$f^{(1)}=0.02374$, $f^{(2)}=0.02821$ and
$f^{(3)}=0.04753$~d$^{-1}$. Two of them ($f_{\mathrm B}$ and $f^{(3)}$)
can also be detected directly. If we assume $f^{(3)}$ to be a
second independent modulation frequency ($f^{(3)}=f_{\mathrm S}$),
the other two values can be expressed as
$f^{(2)}=f_{\mathrm S}-f_{\mathrm B}$ and $f^{(1)}=f_{\mathrm S}/2$.
Beyond the four frequencies mentioned above,
the Fourier spectrum of the O$-$C diagram shows two additional
peaks, at $2f_{\mathrm B}$ and $f_{\mathrm S}+f_{\mathrm B}$.
We have again found a Blazhko star which shows not only a secondary
modulation but its sub-harmonic as well (see also the discussion in Sec.~\ref{v355lyr}).
The ratio of the two modulation frequencies is 0.404,
i.e. close to 2:5. The residual curve of the
O$-$C diagram (Fig.~\ref{9697825_o-c}) shows a rather long-period
oscillation resembling a secular period change.
Two additional frequencies of V360\,Lyr
($f_1=2.4875$ and $f'=2.6395$~d$^{-1}$) were
reported by \cite{Benko10}, who explained $f_1$
with a possible first overtone mode and $f'$
with an independent non-radial mode.
As \cite{Szabo10} already mentioned,
$f'$ could also be a member of a PD pattern around $f'=1.5f_0$.
When we analyze the Q1-Q16 data set, the highest
amplitude peak in this region is located at $f^{(4)}=2.678669$~d$^{-1}$
($f^{(4)}/f_0=1.49$), which is without doubt a PD frequency.
The interpretation of the strongest additional peak, at the
frequency $f^{(5)}=2.487740$~d$^{-1}$, is more problematic.
The period ratio $P^{(5)}/P_0=0.721$ is far from the
canonical ratio of the first radial overtone and fundamental modes (0.745).
Such a ratio could be produced
by a highly metal-abundant RR\,Lyrae star (see e.g. fig.~8 in \citealt{Chadid10}),
but the metallicity of V360\,Lyr is [Fe/H]=$-1.5\pm 0.35$~dex (N13).
The difference might also be
due to a resonance, which excites this mode
in a non-traditional way \citep{Molnar12}.
In \cite{Benko14} we suggested another explanation:
similarly to V366\,Lyr (and V445\,Lyr), this frequency
could be the linear combination $2(f_2-f_0)$, where
$f_2=3.036015$~d$^{-1}$ is the frequency of the second radial overtone mode.
\subsubsection{KIC~9973633}\label{kic9973633}
The history of the star is the same as that of KIC~7257008:
the ASAS survey discovered it, and its basic parameters
were first determined by N13.
The {\it Kepler} data set of KIC~9973633 is the least favorable
in the analyzed sample: it has short
time coverage (data exist only from Q10 onward) and two additional
quarters (Q11 and Q15) are missing (Fig.~\ref{zoo}).
Although the triplet structures around the harmonics
provide us with the Blazhko frequency $f_{\mathrm B}$=0.01490~d$^{-1}$, we
cannot detect it directly in the low frequency range of the
Fourier spectrum. Here the instrumental peak $f_{\mathrm K}$
dominates. When we remove it, we find two high amplitude peaks at
$f_{\mathrm K}/2$ and $f^{(1)}=0.00411$~d$^{-1} = f_{\mathrm B}-f_{\mathrm Q}$.
After the next pre-whitening step we can see three
peaks near the detection threshold, at $f_{\mathrm Q}$,
$f^{(2)}=0.03771$~d$^{-1}$ and $f^{(3)}=0.02701$~d$^{-1}$.
It is easily recognizable that $f^{(3)}=f^{(2)}-f_{\mathrm Q}$.
Which one is the real independent frequency,
$f^{(2)}$ or $f^{(3)}$?
To investigate this question we analyzed the O$-$C diagram of
KIC~9973633. The Fourier spectrum
shows two highly significant peaks at $f_{\mathrm B}=0.01486$~d$^{-1}$
and $2f_{\mathrm B}=0.02972$~d$^{-1}$ (Fig.~\ref{fr_low}). By pre-whitening the data with
these two frequencies we obtain a third one at 0.03675~d$^{-1}$,
which agrees with $f^{(2)}$ within the frequency resolution.
If we identify $f^{(2)}=f_{\mathrm S}$, we are in a similar situation
to that of V360\,Lyr (Sec.~\ref{v360lyr}): we have two modulation frequencies
with a ratio of 2:5.
The residual of the O$-$C diagram does not show any structure:
it is a constant line with some scatter.
\subsubsection{V838\,Cyg = KIC~10789273}\label{v838cyg}
\begin{figure}
\includegraphics[width=9cm]{V838_Cyg}
\caption[]{
Analysis of \object{V838 Cyg}. The top panel shows the synthetic light
curve prepared by using the Fourier parameters of the observed
data without assuming any modulation, sampled at the
{\it Kepler} time stamps (cf. Fig.~\ref{zoo}). Middle panel: Fourier spectra
of the observed (red continuous line) and synthetic (blue dashed line)
time series around the pulsation frequency after the data were
pre-whitened with the main frequency. The left and right hand side magnitude scales belong
to the observed and synthetic spectra, respectively. Bottom panel:
Part of the O$-$C diagrams for LC data (red crosses) and SC observations
(blue dots).
} \label{V838_Cyg}
\end{figure}
The extremely low amplitude
Blazhko modulation of V838\,Cyg was discovered by N13.
The light curve in Fig.~\ref{zoo} shows wavy maxima and minima; however,
this structure is produced by the interference of
the sampling and pulsation frequencies and does not indicate any AM.
It is a virtual modulation and not a real one.
This means that more careful investigation is needed to decide whether
V838\,Cyg is modulated or not.
Clear triplets around the main pulsation frequency
and its harmonics in the Fourier spectrum suggest a modulation
with the frequency $f_{\mathrm B}=0.01681$~d$^{-1}$.
It follows that the period is $P_{\mathrm B}=59.5$~d, which roughly
agrees with the dominant modulation period (54-55~d)
found by N13.
V838\,Cyg was observed in SC mode
in only one quarter, Q10. We processed these pixel data
in the same way as we did for the LC data.
The obtained SC light curve does not show the virtual modulation
seen in the LC data, but it does show a small amplitude change. This variation is, however, hard to distinguish
from possible instrumental trends. To test the presence of
the modulation we constructed synthetic data. We prepared an artificial time series
$m_{\mathrm{syn}}(t)$ using the Fourier parameters of the observed data,
$f_0$, $A_k$ and $\varphi_k$, where $A_k$ and $\varphi_k$ denote
the amplitude and phase of the $k$th harmonic ($k=1, 2,\dots,11$),
i.e. without modulation side peaks:
\[
m_{\mathrm{syn}}(t)=\sum_{k=1}^{11}A_k \sin (2\pi k f_0 t + \varphi_k).
\]
The synthetic data were sampled at the observed time stamps $t$
(top panel in Fig.~\ref{V838_Cyg}).
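Generating such a synthetic series is straightforward. A minimal sketch,
assuming the fitted amplitude and phase arrays {\tt A}, {\tt phi} and the
observed time stamps {\tt t} are at hand:
\begin{verbatim}
import numpy as np

def synthetic_lc(t, f0, A, phi):
    """m_syn(t) = sum_k A_k sin(2*pi*k*f0*t + phi_k), sampled at
    the observed time stamps, so that any side peaks in its
    spectrum are purely sampling artifacts."""
    k = np.arange(1, len(A) + 1)
    return (A[:, None] * np.sin(2 * np.pi * k[:, None] * f0 * t[None, :]
                                + phi[:, None])).sum(axis=0)
\end{verbatim}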
The spectra of the observed and synthetic
data show systematic differences (see middle panel in Fig.~\ref{V838_Cyg}).
(i) In the case of the synthetic
data the side peaks have $\approx80$ times smaller
amplitudes than the observed ones (0.012~mmag vs. 1~mmag). (ii) The positions of the side
frequencies are different. (iii) The structures of these
side patterns are also different: the observed data show
a simple, clear triplet, while the synthetic data produce complicated
multiplets. To summarize, we have confirmed
the AM with an independent method.
Since the amplitude of the modulation
is very small, we could not detect $f_{\mathrm B}$ directly in
the low frequency range of the Fourier spectrum. This part of the
spectrum is dominated by instrumental frequencies
($f_{\mathrm K}$, $f_{\mathrm L}$, where $P_{\mathrm L}=1/f_{\mathrm L}$ corresponds to
the total length of the observations: $\sim4$~years).
N13 mentioned possible multiple modulations.
After pre-whitening the data with the side frequencies
$kf_0\pm f_{\mathrm B}$, however, only instrumental side peaks
($kf_0\pm f_{\mathrm K}$, $kf_0\pm f_{\mathrm L}$) remain
significant around the harmonics.
We cannot construct the O$-$C diagram in the traditional
way, because the sparse sampling combined with the relatively short
pulsation period and the interpolation errors produces
systematic undulations (bottom panel in Fig.~\ref{V838_Cyg}).
Due to the long Blazhko period, the analysis of the SC data does not help much.
The sensitive method used by N13 resulted in
a 0.001~radian phase variation in $\varphi_1$, which predicts about
$5\times 10^{-4}$~d~$\approx 40$~s O$-$C variation. This is about
our detection limit: the standard deviation of the O$-$C for the SC data
is $\approx 2\times 10^{-4}$~d. Our analysis thus supports
the phase modulation reported in the discovery paper.
We also found additional frequency patterns in this star for the first time.
After we subtracted the significant harmonics of the
pulsation frequency and the triplets around them,
the following additional frequencies could be detected
between $f_0$ and $2f_0$:
$f_2=3.509142$~d$^{-1}$, $1.5f_0=3.113906$~d$^{-1}$ (PD)
and $3f_0-f_2=2.737256$~d$^{-1}$
(see Fig.~\ref{fr_add}).
\subsubsection{KIC~11125706}\label{kic11125706}
Though KIC~11125706 shows the second lowest amplitude AM
in our sample, its long pulsation period allows us to
detect the modulation without any problems. The asymmetric triplet structures
around the pulsation frequency and its harmonics provide
$f_{\mathrm B}$. Direct detection of this frequency failed, however:
the low frequency range of the Fourier spectrum is dominated by
instrumental peaks.
In contrast, the Fourier spectrum of the O$-$C diagram
(Fig.~\ref{fr_low}) contains a very significant (S/N=37.8)
peak at $f_{\mathrm B}$.
Pre-whitening with this frequency, a lower amplitude
but clearly detectable ($A(f^{(1)})=1.9\times 10^{-4}$~d, S/N=$6.4$)
peak can be seen at $f^{(1)}=0.01698$~d$^{-1}$.
Is it a real secondary modulation frequency, or does it result from the
sampling? We generated synthetic O$-$C data using the Fourier
parameters of the primary Blazhko period and
added Gaussian noise. The synthetic diagram was sampled
at exactly the same points as the observed one. The Fourier
spectrum of the artificial O$-$C curve contains only
$f_{\mathrm B}$ and does not show any other significant peaks.
Thus we ruled out an instrumental origin of this frequency,
and we tend to identify $f^{(1)}=f_{\mathrm S}$. In this case
the two modulation frequencies are in a ratio of nearly 3:2.
The quadratic fit of the residual O$-$C gives a slight period increase:
$dP_0/dt=4.4\times 10^{-10}\pm1.1\times10^{-10}$~dd$^{-1}$.
Returning to the light curve analysis, we do not
find any significant frequencies around $f_{\mathrm S}$,
and no side peaks are detected which could belong to this
frequency. These facts mean that the AM of $f_{\mathrm S}$
is below our detection limit. Furthermore, no additional peak
patterns have been found between the harmonics (Fig.~\ref{fr_add}).
\subsubsection{V1104\,Cyg = KIC~12155928}\label{v1104cyg}
Beyond V2178\,Cyg, V1104\,Cyg was also the subject of
the case study of N13. We can add only a
few things to that study. We derived the Blazhko cycle based on a longer time span.
The frequency of the Blazhko modulation, $f_{\mathrm B}$, is
highly significant both in the Fourier spectrum of the light
curve and in that of the O$-$C diagram. The latter spectrum also includes the
harmonic $2f_{\mathrm B}$ (Fig.~\ref{fr_low}),
indicating the non-sinusoidal behavior of the
FM. The O$-$C residual does not indicate any
period change on the {\it Kepler} time-scale.
Our analysis revealed neither a secondary
modulation nor additional frequencies between the harmonics (Fig.~\ref{fr_add}).
\section{Conclusions}
The main goal of this paper was to investigate
the long time-scale behavior of the Blazhko effect
among the {\it Kepler} RR\,Lyrae stars.
To provide the best input for the analysis, we prepared
time series from the pixel photometric data with our own tailor-made apertures.
These light curves include the total flux of the
stars in only 9 cases, while some portion of the flux of 6 RR\,Lyrae stars
was lost even when using the largest possible aperture.
Nevertheless, our data set comprises the longest continuous and
most precise observations of Blazhko RR\,Lyrae stars
ever published. These data will remain unrivalled for years
to come.
Since the Blazhko effect manifests itself in simultaneous
AM and FM, we analyzed both phenomena
and compared the results of the separate analyses.
This approach reduces the influence of instrumental effects.
We detected a single Blazhko period for three stars:
V783\,Cyg, V838\,Cyg and V1104\,Cyg. Since
V838\,Cyg shows the smallest amplitude modulation in both AM
and FM, we could not confirm
its multiperiodicity, which was suspected by N13.
Twelve stars in our sample show evidence of multiperiodic
modulation. In eight cases we could determine two significant Blazhko
periods, while for four additional cases
(V2178\,Cyg, V808\,Cyg, V354\,Lyr and KIC~7257008) we could
establish the presence of a possible long secondary period.
This does not mean, however, that we could completely describe the total
variation with these one or two periods. The residual curves show
significant structures (see e.g. V2178\,Cyg, V808\,Cyg, V450\,Lyr)
after subtracting the best-fitting light/O$-$C curves.
For the 2-3 stars with the shortest Blazhko periods we have a
chance to carry out a dynamical analysis. In this work we
mentioned the preliminary result on the cycle-to-cycle variation of
the V783\,Cyg modulation (Plachy et al. in prep.), which hints at its chaotic nature.
The latest and most complete
compilation of the Galactic Blazhko RR\,Lyrae stars \citep{Skarka13}
contains only 8 multiperiodic cases among the 242 known field Blazhko
RR\,Lyrae stars (3.3\%). More recently, \cite{Skarka14} studied
a more homogeneous sample of 321 stars from the ASAS and SuperWASP surveys.
He found the ratio of multiperiodic and irregularly modulated stars
to be 12\%. In this work,
surprisingly, we found that most of the {\it Kepler} Blazhko stars
-- 12 of 15 (80\%) -- belong to the multiperiodic group.
In other words, the Blazhko effect predominantly manifests
as a multiperiodic phenomenon instead of a
mono-periodic and regular one.
Here we briefly summarize the main characteristics of the
multiperiodic modulations.
\begin{itemize}
\item
Until now, the known smaller amplitude (secondary) modulation
periods have generally been longer than the primary ones.
The only exception is RZ\,Lyr \citep{Jurcsik12}.
Here we showed five further examples
(V355\,Lyr, V450\,Lyr, V366\,Lyr, V360\,Lyr and
KIC~9973633) of shorter secondary periods.
\item
What is more, the definition of the primary and secondary modulations
proved to be relative. In three cases
(V450\,Lyr, V366\,Lyr and V355\,Lyr) the relative strength
of the primary modulation is weaker in the FM than in the AM.
\item
The linear combinations of the modulation frequencies can
generally be detected, which indicates nonlinear coupling between the
modes. Sub-harmonic frequencies ($f_{\mathrm B}/2$ and/or $f_{\mathrm S}/2$)
were detected in numerous cases (KIC~7257008, V355\,Lyr, V450\,Lyr, V360\,Lyr).
In the majority of the cases
the two modulation periods are in a ratio of small integer
numbers:
1:2 for V353\,Lyr; 2:1 for V366\,Lyr and V355\,Lyr;
(3:2 for V2178\,Cyg); 2:3 for KIC~11125706;
5:2 for KIC~9973633 and V360\,Lyr. (Here the ratios of the
primary AM period vs. the secondary one are indicated.)
\end{itemize}
As a by-product of the analysis, we report here for the
first time additional frequency structures for V808\,Cyg,
V355\,Lyr and V838\,Cyg. In all three cases we detected
the second radial overtone mode $f_2$.
Former studies of the {\it CoRoT} and {\it Kepler} Blazhko data
found unidentified frequencies for numerous stars.
These frequencies were explained by non-radial mode excitation.
Here we showed that almost all such frequencies can also
be produced by linear combinations of radial modes.
The only case where we could not find a proper linear combination is
the highest amplitude additional frequency of V354\,Lyr.
The amplitudes of these frequencies point rather to the non-radial
mode scenario. These non-radial modes may be excited by resonances at the locations of
the linear combinations of the radial modes \citep{VanHolst}.
In other words, these frequencies are linear combinations from a mathematical
point of view only; physically they are frequencies of independent
(non-radial) modes.
\acknowledgments{
Funding for this Discovery mission is provided by NASA's Science Mission Directorate.
This project has been supported by
the `Lend\"ulet-2009 Young Researchers' Program of
the Hungarian Academy of Sciences, the Hungarian OTKA grant K83790 and
the KTIA URKUT\_10-1-2011-0019 grant.
The research leading to these results has received funding from the
European Community's Seventh Framework Programme (FP7/2007-2013) under
grant agreements no. 269194 (IRSES/ASK) and no. 312844 (SPACEINN).
The work of E. Plachy was supported by the European Union and
the State of Hungary, co-financed by the European Social Fund in the framework
of T\'AMOP 4.2.4.\ A/2-11-1-2012-0001 `National Excellence Program'.
R. Szab\'o was supported by the J\'anos Bolyai Research Scholarship of the
Hungarian Academy of Sciences.
The authors thank the referee for carefully reading our
manuscript and for his/her helpful suggestions.
}
Brownian dynamics has been widely applied to nano-scale (macromolecular) driven systems such as AFM/DFM cantilevers \cite{Liang, Tamayo, Kim03}, motor proteins \cite{Juli97}, ion channels \cite{Colqu}, and tribology \cite{Urbakh04}. These systems are mainly out of equilibrium, driven by external agents. The design of efficient nano-scale systems requires mesoscopic non-equilibrium thermodynamics.
We have introduced mesoscopic heat, work, and entropy \cite{Kim03, Kim06, Seki97, Seifert05}. We have provided a rigorous thermodynamic analysis of a molecular refrigerator composed of a Brownian harmonic oscillator and an external control agent that actively reduces the thermal fluctuations of the oscillator \cite{Kim03, Kim06}. We assumed that the interaction between the refrigerator and its surroundings is expressed as linear friction and Gaussian white noise.
In this manuscript, we extend our thermodynamic analysis to study Brownian particles under \emph{multiplicative} noise. When polymers are in a solvent, they are subject to hydrodynamic interactions \cite{Doi, Hooger92}. Their Brownian motions are well described by a state-dependent diffusion process, i.e., multiplicative noise.
Such noise appears in other diverse fields \cite{Sancho82}, e.g., motor proteins \cite{Bier96} and regulation of gene expression \cite{Hasty00}.
Most numerical and analytical studies of systems under multiplicative noise have focused on their stochastic dynamics rather than on their thermodynamics. To our knowledge, a quantitative thermodynamic analysis is absent for the following two reasons.
The first reason is the lack of a physical definition of the mesoscopic heat dissipated by a Brownian particle. The mesoscopic heat is the mesoscopic work done by the Brownian particle on the solvent, so it is expressed as an integral of the interaction force between the particle and the solvent along the position-space trajectory of the particle: $Q(t) = -\int_{s=0}^{s=t} d\mb{x}(s) \cdot \mb{F}_{PS}(\mb{x}(s), \mb{v}(s))$, where $\mb{x}(s)$ and $\mb{v}(s)$ are the position and velocity of the particle at time $s$, $\mb{F}_{PS}$ is the force exerted on the particle by the solvent molecules, and $t$ is time. We have previously adopted, without rigorous justification, the Stratonovich prescription for this integration along the trajectory \cite{Qian-time, Kim03, Kim06}. In this manuscript, we justify this choice.
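Numerically, the Stratonovich prescription corresponds to a mid-point evaluation of the integrand along the trajectory. The following 1-D sketch is our own illustration (not taken from the cited works): it assumes $\mb{F}_{PS} = -\gamma(x)\,v + g(x)\,\xi(t)$ with unit mass, an arbitrary illustrative friction profile $\gamma(x)$, and a noise amplitude $g$ fixed by an Einstein-type relation $g^2 = 2\gamma k_{\mathrm B}T$ of the kind discussed below:
\begin{verbatim}
import numpy as np

# illustrative state-dependent friction; noise amplitude fixed by
# g^2 = 2*gamma*kT with kT = 1 (an Einstein-type relation)
gamma   = lambda x: 1.0 / (1.0 + 0.5 * x**2)
g       = lambda x: np.sqrt(2.0 * gamma(x))
U_prime = lambda x: x                    # harmonic potential U = x^2/2

def heat_trajectory(x0, v0, dt, n_steps, seed=0):
    """Accumulate Q(t) = -int dx . F_PS with the mid-point
    (Stratonovich) rule along an Euler-Maruyama trajectory."""
    rng = np.random.default_rng(seed)
    x, v, Q = x0, v0, 0.0
    for _ in range(n_steps):
        xi = rng.normal(0.0, np.sqrt(dt)) / dt   # white noise, one step
        F_ps = -gamma(x) * v + g(x) * xi         # force from the solvent
        x_new = x + v * dt
        v_new = v + (-U_prime(x) + F_ps) * dt    # unit mass
        # Stratonovich: evaluate the heat increment at the mid-point
        x_m, v_m = 0.5 * (x + x_new), 0.5 * (v + v_new)
        Q -= (x_new - x) * (-gamma(x_m) * v_m + g(x_m) * xi)
        x, v = x_new, v_new
    return Q
\end{verbatim}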
The second reason is that the mesoscopic heat needs another prescription in $\mb{F}_{PS}(\mb{x}(s), \mb{v}(s))$. The phase space trajectory of the particle differs for different prescriptions of the stochastic integration involved in $\mb{v}(t) = \int_0^t ds\, \mb{F}_{T}(\mb{x}(s), \mb{v}(s))$, with unit mass and with $\mb{F}_{T}$ the total force on the particle. This means that $\mb{F}_{PS}(\mb{x}(s), \mb{v}(s))$ also depends on the prescription in $\mb{v}$. However, this dependence is removed under a generalized Einstein relation \cite{Arnold00}. Can one then express the heat in a form independent of the prescription in $\mb{v}$ under this relation? We provide such an expression for the heat from energy balance.
We also answer a fundamental question: ``Does detailed balance guarantee an equilibrium steady state?'' Recently, in Brownian systems without multiplicative noise, detailed balance has been shown to guarantee that the system reaches an equilibrium steady state \cite{Qian-time, Kim03}. However, as presented here, in systems under multiplicative noise this is not true.
This paper is organized as follows. From Sec.\ref{model} to Sec.\ref{sec-detailed-ep}, we study a model of macromolecules in a \emph{closed} heat bath, where the system reaches an equilibrium steady state. In Sec.\ref{model}, a model of the macromolecules is introduced using both a Langevin equation and its corresponding Fokker-Planck equation. The mesoscopic heat dissipated from Brownian particles is introduced from mesoscopic energy balance, with the Stratonovich prescription for the stochastic integration of the force along the past spatial trajectories. In Sec.\ref{sec-Einstein}, a generalized Einstein relation is derived to guarantee a Maxwell-Boltzmann distribution in the equilibrium steady state. We show that this relation makes the mesoscopic heat independent of the prescription of stochastic integration of the Langevin equation. In Sec.\ref{sec-entropy}, using the resulting definite form of the heat dissipation, we derive entropy balance and show that the H-function, i.e., the free energy, becomes maximized in the equilibrium state. In Sec.\ref{sec-detailed} and \ref{sec-detailed-ep}, we investigate the relationship among detailed balance, the equilibrium state, and the H-function. In Sec.\ref{sec-nonequil}, we consider a system \emph{driven} by feedback controls and reformulate all the previous results for a non-equilibrium steady state. Finally, in Sec.\ref{sec-proof}, we prove that the Stratonovich prescription for the stochastic integration involved in the definition of the mesoscopic heat is the unique physical choice.
\section{A Model of Single Macromolecules under Multiplicative Noise} \label{model}
We consider a \emph{closed} system composed of a macromolecule and its surrounding isothermal water solvent. Following the general theory of polymer dynamics \cite{Doi}, the macromolecule itself is described by a bead-spring model with Hamiltonian $H = \Sigma_\alpha \frac{\mb{p}_\alpha^2}{2m_\alpha} + U_{int}(\mb{x}_1, \cdots, \mb{x}_N)$, where $\mb{x}_\alpha$ and $\mb{p}_\alpha$ are the 3-D coordinate and momentum vectors of the $\alpha$-th bead of the macromolecule, respectively, and $U_{int}$ is an internal potential of the macromolecule. The random collisions between water molecules and the beads are modeled by a multiplicative noise and a frictional force $\mathbf{f}(\mb{v}_1 ,\cdots ,\mb{v}_N )$, with $\mb{v}_i$ the velocity of the $i$-th bead; this is justified because each bead is assumed to be much larger than the water molecules in the heat bath, so that the time scales of the two can be separated \cite{Shea96, Shea98}. For simplicity, but without loss of generality, we consider the macromolecule as a single point-like bead. Its internal energy $H$ then becomes $H= v^2/2$ with unit mass. The dynamics of the bead can be described by the following Langevin equation,
\begin{equation}
\frac{d\mb{v}}{dt} = \mathbf{f}(\mb{v}) + \mbh{\Gamma}(\mb{v}) \cdot \mb{\xi}(t),
\label{nonlinear-langevin}
\end{equation}
where $\mb{\xi}$ is Gaussian white noise satisfying $\langle \mb{\xi}(t)_i \mb{\xi}(t^\prime)_j \rangle = \delta_{ij}\delta(t-t^\prime)$ with $i,j=x,y,z$ in 3-D. Note that the noise term has a state-dependent amplitude, $\mbh{\Gamma}(\mb{v})$; such state-dependent noise, $\mbh{\Gamma}(\mb{v})\cdot \mb{\xi}$, is called multiplicative noise and here incorporates hydrodynamic interactions. For example, a spherical hard particle with finite radius $R$ undergoes hydrodynamic interactions with a nonlinear fluctuation force in the Oseen approximation \cite{Hermans},
\begin{equation}
(\mbh{\Gamma} \mbh{\Gamma}^T)_{ij} = T \zeta \{ (1+\frac{9}{16} \frac{\rho R}{\eta} v) \delta_{ij} - \frac{3}{16}\frac{\rho R}{\eta} \frac{v_i v_j}{v} \},
\label{oseen}
\end{equation}
where $\zeta$ is the frictional coefficient due to the interaction between the solvent and the particle, $\rho$ is the density of the solvent, $\eta$ is its viscosity, and $v$ is the modulus of $\mb{v}$. The frictional force, $\mb{f}(\mb{v})$, is shown later to be given by Eq.(\ref{fprime}), (\ref{fric2}), and (\ref{f-hydro}) once $\mbh{\Gamma}$ is known.
We will discuss the above example more in detail in Sec. \ref{sec-Einstein}.
We should note that Eq.(\ref{nonlinear-langevin}) is meaningless without an integration prescription, since a pulse in the white noise causes a finite jump in $\mb{v}$ and the value of $\mb{v}$ in $\mbh{\Gamma}(\mb{v})$ then needs to be prescribed. The two popular prescriptions of Ito and Stratonovich have been widely used. The Ito prescription takes the value of $\mb{v}$ in $\mbh{\Gamma}(\mb{v})$ before the jump in $\mb{v}$ caused by the white noise, while the Stratonovich prescription takes the midpoint of the values of $\mb{v}$ before and after the jump. A Langevin equation with either prescription can be converted into one with the other prescription \cite{Gardi}. For ease of calculation, we convert Eq.(\ref{nonlinear-langevin}) into the corresponding Ito-prescribed form,
\begin{equation}
\frac{d\mb{v}}{dt} = \mb{f^\prime}+ \mb{\hat{\Gamma}}(\mb{v}) \cdot \mb{\xi}(t),
\label{Ito}
\end{equation}
where
\begin{equation}
f^\prime_i(\mb{v}) \equiv f_i(\mb{v}) + a \hat{\Gamma}_{kj}(\mb{v}) \partial_j \hat{\Gamma}_{ki}(\mb{v}),
\label{fprime}
\end{equation}
with $a=0$ ($1/2$) for the Ito (Stratonovich)-prescribed Eq.(\ref{nonlinear-langevin}). Note that the Einstein summation convention is used.
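For concreteness, in one dimension Eq.(\ref{fprime}) reduces to
\[
f^\prime(v) = f(v) + a\,\Gamma(v)\,\partial_v\Gamma(v) = f(v) + \frac{a}{2}\,\partial_v \Gamma^2(v),
\]
so a Stratonovich-prescribed equation ($a=1/2$) acquires the extra drift $\frac{1}{4}\partial_v\Gamma^2$ when rewritten in Ito form.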
Let us consider energy balance. The change of the mechanical energy of the macromolecule, $dH(X_t, Y_t)$, equals the work done on the macromolecule by all the external forces, i.e., $dH(X_t, Y_t) = (f^\prime+\hat{\Gamma} \cdot \xi) \circ dX$, where $\circ$ indicates that the stochastic integration is done in the Stratonovich way. It will be shown in Sec.\ref{sec-proof} that the Stratonovich prescription in the definition of $dH$ is the unique physical choice. Since the internal energy changes by the heat dissipated to and absorbed from the surrounding heat bath through the interaction of the bead with it, we identify \cite{Seki97,Kim03}
\begin{equation}
dQ(X_t,Y_t) \equiv -dH(X_t, Y_t)=-(f^\prime+\hat{\Gamma} \cdot \xi) \circ dX.
\label{defheat}
\end{equation}
This indicates how much heat is dissipated to (absorbed from) the surrounding water heat bath by the bead located at $(X_t,Y_t)$ at time $t$ during the time interval $dt$ of a stochastic process. Using Eq.(\ref{Ito}), we derive
\begin{equation}
\frac{dQ}{dt} = H_d + v_i \cdot \Gamma_{ij} \cdot \xi_j, \label{dQ}
\label{stochasticheat}
\end{equation}
where $H_d \equiv -\mb{v} \cdot \mb{f}^\prime - \frac{1}{2} Tr[\mbh{\Gamma} \mbh{\Gamma}^T]$, and Eq.(\ref{dQ}) is integrated with the Ito prescription. For the detailed derivation of Eq.(\ref{dQ}), see \cite{Kim-prescription}.
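In brief (full details in \cite{Kim-prescription}): since $H=v^2/2$, the Stratonovich chain rule gives $dH=\mb{v}\circ d\mb{v}$, and converting the Stratonovich product into its Ito form produces the correction
\[
\mb{v}\circ d\mb{v} = \mb{v}\cdot d\mb{v} + \frac{1}{2} Tr[\mbh{\Gamma}\mbh{\Gamma}^T]\, dt ,
\]
which, together with $dQ=-dH$ and Eq.(\ref{Ito}), is the origin of the term $-\frac{1}{2} Tr[\mbh{\Gamma}\mbh{\Gamma}^T]$ in $H_d$. Since the Ito noise term has zero mean, only $H_d$ contributes to the ensemble average of $dQ/dt$.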
The ensemble average of heat dissipated up to time $t$ for a stochastic process, $Q(\{\mb{v}(s)\};\{0 \leq s \leq t \})$, is $ E[Q] = \int_0^t E[H_d(v(s))] ds.$ For a sufficiently small time interval $(t, t+\Delta t)$, the average amount of heat dissipated is $E[H_d]\Delta t$. The heat dissipation rate ($h_d$) at time $t$ can then be defined as
\begin{equation}
h_d(t) \equiv E[H_d( \mb{v}(t))] = \int d\mb{v} \{ -\mb{v} \cdot \mb{f}^\prime - \frac{1}{2} Tr[\mbh{\Gamma} \mbh{\Gamma}^T] \} P(\mb{v},t),
\label{dissipation}
\end{equation}
where $P(\mb{v},t)$ is a probability distribution function satisfying a Fokker-Planck equation corresponding to Eq.(\ref{Ito}),
\begin{equation}
\frac{\partial P}{\partial t} = \frac{1}{2} \partial_i \partial_j \{(\hat{\Gamma} \hat{\Gamma}^T)_{ij} P\} - \partial_i (f^\prime_iP) = \mathcal{L}P.
\label{fokker}
\end{equation}
Eq.(\ref{dissipation}) implies the fluctuation-dissipation relation ($h_d(t=\infty)=0$) in equilibrium, as shown in Sec.\ref{sec-Einstein}: the frictional dissipation from the bead to the heat bath is balanced by the heat the bead absorbs through fluctuations from the heat bath. In addition, the formula Eq.(\ref{dissipation}) is shown in Sec.\ref{sec-nonequil} to remain unchanged in a driven system, where it implies the breakdown of the fluctuation-dissipation relation.
\section{Generalized Einstein Relation and Heat Dissipation Rate} \label{sec-Einstein}
In this section, a generalized Einstein relation is proposed to guarantee the correct equilibrium steady-state distribution of the Fokker-Planck equation Eq.(\ref{fokker}), i.e., the Maxwell-Boltzmann distribution. This relation is shown to make the mesoscopic heat, Eq.(\ref{stochasticheat}), independent of the stochastic integration prescription of the Langevin equation, Eq.(\ref{nonlinear-langevin}), and to make it vanish in the equilibrium steady state. In addition, this relation yields interesting information on the form of the frictional force, including (1) nonlinear frictional forces stemming from hydrodynamic interactions, e.g., the dissipative force in dissipative particle dynamics \cite{Hooger92} and the frictional force in a Zimm model \cite{Doi}, and (2) the transverse force in the vortex dynamics of homogeneous superconductors \cite{Ao99}.
By substituting the Maxwell-Boltzmann distribution function ($P_e(\mb{v}) \equiv C e^{-v^2/2T}$, with $C$ a normalization constant and in units where $k_B=1$) into Eq.(\ref{fokker}), we can find the form of the frictional force $\mb{f}(\mb{v})$.
The r.h.s. of Eq.(\ref{fokker}) is simplified as
\begin{eqnarray*}
\lefteqn{\frac{1}{2} \partial_i \partial_j \{(\hat{\Gamma}\hat{\Gamma}^T)_{ij} P_{e}\}- \partial_{i}( f^\prime_i P_{e} )} \\
&=& \partial_i \Bigl[ \frac{1}{2}\partial_j \{(\hat{\Gamma} \hat{\Gamma}^T)_{ij} P_e \} - f^\prime_i P_e \Bigr]\\
&=& \partial_i \Bigl[ \{\frac{1}{2}\partial_j (\hat{\Gamma} \hat{\Gamma}^T)_{ij} -\frac{1}{2}(\hat{\Gamma} \hat{\Gamma}^T)_{ij}\frac{v_j}{T} - f^\prime_i \} P_e\Bigr].
\end{eqnarray*}
Therefore,
\begin{equation}
f^\prime_i(\mb{v}) = \frac{1}{2} \partial_j(\hat{\Gamma}(\mb{v}) \hat{\Gamma}^T (\mb{v}))_{ij} - \frac{1}{2}(\hat{\Gamma}(\mb{v}) \hat{\Gamma}^T(\mb{v}))_{ij}\frac{v_j}{T} + b_i(\mb{v}),
\label{fric2}
\end{equation}
where $\mb{b}(\mb{v})$ is an arbitrary solution of $\partial_i (b_i(\mb{v})P_e(\mb{v})) = 0$.
Now, by substituting Eq.(\ref{fric2}) to Eq.(\ref{dissipation}), heat dissipation rate $h_d(t)$ becomes
\begin{eqnarray}
h_d(t)&=& \int d\mb{v} \Bigl[ \{ - \frac{1}{2}\partial_j (\hat{\Gamma} \hat{\Gamma}^T)_{ij}+\frac{1}{2}(\hat{\Gamma} \hat{\Gamma}^T)_{ij}\frac{v_j}{T}-b_i\} v_i \nonumber \\
&&- \frac{1}{2}Tr[\hat{\Gamma} \hat{\Gamma}^T] \Bigr] P(t). \label{3Dhd}
\end{eqnarray}
In equilibrium, substituting $P_e(\mb{v})$ into the above equation gives
\begin{eqnarray}
h_d(\infty) &=& \int d\mb{v} \Bigl[ \frac{1}{2}(\hat{\Gamma} \hat{\Gamma}^T)_{ij}\partial_j(v_i P_e) \nonumber \\
&& + \{ \frac{1}{2}(\hat{\Gamma} \hat{\Gamma}^T)_{ij}\frac{v_i v_j}{T}- \frac{1}{2}Tr[\hat{\Gamma} \hat{\Gamma}^T] \} P_e - b_i v_i P_e \Bigr] \nonumber \\
&=&-\langle \mb{b} \cdot \mb{v} \rangle_{e}, \label{gel-Ein}
\end{eqnarray}
where $\langle \cdot \rangle_{e}$ means average over equilibrium distribution function $P_e(\mb{v})$.
The heat dissipation rate is expected to vanish in equilibrium, so we propose a generalized Einstein relation:
\begin{equation}
\mbox{Eq.(\ref{fric2})}\quad \mbox{and} \quad \langle \mb{b} \cdot \mb{v} \rangle_{e}=0,
\label{gen-einstein-rel}
\end{equation}
where $\mb{b}$ is an arbitrary solution of $\partial_i (b_i(\mb{v})P_e(\mb{v})) = 0$. Since all the above steps can be reversed, the generalized Einstein relation also guarantees the Maxwell-Boltzmann distribution for closed systems. Furthermore, the relation makes the heat dissipation, the entropy production, and the average flux $\langle \mb{v} \rangle_e$ all vanish, as shown in Sec.\ref{sec-entropy}.
Under the generalized Einstein relation, the Fokker-Planck equation Eq.(\ref{fokker}) becomes independent of the choice of integration prescription of the Langevin equation, i.e., of the constant $a$ \cite{Arnold00}. The definition of the mesoscopic heat Eq.(\ref{defheat}) derived from the energy balance makes Eq.(\ref{stochasticheat}) $a$-independent, since $f^\prime$ becomes $a$-independent under the generalized Einstein relation (see Eq.(\ref{fric2})). This makes thermodynamic quantities such as entropy production, work, and internal energy $a$-independent. We conclude that \emph{when a Langevin equation with multiplicative noise is given, we can construct mesoscopic thermodynamics independent of the prescription of the stochastic integration in the Langevin equation as long as its steady state follows the Maxwell-Boltzmann distribution.} In Sec.\ref{sec-nonequil}, we consider a non-equilibrium system where the generalized Einstein relation still holds. For such a system, both the Fokker-Planck equation and the mesoscopic heat, Eq.(\ref{stochasticheat}), will be shown to remain independent of the stochastic integration prescription of the Langevin equation. We thus reach the more general conclusion that \emph{as long as the generalized Einstein relation holds, one can construct mesoscopic non-equilibrium thermodynamics independent of the prescription of the stochastic integration in the Langevin equation.} The validity of the generalized Einstein relation and of the proposed heat must be tested by measuring thermodynamic quantities in molecular dynamics simulations.
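As an illustration of such a test, the following minimal sketch (in Python, assuming numpy) simulates a hypothetical one-dimensional toy amplitude $\Gamma(v)^2=T\zeta(1+\epsilon|v|)$ — a form and parameter values chosen purely for illustration, not the Oseen tensor of Eq.(\ref{oseen}) — with $f^\prime$ constructed from Eq.(\ref{fric2}) (taking $\mb{b}=0$), and checks that the ensemble average of $H_d$ in Eq.(\ref{dQ}) relaxes to zero, as the fluctuation-dissipation relation requires:
\begin{verbatim}
import numpy as np

# Hypothetical 1-D toy model (illustrative parameters, not the
# Oseen form): Gamma(v)^2 = T*zeta*(1 + eps*|v|), with b = 0.
T, zeta, eps = 1.0, 1.0, 0.5
dt, nsteps, nsamples = 1e-3, 20000, 20000
rng = np.random.default_rng(0)

def A(v):      return T*zeta*(1.0 + eps*np.abs(v))  # Gamma Gamma^T
def dA(v):     return T*zeta*eps*np.sign(v)         # dA/dv
def fprime(v): return 0.5*dA(v) - A(v)*v/(2.0*T)    # Eq. (fric2)
def Hd(v):     return -v*fprime(v) - 0.5*A(v)       # drift of dQ/dt

v = np.zeros(nsamples)            # ensemble started at a delta in v
for n in range(nsteps):           # Ito / Euler-Maruyama integration
    v += fprime(v)*dt + np.sqrt(A(v)*dt)*rng.standard_normal(nsamples)

print(np.mean(Hd(v)))   # -> 0 within sampling error at equilibrium
\end{verbatim}
Any other amplitude $\Gamma(v)$ can be tested in the same way, as long as $f^\prime$ is built from Eq.(\ref{fric2}).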
Note also that, since the Fokker-Planck equation Eq.(\ref{fokker}) and the mesoscopic heat equation Eq.(\ref{stochasticheat}) are expressed in terms of the diffusion coefficient $\mbh{\Gamma}\mbh{\Gamma}^T$, \emph{one can numerically or analytically predict all the thermodynamic quantities by measuring the diffusion coefficient}. For example, polymers in solvent under hydrodynamic interactions (see Eq.(\ref{oseen})) undergo the frictional force $f^\prime$,
\begin{equation}
f_i^\prime= - \frac{\zeta}{2} v_i (1+\frac{3}{8} \frac{\rho R}{\eta} v),\label{f-hydro}
\end{equation}
and the heat is dissipated on average at the rate
\begin{eqnarray}
h_d(t) &=& \int d\mb{v} \Big[ \zeta ( \frac{v^2}{2} - \frac{3T}{2} ) \nonumber \\
&& + \frac{3}{4} \frac{\rho R }{\eta} \zeta v (-T + \frac{v^2}{4}) \Big] P(\mb{v},t). \label{hd-hydro}
\end{eqnarray}
The heat dissipation rate, as expected, vanishes at equilibrium: substituting $P_e$ into Eq.(\ref{hd-hydro}), the first term in $h_d$ vanishes by the equipartition theorem and the second term vanishes since $\langle v^3 \rangle_e = 4T\langle v \rangle_e$ for the Maxwell-Boltzmann distribution. Eq.(\ref{f-hydro}) and (\ref{hd-hydro}) become useful in a driven system, where one can analytically predict the average heat dissipation rate in the steady state once the steady-state distribution is known or, if it is not, at least obtain numerics from Eq.(\ref{dQ}).
The frictional force term $b(\mb{v})$ can be a force transverse to the velocity $\mb{v}$. One concrete example is the vortex dynamics in homogeneous superconductors \cite{Ao99}. This vortex system is a closed system, since the transverse force $b(\mb{v})$ on the vortex is exerted by the magnetic field produced by the superfluid circulation.
If a macromolecule is immersed in a general isotropic frictional medium \cite{Klimon90}, such that the frictional force is expressed as $\mb{f}^\prime(\mb{v}) = - \gamma(v) \mb{v}$ and the fluctuation amplitude as $\mb{\hat{\Gamma}}_{ij}(\mb{v}) = \delta_{ij}\Gamma(v)$, then the frictional force $\mb{f}^\prime$ becomes
\begin{equation}
f^\prime_i(\mb{v}) = ( -\frac{\Gamma(v)^2}{2T} +\frac{1}{2 v} \frac{\partial \Gamma(v)^2}{\partial v} ) v_i,
\end{equation}
where we used the fact that $\mb{b}$ vanishes by the generalized Einstein relation $\langle \mb{b} \cdot \mb{v} \rangle_{e}=0$, since in an isotropic medium $\mb{b}$ could only be parallel to $\mb{v}$.
\section{Entropy Production Rate} \label{sec-entropy}
In a non-equilibrium system, entropy is produced (created) inside the system. For example, consider two heat baths at different temperatures connected to each other. The total entropy change of the two heat baths is $dS = dQ_{1 \rightarrow 2} (1/T_2 - 1/T_1)>0$, where $dQ_{1 \rightarrow 2}$ is the heat transferred from heat bath 1 to heat bath 2 and $T_{1(2)}$ is the temperature of heat bath 1 (2). In equilibrium, the entropy production vanishes. In a macromolecular system immersed in solvent, the macromolecule is not in a quasi-static process while the heat bath is, because the macromolecule has a small number of degrees of freedom: there is no boundary layer in which the fluctuations caused by the interaction with the heat bath disappear. The entropy change of the heat bath, $dS_H$, is $dQ_{M \rightarrow H} / T_H$, where $dQ_{M \rightarrow H}$ is the heat transferred from the macromolecule to the heat bath and $T_H$ is the temperature of the heat bath. However, the entropy change of the macromolecule, $dS_M$, is not $dQ_{H \rightarrow M}/T_M$, where $T_M$ is the temperature of the macromolecule, if the latter can be defined at all. How can one then construct $dS_M$?
We consider the Gibbs entropy, $S_M(t) \equiv - \int d\mb{v}\, P(\mb{v},t) \ln P(\mb{v},t)$ \cite{Schnak76, Jou99}, and set $T \equiv T_H$.
It will be shown in this section that entropy balance is expressed as,
\begin{equation}
\frac{d(S_M(t)+S_H(t))}{dt}=e_{p}(t) \geq 0,
\label{entropy-balance}
\end{equation}
where
\begin{equation}
e_p(t) \equiv \frac{1}{T}\int \Pi_i(t) J_i(t) d\mb{v},
\label{ep}
\end{equation}
and $\Pi$ is a thermodynamic force defined as the sum of the second term of the frictional force expressed in Eq.(\ref{fric2}) and Onsager's thermodynamic force:
\[
\Pi_i(t) \equiv - (1/2T)(\Gamma \Gamma^T)_{ij}(v_j + T \partial_j \ln P(t))
\]
and $\mb{J}(\mb{v},t)$ is the thermodynamic flux corresponding to the thermodynamic force $\Pi$, defined as:
\[
J_i(t) \equiv -(v_i + T \partial_i \ln P(t)) P(t).
\]
$J_i$ is the sum of the velocity of the macromolecule and the diffusion flow in momentum space. Note that, as in macroscopic non-equilibrium thermodynamics \cite{Mazur}, $e_p(t)$ is expressed as a product of a thermodynamic force and its corresponding flux. In the equilibrium steady state the entropy production vanishes: the macromolecule can then be regarded as being in a quasi-static process, with $T_M$ well defined and equal to $T$. Note also that $e_p(t)$ is always non-negative, which expresses the second law of thermodynamics.
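Indeed, the non-negativity follows by combining the definitions of $\Pi$ and $J$:
\[
e_p(t) = \frac{1}{2T^2}\int (\mbh{\Gamma}\mbh{\Gamma}^T)_{ij}\,(v_i+T\partial_i \ln P)(v_j+T\partial_j \ln P)\, P \, d\mb{v} \ \ge \ 0,
\]
since the matrix $\mbh{\Gamma}\mbh{\Gamma}^T$ is positive semi-definite.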
Let us start the derivation of Eq.(\ref{entropy-balance}); $T$ denotes the heat bath temperature. For ease of calculation, we introduce $\Xi \equiv (1/2T)\Gamma \Gamma^T$.
\begin{eqnarray}
\frac{dS_M}{dt} &=& -\frac{d}{dt} \int P \ln P d\mb{v} \nonumber = - \int \frac{\partial P}{\partial t} \ln P d\mb{v} \nonumber \\
&=&-\int \partial_i(-f_i^\prime P + T \partial_j( \Xi)_{ij} P + T \Xi_{ij} \partial_j P ) \nonumber \\
&&\times \ln P d\mb{v} \nonumber \\
&=&\int (-f_i^\prime + T\partial_j( \Xi)_{ij} + T \Xi_{ij} \partial_j \ln P ) \partial_i P d\mb{v} \nonumber \\
&=& \int \Xi_{ij}(v_j + T \partial_j \ln P ) \partial_i P dv - \int b_i \partial_i P d\mb{v} \nonumber \\
&=& \frac{1}{T}\int \Xi_{ij}(v_j + T \partial_j \ln P)(v_i + T \partial_i \ln P) P d\mb{v} \nonumber \\
&&- \frac{1}{T}\int \Xi_{ij}(v_j+ T \partial_j \ln P) v_i P d\mb{v} \nonumber \\
&&-\int b_i \partial_i P d\mb{v} \label{entropy-bal2},
\end{eqnarray}
where the first term is the entropy production rate $e_p(t)$ and the second and third are the entropy change of the heat bath due to the heat dissipation from the macromolecule,
\begin{equation}
\frac{dS_H(t)}{dt} \equiv \frac{1}{T}\int J_i(t) (-\Xi_{ij}v_i) d\mb{v} + \int b_i \partial_i P d\mb{v}.
\label{hd}
\end{equation}
$dS_H(t)/dt$ can be simplified as
\begin{eqnarray*}
T\frac{dS_H(t)}{dt}&= & \int -(f_i^\prime - \partial_j(T \Xi_{ij}) - b_i - T \Xi_{ij}\partial_j \ln P) v_i P d\mb{v} \nonumber \\
&&- T\langle \partial_i b_i \rangle\\
&=& \int (-f_i^\prime v_i - Tr[T \Xi_{ij}])P d\mb{v} + \langle \mb{b}\cdot \mb{v}\rangle- T\langle \partial_i b_i \rangle\\
&=& h_d(t) + \langle v_i b_i -T\partial_i b_i \rangle.
\end{eqnarray*}
We now have an unexpected extra term; this term, however, vanishes: $\mb{b}$ is an arbitrary solution of $\partial_i (b_i P_e) =0$, which is equivalent to $T\partial_i b_i - v_i b_i=0$ (indeed, $\partial_i (b_i P_e)=(\partial_i b_i - v_i b_i/T)P_e$ since $\partial_i P_e = -(v_i/T)P_e$). Therefore,
\begin{equation}
\frac{dS_H(t)}{dt} = \frac{h_d(t)}{T}. \label{dsh}
\end{equation}
This confirms that the heat bath is in a quasi-static process.
\section{Detailed Balance and Potential Conditions}\label{sec-detailed}
A system is said to be \emph{microscopically reversible} \cite{Tolman, Mazur, Kampen} when the microscopic equations of motion governed by a Hamiltonian are invariant under the time reversal operation. From this microscopic reversibility, detailed balance is derived together with the ergodic hypothesis \cite{Mazur, Kampen}. We are interested in a closed system of many classical particles, comprising solvent molecules and macromolecules, whose equations of motion satisfy microscopic reversibility. Therefore, detailed balance is expected to hold in the mesoscopic description of the Langevin equation Eq.(\ref{Ito}).
In Brownian systems without multiplicative noise, detailed balance has been shown to be a \emph{sufficient} condition for an equilibrium steady state \cite{Qian-time, Kim03}. However, being rooted in the microscopic reversibility, detailed balance will be shown in this section to be only a necessary condition for an equilibrium steady state.
Let us now derive the well-known \emph{potential conditions} \cite{Graham71} in discrete time and continuous space; the result extends trivially to continuous time and continuous space. The time increment is denoted by $\epsilon$ and $t_m \equiv m \epsilon$ with $m$ a positive integer. When the system is Markovian, we can introduce a transfer matrix, $\hat{T}(\{C_i \})$, satisfying $|P\rangle_{t_{m+1}} = \hat{T}(\{ C_i \})|P\rangle_{t_{m}}$.
The detailed balance \cite{Kampen} is expressed as,
\begin{eqnarray}
\lefteqn{\langle \mb{v}_{m+1} | \hat{T}(\{C_i \})|\mb{v}_m \rangle \langle \mb{v}_m | P;\{C_i\}\rangle_{e}} \nonumber \\
&=&\langle -\mb{v}_m | \hat{T}(\{\epsilon_i C_i \})|-\mb{v}_{m+1} \rangle \langle -\mb{v}_{m+1} | P;\{\epsilon_i C_i\} \rangle_{e},
\label{ext-detail0}
\end{eqnarray}
where $\epsilon_i$ is $1$ for a constant coefficient $C_i$ that is symmetric under the time reversal operation, $-1$ if it is anti-symmetric, and another value, determined from the microscopic origin, if it is neither symmetric nor anti-symmetric \cite{Kim03, Kampen}. $\mb{v}_i$ denotes $\mb{v}$ at time $t_i$ along a process generated by the transfer matrix $\hat{T}(\{ C_i \})$. By integrating out $\mb{v}_m$ in Eq.(\ref{ext-detail0}), we get $\langle \mb{v} | P\rangle_e = \langle -\mb{v} | \tilde{P}\rangle_e$, where $|P\rangle_e \equiv | P;\{C_i\} \rangle_e$, $|\tilde{P}\rangle_e \equiv | P;\{\epsilon_i C_i\} \rangle_e$. Therefore, Eq.(\ref{ext-detail0}) becomes
\begin{eqnarray}
\lefteqn{\langle \mb{v}_{m+1} | \hat{T}(\{C_i \})|\mb{v}_m \rangle \langle \mb{v}_m |P\rangle_e } \nonumber \\
&=& \langle -\mb{v}_m | \hat{T}(\{\epsilon_i C_i \})|-\mb{v}_{m+1} \rangle \langle \mb{v}_{m+1} |{P}\rangle_e.
\label{ext-detail}
\end{eqnarray}
The transfer matrix is related to the linear operator $\hat{\mathcal{L}}$ of Fokker-Planck equation Eq.(\ref{fokker}) as $\hat{T}= I + \epsilon \hat{\mathcal{L}}$. Now, Eq.(\ref{ext-detail}) is re-expressed as
\begin{equation}
\langle v|\hat{\mathcal{L}}^\dagger |v^\prime \rangle \langle v|P \rangle_e = \langle -v|\hat{\widetilde{\mathcal{L}}}|-v^\prime \rangle \langle v^\prime |P\rangle_e,
\label{d2}
\end{equation}
where $\hat{T}(\{\epsilon_i C_i \})\equiv I+\epsilon \hat{\widetilde{\mathcal{L}}}$ is used.
Eq.(\ref{d2}) becomes
\begin{eqnarray}
\mathcal{L}^\dagger_v \delta(v-v^\prime) P_e(v^\prime) = \frac{1}{P_e(v)} \widetilde{\mathcal{L}}_{-v} \delta(v-v^\prime) {P}_e(v^\prime)^2 \nonumber \\
\quad = \frac{1}{P_e(v)} \widetilde{\mathcal{L}}_{-v} {P}_e(v) \delta(v-v^\prime) {P}_e(v^\prime).
\end{eqnarray}
We find symmetry in the operator $\hat{\mathcal{L}}$,
\begin{equation}
\mathcal{L}^\dagger_v = \frac{1}{{P}_e(v)}\tilde{\mathcal{L}}_{-v} {P}_e(v).
\label{rev1}
\end{equation}
From Eq.(\ref{fokker}), $\mathcal{L}^\dagger = \frac{1}{2} A_{ij} \partial_i \partial_j + f_i^\prime \partial_i$, where $\hat{A} \equiv \hat{\Gamma}\hat{\Gamma}^T$. The r.h.s. of Eq.(\ref{rev1}) is expressed as
\begin{eqnarray*}
\lefteqn{\frac{1}{{P}_e(v)}\tilde{\mathcal{L}}_{-v} {P}_e(v)} \\
&=&\frac{1}{P_e}\{ \frac{1}{2} \partial_i \partial_j \tilde{A}_{ij}(-v) P_e(v) - \partial_i \tilde{f}_i^\prime (-v) P_e(v) \} \\
&=& \frac{1}{2} \tilde{A}_{ij}(-v)\partial_i \partial_j + \frac{1}{P_e}(\partial_i \tilde{A}_{ij}(-v) P_e) \partial_j + \tilde{f}_i^\prime(-v) \partial_i \\
&&+ \frac{1}{P_e}(\frac{1}{2} \partial_i \partial_j \tilde{A}_{ij}(-v)P_e + \partial_i \tilde{f}^\prime_i (-v) P_e),
\end{eqnarray*}
where $\tilde{A}(v) \equiv A(v;\{\epsilon_i C_i\})$ and $\tilde{f}^\prime (v) \equiv f^\prime (v; \{\epsilon_i C_i \})$.
Finally, by matching term by term, we derive a set of conditions well known as the potential conditions \cite{Graham71},
\begin{eqnarray}
\partial_i \ln P_e(\mb{v}) &=&(\hat{\Gamma} \hat{\Gamma}^T)^{-1}_{ij} [ -f_j^\prime(-\mb{v};\{\epsilon_i C_i\}) + f_j^\prime(\mb{v}) \nonumber \\
&& - \partial_k (\hat{\Gamma} \hat{\Gamma}^T)_{kj}] \label{rev3}
\end{eqnarray}
\begin{eqnarray}
\lefteqn{\mbh{\Gamma}(\mb{v};\{C_i\})\mbh{\Gamma}^T(\mb{v};\{ C_i \})} \nonumber \\
&& \quad \quad \quad \quad \quad = \mbh{\Gamma}(-\mb{v};\{\epsilon_i C_i\})\mbh{\Gamma}^T (-\mb{v}; \{ \epsilon_i C_i \}). \label{rev2}
\end{eqnarray}
The detailed balance Eq.(\ref{ext-detail}), the symmetry relation in operator $\mathcal{L}$ Eq.(\ref{rev1}), and the potential conditions Eq.(\ref{rev3}) and (\ref{rev2}) are all equivalent in Markovian systems with $P_e(\mb{v};\{C_i\}) = P_e(-\mb{v};\{\epsilon_i C_i\})$ \cite{Ito78, Graham71}.
By substituting Eq.(\ref{fric2}) into Eq.(\ref{rev3}) and (\ref{rev2}), the potential conditions become
\begin{equation}
\partial_i \ln P_e(\mb{v}) =-\frac{v_i}{T} + (\hat{\Gamma} \hat{\Gamma}^T)^{-1}_{ij} \{ -b_j(-\mb{v};\{\epsilon_i C_i\}) + b_j(\mb{v})\} \label{r1}
\end{equation}
\begin{equation}
\mbh{\Gamma}(\mb{v};\{C_i\})\mbh{\Gamma}^T(\mb{v};\{ C_i \}) = \mbh{\Gamma}(-\mb{v};\{\epsilon_i C_i\})\mbh{\Gamma}^T (-\mb{v}; \{ \epsilon_i C_i \}). \label{r2}
\end{equation}
We can read off the behavior of each term of $\mb{f}^\prime$ under the time reversal operation from Eq.(\ref{fric2}), (\ref{r1}), and (\ref{r2}): $\mb{b}(\mb{v})$ is symmetric and the remaining terms in Eq.(\ref{fric2}) are anti-symmetric. If $\mb{b}(\mb{v})$ is odd (even) in velocity, then its coefficient must be anti-symmetric (symmetric) under time reversal; e.g., the transverse force in vortex dynamics is odd in velocity and its coefficient, the magnetic field, is anti-symmetric under the time reversal operation \cite{Ao99}. For the remaining terms in Eq.(\ref{fric2}), however, the coefficients of terms odd (even) in velocity are symmetric (anti-symmetric). We can thus determine the time reversal properties of all the constant coefficients in $\mb{f}^\prime$ from the detailed balance condition; this condition, however, does not impose any further restriction on the form of the frictional force.
\section{Detailed Balance vs. $e_p=0$}\label{sec-detailed-ep}
In the regime where the generalized Einstein relation is valid, if the entropy production vanishes then the equilibrium state is reached, since $e_p = 0$ is equivalent to $ v_i + T \partial_i \ln P(t)=0$ by Eq.(\ref{ep}), and this equation guarantees that $P(t)$ is the Maxwell-Boltzmann distribution $P_e$. Therefore, $e_p=0$ is equivalent to equilibrium \cite{Kim03}. Does detailed balance guarantee that the system reaches equilibrium? No. Detailed balance is only a necessary condition for equilibrium, by Eq.(\ref{r1}) and (\ref{r2}), since the time reversal property of the transverse force $\mb{b}(\mb{v})$ is not known a priori. Only when the transverse frictional force is symmetric under the time reversal operation does detailed balance guarantee equilibrium. In summary, the relation among equilibrium, zero entropy production, and detailed balance is symbolically expressed as
\begin{equation}
\mbox{Equilibrium} ~\equiv ~[e_p=0]~ \subset ~\mbox{Detailed ~Balance}.
\end{equation}
\section{A Driven System: Non-equilibrium Steady State} \label{sec-nonequil}
In the previous sections, we constructed mesoscopic thermodynamics for a closed system, which has an equilibrium steady state. In this section, we extend all the previous analysis to a driven system, which has a non-equilibrium steady state. We consider macromolecules under hydrodynamic interaction that are subject to a feedback control by an external agent. As before, the hydrodynamic interaction is modeled by an Oseen tensor. The feedback control is modeled by a non-conservative force, $\mb{g}(\mb{x})$ \cite{Kim-vdep}. We assume that the driven system is near equilibrium in the sense that the generalized Einstein relation Eq.(\ref{gen-einstein-rel}) still holds. The non-conservative force $\mb{g}(\mb{x})$ is then added to the Langevin equation of the closed system, Eq.(\ref{nonlinear-langevin}):
\begin{equation}
\frac{d\mb{v}}{dt} = \mb{g}(\mb{x})+\mathbf{f}(\mb{v}) + \mbh{\Gamma}(\mb{v}) \cdot \mb{\xi}(t).
\label{langevin-noneuqil}
\end{equation}
The corresponding Langevin equation in Ito-prescribed form becomes
\[
\frac{d\mb{v}}{dt} = \mb{g}(\mb{x})+\mathbf{f^\prime}(\mb{v}) + \mbh{\Gamma}(\mb{v}) \cdot \mb{\xi}(t).
\]
Energy balance is expressed as $dH=dW-dQ$, where $dH$ is the change in internal energy, equal to the work done by all external forces, i.e., $dH = [ \mb{g} + \mb{f^\prime} + \mbh{\Gamma}(\mb{v}) \cdot \mb{\xi} ]\circ d\mb{X}$, and $dW \equiv \mb{g}(\mb{X}) \circ d\mb{X}$ is the work done by the control force $\mb{g}$. The heat dissipation $dQ$ from the macromolecule to the surrounding heat bath takes the same form as in a closed system: $dQ = -(f^\prime+\hat{\Gamma} \cdot \xi) \circ dX$.
The Fokker-Planck equation becomes
\[
\frac{\partial P}{\partial t} = \frac{1}{2} \partial_i \partial_j \{(\hat{\Gamma} \hat{\Gamma}^T)_{ij} P\} - \partial_i \{(g_i + f^\prime_i) P\}.
\]
Eq.(\ref{dQ}), (\ref{dissipation}), (\ref{fric2}), (\ref{entropy-balance}), (\ref{ep}), and (\ref{dsh}) are not altered. The proof is mere book-keeping of \cite{Kim-prescription} and Sec.\ref{sec-entropy}: the entropy balance equation is unchanged. However, the steady state is not in equilibrium and the entropy production becomes positive, which means that the total entropy of the macromolecule and the heat bath increases constantly. In other words, there is a net positive heat dissipation from the macromolecule to the heat bath.
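In particular, in the non-equilibrium steady state $dS_M/dt=0$, so Eq.(\ref{entropy-balance}) and (\ref{dsh}) combine into
\[
h_d = T\, e_p > 0,
\]
i.e., the steady-state heat dissipation rate directly measures the entropy production.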
\section{The Definition of Heat with Stratonovich Prescription}\label{sec-proof}
As discussed in Sec.\ref{model}, we have used the Stratonovich prescription in the definition of $dQ$: $dQ \equiv -(\mb{f}^\prime + \mbh{\Gamma } \cdot \mb{\xi}) \circ d\mb{X}$. It will be shown that if any other prescription is used in $dQ$, the diffusion coefficient matrix, $\mbh{\Gamma}\mbh{\Gamma}^T$, becomes traceless on average, which is unphysical for diffusive systems.
Let $dQ \equiv -(\mb{f}^\prime + \mbh{\Gamma } \cdot \mb{\xi}) \bullet d\mb{X}$, where $\bullet$ indicates that the stochastic integration is not done in the Stratonovich way. Eq.(\ref{dQ}) is then changed to the Ito-prescribed stochastic equation with multiplicative noise,
\[
\frac{dQ}{dt} = H_d + v_i \cdot \Gamma_{ij} \cdot \xi_j
\]
where $H_d \equiv -\mb{v} \cdot \mb{f}^\prime - d\, Tr[\mbh{\Gamma} \mbh{\Gamma}^T]$. The case $d=0$ corresponds to the Ito prescription in the definition of $dQ$, and $d \neq 1/2$ since we assume that the stochastic integration in the definition of $dQ$ is not done in the Stratonovich way. $\mb{f}^\prime$ is given by Eq.(\ref{fric2}). Therefore, the average heat dissipation rate is changed to
\begin{eqnarray}
h_d(t) &=&\int d\mb{v} \{ -\mb{v} \cdot \mb{f}^\prime - d Tr[\mbh{\Gamma} \mbh{\Gamma}^T] \} P(\mb{v},t) \label{hd-1} \\
&=& \int d\mb{v} \Bigl[ \{ - \frac{1}{2}\partial_j (\hat{\Gamma} \hat{\Gamma}^T)_{ij}+\frac{1}{2}(\hat{\Gamma} \hat{\Gamma}^T)_{ij}\frac{v_j}{T}-b_i\} v_i \nonumber \\
&&- d Tr[\hat{\Gamma} \hat{\Gamma}^T] \Bigr] P(t).
\label{hd-general}
\end{eqnarray}
The average heat dissipation in equilibrium steady state must vanish:
\begin{equation}
h_d(\infty) = \langle \mb{b}\cdot \mb{v}\rangle_e + T(2d-1) \langle Tr[\Xi] \rangle_e=0
\label{hd-general2}
\end{equation}
The generalized Einstein relation is changed to
\begin{equation}
\mbox{Eq.(\ref{fric2})} \quad \mbox{and} \quad \mbox{Eq.(\ref{hd-general2})}.
\end{equation}
Eq.(\ref{entropy-bal2}) still holds without any change. Substitution of $P_e$ into Eq.(\ref{entropy-bal2}) leads to
\[
\langle \mb{b}\cdot \mb{v} \rangle _e = 0.
\]
Therefore, the generalized Einstein relation can be redefined to
\begin{equation}
\mbox{Eq.(\ref{fric2})} \quad \mbox{and} \quad \langle \mb{b}\cdot \mb{v} \rangle _e = \langle Tr[\Xi] \rangle_e=0.
\end{equation}
The last equality in the above equation means that the diffusion coefficient of the Fokker-Planck equation, Eq.(\ref{fokker}), becomes traceless on average in the equilibrium steady state. By Eq.(\ref{hd-1}), both the frictional dissipation and the heat absorption by fluctuations would then vanish separately in the equilibrium steady state, since $\langle \mb{v}\cdot \mb{f}^\prime \rangle_e =0$! This is unphysical in dissipative systems. Therefore, the Stratonovich prescription is the unique physical choice.
\section{Conclusions and Remarks}
In this manuscript, we have provided a quantitative mesoscopic non-equilibrium thermodynamics of Brownian particles under both multiplicative noise and feedback control by an external agent. The dynamics of the Brownian particles is described by a Langevin equation with multiplicative noise and a feedback control by a non-conservative force field. Due to the multiplicative noise, there is an ambiguity in the stochastic integration prescription of the Langevin equation. However, such ambiguity can be removed by proposing a generalized Einstein relation that guarantees an equilibrium steady state in a closed system: the corresponding Fokker-Planck equation has no ambiguity, and the statistical properties of the Langevin equation are then described unambiguously. How, then, does one construct thermodynamics free of such prescription ambiguity? Once the heat is defined, work and entropy production are well defined from energy balance and entropy balance. Thus, we focused on how to define the heat dissipated from the particles to their surroundings. The heat is the work done by the contact forces between the particles and the surrounding solvent molecules along the position-space trajectories of the particles. There are two ambiguities in the definition of the heat. First, the contact force between the particles and their surroundings depends on the stochastic integration prescription involved in the Langevin equation; this ambiguity is removed by the generalized Einstein relation. Second, the stochastic integration along the trajectories of the particles involved in the calculation of the work done by the contact forces needs to be prescribed; the Stratonovich prescription is shown to be the unique physical choice.
We remark that since both the Fokker-Planck equation Eq.(\ref{fokker}) and the mesoscopic heat equation Eq.(\ref{stochasticheat}) are expressed in terms of the diffusion coefficient $\mbh{\Gamma}\mbh{\Gamma}^T$ and are independent of the integration prescription of the Langevin equation, \emph{one can numerically or analytically predict all the thermodynamic quantities by measuring the diffusion coefficient without any mathematical ambiguity}.
Finally, we comment on the link to fluctuation theorems and the Jarzynski equality, which have been studied for Brownian systems with linear friction, where the thermal noise is not multiplicative. The thermodynamics proposed in this manuscript opens the possibility of extending the applicability of the fluctuation theorems and the Jarzynski equality to Brownian systems with multiplicative noise \cite{Bochkov81, Rubi87, Crooks99, Crooks00, Hummer01, Seifert05, Lebo, Kurch98, Jarzynski97, Jarzynski97-pre, Jarzynski00, Kim06}.
\begin{acknowledgments}
We thank M. den Nijs and Suk-jin Yoon for useful discussions and comments. This research is supported by the NSF under grant DMR-0341341.
\end{acknowledgments}
|
1,116,691,498,386 | arxiv | \section*{Acknowledgments}
MBGD acknowledges F. Halzen for useful discussions.
This work was partially financed by CNPq and by Programa de Apoio a N\'ucleos de Excel\^encia (PRONEX), BRAZIL.
|
1,116,691,498,387 | arxiv | \section{Introduction}
For a measure $\mu$ on $\mathbb{R}\times(a,b)$ such that $\mu=\mu_t\otimes dt$ with $\mu_t=\sum_i \phi_i\delta_{X_i}$ for a.e. $t\in(a,b)$ for some (pairwise distinct) $X_i\in \mathbb{R}$, we consider the branched transportation type functional (see Section \ref{secnot} for a more precise definition)
\begin{equation}\label{functional}
\mathcal{E}(\mu):= \int_{a}^b \sharp\{\phi_i\neq 0\} +\sum_i \phi_i |\dotX_i|^2 dt,
\end{equation}
where $\dotX_i$ denotes the time derivative of $X_i(t)$. This is indeed a branched transportation problem since by the Benamou-Brenier formula \cite{AGS, Sant,villani}, the second term is exactly the length of the curve $t\to \mu_t$ measured in the Wasserstein metric, while the first term forces concentration and thus branched structures.\\
Our main focus is the irrigation problem of the Lebesgue measure from a Dirac mass. By this we mean that we want to minimize the cost \eqref{functional} under the condition that the starting measure $\mu_a$ is a Dirac mass and that $\mu_t$ converges (weakly) to the Lebesgue measure as $t$ goes to $b$. This implies that the measure $\mu_t$ must infinitely refine as $t\to b$.
We will also be interested in the case when both the initial and final measures are the Lebesgue measure. \\
More generally, for two given measures $\mu_{\pm}$ of equal mass, we study the following Dirichlet problem
\begin{equation}\label{minmainintro}
\min_{\mu}\left\{ \mathcal{E}(\mu) \ : \ \mu_a=\mu_- \ , \ \mu_b=\mu_+ \right\},
\end{equation}
where the boundary condition is understood in the sense that $\mu_t\rightharpoonup \mu_-$ as $t\to a$ and $\mu_t\rightharpoonup \mu_+$ as $t\to b$.\\
Our main result is a full characterization of the minimizers of \eqref{minmainintro} in the case $\mu_-$ is a Dirac mass, $\mu_+$ is the Lebesgue measure restricted to an interval of length $\mu_-(\mathbb{R})$ and $b-a$ is large enough. In order to fix notation, since the
problem is invariant by translations, we may assume that $a=0$, $b=T$, $\mu_-= \phi \delta_{X}$ and $\mu_+=dx{\LL}[-\phi/2,\phi/2]$ for some $T, \phi>0$ and $X\in\mathbb{R}$. As will be apparent below, up to rescalings and shears, we may further normalize to $X=0$ and $\phi=1$, so that
\[\mu_0=\delta_0 \qquad \textrm{and} \qquad \mu_T= dx{\LL}[-1/2,1/2].\]
In this case, as will become clearer in the proof, the threshold value $T=1/4$ naturally appears. In order to state our main theorem, let us define for $t\in[0,1/4]$, the dyadically branching measure $\mu^*_t$ (see Figure \ref{figmin}). For $k\ge 0$, let $t_k:=\frac{1}{4}\left(1-\left(\frac{1}{2}\right)^{3k/2}\right)$ be the branching times.
We define recursively $\mu^*_t$ in the intervals $[t_{k-1},t_k]$.
Let $X_1^0=0$ and $\mu^*_0=\delta_0$. Assume that $\mu^*_t$ is defined in $[0,t_{k-1}]$ and that $\mu^*_{t_{k-1}}=2^{-(k-1)}\sum_{i=1}^{2^{k-1}} \delta_{X_i^{k-1}} $.
For $t\in[t_{k-1},t_{k}]$ and $1\le i\le 2^k$, we now define $X_i^k(t)$. For this, let us divide $[-1/2,1/2]$ into $2^k$ intervals of equal size and let $\BXi^k$ be the barycenter of the $i$-th such interval, i.e.
$\BXi^k:=\frac{-1}{2}+\frac{i-1}{2^{k}}+\frac{1}{2^{k+1}}$. We then let
\[X_i^k(t):=\frac{t-t_{k-1}}{\frac{1}{4}-t_{k-1}}\left(\BXi^k-{X}_{ \lceil i/2\rceil}^{k-1}\right)+{X}_{ \lceil i/2\rceil}^{k-1},\]
and $\mu^*_t:= 2^{-k}\sum_{i=1}^{2^k}\delta_{X_i^k(t)}$. Notice that with this definition, for every $t\in[0,1/4)$, $k\in \mathbb{N}$ and $1\le i\le 2^k$, the mass at $X_i^k(t)$ is irrigating the interval
$(\BXi^k-2^{-(k+1)},\BXi^k+2^{-(k+1)})$ and $X_i^k$ is moving at constant speed towards $\BXi^k$ (and would reach it at time $T=1/4$ if there were no further branching points). Our main theorem is the following
\begin{figure}
\begin{center}
\includegraphics[height=6.5cm]{minimizeraxiscrop.eps}
\caption{The optimal configuration $\mu^*$}
\label{figmin}
\end{center}
\end{figure}
\begin{theorem}\label{main}
For $T=1/4$, $\mu_0=\delta_0$ and $\mu_T=dx{\LL}[-1/2,1/2]$, $\mu^*$ is the unique minimizer of \eqref{minmainintro}. Moreover, if $T\ge 1/4$, the unique minimizer of \eqref{minmainintro} is given by $\mu_t=\delta_0$ for $t\in[0,T-1/4]$ and
$\mu_t=\mu^*_{t-(T-1/4)}$ for $t\in(T-1/4,T)$, with
\[\mathcal{E}(\mu)=\frac{1}{2-\sqrt{2}} +T.\]
\end{theorem}
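As a quick numerical sanity check of this value, one can follow the recursive definition of $\mu^*$ and sum the two terms of \eqref{functional} level by level; truncating the construction at level $K$ leaves a geometrically small error. A minimal sketch (in Python, assuming numpy):
\begin{verbatim}
import numpy as np

K = 20                                   # truncation level
tk = lambda k: 0.25*(1.0 - 0.5**(1.5*k)) # branching times t_k
X = np.array([0.0])                      # branch positions at t_{k-1}
E_time = E_kin = 0.0
for k in range(1, K + 1):
    t0, t1 = tk(k - 1), tk(k)
    i = np.arange(1, 2**k + 1)
    B = -0.5 + (i - 1)/2**k + 1/2**(k + 1)  # barycenters, level k
    parent = np.repeat(X, 2)                # each branch splits in two
    speed = (B - parent)/(0.25 - t0)        # constant speed towards B
    X = parent + speed*(t1 - t0)            # positions at next split
    E_time += 2**k*(t1 - t0)                # number-of-branches term
    E_kin += np.sum(2.0**-k*speed**2)*(t1 - t0)  # phi |X'|^2 term
print(E_time + E_kin, 1/(2 - np.sqrt(2)) + 0.25)
\end{verbatim}
The two printed numbers agree up to the truncation error.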
As a consequence, we obtain the following corollary (see Lemma \ref{lemrescale} for the exact definitions of the rescaling and shear)
\begin{corollary}\label{cormain}
For $X\in \mathbb{R}$, $T,\phi>0$ with $T\phi^{-3/2}\ge 1/4$, the unique minimizer of \eqref{minmainintro} with $\mu_0=\phi \delta_X$ and $\mu_T= dx{\LL} [-\phi/2,\phi/2]$ is given by a suitably sheared and rescaled version of the optimal measure for $X=0$, $\phi=1$ and $\widehat{T}=T\phi^{-3/2}$. Moreover
\[
\mathcal{E}(\mu)=\phi^{3/2} \frac{1}{2-\sqrt{2}} +T+\frac{\phi}{T} |X|^2.
\]
\end{corollary}
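(The value of $\mathcal{E}(\mu)$ follows from Theorem \ref{main} via the scaling $E(T,\phi)=\phi^{3/2}E(T\phi^{-3/2})$ and the shear identity \eqref{equationEX}, since $\phi^{3/2}\big(\frac{1}{2-\sqrt{2}}+T\phi^{-3/2}\big)=\frac{\phi^{3/2}}{2-\sqrt{2}}+T$.)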
As an application of Corollary \ref{cormain}, we will further derive a full characterization of symmetric (with respect to $t=0$) minimizers in the case $a=-b=-T$, $\mu_{\pm}=dx{\LL} [-1/2,1/2]$ and $T\ge 1/4$ (see Theorem \ref{structureTlarge}).\\
The proof of Theorem \ref{main} is based on the tree structure of the minimizers of \eqref{minmainintro} (see Proposition \ref{reg}) which together with invariance by scaling and shearing (Lemma \ref{lemrescale}) leads to a recursive characterization of the minimizers. Indeed, if we let
\[
E(T):=\min\{ \mathcal{E}(\mu) \ : \ \mu_0=\delta_0, \ \mu_T= dx{\LL} [-1/2,1/2]\},
\]
then (see \eqref{recursive})
\begin{equation}\label{eq:introrecurE}
E(T)=\min_{\sum_{i=1}^N \phi_i=1} \sum_{i=1}^N \phi_i^{3/2} E(T\phi_i^{-3/2}) +\frac{1}{12T} \left(1-\sum_{i=1}^N \phi_i^3\right).
\end{equation}
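Let us briefly indicate where the last term comes from. The branch of mass $\phi_i$ irrigates the $i$-th interval of the partition of $[-1/2,1/2]$ into consecutive intervals of lengths $\phi_i$; denoting by $B_i$ its barycenter, the positional cost is $\frac{1}{T}\sum_{i=1}^N \phi_i|B_i|^2$ by \eqref{equationEX}, and the parallel axis identity
\[
\frac{1}{12}=\int_{-1/2}^{1/2} x^2\, dx=\sum_{i=1}^N\left(\frac{\phi_i^3}{12}+\phi_i|B_i|^2\right)
\]
gives $\frac{1}{T}\sum_{i=1}^N\phi_i|B_i|^2=\frac{1}{12T}\big(1-\sum_{i=1}^N\phi_i^3\big)$.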
This formula reflects the fact that if at level $T$, the minimizer branches into $N$ pieces of respective masses $\phi_i$ then up to rescaling and shearing, each of the subtrees
solves the exact same problem as the original one (connecting a Dirac mass to the Lebesgue measure). In particular, if we define $T_*$ to be the first branching time, meaning that if $T>T_*$ then the minimizer of $E(T)$ cannot branch for a time $T-T_*$, we may use that $T_* \phi_i^{-3/2}\ge T_*$ to obtain
\[
E(T_* \phi_i^{-3/2})= E(T_*)+ T_*(\phi_i^{-3/2}-1)
\]
and then rewrite \eqref{eq:introrecurE} in the purely analytical form (see \eqref{alternativeq})
\begin{equation}\label{eq:introalternativeq}
\frac{E(T_*)-T_*}{T_*}=\min_{\sum_{i=1}^N\phi_i=1} \frac{(N-1)+\frac{1}{12T_*^2} \left(1-\sum_{i=1}^N \phi_i^{3}\right)}{1-\sum_{i=1}^N \phi_i^{3/2}}.
\end{equation}
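For the reader's convenience, let us spell out the algebra: inserting the previous identity into \eqref{eq:introrecurE} at $T=T_*$ gives
\[
E(T_*)=E(T_*)\sum_{i=1}^N\phi_i^{3/2}+T_*\Big(N-\sum_{i=1}^N\phi_i^{3/2}\Big)+\frac{1}{12T_*}\Big(1-\sum_{i=1}^N\phi_i^{3}\Big),
\]
that is, $\big(E(T_*)-T_*\big)\big(1-\sum_i\phi_i^{3/2}\big)=T_*(N-1)+\frac{1}{12T_*}\big(1-\sum_i\phi_i^{3}\big)$, and \eqref{eq:introalternativeq} follows after dividing by $T_*\big(1-\sum_i\phi_i^{3/2}\big)$ and minimizing over the $\phi_i$.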
The idea of the proof of Theorem \ref{main} is to use \eqref{eq:introalternativeq} to prove that $T_*=1/4$ and that the corresponding minimizer has exactly two branches of mass $1/2$ at time zero. Once this is proven, the conclusion is readily
reached thanks to the recursive nature of the problem.\\
In order to prove that $T_*=1/4$ and $N=2$, we introduce for fixed $N\ge 2$ the quantity
\[
\alpha_N:=\inf_{\phi_i\ge 0}\left\{ \frac{1-\sum_{i=1}^N \phi_i^3}{1-\sum_{i=1}^N \phi_i^{3/2}} \ : \ \sum_{i=1}^N \phi_i=1\right\}.
\]
By \eqref{eq:introalternativeq}, if at time $T_*$ the minimizer has $N$ branches then since for $\sum_{i=1}^N \phi_i=1$ there holds $1-\sum_{i=1}^N \phi_i^{3/2}\le 1-N^{-1/2}$, we have the lower bound
\[
\frac{E(T_*)-T_*}{T_*}\ge \frac{N-1}{1-N^{-1/2}} +\frac{\alpha_N}{12 T_*^2}.
\]
In Proposition \ref{estimTstar}, we use this together with an upper bound on $E(T_*)$ given by a dyadically branching construction to obtain both that $T_*\le 1/4$ and that a lower bound on $\alpha_N$ gives a corresponding upper bound on $N$ (see \eqref{criterionalphaN}).
These lower bounds on $\alpha_N$ are obtained in Lemma \ref{lem:Nge3} using a computer assisted proof whose details are given in Appendix \ref{appendix}. This excludes that $N\ge 3$. The case $N=2$ is finally studied in Proposition \ref{prop:Neq2} where we prove that $T_*=1/4$ and that the mass splits in half. \\
The variational problem \eqref{minmainintro} may be seen as a two dimensional (one for time and one for space) analog of the three dimensional (one for time and two for space) problem derived in \cite{CoGoOtSe}
as a reduced model for the description of branching in type-I superconductors in the regime of very small applied external field. We refer the reader to \cite{CoGoOtSe} for more precise physical motivations and references. In this regime, the natural Dirichlet conditions appearing are $\mu_{\pm}=dx{\LL}[-1/2,1/2]$. Let us point out that in
the three dimensional model, the term $\sharp\{\phi_i\neq 0\}$ is replaced by $\sum_i \phi_i^{1/2}$. This is in line with the interpretation of the first term in \eqref{functional} as an interfacial
term penalizing the creation of many flux tubes. That is, if we are in $(1+d)-$dimensions, it is proportional to the perimeter of a union of $d-$dimensional balls of volume $\phi_i$ (which is $2\sqrt{\pi}\sum_i \phi_i^{1/2}$ if $d=2$ and $2\sharp\{\phi_i\neq 0\}$ if $d=1$).
As already alluded to, the second term in \eqref{functional} may be interpreted as the Wasserstein transportation cost \cite{AGS, Sant,villani} of moving such balls.\\
In many models describing pattern formation in material sciences, branching patterns similar to the one observed here are expected. However, it is usually very hard to go beyond scaling laws \cite{KohnMuller94,Zwicknagl2014,ChoKoOtmicro,CoOtSer}. In some cases, reduced models have been derived \cite{ViehOtt,CoGoOtSe,CoDiZw} but so far the best results concerning the minimizers are local energy bounds leading
to the proof of asymptotic self-similarity \cite{Conti2006,Viehmanndiss}. Our result is thus the first complete characterization of a minimizer in this context. Of course, this was possible thanks to the simplicity of our model (one dimensional trees in a two dimensional ambient space). We should however point out
that our result is not fully satisfactory since we are essentially able to study only the situation of an isolated microstructure (due to the constraint $T\ge 1/4$) whereas one is typically interested in the
case $T\ll1$ where many microstructures are present and where the lateral boundary conditions have limited effect, i.e., one tries to capture an extensive behavior of the system. As detailed in the final section \ref{sec:appli},
we believe that even in the regime $T\ll 1$, every microstructure is of the type described in Corollary \ref{cormain}.\\
As pointed out in \cite{CoGoOtSe}, the functional \eqref{functional} bears many similarities with so-called branched transport (or irrigation) models \cite{Xia,MSM,BeCaMo} (see also \cite{BrBuS,BraSa} for a formulation reminiscent of our model). Also in this class of problems, there
has been a strong interest for the possible fractal behavior of minimizers. Besides results on scaling laws \cite{BraWirth} and fractal regularity \cite{BraSol}, to the best of our knowledge,
the only explicit minimizers exhibiting infinitely many branching points have been obtained in \cite{PaoSteTep} for a Dirac irrigating a Cantor set and in \cite{MaMa} in an infinite dimensional context.
In particular, the optimal irrigation pattern from a Dirac mass to the Lebesgue measure is currently not known for the classical branched transportation model. One important difference between our model and branched transportation
is that in our case, minimality does not imply triple junctions nor conditions on the angles between the branches.\\
The organization of the paper is the following. In Section \ref{secnot}, we recall the definition and the basic properties of the functional $\mathcal{E}$. Then, in Section \ref{irrig} we prove Theorem \ref{main}. In the final Section \ref{sec:appli}, we give
an application of Theorem \ref{main} to the irrigation of the Lebesgue measure by itself and state an open problem. Appendix \ref{appendix} contains the computer assisted computation of $\alpha_N$.\\
\textbf{Notation}
In the paper we will use the following notation. The symbols $\simeq$, $\gtrsim$, $\lesssim$, $\ll$ indicate estimates that hold up to a global constant. For instance, $f\lesssim g$ denotes the existence of a constant $C>0$ such that $f\le Cg$.
We denote by $\mathcal{H}^1$ the $1-$dimensional Hausdorff measure. For a Borel measure $\mu$, we will denote by $supp\, \mu$ its support.
\section*{Acknowledgment}
I warmly thank F. Otto for many useful discussions and constant support as well as A. Lemenant for a careful reading of a preliminary version of the paper. The hospitality of the Max Planck Institute for Mathematics in the Sciences of Leipzig, where part of this
research was carried out is gratefully acknowledged. Part of this research was funded by the program PGMO of the FMJH through the project COCA.
\section{The variational problem and main properties of the functional}\label{secnot}
In this section we first give a rigorous definition of the energy $\mathcal{E}(\mu)$ and then prove that the boundary value problem \eqref{minmainintro} has minimizers which are locally given by finite union of straight segments so that the representation \eqref{functional} makes sense.
\begin{definition}
For $a<b$ we denote by $\calA_{a,b}$ the set of pairs of measures
$\m\ge 0$, $m$ with $m\ll\mu$, satisfying the continuity equation
\begin{equation}\label{conteq0}
\partial_t \m +\partial_x m=0 \qquad \textrm{in } \mathbb{R}\times(a,b)
\end{equation}
and such that $\m=\m_{t}\otimes d{t}$ where, for a.e. $t\in(a,b)$,
$\m_{t}=\sum_i \phi_i \delta_{X_i}$ for some $\phi_i\ge 0$ and $X_i\in \mathbb{R}$.
We denote by $\calA_{a,b}^*:=\{\mu: \exists m, (\mu,m)\in\calA_{a,b}\}$ the
set of admissible $\mu$.
Further, we define $\mathcal{E}:\calA_{a,b}\to[0,\infty]$ by
\begin{equation}\label{Imum}
\mathcal{E}(\mu,m):=
\int_{a}^b \sharp\{supp \, \m_t\} dt +
\int_{\mathbb{R}\times(a,b)}
\left(\frac{dm}{d\m} \right)^2 d\m
\end{equation}
and (with abuse of notation) $\mathcal{E}:\calA_{a,b}^*\to[0,\infty]$ by
\begin{equation}\label{Imu}
\mathcal{E}(\m):=\min \{ \mathcal{E}(\mu,m)\ : \ m\ll\m, \ \partial_t \m + \partial_x m=0\}.
\end{equation}
\end{definition}
Equation (\ref{conteq0}) is understood in the sense of distributions (testing with test functions in $C^\infty_c(\mathbb{R}\times(a,b))$). Contrary to \cite{CoGoOtSe}, we use free boundary conditions instead of periodic ones, but this makes only minor differences.
In the sequel we will only deal with measures $\mu$ of bounded support. In this case,
because of \eqref{conteq0}, $\mu_{t}(\mathbb{R})$ does not depend on $t$. Let us point out that for such measures, the minimum in (\ref{Imu}) is attained thanks to \cite[Th. 8.3.1]{AGS}.
Moreover, the minimizer is unique by strict convexity of $m\to\int_{\mathbb{R}\times(a,b)} \left(\frac{dm}{d\m} \right)^2 d\m$. Let us also notice that by the Benamou-Brenier formula \cite{AGS}, we have
for every measure $\mu$, and every $t, t'\in(a,b)$,
\begin{equation}\label{HolderW2}
W_2^2(\mu_{t}, \m_{t'})\le \mathcal{E}(\mu) |t-t'|,
\end{equation}
where
the $2$-Wasserstein distance between
two measures $\mu$ and $\nu$ of bounded second moment with $\mu(\mathbb{R})=\nu(\mathbb{R})$ is defined by
\[W_2^2(\mu,\nu):=\min\left\{ \int_{\mathbb{R}\times \mathbb{R}} |x-y|^2 \, d\Pi(x,y) \, : \, \Pi_1=\mu, \ \Pi_2=\nu\right\},\]
where the minimum is taken over measures on $\mathbb{R}\times \mathbb{R}$ and $\Pi_1$ and $\Pi_2$ are respectively the first and second marginal of $\Pi$.
In particular for every measure $\mu$ with $\mathcal{E}(\mu)<\infty$, the curve $t\mapsto \mu_{t}$ is H\"older continuous with exponent one half in the space of measures (endowed with the metric $W_2$) and the traces $\mu_{a}$ and $\mu_{b}$
are well defined.
Given two measures $ \m_{\pm}$ on $\mathbb{R}$ with $\ \mu_+(\mathbb{R})= \mu_-(\mathbb{R})$ and bounded support, we are interested in the variational problem
\begin{equation}\label{limitProb}
\inf\left\{ \mathcal{E}(\mu) \ : \ \mu_{a}= \m_{-}, \ \m_b=\mu_+ \right\}.
\end{equation}
Let us first notice that if $L>0$ is such that $supp \, \mu_-\cup supp \, \mu_+\subset [-L/2,L/2]$, then we may restrict the infimum in \eqref{limitProb} to measures satisfying $supp \, \mu_t\subset [-L/2,L/2]$ for a.e. $t\in (a,b)$.
Indeed, if $\mu$ is admissible with $\mu_t=\sum_i \phi_i \delta_{X_i}$ then, letting $\widetilde{X}_i:= \min(L/2, |X_i|)\,\mathrm{sign}(X_i)$ and $\tilde{\mu}_t:=\sum_i \phi_i \delta_{\widetilde{X}_i}$, we get that $\tilde{\mu}$ is admissible and has lower energy than $\mu$ (i.e. the energy decreases by projection onto $[-L/2,L/2]$). From now on we will only consider such measures.\\
As in \cite[Prop 5.2]{CoGoOtSe} (to which we refer for the proof), a simple branching construction shows that any pair of measures with equal flux may be connected with finite cost.
\begin{proposition}\label{branchingmu}
For every pair of measures $ \m_{\pm}$ with $supp \, \mu_{\pm}\subset[-L/2,L/2]$ and $\mu_+(\mathbb{R})=\mu_-(\mathbb{R})=\phi$,
there is $\mu\in \calA^*_{a,b}$ such that letting $b-a=2T$,
$\mu_{a}=\mu_-$, $\mu_b=\mu_+$ and
\begin{equation*}
\mathcal{E}(\mu)\lesssim T+\frac{\phi L^2}{T}.
\end{equation*}
If $\mu_+=\mu_-$, then there is a construction with
\begin{equation*}
\mathcal{E}(\mu)\lesssim T+T^{1/3} \phi^{1/3} L^{2/3}.
\end{equation*}
\end{proposition}
From this, arguing as in \cite[Prop. 5.5]{CoGoOtSe}, we obtain that
\begin{proposition}\label{existmu}
For every pair of measures $ \m_{\pm }$ with bounded support and $ \m_+(\mathbb{R})= \m_-(\mathbb{R})$, the infimum in \eqref{limitProb} is finite and attained.
\end{proposition}
We now give some regularity results for minimizers of \eqref{limitProb}. These can be mostly proven as in \cite{CoGoOtSe} so we state them without proof. Let us first recall the notion of subsystem.
\begin{proposition} [Definition of a subsystem]\label{def:subsystem}
Given a point $(X, t) \in [-L/2,L/2]\times(a,b) $ and $\mu\in\calA^*_{a,b}$ with $\mathcal{E}(\mu)<\infty$, there exists a subsystem $\m'$ of $\m $ emanating from $(X,t)$. By this we mean that there exists $\m'$ such that
\begin{itemize}
\item[(i)] $ \m'\le \m$ i.e. $\m-\m'$ is a positive measure,
\item[(ii)] $\m'_{t}=a\delta_{X}$, where $a=\m_{t}(X)$,
\item[(iii)] if $m$ is such that $\mathcal{E}(\mu)=\mathcal{E}(\mu,m)$, then \begin{equation*}
\partial_t \m'+ \partial_x \left(\frac{dm}{d\m} \m'\right)=0.\end{equation*}
\end{itemize}
In particular, (ii) implies that $(\m_{t} - \m'_{t}) \perp \delta_{X}$ in the sense of the Radon-Nikodym decomposition. We call $\mu^+:=\mu'{\LL}\mathbb{R}\times (t,b)$ the forward subsystem emanating from $(X,t)$ and $\mu^-:=\mu'{\LL}\mathbb{R}\times (a,t)$ the backward subsystem emanating from $X$.
\end{proposition}
\begin{lemma}[No loops] \label{noloop} Let $\m$ be a minimizer for the Dirichlet problem (\ref{limitProb}), $\bar t\in (a,b)$.
Let $X_1$, $X_2$ be two points in the line $\{(x,t):t=\bar{t}\}$. Let $\m_1$ and $\m_2$ be subsystems of $\m$ emanating from $(X_1,\bar t)$, resp. $(X_2,\bar t)$. Let $(X_+,t_+)$
be a point with $ t_+ > \bar{t}$ and $(X_-,t_-)$ a point with $ t_-<\bar{t}$, and such that $\m_1$ and $\m_2$ both have Diracs at both $X_+$ and $X_-$ with nonzero mass. Then $X_1=X_2$.
\end{lemma}
As in \cite{CoGoOtSe}, a consequence of this lemma is that we have a representation of the form
\begin{equation}\label{representationmu}
\m= \sum_i \frac{\phi_i}{\sqrt{1+|\dotX_i|^2}} \, \mathcal{H}^1{\LL}\Gamma_i \end{equation}
where the sum is countable and
$\Gamma_i=\{ (X_i(t),t) : t\in [a_i,b_i]\} $ with $X_i$ absolutely continuous and almost everywhere
non overlapping.\\
Another consequence is that if there are two levels at which $\m$ is a finite sum
of Diracs, then it is the case for all the levels in between. Since $\mathcal{E}(\mu)<\infty$ implies in particular that $\sharp\{\phi_i\neq0\}<\infty$ for a.e. $t\in (a,b)$, this means that $\mu_t$
is in fact a locally finite (in time) sum of Dirac masses and thus, the sum in \eqref{representationmu} is finite away from the initial and final times.
\\
For measures which are concentrated on finitely many curves, we have as in \cite[Lem. 5.9]{CoGoOtSe}, a representation formula for $\mathcal{E}(\mu)$.
\begin{lemma}\label{lemmacurves}
Let $\mu=\sum_{i=1}^N \frac{\varphi_i}{\sqrt{1+|\dotX_i|^2}} \, \mathcal{H}^1{\LL} \Gamma_i \in\calA^*_{a,b}$ with $\Gamma_i=\{ (X_i(t),t) : t\in [a_i,b_i]\}$ for some absolutely continuous curves $X_i$, disjoint up to the endpoints.
Every $\phi_i$ is then constant on $[a_i,b_i]$ and we have conservation of mass. That is,
for $z:=(x,t)$, letting
\begin{align*}\calI^-(z)&:=\{ i\in[1,N] \ : \ t=b_i, \ X_i(b_i)=x\} \\ \calI^+(z)&:=\{ i\in[1,N] \ : \ t=a_i, \ X_i(a_i)=x\}, \end{align*}
there holds
\begin{equation*}
\sum_{i\in\calI^-(z)} \phi_i=\sum_{i\in\calI^+(z)} \phi_i.
\end{equation*}
Moreover, $m=\sum_i \frac{\varphi_i}{\sqrt{1+|\dotX_i|^2}}
\dotX_i \, \mathcal{H}^1{\LL}\Gamma_i$ and
\begin{equation}\label{Iparticul}
\mathcal{E}(\mu)=\sum_i \int_{a_i}^{b_i} 1+ \phi_i |\dotX_i|^2 dt.
\end{equation}
\end{lemma}
In particular, this proves that for minimizers, formula \eqref{Iparticul} holds (where the sum is at most countable). By a slight abuse of notation, for such measures we will denote
\[\mathcal{E}(\mu)=\int_{a}^b \sharp\{\phi_i\neq 0\}+ \sum_i \phi_i |\dotX_i|^2 dt.\]
We gather below some properties of the minimizers.
\begin{proposition}\label{reg}
A minimizer of the Dirichlet problem (\ref{limitProb}) with boundary conditions
${\mu}_{\pm}$ satisfies
\begin{itemize}
\item[(i)] Each $X_i$ is affine.
\item[(ii)] There is monotonicity of the traces in the sense that for every $t\in(a,b)$, if $\mu_t=\sum_i \phi_i \delta_{X_i}$ with $X_i$ ordered (i.e. $X_i\le X_{i+1}$) and if $\mu^{i,+}$ is the forward subsystem emanating from $X_i$, then the traces $\mu_b^{i,+}$ satisfy
$supp \, \mu_b^{i,+}=[x_{i}^+,y_i^+]$ with $y_i^+\le x_{i+1}^+$. The analogous statement holds for the backward subsystems.
\item[(iii)] If $\mu_-=\phi \delta_X$ then $\mu$ has a tree structure.
\item[(iv)] If ${\mu}_{-}= {\mu}_{+} $, then letting $a=-T$ and $b=T$, there exists a minimizer which is symmetric with respect to the $t=0$ plane. For every such minimizer, the number of Dirac masses at time $t$ is minimal for $t=0$.
\end{itemize}
\end{proposition}
\begin{proof}
Item (i) follows from fixing the branching points and minimizing in $X_i$. The other points are simple consequences of Lemma \ref{noloop}.
\end{proof}
The monotonicity property (ii) is analogous to the monotonicity of optimal transport maps in one space dimension \cite{villani}.
\section{Irrigation of the Lebesgue measure by a Dirac mass}\label{irrig}
In this section we consider \eqref{minmainintro} with $a=0$ and $b=T$, $\mu_-=\phi \delta_X$ and $\mu_+=dx {\LL} [-\phi/2,\phi/2]$. We will denote
\[E(T,\phi,X):=\min \{\mathcal{E}(\mu) \ : \ \mu_0=\phi \delta_X , \ \mu_T= dx {\LL} [-\phi/2,\phi/2]\}.\]
For simplicity we let $E(T,\phi):=E(T,\phi,0)$ be the energy required to connect the Lebesgue measure to the centered Dirac mass and $E(T):=E(T,1)$. The following lemma shows that understanding $E(T)$ is enough for understanding $E(T,\phi,X)$.
\begin{lemma}\label{lemrescale}
For every $T,\phi,X$, there holds,
\begin{equation}\label{equationEX}
E(T,\phi,X)=E(T,\phi)+\frac{1}{T} \phi |X|^2.
\end{equation}
Moreover, if $\mu_t=\sum_i \phi_i \delta_{X_i(t)}$ is optimal for $E(T,\phi)$, then letting $\widehat{X}_i(t):= (1-\frac{t}{T})X+X_i(t)$, $\hat{\mu}_t=\sum_i \phi_i \delta_{\widehat{X}_i(t)}$ is optimal for $E(T,\phi,X)$.\\
Furthermore, we have
\begin{equation}\label{rescaleTphi}
E(T,\phi)=\phi^{3/2}E(T\phi^{-3/2}).
\end{equation}
In addition, if $\mu_t=\sum_i \phi_i \delta_{X_i(t)}$ is optimal for $E(T\phi^{-3/2})$ then, letting $\hat{t}:=\phi^{3/2}t$, $\widehat{\phi}_i:= \phi \phi_i$,
and $\widehat{X}_i(\hat{t}):= \phi X_i(t)$, the measure $\hat{\mu}_{\hat{t}}= \sum_i \widehat{\phi}_i \delta_{\widehat{X}_i(\hat{t})}$ is optimal for $E(T,\phi)$.\\
\end{lemma}
\begin{proof}
For $\mu_t=\sum_i \phi_i \delta_{X_i}$ admissible for $E(T,\phi)$, we define $\hat{\mu}_t:=\sum_i \phi_i \delta_{\hat{X}_i}$, where $\hat{X}_i(t):= (1-\frac{t}{T})X+X_i(t)$. Then, $\hat{\mu}_t$ is admissible for $E(T,\phi,X)$ and
\begin{align}\label{Erescale}\mathcal{E}(\hat{\mu})&=\int_0^T \sharp\{\phi_i\neq 0\} +\sum_i \phi_i |\dotX_i-\frac{1}{T}X|^2dt \\
&=\int_0^T \sharp\{\phi_i\neq 0\} +\sum_i \phi_i |\dotX_i|^2 dt +\frac{|X|^2}{T^2}\int_0^T \sum_i \phi_i dt -2\frac{X}{T} \int_0^T \sum_i \phi_i \dotX_i dt \nonumber.
\end{align}
For $\varepsilon>0$, thanks to Lemma \ref{lemmacurves} and the fact that $X_i(0)=0$, we have
\[\int_0^{T-\varepsilon} \sum_i \phi_i \dotX_i dt =\sum_i \phi_i X_i(T-\varepsilon).\]
Furthermore, testing the weak convergence of $\mu_t$ to $dx$ as $t\to T$, with the function $x$, we get
\[\lim_{\varepsilon\to 0} \sum_i \phi_i X_i(T-\varepsilon)=\int_{-\phi/2}^{\phi/2} xdx =0.\]
Finally, since by H\"older's inequality applied twice and $\sum_i \phi_i=\phi$,
\[\int_{T-\varepsilon}^T \sum_i \phi_i |\dotX_i|\,dt\le \phi^{1/2} \int^T_{T-\varepsilon} \Big(\sum_i \phi_i |\dotX_i|^2\Big)^{1/2}dt\le \phi^{1/2}\varepsilon^{1/2} \left(\int_{T-\varepsilon}^T \sum_i \phi_i |\dotX_i|^2\,dt\right)^{1/2},\]
we get
\[\int_0^T \sum_i \phi_i \dotX_i dt=\lim_{\varepsilon\to 0} \left(\int_0^{T-\varepsilon} \sum_i \phi_i \dotX_i dt+ \int_{T-\varepsilon}^T \sum_i \phi_i |\dotX_i| dt\right)=0.\]
Combining this with $\sum_i \phi_i=\phi$ and \eqref{Erescale}, we get
\[\mathcal{E}(\hat{\mu})=\mathcal{E}(\mu)+\frac{1}{T} \phi |X|^2,\]
from which the first part of the lemma follows noticing that the map $\mu\to \hat \mu$ is one-to-one between admissible measures for $E(T,\phi)$ and admissible measures for $E(T,\phi,X)$.\\
The second part follows simply by using the rescaling $\hat{t}:=\phi^{3/2}t$, $\widehat{\phi}_i:= \phi \phi_i$,
and $\widehat{X}_i(\hat{t}):= \phi X_i(t)$.\\
\end{proof}
\begin{remark}
Using the same type of rescalings as the one leading to \eqref{rescaleTphi}, it is not hard to prove that $T\to E(T)$ is a continuous function.
\end{remark}
As a consequence of the monotonicity of the support and of the previous lemma, we can derive the following fundamental recursive characterization of $E(T)$.
\begin{lemma}
For every $T>0$,
\begin{equation}\label{recursive}
E(T)=\min_{\sum_{i=1}^N \phi_i=1} \sum_{i=1}^N \phi_i^{3/2} E(T\phi_i^{-3/2}) +\frac{1}{12T} \left(1-\sum_{i=1}^N \phi_i^3\right),
\end{equation}
so that in particular the energy does not change if we reorder the $\phi_i$.
\end{lemma}
\begin{proof}
Let $\phi_1,..,\phi_N$ be the fluxes of the branches leaving from the point $(0,0)$ (if it is not a branching point then $N=1$). Up to relabeling, we may assume that the $\phi_i$ are ordered i.e. $\phi_1$ corresponds to the first branch, $\phi_2$ to the second and so on.
By the monotonicity of the traces (Proposition \ref{reg}), the $N$ branches are independent and the mass from the first branch will go to $[-1/2,-1/2+\phi_1]$, the second branch will go to $[-1/2+\phi_1,-1/2+\phi_1+\phi_2]$ and so on. Let $\overline{X}_i$ be the centers of the intervals of length $\phi_i$ i.e. $\overline{X}_i= -\frac{1}{2}+ \sum_{j<i} \phi_j +\frac{\phi_i}{2}$.
From \eqref{equationEX} and \eqref{rescaleTphi} we have
\begin{equation}\label{toproverecursive}
E(T)=\min_{\sum_{i=1}^N \phi_i =1} \sum_{i=1}^{N} E(T,\phi_i, \overline{X}_i)= \min_{\sum_{i=1}^N \phi_i =1} \sum_{i=1}^{N}\phi_i^{3/2}E(T \phi_i^{-3/2}) +\frac{1}{T} \sum_{i=1}^N \phi_i |\overline{X}_i|^2,\end{equation}
so that we are left to prove that for every $(\phi_i)_{i=1}^N$ with $\sum_{i=1}^N \phi_i=1$, there holds
\begin{equation}\label{sumequalsum}\sum_{i=1}^N \phi_i |\overline{X}_i|^2=\frac{1}{12}\left(1-\sum_{i=1}^N \phi_i^3\right).\end{equation}
We prove this by induction on $N$. For $N=1$, there is nothing to prove. For $N=2$, since $\phi_2=1-\phi_1$, the left-hand side of \eqref{sumequalsum} is equal to
\[\phi_1\left(\frac{-1+\phi_1}{2}\right)^2+(1-\phi_1)\left(\frac{\phi_1}{2}\right)^2=\frac{1}{4}\phi_1(1-\phi_1),\]
which is equal to the right-hand side of \eqref{sumequalsum}.\\
Assume now that \eqref{sumequalsum} holds for $N-1$. Let then $\phi=\phi_1+\phi_2$ and $\overline{X}=\overline{X}_1+\frac{\phi_2}{2}$. By the induction hypothesis,
\[\sum_{i=3}^{N} \phi_i |\overline{X}_i|^2 +\phi |\overline{X}|^2=\frac{1}{12}\left(1-\sum_{i=3}^N \phi^3_i -\phi^3\right)\]
so that
\[\sum_{i=1}^N \phi_i |\overline{X}_i|^2=\frac{1}{12}\left(1-\sum_{i=1}^N \phi_i^3\right)+\frac{1}{12}(\phi_1^3+\phi_2^3-\phi^3) +\phi_1|\overline{X}_1|^2+\phi_2|\overline{X}_2|^2-\phi|\overline{X}|^2.\]
We are thus left to prove that
\begin{equation}\label{toprovesum}
\frac{1}{12}(\phi_1^3+\phi_2^3-\phi^3) +\phi_1|\overline{X}_1|^2+\phi_2|\overline{X}_2|^2-\phi|\overline{X}|^2=0.
\end{equation}
By definition of $\phi$, $\overline{X}$ and since $\overline{X}_2=\overline{X}_1+\frac{\phi_1+\phi_2}{2}$, we have
\[
\frac{1}{12}(\phi_1^3+\phi_2^3-\phi^3)=-\frac{\phi_1\phi_2}{4}(\phi_1+\phi_2)=-\frac{\phi_1\phi_2 \phi}{4}.
\]
Analogously we can compute
\begin{align*}
\phi_1|\overline{X}_1|^2+\phi_2|\overline{X}_2|^2-\phi|\overline{X}|^2= & \phi_1|\overline{X}_1|^2+\phi_2|\overline{X}_1+\frac{\phi}{2}|^2-\phi|\overline{X}_1+\frac{\phi_2}{2}|^2\\
=& \phi_1|\overline{X}_1|^2+\phi_2|\overline{X}_1|^2 +\phi_2\phi \overline{X}_1+\frac{\phi_2\phi^2}{4}-\phi|\overline{X}_1|^2-\phi\phi_2\overline{X}_1-\frac{\phi\phi_2^2}{4}\\
=&\frac{\phi_2\phi}{4}(\phi-\phi_2)\\
=&\frac{\phi_1\phi_2\phi}{4}.
\end{align*}
Adding these two equalities we get \eqref{toprovesum} which concludes the proof of \eqref{sumequalsum}.
\end{proof}
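As a quick numerical sanity check of the identity \eqref{sumequalsum} (an illustration only, not part of the proof), one may compare both sides for random fluxes; a minimal Python sketch, assuming standard NumPy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
phi = rng.random(5); phi /= phi.sum()   # random fluxes summing to 1
X = -0.5 + np.cumsum(phi) - phi / 2     # interval centers \overline{X}_i
print(np.sum(phi * X**2))               # left-hand side of (sumequalsum)
print((1 - np.sum(phi**3)) / 12)        # right-hand side of (sumequalsum)
\end{verbatim}
Both printed values agree to machine precision.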
Before going further, let us point out that for $T,t>0$, using as test configuration for $E(T+t)$ the measure $\delta_0$ on $[0,t]$ extended by a minimizer of $E(T)$ on $[t,T+t]$, we obtain
\begin{equation}\label{estim1}
E(T+t)\le E(T)+t.
\end{equation}
This together with \eqref{recursive}, motivates the introduction of the largest branching time:
\begin{equation}\label{Tstar}T_*=\inf\{T \ : E(T+t)=E(T)+t, \quad \forall t\ge 0\}.\end{equation}
By definition of $T_*$ and \eqref{estim1}, we see that for every $\varepsilon>0$, $E(T_*-\varepsilon)+\varepsilon>E(T_*)$ which means that every minimizer of \eqref{recursive} for $T=T_*$
must have $N>1$ branches (there must be branching at time zero).\\
We will also need the following simple lemma.
\begin{lemma}\label{straight}
Let $X\in \mathbb{R}$, $T,\phi>0$ with $T\phi^{-3/2}>T_*$, and let $\mu_t$ be a minimizer for $E(T,\phi,X)$. Letting $X(t):=(1-\frac{t}{T})X$, for $t\in[0,T-\phi^{3/2}T_*]$ there holds $\mu_t=\phi \delta_{X(t)}$.
\end{lemma}
\begin{proof}
Since $T\phi^{-3/2}>T_*$, by definition of $T_*$, in $[0,T\phi^{-3/2}-T_*]$, every minimizer of $E(T\phi^{-3/2})$ is of the form $\delta_0$.
Therefore, by \eqref{equationEX} and \eqref{rescaleTphi}, if $X(t):=(1-\frac{t}{T})X$, then for $t\in[0,T-\phi^{3/2}T_*]$, $\mu_t=\phi \delta_{X(t)}$.
\end{proof}
We can now state the main result of this section.
\begin{proposition}\label{propTstar}
We have $T_*=1/4$ and if $\phi_1,..,\phi_N$ are optimal in \eqref{recursive} for $T=T_*$, then $N=2$ and $\phi_1=\phi_2=1/2$. Moreover,
\[E(1/4)-1/4=\frac{1}{2-\sqrt{2}}.\]
\end{proposition}
The proof of this proposition will occupy the remaining part of this section. Before doing so, let us see how it implies Theorem \ref{main}. In the proof, we will use the following notation.
\begin{definition}
For $\mu\in \calA^*_{a,b}$ with $\mu_t=\sum_i \phi_i \delta_{X_i}$ and $X\in \mathbb{R}$, let $S_X(\mu)$ be the measure defined by $(S_X(\mu))_t:=\sum_i \phi_i \delta_{X_i+X}$.
\end{definition}
\begin{proof}[Proof of Theorem \ref{main}]
By definition of $T_*$, for $T\ge 1/4$, we have $E(T)=E(1/4)+(T-1/4)$ and if $\mu$ is a minimizer for $E(T)$, then it coincides with $\delta_0$ in $[0,T-1/4]$ and with (a translated version of) a minimizer for $E(1/4)$ in $[T-1/4,T]$.
Therefore, it is enough to prove that for $T=1/4$, the only minimizer of $E(1/4)$ is given by $\mu^*$.\\
Let $\mu$ be such a minimizer and let us prove by induction that $\mu=\mu^*$. Recall first that we defined $t_k=\frac{1}{4}\left(1-\left(\frac{1}{2}\right)^{3k/2}\right)$.
Assume that $\mu_t=\mu^*_t$ for $t\in[0,t_{k-1}]$ and that $\mu_{t_{k-1}}=2^{-(k-1)}\sum_{i=1}^{2^{k-1}} \delta_{X_i^{k-1}} $ for some ordered $X_i^{k-1}\in \mathbb{R}$.
By monotonicity of the support, in $[t_{k-1},1/4]$, each of the forward subsystems $\mu^{+,i}$ emanating from $X_i^{k-1}$ must be of the form $\mu^{+,i}_t= S_{\overline{X}_i^{k-1}}(\mu^{i}_{t-t_{k-1}})$
where $\mu^{i}$ is a minimizer of $E(\frac{1}{4}-t_{k-1},2^{-(k-1)}, X_i^{k-1}-\overline{X}_i^{k-1})=E(\frac{1}{4} (2^{-(k-1)})^{3/2},2^{-(k-1)}, X_i^{k-1}-\overline{X}_i^{k-1})$. By \eqref{rescaleTphi} and Proposition \ref{propTstar}, every minimizer
of $E(\frac{1}{4} (2^{-(k-1)})^{3/2},2^{-(k-1)}, X_i^{k-1}-\overline{X}_i^{k-1})$ must branch into two pieces of equal mass. Thus, we can further decompose $\mu^{i}=\mu^{i,1}+\mu^{i,2}$ where
$\mu^{i,1}=S_{-2^{-(k+1)}}(\nu^{i,1})$ with $\nu^{i,1}$ a minimizer for
$E(\frac{1}{4} (2^{-(k-1)})^{3/2},2^{-k}, 2^{-(k+1)}+X_i^{k-1}-\overline{X}_i^{k-1})$ and similarly for $\mu^{i,2}$.
Let
\begin{align*}Y_{2i-1}^k(s)&:= -2^{-(k+1)}+(1-\frac{s}{\frac{1}{4}-t_{k-1}})(X_{i}^{k-1}-\overline{X}_{2i-1}^k) \qquad \textrm{ and } \\
Y_{2i}^k(s)&:= 2^{-(k+1)}+(1-\frac{s}{\frac{1}{4}-t_{k-1}})(X_{i}^{k-1}-\overline{X}_{2i}^k).
\end{align*}
Since $\overline{X}_i^{k-1}-2^{-(k+1)}=\overline{X}^k_{2i-1}$ and $\overline{X}_i^{k-1}+2^{-(k+1)}=\overline{X}^k_{2i}$,
by Lemma \ref{straight}, for $s\in[0,t_k-t_{k-1}]$, $\mu^i_s= 2^{-k} (\delta_{Y_{2i-1}^k(s)}+\delta_{Y_{2i}^k(s)})$ and thus letting
\begin{align*}X_{2i-1}^k(t)&:= \frac{t-t_{k-1}}{\frac{1}{4}-t_{k-1}}(\overline{X}_{2i-1}^k-X_{i}^{k-1})+ X_i^{k-1} \qquad \textrm{ and } \\
X_{2i}^k(t)&:= \frac{t-t_{k-1}}{\frac{1}{4}-t_{k-1}}(\overline{X}_{2i}^k-X_{i}^{k-1})+ X_i^{k-1},
\end{align*}
we finally obtain as claimed that for $t\in[t_{k-1},t_k]$,
\[\mu^{+,i}_t=2^{-k} (\delta_{X_{2i-1}^k(t)}+\delta_{X_{2i}^k(t)}).\]
\end{proof}
We may now start investigating the properties of $T_*$.
\begin{lemma}
There holds
\[0<T_*<\infty.\]
As a consequence, the infimum in \eqref{Tstar} is attained.
\end{lemma}
\begin{proof}
We first observe that for every $T>0$, by \eqref{HolderW2}
\begin{equation}\label{estimbelowE}E(T)\ge T+\frac{W_2^2(\delta_0,dx{\LL}[-1/2,1/2])}{T}.\end{equation}
Let us prove that $T_*<\infty$. Let $T\ge 1$ and $\mu_t$ be a minimizer for $E(T)$. By \eqref{estim1}, for every $T\ge 1$,
\[E(T)\le E(1)+(T-1).\]
By the no-loop condition, if $\mu_t$ has its first branching at time $t_0$ then it has a single branch in $[0,t_0]$ and at least two branches in $[t_0,T]$, and thus
\[E(T)\ge t_0+2(T-t_0)=2T-t_0.\]
Putting these two inequalities together we get $t_0\ge T-(E(1)-1)$. Letting $T_1:= E(1)-1$, which is positive by \eqref{estimbelowE}, and assuming that $T\ge T_1$, this implies that before $T-T_1$,
no branching may occur. Hence, for $T\ge T_1$,
\[E(T)=E(T_1)+ (T-T_1),\]
that is $T_*\le T_1$.\\
We now prove that $T_*>0$. By \eqref{estimbelowE}, if $T_*=0$, for every $T_1\le T$,
\[E(T)=E(T_1)+T-T_1\ge T +\frac{W_2^2(\delta_0,dx{\LL}[-1/2,1/2])}{T_1}\]
which letting $T_1\to 0$ would give a contradiction to $E(T)<\infty$. The fact that the infimum in \eqref{Tstar} is attained follows by continuity of $T\to E(T)$.
\end{proof}
The next result is a form of equipartition of energy which will be used to prove that $T_*\le 1/4$.
\begin{lemma}
If $\phi_1,..,\phi_N>0$ are such that
\[E(T_*)= \sum_{i=1}^N \phi_i^{3/2} E(T_* \phi_i^{-3/2})+ \frac{1}{12T_*} \left(1-\sum_{i=1}^N \phi_i^3\right),\]
then
\begin{equation}\label{mainestim}
T_*(N-1)= \frac{1}{12T_*} \left(1-\sum_{i=1}^N \phi_i^3\right).
\end{equation}
\end{lemma}
\begin{proof}
By \eqref{sumequalsum}, if we denote by $\overline{X}_i$ the barycenters of the intervals of length $\phi_i$, it is enough to prove that
\begin{equation}\label{mainestimbis}
T_*(N-1)=\frac{1}{T_*} \sum_{i=1}^N \phi_i |\overline{X}_i|^2.
\end{equation}
Let $\mu_t=\sum_i\phi_i \delta_{X_i}$ be optimal for $T_*$. By definition of $T_*$, the fact that $T_*\phi^{-3/2}_i>T_*$ and Lemma \ref{straight}, there is $\overline{\eps}>0$ such that for $t\in [0,\overline{\eps}]$, $X_i(t)=\frac{t}{T_*} \overline{X}_i$.
For $\overline{\eps}>\varepsilon>0$, we are going to construct a competitor for $E(T_*-\varepsilon)$. In $[\overline{\eps}-\varepsilon,T_*-\varepsilon]$, let $Y_i(t):=X_i(t+\varepsilon)$ and
in $[0,\overline{\eps}-\varepsilon]$, $Y_i(t):=\frac{1}{T_*}\frac{\overline{\eps}}{\overline{\eps}-\varepsilon} t \overline{X}_i$ so that $Y_i(\overline{\eps}-\varepsilon)=X_i(\overline{\eps})$. Therefore,
\begin{align*}
E(T_*-\varepsilon)&\le E(T_*)-N\varepsilon +\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2 (\overline{\eps}-\varepsilon) \frac{\overline{\eps}^2}{(\overline{\eps}-\varepsilon)^2} - \frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2 \overline{\eps}\\
&= E(T_*)-\left(N-\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2\right) \varepsilon +O(\varepsilon^2).
\end{align*}
Hence,
\[\frac{E(T_*-\varepsilon)-E(T_*)}{-\varepsilon}\ge N-\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2 +O(\varepsilon).\]
Using \eqref{estim1} and letting $\varepsilon\to 0$, we get
\[N-\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2\le 1.\]
We similarly define a competitor for $E(T_*+\varepsilon)$ by letting $Y_i(t):=X_i(t-\overline{\eps})$ in $[\varepsilon+\overline{\eps},T_*+\varepsilon]$ and $Y_i(t):=\frac{1}{T_*} \frac{\overline{\eps}}{\varepsilon+\overline{\eps}} t \overline{X}_i$ in $[0, \varepsilon+\overline{\eps}]$ and get
\begin{align*}
E(T_*+\varepsilon)&\le E(T_*)+N\varepsilon +\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2 (\overline{\eps}+\varepsilon) \frac{\overline{\eps}^2}{(\overline{\eps}+\varepsilon)^2} - \frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2 \overline{\eps}\\
&= E(T_*)+\left(N-\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2\right) \varepsilon +O(\varepsilon^2).
\end{align*}
From this we infer that
\[\frac{E(T_*+\varepsilon)-E(T_*)}{\varepsilon}\le N-\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2 +O(\varepsilon).\]
By definition of $T_*$ and by continuity of $E$,
\[\frac{E(T_*+\varepsilon)-E(T_*)}{\varepsilon}=1\]
and therefore
\[N-\frac{1}{T_*^2}\sum_i \phi_i |\overline{X}_i|^2\ge 1,\]
which concludes the proof of \eqref{mainestimbis}.
\end{proof}
\begin{remark}
Notice that \eqref{mainestim} is compatible with $N=2$, $\phi_1=\phi_2=1/2$ and $T_*=\frac{1}{4}$.
\end{remark}
Using the characterization \eqref{recursive}, we show another characterization of $E(T_*)$ which has the advantage of not being recursive anymore.
\begin{proposition}
There holds
\begin{equation}\label{alternativeq}
\frac{E(T_*)-T_*}{T_*}=\min_{\sum_{i=1}^N\phi_i=1} \frac{(N-1)+\frac{1}{12T_*^2} \left(1-\sum_{i=1}^N \phi_i^{3}\right)}{1-\sum_{i=1}^N \phi_i^{3/2}}.
\end{equation}
Moreover, if $\phi_1,..,\phi_N$ are minimizers for $E(T_*)$ then they also minimize the right-hand side of \eqref{alternativeq} and vice-versa.
\end{proposition}
\begin{proof}
Let $\overline{\phi}_i$ be the optimal fluxes for \eqref{recursive}. By definition of $T_*$, we have for every $\phi_i$ with $\sum_i \phi_i=1$ (since $\phi_i^{-3/2}\ge1$)
\begin{align*}
E(T_*)&\le\sum_{i=1}^N \phi_i^{3/2} E(T_* \phi_i^{-3/2})+ \frac{1}{12T_*} \left(1-\sum_{i=1}^N \phi_i^{3}\right)\\
&=\sum_{i=1}^N \phi_i^{3/2} (E(T_*) + (T_* \phi_i^{-3/2}-T_*)) +\frac{1}{12T_*} \left(1-\sum_{i=1}^N \phi_i^{3}\right)\\
&= E(T_*) \left(\sum_{i=1}^N \phi_i^{3/2}\right) +T_* \sum_{i=1}^N (1-\phi_i^{3/2}) +\frac{1}{12T_*} \left(1-\sum_{i=1}^N \phi_i^{3}\right).
\end{align*}
Therefore
\[E(T_*)-T_*\le (E(T_*)-T_*) \left(\sum_{i=1}^N \phi_i^{3/2}\right)+(N-1)T_*+\frac{1}{12T_*} \left(1-\sum_{i=1}^N \phi_i^{3}\right),\]
and then
\[\frac{E(T_*)-T_*}{T_*}\le\frac{(N-1)+\frac{1}{12T_*^2} \left(1-\sum_{i=1}^N \phi_i^{3}\right)}{1-\sum_{i=1}^N \phi_i^{3/2}}\]
with equality for $\phi_i=\overline{\phi}_i$.
\end{proof}
For $N\ge 2$, we introduce a quantity which will play a central role in our analysis. Let
\[\alpha_N:=\inf_{\phi_i\ge 0}\left\{ \frac{1-\sum_{i=1}^N \phi_i^3}{1-\sum_{i=1}^N \phi_i^{3/2}} \ : \ \sum_{i=1}^N \phi_i=1\right\}.\]
We now prove that $T_*\le 1/4$ and that a lower bound on $\alpha_N$ gives an upper bound on $N$.
\begin{proposition}\label{estimTstar}
There holds
\[T_*\le \frac{1}{4}.\]
Moreover, the number $N>1$ of branches of the minimizer for $T_*$ satisfies
\begin{equation}\label{estimcentral1}
\sqrt{N}(\sqrt{N}+1) T_*+\frac{\alpha_N}{12 T_*}\le \frac{\sqrt{2}}{2(\sqrt{2}-1)}.
\end{equation}
As a consequence,
\begin{equation}\label{criterionalphaN}
\sqrt{N}\le \frac{1}{2} \left(-1+\left(1+\frac{6}{\alpha_N(\sqrt{2}-1)^2}\right)^{1/2}\right).
\end{equation}
In particular, since $\alpha_N\ge 1$, this gives $N\le 6$.
\end{proposition}
\begin{proof}
We start with the upper bound $T_*\le \frac{1}{4}$. Let $\phi_1,..,\phi_N$ be optimal for $E(T_*)$. Then, by \eqref{alternativeq} and \eqref{mainestim},
\[\frac{E(T_*)-T_*}{T_*}=\frac{2(N-1)}{\left(1-\sum_i \phi_i^{3/2}\right)}.\]
Using that for every $\phi_1,.., \phi_N$ with $\sum \phi_i=1$,
\begin{equation}\label{convex32}\frac{1}{1-\sum_i \phi_i^{3/2}}\ge \frac{1}{1-\sqrt{N}^{-1}}\end{equation}
we get
\[\frac{E(T_*)-T_*}{T_*}\ge\frac{2(N-1)\sqrt{N}}{\sqrt{N}-1}.\]
Since the right-hand side is minimized for $N=2$ (among $N\in\mathbb{N}$, $N\ge 2$), we have
\begin{equation}\label{lowerbound}
\frac{E(T_*)-T_*}{T_*}\ge \frac{2\sqrt{2}}{\sqrt{2}-1}.
\end{equation}
We proceed further by proving an upper bound for the left-hand side of \eqref{lowerbound}. For every $T>0$, we can construct the self-similar competitor for which every branch is
divided into two branches of half the mass at every branching point (the $k$-th branching occurring at $T_k=T(1-(\frac{1}{2})^{3k/2})$). Let $\tilde{E}(T)$ be its energy. Then,
arguing as in \eqref{recursive}, we have
\[\tilde{E}(T)= 2 \left(\frac{1}{2}\right)^{3/2}\tilde{E}(T)+2\left(T-\left(\frac{1}{2}\right)^{3/2} T\right)+\frac{1}{16 T}\]
that is
\[(\tilde{E}(T)-T)\left(1-\frac{1}{\sqrt{2}}\right)= T +\frac{1}{16 T}\]
from which we get
\[\frac{\tilde{E}(T)-T}{T}= \frac{\sqrt{2}}{\sqrt{2}-1}\left(1+\frac{1}{16 T^2}\right).\]
Since by definition $E(T)\le \tilde{E}(T)$, we get
\begin{equation}\label{upperboundE}\frac{E(T)-T}{T}\le \frac{\sqrt{2}}{\sqrt{2}-1}\left(1+\frac{1}{16 T^2}\right).\end{equation}
For $T> \frac{1}{4}$, the right-hand side is strictly smaller than $\frac{2\sqrt{2}}{\sqrt{2}-1}$ hence by \eqref{lowerbound}, we cannot have $T=T_*$. This gives the upper bound.\\
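The interplay between the bounds \eqref{lowerbound} and \eqref{upperboundE} is easy to visualize numerically; the following minimal Python sketch (an illustration only, not used in the proof) evaluates both sides:
\begin{verbatim}
import numpy as np

s2 = np.sqrt(2.0)
upper = lambda T: s2 / (s2 - 1) * (1 + 1 / (16 * T**2))  # rhs of (upperboundE)
lower = 2 * s2 / (s2 - 1)                                # rhs of (lowerbound)
for T in (0.20, 0.25, 0.30):
    print(T, upper(T), lower)
\end{verbatim}
The two bounds coincide exactly at $T=1/4$, and the upper bound drops below the lower one precisely for $T>1/4$, in line with $T_*\le 1/4$.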
We now turn to \eqref{estimcentral1}. For this, we notice that
\begin{equation}\label{estimcentral2}
\frac{E(T_*)-T_*}{T_*}= \frac{N-1}{1-\sum_i \phi_i^{3/2}}+\frac{(1- \sum_i \phi_i^3)}{12T_*^2(1-\sum_i \phi_i^{3/2})}\ge \frac{N-1}{1-N^{-1/2}}+\frac{\alpha_N}{12 T_*^2}.
\end{equation}
Since $T_*\le 1/4$, by definition of $T_*$ (recall \eqref{Tstar}), $E(1/4)-1/4=E(T_*)-T_*$, combining \eqref{estimcentral2} with \eqref{upperboundE} for $T=1/4$ and
\[
\frac{N-1}{1-N^{-1/2}}=\sqrt{N}(\sqrt{N}+1),
\]
yields \eqref{estimcentral1}.
We finally derive \eqref{criterionalphaN}. For this, multiply \eqref{estimcentral1} by $T_*$ to obtain that
\[
\sqrt{N}(\sqrt{N}+1) T_*^2-\frac{\sqrt{2}}{2(\sqrt{2}-1)} T_* +\frac{\alpha_N}{12}\le 0.
\]
This implies that the polynomial $\sqrt{N}(\sqrt{N}+1) X^2-\frac{\sqrt{2}}{2(\sqrt{2}-1)} X +\frac{\alpha_N}{12}$ has real roots (and that $T_*$ lies between these two roots) so that
\[\Delta=\frac{1}{2(\sqrt{2}-1)^2}-\frac{\alpha_N}{3}\sqrt{N}(\sqrt{N}+1)\ge 0,\]
which is equivalent to
\[(\sqrt{N})^2+\sqrt{N}-\frac{3}{2\alpha_N(\sqrt{2}-1)^2}\le 0.\]
Since the largest root of this polynomial (in the variable $\sqrt{N}$) is given by $\frac{1}{2} (-1+(1+\frac{6}{\alpha_N(\sqrt{2}-1)^2})^{1/2})$, we have obtained \eqref{criterionalphaN}.
\end{proof}
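For instance, with the crude bound $\alpha_N\ge 1$, the right-hand side of \eqref{criterionalphaN} can be evaluated directly (a short numerical illustration in Python):
\begin{verbatim}
import numpy as np

s2 = np.sqrt(2.0)
root = 0.5 * (-1 + np.sqrt(1 + 6 / (1.0 * (s2 - 1)**2)))
print(root**2)   # approximately 6.24, hence N <= 6
\end{verbatim}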
\begin{remark}
From the proof of the previous proposition, one could also get a lower bound for $T_*$. We will make use of this fact later on to study the case $N=2$.
\end{remark}
Estimate \eqref{criterionalphaN} shows that by obtaining a good lower bound on $\alpha_N$, we may exclude that $N\ge 3$. This is the purpose of the next lemma whose proof is essentially postponed to Appendix \ref{appendix}.
\begin{lemma}\label{lem:Nge3}
For $3\le N\le 6$,
\begin{equation}\label{criterionalphaNbis}
\alpha_N>\frac{6}{(\sqrt{2}-1)^2 (2\sqrt{N}+1)^2}.
\end{equation}
As a consequence, the number of branches at $T=T_*$ equals two.
\end{lemma}
\begin{proof}
By inverting the relation between $\alpha_N$ and $N$ in \eqref{criterionalphaN}, it is readily seen that \eqref{criterionalphaNbis} and \eqref{criterionalphaN} are incompatible.
Therefore, proving the lower bound \eqref{criterionalphaNbis} directly excludes the possibility of having $N>2$ branches. Since the proof of \eqref{criterionalphaNbis}
is basically based on a reduction of the problem defining $\alpha_N$ to a union of one dimensional optimization problems which can be solved by a computer assisted proof, we postpone it to Appendix \ref{appendix}.
\end{proof}
We now conclude the proof of Proposition \ref{propTstar} by studying the case $N=2$. \\
Let first
\[T_-:=\frac{1}{4}\left(1-\left(1-\frac{4}{3}(2-\sqrt{2})\right)^{1/2}\right).
\]
Then $1/4\ge T_*\ge T_-$. Indeed, \eqref{estimcentral1} for $N=2$ (recalling that $\alpha_2=2$) may be seen to be equivalent to
\begin{equation}\label{quadraticT}T_*^2-\frac{1}{2}T_*+\frac{1}{6\sqrt{2}(\sqrt{2}+1)}\le 0.\end{equation}
In particular, $T_*$ has to lie between the two roots of the right-hand side of \eqref{quadraticT} which yields $T_*\ge T_-$ (recall that the bound $1/4\ge T_*$ was already derived in Proposition \ref{estimTstar}).\\
Now, since $T_*\le 1/4$, we have $E(1/4)=E(T_*)+1/4-T_*$ so that using \eqref{upperboundE} for $T=1/4$, we get
\[
E(T_*)-T_*=E(1/4)-1/4\le \frac{\sqrt{2}}{2(\sqrt{2}-1)}.
\]
Using \eqref{alternativeq}, we obtain for $\phi$ optimal for $T_*$, the bound
\begin{equation}\label{boundalmostend}
T_* \frac{1+\frac{1}{4T_*^2} \phi(1-\phi)}{1-\phi^{3/2}-(1-\phi)^{3/2}}\le \frac{\sqrt{2}}{2(\sqrt{2}-1)}.
\end{equation}
The next proposition shows the reverse inequality which concludes the proof of Proposition~\ref{propTstar}.
\begin{proposition}\label{prop:Neq2}
For $T\in [T_-,1/4]$ and $\phi\in [0,1]$,
\begin{equation}\label{tofinishtheproof}
T \frac{1+\frac{1}{4T^2}\phi(1-\phi)}{1-\phi^{3/2}-(1-\phi)^{3/2}}\ge \frac{\sqrt{2}}{2(\sqrt{2}-1)},
\end{equation}
with equality if and only if $T=\frac{1}{4}$ and $\phi=1/2$.
\end{proposition}
\begin{proof}
By symmetry we may assume that $\phi\in[0,1/2]$. Letting $\lambda:= 2T$, \eqref{tofinishtheproof} is equivalent to showing that for $\lambda\in [2T_-,1/2]$ and $\phi\in [0,1/2]$,
\begin{equation}\label{finishreduced}
1+\frac{1}{\lambda^2}\phi(1-\phi)\ge \frac{\sqrt{2}}{\lambda(\sqrt{2}-1)} (1-\phi^{3/2}-(1-\phi)^{3/2}).
\end{equation}
It will be more convenient to work with $a:= \frac{3\sqrt{2}\lambda}{2(\sqrt{2}-1)}$. Letting
\[a_-:=\frac{3\sqrt{2}}{\sqrt{2}-1}T_-=\frac{3}{2(2-\sqrt{2})}\left(1-\left(1-\frac{4}{3}(2-\sqrt{2})\right)^{1/2}\right)\simeq 1.36,\] we are reduced to $a \in[a_-,\frac{3\sqrt{2}}{4(\sqrt{2}-1)}]$. Inequality \eqref{finishreduced} then reads
\begin{equation}\label{finishreduced2}
L(a,\phi):=1+\frac{9}{2a^2(\sqrt{2}-1)^2}\phi(1-\phi)\ge \frac{3}{a(\sqrt{2}-1)^2} (1-\phi^{3/2}-(1-\phi)^{3/2})=:R(a,\phi).
\end{equation}
Let us first notice that $L(a,0)=1>0=R(a,0)$ and that for $\phi=\frac{1}{2}$, \eqref{finishreduced2} reads,
\[1+\frac{9}{8a^2(\sqrt{2}-1)^2}\ge \frac{3}{a\sqrt{2}(\sqrt{2}-1)},\]
which always holds true (in terms of $\lambda$, this amounts to $1+\frac{1}{4 \lambda^2}\ge \frac{1}{\lambda}$). Moreover, the inequality above is strict if $a<\frac{3\sqrt{2}}{4(\sqrt{2}-1)}$. We are going to study the variations (for fixed $a$) of $L(a,\phi)-R(a,\phi)$.
By differentiating, this is equivalent to studying the sign of
\begin{equation}\label{finishderived}
D(\phi):= 1-2\phi- a( (1-\phi)^{1/2}-\phi^{1/2}).
\end{equation}
Let $X:=\phi^{1/2}$. For $a\in [a_-,\frac{3\sqrt{2}}{4(\sqrt{2}-1)}]$ and $X\in[0,1/\sqrt{2}]$, since $1-2X^2+aX\ge 0$,
the sign of \eqref{finishderived} is the same as the sign of
\[P(X):=(1-2X^2+aX)^2- a^2(1-X^2)=4X^4 -4a X^3+2(a^2-2)X^2+2aX+(1-a^2).\]
Since $P$ has roots $\{\pm 1/\sqrt{2}\}$, we can factor it to obtain
\[P(X)= 2(X^2-\frac{1}{2})(2X^2-2aX +(a^2-1)).\]
For $a>\sqrt{2}$, $2X^2-2aX +(a^2-1)$ has no real roots and therefore, for $a\in[\sqrt{2}, \frac{3\sqrt{2}}{4(\sqrt{2}-1)}]$, $P$ is negative inside $[0,1/\sqrt{2}]$ and thus $\partial_\phi L- \partial_\phi R\le 0$ implying that
\begin{equation}\label{biggersqt}\min_{\phi\in[0,1/2]} L(a,\phi)-R(a,\phi)=L(a,1/2)-R(a,1/2)\ge 0,\end{equation}
with strict inequality if $a< \frac{3\sqrt{2}}{4(\sqrt{2}-1)}$ or $\phi\neq 1/2$. This proves \eqref{finishreduced} for $a\in[\sqrt{2}, \frac{3\sqrt{2}}{4(\sqrt{2}-1)}]$. If now $a\in [a_-,\sqrt{2}]$, besides $\pm 1/\sqrt{2}$, $P$ has two more roots
\begin{equation}\label{rootsP}X_\pm:=\frac{a\pm\sqrt{2-a^2}}{2}.\end{equation}
For $a\in[a_-,\sqrt{2}]$,
\[0\le X_-\le 1/\sqrt{2}\le X_+,\]
and thus $P$ is negative in $[0,X_-]$ and positive in $[X_-,1/\sqrt{2}]$ from which,
\begin{equation}\label{Psi}\Psi(a):=\min_{\phi\in[0,1/2]} L(a,\phi)-R(a,\phi)=L(a,X_-^2)-R(a,X_-^2).\end{equation}
Let us now prove that for $a\in[a_-,\sqrt{2})$, $\Psi'(a)\le 0$. We first compute
\[\Psi'(a)=\partial_a L(a,X_-^2)-\partial_a R(a,X_-^2)+ 2X_-\partial_a X_-(\partial_\phi L(a,X_-^2)- \partial_\phi R(a,X_-^2)).\]
By minimality of $X_-$, $\partial_\phi L(a,X_-^2)- \partial_\phi R(a,X_-^2)=0$ so that
\[
\Psi'(a)=\partial_a L(a,X_-^2)-\partial_a R(a,X_-^2)=\frac{3}{a^2(\sqrt{2}-1)^2}\left(1-X_-^3-(1-X_-^2)^{3/2}-\frac{3}{a} X_-^2(1-X_-^2)\right).
\]
A simple computation shows that $X_-^2=\frac{1}{2}(1-a\sqrt{2-a^2})$ so that $\Psi'\le 0$ is equivalent to
\[\frac{1}{2\sqrt{2}}(1-a\sqrt{2-a^2})^{3/2}+\frac{1}{2\sqrt{2}}(1+a\sqrt{2-a^2})^{3/2}+\frac{3}{4a}(1-a^2(2-a^2))\ge 1.\]
This indeed holds: the function $f\mapsto(1-f)^{3/2}+(1+f)^{3/2}$ is convex with minimum value $2$ at $f=0$, and $a\mapsto\frac{3}{4a}(1-a^2(2-a^2))=\frac{3}{4a}(a^2-1)^2$ is increasing on $[1,\sqrt{2}]$, so that for $a\in[a_-,\sqrt{2}]$,
\begin{multline*}
\frac{1}{2\sqrt{2}}(1-a\sqrt{2-a^2})^{3/2}+\frac{1}{2\sqrt{2}}(1+a\sqrt{2-a^2})^{3/2}+\frac{3}{4a}(1-a^2(2-a^2))\ge\\
\frac{1}{\sqrt{2}}+\frac{3}{4a_-}(1-a_-^2(2-a_-^2)) \simeq 1.11>1.
\end{multline*}
Therefore, $\Psi'\le 0$ and thus for $a\in[a_-,\sqrt{2}]$, by \eqref{biggersqt}
\[\Psi(a)\ge \Psi(\sqrt{2})> 0,\]
which ends the proof of \eqref{tofinishtheproof}.
\end{proof}
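As an independent numerical check of \eqref{tofinishtheproof} (a rough grid scan in Python; of course no substitute for the proof above):
\begin{verbatim}
import numpy as np

s2 = np.sqrt(2.0)
Tm = 0.25 * (1 - np.sqrt(1 - 4 / 3 * (2 - s2)))   # T_-
T = np.linspace(Tm, 0.25, 200)[:, None]
phi = np.linspace(1e-4, 0.5, 500)[None, :]
lhs = T * (1 + phi * (1 - phi) / (4 * T**2)) \
        / (1 - phi**1.5 - (1 - phi)**1.5)
print(lhs.min(), s2 / (2 * (s2 - 1)))   # minimum attained at T=1/4, phi=1/2
\end{verbatim}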
\begin{remark}
From \eqref{quadraticT}, one could infer the simpler bound $T_*\ge \frac{1}{3\sqrt{2}(\sqrt{2}+1)}$ which leads to $a\ge 1$. For $a\in[1,\sqrt{2}]$, we still have \eqref{rootsP} and \eqref{Psi}.
Numerically, it seems that $\Psi$ is decreasing not only in $[a_-,\sqrt{2}]$ but actually on the whole of $[1,\sqrt{2}]$. We were unfortunately not able to prove this fact, which would have yielded a more elegant proof of \eqref{tofinishtheproof}.
\end{remark}
\section{Applications and open problems}\label{sec:appli}
In this section we use Theorem \ref{main} to characterize the symmetric minimizers of
\begin{equation}\label{symmproblem}\min\{ \mathcal{E}(\mu) \ : \ \mu_{\pm T}=\phi/L dx{\LL} [-L/2,L/2]\},\end{equation}
at least for $T$ large enough. By rescaling, it is enough to consider $\phi=L=1$.
\begin{theorem}\label{structureTlarge}
For $T\in[\frac{1}{4},\frac{1}{4(2\sqrt{2}-2)})$, the unique symmetric minimizer of \eqref{symmproblem} is equal in $[0,T]$ to $S_{-1/4}(\mu^1)+S_{1/4}(\mu^1)$, where $\mu^1$ is equal to the unique minimizer of $E(T,1/2)$ given by Corollary \ref{cormain} in $[0,T]$ and to its symmetric in $[-T,0]$.
For $T\ge \frac{1}{4(2\sqrt{2}-2)}$, it is given by the unique minimizer of $E(T)$ given by Theorem \ref{main} in $[0,T]$ and to its symmetric in $[-T,0]$.
\end{theorem}
\begin{proof}
By Proposition \ref{reg}, we know that a symmetric minimizer exists. Let $\mu$ be such a minimizer. Thanks to the symmetry, we can restrict ourselves to study its structure in $[0,T]$. We let
\[
\mathcal{E}^+(\mu):= \int_{0}^T \sharp\, supp \, \m_t + \sum_i \phi_i |\dotX_i|^2 dt.
\]
Let $\mu_0=\sum_{i=1}^N \phi_i \delta_{X_i}$. We first claim that $X_i=\overline{X}_i$, where as before, $\overline{X}_i=\frac{-1}{2}+\sum_{j<i} \phi_j +\frac{\phi_i}{2}$.
Indeed, applying the same shear as in \eqref{equationEX}, we obtain by minimality of $\mu$,
\[\mathcal{E}^+(\mu)\ge \mathcal{E}^+(\hat{\mu})+\frac{1}{T}\sum_{i=1}^N \phi_i |X_i-\overline{X}_i|^2\ge \mathcal{E}^+(\mu)+\frac{1}{T}\sum_{i=1}^N \phi_i |X_i-\overline{X}_i|^2,\]
where the first inequality takes into account a possible decrease of the number of branches after the shear, and the second follows from the minimality of $\mu$. This proves the claim. For $i=1, .., N$, let $\mu^{+,i}$ be the forward subsystem emanating from $(X_i,0)$. Then, by monotonicity of the traces (Proposition \ref{reg}),
$\mu^{+,i}_T=dx{\LL}[\overline{X}_i-\phi_i/2,\overline{X}_i+\phi_i/2]$ and thus, by the no-loop property
\begin{equation}\label{splittingEplus}\mathcal{E}^+(\mu)=\sum_{i=1}^N\mathcal{E}^+(\mu^{+,i})=\sum_{i=1}^N E(T,\phi_i).\end{equation}
Moreover, $\mu^{+,i}= S_{\overline{X}_i}(\mu^i)$ where $\mu^i$ is some minimizer of $E(T,\phi_i)$. Let now $T\ge 1/4$. Since $\phi_i\le 1$, we have $\phi_i^{-3/2} T\ge 1/4$ and thus by Corollary \ref{cormain},
\[
\mathcal{E}^+(\mu)=\frac{1}{2-\sqrt{2}}\sum_{i=1}^N \phi_i^{3/2} +NT.
\]
For fixed $N$, this is minimized by $\phi_i=1/N$ so that
\[
\mathcal{E}^+(\mu)=\frac{1}{2-\sqrt{2}} N^{-1/2} +NT.
\]
The function $x\to \frac{1}{2-\sqrt{2}} x^{-1/2} +xT$ is minimized at $x_{opt}=\left(2T(2-\sqrt{2})\right)^{-2/3}$. Since $x_{opt}\le 2$ for $T\ge \frac{1}{4(2\sqrt{2}-2)}$ and $2<x_{opt}<3$ for $\frac{1}{4}\le T< \frac{1}{4(2\sqrt{2}-2)}$, this concludes the proof.
\end{proof}
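The thresholds appearing in Theorem \ref{structureTlarge} can be checked directly (a hedged numerical illustration in Python):
\begin{verbatim}
import numpy as np

s2 = np.sqrt(2.0)
x_opt = lambda T: (2 * T * (2 - s2))**(-2.0 / 3.0)
for T in (0.25, 1 / (4 * (2 * s2 - 2)), 0.5):
    print(T, x_opt(T))   # 2 < x_opt < 3 at T = 1/4; x_opt = 2 at the threshold
\end{verbatim}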
As already explained in the introduction, this theorem is not completely satisfactory. Indeed, physically, the most significant case is $T\ll 1$ (where many microstructures should appear),
which is not covered by Theorem \ref{structureTlarge}. However, if we could prove the following conjecture,\\
\smallskip
\noindent \textbf{Conjecture}\\
For $T<T_*$ every minimizer of $E(T)$ branches at time zero (or equivalently $E(T-\varepsilon)<E(T)-\varepsilon$ for $\varepsilon$ small enough),\\
\smallskip
\noindent then the picture would be almost complete. Indeed, in that case, arguing as in the proof of Theorem \ref{structureTlarge}, we would have that every symmetric minimizer $\mu$ with $\mu_0=\sum_{i=1}^N \phi_i \delta_{X_i}$ would satisfy \eqref{splittingEplus}. Now for $1\le i\le N$, let $\phi_{i,1},..,\phi_{i,N_i}$
be the $N_i$ branches starting from $(0,X_i)$. As in the proof of \eqref{recursive}, we would have
\[
E(T,\phi_i)=\sum_{k=1}^{N_i} E(T,\phi_{i,k},\overline{X}_{i,k}),
\]
where $\overline{X}_{i,k}:= -\frac{\phi_i}{2}+\sum_{j<k} \phi_{i,j} +\frac{\phi_{i,k}}{2}$. Since the minimizer corresponding to $E(T,\phi_{i,k},\overline{X}_{i,k})$ cannot branch at time zero, we would have (if the conjecture holds) that $\phi_{i,k}^{-3/2}T\ge 1/4$
so that Corollary \ref{cormain} applies and the structure of the minimizers would be fully determined. Let us point out that our conjecture would be for instance implied by the convexity of $T\to E(T)$.
\section{Introduction} \label{sec1}
The Landau--Brazovskii (LB) model is a generic model to describe phase transitions, driven by a short-wavelength instability, between disordered and ordered phases \cite{bra75}. This model has been widely used to simulate several physical systems, such as block copolymers \cite{shi96,zh08}, liquid crystals \cite{kat93, wan21} and other microphase-separating systems \cite{lif97,yao22}.
More concretely, the energy functional of the LB Model is given by
\begin{align}\label{LB-energy}
\mathcal{E}(\phi) = \int_\Omega\left\{ \frac{1}{2}\left(\Delta \phi + \phi\right)^2 - \frac\alpha{2!}\phi^2 + \frac{1}{4!}\phi^4 - \frac\gamma{3!}\phi^3 \right\}\,\mathrm{d}\Bx,
\end{align}
where the order parameter field $\phi({\boldsymbol{x}})$ is a real-valued periodic function that measures the order of the system in $\mathbb{R}^d$,
$\SN{\Omega}$ is the volume of the domain $\Omega\subset\mathbb{R}^d$,
and $\alpha,\gamma$ are adjustable parameters.
Compared with the typical Swift-Hohenberg model \cite{swi77} with double-well bulk energy,
the LB energy functional includes a cubic term that can be used to characterize the asymmetry of the ordered phase.
Note that $\mathcal{E}$ is invariant under the transformation $\phi\to-\phi,\gamma\to-\gamma$.
For convenience, we suppose $\gamma \ge 0$.
Moreover, to conserve the number of particles in the system, $ \phi $ satisfies the following mass conservation constraint:
\begin{align}\label{mass}
\bar\phi:=\frac1{|\Omega|}\int_\Omega\phi\,\mathrm{d}\Bx = 0.
\end{align}
To find the stationary states of the LB model, the Allen-Cahn gradient flow of the LB model (AC-LB) reads as
\begin{align}\label{AC-LB}
\partial_t\phi = -\dphi{\mathcal{E}}(\phi) + \beta(\phi),\quad
\beta(\phi):=\frac1{\SN{\Omega}} \int_\Omega\left((1 - \alpha)\phi + \frac{\phi^3}{3!} - \frac{\gamma}{2} \phi^2\right)\,\mathrm{d}\Bx,
\end{align}
where $\dphi{\mathcal{E}}$ is the first order variational derivative of $\mathcal{E}$ with respect to $\phi$,
$\partial_t $ is the partial derivative with respect to $t$,
and the last term $\beta(\phi)$ is the Lagrange multiplier to conserve the total mass of $\phi$ \cite{rub92,zh19}.
It is straightforward to show that the equation \eqref{AC-LB} satisfies the mass conservation and energy dissipative law. First,
by taking the inner product of \eqref{AC-LB} with $1$ and using integration by parts, we have
\begin{align}\label{mass-con}
\dt{}\bar\phi = 0.
\end{align}
Next, by taking the inner product of \eqref{AC-LB} with $\partial_t\phi$ and using integration by parts and the mass conservation \eqref{mass-con}, we obtain the following energy dissipative law:
\begin{align}\label{dissipation}
\dt{}\mathcal{E}(\phi) = -\int_\Omega (\partial_t\phi)^2 \,\mathrm{d}\Bx \le 0.
\end{align}
Therefore, the goal of this paper is to develop an efficient numerical method for the AC-LB equation \eqref{AC-LB} that preserves the mass conservation \eqref{mass-con} and the desired energy dissipation \eqref{dissipation} during the iterative process.
Then the energy minimizer of the LB model \eqref{LB-energy} is obtained with a proper choice of initialization.
Many efforts have been devoted to designing numerical schemes for nonlinear gradient flow equations with energy dissipation and mass conservation properties.
For example, typical energy stable schemes for gradient flows include the convex splitting methods \cite{eyr98}, the exponential time differencing schemes \cite{du21}, the stabilization methods \cite{she10}, and the invariant energy quadratization \cite{yan16} and scalar auxiliary variable \cite{she19} methods for modified energies.
Numerically, the gradient flow needs to be discretized in both the space and time domains. The typical spatial discretization techniques
include the finite difference method \cite{wis09, wan11, xu21}, the finite element method \cite{du08} and the Fourier pseudo-spectral method \cite{she98,jk14,yin21}.
To calculate the stationary states of the LB model, an efficient numerical method was developed by using the Fourier expansion of order parameter to find the meta-stable and stable phases in the diblock copolymer system \cite{zh08}. A second-order invariant energy quadrature approach with the stabilization technique was proposed to keep the required accuracy while using large time steps \cite{zh19}.
By using the optimization techniques, Jiang et al. proposed adaptive accelerated Bregman proximal gradient methods for phase field crystal models \cite{jk20}.
In this paper, we propose an efficient mass conservative and energy stable scheme for the LB model by combining the convex splitting technique with the spectral deferred correction (SDC) method.
The mass conservative and energy stable properties are proved for the linear convex splitting method for the AC-LB equation \eqref{AC-LB}.
The SDC method was first introduced to solve initial value ordinary differential equations (ODEs) in \cite{dut00}, and the central idea of the SDC method is to convert the original ODEs into the corresponding Picard equation and then use a deferred correction procedure in the integral formulation to achieve higher-order accuracy iteratively.
We choose the SDC method combined with the convex splitting technique for the following reasons:
iteration loops can improve the formal accuracy flexibly and straightforwardly,
and the SDC method was designed to handle stiff systems, such as singularly perturbed nonlinear equations.
Moreover, an adaptive correction strategy for the SDC method is proposed to increase the rate of convergence and reduce the computational time.
Both two- and three-dimensional periodic crystals in the LB model are shown in numerical examples to demonstrate the accuracy and efficiency of the proposed approach.
The rest of this paper is organized as follows.
In section \ref{sec2}, the convex splitting scheme is constructed, which is linear and unconditionally stable for the AC-LB equation.
We will give some direct energy stability proof by applying the general result of the convex-concave argument.
In section \ref{sec3}, we give a brief review of the classical SDC method and combine the SDC method with the convex splitting method to solve the AC-LB equation.
The Fourier spectral method is presented for the spatial discretization in section \ref{sec4}.
Numerical experiments are carried out in section \ref{sec5}, and some concluding remarks will be given in the final section.
\section{Convex splitting method} \label{sec2}
We suppose the domain $\Omega\subset\mathbb{R}^d$ is rectangular.
Let $\Ltwo$ be the space of square-integrable functions.
The inner product and norm on $\Ltwo$ are denoted by
\begin{align*}
\IP{\phi}{\psi} := \int_\Omega\phi\psi\,\mathrm{d}\Bx,
\quad\NLtwo{\phi} := \sqrt{\IP{\phi}{\phi}}.
\end{align*}
For any integer $m>0$, denote
$\Hm{m} := \SET{v\in\Ltwo: D^\xi{v}\in\Ltwo,\SN{\xi}\le{m}},$
where $\xi$ is a non-negative triple index.
Let $\Hmper{m}$ be the subspace composed of periodic functions on $\Hm{m}$.
Define the space $\Unicont$ to consist of all functions which are bounded and uniformly continuous on $\Omega$.
$\Unicont$ is a Banach space with norm given by $\Nunicont{\phi} := \sup_{{\boldsymbol{x}}\in\Omega}|\phi({\boldsymbol{x}})|.$
Vector-valued quantities will be denoted by boldface notations, such as $\Ltwov:=(\Ltwo)^d$.
\subsection{A convex splitting of the energy functional}
We introduce a sufficiently large positive constant $S$; the convex splitting $\mathcal{E}(\phi)=\mathcal{E}^c(\phi)-\mathcal{E}^e(\phi)$ can then be taken as
\begin{align}\label{ce-energy}
\begin{split}
\mathcal{E}^c(\phi) &= \int_\Omega\left\{\frac{1}2\left((\Delta+1)\phi\right)^2 - \frac\alpha{2!}\phi^2 + \frac{S} 2\phi^2\right\}\,\mathrm{d}\Bx, \\
\mathcal{E}^e(\phi) &= \int_\Omega\left\{\frac{S}2\phi^2 - \frac{1}{4!}\phi^4 + \frac{\gamma}{3!}\phi^3 \right\}\,\mathrm{d}\Bx,
\end{split}
\end{align}
where “c” (“e”) refers to the contractive (expansive) part of the energy.
This idea of adding and subtracting a term ${S}/2\NLtwo{\phi}^2$ to a nonlinear energy $\mathcal{E}$ to obtain a stable time discretization is based on the convex-concave splitting of Eyre \cite{eyr98}.
A calculation of the second variation shows
\begin{align*}
\ddv{\mathcal{E}^c}{s}(\phi+s\psi)\bigg|_{s=0} &= \int_\Omega\left((\Delta+1)^2 + S - \alpha \right)\psi^2 \,\mathrm{d}\Bx, \\
\ddv{\mathcal{E}^e}{s}(\phi+s\psi)\bigg|_{s=0} &= \int_\Omega\left(S-\frac{\phi^2}{2} + \gamma\phi\right)\psi^2 \,\mathrm{d}\Bx,
\end{align*}
which implies that $\mathcal{E}^c$ is globally convex on $\Ltwo$ if $S>\alpha$,
and $\mathcal{E}^e$ is locally convex depending on $\Nunicont{\phi}$.
Fortunately, we find that the $\Unicont$-bound of the state function $\phi$ depends on its LB energy. The argument is similar to the scheme analysis in \cite{els13}, but for the sake of completeness, we will provide a condensed version of the proof.
\begin{lemma}\label{lm2.1}
Assume $\alpha < 1 $. For any $\phi \in \Hmper{2}$ with finite energy $\mathcal{E}(\phi)$ there is a constant $\lambda>0$ independent of $\phi$ such that
\begin{align}
\Nunicont{\phi} \le \sqrt{\frac{\mathcal{E}(\phi) + (9\gamma^4 + 3)\SN{\Omega}}{\lambda}} =: \mathcal{C}(\phi).
\end{align}
\end{lemma}
\begin{proof}
By H\"{o}lder and Young inequalities, we deduce that
\begin{align}\label{use-ineq}
\begin{split}
\frac{\gamma}{3!}\NLp{\phi}{3}^3 &\le \frac1{2} \frac{1}{4!} \NLp{\phi}{4}^4 + 9 \gamma^4 \SN{\Omega},\\
\frac{1}{4!} \NLp{\phi}{4}^4 &\ge \NLtwo{\phi}^2 - 6 \SN{\Omega}, \\
\NLtwo{\nabla \phi}^2 &= - \SP{\phi}{\Delta \phi} \le \frac 1{2\epsilon} \NLtwo{\phi}^2 + \frac{\epsilon}{2} \NLtwo{\Delta \phi}^2,
\end{split}
\end{align}
for any $\epsilon > 0$.
By substituting \eqref{use-ineq} into the original energy \eqref{LB-energy}, we get
\begin{align*}
\mathcal{E}(\phi) &= \frac{1}{2} \NLtwo{\Delta\phi}^2 - \NLtwo{\nabla \phi}^2 + \frac{1-\alpha}{2}\NLtwo{\phi}^2 - \frac{\gamma}{3!}\NLp{\phi}{3}^3 + \frac{1}{4!}\NLp{\phi}{4}^4 \\
&\ge \frac{1-\epsilon}{2}\NLtwo{\Delta \phi}^2 + \frac{2-\alpha-\frac{1}{\epsilon}}{2} \NLtwo{\phi}^2 - (9 \gamma^4 + 3) \SN{\Omega} \\
&\ge \frac{3}{2} \eta \left( \NLtwo{\Delta \phi}^2 + \NLtwo{\phi}^2 \right) - (9 \gamma^4 + 3) \SN{\Omega}
\end{align*}
for some $\eta>0$, where we chose $\epsilon\in(\frac{1}{2-\alpha},1)$, which is possible since $\alpha<1$. Then we have
\begin{align*}
&\mathcal{E}(\phi) + (9 \gamma^4 + 3) \SN{\Omega} \ge \eta \left( \NLtwo{\Delta \phi}^2 + \NLtwo{\phi}^2 + \frac{1}{2} \left( \NLtwo{\Delta \phi}^2 + \NLtwo{\phi}^2 \right) \right) \\
& \ge \eta \left( \NLtwo{\Delta \phi}^2 + \NLtwo{\phi}^2 + \NLtwo{\nabla \phi}^2 \right) = \eta \NHm{\phi}{2}^2 \ge \lambda \Nunicont{\phi}^2
\end{align*}
for some $\lambda >0$ by the Sobolev embedding theorem $\Hmper{2} \subset \Hm{2} \hookrightarrow \Unicont$ (cf. \cite{ada03}).
The proof is finished.
\end{proof}
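The constant $9\gamma^4$ in the first line of \eqref{use-ineq} comes from maximizing $\frac{\gamma}{6}x^3-\frac{1}{48}x^4$ over $x\ge0$ (the maximum $9\gamma^4$ is attained at $x=6\gamma$); a short numerical check (Python, illustration only):
\begin{verbatim}
import numpy as np

g = 0.7                                    # any gamma > 0
x = np.linspace(0.0, 20.0, 200001)
print((g / 6 * x**3 - x**4 / 48).max())    # approximately 9*g**4
print(9 * g**4)
\end{verbatim}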
For the forthcoming analysis, we introduce the classical convex-concave splitting argument without proof.
For the proof of the lemma, the reader is referred to \cite{wis09}.
\begin{lemma}\label{lm2.2}
Suppose that $\phi,\psi \in \Hmper{2}$, and $\mathcal{E}^c,\mathcal{E}^e$ are all convex on $\Ltwo$.
Then
\begin{align}\label{CS-argue}
\mathcal{E}(\phi)-\mathcal{E}(\psi) \le \SP{\dphi{\mathcal{E}^c}(\phi)-\dphi{\mathcal{E}^e}(\psi)}{\phi-\psi},
\end{align}
where $\dphi{}$ denotes the first-order variational derivative.
\end{lemma}
\subsection{A linear stable time discretization}\label{sec-CS}
For the choices \eqref{ce-energy}, we obtain a convex splitting scheme of \eqref{AC-LB}: Find $\phi^{n+1} \in \Hmper{2}, n \in \mathbb{N}$ such that
\begin{align}\label{CS}
\frac{\phi^{n+1}-\phi^n}{\Delta t} = - \left(\dphi{\mathcal{E}^c}(\phi^{n+1})-\dphi{\mathcal{E}^e}(\phi^n)\right) + \beta(\phi^n),
\end{align}
where $\phi^n\approx\phi(t_n)$ is the numerical solution at the $n$-th level $t_n=n\Delta t$ and $\Delta t$ is the step size.
The above scheme enjoys several desirable properties.
First of all, the scheme \eqref{CS} is explicit in the nonlinear terms and hence only a linear system needs to be solved to generate $\phi^{n+1}$ at the next time level.
By taking $\Ltwo$ product of \eqref{CS} with $1$ and using integration by parts, we obtain
\begin{align}\label{mass-CS}
\overline{\phi^{n}} = \overline{{\phi}^0}\quad \forall n \in \mathbb{N},
\end{align}
which implies the proposed scheme can preserve the average mass precisely.
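On a periodic box, the implicit operator $(\Delta+1)^2+S-\alpha$ in \eqref{CS} is diagonal in Fourier space, so each time step reduces to one explicit nonlinear evaluation and one FFT solve. A minimal Python sketch of a single step (anticipating the Fourier spectral discretization of section \ref{sec4}; the precomputed array \texttt{k2} holding $|\boldsymbol{k}|^2$ on the FFT grid is an assumption of this illustration):
\begin{verbatim}
import numpy as np

def cs_step(phi, dt, alpha, gamma, S, k2):
    # Lagrange multiplier beta(phi^n): a plain spatial average
    beta = np.mean((1 - alpha) * phi + phi**3 / 6 - gamma / 2 * phi**2)
    # explicit part: phi^n + dt * (delta E^e(phi^n) + beta(phi^n))
    rhs = phi + dt * (S * phi - phi**3 / 6 + gamma / 2 * phi**2 + beta)
    # implicit solve: (1 + dt*((1 - |k|^2)^2 + S - alpha)) phi^{n+1} = rhs
    denom = 1 + dt * ((1 - k2)**2 + S - alpha)
    return np.real(np.fft.ifftn(np.fft.fftn(rhs) / denom))
\end{verbatim}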
Also, the scheme \eqref{CS} is $\Unicont$-stable and decreases the original energy \eqref{LB-energy} in every step, as shown by the following theorem.
\begin{theorem}\label{th2.1}
Assume $\alpha < 1 $. For any $\phi^0\in\Hmper{2}$ with finite energy $\mathcal{E}(\phi^0)$ there exists a $S>0$ such that the scheme \eqref{CS} is stable for any $\Delta{t}>0$ in the sense
\begin{align}\label{stable-CS}
\Nunicont{\phi^n} \le \mathcal{C}^0, \quad \mathcal{E}(\phi^{n+1}) \le \mathcal{E}(\phi^n) \quad \forall{n}\in\mathbb{N},
\end{align}
where $\mathcal{C}^0:= \mathcal{C}(\phi^0)$.
\end{theorem}
\begin{proof}
Without loss of generality, we assume $\mathcal{C}^0 \ge 1$.
Choose
\begin{align}\label{CS-choose}
S > \max \left( 1, \alpha, \frac{1}{2} (\mathcal{C}^0)^2 + \gamma \mathcal{C}^0, \frac{\mathcal{E}(\phi^0)+\gamma^2\SN{\Omega}/4}{2\lambda} \right).
\end{align}
We will prove the theorem by induction on $n\in\mathbb{N}$, so assume $\mathcal{E}(\phi^n) \le \mathcal{E}(\phi^0)$ and $\Nunicont{\phi^n} \le \mathcal{C}^0$.
Let us introduce
\begin{align*}
\widetilde{\mathcal{E}}^e &= \int_{\Omega} \left\{ \frac{S}{2} \phi^2 - F(\phi) \right\} \,\mathrm{d}\Bx, \quad \text{where} \\
F(\phi) &=
\begin{cases}
\frac{1}{4!}\phi^4 - \frac{\gamma}{3!}\phi^3, \quad &\Nunicont{\phi} \le \mathcal{C}^0,\\
\left( \frac{1}{4} (\mathcal{C}^0)^2 + \frac{\gamma}{2} \mathcal{C}^0 \right) \phi^2 + \left( \frac{1}{2}(\mathcal{C}^0)^2-\gamma \mathcal{C}^0 \right)\SN{\phi},\quad & \text{else}.
\end{cases}
\end{align*}
Since $S$ satisfies \eqref{CS-choose}, a calculation of the second variation yields
\begin{align*}
\ddv{\widetilde\mathcal{E}^e}{s}(\phi+s\psi)\bigg|_{s=0} & \ge \int_\Omega\left( S - \left( \frac{1}{2} (\mathcal{C}^0)^2 + \gamma \mathcal{C}^0 \right) \right)\psi^2 \,\mathrm{d}\Bx > 0.
\end{align*}
Thus, $\widetilde{\mathcal{E}}^e$ is globally convex on $\Ltwo$.
By the same argument, we can prove that $\mathcal{E}^e$ is convex on $\{\phi \in \Ltwo: \Nunicont{\phi} \le \sqrt{2S} \}$.
Then, using the convexity of $\mathcal{E}^c$ and $\widetilde{\mathcal{E}}^e$, we employ the traditional convex-concave splitting argument \eqref{CS-argue},
\begin{align}\label{use-thm-1}
\begin{split}
\mathcal{E}^c(\phi^{n+1})-\widetilde{\mathcal{E}}^e(\phi^{n+1})
&\le \mathcal{E}^c(\phi^{n})-\widetilde{\mathcal{E}}^e(\phi^{n}) +
\SP{\delta_\phi \mathcal{E}^c(\phi^{n+1}) - \delta_\phi \widetilde{\mathcal{E}}^e(\phi^n)}{\phi^{n+1}-\phi^n}\\
&= \mathcal{E}^c(\phi^{n})-{\mathcal{E}}^e(\phi^{n}) +
\SP{\delta_\phi \mathcal{E}^c(\phi^{n+1}) - \delta_\phi {\mathcal{E}}^e(\phi^n)}{\phi^{n+1}-\phi^n}.
\end{split}
\end{align}
Inserting the scheme \eqref{CS} into \eqref{use-thm-1} and using the mass conservation \eqref{mass-CS}, we have
\begin{align}\label{use-thm-2}
\mathcal{E}^c(\phi^{n+1})-\widetilde{\mathcal{E}}^e(\phi^{n+1})
\le \mathcal{E}(\phi^n) - \frac{1}{\Delta{t}} \NLtwo{\phi^{n+1}-\phi^n}^2
\le \mathcal{E}(\phi^n) \le \mathcal{E}(\phi^0).
\end{align}
By the same argument as in Lemma \ref{lm2.1}, \eqref{use-thm-2} and \eqref{CS-choose} lead to the desired bound
\begin{align*}
\Nunicont{\phi^{n+1}} \le \sqrt{\frac{\mathcal{E}(\phi^0)+\gamma^2\SN{\Omega}/4}{\lambda}} \le \sqrt{2S}.
\end{align*}
The proof is finished by applying the classical convex-concave splitting argument \eqref{CS-argue} to $\mathcal{E}^c-\mathcal{E}^e$.
\end{proof}
\section{Spectral deferred correction method.} \label{sec3}
To construct an efficient numerical method for the AC-LB equation,
we develop a novel SDC method by combining the semi-implicit SDC method \cite{dut00,min03} with the convex splitting method.
First, we present the original SDC method, including the classical deferred correction and some of its technical details.
The SDC method for the AC-LB equation will be presented next.
\subsection{The classical deferred correction}
The original deferred correction method was introduced to solve the following Cauchy problem:
\begin{align}\label{Cauchy-ODE}
\begin{split}
\phi'(t) &= G(\phi) \quad t \in (a,b],\\
\phi(a)&= \phi_a.
\end{split}
\end{align}
The deferred correction approach works by converting the original ODEs \eqref{Cauchy-ODE} into the corresponding Picard equation
\begin{align}\label{pre-eqn}
\phi(t) = \phi_a + \int_a^t G(\phi(s)) \dmea{s}.
\end{align}
Given an initial approximation $\phi^p$, an error function to measure the approximation is defined by
\begin{align}\label{error}
E(t,{\phi}^p) = \phi_a + \int_a^t G({\phi}^p(s)) \dmea{s} - {\phi}^p(t).
\end{align}
Define the correction $\delta(t) := \phi(t) - {\phi}^p(t)$.
Substituting $\phi(t) = {\phi}^p(t)+\delta(t)$ into \eqref{pre-eqn} and using \eqref{error}, we obtain the correction equation
\begin{align}\label{cor-eqn-1}
\delta(t) = \int_a^t \left( G({\phi}^p(s)+\delta(s)) - G({\phi}^p(s)) \right) \dmea{s} + E(t,{\phi}^p).
\end{align}
After discretizing the correction equation \eqref{cor-eqn-1} by some numerical method and adding the correction $\delta(t)$ to the initial approximation $\phi^p(t)$,
we obtain a higher-order approximate solution $\phi^c(t)$.
An advantage of this method is that it is a one-step method and can be constructed easily and systematically for any order of accuracy.
\subsection{Subintervals and integral of the interpolant}
The SDC method focuses on a single time interval $[t_n,t_{n+1}]$.
Given a set of $M$ Gauss-Lobatto quadrature nodes $t_n = \xi_1 < \dots < \xi_M = t_{n+1}$ (cf. \cite{she11}),
we divide the time interval $[t_n,t_{n+1}]$ into $M-1$ non-overlapping subintervals, i.e.,
$[t_n,t_{n+1}] = \bigcup_{i=1}^{M-1}[\xi_i,\xi_{i+1}]$.
Let $\Delta \xi_i = \xi_{i+1} - \xi_i$ denote the length of subinterval $[\xi_i,\xi_{i+1}]$.
For convenience, we use the notation $\phi_i = \phi(\xi_i)$.
The same principle also applies to approximations $\delta_i,\phi_i^p,\phi_i^c$.
To compute correction $\delta$ by approximating equation \eqref{cor-eqn-1},
the error function $E(t,\phi^p)$ in \eqref{error} must be approximated using numerical quadrature.
Since $\phi^p$ is known at $M$ Gauss-Lobatto quadrature nodes,
we can define the Lagrange interpolation operator $\mathcal{I} $ to be the projection onto the space of polynomials of degree at most $M - 1$ via
\begin{align}\label{lag-inter}
\mathcal{I}(G(\phi^p))(t) := \sum_{j=1}^M G(\phi^p_j) \ell_j(t),
\end{align}
where $\ell_j(t)$ is Lagrange interpolating basis polynomial corresponding to the spectral point $\xi_j$:
\begin{align*}
\ell_j(t) := \frac{1}{c_j} \prod_{k=1,k \neq j}^M (t-\xi_k), \quad c_j = \prod_{k=1,k \neq j}^M (\xi_j-\xi_k).
\end{align*}
Then we have the integral of the Lagrange interpolant \eqref{lag-inter} over subinterval $\left[\xi_i,\xi_{i+1}\right]$ as follows.
\begin{align}\label{int-lag}
\int_{\xi_i}^{\xi_{i+1}} \mathcal{I}(G(\phi^p))(t) \dmea{t} =\sum_{j=1}^M \omega_{ij} G(\phi^p_j),
\quad
\omega_{ij} = \int_{\xi_i}^{\xi_{i+1}} \ell_j(t) \dmea{t},
\end{align}
where $\omega_{ij}$ is quadrature weight.
The coefficients $\omega_{ij}$ can be precomputed, and the quadrature is reduced to a simple matrix-vector multiplication.
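For concreteness, the nodes $\xi_j$ and the weight matrix $(\omega_{ij})$ can be precomputed as follows (a minimal Python sketch with NumPy; the helper names are ours):
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre, polynomial

def lobatto_nodes(M, a=0.0, b=1.0):
    # Gauss-Lobatto nodes on [a,b]: endpoints plus roots of P'_{M-1}
    inner = legendre.Legendre.basis(M - 1).deriv().roots()
    x = np.concatenate(([-1.0], inner, [1.0]))
    return a + (b - a) * (x + 1.0) / 2.0

def sdc_weights(nodes):
    # omega[i, j] = integral of ell_j over [xi_i, xi_{i+1}]
    M = len(nodes)
    omega = np.zeros((M - 1, M))
    for j in range(M):
        y = np.zeros(M); y[j] = 1.0
        ell = polynomial.Polynomial.fit(nodes, y, M - 1).integ()
        omega[:, j] = ell(nodes[1:]) - ell(nodes[:-1])
    return omega
\end{verbatim}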
\begin{remark}
Given one approximation ${\phi^p}$, the error estimate for the integral of interpolant
relies on the regularity of solution $\phi$ and the choice of the quadrature rules.
In \cite{cau17}, the author found that Gauss-Lobatto quadrature nodes minimize the error constant and avoid the Runge phenomenon if a large number of quadrature nodes are chosen.
Another advantage of using Gauss-Lobatto nodes is that it contains interval endpoints $t_n$ and $t_{n+1}$, so do not need additional extrapolation.
\end{remark}
\subsection{SDC method combined with the convex splitting method.}
Suppose we already have the initial numerical approximation $\phi_1^p$ at the left endpoint $\xi_1$.
By section \ref{sec-CS}, a convex splitting method for computing approximation $\phi^p$ to Picard equation \eqref{pre-eqn} is
\begin{align}\label{pre-CS}
\phi^p_{i+1} = \phi^p_i + \Delta \xi_i \left( G_\text{im}(\phi^p_{i+1}) + G_\text{ex}(\phi^p_i) \right) \quad i = 1,\dots,M-1,
\end{align}
where $G(\phi) := G_\text{im}(\phi) + G_\text{ex}(\phi)$ and the implicit and explicit parts are defined by
\begin{align}
G_\text{im}(\phi) := - \delta_\phi \mathcal{E}^c(\phi), \quad
G_\text{ex}(\phi) := \delta_\phi \mathcal{E}^e(\phi) + \beta(\phi).
\end{align}
Then we focus on the correction process.
Note that $\phi^c_i = \phi^p_i + \delta_i$.
To be more specific, we set $\phi^c_1 = \phi^p_1$ and $\delta_1 = 0$ as the initial value.
Discretizing the correction equation \eqref{cor-eqn-1} via the convex splitting method, we have
\begin{align*}
\phi^c_{i+1}
= \phi^c_i + \Delta \xi_i \left( G_\text{im}(\phi^c_{i+1}) + G_\text{ex}(\phi^c_i)
- G_\text{im}(\phi^p_{i+1}) - G_\text{ex}(\phi^p_i) \right)
+ \int_{\xi_i}^{\xi_{i+1}} G(\phi^p)(t) \dmea{t}.
\end{align*}
Since the function $G(\phi^p)$ is only known at $M$ Gauss-Lobatto quadrature nodes,
the last term of the above equation can be computed with the integral of interpolant \eqref{int-lag}.
Then we get the correction approximation
\begin{align}\label{cor-CS}
\phi^c_{i+1}
= \phi^c_i + \Delta \xi_i \left( G_\text{im}(\phi^c_{i+1}) + G_\text{ex}(\phi^c_i)
- G_\text{im}(\phi^p_{i+1}) - G_\text{ex}(\phi^p_i) \right)
+ \int_{\xi_i}^{\xi_{i+1}} \mathcal{I}(G(\phi^p)) (t) \dmea{t}.
\end{align}
Iterated deferred correction proceeds by computing a new correction $\phi^c$ to the updated prediction $\phi^p$, and solving the correction equation \eqref{cor-CS} again to obtain a higher order approximation.
For ease of identification, the SDC method using $M$ Gauss-Lobatto nodes and $K$ correction iterations will be denoted SDC$_M^K$.
For a given initial approximation $\phi^n$, the SDC$_M^K$ algorithm generates $\phi^{n+1}$ as follows.
\begin{algorithm}[htb]
\caption{$\phi^{n+1}=$SDC$_M^K(\phi^n)$}
\begin{algorithmic}\label{alg1}
\STATE {Set: $\phi_1^p\leftarrow\phi^n$, $\phi_1^c \leftarrow \phi_1^p$.}
\FOR{{$i = 1:M-1$}}
\STATE Solve prediction equation \eqref{pre-CS} to get ${\phi_{i+1}^p}$.
\ENDFOR
\FOR{$j=1:K$}
\FOR{$i=1:M-1$}
\STATE Solve correction equation \eqref{cor-CS} to get $\phi^c_{i+1}$.
\ENDFOR
\STATE {Update the approximate solution: $\phi^p \leftarrow \phi^c$.}
\ENDFOR
\STATE {Return: $\phi^{n+1}\leftarrow\phi^c_M$.}
\end{algorithmic}
\end{algorithm}
The global order of accuracy of the SDC$_M^K$ method is $\min\{M,K\}$ (cf.\ \cite{min03}).
Furthermore, under the assumptions of Theorem \ref{th2.1},
we find that for any $\phi^n \in \Hmper{2}$ with finite energy $\mathcal{E}(\phi^n)$, there exists an $S>0$ such that the scheme \eqref{pre-CS} is stable for any $\Delta \xi_i>0$ in the sense that
\begin{align}\label{stable-pre}
\overline{\phi_{i+1}^p} = \overline{\phi^n},\quad \Nunicont{\phi_{i+1}^p} \le \mathcal{C}(\phi^n),\quad \mathcal{E}(\phi_{i+1}^p) \le \mathcal{E}(\phi_{i}^p),\quad i = 1,\dots, M-1.
\end{align}
By the definition of the operator $\mathcal{I}$ and using integration by parts, we have
\begin{align*}
\int_{\Omega} \int_{\xi_i}^{\xi_{i+1}} \mathcal{I}(G(\phi^p))(t) \dmea{t} \,\mathrm{d}\Bx = 0.
\end{align*}
Taking the $\Ltwo$ inner product of \eqref{cor-CS} with 1 and combining the above identity with \eqref{stable-pre} yields
\begin{align}
\overline{\phi_{i+1}^c} = \overline{\phi^n},\quad i = 1,\dots,M-1,
\end{align}
which implies that the SDC method combined with the convex splitting method preserves the mass exactly.
\subsection{Adaptive correction strategy.}
The SDC$_M^K$ algorithm solves the correction equation $K(M-1)$ times per step.
Too many corrections are computationally expensive, whereas too few may yield unsatisfactory accuracy,
so accuracy and stability must be balanced against cost.
Inspired by the adaptive restart technique \cite{odo15} for accelerated gradient schemes,
we provide an adaptive-SDC (ASDC$_M^K$) algorithm that makes a computationally cheap observation and decides whether to apply a correction based on it.
\begin{algorithm}[htb!]\label{alg2}
\caption{$\phi^{n+1}=$ASDC$_M^K(\phi^n)$}
\begin{algorithmic}
\STATE {Set: $\phi_1^p\leftarrow\phi^n$, $\phi_1^c \leftarrow \phi_1^p$.}
\FOR{{$i = 1:M-1$}}
\STATE Solve prediction equation \eqref{pre-CS} to get ${\phi_{i+1}^p}$.
\ENDFOR
\STATE {$ k \leftarrow 1 $}
\FOR{$j=1:K$}
\FOR{$i=k:M-1$}
\STATE Solve correction equation \eqref{cor-CS} to get $\phi^c_{i+1}$.
\IF{$\SP{\delta_\phi\mathcal{E}^c(\phi^c_{i+1})-\delta_\phi\mathcal{E}^e({\phi^c_i})}{\phi_{i+1}^c-\phi_i^c}<0$}
\STATE {$ \phi_{i}^c \leftarrow \phi_{i+1}^c $ }
\STATE {$ k \leftarrow i $ }
\ENDIF
\ENDFOR
\STATE {Update the approximate solution: $\phi^p \leftarrow \phi^c$.}
\ENDFOR
\STATE {Return: $\phi^{n+1}\leftarrow\phi^c_M$.}
\end{algorithmic}
\end{algorithm}
Note that we use a convex-concave argument to control the number of corrections.
Because the variable index $k$ satisfies $k < {M}$, we do not change the number of corrections applied to the approximation at the $M$-th Gauss-Lobatto point.
If the convex-concave condition $\SP{\delta_\phi\mathcal{E}^c(\phi^c_{i+1})-\delta_\phi\mathcal{E}^e({\phi^c_i})}{\phi_{i+1}^c-\phi_i^c}<0$ holds,
the convex-concave argument \eqref{CS-argue} yields $\mathcal{E}(\phi_{i+1}^c) \le \mathcal{E}(\phi_{i}^c).$
From \eqref{stable-pre}, we find that
\begin{align*}
\mathcal{E}(\phi_{i+1}^c) \le \mathcal{E}(\phi^n).
\end{align*}
By the same argument in Lemma \ref{lm2.1}, we have the following $\Unicont$-bound of $\phi_{i+1}^c$:
\begin{align*}
\Nunicont{\phi_{i+1}^c} \le \mathcal{C}(\phi^n).
\end{align*}
We can then reset the $i$-th approximation to $\phi_{i+1}^c$ and adjust the starting position $k$ for the next correction loop.
In other words, the energy-decreasing property is preserved and fewer corrections are required for stable solutions.
Otherwise, more corrections are applied until a stable solution is observed.
Therefore, we obtain a mass-conservative and energy-stable spectral deferred correction method for the AC-LB equation.
\section{Spatial discretization}\label{sec4}
The purpose of this section is to construct Fourier spectral method for the LB model.
Without loss of generality, we take the rectangular domain $ \Omega = \prod_{j=1}^d[0,L_j] \subset \mathbb{R}^d$.
For a given positive even integer $ N $, we define the discretized grid-point set as
\[ \mathcal{P} = \SET{ {\boldsymbol{x}}:=(x_1,x_2,\dots,x_d)\in\mathbb{R}^d: x_j = nL_j/N, n=1,2,\dots,N, j=1,2,\dots,d }.\]
Define the discrete Fourier spectral space
\[ \mathcal{K} = \SET{ {\boldsymbol{k}}:=(k_1,k_2,\dots,k_d)\in\mathbb{Z}^d: \SN{k_j} \le N/2, j=1,2,\dots,d}. \]
Next, we consider the discrete Fourier transform (DFT) of $\phi$ at the grid points ${\boldsymbol{x}}\in\mathcal{P}$ and its inverse as
\begin{align}\label{DFT}
\wh{\phi}({\boldsymbol{k}}) = \sum_{{\boldsymbol{x}}\in\mathcal{P}} \phi({\boldsymbol{x}}) e^{-i2\pi({\boldsymbol{B}}{\boldsymbol{k}})\cdot{\boldsymbol{x}}},\quad
\phi({\boldsymbol{x}}) = \frac{1}{N^d} \sum_{{\boldsymbol{k}}\in\mathcal{K}} \wh{\phi}({\boldsymbol{k}}) e^{i2\pi ({\boldsymbol{B}}{\boldsymbol{k}})\cdot{\boldsymbol{x}}},
\end{align}
where ${\boldsymbol{B}} = \diag{L_1^{-1},L_2^{-1},\dots,L_d^{-1}}\in\mathbb{R}^{d \times d}$ is the scaling matrix.
We assume that $ \phi({\boldsymbol{x}}) $ is sufficiently smooth.
Differentiation then acts on the DFT coefficients as follows:
\begin{align*}
\frac{\partial}{\partial x_j} \phi({\boldsymbol{x}}) = \frac{1}{N^d} \sum_{{\boldsymbol{k}}\in\mathcal{K}} \left(i \frac{2\pi k_j}{L_j}\right) \wh{\phi}({\boldsymbol{k}}) e^{i2\pi({\boldsymbol{B}}{\boldsymbol{k}})\cdot{\boldsymbol{x}}} .
\end{align*}
Therefore, we can represent the Laplacian in the discrete Fourier space as follows:
\begin{align*}
\Delta \phi({\boldsymbol{x}}) = \frac{1}{N^d} \sum_{{\boldsymbol{k}}\in\mathcal{K}} \left(-4\pi^2\SN{{\boldsymbol{B}}{\boldsymbol{k}}}^2\right) \wh{\phi}({\boldsymbol{k}}) e^{i2\pi({\boldsymbol{B}}{\boldsymbol{k}})\cdot{\boldsymbol{x}}},
\end{align*}
where $ \SN{{\boldsymbol{B}}{\boldsymbol{k}}}:=({\boldsymbol{B}}{\boldsymbol{k}}\cdot{\boldsymbol{B}}{\boldsymbol{k}})^{1/2} $ is the Euclidean norm of $ {\boldsymbol{B}}{\boldsymbol{k}} \in \mathbb{R}^d$.
Thus, we transform the convex splitting scheme \eqref{CS} into the discrete Fourier space as follows:
For the given data $\phi^n({\boldsymbol{x}}),\forall{\boldsymbol{x}}\in\mathcal{P}$,
find $\wh{\phi^{n+1}}({\boldsymbol{k}}),\forall{\boldsymbol{k}}\in\mathcal{K}$ such that
\begin{align*}
\wh{\phi^{n+1}}({\boldsymbol{k}}) = \frac{\wh{\phi^n}({\boldsymbol{k}})+\Delta{t}\left(\wh{\delta_\phi\mathcal{E}^e({\phi^n})}({\boldsymbol{k}}) + \wh{\beta(\phi^n)}({\boldsymbol{k}}) \right)}{1+\Delta{t}\left(S-\alpha+(1-4\pi^2\SN{{\boldsymbol{B}}{\boldsymbol{k}}}^2)^2\right)}.
\end{align*}
Then, the updated numerical solution $ \phi^{n+1}({\boldsymbol{x}}),\forall{\boldsymbol{x}}\in\mathcal{P} $ can be computed using the inverse DFT in \eqref{DFT}.
Similarly, approximations \eqref{pre-CS} and \eqref{cor-CS} can also be transformed into discrete Fourier space.
Discretizing the energy functional of the LB model \eqref{LB-energy} with spectral derivatives yields
\begin{align}\label{DFT-energy}
\mathcal{E}_\mathcal{P}(\phi) = \frac{\SN{\Omega}}{N^{2d}} \sum_{{\boldsymbol{k}}\in\mathcal{K}} \frac{(1-4\pi^2\SN{{\boldsymbol{B}}{\boldsymbol{k}}}^2)^2-\alpha}{2} \SN{\wh{\phi}({\boldsymbol{k}})}^2
+ \frac{\SN{\Omega}}{N^d} \sum_{{\boldsymbol{x}}\in\mathcal{P}} \left(\frac{1}{4!} \phi({\boldsymbol{x}})^4 - \frac{\gamma}{3!} \phi({\boldsymbol{x}})^3\right),
\end{align}
since the following discrete Parseval’s identity can be applied,
\begin{align}\label{parseval}
\sum_{{\boldsymbol{x}}\in\mathcal{P}} \phi({\boldsymbol{x}})^2 = \frac{1}{N^d} \sum_{{\boldsymbol{k}}\in\mathcal{K}} \SN{\wh{\phi}({\boldsymbol{k}})}^2.
\end{align}
The computation of the DFT and spectral derivatives can be accomplished via the fast Fourier transform (FFT), greatly reducing the number of floating-point operations.
\section{Numerical experiments} \label{sec5}
Now we carry out the numerical experiments for the LB model to demonstrate the performance of the proposed method.
All experiments were performed on a workstation with a 2.90 GHz CPU (Intel Xeon Gold 6326, 16 processors).
All codes were written in MATLAB without parallel implementation.
\begin{example}\label{ex1}
We first examine the convergence of the SDC method.
The problem setting is as follows:
\begin{equation*}
\Omega=\left[0,{16\pi}/{\sqrt{3}}\right]\times\left[0,8\pi\right],\quad \alpha=0.15,\quad \gamma=0.25,\quad S=0.
\end{equation*}
To verify the convergence rate, we add a source term to the AC-LB equation such that the exact solution is
\begin{equation*}
\phi(t,x,y)=e^{-2t}\sin(\sqrt{3}x)\sin(y).
\end{equation*}
We use the DFT in Section \ref{sec4} for the spatial discretization and SDC$_M^K$ algorithm with Legendre-Gauss-Lobatto quadrature points for the time discretization.
We compute the approximation $\phi_\mathcal{P}$ on a spatial grid $\mathcal{P}$ with $N=512$ and take a coarse step size $\Delta{t}=0.05$.
Table \ref{tab1} shows the $ \Ltwo $ norm error and $ \Unicont $ norm error at $T=4$ with $M=4$ and $K=1,2,3,4$ respectively.
The expected convergence rates with respect to the step size are observed for the SDC method.
\begin{table}[!htbp]
\centering
\caption{Convergence rates against the number of corrections $K$ in SDC$_M^K$ algorithm.}
\label{tab1}
\vspace{10pt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Method & Error & $\Delta{t}=$0.05 & $\Delta{t}/2$ & $\Delta{t}/4$ & $\Delta{t}/8$ \\ \hline
\multirow{4}{*}{SDC$_4^1$} & $\NLtwo{\phi_\mathcal{P} - \phi}$ & 1.0241e-05 & 3.0872e-06 & 8.5213e-07 & 2.2422e-07 \\
& Order & -- & 1.7299 & 1.8572 & 1.9261 \\
& $\Nunicont{\phi_\mathcal{P}-\phi}$ & 7.5838e-07 & 2.2863e-07 & 6.3105e-08 & 1.6605e-08 \\
& Order & -- & 1.7299 & 1.8572 & 1.9261 \\ \hline
\multirow{4}{*}{SDC$_4^2$} & $\NLtwo{\phi_\mathcal{P} - \phi}$ & 5.9386e-07 & 1.0933e-07 & 1.6597e-08 & 2.2880e-09 \\
& Order & -- & 2.4414 & 2.7197 & 2.8587 \\
& $\Nunicont{\phi_\mathcal{P}-\phi}$ & 4.3979e-08 & 8.0967e-09 & 1.2291e-09 & 1.6944e-10 \\
& Order & -- & 2.4414 & 2.7197 & 2.8587 \\ \hline
\multirow{4}{*}{SDC$_4^3$} & $\NLtwo{\phi_\mathcal{P} - \phi}$ & 3.2695e-08 & 3.8652e-09 & 3.2659e-10 & 2.3687e-11 \\
& Order & -- & 3.0804 & 3.5650 & 3.7853 \\
& $\Nunicont{\phi_\mathcal{P}-\phi}$ & 2.4213e-09 & 2.8624e-10 & 2.4187e-11 & 1.7548e-12 \\
& Order & -- & 3.0804 & 3.5650 & 3.7853 \\ \hline
\multirow{4}{*}{SDC$_4^4$} & $\NLtwo{\phi_\mathcal{P} - \phi}$ & 1.5947e-09 & 1.3396e-10 & 6.4103e-12 & 2.4596e-13 \\
& Order & -- & 3.5734 & 4.3853 & 4.7039 \\
& $\Nunicont{\phi_\mathcal{P}-\phi}$ & 1.1812e-10 & 9.9231e-12 & 4.7539e-13 & 1.8834e-14 \\
& Order & -- & 3.5733 & 4.3836 & 4.6577 \\ \hline
\end{tabular}
\end{table}
\end{example}
\begin{figure}[!htbp]
\centering
\subfigure[Lamellar phase]{
\includegraphics[width=2.5in]{LB2D_lamellar}}
\subfigure[Cylindrical phase]{
\includegraphics[width=2.5in]{LB2D_cylinder}}
\caption{The two-dimensional periodic crystals in LB model with $ \alpha = 0.15, \gamma = 0.25 $.}
\label{fig1}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfigure[Energy difference during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_lammer_SDC4_dt_1_legendre_energy.eps}}
\subfigure[Average mass during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_lammer_SDC4_dt_1_legendre_mass.eps}}
\subfigure[Energy difference during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_lammer_SDC4_dt_1_chebyshev_energy.eps}}
\subfigure[Average mass during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_lammer_SDC4_dt_1_chebyshev_mass.eps}}
\caption{Lamellar phase: Numerical behavior of SDC$_M^K$ algorithm with $\Delta{t}=1$;
\textit{First row:} Legendre-Gauss-Lobatto quadrature nodes; \textit{Second row:} Chebyshev-Gauss-Lobatto quadrature nodes.}
\label{fig2}
\end{figure}
\begin{example}\label{ex2}
This example is to verify the mass conservative and energy stable properties of the SDC method.
We use two-dimensional periodic crystals of lamellar phase and cylindrical phase,
as shown in Figure \ref{fig1}, to demonstrate the performance of the SDC$ _M^K $ algorithm.
The computational domain is $ \Omega=\left[0,{16\pi}/{\sqrt{3}} \right]\times\left[0,8\pi\right] \subset \mathbb{R}^2$.
The parameters of LB model \eqref{LB-energy} are set as $ \alpha=0.15,\gamma=0.25 $.
The initial approximation to these phases, which can be found in \cite{shi03}, is chosen as
\begin{align*}
\phi^0({\boldsymbol{x}}) = 2 a_1 \cos({\boldsymbol{G}}_1\cdot{\boldsymbol{x}}) + 2 a_2 \left(\cos({\boldsymbol{G}}_2\cdot{\boldsymbol{x}}) + \cos({\boldsymbol{G}}_3\cdot{\boldsymbol{x}})\right)\quad \forall{\boldsymbol{x}}\in\mathcal{P},
\end{align*}
where the ${\boldsymbol{G}}_i$ are given by
\begin{align*}
{\boldsymbol{G}}_1=(0,1),\quad {\boldsymbol{G}}_2 = (-\sqrt{3}/2,1/2),\quad {\boldsymbol{G}}_3 = (-\sqrt{3}/2,-1/2).
\end{align*}
The lamellar phase is described by $a_1=\sqrt{2\alpha},a_2=0$,
and the cylindrical phase is described by $a_1= a_2=(\gamma+\sqrt{\gamma^2+10\alpha})/5$.
Note that the initial phases satisfy $ \overline{\phi^0}=0 $.
The spatial grid $\mathcal{P}$ is fixed with $N=512$.
We take $M=4$ and change $K=1,2,3,4$ to implement SDC$_M^K$ algorithm.
We choose the positive constant $S=2$ to allow a fairly large step size $\Delta{t}=1$.
To clearly show the energy dissipation, we calculate a reference energy $\mathcal{E}_s$, chosen as the
invariant energy value as the grid size converges to $0$.
In our numerical tests, the reference energy is accurate to $14$ significant decimal digits.
The reference energy values of lamellar and cylindrical phases are $-16.532074091947$ and $-17.324103376071$ respectively.
Figure \ref{fig1} (a) and (b) show the stationary solutions of the lamellar and cylindrical phases respectively.
\end{example}
\begin{figure}[!htbp]
\centering
\subfigure[Energy difference during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_cylinder_SDC4_dt_1_legendre_energy.eps}}
\subfigure[Average mass during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_cylinder_SDC4_dt_1_legendre_mass.eps}}
\subfigure[Energy difference during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_cylinder_SDC4_dt_1_chebyshev_energy.eps}}
\subfigure[Average mass during iterations]{
\includegraphics[width=2.5in]{./figures/LB2D_cylinder_SDC4_dt_1_chebyshev_mass.eps}}
\caption{Cylindrical phase: Numerical behavior of SDC$_M^K$ algorithm with $\Delta{t}=1$;
\textit{First row:} Legendre-Gauss-Lobatto quadrature nodes; \textit{Second row:} Chebyshev-Gauss-Lobatto quadrature nodes.}
\label{fig3}
\end{figure}
Figure \ref{fig2} gives iteration process of SDC$_M^K$ algorithm for the lamellar phase,
including the energy difference and the average mass during iterations.
We do numerical experiments when Legendre-Gauss-Lobatto,
and Chebyshev-Gauss-Lobatto quadrature nodes are used to construct the polynomial interpolant \eqref{lag-inter}.
It is observed that SDC$_M^K$ algorithm has energy dissipative and mass conservative properties no matter what kind of quadrature points we use.
The numerical behavior of SDC$_M^K$ algorithm for the cylindrical phase can be found in Figure \ref{fig3}.
We find again that our proposed approaches are mass conservative and energy stable.
For the SDC$_M^K$ algorithm, an obvious observation is that the rate of descent of the energy difference does not change as the correction number $K$ increases.
Therefore, the balance between efficiency and accuracy should be considered when using the SDC method.
In addition, we find that the mass curve is slightly perturbed during the iteration.
\begin{example}\label{ex3}
The purpose of this example is to investigate the performance of ASDC$_M^K$ algorithm by the two-dimensional periodic crystals in Example \ref{ex2}.
The setting for this example is the same as that of Example \ref{ex2}, except for the initial correction number $K$.
For computing a stationary solution, we stop the iteration when the following criterion is met:
\begin{equation}\label{stop}
\mathcal{E}_\mathcal{P} - \mathcal{E}_s \le \varepsilon,
\end{equation}
where $\varepsilon>0$.
We set $\varepsilon=10^{-12}$ and change $K=2,3,4,5$ to implement ASDC$_M^K$ algorithm.
To compare the efficiency with SDC$_M^K$ algorithm,
let $ N_\text{correction} $ denote the average number of times the correction equation \eqref{cor-CS} was solved.
For computing the lamellar phase,
Table \ref{tab2} shows the average correction number $ N_\text{correction} $, the total iteration number $ N_\text{iteration} $, and the final energy difference $\mathcal{E}_p-\mathcal{E}_s$.
Clearly, for the ASDC$_M^K$ algorithm, both $ N_\text{iteration} $ and $ N_\text{correction} $ are smaller than for the SDC$_M^K$ algorithm,
whether Legendre-Gauss-Lobatto or Chebyshev-Gauss-Lobatto quadrature points are used.
Moreover, $ N_\text{iteration} $ of the ASDC$_M^K$ algorithm decreases as the initial correction number $K$ increases,
while $ N_\text{iteration} $ of SDC$_M^K$ increases.
In particular, when $K=5$, the ASDC$_M^K$ algorithm needs less than half the iterations of the SDC$_M^K$ algorithm.
Therefore, ASDC$_M^K$ algorithm is more efficient than SDC$_M^K$ algorithm.
Again, as shown in Table \ref{tab3}, ASDC$_M^K$ algorithm demonstrates a better performance over SDC$_M^K$ algorithm in computing the cylindrical phase.
\begin{table}[!htbp]
\centering
\caption{Numerical results of SDC$_M^K$ and ASDC$_M^K$ algorithms with $\Delta{t}=1,M=4$ for computing the lamellar phase.}
\label{tab2}
\vspace{10pt}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{$K$} & \multicolumn{4}{c|}{$N_\text{iteration}(N_\text{correction})$} \\ \cline{2-5}
& \multicolumn{2}{c|}{Legendre-Gauss-Lobatto} & \multicolumn{2}{c|}{Chebyshev-Gauss-Lobatto} \\ \cline{2-5}
& SDC$_M^K$ & ASDC$_M^K$ & SDC$_M^K$ & ASDC$_M^K$ \\ \hline
2 & 37 (6) & 32 (5) & 37 (6) & 33 (5) \\ \hline
3 & 36 (9) & 27 (7) & 36 (9) & 28 (7) \\ \hline
4 & \;\;35 (12) & 23 (9) & \;\;35 (12) & 25 (9) \\ \hline
5 & \;\;41 (15) & \;\;\textbf{21} (11) & \;\;45 (15) & \;\;\textbf{22} (11) \\ \hline
\multirow{3}{*}{$K$}& \multicolumn{4}{c|}{${\mathcal{E}_\mathcal{P}-\mathcal{E}_s}$} \\ \cline{2-5}
& \multicolumn{2}{c|}{Legendre-Gauss-Lobatto} & \multicolumn{2}{c|}{Chebyshev-Gauss-Lobatto} \\ \cline{2-5}
& SDC$_M^K$ & ASDC$_M^K$ & SDC$_M^K$ & ASDC$_M^K$ \\ \hline
2 & 9.0239e-13 & 9.5923e-14 & 7.5673e-13 & 7.1054e-14 \\ \hline
3 & 6.6080e-13 & 9.5923e-14 & 8.2778e-13 & 8.1712e-14 \\ \hline
4 & 7.5673e-13 & 5.6843e-14 & 9.8055e-13 & 9.9476e-14 \\ \hline
5 & 8.1357e-13 & 6.0396e-14 & 8.2423e-13 & 7.1054e-14 \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\centering
\caption{Numerical results of SDC$_M^K$ and ASDC$_M^K$ algorithms with $\Delta{t}=1,M=4$ for computing the cylindrical phase.}
\label{tab3}
\vspace{10pt}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{$K$} & \multicolumn{4}{c|}{$N_\text{iteration}(N_\text{correction})$} \\ \cline{2-5}
& \multicolumn{2}{c|}{Legendre-Gauss-Lobatto} & \multicolumn{2}{c|}{Chebyshev-Gauss-Lobatto} \\ \cline{2-5}
& SDC$_M^K$ & ASDC$_M^K$ & SDC$_M^K$ & ASDC$_M^K$ \\ \hline
2 & 37 (6) & 32 (5) & 37 (6) & 33 (5) \\ \hline
3 & 36 (9) & 27 (7) & 36 (9) & 28 (7) \\ \hline
4 & \;\;35 (12) & 23 (9) & \;\;35 (12) & 24 (9) \\ \hline
5 & \;\;38 (15) & \;\;\textbf{21} (11) & \;\;41 (15) & \;\;\textbf{21} (11) \\ \hline
\multirow{3}{*}{$K$}& \multicolumn{4}{c|}{${\mathcal{E}_\mathcal{P}-\mathcal{E}_s}$} \\ \cline{2-5}
& \multicolumn{2}{c|}{Legendre-Gauss-Lobatto} & \multicolumn{2}{c|}{Chebyshev-Gauss-Lobatto} \\ \cline{2-5}
& SDC$_M^K$ & ASDC$_M^K$ & SDC$_M^K$ & ASDC$_M^K$ \\ \hline
2 & 8.1712e-14 & 9.5923e-14 & 8.5265e-14 & 5.3291e-14 \\ \hline
3 & 6.7502e-14 & 9.5923e-14 & 7.8160e-14 & 7.4607e-14 \\ \hline
4 & 4.9738e-14 & 5.6943e-14 & 4.9738e-14 & 7.8160e-14 \\ \hline
5 & 7.4607e-14 & 6.0396e-14 & 8.5265e-14 & 8.8818e-14 \\ \hline
\end{tabular}
\end{table}
\end{example}
\begin{example}\label{ex4}
In this example, we use three-dimensional periodic crystals of the A15 phase, the body-centered cubic (BCC) phase, the face-centered-cubic (FCC) phase, and the double gyroid (GYR) phase,
to test the robustness of the parameters in LB model \eqref{LB-energy}.
The A15 phase is a cubic phase with two nonequivalent types of lattice sites: one whose atoms sit at the edges and center of the conventional unit cell, and one whose atoms are placed along lines subdividing the cubic faces into two congruent parts \cite{sin72}.
The BCC phase has one lattice point in the center of the unit cell in addition to the eight corner points \cite{wol85}.
The FCC phase has lattice points on the faces of the cube, each giving exactly one-half contribution, in addition to the corner lattice points, giving a total of 4 lattice points per unit cell \cite{wol85}.
The GYR phase is a continuous network periodic phase \cite{shi99}.
Those phases are shown in Figure \ref{fig4}.
\begin{figure}[!htbp]
\centering
\subfigure[A15]{
\includegraphics[width=2.5in]{LB3D_A15}}
\subfigure[BCC]{
\includegraphics[width=2.5in]{LB3D_BCC}}
\subfigure[FCC]{
\includegraphics[width=2.5in]{LB3D_FCC}}
\subfigure[GYR]{
\includegraphics[width=2.5in]{LB3D_GYR}}
\caption{The three-dimensional periodic crystals in the LB model.}
\label{fig4}
\end{figure}
The computational domains in this example are defined by the unit cell $ \Omega=[0,a]^3 \subset\mathbb{R}^3$.
Given a spatial grid $ \mathcal{P} $, the initial values are chosen as
\begin{align*}
\phi^0({\boldsymbol{x}}) = \sum_{{\boldsymbol{k}}\in\Lambda^0} \wh{\phi}({\boldsymbol{k}}) e^{i2\pi{\boldsymbol{k}}\cdot{\boldsymbol{x}}/a}\quad\forall{\boldsymbol{x}}\in\mathcal{P},
\end{align*}
where the initial lattice point set $ \Lambda^0\subset\mathcal{K} $ collects the only wave vectors at which the Fourier coefficients are nonzero.
The corresponding $ \Lambda^0 $ of those phases and the parameters in the LB model can be found in Table \ref{tab4}.
For more details, please refer to \cite{jk13}.
\begin{table}[!htbp]
\centering
\caption{Initial lattice point sets and parameters for three-dimensional periodic crystals.
$^o$ denotes that the sign of the Fourier coefficient is opposite.}
\label{tab4}
\vspace{10pt}
\begin{tabular}{|c|c|c|c|c|}
\hline
Phase & $ \Lambda^0 $ & $ a $ & $ \alpha $ & $ \gamma $ \\ \hline
\multirow{2}{*}{A15} & $ (\pm2,\pm1, 0),(0,\pm2,1),(\pm1,0,2), $ & \multirow{2}{*}{$2\sqrt{5}\pi $ } & \multirow{2}{*}{0} & \multirow{2}{*}{1.23} \\
& $ (\pm1,\pm2,0)^o,(\pm2,0,1)^o,(0,\pm1,2)^o $ & & & \\ \hline
BCC & $ (\pm1,\pm1,0),(\pm1,0,\pm1),(0,\pm1,\pm1) $ & $ 2\sqrt{2}\pi$ & 0 & 1.23 \\ \hline
FCC & $ (\pm1,\pm1,1) $ & $ 2\sqrt{3}\pi$ & 0 & 2 \\ \hline
\multirow{4}{*}{GYR} & $ (1,-2,1),(1,2,-1),(-2,1,1), $ & \multirow{4}{*}{$ 2\sqrt{6}\pi $ } & \multirow{4}{*}{0.47 } & \multirow{4}{*}{0.46} \\
& $ (1,1, -2),(-1,1,2),(2,-1,1), $ & & & \\
& $ (1,2,1)^o,(-1,2,1)^o,(2,1,1)^o, $ & & & \\
& $ (1,1,2)^o,(1,-1,2)^o,(-2,1,1)^o $ & & & \\ \hline
\end{tabular}
\end{table}
We use Legendre-Gauss-Lobatto quadrature points and vary $ \Delta{t} $ to implement the ASDC$_4^4$ algorithm.
The spatial grid $\mathcal{P}$ with $N=128$ is employed to compute the three-dimensional periodic crystals.
The reference energies in Table \ref{tab5} are obtained on the spatial grid $\mathcal{P}$ with $N=256$.
Table \ref{tab5} shows the numerical results of ASDC$_4^4$ algorithm with the different step sizes.
More precisely, $ N_\text{iteration} $ and CPU time decrease as the time step $ \Delta{t} $ increases.
As Figure \ref{fig5} shows, the ASDC$_M^K$ algorithm is also a monotone (energy-decreasing) method for computing the three-dimensional periodic crystals.
\begin{table}[!htbp]
\centering
\caption{Numerical results of ASDC$_4^4$ algorithm for computing the three-dimensional periodic crystals.}
\label{tab5}
\vspace{10pt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Phase & $ \Delta{t} $ & $ N_\text{iteration}$ & CPU Time (s) & $ \mathcal{E}_\mathcal{P}-\mathcal{E}_s $ & $ \mathcal{E}_s $ \\ \hline
\multirow{4}{*}{A15} & 0.1 & 643 & 2970.6 & 9.2371e-13 & \multirow{4}{*}{-57.4752889933902} \\ \cline{2-5}
& 0.5 & 141 & 729.49 & 9.1660e-13 & \\ \cline{2-5}
& 1 & 80 & 471.26 & 9.1660e-13 & \\ \cline{2-5}
& 2 & \textbf{51} & \textbf{225.93} & 8.0291e-13 & \\ \hline
\multirow{4}{*}{BCC} & 0.1 & 212 & 1011.6 & 9.3614e-13 & \multirow{4}{*}{-14.4932738221454} \\ \cline{2-5}
& 0.5 & 46 & 317.59 & 9.8410e-13 & \\ \cline{2-5}
& 1 & 26 & 169.17 & 5.7376e-13 & \\ \cline{2-5}
& 2 & \textbf{16} & \textbf{90.63} & 6.0574e-13 & \\ \hline
\multirow{4}{*}{FCC} & 0.1 & 118 & 518.26 & 7.3896e-13 & \multirow{4}{*}{-209.6360921245683} \\ \cline{2-5}
& 0.5 & 26 & 179.81 & 8.5265e-13 & \\ \cline{2-5}
& 1 & 15 & 67.62 & 2.8422e-13 & \\ \cline{2-5}
& 2 & \textbf{9} & \textbf{61.79} & 2.2737e-13 & \\ \hline
\multirow{5}{*}{GYR} & 0.1 & 435 & 1998.8 & 9.3792e-13 & \multirow{5}{*}{-162.0665004168457 } \\ \cline{2-5}
& 0.5 & 94 & 514.6 & 9.9476e-13 & \\ \cline{2-5}
& 1 & 53 & 329.04 & 6.5370e-13 & \\ \cline{2-5}
& 2 & 33 & 235.91 & 3.9790e-13 & \\ \cline{2-5}
& 3 & \textbf{28} & \textbf{144} & 6.2528e-13 & \\ \hline
\end{tabular}
\end{table}
\begin{figure}[!htbp]
\centering
\subfigure[A15]{
\includegraphics[width=2.5in]{LB3D_A15_ASDC44_legendre_energy}}
\subfigure[BCC]{
\includegraphics[width=2.5in]{LB3D_BCC_ASDC44_legendre_energy}}
\subfigure[FCC]{
\includegraphics[width=2.5in]{LB3D_FCC_ASDC44_legendre_energy}}
\subfigure[GYR]{
\includegraphics[width=2.5in]{LB3D_GYR_ASDC44_legendre_energy}}
\caption{Energy difference over CPU Time of ASDC$_4^4$ algorithm for computing the three-dimensional periodic crystals.}
\label{fig5}
\end{figure}
\end{example}
\section{Conclusions}
This paper proposes an efficient numerical scheme to compute periodic crystals in the Landau--Brazovskii model by combining the SDC method with the linear convex splitting technique.
Our algorithms can retain the energy dissipation and mass conservation properties during iteration.
An adaptive correction strategy is further implemented to reduce the computational cost and improve the energy stability.
Numerical experiments for two and three dimensional periodic crystals are presented to show the efficiency and accuracy of the proposed method.
In the future, we will apply the SDC algorithm to the high-index saddle dynamics for efficient construction of solution landscape \cite{YinPRL,YinSCM,YinSISC}, which provides a pathway map including both stable minima and unstable saddle points. We may extend the developed numerical approach to the Lifshitz--Petrich (LP) model, which is widely used to compute quasiperiodic structures, such as the bi-frequency excited Faraday wave \cite{lif97}, and the phase transitions between crystals and quasicrystals \cite{yin21}.
\section*{Acknowledgments}
This work was supported by the National Key Research and Development Program of China 2021YFF1200500
and the National Natural Science Foundation of China 12225102, 12050002, and 12226316.
\section{Feasibility and Challenges: Empirical Observations}
\label{sec:questions}
Although the idea of concurrent ranging is extremely simple and can be
implemented straightforwardly on the DW1000, several questions must be
answered to ascertain its practical feasibility. We discuss them
next, providing answers based on empirical observations.
\subsection{Experimental Setup}
\label{sec:ewsn:setup}
All our experiments employ the Decawave\xspace EVB1000 development
platform~\cite{evb1000}, equipped with the DW1000 transceiver, an
STM32F105 ARM Cortex-M3 MCU, and a PCB antenna.
\fakeparagraph{UWB Radio Configuration} We use a preamble length of
128~symbols and a data rate of 6.8~Mbps. Further, we use channel~4,
whose wider bandwidth provides better resolution in determining
the timing of the direct path and therefore better ranging estimates.
\fakeparagraph{Firmware} We program the behavior of initiator and responder
nodes directly atop the Decawave\xspace libraries, without any OS layer, by adapting
the demo code provided by Decawave\xspace to our goals. Specifically, we provide
support to log, via the USB interface,
\begin{inparaenum}[\itshape i)]
\item the packets transmitted and received,
\item the ranging measurements, and
\item the CIR measured upon packet reception.
\end{inparaenum}
\fakeparagraph{Environment} All our experiments are carried out in a
university building, in a long corridor whose width
is 2.37~m. This is arguably a challenging environment due to the
presence of strong multipath, but also very realistic to test the
feasibility of concurrent ranging\xspace, given that one of the main applications of
UWB is for localization in indoor environments.
\fakeparagraph{Network Configuration}
In all experiments, one initiator node and one or more responders
are arranged in a line, placed exactly in the middle of the
aforementioned corridor. This one-dimensional configuration
allows us to clearly and intuitively relate the temporal
displacements of the received signals to the spatial displacement of
their source nodes. For instance, Figure~\ref{fig:capture-exp-deployment}
shows the network used in~\ref{sec:obs:prr}; we change
the arrangement and number of nodes depending
on the question under investigation.
\subsection{Locally Compensating for TX Scheduling Uncertainty}
\label{sec:crng-tosn:txfix}
The DW1000 transceiver can schedule a TX in the future with a
precision of $4 / (499.2\times10^6)\approx 8$~ns, much less than the
signal timestamping resolution. SS-TWR\xspace responders circumvent this lack
of precision by embedding the necessary TX/RX timestamps in their
\textsc{response}\xspace. This is not possible in concurrent ranging\xspace, and an uncertainty
$\epsilon$ from a uniform distribution $U[-8, 0)$~ns directly affects
concurrent transmissions from responders. The empirical observations
in~\ref{sec:questions} show that mitigating this TX uncertainty is
crucial to enhance accuracy. This section illustrates a
technique, inspired by Decawave\xspace engineers during private
communication, that achieves this goal effectively.
A key observation is that both the accurate desired TX timestamp and
the inaccurate one actually used by the radio are \emph{known} at the
responder. Indeed, the DW1000 obtains the latter from the former by
simply discarding its 9 least significant bits.
Therefore, given that the responder knows beforehand the TX timing
error that will occur, it can \emph{compensate} for it while preparing
its \textsc{response}\xspace.
We achieve this by fine-tuning the frequency of the oscillator, an
operation that can be performed entirely in firmware and
\emph{locally} to the responder. In the technique described here, the
compensation relies on the ability of the DW1000 to \emph{trim} its
crystal oscillator frequency~\cite[p.~197]{dw1000-manual-v218} during
operation. The parameter accessible via firmware is the radio
\emph{trim index}, whose value determines the correction currently
applied to the crystal oscillator. By modifying the index by a given
negative or positive amount (\emph{trim step}) we can
increase or decrease the oscillator frequency (i.e.,\ clock
speed) and compensate for the aforementioned known TX timing
error. Interestingly, this technique can also be exploited to reduce
the relative \emph{carrier frequency offset} (CFO) between transmitter and
receiver, with the effect of increasing receiver sensitivity,
enhancing CIR estimation, and ultimately improving ranging
accuracy and precision.
\fakeparagraph{Trim Step Characterization} To design a compensation
strategy, it is necessary to first characterize the impact of a trim
step. To this end, we ran several experiments with a transmitter and a
set of 6~receivers, to assess the impact on the CFO. The transmitter
is initially configured with a trim index of~0, the minimum allowed,
and sends a packet every 10~ms. After each TX, a trim step of~+1 is
applied, gradually increasing the index until~31, the maximum allowed,
after which the minimum index of 0 is re-applied; increasing the trim index
reduces the crystal frequency. Receivers do not apply a trim step;
they use a fixed index of~15. For each received packet, we read the
CFO between the transmitter and the corresponding receiver from the
DW1000, which stores in the \texttt{DRX\_CONF} register the receiver
carrier integrator value~\cite[p.~80--81]{dw1000-sw-api} measured
during RX, and convert this value first to Hz and then to
parts-per-million (ppm).
Figure~\ref{fig:ppm-offset} shows the CFO measured for each receiver
as a function of the transmitter trim index, over $\geq$100,000
packets. If the CFO is positive (negative), the receiver local clock
is slower (faster) than the transmitter
clock~\cite[p.~81]{dw1000-sw-api}. All receivers exhibit a
quasi-linear trend, albeit with a different offset. Across many
experiments, we found that the average curve slope is
$\approx -1.48$~ppm per unit trim step. This knowledge is crucial to
properly trim the clock of the responders to match the frequency of
the initiator and compensate for TX uncertainty, as described next.
\begin{figure}[!t]
\centering
\includegraphics{figs/cfo.png}
\caption{CFO between a transmitter and a set of six receivers,
as a function of the transmitter trim index.}
\label{fig:ppm-offset}
\end{figure}
\fakeparagraph{CFO Adjustment} After receiving the broadcast \textsc{poll}\xspace,
responders obtain the CFO from their carrier integrator and trim their
clock to better match the frequency of the initiator. For instance, if
a given responder measures a CFO of $+3$~ppm, this means that its
clock is slower than the initiator, and its frequency must be
increased by applying a trim step of
$-\frac{\SI{3}{ppm}}{\SI{1.48}{ppm}} \approx -2$. Repeating this
adjustment limits at $\leq 1$~ppm the absolute value of the CFO
between initiator and responders, reducing the impact of clock drift
and improving RX sensitivity. Moreover, it also improves CIR
estimation, enabling the initiator to better discern the signals from
multiple, concurrent responders and estimate their ToA\xspace more
accurately. Finally, this technique can be used to \emph{detune} the
clock (i.e.,\ alter its speed), key to compensating for TX uncertainty.
\fakeparagraph{TX Uncertainty Compensation} The DW1000 measures TX and
RX times at the \texttt{RMARKER}\xspace (\ref{sec:dw1000}) with \mbox{40-bit}
timestamps in radio time units of $\approx 15.65$~ps. However,
when scheduling transmissions, it ignores the lowest 9~bits of the
desired TX timestamp.
The \emph{known} 9 bits ignored directly inform us of the TX error
$\epsilon \in [-8, 0)$~ns to be compensated for. The compensation
occurs by \emph{temporarily} altering the clock frequency via the trim
index only for a given \emph{detuning interval}, at the end of which
the previous index is restored. Based on the known error $\epsilon$
and the predefined detuning interval $\ensuremath{T_\mathit{det}}\xspace$, we can easily
compute the trim step
\mbox{$\ensuremath{\mathcal{S}}\xspace = \lfloor \frac{\epsilon}{\SI{1.48}{ppm}\times
\ensuremath{T_\mathit{det}}\xspace}\rceil$} to be applied to compensate for the TX
scheduling error. For instance, assume that a responder knows that
its TX will be anticipated by an error $\epsilon=-5$~ns; its clock
must be slowed down. Assuming a configured detuning interval
$\ensuremath{T_\mathit{det}}\xspace=\SI{400}{\micro\second}$, a trim step
\mbox{
$\ensuremath{\mathcal{S}}\xspace = \lfloor \frac{\SI{5}{ns}}{\SI{1.48}{ppm} \times
\SI{400}{\micro\second}} \rceil = \lfloor 8.45 \rceil = 8$ } must
be applied through the entire interval \ensuremath{T_\mathit{det}}\xspace. The rounding,
necessary to map the result on the available integer values of the trim index,
translates into a residual TX scheduling error. This can be easily
removed, after the trim step \ensuremath{\mathcal{S}}\xspace is determined, by recomputing
the detuning interval as
\mbox{$\ensuremath{T_\mathit{det}}\xspace = \frac{\epsilon}{\SI{1.48}{ppm} \times \ensuremath{\mathcal{S}}\xspace}$},
equal to \SI{422.3}{\micro\second} in our example. Indeed, the
duration of \ensuremath{T_\mathit{det}}\xspace can be easily controlled in firmware and with a
significantly higher resolution than the trim index, yielding a more
accurate compensation.
\fakeparagraph{Implementation} In our prototype, we determine the trim
step \ensuremath{\mathcal{S}}\xspace, adjust the CFO, and compensate the TX scheduling error
in a single operation. While detuning the clock, we set the data
payload and carry out the other operations necessary before TX,
followed by an idle loop until the detuning interval is over. We then
restore the trim index to the value determined during CFO adjustment
and configure the DW1000 to transmit the \textsc{response}\xspace at the desired
timestamp. To compensate for an error $\epsilon \in [-8, 0)$~ns
without a very large trim step (i.e.,\ abrupt changes of the trim
index) we set a default detuning interval
$\ensuremath{T_\mathit{det}}\xspace=\SI{560}{\micro\second}$ and increase the ranging response
delay to $\ensuremath{T_\mathit{RESP}}\xspace = \SI{800}{\micro\second}$. This value is higher than
the one ($\ensuremath{T_\mathit{RESP}}\xspace=\SI{330}{\micro\second}$) used in~\ref{sec:questions}
and, in general, would yield worse SS-TWR\xspace ranging accuracy due to a
larger clock drift (\ref{sec:toa}). Nevertheless, here we directly
limit the impact of the clock drift with the CFO adjustment, precisely
scheduling transmissions with $<1$~ns errors, as shown
in~\ref{sec:crng-tosn:exp-tx-comp}; therefore, in practice,
the minor increase in \ensuremath{T_\mathit{RESP}}\xspace bears little to no impact.
\section{Introduction}
\label{sec:intro}
A new generation of localization systems is rapidly gaining interest,
fueled by countless applications~\cite{benini2013imu,follow-me-drone,
museum-tracking, mattaboni1987autonomous, guo2016ultra, irobot-lawnmower,
fontana2003commercialization} for which global navigation satellite
systems do not provide sufficient reliability, accuracy, or update
rate. These so-called real-time location systems (RTLS) rely on
several technologies, including optical~\cite{optitrack, vicon},
ultrasonic~\cite{cricket, alps, ultrasonic-tdoa},
inertial~\cite{benini2013imu}, and radio frequency (RF). Among these,
RF is predominant, largely driven by the opportunity of exploiting
ubiquitous wireless communication technologies like WiFi and Bluetooth
also towards localization. Localization systems based on these radios
enjoy, in principle, wide applicability; however, they typically
achieve meter-level accuracy, enough for several use cases but
insufficient for many others.
Nevertheless, another breed of RF-based localization recently
re-emerged from a decade-long oblivion: ultra-wideband (UWB).
The recent availability of tiny, low-cost, and low-power
UWB transceivers has renewed interest in this technology,
whose peculiarity is to enable accurate distance
estimation (\emph{ranging}) along with high-rate communication. These
characteristics are rapidly placing UWB in a dominant position in the
RTLS arena, and defining it as a key enabler for several Internet of
Things (IoT) and consumer scenarios. UWB is currently not as
widespread as WiFi or BLE, but the fact that the latest
Apple iPhone~11 is equipped with a UWB transceiver suggests
that the trend may change dramatically in the near future.
The Decawave\xspace DW1000 transceiver~\cite{dw1000-datasheet} has been at
the forefront of this technological advancement, as it provides
centimeter-level ranging accuracy with a tiny form factor and a power
consumption an order of magnitude lower than its bulky UWB
predecessors. On the other hand, this consumption is still an order of
magnitude higher than other IoT low-power wireless radios;
further, its impact is exacerbated when ranging---the key
asset of UWB---is exploited, due to the long packet exchanges required.
\begin{figure}[!t]
\centering
\subfloat[Single-sided two-way ranging (SS-TWR\xspace).\label{fig:two-way-ranging}]{
\includegraphics[width=.49\textwidth, valign=t]{figs/sstwr-tosn.png}}
\hfill
\subfloat[Concurrent ranging.\label{fig:crng}]{
\includegraphics[width=.49\textwidth, valign=t]{figs/crng-tosn.png}}
\caption{In SS-TWR\xspace, the initiator transmits a unicast
\textsc{poll}\xspace to which a single responder replies with a \textsc{response}\xspace. In
concurrent ranging, the initiator transmits a \emph{broadcast}
\textsc{poll}\xspace to which responders in range reply concurrently.}
\label{fig:sstwr-crng-cmp}
\end{figure}
\fakeparagraph{UWB Two-way Ranging (TWR)}
Figure~\ref{fig:two-way-ranging} illustrates
single-sided two-way ranging (SS-TWR), the simplest
scheme, part of the IEEE~802.15.4-2011
standard~\cite{std154} and described further in~\ref{sec:background}.
The \emph{initiator}\footnote{The IEEE
standard uses \emph{originator} instead of \emph{initiator}; we
follow the terminology used by the Decawave\xspace documentation.} requests
a ranging measurement via a \textsc{poll}\xspace packet; the responder, after a known
delay \ensuremath{T_\mathit{RESP}}\xspace, replies with a \textsc{response}\xspace packet containing the timestamps
marking the receipt of \textsc{poll}\xspace and the sending of \textsc{response}\xspace. This
information, along with the dual timestamps marking the sending of
\textsc{poll}\xspace and the receipt of \textsc{response}\xspace measured locally at the initiator,
enable the latter to accurately compute the
time~of~flight $\tau$
and estimate the distance from the responder as $d=\ensuremath{\tau}\xspace \times c$,
where $c$ is the speed of light in air.
Two-way ranging, as the name suggests, involves a \emph{pairwise}
exchange between the initiator and \emph{every} responder. In other
words, if the initiator must estimate its distance w.r.t.\ $N$ nodes,
$2\times N$ packets are required. The situation is even worse with
other schemes that improve accuracy by acquiring more timestamps via
additional packet transmissions, e.g.,\ up to $4\times N$ in
popular double-sided two-way ranging
(DS-TWR\xspace) schemes~\cite{dstwr, dw-dstwr-patent, dw-dstwr}.
\fakeparagraph{UWB Concurrent Ranging}
We propose a novel approach to ranging in which,
instead of \emph{separating} the pairwise exchanges necessary
to ranging, these are \emph{overlapping} in time (Figure~\ref{fig:crng}).
Its mechanics are extremely simple: when the single
(broadcast) \textsc{poll}\xspace sent by the initiator is received,
each responder sends back its \textsc{response}\xspace as if it were alone,
effectively yielding concurrent replies to the initiator.
This \emph{concurrent ranging} technique enables the initiator
to \emph{range with $N$ nodes at once by using only 2~packets},
i.e.,\ as if it were ranging against a single responder.
This significantly reduces latency and energy consumption,
increasing scalability and battery lifetime,
but causes the concurrent signals from different
responders to ``fuse'' in the communication channel, potentially
yielding a collision at the initiator.
This is precisely where the peculiarities of UWB communications
come into play. UWB transmissions rely on very
short ($\leq$2~ns) pulses, enabling very precise timestamping of
incoming radio signals. This is what makes UWB intrinsically more
amenable to accurate ranging than narrowband, whose reliance on
carrier waves that are more ``spread in time'' induces physical bounds
on the precision that can be attained in establishing a time reference
for an incoming signal.
Moreover, it is what enables our novel idea of
concurrent ranging. In narrowband, the fact that concurrent signals
are spread over time makes them very difficult to tell apart once
fused into a single signal. In practice, this is possible only if
detailed channel state information is available---usually not the case
on narrowband low-power radios, e.g.,\ the popular CC2420~\cite{cc2420}
and its recent descendants. In contrast, the reliance of UWB
on short pulses makes concurrent signals less likely to collide and combine,
therefore enabling, under certain conditions discussed later,
their identification if channel impulse response (CIR) information is available.
Interestingly, the DW1000
\begin{inparaenum}[\itshape i)]
\item bases its own operation precisely on the processing of the CIR, and
\item makes the CIR available also to the application layer (\ref{sec:background}).
\end{inparaenum}
\fakeparagraph{Goals and Contributions}
As discussed in~\ref{sec:crng}, a strawman implementation of concurrent ranging\xspace is
very simple. Therefore, using our prototype deployed in a small-scale
setup, we begin by investigating the \emph{feasibility} of concurrent ranging\xspace
(\ref{sec:questions}), given the inevitable degradation in accuracy
w.r.t.\ isolated ranging caused by the interference among the signals of
responders, in turn determined by their relative placement. Our
results, originally published in~\cite{crng},
offer empirical evidence that it is indeed possible to derive accurate
ranging information from UWB signals overlapping in time.
On the other hand, these results also point out the significant
\emph{challenges} that must be overcome to transform concurrent ranging\xspace from an
enticing opportunity to a practical system. Solving these
challenges is the specific goal of this paper w.r.t.\ the original
one~\cite{crng} where, for the first time in the literature, we have
introduced the concept and shown the feasibility of concurrent
ranging.
Among these challenges, a key one is the \emph{limited precision of scheduling
transmissions} in commercial UWB transceivers. For instance, the
popular Decawave\xspace DW1000 we use in this work can timestamp packet receptions (RX)
with a precision of $\approx$15~ps, but can schedule
transmissions (TX) with a precision of only $\approx$8~ns. This is
not an issue in conventional ranging schemes like SS-TWR\xspace; as mentioned
above, the responder embeds the necessary timestamps in the \textsc{response}\xspace
payload, allowing the initiator to correct for the limited TX
granularity. However, in concurrent ranging\xspace only one \textsc{response}\xspace is decoded,
if any; the timing information of the others must be
derived solely from the appearance of their corresponding signal
paths in the CIR. This process is greatly affected by the TX uncertainty,
which significantly reduces accuracy and consequently
hampers the practical adoption of concurrent ranging\xspace.
In this paper,
we tackle and solve this key challenge with a mechanism that
significantly improves the TX scheduling precision via a \emph{local}
compensation (\ref{sec:reloaded}). Indeed, both the precise and
imprecise information about TX scheduling are available at the
responder; the problem arises because the radio discards the less
significant 9~bits of the precise 40-bit timestamp. Therefore, the
responder can correct for the \emph{known} TX timing error when
preparing its \textsc{response}\xspace. We achieve this by fine-tuning the frequency
of the crystal oscillator entirely in firmware and locally to the
responder, i.e.,\ without additional hardware or external out-of-band
infrastructure. Purposely, the technique also compensates for the
oscillator frequency offset between initiator and responders,
significantly reducing the impact of clock drift, the main cause of
ranging error in SS-TWR\xspace.
Nevertheless, precisely scheduling transmissions
is not the only challenge of concurrent ranging\xspace. A full-fledged,
practically usable system also requires tackling
\begin{inparaenum}[\itshape i)]
\item the reliable identification of the concurrent responders, and
\item the precise estimation of the time of arrival (ToA\xspace) of their signals;
\end{inparaenum}
both are complicated by the intrinsic mutual interference of
concurrent transmissions. In this paper, we build upon techniques developed
by us~\cite{chorus} and other groups~\cite{crng-graz,snaploc} since we
first proposed concurrent ranging in~\cite{crng}. Nevertheless, we
\emph{adapt and improve} these techniques (\ref{sec:reloaded}) to
accommodate the specifics of concurrent ranging in general and
the TX scheduling compensation technique in particular.
Interestingly, our novel design significantly increases
not only the accuracy but also the \emph{reliability} of
concurrent ranging\xspace w.r.t.\ our original strawman design in~\cite{crng}.
The latter relied heavily
\begin{inparaenum}[\itshape i)]
\item on the successful RX of at least one \textsc{response}\xspace,
containing the necessary timestamps for
accurate time-of-flight calculation, and
\item on the ToA\xspace estimation of this \textsc{response}\xspace performed by the DW1000,
used to determine the difference in the signal ToA\xspace
(and therefore distance) to the other responders.
\end{inparaenum}
However, the fusion of concurrent signals may cause
the decoding of the \textsc{response}\xspace to be matched to the
wrong responder or fail altogether, yielding grossly incorrect
estimates or none at all, respectively. Thanks to the ability to
precisely schedule the TX of \textsc{response}\xspace packets, we
\begin{inparaenum}[\itshape i)]
\item remove the need to
decode at least one of them, and
\item enable distance estimation \emph{solely} based on the CIR.
\end{inparaenum}
We can actually \emph{remove the payload entirely} from \textsc{response}\xspace packets,
further reducing latency and energy consumption.
We evaluate concurrent ranging\xspace extensively (\ref{sec:crng-tosn:eval}). We first
show via dedicated experiments that our prototype can schedule TX with
$<1$~ns error. We then analyze the \emph{raw} positioning information
obtained by concurrent ranging\xspace, to assess its quality without the help of
additional filtering techniques~\cite{ukf-julier, ukf} that, as
shown in~\cite{atlas-tdoa, guo2016ultra, 7374232, ethz-one-way}, would
nonetheless improve performance. Our experiments in two environments,
both with static positions and mobile trajectories, confirm that the
near-perfect TX scheduling precision we achieve, along with our dedicated
techniques to accurately extract distance information from the CIR,
enable reliable decimeter-level ranging and positioning
accuracy---same as conventional schemes for UWB but at a fraction of
the network and energy cost.
These results, embodied in our prototype implementation, confirm that
UWB concurrent ranging is a concrete option, immediately applicable to
real-world applications where it strikes new trade-offs w.r.t.\ accuracy,
latency, energy, and scalability, offering a valid (and often more
competitive) alternative to established conventional methods,
as discussed in~\ref{sec:discussion}.
Finally, in~\ref{sec:relwork} we place concurrent ranging in the
context of related work, before ending in~\ref{sec:crng-tosn:conclusions}
with brief concluding remarks.
\section{Background}
\label{sec:background}
We concisely summarize the salient features of UWB radios in general
(\ref{sec:uwb}) and how they are made available by the popular DW1000
transceiver we use in this work (\ref{sec:dw1000}). Moreover, we
illustrate the SS-TWR\xspace technique we build upon,
and show how it is used to perform localization (\ref{sec:toa}).
\subsection{Ultra-wideband in the IEEE~802.15.4\xspace PHY Layer}
\label{sec:uwb}
UWB communications have been originally used for military applications due to
their very large bandwidth and interference resilience to mainstream
narrowband radios. In 2002, the FCC approved the unlicensed use of UWB under
strict power spectral masks, boosting a new wave of research from industry and
academia. Nonetheless, this research mainly focused on high data rate
communications, and remained largely based on theory and simulation, as most
UWB radios available then were bulky, energy-hungry, and expensive, hindering
the widespread adoption of UWB. In 2007, the {IEEE~802.15.4\xspace}a standard amendment
included a UWB PHY layer based on impulse radio (IR-UWB)~\cite{impulse-radio},
aimed at providing accurate ranging with low-power consumption.
A few years ago, Decawave\xspace released a standard-compliant IR-UWB radio,
the DW1000, saving UWB from a decade-long oblivion, and taking
by storm the field of real-time location systems (RTLS).
\begin{figure}[!t]
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics{figs/ewsn/monopulse}
\caption{UWB pulse.}
\label{fig:uwb-pulse}
\end{minipage}
\begin{minipage}[t]{0.51\linewidth}
\centering
\includegraphics{figs/ewsn/dest-bw}
\caption{Distance resolution vs.\ bandwidth.}
\label{fig:dest-bw}
\end{minipage}
\vspace{-2mm}
\end{figure}
\fakeparagraph{Impulse Radio}
According to the FCC, UWB signals are
characterized by a bandwidth $\geq 500$~MHz or a fractional bandwidth
$\geq 20\%$ during transmission. To achieve such a large bandwidth,
modern UWB systems are based on IR-UWB, using pulses
(Figure~\ref{fig:uwb-pulse}) very narrow in time ($\leq 2$~ns).
This reduces the power spectral density, the interference
produced to other wireless technologies, and the impact of multipath
components (MPC). Further, it enhances the ability of UWB signals to
propagate through obstacles and walls~\cite{uwb-idea} and simplifies
transceiver design. The large bandwidth also provides excellent time
resolution (Figure~\ref{fig:dest-bw}), enabling UWB receivers to
precisely estimate the time of arrival (ToA\xspace) of a signal and
distinguish the direct path from MPC. Time-hopping
codes~\cite{ir-uwb-maccess} enable multiple access to the
medium. Overall, these features make \mbox{IR-UWB} ideal for low-power
ranging and localization as well as communication.
\fakeparagraph{IEEE~802.15.4\xspace UWB PHY Layer} The IEEE~802.15.4\xspace-2011
standard~\cite{std154} specifies a PHY layer based on IR-UWB.
The highest frequency at which a compliant device shall emit
pulses is 499.2~MHz (fundamental frequency), yielding a
standard chip duration of $\approx2$~ns. A UWB frame is composed of
\begin{inparaenum}[\itshape i)]
\item a synchronization header (SHR) and
\item a data portion.
\end{inparaenum}
The SHR is encoded in single pulses and includes a preamble for
synchronization and the start frame delimiter (SFD), which delimits
the end of the SHR and the beginning of the data portion.
Instead, the data portion exploits a combination of burst
position modulation (BPM) and binary phase-shift keying (BPSK), and
includes a physical header (PHR) and the data payload.
The duration of the preamble is configurable and depends on
the number of repetitions of a predefined symbol, whose structure
is determined by the preamble code. Preamble codes also define the
pseudo-random sequence used for time-hopping in the transmission of
the data part. The standard defines preamble codes of $31$ and $127$
elements, which are then interleaved with zeros according to a
spreading factor. This yields a (mean) \emph{pulse repetition
frequency} (\ensuremath{\mathit{PRF}}\xspace) of $16$~MHz or $64$~MHz. Preamble codes and \ensuremath{\mathit{PRFs}}\xspace
can be exploited to configure non-interfering links within the same RF
channel~\cite{uwb-ctx-fire}.
\subsection{Decawave\xspace DW1000}
\label{sec:dw1000}
The Decawave\xspace DW1000~\cite{dw1000-datasheet} is a commercially
available low-power low-cost UWB transceiver compliant with IEEE~802.15.4\xspace,
for which it supports frequency channels 1--4 in the low band and 5, 7
in the high band, and data rates of $110$~kbps, $850$~kbps, and
$6.8$~Mbps. Channels 4 and~7 have a larger $900$~MHz bandwidth, while
the others are limited to $499.2$~MHz.
\fakeparagraph{Channel Impulse Response (CIR)} The perfect periodic
autocorrelation of the preamble code sequence enables coherent
receivers to determine the CIR~\cite{dw1000-manual-v218}, which provides
information about the multipath propagation characteristics of the
wireless channel between a transmitter and a receiver. The CIR allows
UWB radios to distinguish the signal leading edge,
commonly called\footnote{Hereafter,
we use the terms first path and direct path interchangeably.}
\emph{direct} or \emph{first} path,
from MPC and accurately estimate the ToA\xspace of the signal.
In this paper, we exploit the information available
in the CIR to perform these operations on \emph{several signals
transmitted concurrently}.
The DW1000 measures the CIR upon preamble reception with a sampling
period \mbox{$T_s = 1.0016$~ns}. The CIR is stored in a large internal
buffer of 4096B accessible by the firmware developer. The time span
of the CIR is the duration of a preamble symbol: 992~samples for a
16~MHz \ensuremath{\mathit{PRF}}\xspace or 1016 for a 64~MHz \ensuremath{\mathit{PRF}}\xspace. Each sample is a complex number
$a_k + jb_k$ whose real and imaginary parts are 16-bit signed
integers. The amplitude $A_k$ and phase $\theta_k$ at each time delay
$t_k$ are given by $A_k = \sqrt{\smash[b]{a_k^2 + b_k^2}}$ and
$\theta_k = \arctan{\frac{b_k}{a_k}}$. The DW1000 measures the CIR
even when RX errors occur, therefore offering signal timing
information even when a packet (e.g.,\ a \textsc{response}\xspace) cannot be
successfully decoded.
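
For illustration, the conversion from raw accumulator samples to
amplitude and phase can be sketched in a few lines of Python; the
interleaved little-endian buffer layout and all names below are
simplifying assumptions of ours, not the exact DW1000 driver API.
\begin{verbatim}
import numpy as np

def decode_cir(raw_bytes):
    """Decode a raw CIR buffer into complex samples, amplitude, phase.

    Assumes interleaved 16-bit signed little-endian integers
    (real, imaginary), one pair per CIR sample.
    """
    raw = np.frombuffer(raw_bytes, dtype="<i2")
    cir = raw[0::2].astype(np.float64) + 1j * raw[1::2]
    amplitude = np.abs(cir)   # A_k = sqrt(a_k^2 + b_k^2)
    phase = np.angle(cir)     # theta_k, four-quadrant arctan(b_k/a_k)
    return cir, amplitude, phase
\end{verbatim}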
\fakeparagraph{TX/RX Timestamps} The TX and RX timestamps enabling ranging
are measured in a packet at the ranging marker (\texttt{RMARKER}\xspace)~\cite{dw1000-manual-v218},
which marks the first pulse of the PHR after the SFD (\ref{sec:uwb}).
These timestamps are measured with a very high time resolution
in radio units of $\approx\SI{15.65}{\pico\second}$.
The DW1000 first makes a coarse RX timestamp estimation,
then adjusts it based on
\begin{inparaenum}[\itshape i)]
\item the RX antenna delay, and
\item the first path in the CIR estimated by a proprietary
internal leading edge detection (LDE) algorithm.
\end{inparaenum}
The CIR index that LDE determines to be the first path
(\texttt{FP\_INDEX}\xspace) is stored together with the RX timestamp in the
\texttt{RX\_TIME} register. LDE detects the first path as the
first sampled amplitude that goes above a dynamic threshold based on
\begin{inparaenum}[\itshape i)]
\item the noise standard deviation \ensuremath{\sigma_n}\xspace and
\item the noise peak value.
\end{inparaenum}
Similar to the CIR, the RX signal timestamp is measured despite RX errors,
unless there is a rare PHR error~\cite[p. 97]{dw1000-manual-v218}.
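
As a back-of-the-envelope aid, the following sketch converts
timestamps expressed in these radio units into seconds and meters;
the unit value follows from $1/(128 \times 499.2~\mathrm{MHz})$, and
all names are ours.
\begin{verbatim}
# One radio time unit: 1 / (128 * 499.2 MHz), i.e., ~15.65 ps.
RADIO_UNIT_S = 1.0 / (128 * 499.2e6)
SPEED_OF_LIGHT = 299702547.0  # m/s in air (approximate value)

def radio_units_to_seconds(ticks):
    return ticks * RADIO_UNIT_S

def radio_units_to_meters(ticks):
    # 1 ns of time-of-flight error maps to ~30 cm of distance.
    return radio_units_to_seconds(ticks) * SPEED_OF_LIGHT
\end{verbatim}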
\fakeparagraph{Delayed Transmissions} The DW1000 offers the capability to
schedule transmissions at a specified time in the
future~\cite[p. 20]{dw1000-manual-v218}, corresponding to the
\texttt{RMARKER}\xspace. To this end, the DW1000 internally computes the time at
which to begin the preamble transmission, considering also the TX
antenna delay~\cite{dw1000-antenna-delay}. This makes the TX
timestamp predictable, which is key for ranging.
\begin{table}[!tb]
\caption{Current consumption comparison of DW1000 vs.\ TI CC2650 BLE
SoC~\cite{cc2650-datasheet} and Intel 5300 WiFi
card~\cite{wifi-power}. Note that the CC2650 includes a 32-bit
ARM Cortex-M3 processor and the Intel~5300 can support multiple
antennas; further, consumption depends on radio configuration.}
\label{tab:current-consumption}
\begin{tabular}{l c c c}
\toprule
& \textbf{DW1000} & \textbf{TI CC2650~\cite{cc2650-datasheet}}& \textbf{Intel 5300~\cite{wifi-power}}\\
\textbf{State} & 802.15.4a & BLE~4.2 \& 802.15.4 & 802.11~a/b/g/n\\
\midrule
Deep Sleep & 50~\si{\nano\ampere} & 100--150~\si{\nano\ampere} & N/A\\
Sleep & 1~\si{\micro\ampere} & 1~\si{\micro\ampere} & 30.3~mA\\
Idle & 12--18~mA & 550~\si{\micro\ampere} & 248~mA\\
TX & 35--85~mA & 6.1--9.1~mA & 387--636~mA\\
RX & 57--126~mA & 5.9--6.1~mA & 248--484~mA\\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Power Consumption}
An important aspect of the DW1000 is its
low power consumption w.r.t.\ previous UWB
transceivers (e.g.,\ \cite{timedomain-pulson400}).
Table~\ref{tab:current-consumption} compares the current consumption of
the DW1000 against other commonly-used technologies (BLE and WiFi) for
localization. The DW1000 consumes significantly less than the Intel
5300~\cite{wifi-power}, which provides channel state information
(CSI). However, it consumes much more than low-power
widespread technologies such as BLE or
IEEE~802.15.4\xspace~narrowband~\cite{cc2650-datasheet}. Hence, to ensure a long
battery lifetime of UWB devices it is essential to reduce the
radio activity, while retaining the accuracy and update rate of
ranging and localization required by applications.
\subsection{Time-of-Arrival (ToA) Ranging and Localization}
\label{sec:toa}
In ToA\xspace-based methods, distance is estimated by precisely measuring
RX and TX timestamps of packets exchanged between nodes. In this section,
we describe the popular SS-TWR\xspace ranging technique (\ref{sec:soa-sstwr})
we extend and build upon in this paper,
and show how distance estimates from known positions can be used to
determine the position of a target (\ref{sec:soa:toa-loc}).
\subsubsection{Single-sided Two-way Ranging (SS-TWR\xspace)}
\label{sec:soa-sstwr}
In SS-TWR\xspace, part of the IEEE~802.15.4\xspace standard~\cite{std154}, the initiator
transmits a unicast \textsc{poll}\xspace packet to the responder, storing the TX
timestamp $t_1$ (Figure~\ref{fig:two-way-ranging}). The responder
replies back with a \textsc{response}\xspace packet after a given response delay
\ensuremath{T_\mathit{RESP}}\xspace. Based on the corresponding RX timestamp $t_4$, the initiator can
compute the round trip time $\ensuremath{T_\mathit{RTT}}\xspace = t_4 - t_1 = 2\tau +
\ensuremath{T_\mathit{RESP}}\xspace$. However, to cope with the limited TX scheduling precision of
commercial UWB radios, the \textsc{response}\xspace payload includes the RX timestamp
$t_2$ of the \textsc{poll}\xspace and the TX timestamp $t_3$ of the \textsc{response}\xspace,
allowing the initiator to precisely measure the actual response delay
$\ensuremath{T_\mathit{RESP}}\xspace = t_3 - t_2$. The time of flight $\tau$ can be then computed as
\begin{equation*}\label{eq:sstwr-tof}
\tau = \frac{\ensuremath{T_\mathit{RTT}}\xspace - \ensuremath{T_\mathit{RESP}}\xspace}{2} = \frac{(t_4 - t_1) - (t_3 - t_2)}{2}
\end{equation*}
and the distance between the two nodes estimated as
$d = \tau \times c$, where $c$ is the speed of light in air.
SS-TWR\xspace is simple, yet provides accurate distance estimation for many
applications. The main source of error is the clock drift between
initiator and responder, each running an internal oscillator with an
offset w.r.t.\ the expected nominal frequency~\cite{dw-errors},
causing the actual time of flight measured by the initiator to be
\begin{equation*}
\hat{\tau} = \frac{\ensuremath{T_\mathit{RTT}}\xspace(1+e_I) - \ensuremath{T_\mathit{RESP}}\xspace(1+e_R)}{2}
\end{equation*}
where $e_I$ and $e_R$ are the crystal offsets of initiator and
responder, respectively. After some derivations, and by observing that
$\ensuremath{T_\mathit{RESP}}\xspace \gg 2\tau$, we can approximate the error
as~\cite{dstwr,dw-errors}
\begin{equation*}\label{eq:sstwr-drift}
\hat{\tau} - \tau \approx \frac{1}{2} \ensuremath{T_\mathit{RESP}}\xspace(e_I - e_R)
\end{equation*}
Therefore, to reduce the ranging error of SS-TWR\xspace one should
\begin{inparaenum}[\itshape i)]
\item compensate for the drift, and
\item minimize \ensuremath{T_\mathit{RESP}}\xspace, as the error grows linearly with it.
\end{inparaenum}
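
To make the arithmetic above concrete, the following minimal sketch
(names are ours) computes the time of flight and the drift-induced
error just derived.
\begin{verbatim}
SPEED_OF_LIGHT = 299702547.0  # m/s in air (approximate value)

def sstwr_distance(t1, t2, t3, t4):
    """Distance from the four SS-TWR timestamps (seconds)."""
    t_rtt = t4 - t1    # round-trip time measured by the initiator
    t_resp = t3 - t2   # response delay measured by the responder
    tof = (t_rtt - t_resp) / 2.0
    return tof * SPEED_OF_LIGHT

def sstwr_drift_error(t_resp, e_i, e_r):
    """Approximate ToF error induced by crystal offsets e_I, e_R."""
    return 0.5 * t_resp * (e_i - e_r)

# Example: T_RESP = 800 us and a 10 ppm relative offset yield a
# 4 ns ToF error, i.e., ~1.2 m of ranging error.
assert abs(sstwr_drift_error(800e-6, 10e-6, 0.0) - 4e-9) < 1e-15
\end{verbatim}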
\subsubsection{Position Estimation}
\label{sec:soa:toa-loc}
The estimated distance $\hat{d_i}$ to each of the $N$ responders can be
used to determine the unknown initiator position $\mathbf{p}$, provided the
responder positions are known. In two-dimensional space, the
Euclidean distance $d_i$ to responder \resp{i} is defined by
\begin{equation}\label{eq:soa:dist-norm}
d_i = \norm{\mathbf{p} - \mathbf{p_i}} = \sqrt{(x - x_i)^2 + (y - y_i)^2}
\end{equation}
where $\mathbf{p_i} = [x_i, y_i]$ is the position of \resp{i},
$i \in [1, N]$. The geometric representation of
Eq.~\eqref{eq:soa:dist-norm} is a circle (a sphere in~3D) with radius
$d_i$ and center in $\mathbf{p_i}$. In the absence of noise, the
intersection of $N \geq 3$ circles yields the unique initiator
position $\mathbf{p}$. In practice, however, each distance estimate
$\hat{d_i} = d_i + n_i$ suffers from an additive zero-mean measurement
noise $n_i$. An estimate $\mathbf{\hat p}$ of the unknown initiator
position can be determined (in 2D) by minimizing the non-linear
least-squares (NLLS) problem
\begin{equation*}\label{eq:toa-solver-dist}
\mathbf{\hat p} = \argmin_{\mathbf{p}}
\sum_{i = 1}^{N}\left(\hat d_i - \sqrt{(x - x_i)^2 + (y - y_i)^2}\right)^2
\end{equation*}
In this paper, we solve the NLLS problem with state-of-the-art
methods, as our contribution is focused on ranging and not on the
computation of the position. Specifically, we employ an iterative
local search via a trust region reflective
algorithm~\cite{branch1999subspace}. This requires an initial
position estimate $\mathbf{p_0}$ that we set as the solution of a
linear least squares estimator that linearizes the system of equations
by applying the difference between any two of
them~\cite{source-loc-alg, toa-lls}.
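
As an illustration, the NLLS step can be sketched with an
off-the-shelf solver, e.g.,\ the trust region reflective
implementation in SciPy; this is a simplified 2D version, not our
actual codebase.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def localize(anchors, d_hat, p0):
    """NLLS position estimate from distances to known anchors.

    anchors: (N, 2) array of responder positions.
    d_hat:   (N,) array of estimated distances.
    p0:      initial guess, e.g., from a linearized LLS solution.
    """
    def residuals(p):
        return d_hat - np.linalg.norm(anchors - p, axis=1)

    # method="trf" selects the trust region reflective algorithm.
    return least_squares(residuals, p0, method="trf").x
\end{verbatim}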
\subsection{CIR Pre-processing}
\label{sec:crng-tosn:cir-proc}
We detail two techniques to reorder the CIR array and estimate the
signal noise standard deviation \ensuremath{\sigma_n}\xspace. These extend and
significantly enhance the techniques we originally proposed
in~\cite{chorus}, improving the robustness and accuracy
of the ToA\xspace estimation algorithms in~\ref{sec:crng-tosn:toa-est}.
\subsubsection{CIR Array Re-arrangement}
\label{sec:crng-tosn:cir-rearrangement}
In the conventional case of an isolated transmitter, the DW1000
arranges the CIR signal by placing the first path at \mbox{\texttt{FP\_INDEX}\xspace
$\approx750$} in the accumulator buffer (\ref{sec:dw1000}). In
concurrent ranging\xspace, one would expect the \texttt{FP\_INDEX}\xspace to similarly indicate the direct
path of the first responder \resp{1}, i.e.,\ the one with the shortest
time shift $\delta_1 = 0$. Unfortunately, this is not necessarily the
case, as the \texttt{FP\_INDEX}\xspace can be associated with the direct path of
\emph{any} of the involved responders
(Figure~\ref{fig:crng-tosn:cir-arrangement}).
Further, and worse, due to the TX time shifts $\delta_i$ we apply in
concurrent ranging\xspace, the paths associated to the later responders may be circularly
shifted at the beginning of the array, disrupting the implicit
temporal ordering at the core of
responder identification (\ref{sec:crng-tosn:resp-id}).
Therefore, before estimating the ToA\xspace of the concurrent signals, we must
\begin{inparaenum}[\itshape i)]
\item re-arrange the CIR array to match the order expected
from the assigned time shifts, and
\item correspondingly re-assign the index associated with the \texttt{FP\_INDEX}\xspace,
  whose timestamp is available in radio time units.
\end{inparaenum}
In~\cite{chorus} we addressed a similar problem by relying in part on
the responder ID contained in the one \textsc{response}\xspace payload (among
the several concurrent ones) actually decoded by the radio; the radio
usually places the first path of this responder at
\mbox{\texttt{FP\_INDEX}\xspace $\approx750$} in the
CIR. However, this technique requires successfully decoding a
\textsc{response}\xspace, which is unreliable, as we previously observed in~\ref{sec:questions}.
Here, we remove this dependency and enable a correct CIR re-arrangement
\emph{even in cases where the initiator is unable to successfully decode
any \textsc{response}\xspace}, significantly improving reliability.
\begin{figure}[!t]
\centering
\subfloat[Raw CIR array.\label{fig:crng-tosn:cir-arrangement-raw}]{
\includegraphics{figs/disi-t1-s41-raw.pdf}
}\\
\subfloat[Re-arranged CIR array.\label{fig:crng-tosn:cir-arrangement-sorted}]{
\includegraphics{figs/disi-t1-s41-sorted.pdf}
}
\caption{CIR re-arrangement. The DW1000 measured the \texttt{FP\_INDEX}\xspace as the
direct path of \resp{6} in the raw CIR
(Figure~\ref{fig:crng-tosn:cir-arrangement-raw}).
After finding the CIR sub-array with the lowest noise,
we re-arrange the CIR (Figure~\ref{fig:crng-tosn:cir-arrangement-sorted})
setting the response of \resp{1} at the beginning and the noise-only
sub-array at the end.}
\label{fig:crng-tosn:cir-arrangement}
\end{figure}
We achieve this goal by identifying the portion of
the CIR that contains \emph{only} noise, which appears in
between the peaks of the last and first responders. First,
we normalize the CIR w.r.t.\ its maximum amplitude sample and search
for the CIR sub-array of length $W$ with the lowest sum---the
aforementioned noise-only portion. Next, we determine the index at
which this noise sub-array begins (minimum noise index in
Figure~\ref{fig:crng-tosn:cir-arrangement}) and search for the next
sample index whose amplitude is above a threshold $\xi$. This latter
index is a rough estimate of the direct path of \resp{1},
the first expected responder. We then re-order the CIR array
by applying a circular shift, setting the $N$ responses
at the beginning of the array, followed by the noise-only portion at the end.
Finally, we re-assign the index corresponding to the original
\texttt{FP\_INDEX}\xspace measured by the DW1000 and whose radio timestamp is available.
We empirically found, by analyzing tens of thousands of CIR signals, that a
threshold $\xi \in [0.12, 0.2]$ yields an accurate CIR reordering.
Lower values may cause errors due to noise or MPC, while higher
values may disregard a weak first path from \resp{1}. The noise window
$W$ must be set based on the CIR time span, the time shifts $\delta_i$
applied, and the number $N$ of concurrent responders. Hereafter, we
set $\xi = 0.14$ and $W = 228$ samples with $N = 6$ responders and
$\ensuremath{T_\mathit{ID}}\xspace = 128$~ns.
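
The procedure can be summarized by the following simplified sketch,
with $W$ and $\xi$ as in the text; the actual implementation differs
in engineering details.
\begin{verbatim}
import numpy as np

def rearrange_shift(amplitude, W=228, xi=0.14):
    """Return the circular shift that re-arranges the CIR.

    amplitude: 1D array of CIR amplitudes.
    W:         length of the noise-only window (samples).
    xi:        normalized amplitude threshold for the first path.
    """
    a = amplitude / amplitude.max()  # normalize to [0, 1]
    # Sum over all circular windows of length W; the minimum-sum
    # window is the noise-only portion between the last and the
    # first response.
    ext = np.concatenate([a, a[:W - 1]])
    sums = np.convolve(ext, np.ones(W), mode="valid")
    noise_start = int(np.argmin(sums))
    # First sample above xi after the noise window: rough direct
    # path of R1 (the normalized maximum guarantees termination).
    idx = (noise_start + W) % len(a)
    while a[idx] < xi:
        idx = (idx + 1) % len(a)
    return idx  # apply with np.roll(amplitude, -idx)
\end{verbatim}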
\subsubsection{Estimating the Noise Standard Deviation}
\label{sec:crng-tosn:noise-std}
ToA\xspace estimation algorithms frequently rely on a threshold
derived from the noise standard deviation \ensuremath{\sigma_n}\xspace,
to detect the first path from noise and MPC.
The DW1000 estimates $\ensuremath{\sigma_n}\xspace^{DW}$ based on the measured
CIR~\cite{dw1000-manual-v218}. However,
in the presence of concurrent transmissions, the DW1000 sometimes
yields a significantly overestimated $\ensuremath{\sigma_n}\xspace^{DW}$,
likely because it considers the additional \textsc{response}\xspace signals as noise.
Therefore, we recompute our own estimate of \ensuremath{\sigma_n}\xspace
as the standard deviation of the last 128~samples of the re-arranged CIR
(Figure~\ref{fig:crng-tosn:cir-arrangement-sorted}). By design
(\ref{sec:crng-tosn:cir-rearrangement}) these samples belong to the
noise-only portion at the end of the re-arranged CIR, free from MPC from
responses; the noise estimate is therefore significantly more reliable
than the one computed by the DW1000, meant for non-concurrent
ranging.
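
In a sketch, assuming the re-arranged amplitude array from above, the
estimate and the detection threshold used below reduce to a few
lines; the factor 11 is the empirical value adopted in our
evaluation.
\begin{verbatim}
import numpy as np

def noise_threshold(rearranged, tail=128, factor=11):
    """sigma_n from the trailing noise-only samples, and the
    detection threshold T = factor * sigma_n used for ToA."""
    sigma_n = np.std(rearranged[-tail:])
    return sigma_n, factor * sigma_n
\end{verbatim}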
\begingroup
\setlength{\columnsep}{8pt}
\setlength{\intextsep}{4pt}
\begin{wrapfigure}{R}{5.6cm}
\centering
\includegraphics{figs/crng-noiseth-cdf.pdf}
\caption{Threshold comparison.}
\label{fig:std-noise-cdf}
\end{wrapfigure}
Figure~\ref{fig:std-noise-cdf} offers evidence of this last statement
by comparing the two techniques across the 9,000 signals
with $N = 6$ concurrent responders we use in~\ref{sec:crng-exp-static}
to evaluate the performance of concurrent ranging\xspace with the initiator placed
in 18 different static positions.
The chart shows the actual noise threshold
computed as $\mathcal{T} = 11 \times \ensuremath{\sigma_n}\xspace$, which we empirically
found to be a good compromise for ToA\xspace estimation
(\ref{sec:crng-tosn:toa-est}). Using our technique, $\mathcal{T}$\xspace
converges to a $\nth{99}$ percentile of $0.213$ over the normalized
CIR amplitude, while the default $\ensuremath{\sigma_n}\xspace^{DW}$ yields
$\nth{99} = 0.921$; this value would lead to discarding most of
the peaks from concurrent responders.
For instance, in Figure~\ref{fig:crng-tosn:cir-arrangement}
only 2 out of 6 direct paths would be detected with such a high threshold.
Across these 9,000 signals, using our estimated $\ensuremath{\sigma_n}\xspace$
instead of $\ensuremath{\sigma_n}\xspace^{DW}$ increases the ranging and localization
reliability of concurrent ranging by up to $16\%$ depending on the
ToA\xspace algorithm used, as we explain next.
\section{Discussion}
\label{sec:discussion}
The outcomes of our evaluation (\ref{sec:crng-tosn:eval}) across
several static positions and mobile trajectories in two indoor
environments prove that \emph{concurrent ranging reliably provides
distance and position estimates with decimeter-level accuracy and
high precision}. The results we presented confirm that concurrent
ranging achieves a performance akin to conventional schemes, and that
it satisfies the strict requirements of most applications, notably
including robot localization.
Nevertheless, \emph{concurrent ranging incurs only a small fraction of
the cost borne by conventional schemes}. SS-TWR\xspace requires $2\times N$
packets to measure the distance to $N$ nodes; concurrent ranging
achieves the same goal with a \emph{single} two-way exchange. At the
initiator, often a mobile, energy-bound node, only 2~packets need to
be TX/RX instead of $2\times N$, proportionally reducing energy
consumption and, dually, increasing lifetime. Overall, the ability to
perform ranging via shorter exchanges dramatically reduces channel
utilization and latency, therefore increasing scalability and
update rate. To put these claims in concrete terms, consider that, with the
(conservative) response delay $\ensuremath{T_\mathit{RESP}}\xspace = \SI{800}{\micro\second}$ we
used, concurrent ranging could provide a location update rate of
$\geq$1,000~Hz, either to a single initiator or shared among
several ones.
Actually achieving these update rates, however, requires better
hardware and software support than in our prototype. Currently we log
the CIR via USB/UART, as it is the only option with the off-the-shelf
Decawave\xspace EVB1000 boards we use. This choice simplifies our prototyping
and enables replication of our results by others, using the same
popular and easily available platform. However, it introduces
significant delays, reducing the location update rate to only
$\approx$8~Hz; this is appropriate for many applications but
insufficient in others requiring the tracking of fast-moving targets,
e.g.,\ drones. Nevertheless, this limitation is easily overcome by
production systems exploiting more powerful and/or dedicated
components, as in the case of smartphones.
Further, this is an issue only if the high update rate theoretically
available must be exploited by a single initiator. Otherwise, when
shared across several ones, our non-optimized prototype could provide its
8~samples per second to $\approx$125~nodes. This would require a
proper scheduling across initiators to avoid collisions, e.g., as
in~\cite{surepoint,talla-ipin}, and incur overhead, ultimately
reducing the final update rate of the system. On the other hand, the
potential for collisions is significantly reduced with our technique,
given that a single concurrent ranging exchange retrieves the
information accrued via $N$ conventional ones. Further, communicating
the schedule could itself exploit concurrent
transmissions~\cite{surepoint,glossy-uwb,uwb-ctx-fire}, opening the
intriguing possibility of merging scheduling and ranging into a single
concurrent exchange, abating the overhead of both procedures at once.
Similar issues arise in more dynamic scenarios where ranging is
performed against mobile nodes instead of fixed anchors, e.g., to
estimate distance between humans as in proxemics
applications~\cite{hidden-dimension, proxemic-interactions}. In cases
where the set of nodes is not known a priori, scheduling must be
complemented by continuous neighbor discovery, to determine the set
of potential ranging targets. The problem of jointly discovering,
scheduling, and ranging against nodes has received very little
attention from the research community, although it is likely to become
important for many applications once UWB becomes readily available on
smartphones. In this context, the ability to perform fast and
energy-efficient concurrent ranging against several nodes at once
brings a unique asset, which may be further enhanced by additional
techniques like the adaptive response delays we hinted at
in~\ref{sec:crng-tosn:resp-id}. The exploration of these and other
research avenues enabled by the concurrent ranging techniques we
presented in this paper is the subject of our ongoing work.
Finally, the research findings and system prototypes we describe
in this paper are derived for the DW1000, i.e.,\ the only UWB
transceiver available off-the-shelf today. Nevertheless, new
alternatives are surfacing on the market. We argue that the
fundamental concept of concurrent ranging and the associated
techniques outlined here are of general validity, and therefore in
principle transferable to these new transceivers. Moreover, it is
our hope that the remarkable benefits we have shown may inspire new
UWB architectures that natively support concurrent ranging directly
in hardware.
\subsection{Precision of TX Scheduling}
\label{sec:crng-tosn:exp-tx-comp}
\begin{figure}[!t]
\centering
\includegraphics{figs/crng-cir-mean-t2-final.pdf}
\caption{Average CIR amplitude and standard deviation per time delay
across 500 signals with the initiator in the left center position of
Figure~\ref{fig:crng-tosn:err-ellipse-disi}.}
\label{fig:crng-cir-meanstd}
\end{figure}
We begin by examining the ability of our TX compensation mechanism
(\ref{sec:crng-tosn:txfix}) to schedule transmissions
precisely, as this is crucial to improve the
accuracy of concurrent ranging and localization. To this end, we ran an
experiment with one initiator and six responders, collecting 500 CIR
signals for our analysis. Figure~\ref{fig:crng-cir-meanstd} shows the
average CIR amplitude and standard deviation after re-arranging the
CIRs (\ref{sec:crng-tosn:cir-rearrangement}) and aligning the
upsampled CIR signals based on the direct path of responder \resp{1}.
Across all time delays, the average CIR presents only minor amplitude
variations in the direct paths and MPC. Further, the precise scheduling of
\textsc{response}\xspace transmissions yields a high amplitude for the direct paths
of all signals; this is in contrast with the smoother and flatter
peaks we observed earlier (\ref{sec:questions},
Figure~\ref{fig:single-tx-cir-variations}) due to the TX uncertainty
$\epsilon \in [-8, 0)$~ns.
To quantitatively analyze the TX precision, we estimate the ToA\xspace of
each \textsc{response}\xspace and measure the time difference $\Delta t_{j, 1}$
between the ToA\xspace of responder \resp{j} and the one of \resp{1}, chosen
as reference, after removing the time delays $\delta_i$ used
for response identification. Then, we subtract the mean of the distribution
and look at the deviations of $\Delta t_{j, 1}$, which ideally
should be negligible.
Figure~\ref{fig:crng-ts-cdf} shows the CDF of the $\Delta t_{j, 1}$
variations from the mean, while Table~\ref{tab:crng-tx-dev} details
the percentiles of the absolute variations. All time differences
present a similar behavior with an aggregate mean error
$\mu = 0.004$~ns across the 2,500 $\Delta t_{j, 1}$ measurements, with
$\sigma = 0.38$~ns and a median of $0.03$~ns; the absolute \nth{90},
\nth{95}, and \nth{99} percentiles are 0.64, 0.77, and 1.09~ns,
respectively. These results confirm that our implementation is able
to \emph{reliably schedule transmissions with sub-ns precision}.
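
For completeness, the deviation statistics above can be reproduced
with a short sketch; array shapes and names are ours.
\begin{verbatim}
import numpy as np

def toa_deviations(toa, delta):
    """Deviations of Delta t_{j,1} from their mean.

    toa:   (N, S) ToA estimates of N responders over S exchanges.
    delta: (N,) assigned identification delays delta_i.
    """
    # Remove the identification delays and take R1 as reference.
    dt = (toa - delta[:, None]) - (toa[0] - delta[0])
    dev = dt[1:] - dt[1:].mean(axis=1, keepdims=True)
    return np.abs(dev)

# e.g., np.percentile(toa_deviations(toa, delta), [50, 90, 95, 99])
\end{verbatim}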
\begin{figure}[!t]
\centering
\includegraphics{figs/crng-ts-cdf.pdf}
\caption{Time difference deviation from the mean across 500 CIRs.}
\label{fig:crng-ts-cdf}
\end{figure}
\begin{table}[h]
\centering
\caption{Deviation percentiles for the absolute time difference
$\Delta t_{j,1}$ variations.}
\label{tab:crng-tx-dev}
\begin{tabular}{ccccccc}
\toprule
& \multicolumn{6}{c}{\bfseries Percentile [ns]}\\
\cmidrule(lr){2-7}
{\bfseries Time Difference} & {\nth{25}} & {\nth{50}} & {\nth{75}} & {\nth{90}}
& {\nth{95}} & {\nth{99}}\\
\midrule
$\Delta t_{2,1}$ & 0.15 & 0.32 & 0.52 & 0.72 &0.85 & 1.08\\
$\Delta t_{3,1}$ & 0.08 & 0.18 & 0.32 & 0.51 &0.68 & 1.12\\
$\Delta t_{4,1}$ & 0.06 & 0.13 & 0.20 & 0.30 &0.40 & 0.60\\
$\Delta t_{5,1}$ & 0.13 & 0.23 & 0.40 & 0.64 &0.74 & 0.90\\
$\Delta t_{6,1}$ & 0.24 & 0.39 & 0.54 & 0.76 &0.91 & 1.14\\
Aggregate & 0.10 & 0.23 & 0.42 & 0.64 &0.77 & 1.09\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Response Identification}
\label{sec:crng-tosn:resp-id}
As observed in~\ref{sec:questions}, if the distance between the
initiator and the responders is similar, their paths and MPC overlap
in the CIR, hindering responder identification and ToA\xspace estimation.
Previous work~\cite{crng-graz} proposed to assign a different pulse
shape to each responder and then use a matched filter to associate
paths with responders. However, this leads to responder
mis-identifications, as we showed in~\cite{chorus}, because
the channel cannot always be assumed to be separable, i.e.,\ the measured peaks in
the CIR can be a combination of multiple paths, and the
received pulse shapes can be deformed, creating ambiguity in the
matched filter output.
To reliably separate and identify responders, we resort to response
position modulation~\cite{crng-graz}, whose effectiveness has been
shown by our own work on Chorus~\cite{chorus} and by
SnapLoc~\cite{snaploc}. The technique consists of delaying each
\textsc{response}\xspace by \mbox{$\delta_i = (i - 1)\ensuremath{T_\mathit{ID}}\xspace$}, where $i \in [1, N]$
is the responder identifier. The resulting CIR consists of an ordered
sequence of signals that are time-shifted based
on \begin{inparaenum}[\itshape i)]
\item the assigned delays $\delta_i$, and
\item the propagation delays $\tau_i$,
\end{inparaenum}
as shown in Figure~\ref{fig:crng-tosn:cir-arrangement}.
The constant \ensuremath{T_\mathit{ID}}\xspace must be set according to
\begin{inparaenum}[\itshape i)]
\item the CIR time span,
\item the maximum propagation time, as determined by the dimensions of
the target deployment area, and
\item the multipath profile in it.
\end{inparaenum}
Figure~\ref{fig:uwb-pdp} shows the typical power decay profile in
three different environments obtained from the {IEEE~802.15.4\xspace}a radio
model~\cite{molisch-ieee-uwb-model}. MPC with a time shift
$\geq 60$~ns suffer from significant power decay w.r.t.\ the direct
path. Therefore, by setting $\ensuremath{T_\mathit{ID}}\xspace = 128$~ns as
in~\cite{chorus,snaploc} we are unlikely to suffer from significant
MPC and able to reliably distinguish the responses. Moreover,
considering that the DW1000 CIR has a maximum time span of
$\approx 1017$~ns, we can accommodate up to 7~responders, leaving a
small portion of the CIR with only noise.
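
In code, the assigned delays and the bound on the number of
responders follow directly from these parameters; the trivial sketch
below uses the values adopted in the text.
\begin{verbatim}
T_ID = 128e-9       # response identification shift (s)
CIR_SPAN = 1017e-9  # approximate DW1000 CIR time span (s)

def response_delays(n):
    """delta_i = (i - 1) * T_ID for responders i = 1..n."""
    return [(i - 1) * T_ID for i in range(1, n + 1)]

# floor(CIR_SPAN / T_ID) = 7 responders fit in a single CIR.
MAX_RESPONDERS = int(CIR_SPAN // T_ID)
\end{verbatim}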
We observe that this technique relies on the correct
identification of the first and last responder to properly
reconstruct the sequence, and avoid mis-identifications;
our evaluation (\ref{sec:crng-tosn:eval}) shows that these rarely
occur in practice.
\begin{figure}[!t]
\centering
\includegraphics{figs/pdp.pdf}
\caption{Power decay profile in different environments according to the
{IEEE~802.15.4\xspace}a radio model~\cite{molisch-ieee-uwb-model}.}
\label{fig:uwb-pdp}
\end{figure}
Finally, although the technique is similar to the one
in~\cite{chorus,snaploc}, the different context in which it is applied
yields significant differences. In these systems, the time of flight
$\tau_i$ is \emph{known} and compensated for, based on the fixed and
known position of anchors.
In concurrent ranging\xspace, not only is $\tau_i$ unknown a
priori, but it also has a twofold impact on the \textsc{response}\xspace RX
timestamps, making the problem more challenging. On the other hand,
concurrent ranging\xspace is more flexible as it does not rely on the known position of
anchors. Further, as packet exchanges are triggered by the initiator
rather than the anchors as in~\cite{chorus,snaploc}, the former
could determine time shifts on a per-exchange basis, assigning a
different $\delta_i$ to each responder via the broadcast \textsc{poll}\xspace. For
instance, in a case where responders $R_i$ and $R_{i+1}$ have a
distance $d_i \gg d_{i+1}$ from the initiator, a larger time shift
$\delta_{i+1}$ could help separate the pulse of $R_{i+1}$ from the
MPC of $R_i$. Similarly, when more responders are present than
can be accommodated in the CIR, the initiator could dynamically
determine the responders that should reply and the delays $\delta_i$
they should apply. This adaptive, initiator-based time shift
assignment opens several intriguing opportunities, especially for
mobile, ranging-only applications; we are currently investigating them
as part of our ongoing work (\ref{sec:discussion}).
\section{Concurrent Ranging Reloaded}
\label{sec:reloaded}
The experimental campaign in the previous section confirms that
concurrent ranging is feasible, but also highlights several challenges
not tackled by the strawman implementation outlined in~\ref{sec:crng},
limiting the potential and applicability of our technique. In this
section, we overcome these limitations with a novel design
that, informed by the findings in~\ref{sec:questions}, significantly
improves the performance of concurrent ranging\xspace both in terms of accuracy and
reliability, bringing it in line with conventional methods but at a
fraction of the network and energy overhead.
We begin by removing the main source of inaccuracy, i.e.,\ the 8~ns
uncertainty in TX scheduling. The technique we present
(\ref{sec:crng-tosn:txfix}) not only achieves sub-ns precision, as
shown in our evaluation (\ref{sec:crng-tosn:exp-tx-comp}), but also
doubles as a mechanism to reduce the impact of clock drift, the main
source of error in SS-TWR\xspace (\ref{sec:soa-sstwr}). We then
present our technique to correctly associate responders with paths
in the CIR (\ref{sec:crng-tosn:resp-id}), followed by two necessary
CIR pre-processing techniques to discriminate the direct paths from MPC
and noise (\ref{sec:crng-tosn:cir-proc}). Finally, we illustrate two algorithms
for estimating the actual ToA\xspace of the direct paths and outline the overall
processing that, by combining all these steps, yields the final
distance estimation (\ref{sec:crng-tosn:time-dist}).
\subsection{Performance with Mobile Targets}
\label{sec:optitrack}
We now evaluate the ability of concurrent ranging\xspace to accurately determine the
position of a mobile node. This scenario is representative of several
real-world applications, e.g.,\ exploration in harsh
environments~\cite{thales-demo}, drone operation~\cite{guo2016ultra},
and user navigation in museums or shopping
centers~\cite{museum-tracking}.
To this end, we ran experiments with an EVB1000 mounted on a mobile
robot~\cite{diddyborg} in a $12 \times 8$~m$^2$ indoor area where we
placed 6~responders serving as localization anchors.
We compare both our concurrent ranging\xspace variants against only SS-TWR\xspace with clock drift
compensation, as this provides a more challenging baseline,
as discussed earlier.
The area is equipped with 14~OptiTrack cameras~\cite{optitrack}, which we
configured to output positioning samples with an update rate of
125~Hz and calibrated to obtain a mean 3D error $< 1$~mm,
therefore yielding reliable and accurate ground truth to
validate the UWB systems against. The mobile robot is controlled by a
RPi, enabling us to easily repeat trajectories by remotely driving the
robot over WiFi via a Web application on a smartphone. A second RPi enables the
flashing of the EVB1000 node with the desired binary and the upload of
serial output (CIRs and RX information) to our testbed server for
offline analysis.
\begin{figure}[!t]
\centering
\includegraphics{figs/optitrack/loc-path-ssr3-t5}
\hspace{5mm}
\includegraphics{figs/optitrack/loc-path-ssr3-t6}
\\
\includegraphics{figs/optitrack/loc-path-ssr3-t7}
\hspace{5mm}
\includegraphics{figs/optitrack/loc-path-ssr3-t9}
\caption{Localization with concurrent ranging\xspace across four trajectories using S{\footnotesize \&}S\xspace
with $K = 3$ iterations.}
\label{fig:crng-tosn:tracking}
\end{figure}
Before presenting in detail our evaluation,
Figure~\ref{fig:crng-tosn:tracking} offers the opportunity to visually
ascertain that our concurrent ranging\xspace prototype is able to \emph{continuously and
accurately} track the robot trajectory, by comparing it against the
ground truth obtained with OptiTrack. We observe a few position
samples with relatively high error, due to strong MPC; however, these
situations are rare and, in practice, easily handled with techniques
commonly used in position tracking, e.g.,\ extended or unscented Kalman
filters~\cite{ukf-julier}. Due to space constraints, the figure shows
only trajectories with S{\footnotesize \&}S\xspace because they are very similar to
threshold-based ones, as discussed next.
\fakeparagraph{Ranging Accuracy}
Across all samples, we compute the ranging error
$\hat{d}_{i} - d_{i}$ between the concurrent ranging\xspace or SS-TWR\xspace
estimate $\hat{d}_{i}$ for \resp{i} and the OptiTrack estimate $d_{i}$.
To obtain the latter, we interpolate the high-rate positioning traces
of OptiTrack to compute the exact robot position $\mathbf{p}$ at each
time instance of our concurrent ranging\xspace and SS-TWR\xspace traces and then estimate
the true distance $d_i = \norm{\mathbf{p} - \mathbf{p_i}}$, where
$\mathbf{p_i}$ is the known position of \resp{i}.
Table~\ref{tab:optitrack-rng-err} shows that the results exhibit the
very good trends we observed in the static case
(\ref{sec:crng-exp-static}). In terms of accuracy, the median and
average error are very small, and very close to SS-TWR\xspace.
However, SS-TWR\xspace is significantly more precise, while the
standard deviation $\sigma$ of concurrent ranging\xspace is in line with the one observed
with all 18~positions (Table~\ref{tab:rng-err-all}).
However, the absolute error is
$\nth{99}\leq 37$~cm, significantly lower than in the latter
case. Further, the ToA\xspace algorithm employed for concurrent ranging\xspace has only a
marginal impact on accuracy and precision.
\begin{figure}[!t]
\centering
\subfloat[Concurrent ranging: Threshold.\label{fig:crng-tosn:pergine-th-err-hist}]{
\includegraphics{figs/optitrack/crng-th-disterr-hist-perg.pdf}
}
\subfloat[Concurrent ranging: S{\footnotesize \&}S\xspace.\label{fig:crng-tosn:pergine-ss3-err-hist}]{
\includegraphics{figs/optitrack/crng-ssr3-disterr-hist-perg.pdf}
}
\subfloat[SS-TWR\xspace with drift compensation.\label{fig:crng-tosn:pergine-sstwr-err-hist}]{
\includegraphics{figs/optitrack/sstwr-disterr-hist-perg.pdf}
}
\caption{Normalized histogram of the ranging error across multiple
mobile trajectories.}
\label{fig:crng-tosn:pergine-rng-err-hist}
\end{figure}
\begin{table}[!t]
\centering
\caption{Ranging error comparison across multiple mobile trajectories.}
\label{tab:optitrack-rng-err}
\begin{tabular}{l ccc ccccc}
\toprule
& \multicolumn{3}{c}{ $\hat{d}_i - d_i$ [cm]}
& \multicolumn{5}{c}{ $|\hat{d}_i - d_i|$ [cm]}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-9}
{\bfseries Scheme} & Median & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 0.3 & -1.3 & 23.5 &8 &14 & 20 &25 & 37\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 0.2 & -1.4 & 21.6 &8 &13 & 20 &24 & 35\\
SS-TWR\xspace Compensated & -3.5 & -3.4 & 6.8 &5 &9 & 12 &15 & 19\\
\bottomrule
\end{tabular}
\end{table}
An alternate view confirming these observations is
offered by the normalized histograms in
Figure~\ref{fig:crng-tosn:pergine-rng-err-hist}, where
the long error tails observed in
Figure~\ref{fig:crng-tosn:err-hist-th}--\ref{fig:crng-tosn:err-hist-ssr3}
are absent in
Figure~\ref{fig:crng-tosn:pergine-th-err-hist}--\ref{fig:crng-tosn:pergine-ss3-err-hist}.
Overall, concurrent ranging\xspace follows closely the performance of SS-TWR\xspace with drift
compensation, providing a more scalable scheme with less overhead
and comparable accuracy. Notably, concurrent ranging\xspace measures the
distance to all responders simultaneously, an important factor
when tracking rapidly-moving targets to reduce the bias induced by
relative movements. Further, this aspect also enables a
significant increase of the attainable update rate.
\fakeparagraph{Localization Accuracy}
Figure~\ref{fig:crng-optitrack-cdf} compares the
CDFs of the localization error of the techniques under evaluation;
Table~\ref{tab:optitrack-loc-err} reports the value of the metrics
considered. The accuracy of SS-TWR\xspace is about 1~cm worse w.r.t.\ the static
case, while the precision is essentially unaltered. As for concurrent ranging\xspace, the
median error is also the same as in the static case, while the value
of the other metrics is by and large in between the case with all
positions and the one with only \textsc{center}\xspace ones. The precision is
closer to the case of all static positions
(Table~\ref{tab:loc-err-all}), which is mirrored in the slower
increase of the CDF for concurrent ranging\xspace variants w.r.t.\ SS-TWR\xspace
(Figure~\ref{fig:crng-optitrack-cdf}). Overall, the absolute error is
relatively small and closer to the case with \textsc{center}\xspace positions,
with $\nth{95}\leq 22$~cm. On the other hand, the \nth{99} percentile
is slightly higher than in Table~\ref{tab:loc-err-all},
possibly due to the different environment and the
higher impact of the orientation of the antenna relative to the
responders.
Another difference w.r.t.\ the static case is the
slightly higher precision and \nth{99} accuracy of S{\footnotesize \&}S\xspace vs.\
threshold-based estimation, in contrast with the opposite trend we observed
in~\ref{sec:crng-exp-static}. Again, this is likely due
to the different environment and MPC profile.
In any case, this bears only
a minor impact on the aggregate performance, as shown in
Figure~\ref{fig:crng-optitrack-cdf}.
\fakeparagraph{Success Rate} Across the 4,015 signals from our
trajectories, concurrent ranging\xspace obtained 3,999 position estimates ($99.6\%$) with
both ToA\xspace techniques. Nevertheless, 43 of these are affected by an
error $\geq 10$~m and can be disregarded as outliers, yielding an
effective success rate of $98.8\%$, which nonetheless reasserts the
ability of concurrent ranging\xspace to provide reliable and robust localization.
Regarding ranging, threshold-based estimation yields a success rate of $93.18\%$
across the 24,090 expected estimates, while S{\footnotesize \&}S\xspace reaches $95.4\%$,
confirming its higher reliability. As expected, the localization
success rate is higher as the position can be computed even
if several $\hat d_i$ are lost.
\begin{figure}[!tb]
\centering
\includegraphics{figs/optitrack/crng-loc-cdf-perg.pdf}
\caption{Localization error CDF of concurrent ranging\xspace vs.\ compensated SS-TWR\xspace
across multiple trajectories.}
\label{fig:crng-optitrack-cdf}
\end{figure}
\begin{table}
\centering
\caption{Localization error comparison across multiple mobile trajectories.}
\label{tab:optitrack-loc-err}
\begin{tabular}{l cc ccccc}
\toprule
& \multicolumn{7}{c}{$\norm{\mathbf{\hat p} - \mathbf{p_r}}$ [cm]}\\
\cmidrule(lr){2-8}
{\bfseries Scheme} & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 12.1 & 17.2 &10 &14 & 18 &22 & 85\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 11 & 12.8 &9 &13 & 18 &20 & 60\\
SS-TWR\xspace Compensated & 5.8 & 2.3 &6 &7 & 9 &10 & 12\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Is Communication Even Possible?}
\label{sec:obs:prr}
Up to this point, we have implicitly assumed that the UWB transceiver
is able to successfully decode one of the concurrent TX with high
probability, similarly to what happens in narrowband, as exploited,
e.g.,\ by Glossy~\cite{glossy} and other protocols~\cite{lwb, chaos, crystal}.
However, this may not be the case, given the different radio PHY and
the different degree of synchronization (ns vs.\ $\mu$s) involved.
\begin{figure}[!tb]
\centering
\begin{tikzpicture}[xscale=0.6]
\tikzstyle{device} = [circle, thick, draw=black, fill=gray!30, minimum size=8mm]
\node[device] (a) at (0, 0) {$R_1$};
\node[device] (b) at (4, 0) {$I$};
\node[device] (c) at (10, 0) {$R_2$};
\draw[<->, thick] (a) -- (b) node[pos=.5,above]{$d_1$};
\draw[<->, thick] (b) -- (c) node[pos=.5,above]{$d_2 = D - d_1$};
\draw[<->, thick] (0, -.5) -- (10, -.5) node[pos=.5,below]{$D = 12$~m};
\end{tikzpicture}
\caption{Experimental setup to investigate the reliability and accuracy of
concurrent ranging (\ref{sec:obs:prr}--\ref{sec:obs:accuracy}). $I$ is the
initiator, $R_1$ and $R_2$ are the responders.}
\label{fig:capture-exp-deployment}
\end{figure}
Our first goal is therefore to verify this hypothesis. We run a series
of experiments with three nodes, one initiator $I$ and two concurrent
responders $R_1$ and $R_2$, placed along a line
(Figure~\ref{fig:capture-exp-deployment}). The initiator is placed
between responders at a distance $d_1$ from $R_1$ and $d_2 = D - d_1$
from $R_2$, where $D = 12$~m is the fixed distance between the
responders. We vary $d_1$ between 0.4~m and 11.6~m in steps of
0.4~m. By changing the distance between initiator and
responders we affect the chances of successfully receiving a packet
from either responder due to the variation in power loss and
propagation delay. For each initiator position, we perform 3,000
ranging exchanges with concurrent ranging\xspace, measuring the packet reception ratio
(\ensuremath{\mathit{PRR}}\xspace) of \textsc{response}\xspace packets along with the resulting ranging
estimates. As a baseline, we also perform 1,000 ranging exchanges
with each responder in isolation, yielding $\ensuremath{\mathit{PRR}}\xspace=100\%$ for all
initiator positions.
\begin{figure}[!tb]
\centering
\includegraphics{figs/ewsn/tosn-capture-test-node-pdr}
\caption{Packet reception rate (\ensuremath{\mathit{PRR}}\xspace) vs.\ initiator position $d_1$,
with two concurrent transmissions.}
\label{fig:capture-test-prr}
\end{figure}
Figure~\ref{fig:capture-test-prr} shows the $\ensuremath{\mathit{PRR}}\xspace_i$ of each responder
and the overall $\ensuremath{\overline{\mathit{PRR}}}\xspace = \ensuremath{\mathit{PRR}}\xspace_{1} + \ensuremath{\mathit{PRR}}\xspace_{2}$ denoting the case in
which a packet from either responder is received correctly. Among all
initiator positions, the worst overall \ensuremath{\overline{\mathit{PRR}}}\xspace~$=$~75.93\% is achieved
for $d_1 = 8$~m. On the other hand, placing the initiator close to one of the
responders (i.e.,\ $d_1 \leq 2$~m or $d_1 \geq 10$~m) yields
$\ensuremath{\overline{\mathit{PRR}}}\xspace \geq 99.9\%$.
We also observe strong fluctuations in the center area. For instance,
placing the initiator at $d_1 = 5.2$~m yields $\ensuremath{\mathit{PRR}}\xspace_{1}= 93.6$\% and
$\ensuremath{\mathit{PRR}}\xspace_{2}=2.7$\%, while nudging it to $d_1 = 6$~m yields
$\ensuremath{\mathit{PRR}}\xspace_{1}= 6.43$\% and $\ensuremath{\mathit{PRR}}\xspace_{2}=85.73$\%.
\fakeparagraph{Summary}
Overall, this experiment confirms the ability of the
DW1000 to successfully decode, with high probability,
one of the packets from concurrent transmissions.
\subsection{How Do Concurrent Transmissions Affect Ranging Accuracy?}
\label{sec:obs:accuracy}
We also implicitly assumed that concurrent transmissions do not affect
the ranging accuracy. In practice, however, the UWB wireless channel
is far from being as ``clean'' as in the idealized view of
Figure~\ref{fig:concurrent-uwb}. The first path is typically followed
by several multipath reflections, which effectively create a ``tail''
after the leading signal. Depending on its temporal and spatial
displacement, this tail may interfere with the first path of other
responders by
\begin{inparaenum}[\itshape i)]
\item reducing its amplitude, or
\item generating MPC that can be mistaken for the first path,
inducing estimation errors.
\end{inparaenum}
Therefore, we now ascertain whether concurrent transmissions degrade
ranging accuracy.
\fakeparagraph{Baseline: Isolated Responders} We first look at the
ranging accuracy for all initiator positions with each responder
\emph{in isolation}, using the same setup of
Figure~\ref{fig:capture-exp-deployment}. Figure~\ref{fig:rng-hist}
shows the normalized histogram of the resulting ranging error from
58,000 ranging measurements. The average error is $\mu = 1.7$~cm, with
a standard deviation $\sigma=10.9$~cm. The maximum absolute error is
37~cm. The median of the absolute error is 8~cm, while the \nth{99}
percentile is 28~cm. These results are in accordance with previously
reported studies~\cite{surepoint,polypoint} employing the DW1000
transceiver.
\begin{figure}[!tb]
\centering
\subfloat[Isolated responders.\label{fig:rng-hist}]{
\includegraphics{./figs/ewsn/tosn-capt-hist-rng-err}}
\hfill
\subfloat[Concurrent responders.\label{fig:rng-stx-hist}]{
\includegraphics{./figs/ewsn/tosn-stx-hist-rng-err}}
\caption{Normalized histogram of the ranging error with responders in
isolation (Figure~\ref{fig:rng-hist})
vs.\ two concurrent responders (Figure~\ref{fig:rng-stx-hist}).
In the latter, the initiator sometimes receives
the \textsc{response}\xspace from the farthest responder while estimating the
first path from the closest one, therefore increasing the absolute error.}
\label{fig:rng-error-hist}
\end{figure}
\begin{figure}[!tb]
\centering
\subfloat[Ranging Error $\in {[-7.5, -0.5]}$.\label{fig:rng-error-hist-zoom-1}]{
\includegraphics{./figs/ewsn/tosn-stx-hist-rng-err-z1}}
\hfill
\subfloat[Ranging Error $\in {[-0.5, 0.5]}$.\label{fig:rng-error-hist-zoom-2}]{
\includegraphics{./figs/ewsn/tosn-stx-hist-rng-err-z2}}
\caption{Zoomed-in views of Figure~\ref{fig:rng-stx-hist}.}
\label{fig:rng-error-hist-zoom}
\end{figure}
\fakeparagraph{Concurrent Responders: Impact on Ranging Accuracy}
Figure~\ref{fig:rng-stx-hist} shows the normalized histogram of the
ranging error of 82,519 measurements, this time with two concurrent
responders\footnote{Note we do not obtain valid ranging measurements
in case of RX errors due to collisions.}. The median of
the absolute error is 8~cm, as in the isolated case, while the
\nth{25} and \nth{75} percentiles are 4~cm and 15~cm, respectively.
However, while the average error $\mu = -0.42$~cm is comparable, the
standard deviation $\sigma = 1.05$~m is significantly higher.
Further, the error distribution is clearly different w.r.t.\ the case of
isolated responders (Figure~\ref{fig:rng-hist}); to better appreciate
the trends, Figure~\ref{fig:rng-error-hist-zoom} offers a zoomed-in
view of two key areas of the histogram in
Figure~\ref{fig:rng-stx-hist}. Indeed, the latter has a long tail of
measurements with significant errors; for 14.87\% of the measured
samples the ranging error is $<-0.5$~m, while in the isolated case the
maximum absolute error only reaches 37~cm.
\setlength{\columnsep}{8pt}
\setlength{\intextsep}{4pt}
\begin{wrapfigure}{R}{5.6cm}
\centering
\includegraphics{figs/ewsn/tosn-capt-rng-err-stx.pdf}
\caption{Ranging error vs.\ initiator position.}
\label{fig:stx-rng-error-vs-pos}
\end{wrapfigure}
\fakeparagraph{The Culprit:
Mismatch between Received \textsc{response}\xspace and Nearest Responder}
To understand why, we study the ranging error when the initiator
is located in the center area ($4 \leq d_1 \leq 8$~m), the one with major \ensuremath{\mathit{PRR}}\xspace
fluctuations (Figure~\ref{fig:capture-test-prr}).
Figure~\ref{fig:stx-rng-error-vs-pos} shows the average
absolute ranging error of the packets received from each responder as a
function of the initiator position. Colored areas represent the standard
deviation.
The ranging error of $R_1$ and $R_2$ increases dramatically for $d_1 \geq 6$~m
and $d_2 \geq 6$~m, respectively. Moreover, the magnitude of the error exhibits
an interesting phenomenon. For instance, when the initiator is at
$d_1 = 6.8$~m, the average error for \textsc{response}\xspace packets
received from $R_1$ is 1.68~m, very close to the displacement between responders,
${\ensuremath{\Delta d}\xspace = |d_1 - d_2| = |6.8 - 5.2| = 1.6}$~m. Similarly, for
$d_1 = 5.2$~m and $\ensuremath{\Delta d}\xspace = 1.6$~m, the average error for the packets received
from $R_2$ is 1.47~m.
The observation that the ranging error approximates the displacement
\ensuremath{\Delta d}\xspace between responders points to the fact that these high errors
appear when the initiator receives the \textsc{response}\xspace from the farthest
responder but estimates the first path of the signal with the CIR
peak corresponding instead to the nearest responder. This phenomenon
explains the high errors shown in Figure~\ref{fig:rng-stx-hist}
and~\ref{fig:rng-error-hist-zoom-1},
which are the result of this mismatch between the successful responder
and the origin of the obtained first path. In fact, the higher
probabilities in Figure~\ref{fig:rng-error-hist-zoom-1}
correspond to positions where the responder farther from the
initiator achieves the highest $\ensuremath{\mathit{PRR}}\xspace_i$ in
Figure~\ref{fig:capture-test-prr}. For example, for $d_1 = 7.6$~m, the
far responder $R_1$ achieves $\ensuremath{\mathit{PRR}}\xspace_1 = 94.46$\% and an average ranging
error of $-3.27$~m, which again corresponds to $\ensuremath{\Delta d}\xspace = 3.2$~m and
also to the highest probability in Figure~\ref{fig:rng-error-hist-zoom-1}.
\fakeparagraph{The Role of TX Scheduling Uncertainty} When this
mismatch occurs, we also observe a relatively large standard deviation
in the ranging error. This is generated by the 8~ns TX
scheduling granularity of the DW1000 transceiver (\ref{sec:crng}). In
SS-TWR\xspace (Figure~\ref{fig:two-way-ranging}), responders insert in the
\textsc{response}\xspace the elapsed time \mbox{$\ensuremath{T_\mathit{RESP}}\xspace = \ensuremath{t_3}\xspace - \ensuremath{t_2}\xspace$}
between receiving the \textsc{poll}\xspace and sending the \textsc{response}\xspace.
The initiator uses \ensuremath{T_\mathit{RESP}}\xspace to precisely
estimate the time of flight of the signal. However, the 8~ns
uncertainty produces a discrepancy in \ensuremath{t_3}\xspace and,
therefore, between the \ensuremath{T_\mathit{RESP}}\xspace the initiator
obtains from the successful \textsc{response}\xspace and the
\ensuremath{T_\mathit{RESP}}\xspace actually applied by the closest responder,
resulting in significant error variations.
\fakeparagraph{Summary}
Concurrent transmissions can negatively affect ranging
by producing a mismatch between the successful responder and the
detected CIR path used to compute the time of flight.
However, we also note that 84.59\% of the concurrent ranging samples
are quite accurate, achieving an absolute error $< 30$~cm.
\subsection{Does the CIR Contain Enough Information for Ranging?}
\label{sec:cir-enough}
In~\ref{sec:crng} we have mentioned that the limitation on the
granularity of TX scheduling in the DW1000 introduces an 8~ns
uncertainty. Given that an error of 1~ns in estimating the time of
flight results in a corresponding error of $\approx$30~cm, this
raises questions to whether the information in the CIR is sufficient
to recover the timing information necessary for distance estimation.
\begin{figure}[!b]
\centering
\begin{tikzpicture}[xscale=0.6]%
\tikzstyle{device} = [circle, thick, draw=black, fill=gray!30, minimum size=8mm]
\node[device] (a) at (0, 0) {$I$};
\node[device] (b) at (4, 0) {$R_1$};
\node[device] (c) at (10, 0) {$R_2$};
\draw[<->, thick] (a) -- (b) node[pos=.5,above]{$d_1 = 4$~m};
\draw[<->, thick] (b) -- (c) node[pos=.5,above]{$\Delta d = d_2 - d_1$};
\draw[<->, thick] (0, -.5) -- (10, -.5) node[pos=.5,below]{$d_2$};
\end{tikzpicture}
\caption{Experimental setup to analyze the CIR resulting from concurrent
ranging (\ref{sec:cir-enough}).}
\label{fig:chorus-exp-deployment}
\end{figure}
We run another series of experiments using again three nodes but
arranged slightly differently (Figure~\ref{fig:chorus-exp-deployment}).
We set $I$ and $R_1$ at a fixed distance $d_1 = 4$~m,
and place $R_2$ at a distance $d_2 > d_1$ from $I$;
the two responders are therefore separated by a distance
$\ensuremath{\Delta d}\xspace = d_2 - d_1$. Unlike previous experiments, we increase $d_2$
in steps of 0.8~m; we explore $4.8 \leq d_2 \leq 12$~m, and therefore
$0.8 \leq \ensuremath{\Delta d}\xspace \leq 8$~m. For each position of $R_2$, we run the
experiment until we successfully receive 500~\textsc{response}\xspace packets,
i.e., valid ranging estimates; we measure the CIR on the
initiator after each received \textsc{response}\xspace.
\fakeparagraph{Baseline: Isolated Responders}
Before using concurrent responders, we first measured the CIR of
$R_1$ ($d_1=4$~m) in isolation. Figure~\ref{fig:single-tx-cir-variations}
shows the average amplitude and standard deviation across 500~CIR signals,
averaged by aligning them to the first path index (\texttt{FP\_INDEX}\xspace)
reported by the DW1000~(\ref{sec:dw1000}).
The measured CIR presents an evident direct path at 50~ns,
followed by strong multipath.
We observe that the CIR barely changes across the 500~signals,
exhibiting only minor variations in these MPCs (around 55--65~ns).
\fakeparagraph{Concurrent Responders: Distance Estimation} We now
analyze the effect of $R_2$ transmitting \textit{concurrently} with
$R_1$, and show how the distance of $R_2$ can be estimated. We focus
on a single distance $d_2 = 9.6$~m and on a single CIR
(Figure~\ref{fig:chorus-cir}), to analyze in depth the phenomena at
stake; we later discuss results acquired from 500~CIR signals
(Figure~\ref{fig:chorus-cir-variations}) and for other $d_2$ values
(Table~\ref{table:concurrent-ranging}).
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/tosn-d10-cir-avg}
\caption{Average amplitude and standard deviation of 500~CIR signals for an
isolated responder at $d_1=4$~m.}
\label{fig:single-tx-cir-variations}
\includegraphics{figs/ewsn/tosn-cir-comp-391-37}
\caption{Impact of concurrent transmissions on the CIR. The \textsc{response}\xspace TX from
\resp{2} introduces a second peak at a time shift $\Delta t = 38$~ns after
the direct path from \resp{1}.}
\label{fig:chorus-cir}
\includegraphics{figs/ewsn/tosn-d10-24-cir-avg}
\caption{Average amplitude and standard deviation of 500 CIR signals,
aligned based on the \texttt{FP\_INDEX}\xspace, for two concurrent responders
at distance ${d_1 = 4}$~m and ${d_2 = 9.6}$~m from the initiator.
}
\label{fig:chorus-cir-variations}
\end{figure}
Figure~\ref{fig:chorus-cir} shows that the \textsc{response}\xspace of $R_2$ introduces a
second peak in the CIR, centered around 90~ns. This is compatible with our
a-priori knowledge of $d_2 = 9.6$~m; the question is whether this distance can
be estimated from the CIR.
Locating the direct path of $R_2$ in time is a
problem per se. In the case of $R_1$, this estimation is performed
accurately and automatically by the DW1000, enabling an accurate
estimate of $d_1$. The same could be performed for $R_2$ if it were in
isolation, but not concurrently with $R_1$.
Therefore, here we estimate the direct path from $R_2$
as the CIR index whose signal amplitude is closest to $20\%$ of
the maximum amplitude of the peak---a simple technique used, e.g.,\
in~\cite{harmonium}. The offset between the CIR index and the one
returned by the DW1000 for $R_1$, for which a precise estimate is
available, returns the delay \ensuremath{\Delta t}\xspace between the responses of $R_1$
and $R_2$. We investigate more sophisticated and accurate
techniques in~\ref{sec:reloaded}.
The value of \ensuremath{\Delta t}\xspace is induced by the propagation delay
caused by the difference $\ensuremath{\Delta d}\xspace = d_2 - d_1$ in the distance of the
responders from the initiator. Recall the basics of SS-TWR\xspace
(\ref{sec:toa}, Figure~\ref{fig:two-way-ranging}) and of concurrent ranging
(\ref{sec:crng}, Figure~\ref{fig:crng}). $R_2$ receives the
\textsc{poll}\xspace from $I$ slightly after $R_1$; the propagation of the \textsc{response}\xspace
back to $I$ incurs the same delay; therefore, the response from $R_2$
arrives at $I$ with a delay $\ensuremath{\Delta t}\xspace = 2 \times \frac{\ensuremath{\Delta d}\xspace}{c}$
w.r.t.\ $R_1$.
In our case, the estimate above from the CIR signal yields
$\ensuremath{\Delta t}\xspace=38$~ns, corresponding to
\mbox{$\ensuremath{\Delta d}\xspace \approx 5.6$~m}---indeed the displacement
of the two responders. Therefore, by knowing the distance $d_1$ between
$I$ and $R_1$, estimated precisely by the DW1000, we can easily estimate
the distance between $I$ and $R_2$ as $d_2 = d_1 + \ensuremath{\Delta d}\xspace$.
This confirms that a single concurrent ranging exchange
contains enough information to reconstruct both distance estimates.
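
The estimation just described can be condensed into the following
sketch; the search window for the $R_2$ pulse is an assumption of
ours, and more robust techniques are discussed in~\ref{sec:reloaded}.
\begin{verbatim}
import numpy as np

TS = 1.0016e-9                # CIR sampling period (s)
SPEED_OF_LIGHT = 299702547.0  # m/s in air (approximate value)

def estimate_d2(amplitude, fp_index_r1, d1, search_from):
    """Estimate d2 from R1's precise first path and the CIR.

    amplitude:   CIR amplitude array.
    fp_index_r1: first-path index of R1 (from the DW1000).
    d1:          precise distance to R1 (m).
    search_from: index after which the R2 pulse is expected.
    """
    peak = search_from + int(np.argmax(amplitude[search_from:]))
    target = 0.2 * amplitude[peak]
    # Leading edge: sample closest to 20% of the peak amplitude,
    # searched on the rising edge before the peak.
    rising = amplitude[search_from:peak + 1]
    lead = search_from + int(np.argmin(np.abs(rising - target)))
    delta_t = (lead - fp_index_r1) * TS
    return d1 + SPEED_OF_LIGHT * delta_t / 2.0  # d2 = d1 + c*dt/2
\end{verbatim}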
\fakeparagraph{Concurrent Transmissions: Sources of Ranging Error}
Another way to look at Figure~\ref{fig:chorus-cir} is to compare it
against Figure~\ref{fig:concurrent-uwb}; while the latter provides an
\emph{idealized} view of what happens in the UWB channel,
Figure~\ref{fig:chorus-cir} provides a \emph{real} view.
Multipath propagation and interference among the different paths
of each signal affect the measured CIR;
it is therefore interesting to see whether this holds in general
and what is the impact on the (weaker) signal from $R_2$.
To this end, Figure~\ref{fig:chorus-cir-variations} shows the average
amplitude and standard deviation of 500~CIR signals aligned based on
the \texttt{FP\_INDEX}\xspace with $d_1 = 4$~m, and $d_2 = 9.6$~m. We observe that the
first pulse, the one from the closer $R_1$, presents only
minor variations in the amplitude of the direct path and of MPC,
consistently with Figure~\ref{fig:single-tx-cir-variations}. In
contrast, the pulse from $R_2$ exhibits stronger variations, as shown
by the colored area between 80 and 110~ns representing the standard
deviation. However, these variations can be ascribed only marginally
to interference with the pulse from $R_1$; we argue, and provide
evidence next, that these variations are instead the result of small
time shifts of the observed CIR pulse, in turn caused by the
$\epsilon \in [-8,0)$~ns TX scheduling uncertainty.
\fakeparagraph{TX Uncertainty Affects Time Offsets}
Figure~\ref{fig:chorus-at-hist} shows the normalized histogram, for the same
500~CIR signals, of the time offset \ensuremath{\Delta t}\xspace between the times at which the
responses from $R_1$ and $R_2$ are received at $I$. The real value, computed
with exact knowledge of distances, is $\ensuremath{\Delta t}\xspace = 37.37$~ns; the average from the
CIR samples is instead $\ensuremath{\Delta t}\xspace = 36.11$~ns, with $\sigma = 2.85$~ns.
These values, and the trends in Figure~\ref{fig:chorus-at-hist}, are
compatible with the 8~ns uncertainty deriving from TX scheduling.
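A quick simulation shows why these numbers are compatible. This is a
minimal sketch under the simplifying assumption that each responder
independently draws a TX error $\epsilon$ uniformly in $[-8, 0)$~ns,
ignoring CIR quantization:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
eps1 = rng.uniform(-8e-9, 0.0, n)   # TX error of R1 [s]
eps2 = rng.uniform(-8e-9, 0.0, n)   # TX error of R2 [s]
dt = 37.37e-9 + (eps2 - eps1)       # measured time offset [s]

print(np.mean(dt) * 1e9, np.std(dt) * 1e9)
# mean stays ~37.4 ns; std is ~3.3 ns (8 / sqrt(6) ns)
\end{verbatim}
The resulting spread ($\approx 3.3$~ns) is of the same order as the
$\sigma = 2.85$~ns we measured; the small bias of the measured mean is
not captured by this symmetric first-order model.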
\fakeparagraph{Time Offsets Affect Distance Offsets} As shown in
Figure~\ref{fig:chorus-at-hist}, the uncertainty in time offset
directly translates into uncertainty in the distance offset, whose
real value is $\ensuremath{\Delta d}\xspace = 5.6$~m. In contrast, the average estimate is
$\ensuremath{\Delta d}\xspace = 5.41$~m, with $\sigma = 0.43$~m. The average error is
therefore $-18$~cm; the \nth{50}, \nth{75}, and \nth{99}
percentiles are 35~cm, 54~cm and 1.25~m, respectively. These results
still provide sub-meter ranging accuracy as long as the estimated
distance to $R_1$ is accurate enough.
\fakeparagraph{Distance Offsets Affect Ranging Error} Recall that the
distance $d_1$ from $R_1$ to $I$ is obtained directly from the
timestamps provided by the DW1000, while for $R_2$ it is estimated as
$d_2 = d_1 + \ensuremath{\Delta d}\xspace$. Therefore, the uncertainty in the distance
offset \ensuremath{\Delta d}\xspace directly translates into an additional ranging error,
shown in Figure~\ref{fig:concurrent-ranging-error} for each responder.
$R_1$ exhibits a mean ranging error $\mu = 3.6$~cm with
$\sigma = 1.8$~cm and a \nth{99} percentile over the absolute error of
only 8~cm. Instead, the ranging error for $R_2$, computed indirectly
via \ensuremath{\Delta d}\xspace, yields $\mu = -15$~cm with $\sigma = 42.67$~cm.
The median of the absolute error of $R_2$ is 31~cm,
while the \nth{25}, \nth{75}, and \nth{99} percentiles
are 16~cm, 58~cm, and 1.18~m, respectively.
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/tosn-d10-24-at-ad-hists}
\caption{Normalized histograms of the time offset \ensuremath{\Delta t}\xspace and corresponding
distance offset \ensuremath{\Delta d}\xspace between the leading CIR pulses from $R_1$ and $R_2$.}
\label{fig:chorus-at-hist}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/tosn-d10-24-rng-hist}
\caption{Normalized histograms of the concurrent ranging error of each responder.}
\label{fig:concurrent-ranging-error}
\end{figure}
\fakeparagraph{Impact of Distance between Responders}
In principle, the results above demonstrate the feasibility of
concurrent ranging and its ability to achieve sub-meter accuracy.
Nevertheless, these results were obtained for a single value of $d_2$.
Table~\ref{table:concurrent-ranging} summarizes the results
obtained by varying this distance as described at the beginning of the
section. We only consider the \textsc{response}\xspace packets successfully sent by
$R_1$, since those received from $R_2$ produce the mismatch mentioned
in~\ref{sec:obs:accuracy}, increasing the error by $\approx$~\ensuremath{\Delta d}\xspace;
we describe a solution to this latter problem in~\ref{sec:reloaded}.
\input{rng-table1}
To automatically detect the direct path of \resp{2},
we exploit our a-priori knowledge of where it
should be located based on \ensuremath{\Delta d}\xspace, and therefore \ensuremath{\Delta t}\xspace.
We consider the slice of the CIR defined by $\ensuremath{\Delta t}\xspace \pm 8$~ns,
and detect the first peak in it,
estimating the direct path as the preceding index with the amplitude
closest to $20\%$ of the maximum amplitude, as described earlier.
To abate false positives, we also enforce the additional constraints
that a peak has a minimum amplitude of 1,500 and that the minimum distance
between peaks is 8~ns.
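A possible realization of this constrained search, sketched in Python
with \texttt{scipy}'s peak finder; the CIR sample spacing and variable
names are illustrative assumptions:
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

T_S_NS = 1.0016        # CIR sampling period [ns], approximate
MIN_AMPLITUDE = 1500   # minimum acceptable peak amplitude
MIN_SEP_NS = 8         # minimum distance between peaks [ns]

def find_r2_peak(cir_amp, fp_index, dt_expected_ns, guard_ns=8):
    # Slice the CIR around the predicted R2 position
    # (dt_expected +/- guard) and return its first valid peak.
    center = fp_index + int(round(dt_expected_ns / T_S_NS))
    guard = int(round(guard_ns / T_S_NS))
    lo = max(0, center - guard)
    hi = center + guard + 1
    peaks, _ = find_peaks(cir_amp[lo:hi],
                          height=MIN_AMPLITUDE,
                          distance=max(1, round(MIN_SEP_NS / T_S_NS)))
    return lo + int(peaks[0]) if peaks.size else None
\end{verbatim}
The direct path is then placed before the returned peak with the $20\%$
rule described above.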
As shown in Table~\ref{table:concurrent-ranging}, the distance to
$R_1$ is estimated with an average error $\mu<9$~cm and
$\sigma < 10$~cm for all tested $d_2$ distances. The \nth{99}
percentile absolute error is always $< 27$~cm. These results are in
line with those obtained in~\ref{sec:obs:accuracy}. As for $R_2$, we
observe that the largest error of the estimated $\Delta d$, and of
$d_2$, is obtained for the shortest distance $d_2 = 4.8$~m. In this
particular setting, the pulses from both responders are very close and
may even overlap in the CIR, increasing the resulting error,
$\mu=-43$~cm for $d_2$. The other distances exhibit $\mu\leq 25$~cm.
We observe that the error is significantly lower with
$\Delta d \geq 4$~m, achieving $\nth{75} <60$~cm for $d_2$. Similarly,
for all $\Delta d \geq 4$~m except $\Delta d = 5.6$~m, the \nth{99}
percentile is $< 1$~m. These results confirm that concurrent ranging
can achieve sub-meter ranging accuracy, as long as the distance
$\Delta d$ between responders is sufficiently large.
\fakeparagraph{Summary}
Concurrent ranging can achieve sub-meter accuracy, but requires
\begin{inparaenum}[\itshape i)]
\item a sufficiently large difference $\Delta d$ in distance (or
\ensuremath{\Delta t}\xspace in time) among concurrent responders, to distinguish the
responders' first paths within the CIR, and
\item a successful receipt of the \textsc{response}\xspace packet from the closest
responder; otherwise, the responder identity mismatch increases
the ranging error by $\approx \Delta d$.
\end{inparaenum}
\subsection{Performance with Static Targets}
\label{sec:crng-exp-static}
We report the results from experiments in a $6.4 \times 6.4$~m$^2$
area inside our office building, using 6~concurrent responders that
serve as localization anchors. We place the initiator in
18~different positions and collect 500~CIR signals at each of them,
amounting to 9,000~signals.
The choice of initiator positions is key to our analysis. As shown in
Figure~\ref{fig:crng-tosn:err-ellipse-disi}, we split the 18~positions
in two halves with different purposes. The 9~positions in the center
dashed square are representative of the positions of interest for most
applications, as they are farther from walls and enjoy the best
coverage w.r.t.\ responders, when these serve as anchors. Dually, the
remaining 9~positions can be regarded as a stress test of sorts. They
are very close to walls, yielding significant MPC; this is an issue
with conventional SS-TWR\xspace but is exacerbated in concurrent ranging\xspace, as it increases
the possibility of confusing MPC with the direct paths of
responders. Further, these positions are at the edge of the area
delimited by anchors, therefore yielding a more challenging geometry
for localization.
Hereafter, we refer to these two sets of
positions as \textsc{center}\xspace and \textsc{edge}\xspace, respectively, and analyze the
performance in the common case represented by \textsc{center}\xspace as well as in
the more challenging, and somewhat less realistic, case where
\emph{all} positions are considered.
In each position, we measure the ranging and localization performance
of concurrent ranging\xspace with both our ToA\xspace estimation algorithms
(\ref{sec:crng-tosn:toa-est}) and compare it, in the same setup,
against the performance of the two SS-TWR\xspace variants we consider.
\fakeparagraph{Ranging Accuracy}
Figure~\ref{fig:crng-tosn:center-rng-err-cdf} shows the CDF of the
ranging error \mbox{$\hat{d}_i - d_i$} obtained with concurrent ranging\xspace and SS-TWR\xspace
in \textsc{center}\xspace positions; Table~\ref{tab:rng-err-center} offers an
alternate view by reporting the values of the metrics we consider
(\ref{sec:exp-metrics}).
The performance of concurrent ranging\xspace in this setting, arguably the one of interest
for most applications, is remarkable and in line with the one of
SS-TWR\xspace. All variants achieve a similar centimeter-level median and
average error. Although SS-TWR\xspace exhibits a smaller $\sigma$, both concurrent ranging\xspace
and SS-TWR\xspace achieve decimeter-level precision. This is also reflected
in the absolute error, which is nonetheless very small. Both variants
of concurrent ranging\xspace achieve $\nth{99} = 28$~cm, only a few cm higher than plain
SS-TWR\xspace, while its drift compensated variant achieves a lower
$\nth{99} = 18$~cm. The latter SS-TWR\xspace variant is the technique that,
as expected, achieves the best results across the board.
Nevertheless, concurrent ranging\xspace measures the distance to the $N=6$
responders concurrently, reducing the number of two-way exchanges
from~6 to~1, therefore providing a significant reduction
in channel utilization and other evident benefits in terms
of latency, energy, and scalability.
Interestingly, the difference in accuracy and precision
between the two concurrent ranging\xspace variants considered is essentially negligible.
\begin{figure}[!t]
\centering
\subfloat[\textsc{center}\xspace positions.\label{fig:crng-tosn:center-rng-err-cdf}]{
\includegraphics{figs/disi-crng-disterr-cdf-t9.pdf}
}
\subfloat[All positions.\label{fig:crng-tosn:all-rng-err-cdf}]{
\includegraphics{figs/disi-crng-disterr-cdf-t20.pdf}
}
\caption{CDF of ranging error with static positions.}
\label{fig:crng-tosn:center-err-hist-disi}
\end{figure}
\begin{table}
\centering
\caption{Ranging error comparison across the 9 \textsc{center}\xspace positions
considered.}
\label{tab:rng-err-center}
\begin{tabular}{l ccc ccccc}
\toprule
& \multicolumn{3}{c}{ $\hat{d}_i - d_i$ [cm]}
& \multicolumn{5}{c}{ $|\hat{d}_i - d_i|$ [cm]}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-9}
{\bfseries Scheme} & Median & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 0.4 & 0.3 & 11.9 & 8 &14 & 19 &21 & 28\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 0.7 & 0.5 & 11.7 & 7 &12 & 18 &21 & 28\\
SS-TWR\xspace & -1.7 & -0.5 & 8.6 &5 &9 & 15 &19 & 22\\
SS-TWR\xspace Compensated & -0.3 & -0.3 & 6.9 &4 &8 & 12 &14 & 18\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Ranging error comparison across the 18 static positions
considered (both \textsc{center}\xspace and \textsc{edge}\xspace).}
\label{tab:rng-err-all}
\begin{tabular}{l ccc ccccc}
\toprule
& \multicolumn{3}{c}{ $\hat{d}_i - d_i$ [cm]}
& \multicolumn{5}{c}{ $|\hat{d}_i - d_i|$ [cm]}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-9}
{\bfseries Scheme} & Median & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 0.4 & 2.0 & 17.7 &9 &15 & 21 &28 & 81\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 0.1 & 3.1 & 20.4 &8 &14 & 23 &44 & 91\\
SS-TWR\xspace & 1.5 & 2.1 & 8.8 &6 &10 & 16 &19 & 23\\
SS-TWR\xspace Compensated & 0.4 & 0.2 & 6.9 &5 &8 & 12 &14 & 18\\
\bottomrule
\end{tabular}
\end{table}
Figure~\ref{fig:crng-tosn:all-rng-err-cdf} shows instead the CDF of
the ranging error across all positions, i.e.,\ both \textsc{center}\xspace and
\textsc{edge}\xspace, while Table~\ref{tab:rng-err-all} shows the values of the
metrics we consider. The difference in accuracy between the two concurrent ranging\xspace
variants is still negligible in aggregate terms, but slightly worse
for S{\footnotesize \&}S\xspace when considering the absolute error; this is balanced by a
higher reliability w.r.t.\ the threshold-based variant, as discussed
later. In general, the accuracy of concurrent ranging\xspace is still comparable to the
\textsc{center}\xspace case in terms of median and average error, although with
slightly worse precision. This is also reflected in the absolute
error, which remains very small and essentially the same as in the
\textsc{center}\xspace case up to the \nth{75} percentile, but reaches
$\nth{99} = 91$~cm with S{\footnotesize \&}S\xspace. In contrast, the performance of both
variants of SS-TWR\xspace is basically unaltered.
These trends can also be observed in the alternate view of
Figure~\ref{fig:crng-tosn:err-hist-disi}, based on normalized
histograms. The distributions of concurrent ranging\xspace and SS-TWR\xspace are similar,
although the latter is slightly narrower. Nevertheless, concurrent ranging\xspace has a
small tail of positive errors, not present in SS-TWR\xspace, yielding higher
values of $\sigma$ and of the \nth{90} and higher percentiles in
Table~\ref{tab:rng-err-all}. Further, these tails are absent
in the \textsc{center}\xspace case, whose distribution is otherwise essentially
the same and is therefore not shown due to space limitations.
\begin{figure}[!t]
\centering
\subfloat[Threshold-based ToA\xspace estimation.\label{fig:crng-tosn:err-hist-th}]{
\includegraphics{figs/crng-th-t20-disi-disterr-hist.pdf}
}
\subfloat[S{\footnotesize \&}S\xspace with $K = 3$ iterations.\label{fig:crng-tosn:err-hist-ssr3}]{
\includegraphics{figs/crng-ssr3-t20-disi-disterr-hist.pdf}
}\\
\subfloat[SS-TWR\xspace.\label{fig:crng-tosn:err-hist-sstwr}]{
\includegraphics{figs/sstwr-t20-disi-disterr-hist.pdf}
}
\subfloat[SS-TWR\xspace with drift compensation.\label{fig:crng-tosn:err-hist-sstwr-drift}]{
\includegraphics{figs/sstwr-drift-t20-disi-disterr-hist.pdf}
}
\caption{Normalized histogram of ranging error across all 18 static
positions (both \textsc{center}\xspace and \textsc{edge}\xspace).}
\label{fig:crng-tosn:err-hist-disi}
\end{figure}
This is to be ascribed to \textsc{edge}\xspace positions, in which the initiator
\begin{inparaenum}[\itshape i)]
\item is next to a wall suffering from closely-spaced and strong MPC
next to the direct path, and
\item is very close to one or two anchors and far from the others,
resulting in significantly different power loss across responses.
\end{inparaenum}
This setup sometimes causes the direct path of some responses to be
buried in MPC noise or even unable to cross the noise threshold
$\mathcal{T}$\xspace. As a result, our ToA\xspace algorithms erroneously select one
of the MPC peaks as the first path, yielding an incorrect distance
estimate. Nevertheless, as mentioned, the absolute error remains
definitely acceptable with both the threshold-based and S{\footnotesize \&}S\xspace ToA\xspace
algorithms.
\fakeparagraph{Localization Accuracy}
Figure~\ref{fig:crng-tosn:err-ellipse-disi} shows the localization
error and $3\sigma$ ellipses for each initiator position and both ToA\xspace
estimation algorithms, while
Tables~\ref{tab:loc-err-center} and~\ref{tab:loc-err-all} show the values
of the metrics we consider.
Consistently with the analysis of ranging accuracy, the standard
deviation $\sigma$ for concurrent ranging\xspace is significantly lower in the \textsc{center}\xspace
positions than in the \textsc{edge}\xspace ones. This is a consequence of the
distance overestimation we observed, which causes larger ellipses and
a small bias w.r.t.\ the true position in a few \textsc{edge}\xspace
positions. Interestingly, both ToA\xspace algorithms underperform in the
same positions, although sometimes with different effects, e.g.,\ in
positions $(1.6,-3.2)$ and $(3.2,-1.6)$.
The difference between SS-TWR\xspace and concurrent ranging\xspace is also visible in the longer
tails of the localization error CDF (Figure~\ref{fig:crng-tosn:cdf-disi-all}),
where it is further exacerbated by the fact that, in our setup,
the worst-case \textsc{edge}\xspace positions are \emph{as many as}
the common-case \textsc{center}\xspace ones.
Nevertheless, even in this challenging case,
Table~\ref{tab:loc-err-all} shows that concurrent ranging\xspace still achieves
decimeter-level accuracy, with the median\footnote{As the localization
error is always positive, unlike the ranging error, the median is
the same as the \nth{50} percentile.} nearly the same as plain
SS-TWR\xspace. The error is also quite small; $\nth{75}\leq 17$~cm
and $\nth{99} \leq 57$~cm, with the threshold-based approach
performing marginally better than S{\footnotesize \&}S\xspace, as in ranging. However, the
drift compensated SS-TWR\xspace is still the most accurate and precise.
\begin{figure}[!t]
\centering
\subfloat[Threshold-based ToA\xspace estimation.\label{fig:crng-err-ellipse-th}]{
\includegraphics{figs/crng-err-ellipses-demo-th.png}
}
\subfloat[S{\footnotesize \&}S\xspace with $K = 3$ iterations.\label{fig:crng-err-ellipse-ssr}]{
\includegraphics{figs/crng-err-ellipses-demo-ssr3.png}
}
\caption{$3\sigma$ error ellipses with concurrent ranging\xspace and six concurrent responders.
Blue dots represent position estimates, brown crosses are anchors.
The dashed light red square denotes the positions of interest.}
\label{fig:crng-tosn:err-ellipse-disi}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[\textsc{center}\xspace positions.\label{fig:crng-tosn:cdf-disi-center}]{
\includegraphics{figs/disi-crng-loc-cdf-t9.pdf}
}
\subfloat[All positions.\label{fig:crng-tosn:cdf-disi-all}]{
\includegraphics{figs/disi-crng-loc-cdf-t20.pdf}
}
\caption{CDF of localization error in static positions.}
\label{fig:crng-tosn:cdf-disi}
\end{figure}
The gap with SS-TWR\xspace further reduces in the more common
\textsc{center}\xspace positions, where the accuracy of concurrent ranging\xspace is very high,
as shown in Figure~\ref{fig:crng-tosn:err-ellipse-disi} and
Figure~\ref{fig:crng-tosn:cdf-disi-center}. Position estimates are
also quite precise, with $\sigma \leq 5$~cm. Further, the error
remains $\leq 16$~cm in $95\%$ of the cases, regardless of the ToA\xspace
estimation technique; the threshold-based and S{\footnotesize \&}S\xspace ToA\xspace algorithms
show only a marginal difference, with a \nth{99} percentile of $21$~cm
and $30$~cm, respectively.
\begin{table}
\centering
\caption{Localization error comparison across the 9 \textsc{center}\xspace positions
considered.}
\label{tab:loc-err-center}
\begin{tabular}{l cc ccccc}
\toprule
& \multicolumn{7}{c}{$\norm{\mathbf{\hat p} - \mathbf{p_r}}$ [cm]}\\
\cmidrule(lr){2-8}
{\bfseries Scheme} & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 9 & 4.9 &8 &12 & 14 &16 & 21\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 8.8 & 5 &8 &11 & 14 &16 & 30\\
SS-TWR\xspace & 6.9 & 2.7 &7 &9 & 10 &11 & 12\\
SS-TWR\xspace Compensated & 4.1 & 2.3 &4 &6 & 8 &8 & 10\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Localization error comparison across the 18 static positions
considered (both \textsc{center}\xspace and \textsc{edge}\xspace).}
\label{tab:loc-err-all}
\begin{tabular}{l cc ccccc}
\toprule
& \multicolumn{7}{c}{$\norm{\mathbf{\hat p} - \mathbf{p_r}}$ [cm]}\\
\cmidrule(lr){2-8}
{\bfseries Scheme} & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 12.9 & 11 &10 &14 & 28 &41 & 51\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 14.5 & 12.6 &10 &17 & 33 &42 & 57\\
SS-TWR\xspace & 8.6 & 3.4 &9 &11 & 13 &14 & 16\\
SS-TWR\xspace Compensated & 5 & 2.4 &5 &7 & 8 &9 & 11\\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Success Rate} Across the 9,000 CIR signals considered
in this section, concurrent ranging\xspace is able to extract a position estimate in 8,663
and 8,973 of them using our threshold-based and S{\footnotesize \&}S\xspace ToA\xspace estimation,
respectively, yielding a remarkable localization success rate of
$96.25\%$ and $99.7\%$. Across the successful estimates, 6~samples
included very large errors $\geq 10$~m. These could be easily
discarded with common filtering techniques~\cite{ukf-julier}. In the
\textsc{center}\xspace positions of interest, the localization success rate with
both ToA\xspace techniques yields $99.7\%$.
Threshold-based ToA\xspace estimation is more susceptible to strong and late
MPC occurring at the beginning of the following CIR chunk, which
result in invalid distance estimates that are therefore discarded,
reducing the success rate.
As for S{\footnotesize \&}S\xspace, of the $27$ signals failing to provide an estimate, $21$
are caused by PHR errors where the DW1000 does not update the RX
timestamp. In the remaining 6~signals, S{\footnotesize \&}S\xspace was unable to detect the
first or last responder; these signals were therefore discarded, to
avoid a potential responder mis-identification
(\ref{sec:crng-tosn:toa-est}).
Regarding ranging, threshold-based estimation yields a success rate of
$95.98\%$ across the 54,000 expected estimates, while S{\footnotesize \&}S\xspace reaches
$99.58\%$, in line with the localization success rate.
\section{Evaluation}
\label{sec:crng-tosn:eval}
We evaluate our concurrent ranging\xspace prototype, embodying the techniques illustrated
in~\ref{sec:reloaded}. We begin by describing our experimental setup
(\ref{sec:crng-exp-setup}) and evaluation metrics
(\ref{sec:exp-metrics}). Then, we evaluate our
TX scheduling (\ref{sec:crng-tosn:exp-tx-comp}), confirming its
ability to achieve sub-ns precision. This is key to improve the
accuracy of ranging and localization, which we evaluate in
static positions (\ref{sec:crng-exp-static}) and via trajectories
generated by a mobile robot in an OptiTrack facility
(\ref{sec:optitrack}).
\subsection{Experimental Setup}\label{sec:crng-exp-setup}
We implemented concurrent ranging\xspace atop Contiki OS~\cite{contikiuwb} using the
EVB1000 platform~\cite{evb1000} as in~\ref{sec:questions}.
\fakeparagraph{UWB Radio Configuration} In all experiments, we set the
DW1000 to use channel~7 with center frequency $f_c = 6489.6$~MHz and
$900$~MHz receiver bandwidth. We use the shortest preamble length of
64~symbols with preamble code~17, the highest $PRF = 64~$MHz, and the
highest 6.8~Mbps data rate. Finally, we set the response delay
$\ensuremath{T_\mathit{RESP}}\xspace = $~\SI{800}{\micro\second} to provide enough time to compensate
for the TX scheduling uncertainty (\ref{sec:crng-tosn:txfix}).
\fakeparagraph{Concurrent Ranging Configuration}
Table~\ref{tab:crng-tosn:parameters} summarizes the default values of
concurrent ranging\xspace parameters. The time shift $\ensuremath{T_\mathit{ID}}\xspace = 128$~ns for \textsc{response}\xspace
identification (\ref{sec:crng-tosn:resp-id}) corresponds to a distance
of $38.36$~m, sufficiently larger than the maximum distance difference
($\approx 12$~m) among anchors in our setups. For ToA\xspace estimation
(\ref{sec:crng-tosn:toa-est}), we use a noise threshold
$\mathcal{T} = 11 \times \ensuremath{\sigma_n}\xspace$, computed as described
in~\ref{sec:crng-tosn:cir-proc}, and $\ensuremath{K}\xspace = 3$ iterations per
CIR chunk of the S{\footnotesize \&}S\xspace algorithm.
\begin{table}[!t]
\centering
\caption{Main parameters of concurrent ranging with default values.}
\label{tab:crng-tosn:parameters}
\begin{tabular}{llr}
\toprule
{\bfseries Symbol} & {\bfseries Description} & {\bfseries Default Value}\\
\midrule
$L$ & CIR upsampling factor & 30\\
$\ensuremath{T_\mathit{ID}}\xspace$ & Time shift for response identification & 128~ns\\
$\xi$ & Noise threshold for CIR re-arrangement & 0.14\\
$W$ & Window length for CIR re-arrangement & 228~samples\\
$\mathcal{T}$ & Noise threshold for ToA\xspace estimation algorithm & $11\times \ensuremath{\sigma_n}\xspace$\\
$\ensuremath{K}\xspace$ & Iterations (max. number of paths) of the S{\footnotesize \&}S\xspace ToA\xspace algorithm & 3\\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Infrastructure}
We run our experiments with a mobile testbed infrastructure
we deploy in the target environment.
Each testbed node consists of an EVB1000~\cite{evb1000} connected via USB to a
Raspberry~Pi (RPi)~v3, equipped with an ST-Link programmer enabling
firmware uploading. Each RPi reports its serial data via WiFi to a
server, which stores it in a log file.
Although our prototype supports runtime positioning,
hereafter we run our analysis offline.
In each test, we collect TX information from anchors and RX
information diagnostics and CIR signals from the initiator. We
collect a maximum of 8~CIR signals per second, as this requires
reading over SPI, logging over USB, and transmitting over WiFi the
4096B accumulator buffer (CIR) together with the rest of the measurements.
\fakeparagraph{Baseline: SS-TWR with and without Clock Drift Compensation}
We compare the performance of concurrent ranging\xspace against the
commonly-used SS-TWR scheme (\ref{sec:soa-sstwr}). We implemented
it for the EVB1000 platform atop Contiki OS using a response delay
$\ensuremath{T_\mathit{RESP}}\xspace = \SI{320}{\micro\second}$ to minimize the impact of clock
drift. Moreover, we added the possibility to compensate for the
estimated clock drift at the initiator based on the carrier frequency
offset (CFO) measured during the \textsc{response}\xspace packet RX as suggested by
Decawave\xspace~\cite{dw-cfo, dw1000-sw-api}. Hence, our evaluation results also
serve to quantitatively demonstrate the benefits brought by this
recent clock drift compensation mechanism.
As for localization, we perform a SS-TWR\xspace exchange every 2.5~ms against
the $N$ responders deployed, in round-robin, yielding an estimate of
the initiator position every $N \times 2.5$~ms. We use the exact same
RF configuration as in concurrent ranging\xspace, for comparison.
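To make the compensation concrete, the sketch below shows the principle
as we understand it from~\cite{dw-cfo}: the initiator rescales the
response delay by the responder's relative clock offset, estimated from
the CFO. Variable names and the sign convention are our own assumptions,
not Decawave\xspace's code.
\begin{verbatim}
C = 299792458.0  # speed of light [m/s]

def sstwr_distance(t_rtt, t_resp, drift_ppm=0.0):
    # t_rtt: round-trip time measured at the initiator [s]
    # t_resp: response delay counted by the responder clock [s]
    # drift_ppm: responder clock offset w.r.t. the initiator,
    #   e.g., derived from the CFO measured during RESPONSE RX
    #   (positive if the responder clock runs slow)
    t_resp_i = t_resp * (1.0 + drift_ppm * 1e-6)
    tof = (t_rtt - t_resp_i) / 2.0
    return C * tof
\end{verbatim}
The magnitude involved explains the benefit: a residual drift of only
1~ppm over $\ensuremath{T_\mathit{RESP}}\xspace = \SI{320}{\micro\second}$ already
corresponds to $\approx 4.8$~cm of ranging error.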
\subsection{Metrics}
\label{sec:exp-metrics}
Our main focus is on assessing the ranging and localization accuracy
of concurrent ranging\xspace in comparison with SS-TWR\xspace. Therefore, we consider the
following metrics, for which we report the median, average $\mu$, and
standard deviation $\sigma$, along with various percentiles of the
absolute values:
\begin{itemize}
\item \emph{Ranging Error.} We compute it w.r.t.\ each responder $R_i$ as
$\hat{d}_{i} - d_{i}$, where $\hat{d}_{i}$ is the distance estimated
and $d_{i}$ is the known distance.
\item \emph{Localization Error.} We compute the absolute positioning
error as $\norm{\mathbf{\hat p} - \mathbf{p_r}}$, where
$\mathbf{\hat p}$ is the initiator position estimate and
$\mathbf{p_r}$ its known position.
\end{itemize}
Moreover, we also consider the \emph{success rate}
as a measure of the reliability and robustness of concurrent ranging\xspace in real
environments. Specifically, we define the \textit{ranging success
rate} to responder \resp{i} and the \textit{localization success
rate} as the fraction of CIR signals where, respectively, we are able to
\begin{inparaenum}[\itshape i)]
\item measure the distance $d_i$ from the initiator to \resp{i} and
\item obtain enough information \mbox{($\geq 3$ ToA\xspace estimates)} to
compute the initiator position $\mathbf{\hat p}$.
\end{inparaenum}
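A minimal numpy sketch of how these statistics can be computed from a
batch of estimates (array names are illustrative):
\begin{verbatim}
import numpy as np

def ranging_stats(d_hat, d_true):
    # Signed error statistics plus percentiles of the absolute
    # error, as reported in the tables.
    err = np.asarray(d_hat) - np.asarray(d_true)
    a = np.abs(err)
    return {"median": np.median(err),
            "mu": np.mean(err), "sigma": np.std(err),
            **{f"p{q}": np.percentile(a, q)
               for q in (50, 75, 90, 95, 99)}}

def loc_errors(p_hat, p_true):
    # Euclidean localization error ||p_hat - p_true|| per sample.
    return np.linalg.norm(np.asarray(p_hat) - np.asarray(p_true),
                          axis=-1)
\end{verbatim}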
\subsection{What about More Responders?}
\label{sec:cir-multiple}
We conclude the experimental campaign with our strawman implementation
by investigating the impact of more than two concurrent responders,
and their relative distance, on \ensuremath{\mathit{PRR}}\xspace and ranging accuracy. If
multiple responders are at a similar distance from the initiator,
their pulses are likely to overlap in the CIR, hampering the
discrimination of their direct paths from MPC. Dually, if the distance
between the initiator and the nearest responder is much smaller than
that to the others, power loss may render the transmissions of farther responders too
faint to be detected at the initiator, due to the interference from
those of the nearest responder.
To investigate these aspects, we run experiments with five concurrent
responders arranged in a line (Figure~\ref{fig:five-responders-setup}),
for which we change the inter-node distance $d_i$.
For every tested $d_i$, we repeat the experiment until we obtain
500~successfully received \textsc{response}\xspace packets, as done earlier.
\begin{figure}[!b]
\centering
\begin{tikzpicture}[xscale=0.6]%
\tikzstyle{device} = [circle, thick, draw=black, fill=gray!30, minimum size=8mm]
\node[device] (a) at (0, 0) {$I$};
\node[device] (b) at (3, 0) {$R_1$};
\node[device] (c) at (6, 0) {$R_2$};
\node[device] (d) at (9, 0) {$R_3$};
\node[device] (e) at (12, 0) {$R_4$};
\node[device] (f) at (15, 0) {$R_5$};
\draw[<->, thick] (a) -- (b) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (b) -- (c) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (c) -- (d) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (d) -- (e) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (e) -- (f) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (0, -.6) -- (15, -.6) node[pos=.5,below]{$D = 5 \times d_i$};
\end{tikzpicture}
\caption{Experimental setup to analyze the CIR resulting from five
concurrent responders (\ref{sec:cir-multiple}).}
\label{fig:five-responders-setup}
\end{figure}
\fakeparagraph{Dense Configuration} We begin by examining a very short
$d_i = 0.4$~m, yielding similar distances between each
responder and the initiator. In this setup, the overall
${\overline{\ensuremath{\mathit{PRR}}\xspace} = 99.36}$\%.
Nevertheless, recall that a time-of-flight difference of 1~ns
translates into a difference of $\approx30$~cm in distance and that
the duration of a UWB pulse is $\leq 2$~ns; pulses from neighboring
responders are therefore likely to overlap, as shown by the CIR in
Figure~\ref{fig:cir-d0-5tx}. Although we can visually observe
different peaks, discriminating the ones associated with responders from
those caused by MPC is very difficult, if not impossible, in the absence
of a-priori knowledge about the number of concurrent responders and/or
the environment characteristics. Even when these are present, in some
cases the CIR shows a wider pulse that ``fuses'' the pulses of one or
more responders with MPC. In essence, when the difference in distance
$\ensuremath{\Delta d}\xspace = d_i$ among responders is too small, concurrent ranging
cannot be applied with the strawman technique we employed thus far; we
address this problem in~\ref{sec:reloaded}.
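A back-of-the-envelope check makes the problem evident. With the
responders on a line, consecutive pulses arrive $2 d_i / c$ apart at the
initiator; the sketch below compares this offset against the $\leq 2$~ns
pulse duration mentioned above (the extra 1~ns separation margin is an
arbitrary assumption of ours):
\begin{verbatim}
C = 299792458.0   # speed of light [m/s]
PULSE_NS = 2.0    # approximate UWB pulse duration [ns]

def neighbor_offset_ns(d_i):
    # Arrival offset between consecutive responders on the line,
    # each d_i farther away: dt = 2 * d_i / c.
    return 2.0 * d_i / C * 1e9

for d_i in (0.4, 2.0, 6.0):
    dt = neighbor_offset_ns(d_i)
    ok = dt > PULSE_NS + 1.0
    print(f"d_i = {d_i} m -> dt = {dt:.1f} ns, "
          f"{'separable' if ok else 'overlap likely'}")
\end{verbatim}
For $d_i = 0.4$~m the offset is only $\approx 2.7$~ns, barely above the
pulse width; at $d_i = 2$~m the pulses are separable in principle but, as
observed next, still suffer from the MPC tails of earlier pulses.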
\begin{figure}[!t]
\centering
\subfloat[$d_i = 0.4$~m. The peaks corresponding to each responder are not
clearly distinguishable; the distance from the initiator cannot be estimated.
\label{fig:cir-d0-5tx}]{
\includegraphics{figs/ewsn/cir-5tx-d0-83.pdf}}
\hfill
\subfloat[$d_i = 6$~m. The peaks corresponding to each responder are clearly
separated; the distance from the initiator can be estimated.\label{fig:cir-d15-5tx}]{
\includegraphics{figs/ewsn/cir-5tx-d15-17.pdf}}
\caption{Impact of the relative distance $d_i$ among 5~responders, analyzed
via the corresponding CIR.}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/crng-tosn-5tx-cdf.pdf}
\caption{Impact of the relative distance $d_i$ among 5 responders: CDF of absolute ranging error.}
\label{fig:crng-ewsn-5tx-cdf}
\end{figure}
\fakeparagraph{Sparser Configurations: \ensuremath{\mathit{PRR}}\xspace} We now explore
${2 \leq d_i \leq 10}$~m, up to a maximum distance $D~=~50$~m
between the initiator $I$ and the last responder \resp{5}.
The experiment achieved an overall ${\overline{\ensuremath{\mathit{PRR}}\xspace} = 96.59}$\%, with the minimum $\overline{\ensuremath{\mathit{PRR}}\xspace} = 88.2\%$ for the maximum $d_i = 10$~m,
and the maximum ${\overline{\ensuremath{\mathit{PRR}}\xspace} = 100}$\% for $d_i = 8$~m.
The closest responder $R_1$ achieved \mbox{$\ensuremath{\mathit{PRR}}\xspace_1 =90.56$\%}.
The \ensuremath{\mathit{PRR}}\xspace of the experiment is remarkably
high, considering that in narrowband technologies
increasing the number of concurrent transmitters sending different packets
typically decreases reliability due to the nature of the
capture effect~\cite{chaos,crystal}. In general, the
behavior of concurrent transmissions in UWB is slightly
different---and richer---than in narrowband; the interested reader
can find a more exhaustive treatment in~\cite{uwb-ctx-fire}. In this
specific case, the reason for the high \ensuremath{\mathit{PRR}}\xspace we observed is that $R_1$
is closer to the initiator than the other responders.
\fakeparagraph{Sparser Configurations: Ranging Error}
Figure~\ref{fig:crng-ewsn-5tx-cdf} shows the CDF of the ranging error
for all distances and responders. We use the same technique
of~\ref{sec:cir-enough} to detect the direct paths and, similarly,
only consider the exchanges (about 90\% in this case) where the
successfully received \textsc{response}\xspace is from the nearest responder
$R_1$, to avoid a mismatch (\ref{sec:obs:accuracy}).
We observe the worst performance for $d_i = 2$~m; peaks from different
responders are still relatively close to each other and affected by
the MPC of previously transmitted pulses. Instead,
Figure~\ref{fig:cir-d15-5tx} shows an example CIR for $d_i = 6$~m, the
intermediate value in the distance range considered. Five distinct
peaks are clearly visible, enabling the initiator to estimate the
distance to each responder. The time offset \ensuremath{\Delta t}\xspace between two
consecutive peaks is similar, as expected, given the same distance
offset $\ensuremath{\Delta d}\xspace = d_i$ between two neighboring responders. This yields
good sub-meter ranging accuracy for all $d_i \geq 4$~m, for which
the average error is $\mu\leq 40$~cm and the absolute error
$\nth{75}\leq 60$~cm.
\fakeparagraph{Summary} These results confirm that sub-meter
concurrent ranging is feasible even with multiple responders. However,
ranging accuracy is significantly affected by the relative distance
between responders, which limits practical applicability.
\subsubsection{Time of Arrival Estimation}
\label{sec:crng-tosn:toa-est}
To determine the first path of each responder, we use FFT to upsample
the re-arranged CIR signals by a factor $L = 30$, yielding a
resolution $T_s \approx 33.38$~ps. We then split the CIR into chunks
of length equal to the time shift \ensuremath{T_\mathit{ID}}\xspace used for responder
identification~(\ref{sec:crng-tosn:resp-id}), therefore effectively
separating the signals of each \textsc{response}\xspace. Finally, the actual ToA\xspace
estimation algorithm is applied to each chunk, yielding the CIR index
\ensuremath{\Tau_i}\xspace marking the ToA\xspace of each responder \resp{i}. We consider two
ToA\xspace estimation algorithms:
\begin{itemize}
\item \emph{Threshold-based.} This commonly-used algorithm simply
  places the first path at the first index $i$ whose sampled
  amplitude satisfies $A_i > \mathcal{T}$, where $\mathcal{T}$ is the noise
  threshold (\ref{sec:crng-tosn:cir-proc}).
\item \emph{Search and Subtract (S{\footnotesize \&}S\xspace).} This well-known algorithm
has been proposed in~\cite{dardari-toa-estimation}; here, we use our
adaptation~\cite{chorus} to the case of concurrent
transmissions\footnote{Hereafter, we refer to this adaptation simply as
S{\footnotesize \&}S\xspace, for brevity.}. S{\footnotesize \&}S\xspace determines the $K$ strongest
paths, considering \emph{all} signal paths whose peak
amplitude $A_i > \mathcal{T}$. The first path is then
estimated as the one with the minimum
time delay, i.e.,\ minimum index in the CIR chunk.
\end{itemize}
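The sketch below renders both algorithms in Python on an upsampled CIR
chunk. It is a simplified illustration: in particular, our actual
S{\footnotesize \&}S\xspace adaptation uses a matched filter built from the
measured pulse template, which we replace here with plain peak picking
(threshold and $K$ follow the defaults in
Table~\ref{tab:crng-tosn:parameters}).
\begin{verbatim}
import numpy as np
from scipy.signal import resample, find_peaks

L = 30  # upsampling factor

def upsample(chunk_complex):
    # FFT-based upsampling of a (complex) CIR chunk; ToA search
    # then operates on its magnitude.
    return np.abs(resample(chunk_complex, L * len(chunk_complex)))

def toa_threshold(chunk, sigma_n, k=11):
    # First index whose amplitude exceeds T = k * sigma_n.
    above = np.flatnonzero(chunk > k * sigma_n)
    return int(above[0]) if above.size else None

def toa_search_subtract(chunk, sigma_n, k=11, K=3):
    # Simplified S&S: take the K strongest peaks above the noise
    # threshold and return the earliest (minimum-delay) one.
    peaks, props = find_peaks(chunk, height=k * sigma_n)
    if peaks.size == 0:
        return None
    strongest = peaks[np.argsort(props["peak_heights"])[-K:]]
    return int(strongest.min())
\end{verbatim}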
These two algorithms strike different trade-offs w.r.t.\ complexity,
accuracy, and resilience to multipath. The threshold-based algorithm is very
simple and efficient but also sensitive to high noise. For instance,
if a late MPC from a previous chunk appears at the beginning of the next
one with above-threshold amplitude, it is selected as the first
path, yielding an incorrect ToA\xspace estimate. S{\footnotesize \&}S\xspace is more resilient,
as these late MPC from previous responses would need to be stronger
than the $K$ strongest paths from the current chunk to cause a mismatch.
Still, when several strong MPC are in the same chunk, S{\footnotesize \&}S\xspace may incorrectly select
one of them as the first path, especially if the actual first path is weaker than
the MPC. Moreover, S{\footnotesize \&}S\xspace relies on a matched filter, which
\begin{inparaenum}[\itshape i)]
\item requires determining the filter template by measuring the shape
of the transmitted UWB pulses, and
\item increases computational complexity, as $K$ discrete convolutions
must be performed to find the $K$ strongest paths.
\end{inparaenum}
We compare these ToA\xspace estimation algorithms in our evaluation
(\ref{sec:crng-tosn:eval}).
\subsection{From Time to Distance}
\label{sec:crng-tosn:time-dist}
Enabling concurrent ranging\xspace on the DW1000 requires a dedicated algorithm
(\ref{sec:crng-tosn:toa-est}) to estimate the ToA\xspace of each \textsc{response}\xspace
in the CIR. This timing information must then be translated into the
corresponding distances (\ref{sec:crng-tosn:dist-est}), used directly
or in the computation of the initiator position
(\ref{sec:soa:toa-loc}).
\input{toa-est}
\subsubsection{Distance Estimation}
\label{sec:crng-tosn:dist-est}
These ToA\xspace estimation algorithms determine the CIR indexes \ensuremath{\Tau_i}\xspace
marking the direct path of each \textsc{response}\xspace. These, however, are only
\emph{array indexes}; each must be translated into a radio timestamp
marking the \emph{time of arrival} of the corresponding \textsc{response}\xspace, and
combined with other timing information to reconstruct the distance
$d_i$ between initiator and responder.
In~\ref{sec:crng}--\ref{sec:questions} we relied on the fact that the
radio \emph{directly} estimates the ToA\xspace of the first responder
\resp{1} with high accuracy, enabling accurate distance estimation by
using the timestamps embedded in the payload. Then, by looking at the
time difference $\Delta t_{i, 1}$ between the first path
of \resp{1} and another responder $R_i$ we can determine its distance
from the initiator as $d_i = d_1 + c\frac{\Delta t_{i, 1}}{2}$. This
approach assumes that the radio
\begin{inparaenum}[\itshape i)]
\item places the direct path of \resp{1} at the \texttt{FP\_INDEX}\xspace and
\item successfully decodes the \textsc{response}\xspace from \resp{1},
containing the necessary timestamps to accurately determine $d_1$.
\end{inparaenum}
However, the former is not necessarily true
(Figure~\ref{fig:crng-tosn:cir-arrangement}); as for the latter, the
radio may receive the \textsc{response}\xspace packet from any responder or
none. Therefore, we cannot rely on the distance estimation of
\resp{1}.
Interestingly, the compensation technique to eliminate the TX
scheduling uncertainty (\ref{sec:crng-tosn:txfix}) is also key to
enable an alternate approach avoiding these issues and yielding
additional benefits. Indeed, this technique enables TX scheduling with
sub-ns accuracy (\ref{sec:crng-tosn:exp-tx-comp}). Therefore, the
response delay $\ensuremath{T_\mathit{RESP}}\xspace$ and the additional delay $\delta_i$ for
responder identification in concurrent ranging\xspace can be enforced with high accuracy,
without relying on the timestamps embedded in the \textsc{response}\xspace.
In more detail, the time of flight \ensuremath{\tau_\mathit{i}}\xspace from the initiator to
responder \resp{i} is estimated as
\begin{equation}\label{eq:crng-tosn:tof}
\ensuremath{\tau_\mathit{i}}\xspace = \frac{\ensuremath{T_\mathit{RTT,i}}\xspace - \ensuremath{T_\mathit{RESP,i}}\xspace}{2}
\end{equation}
and the corresponding distance as \mbox{$d_i = c \times \ensuremath{\tau_\mathit{i}}\xspace$}. As
shown in Figure~\ref{fig:computedistance}, \ensuremath{T_\mathit{RESP,i}}\xspace is the delay between
the RX of \textsc{poll}\xspace and the TX of \textsc{response}\xspace at responder \resp{i}. This
delay is computed as the sum of three terms
\mbox{$\ensuremath{T_\mathit{RESP,i}}\xspace = \ensuremath{T_\mathit{RESP}}\xspace + \delta_i + \ensuremath{A_\mathit{TX}}\xspace$}, where \ensuremath{T_\mathit{RESP}}\xspace is
the fixed response delay inherited from SS-TWR\xspace (\ref{sec:soa-sstwr}),
\mbox{$\delta_i = (i - 1)\ensuremath{T_\mathit{ID}}\xspace$} is the responder-specific delay
enabling response identification (\ref{sec:crng-tosn:resp-id}), and
\ensuremath{A_\mathit{TX}}\xspace is the known antenna delay obtained in a previous calibration
step~\cite{dw1000-antenna-delay}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.80\textwidth]{figs/crng-dist-comp.png}
\caption{Concurrent ranging time of flight $\tau_i$ computation.
To determine the distance
$d_i = c \times \tau_i$ to responder \resp{i}, we need to accurately measure
the actual \textsc{response}\xspace delay \mbox{$\ensuremath{T_\mathit{RESP,i}}\xspace = \ensuremath{T_\mathit{RESP}}\xspace + \delta_i + \ensuremath{A_\mathit{TX}}\xspace$}
and the round-trip time \ensuremath{T_\mathit{RTT,i}}\xspace of each responder based on our ToA\xspace estimation.}
\label{fig:computedistance}
\end{figure}
\ensuremath{T_\mathit{RTT,i}}\xspace is the round-trip time for responder \resp{i}, measured at the
initiator as the difference between the \textsc{response}\xspace RX timestamp and the
\textsc{poll}\xspace TX timestamp. The latter is accurately determined at the
\texttt{RMARKER}\xspace by the DW1000, in device time units of $\approx 15.65$~ps,
while the former must be extracted from the CIR via ToA\xspace
estimation. Nevertheless, the algorithms
in~\ref{sec:crng-tosn:toa-est} return only the CIR index \ensuremath{\Tau_i}\xspace at
which the first path of responder \resp{i} is estimated; this index
must therefore be translated into a radio timestamp, similar to the TX
\textsc{poll}\xspace one. To this end, we rely on the fact that the precise
timestamp \ensuremath{T_\mathit{FP}}\xspace associated to the \texttt{FP\_INDEX}\xspace in the CIR is
known. Therefore, it serves as the accurate time baseline w.r.t.\ which to
derive the \textsc{response}\xspace RX by
\begin{inparaenum}[\itshape i)]
\item computing the difference $\Delta\ensuremath{\Tau_\mathit{FP,i}}\xspace = \texttt{FP\_INDEX}\xspace - \ensuremath{\Tau_i}\xspace$
between the indexes in the CIR, and
\item obtaining the actual RX timestamp as
$\ensuremath{T_\mathit{FP}}\xspace - T_s\times\Delta\ensuremath{\Tau_\mathit{FP,i}}\xspace$, where $T_s$
is the CIR sampling period after upsampling (\ref{sec:crng-tosn:toa-est}).
\end{inparaenum}
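Putting everything together, a sketch of the per-responder distance
computation; names and unit conversions are illustrative, and
\texttt{fp\_index} must be expressed in the same (upsampled) index space
as the ToA\xspace estimate:
\begin{verbatim}
C = 299792458.0           # speed of light [m/s]
T_ID = 128e-9             # per-responder time shift [s]
T_S = 1.0016e-9 / 30      # CIR sampling period, upsampled [s]

def distance_to(i, toa_idx, fp_index, t_fp, t_poll_tx,
                t_resp, a_tx):
    # t_fp:      RX timestamp associated with FP_INDEX [s]
    # t_poll_tx: POLL TX timestamp at the initiator [s]
    # t_resp:    nominal response delay [s]; a_tx: antenna delay [s]
    t_rx = t_fp - T_S * (fp_index - toa_idx)   # CIR index -> time
    t_rtt = t_rx - t_poll_tx                   # round-trip time
    t_resp_i = t_resp + (i - 1) * T_ID + a_tx  # actual RESP delay
    return C * (t_rtt - t_resp_i) / 2.0        # d_i = c * tau_i
\end{verbatim}
A constant per-deployment offset ($\leq 20$~cm, discussed next) can then
be added to compensate for the systematic underestimation.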
In our experiments, we noticed that concurrent ranging\xspace usually underestimates
distance. This is due to the fact that the responder
estimates the ToA\xspace of \textsc{poll}\xspace with the DW1000
LDE algorithm, while the initiator estimates the ToA\xspace of each \textsc{response}\xspace
with one of the algorithms in~\ref{sec:crng-tosn:toa-est}. For
instance, S{\footnotesize \&}S\xspace measures the ToA\xspace at the beginning of the path, while
LDE measures it at a peak height related to the noise standard
deviation reported by the DW1000. This underestimation is nonetheless
easily compensated by a constant offset ($\leq 20$~cm) whose value
can be determined during calibration at deployment time.
Together, the steps we described enable accurate estimation of the
distance to multiple responders \emph{solely based on the CIR and the
(single) RX timestamp provided by the radio}. In the DW1000,
\begin{inparaenum}[\itshape i)]
\item the CIR is measured and available to the application even if RX
errors occur, and
\item the RX timestamp necessary to translate our ToA\xspace estimates to
radio timestamps is always\footnote{Unless a very rare PHR error
occurs~\cite[p.97]{dw1000-manual-v218}.} updated,
\end{inparaenum}
therefore \emph{making our concurrent ranging\xspace prototype highly resilient to RX
errors}. Finally, the fact that we remove the dependency on
\resp{1} and therefore no longer need to embed/receive any timestamp
enables us to safely \emph{remove the entire payload from \textsc{response}\xspace
packets}. Unless application information is piggybacked on a
\textsc{response}\xspace, this can be composed only of preamble, SFD, and PHR,
reducing the length of the \textsc{response}\xspace packet, and therefore
the latency and energy consumption of concurrent ranging\xspace.
\section{Conclusions}
\label{sec:crng-tosn:conclusions}
In~\cite{crng}, we described the novel concept of concurrent ranging
for the first time in the literature, demonstrated its feasibility,
elicited the open challenges, and outlined the several benefits it
could potentially enable in terms of latency, scalability, update
rate, and energy consumption.
In this paper, we make these benefits a tangible reality. We tackle
the aforementioned challenges with a repertoire of techniques that,
without requiring modifications to off-the-shelf UWB transceivers,
turn concurrent ranging into a practical and immediately available
approach. Concurrent ranging empowers the designers of UWB ranging and
localization systems with a new option whose accuracy is comparable to
conventional techniques, but comes at a fraction of the latency and
energy costs, therefore unlocking application trade-offs hitherto
unavailable for these systems.
\section{Related Work}
\label{sec:relwork}
We place concurrent ranging in the context of other UWB ranging
schemes (\ref{sec:relwork:twr}), the literature on concurrent
transmissions in low-power wireless communications
(\ref{sec:relwork:glossy}), and techniques
that build upon the work~\cite{crng} in which we introduced the notion
of concurrent ranging for the first time (\ref{sec:relwork:crng}).
\subsection{Other UWB Ranging Schemes}
\label{sec:relwork:twr}
Although SS-TWR\xspace is a simple and popular scheme for UWB, several others
exist, focusing on improving different aspects of its operation.
A key issue is the linear relation between the ranging error and the
clock drift (\ref{sec:toa}). Some approaches extend SS-TWR\xspace by
\emph{adding} an extra packet from the initiator to the
responder~\cite{polypoint} or from the responder to the
initiator~\cite{ethz-sstwr-drift}. The additional packet
enables clock drift compensation.
Instead, double-sided two-way ranging (DS-TWR\xspace), also part of the
IEEE~802.15.4\xspace standard~\cite{std154}, includes a third packet from the
initiator to the responder in reply to its \textsc{response}\xspace, yielding a more
accurate distance estimate at the responder; a fourth, optional
packet back to the initiator relays the estimate to it. In the
classic \emph{symmetric} scheme~\cite{dstwr}, the response delay \ensuremath{T_\mathit{RESP}}\xspace
for the \textsc{response}\xspace is the same for the third packet from initiator to
responder.
This constraint reduces flexibility and increases development
complexity~\cite[p. 225]{dw1000-manual-v218}. In the alternative
\emph{asymmetric} scheme proposed by Decawave\xspace~\cite{dw-dstwr-patent,
dw-dstwr}, instead, the error does not depend on the delays of the
two packets; further, the clock drift is reduced to picoseconds,
making ToA estimation the main source of
error~\cite{dw1000-manual-v218}. However, DS-TWR\xspace has significantly
higher latency and energy consumption, requiring up to $4\times N$
packets (twice than SS-TWR\xspace) to measure the distance to $N$ nodes at
the initiator. We are currently investigating if and how concurrent
ranging can be extended towards DS-TWR\xspace.
PolyPoint~\cite{polypoint} and SurePoint~\cite{surepoint} improve
ranging and localization by using a custom-designed multi-antenna
hardware platform. These schemes exploit antenna and channel diversity,
yielding more accurate and reliable estimates; however,
this comes at the cost of a significantly higher latency
and energy consumption, decreasing scalability and battery lifetime.
Other schemes have instead targeted directly a reduction of the
packet overhead. The \emph{one-way ranging} in~\cite{ethz-one-way}
exploits \emph{sequential} transmissions from anchors to enable mobile
nodes to passively self-position, by precisely estimating the
time of flight and the clock drift. However, the update rate and
accuracy decrease as the number $N$ of anchors increases. Other
schemes replace the unicast \textsc{poll}\xspace of SS-TWR\xspace with a \emph{broadcast}
one, as in concurrent ranging. In N-TWR~\cite{ntwr}, responders send
their \textsc{response}\xspace \emph{sequentially}, to avoid collisions, reducing the
number of packets exchanged to $N + 1$.
An alternate scheme by Decawave\xspace~\cite[p.~227]{dw1000-manual-v218}
exploits a broadcast \textsc{poll}\xspace in asymmetric DS-TWR\xspace, rather than SS-TWR\xspace,
reducing the packet overhead to $2 + N$ or $2(N + 1)$ depending on
whether estimates are obtained at the responders or the initiator,
respectively.
In all these schemes, however, the number of packets required grows
linearly with $N$,
limiting scalability. In contrast, concurrent ranging measures the
distance to the $N$ nodes based on a \emph{single} two-way exchange,
reducing dramatically latency, consumption, and channel utilization,
yet providing similar accuracy as demonstrated
in~\ref{sec:crng-tosn:eval}.
\subsection{Concurrent Transmissions for Low-power Wireless
Communication}
\label{sec:relwork:glossy}
Our concurrent ranging technique was originally inspired by the body
of work on concurrent transmissions in narrowband low-power radios.
Pioneered by Glossy~\cite{glossy}, this technique exploits the
PHY-level phenomena of constructive interference and capture effect to
achieve unprecedented degrees of high reliability, low latency, and
low energy consumption, as shown by several follow-up
works~\cite{chaos,lwb,crystal}. However, these focus on IEEE~802.15.4\xspace
narrowband radios, leaving an open question about whether similar
benefits can be harvested for UWB radios.
In~\cite{uwb-ctx-fire} we ascertained empirically the conditions for
exploiting UWB concurrent transmissions for reliable communication,
exploring extensively the radio configuration space. The findings
serve as a foundation for adapting the knowledge and systems in
narrowband towards UWB and reaping similar benefits, as already
exemplified by~\cite{glossy-uwb}. Further, the work
in~\cite{uwb-ctx-fire} also examined the effect of concurrent
transmissions on ranging---a peculiarity of UWB not present in
narrowband---confirming our original findings in~\cite{crng}
(and~\ref{sec:questions}) and analyzing the radio configuration and
environmental conditions in more depth and breadth than what we can
report here.
\subsection{Concurrent Transmissions for Ranging and Localization}
\label{sec:relwork:crng}
We introduced the novel concept of concurrent ranging in~\cite{crng},
where we demonstrated the feasibility of exploiting UWB
concurrent transmissions together with CIR information for ranging;
\ref{sec:questions} contains an adapted account of the observations
we originally derived.
Our work was followed by~\cite{crng-graz}, which introduces the idea
of using pulse shapes and response position modulation to match CIR
paths with responders. We discarded the former
in~\ref{sec:crng-tosn:resp-id} and~\cite{chorus} as we verified
empirically that closely-spaced MPC can create ambiguity, and
therefore mis-identifications. Here, we resort to the latter as
in~\cite{chorus, snaploc}, i.e.,\ by adding a small time shift $\delta_i$
to each \textsc{response}\xspace, enough to separate the signals of each responder
throughout the CIR span. The work in~\cite{crng-graz} also suggested
a simpler version of Search \& Subtract for ToA\xspace estimation. Instead,
here we follow the original algorithm~\cite{dardari-toa-estimation}
but enforce that candidate paths reach a minimum peak amplitude, to
improve resilience to noise and MPC. Moreover, we introduce an
alternate threshold-based ToA\xspace algorithm that is significantly simpler
but yields similar results. Both preliminary works in~\cite{crng,
crng-graz} leave as open challenges the TX scheduling uncertainty
and the unreliability caused by packet loss. Here, we address these
challenges with the local compensation mechanism
in~\ref{sec:crng-tosn:txfix} and the other techniques
in~\ref{sec:reloaded}, making concurrent ranging not only accurate,
but also very reliable and, ultimately, usable in practice.
Decawave\xspace~\cite{dw:simulranging} filed a patent on ``simultaneous ranging''
roughly at the same time of our original work~\cite{crng},
similarly exploiting concurrent transmissions from responders.
The patent includes two variants:
\begin{inparaenum}[\itshape i)]
\item a \emph{parallel} version, where all responders transmit nearly simultaneously
as in~\ref{sec:crng}--\ref{sec:questions}, only aiming to measure
the distance to the closest responder, and
\item a \emph{staggered} version that exploits time shifts as
in~\ref{sec:crng-tosn:resp-id} to determine the distance to each
responder.
\end{inparaenum}
The latter, however, requires PHY-layer changes that will unavoidably
take time to be standardized and adopted by future UWB transceivers.
In contrast, the techniques we present here can be exploited with
current transceivers and can also serve as a reference for the
design and development of forthcoming UWB radios natively
supporting concurrent ranging.
Our original paper inspired follow-up work on concurrent
ranging~\cite{crng-graz,R3} but also on other techniques exploiting
concurrent transmissions for localization. Our own
Chorus~\cite{chorus} system and SnapLoc~\cite{snaploc}
realize a passive self-localization scheme supporting unlimited
targets. Both systems assume a known anchor infrastructure in which a
reference anchor transmits a first packet to which the others reply
concurrently. Mobile nodes in range listen for these concurrent
responses and estimate their own position based on time-difference
of arrival (TDoA\xspace) multilateration. In~\cite{chorus}, we modeled the accuracy of
estimation via concurrent transmissions if the TX uncertainty were to
be reduced, as expected in forthcoming UWB transceivers. This model is
applicable to concurrent ranging\xspace and, in fact, predicts the results we achieved
in~\ref{sec:crng-tosn:eval} by locally compensating for the TX
uncertainty (\ref{sec:crng-tosn:txfix}). SnapLoc instead proposed to
directly address the TX uncertainty with a correction that
requires either a wired backbone infrastructure that anchors exploit
to report their known TX error, or a reference anchor that receives
the \textsc{response}\xspace and measures each TX error from the CIR.
Both require an additional step to report the error to
mobile nodes, and introduce complexity in the deployment along with
communication overhead.
In contrast, the compensation
in~\ref{sec:crng-tosn:txfix} is \emph{entirely local} to the
responders, therefore imposing neither deployment constraints nor
overhead. Moreover, the compensation
in~\ref{sec:crng-tosn:txfix} can be directly incorporated
in Chorus and SnapLoc, improving their performance while simplifying
their designs.
Recently, these works have also inspired the use of UWB concurrent
transmissions with angle-of-arrival (AoA)
localization. In~\cite{crng-aoa}, a multi-antenna anchor sends a \textsc{poll}\xspace
to which mobile nodes in range reply concurrently, allowing the anchor
not only to measure their distance but also the AoA of their signals;
combining the two enables the anchor to estimate the position of each
node. The techniques we proposed in this paper (\ref{sec:reloaded})
addressing the TX uncertainty, clock drift, and unreliability caused
by packet loss, are applicable and likely beneficial also for this AoA
technique.
\section{Concurrent Ranging}
\label{sec:crng}
Ranging against $N$ responders (e.g.,\ anchors) with SS-TWR\xspace requires $N$
independent pairwise exchanges---essentially, $N$ instances of
Figure~\ref{fig:two-way-ranging}, one after the other. In contrast,
the notion of concurrent ranging we propose obtains the same
information within a \emph{single} exchange, as shown in
Figure~\ref{fig:crng} with only two responders. The
technique is conceptually very simple, and consists of changing the
basic SS-TWR\xspace scheme (\ref{sec:soa-sstwr}) by:
\begin{compactenum}
\item replacing the $N$ unicast \textsc{poll}\xspace packets necessary to solicit
ranging from the $N$ responders with a \emph{single} broadcast \textsc{poll}\xspace, and
\item having all responders reply to the \textsc{poll}\xspace after the \emph{same}
time interval \ensuremath{T_\mathit{RESP}}\xspace from its (timestamped) receipt.
\end{compactenum}
\begin{figure}[!t]
\centering
\subfloat[Narrowband.\label{fig:concurrent-narrowband}]{
\includegraphics{figs/ewsn/concurrent-ranging-narrowband}}
\hfill
\subfloat[UWB.\label{fig:concurrent-uwb}]{
\includegraphics{figs/ewsn/concurrent-ranging-uwb}}
\caption{Concurrent ranging, idealized, with
narrowband (\ref{fig:concurrent-narrowband}) and
UWB (\ref{fig:concurrent-uwb}) radios.
With narrowband it is infeasible to recover the timing
information of the signals from the individual responders. With UWB, instead,
the different distance from the initiator to responders $R_1$ and $R_2$
produces a time shift \ensuremath{\Delta t}\xspace between their signals.
By measuring \ensuremath{\Delta t}\xspace, we can determine the distance
difference ${\ensuremath{\Delta d}\xspace = |d_1 - d_2|}$
between responders.
}
\end{figure}
This simple idea is impractical (if not infeasible) in narrowband
radios. As illustrated in the idealized view of
Figure~\ref{fig:concurrent-narrowband}, the signals from responders
$R_1$ and $R_2$ interfere with each other, yielding a combined
signal where the time information necessary to estimate distance is
essentially lost. In contrast, Figure~\ref{fig:concurrent-uwb} shows
why this is not the case in UWB; the time displacement \ensuremath{\Delta t}\xspace of
pulses from responders, caused by their different distances from
the initiator, is still clearly visible in the resulting signal. This
is due to the fact that UWB pulses are extremely short w.r.t.\ narrowband
waves, and therefore unlikely to interfere---although in practice
things are not so simple, as discussed in~\ref{sec:questions}.
\fakeparagraph{A Strawman Implementation} Concurrent ranging can be
implemented very easily, a direct consequence of the simplicity of the
concept. If a SS-TWR\xspace implementation is already available, it suffices to
replace the unicast \textsc{poll}\xspace with a broadcast one. The computation of the
actual ranging estimate requires processing the available CIR signal.
The time shift \ensuremath{\Delta t}\xspace can be measured as the difference
between the first path from the closest responder \resp{1} in the CIR,
automatically obtained from the DW1000 and used to compute the
accurate distance estimate $d_1$, and the first path
from \resp{2}, which must instead be determined in a custom way, as discussed
later (\ref{sec:cir-enough}). Indeed, the first path from \resp{2}, key to the
operation of concurrent ranging, is treated as MPC or noise
by the DW1000 but remains visible in the CIR, enabling
computation of its time of arrival (ToA\xspace). Once \ensuremath{\Delta t}\xspace is
determined, the spatial displacement $\ensuremath{\Delta d}\xspace=c\ensuremath{\Delta t}\xspace$ can be
computed, along with the distance $d_2 = d_1 + \ensuremath{\Delta d}\xspace$ of \resp{2};
a similar process must be repeated in the case of $N$ responders.
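For concreteness, the following Python sketch illustrates this
computation on an amplitude-only view of the CIR; the sampling period
matches the DW1000 (\ref{sec:dw1000}), while the first-path indices
are assumed to have been located beforehand (the helper names are
hypothetical).
\begin{verbatim}
# Minimal sketch of the strawman distance computation; fp_idx_r1 and
# fp_idx_r2 are the (assumed known) CIR indices of the first paths.
C = 299702547.0    # speed of light in air [m/s]
TS = 1.0016e-9     # DW1000 CIR sampling period [s]

def strawman_d2(d1, fp_idx_r1, fp_idx_r2):
    delta_t = (fp_idx_r2 - fp_idx_r1) * TS  # time shift between responders
    delta_d = C * delta_t                   # spatial displacement
    return d1 + delta_d                     # d2 = d1 + delta_d
\end{verbatim}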
As for the value of the response delay,
crucial to the accuracy of SS-TWR\xspace (\ref{sec:soa-sstwr}), our implementation uses
$\ensuremath{T_\mathit{RESP}}\xspace=330$~\si{\micro\second}. We verified experimentally that this
provides a good trade-off; lower values do not leave enough time to
correctly prepare the radio for the \textsc{response}\xspace transmission,
and larger ones negatively affect ranging due to clock drift.
In concurrent ranging\xspace, as in SS-TWR\xspace, the \ensuremath{T_\mathit{RESP}}\xspace value also enables the responder
to determine the time \mbox{$\ensuremath{t_3}\xspace = \ensuremath{t_2}\xspace + \ensuremath{T_\mathit{RESP}}\xspace$} at which the \textsc{response}\xspace
must be sent (Figure~\ref{fig:sstwr-crng-cmp}).
The timestamp \ensuremath{t_2}\xspace associated with the RX of \textsc{poll}\xspace is estimated
by the DW1000 at the \texttt{RMARKER}\xspace with the extremely high precision of
15~ps (\ref{sec:uwb}). Unfortunately, the same precision is not
available when scheduling the delayed TX (\ref{sec:dw1000}) of the
corresponding \textsc{response}\xspace at time \ensuremath{t_3}\xspace. Due to the significantly
coarser granularity of TX scheduling in the DW1000, the TX of a
\textsc{response}\xspace expected at a time $\ensuremath{t_3}\xspace$ actually
occurs at $\ensuremath{t_3}\xspace + \epsilon$,
with $\epsilon \in [-8,0)$~ns~\cite{dw1000-datasheet}.
This is not a problem in SS-TWR\xspace,
as the timestamps \ensuremath{t_2}\xspace and \ensuremath{t_3}\xspace are embedded in the \textsc{response}\xspace and
decoded by the initiator. Instead, in concurrent ranging\xspace the additional \textsc{response}\xspace
packets are not decoded, and this technique cannot be used. Therefore,
the uncertainty of TX scheduling, which at first may appear a
negligible hardware detail, has significant repercussions on the
practical exploitation of our technique, as discussed next.
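One plausible way to picture this coarse granularity is a quantization
of the scheduled TX timestamp. The following sketch assumes the radio
simply ignores the 9 least-significant bits of the
$\approx$15.65~ps time units, which would yield the $\approx$8~ns
granularity reported above; the exact internal mechanism is an
assumption here.
\begin{verbatim}
TICK = 15.65e-12  # radio time unit [s], approximate

def scheduled_tx_ticks(t3_ticks):
    # Assumption: delayed TX drops the 9 low-order bits of the
    # timestamp, i.e., 2**9 * 15.65 ps ~ 8 ns granularity,
    # consistent with the reported epsilon in [-8, 0) ns.
    return t3_ticks & ~0x1FF
\end{verbatim}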
\section{Background}
\label{sec:background}
We concisely summarize the salient features of UWB radios in general
(\ref{sec:uwb}) and how they are made available by the popular DW1000
transceiver we use in this work (\ref{sec:dw1000}). Moreover, we
illustrate the SS-TWR\xspace technique we build upon,
and show how it is used to perform localization (\ref{sec:toa}).
\subsection{Ultra-wideband in the IEEE~802.15.4\xspace PHY Layer}
\label{sec:uwb}
UWB communications were originally used for military applications due to
their very large bandwidth and their resilience to interference from
mainstream narrowband radios. In 2002, the FCC approved the unlicensed use of UWB under
strict power spectral masks, boosting a new wave of research from industry and
academia. Nonetheless, this research mainly focused on high data rate
communications, and remained largely based on theory and simulation, as most
UWB radios available then were bulky, energy-hungry, and expensive, hindering
the widespread adoption of UWB. In 2007, the {IEEE~802.15.4\xspace}a standard amendment
included a UWB PHY layer based on impulse radio (IR-UWB)~\cite{impulse-radio},
aimed at providing accurate ranging with low-power consumption.
A few years ago, Decawave\xspace released a standard-compliant IR-UWB radio,
the DW1000, saving UWB from a decade-long oblivion, and taking
by storm the field of real-time location systems (RTLS).
\begin{figure}[!t]
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics{figs/ewsn/monopulse}
\caption{UWB pulse.}
\label{fig:uwb-pulse}
\end{minipage}
\begin{minipage}[t]{0.51\linewidth}
\centering
\includegraphics{figs/ewsn/dest-bw}
\caption{Distance resolution vs.\ bandwidth.}
\label{fig:dest-bw}
\end{minipage}
\vspace{-2mm}
\end{figure}
\fakeparagraph{Impulse Radio}
According to the FCC, UWB signals are
characterized by a bandwidth $\geq 500$~MHz or a fractional bandwidth
$\geq 20\%$ during transmission. To achieve such a large bandwidth,
modern UWB systems are based on IR-UWB, using pulses
(Figure~\ref{fig:uwb-pulse}) very narrow in time ($\leq 2$~ns).
This reduces the power spectral density, the interference
produced to other wireless technologies, and the impact of multipath
components (MPC). Further, it enhances the ability of UWB signals to
propagate through obstacles and walls~\cite{uwb-idea} and simplifies
transceiver design. The large bandwidth also provides excellent time
resolution (Figure~\ref{fig:dest-bw}), enabling UWB receivers to
precisely estimate the time of arrival (ToA\xspace) of a signal and
distinguish the direct path from MPC. Time-hopping
codes~\cite{ir-uwb-maccess} enable multiple access to the
medium. Overall, these features make \mbox{IR-UWB} ideal for low-power
ranging and localization as well as communication.
\fakeparagraph{IEEE~802.15.4\xspace UWB PHY Layer} The IEEE~802.15.4\xspace-2011
standard~\cite{std154} specifies a PHY layer based on IR-UWB.
The highest frequency at which a compliant device shall emit
pulses is 499.2~MHz (fundamental frequency), yielding a
standard chip duration of $\approx2$~ns. A UWB frame is composed of
\begin{inparaenum}[\itshape i)]
\item a synchronization header (SHR) and
\item a data portion.
\end{inparaenum}
The SHR is encoded in single pulses and includes a preamble for
synchronization and the start frame delimiter (SFD), which delimits
the end of the SHR and the beginning of the data portion.
Instead, the data portion exploits a combination of burst
position modulation (BPM) and binary phase-shift keying (BPSK), and
includes a physical header (PHR) and the data payload.
The duration of the preamble is configurable and depends on
the number of repetitions of a predefined symbol, whose structure
is determined by the preamble code. Preamble codes also define the
pseudo-random sequence used for time-hopping in the transmission of
the data part. The standard defines preamble codes of $31$ and $127$
elements, which are then interleaved with zeros according to a
spreading factor. This yields a (mean) \emph{pulse repetition
frequency} (\ensuremath{\mathit{PRF}}\xspace) of $16$~MHz or $64$~MHz. Preamble codes and \ensuremath{\mathit{PRFs}}\xspace
can be exploited to configure non-interfering links within the same RF
channel~\cite{uwb-ctx-fire}.
\subsection{Decawave\xspace DW1000}
\label{sec:dw1000}
The Decawave\xspace DW1000~\cite{dw1000-datasheet} is a commercially
available low-power low-cost UWB transceiver compliant with IEEE~802.15.4\xspace,
for which it supports frequency channels 1--4 in the low band and 5, 7
in the high band, and data rates of $110$~kbps, $850$~kbps, and
$6.8$~Mbps. Channels 4 and~7 have a larger $900$~MHz bandwidth, while
the others are limited to $499.2$~MHz.
\fakeparagraph{Channel Impulse Response (CIR)} The perfect periodic
autocorrelation of the preamble code sequence enables coherent
receivers to determine the CIR~\cite{dw1000-manual-v218}, which provides
information about the multipath propagation characteristics of the
wireless channel between a transmitter and a receiver. The CIR allows
UWB radios to distinguish the signal leading edge,
commonly called\footnote{Hereafter,
we use the terms first path and direct path interchangeably.}
\emph{direct} or \emph{first} path,
from MPC and accurately estimate the ToA\xspace of the signal.
In this paper, we exploit the information available
in the CIR to perform these operations on \emph{several signals
transmitted concurrently}.
The DW1000 measures the CIR upon preamble reception with a sampling
period \mbox{$T_s = 1.0016$~ns}. The CIR is stored in a large internal
buffer of 4096B accessible by the firmware developer. The time span
of the CIR is the duration of a preamble symbol: 992~samples for a
16~MHz \ensuremath{\mathit{PRF}}\xspace or 1016 for a 64~MHz \ensuremath{\mathit{PRF}}\xspace. Each sample is a complex number
$a_k + jb_k$ whose real and imaginary parts are 16-bit signed
integers. The amplitude $A_k$ and phase $\theta_k$ at each time delay
$t_k$ is $A_k = \sqrt{\smash[b]{a_k^2 + b_k^2}}$ and
$\theta_k = \arctan{\frac{b_k}{a_k}}$. The DW1000 measures the CIR
even when RX errors occur, therefore offering signal timing
information even when a packet (e.g.,\ a \textsc{response}\xspace) cannot be
successfully decoded.
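As an illustration, the Python sketch below performs this conversion;
it assumes the accumulator bytes have already been unpacked into an
array of interleaved 16-bit integers, and uses the quadrant-safe
\texttt{arctan2} in place of the plain arctangent.
\begin{verbatim}
import numpy as np

def cir_amplitude_phase(raw_int16):
    # raw_int16: interleaved (a_k, b_k) samples from the accumulator
    a = raw_int16[0::2].astype(np.float64)
    b = raw_int16[1::2].astype(np.float64)
    amplitude = np.sqrt(a**2 + b**2)   # A_k
    phase = np.arctan2(b, a)           # theta_k, quadrant-safe
    return amplitude, phase
\end{verbatim}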
\fakeparagraph{TX/RX Timestamps} The TX and RX timestamps enabling ranging
are measured in a packet at the ranging marker (\texttt{RMARKER}\xspace)~\cite{dw1000-manual-v218},
which marks the first pulse of the PHR after the SFD (\ref{sec:uwb}).
These timestamps are measured with a very high time resolution
in radio units of $\approx\SI{15.65}{\pico\second}$.
The DW1000 first makes a coarse RX timestamp estimation,
then adjusts it based on
\begin{inparaenum}[\itshape i)]
\item the RX antenna delay, and
\item the first path in the CIR estimated by a proprietary
internal leading edge detection (LDE) algorithm.
\end{inparaenum}
The CIR index that LDE determines to be the first path
(\texttt{FP\_INDEX}\xspace) is stored together with the RX timestamp in the
\texttt{RX\_TIME} register. LDE detects the first path as the
first sampled amplitude that goes above a dynamic threshold based on
\begin{inparaenum}[\itshape i)]
\item the noise standard deviation \ensuremath{\sigma_n}\xspace and
\item the noise peak value.
\end{inparaenum}
Similar to the CIR, the RX signal timestamp is measured despite RX errors,
unless there is a rare PHR error~\cite[p. 97]{dw1000-manual-v218}.
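The LDE algorithm itself is proprietary; as a rough illustration of a
threshold-based detector in the same spirit, consider the following
sketch, which flags the first CIR sample exceeding a noise-derived
threshold (the factor of~11 mirrors the threshold we adopt later
in~\ref{sec:crng-tosn:toa-est}).
\begin{verbatim}
import numpy as np

def first_path_index(amplitude, sigma_n, k=11.0):
    # First CIR index whose amplitude exceeds k * sigma_n;
    # a simplified stand-in for the proprietary LDE detector.
    above = np.flatnonzero(amplitude > k * sigma_n)
    return int(above[0]) if above.size else None
\end{verbatim}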
\fakeparagraph{Delayed Transmissions} The DW1000 offers the capability to
schedule transmissions at a specified time in the
future~\cite[p. 20]{dw1000-manual-v218}, corresponding to the
\texttt{RMARKER}\xspace. To this end, the DW1000 internally computes the time at
which to begin the preamble transmission, considering also the TX
antenna delay~\cite{dw1000-antenna-delay}. This makes the TX
timestamp predictable, which is key for ranging.
\begin{table}[!tb]
\caption{Current consumption comparison of DW1000 vs.\ TI CC2650 BLE
SoC~\cite{cc2650-datasheet} and Intel 5300 WiFi
card~\cite{wifi-power}. Note that the CC2650 includes a 32-bit
ARM Cortex-M3 processor and the Intel~5300 can support multiple
antennas; further, consumption depends on radio configuration.}
\label{tab:current-consumption}
\begin{tabular}{l c c c}
\toprule
& \textbf{DW1000} & \textbf{TI CC2650~\cite{cc2650-datasheet}}& \textbf{Intel 5300~\cite{wifi-power}}\\
\textbf{State} & 802.15.4a & BLE~4.2 \& 802.15.4 & 802.11~a/b/g/n\\
\midrule
Deep Sleep & 50~\si{\nano\ampere} & 100--150~\si{\nano\ampere} & N/A\\
Sleep & 1~\si{\micro\ampere} & 1~\si{\micro\ampere} & 30.3~mA\\
Idle & 12--18~mA & 550~\si{\micro\ampere} & 248~mA\\
TX & 35--85~mA & 6.1--9.1~mA & 387--636~mA\\
RX & 57--126~mA & 5.9--6.1~mA & 248--484~mA\\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Power Consumption}
An important aspect of the DW1000 is its
low-power consumption w.r.t.\ previous UWB
transceivers (e.g.,~\cite{timedomain-pulson400}).
Table~\ref{tab:current-consumption} compares the current consumption of
the DW1000 against other commonly-used technologies (BLE and WiFi) for
localization. The DW1000 consumes significantly less than the Intel
5300~\cite{wifi-power}, which provides channel state information
(CSI). However, it consumes much more than low-power
widespread technologies such as BLE or
IEEE~802.15.4\xspace~narrowband~\cite{cc2650-datasheet}. Hence, to ensure a long
battery lifetime of UWB devices it is essential to reduce the
radio activity, while retaining the accuracy and update rate of
ranging and localization required by applications.
\subsection{Time-of-Arrival (ToA) Ranging and Localization}
\label{sec:toa}
In ToA\xspace-based methods, distance is estimated by precisely measuring
RX and TX timestamps of packets exchanged between nodes. In this section,
we describe the popular SS-TWR\xspace ranging technique (\ref{sec:soa-sstwr})
we extend and build upon in this paper,
and show how distance estimates from known positions can be used to
determine the position of a target (\ref{sec:soa:toa-loc}).
\subsubsection{Single-sided Two-way Ranging (SS-TWR\xspace)}
\label{sec:soa-sstwr}
In SS-TWR\xspace, part of the IEEE~802.15.4\xspace standard~\cite{std154}, the initiator
transmits a unicast \textsc{poll}\xspace packet to the responder, storing the TX
timestamp $t_1$ (Figure~\ref{fig:two-way-ranging}). The responder
replies back with a \textsc{response}\xspace packet after a given response delay
\ensuremath{T_\mathit{RESP}}\xspace. Based on the corresponding RX timestamp $t_4$, the initiator can
compute the round trip time $\ensuremath{T_\mathit{RTT}}\xspace = t_4 - t_1 = 2\tau +
\ensuremath{T_\mathit{RESP}}\xspace$. However, to cope with the limited TX scheduling precision of
commercial UWB radios, the \textsc{response}\xspace payload includes the RX timestamp
$t_2$ of the \textsc{poll}\xspace and the TX timestamp $t_3$ of the \textsc{response}\xspace,
allowing the initiator to precisely measure the actual response delay
$\ensuremath{T_\mathit{RESP}}\xspace = t_3 - t_2$. The time of flight $\tau$ can be then computed as
\begin{equation*}\label{eq:sstwr-tof}
\tau = \frac{\ensuremath{T_\mathit{RTT}}\xspace - \ensuremath{T_\mathit{RESP}}\xspace}{2} = \frac{(t_4 - t_1) - (t_3 - t_2)}{2}
\end{equation*}
and the distance between the two nodes estimated as
$d = \tau \times c$, where $c$ is the speed of light in air.
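In code, the whole estimation reduces to a few lines; the sketch below
assumes all four timestamps are expressed in seconds on their
respective clocks.
\begin{verbatim}
C = 299702547.0  # speed of light in air [m/s]

def ss_twr_distance(t1, t2, t3, t4):
    t_rtt = t4 - t1            # measured by the initiator
    t_resp = t3 - t2           # reported in the RESPONSE payload
    tof = (t_rtt - t_resp) / 2.0
    return tof * C
\end{verbatim}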
SS-TWR\xspace is simple, yet provides accurate distance estimation for many
applications. The main source of error is the clock drift between
initiator and responder, each running an internal oscillator with an
offset w.r.t.\ the expected nominal frequency~\cite{dw-errors},
causing the actual time of flight measured by the initiator to be
\begin{equation*}
\hat{\tau} = \frac{\ensuremath{T_\mathit{RTT}}\xspace(1+e_I) - \ensuremath{T_\mathit{RESP}}\xspace(1+e_R)}{2}
\end{equation*}
where $e_I$ and $e_R$ are the crystal offsets of initiator and
responder, respectively. After some derivations, and by observing that
$\ensuremath{T_\mathit{RESP}}\xspace \gg 2\tau$, we can approximate the error
as~\cite{dstwr,dw-errors}
\begin{equation*}\label{eq:sstwr-drift}
\hat{\tau} - \tau \approx \frac{1}{2} \ensuremath{T_\mathit{RESP}}\xspace(e_I - e_R)
\end{equation*}
Therefore, to reduce the ranging error of SS-TWR\xspace one should
\begin{inparaenum}[\itshape i)]
\item compensate for the drift, and
\item minimize \ensuremath{T_\mathit{RESP}}\xspace, as the error grows linearly with it.
\end{inparaenum}
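To appreciate the magnitude of this error, consider a worked example
with typical $\pm$20~ppm crystals and the
$\ensuremath{T_\mathit{RESP}}\xspace=330$~\si{\micro\second} used by our prototype:
\begin{verbatim}
C = 299702547.0
T_RESP = 330e-6                # response delay [s]
e_i, e_r = 20e-6, -20e-6       # worst-case +/-20 ppm offsets

tof_error = 0.5 * T_RESP * (e_i - e_r)  # ~6.6 ns
print(tof_error * C)                    # ~1.98 m ranging error
\end{verbatim}
A worst-case relative drift of 40~ppm thus already translates into a
ranging error of roughly 2~m, which explains why drift compensation
matters so much in SS-TWR\xspace.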
\subsubsection{Position Estimation}
\label{sec:soa:toa-loc}
The estimated distance $\hat{d_i}$ to each of the $N$ responders can be
used to determine the unknown initiator position $\mathbf{p}$, provided the
responder positions are known. In two-dimensional space, the
Euclidean distance $d_i$ to responder \resp{i} is defined by
\begin{equation}\label{eq:soa:dist-norm}
d_i = \norm{\mathbf{p} - \mathbf{p_i}} = \sqrt{(x - x_i)^2 + (y - y_i)^2}
\end{equation}
where $\mathbf{p_i} = [x_i, y_i]$ is the position of \resp{i},
$i \in [1, N]$. The geometric representation of
Eq.~\eqref{eq:soa:dist-norm} is a circle (a sphere in~3D) with radius
$d_i$ and center in $\mathbf{p_i}$. In the absence of noise, the
intersection of $N \geq 3$ circles yields the unique initiator
position $\mathbf{p}$. In practice, however, each distance estimate
$\hat{d_i} = d_i + n_i$ suffers from an additive zero-mean measurement
noise $n_i$. An estimate $\mathbf{\hat p}$ of the unknown initiator
position can be determined (in 2D) by minimizing the non-linear
least-squares (NLLS) problem
\begin{equation*}\label{eq:toa-solver-dist}
\mathbf{\hat p} = \argmin_{\mathbf{p}}
\sum_{i = 1}^{N}\left(\hat d_i - \sqrt{(x - x_i)^2 + (y - y_i)^2}\right)^2
\end{equation*}
In this paper, we solve the NLLS problem with state-of-the-art
methods, as our contribution is focused on ranging and not on the
computation of the position. Specifically, we employ an iterative
local search via a trust region reflective
algorithm~\cite{branch1999subspace}. This requires an initial
position estimate $\mathbf{p_0}$ that we set as the solution of a
linear least squares estimator that linearizes the system of equations
by applying the difference between any two of
them~\cite{source-loc-alg, toa-lls}.
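A minimal sketch of this estimator, using SciPy's trust region
reflective solver, is shown below; the initial guess \texttt{p0}
stands for the linear least squares solution mentioned above.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def localize(d_hat, anchors, p0):
    # anchors: (N, 2) known positions; d_hat: (N,) distance estimates
    def residuals(p):
        return d_hat - np.linalg.norm(anchors - p, axis=1)
    # method='trf' selects the trust region reflective algorithm
    return least_squares(residuals, p0, method='trf').x
\end{verbatim}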
\subsection{CIR Pre-processing}
\label{sec:crng-tosn:cir-proc}
We detail two techniques to reorder the CIR array and estimate the
signal noise standard deviation \ensuremath{\sigma_n}\xspace. These extend and
significantly enhance the techniques we originally proposed
in~\cite{chorus}, improving the robustness and accuracy
of the ToA\xspace estimation algorithms in~\ref{sec:crng-tosn:toa-est}.
\subsubsection{CIR Array Re-arrangement}
\label{sec:crng-tosn:cir-rearrangement}
In the conventional case of an isolated transmitter, the DW1000
arranges the CIR signal by placing the first path at \mbox{\texttt{FP\_INDEX}\xspace
$\approx750$} in the accumulator buffer (\ref{sec:dw1000}). In
concurrent ranging\xspace, one would expect the \texttt{FP\_INDEX}\xspace to similarly indicate the direct
path of the first responder \resp{1}, i.e.,\ the one with the shorter
time shift $\delta_1 = 0$. Unfortunately, this is not necessarily the
case, as the \texttt{FP\_INDEX}\xspace can be associated with the direct path of
\emph{any} of the involved responders
(Figure~\ref{fig:crng-tosn:cir-arrangement}).
Further, and worse, due to the TX time shifts $\delta_i$ we apply in
concurrent ranging\xspace, the paths associated to the later responders may be circularly
shifted at the beginning of the array, disrupting the implicit
temporal ordering at the core of
responder identification (\ref{sec:crng-tosn:resp-id}).
Therefore, before estimating the ToA\xspace of the concurrent signals, we must
\begin{inparaenum}[\itshape i)]
\item re-arrange the CIR array to match the order expected
from the assigned time shifts, and
\item correspondingly re-assign the index associated with the \texttt{FP\_INDEX}\xspace,
whose timestamp is available in radio time units.
\end{inparaenum}
In~\cite{chorus} we addressed a similar problem by
partially relying on knowledge of the responder ID contained in the
\textsc{response}\xspace payload (among the several concurrent ones) actually
decoded by the radio, which then usually places its
corresponding first path at \mbox{\texttt{FP\_INDEX}\xspace $\approx750$} in the
CIR. However, this technique relies on successfully decoding a
\textsc{response}\xspace, which is unreliable as we previously observed in~\ref{sec:questions}.
Here, we remove this dependency and enable a correct CIR re-arrangement
\emph{even in cases where the initiator is unable to successfully decode
any \textsc{response}\xspace}, significantly improving reliability.
\begin{figure}[!t]
\centering
\subfloat[Raw CIR array.\label{fig:crng-tosn:cir-arrangement-raw}]{
\includegraphics{figs/disi-t1-s41-raw.pdf}
}\\
\subfloat[Re-arranged CIR array.\label{fig:crng-tosn:cir-arrangement-sorted}]{
\includegraphics{figs/disi-t1-s41-sorted.pdf}
}
\caption{CIR re-arrangement. The DW1000 measured the \texttt{FP\_INDEX}\xspace as the
direct path of \resp{6} in the raw CIR
(Figure~\ref{fig:crng-tosn:cir-arrangement-raw}).
After finding the CIR sub-array with the lowest noise,
we re-arrange the CIR (Figure~\ref{fig:crng-tosn:cir-arrangement-sorted})
setting the response of \resp{1} at the beginning and the noise-only
sub-array at the end.}
\label{fig:crng-tosn:cir-arrangement}
\end{figure}
We achieve this goal by identifying the portion of
the CIR that contains \emph{only} noise, which appears in
between the peaks of the last and first responders. First,
we normalize the CIR w.r.t.\ its maximum amplitude sample and search
for the CIR sub-array of length $W$ with the lowest sum---the
aforementioned noise-only portion. Next, we determine the index at
which this noise sub-array begins (minimum noise index in
Figure~\ref{fig:crng-tosn:cir-arrangement}) and search for the next
sample index whose amplitude is above a threshold $\xi$. This latter
index is a rough estimate of the direct path of \resp{1},
the first expected responder. We then re-order the CIR array
by applying a circular shift, setting the $N$ responses
at the beginning of the array, followed by the noise-only portion at the end.
Finally, we re-assign the index corresponding to the original
\texttt{FP\_INDEX}\xspace measured by the DW1000, whose radio timestamp is available.
We empirically found, by analyzing tens of thousands of CIR signals, that a
threshold $\xi \in [0.12, 0.2]$ yields an accurate CIR reordering.
Lower values may cause errors due to noise or MPC, while higher
values may disregard a weak first path from \resp{1}. The noise window
$W$ must be set based on the CIR time span, the time shifts $\delta_i$
applied, and the number $N$ of concurrent responders. Hereafter, we
set $\xi = 0.14$ and $W = 228$ samples with $N = 6$ responders and
$\ensuremath{T_\mathit{ID}}\xspace = 128$~ns.
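The following Python sketch captures the essence of the re-arrangement;
re-assigning the \texttt{FP\_INDEX}\xspace timestamp is omitted for brevity.
\begin{verbatim}
import numpy as np

def rearrange_cir(cir, xi=0.14, w=228):
    a = np.abs(cir)
    a /= a.max()                        # normalize amplitude
    # Circular sliding-window sums; the minimum locates the
    # noise-only sub-array between the last and first responses.
    ext = np.concatenate([a, a[:w - 1]])
    sums = np.convolve(ext, np.ones(w), mode='valid')[:len(a)]
    noise_start = int(np.argmin(sums))
    # First sample above xi after the noise window: rough estimate
    # of the direct path of R1, the first expected responder.
    for k in range(len(a)):
        j = (noise_start + k) % len(a)
        if a[j] > xi:
            return np.roll(cir, -j)     # R1 response moved to the front
    return cir                          # nothing above xi: leave as-is
\end{verbatim}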
\subsubsection{Estimating the Noise Standard Deviation}
\label{sec:crng-tosn:noise-std}
ToA\xspace estimation algorithms frequently rely on a threshold
derived from the noise standard deviation \ensuremath{\sigma_n}\xspace,
to detect the first path from noise and MPC.
The DW1000 estimates $\ensuremath{\sigma_n}\xspace^{DW}$ based on the measured
CIR~\cite{dw1000-manual-v218}. However,
in the presence of concurrent transmissions, the DW1000 sometimes
yields a significantly overestimated $\ensuremath{\sigma_n}\xspace^{DW}$,
likely because it considers the additional \textsc{response}\xspace signals as noise.
Therefore, we recompute our own estimate of \ensuremath{\sigma_n}\xspace
as the standard deviation of the last 128~samples of the re-arranged CIR
(Figure~\ref{fig:crng-tosn:cir-arrangement-sorted}). By design
(\ref{sec:crng-tosn:cir-rearrangement}) these samples belong to the
noise-only portion at the end of the re-arranged CIR, free from MPC from
responses; the noise estimate is therefore significantly more reliable
than the one computed by the DW1000, meant for non-concurrent
ranging.
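In code, the estimate is a one-liner over the tail of the re-arranged
CIR amplitudes (normalized as in~\ref{sec:crng-tosn:cir-rearrangement}):
\begin{verbatim}
import numpy as np

def noise_std(rearranged_amplitude, tail=128):
    # sigma_n from the noise-only tail of the re-arranged CIR;
    # the ToA threshold is then T = 11 * sigma_n.
    return float(np.std(rearranged_amplitude[-tail:]))
\end{verbatim}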
\begingroup
\setlength{\columnsep}{8pt}
\setlength{\intextsep}{4pt}
\begin{wrapfigure}{R}{5.6cm}
\centering
\includegraphics{figs/crng-noiseth-cdf.pdf}
\caption{Threshold comparison.}
\label{fig:std-noise-cdf}
\end{wrapfigure}
Figure~\ref{fig:std-noise-cdf} offers evidence of this last statement
by comparing the two techniques across the 9,000 signals
with $N = 6$ concurrent responders we use in~\ref{sec:crng-exp-static}
to evaluate the performance of concurrent ranging\xspace with the initiator placed
in 18 different static positions.
The chart shows the actual noise threshold
computed as $\mathcal{T} = 11 \times \ensuremath{\sigma_n}\xspace$, which we empirically
found to be a good compromise for ToA\xspace estimation
(\ref{sec:crng-tosn:toa-est}). Using our technique, $\mathcal{T}$\xspace
converges to a $\nth{99}$ percentile of $0.213$ over the normalized
CIR amplitude, while the default $\ensuremath{\sigma_n}\xspace^{DW}$ yields
$\nth{99} = 0.921$; this value would lead to discarding most of
the peaks from concurrent responders.
For instance, in Figure~\ref{fig:crng-tosn:cir-arrangement}
only 2 out of 6 direct paths would be detected with such a high threshold.
Across these 9,000 signals, using our estimated $\ensuremath{\sigma_n}\xspace$
instead of $\ensuremath{\sigma_n}\xspace^{DW}$ increases the ranging and localization
reliability of concurrent ranging by up to $16\%$ depending on the
ToA\xspace algorithm used, as we explain next.
\section{Conclusions}
\label{sec:crng-tosn:conclusions}
In~\cite{crng}, we described the novel concept of concurrent ranging
for the first time in the literature, demonstrated its feasibility,
elicited the open challenges, and outlined the several benefits it
could potentially enable in terms of latency, scalability, update
rate, and energy consumption.
In this paper, we make these benefits a tangible reality. We tackle
the aforementioned challenges with a repertoire of techniques that,
without requiring modifications to off-the-shelf UWB transceivers,
turn concurrent ranging into a practical and immediately available
approach. Concurrent ranging empowers the designers of UWB ranging and
localization systems with a new option whose accuracy is comparable to
conventional techniques, but comes at a fraction of the latency and
energy costs, therefore unlocking application trade-offs hitherto
unavailable for these systems.
\section{Discussion}
\label{sec:discussion}
The outcomes of our evaluation (\ref{sec:crng-tosn:eval}) across
several static positions and mobile trajectories in two indoor
environments prove that \emph{concurrent ranging reliably provides
distance and position estimates with decimeter-level accuracy and
high precision}. The results we presented confirm that concurrent
ranging achieves a performance akin to conventional schemes, and that
it satisfies the strict requirements of most applications, notably
including robot localization.
Nevertheless, \emph{concurrent ranging incurs only a small fraction of
the cost borne by conventional schemes}. SS-TWR\xspace requires $2\times N$
packets to measure the distance to $N$ nodes; concurrent ranging
achieves the same goal with a \emph{single} two-way exchange. At the
initiator, often a mobile, energy-bound node, only 2~packets need to
be TX/RX instead of $2\times N$, proportionally reducing energy
consumption and, dually, increasing lifetime. Overall, the ability to
perform ranging via shorter exchanges dramatically reduces channel
utilization and latency, therefore increasing scalability and
update rate. To concretely grasp these claims, consider that, with the
(conservative) response delay $\ensuremath{T_\mathit{RESP}}\xspace = \SI{800}{\micro\second}$ we
used, concurrent ranging could provide a location update rate of
$\geq$1,000~Hz, either to a single initiator or shared among
several ones.
Actually achieving these update rates, however, requires better
hardware and software support than our prototype offers. Currently we log
the CIR via USB/UART, as it is the only option with the off-the-shelf
Decawave\xspace EVB1000 boards we use. This choice simplifies our prototyping
and enables replication of our results by others, using the same
popular and easily available platform. However, it introduces
significant delays, reducing the location update rate down to only
$\approx$8~Hz; this is appropriate for many applications but
insufficient in others requiring the tracking of fast-moving targets,
e.g.,\ drones. Nevertheless, this limitation is easily overcome by
production systems exploiting more powerful and/or dedicated
components, as in the case of smartphones.
Further, this is an issue only if the high update rate theoretically
available must be exploited by a single initiator. Otherwise, when
shared across several ones, our non-optimized prototype could provide its
8~samples per second to $\approx$125~nodes. This would require a
proper scheduling across initiators to avoid collisions, e.g., as
in~\cite{surepoint,talla-ipin}, and incur overhead, ultimately
reducing the final update rate of the system. On the other hand, the
potential for collisions is significantly reduced with our technique,
given that a single concurrent ranging exchange retrieves the
information accrued via $N$ conventional ones. Further, communicating
the schedule could itself exploit concurrent
transmissions~\cite{surepoint,glossy-uwb,uwb-ctx-fire}, opening the
intriguing possibility of merging scheduling and ranging into a single
concurrent exchange, abating at once the overhead of both procedures.
Similar issues arise in more dynamic scenarios where ranging is
performed against mobile nodes instead of fixed anchors, e.g., to
estimate distance between humans as in proxemics
applications~\cite{hidden-dimension, proxemic-interactions}. In cases
where the set of nodes is not known a priori, scheduling must be
complemented by continuous neighbor discovery, to determine the set
of potential ranging targets. The problem of jointly discovering,
scheduling, and ranging against nodes has received very little
attention by the research community, although it is likely to become
important for many applications once UWB becomes readily available on
smartphones. In this context, the ability to perform fast and
energy-efficient concurrent ranging against several nodes at once
brings a unique asset, which may be further enhanced by additional
techniques like the adaptive response delays we hinted at
in~\ref{sec:crng-tosn:resp-id}. The exploration of these and other
research avenues enabled by the concurrent ranging techniques we
presented in this paper is the subject of our ongoing work.
Finally, the research findings and system prototypes we describe
in this paper are derived for the DW1000, i.e.,\ the only UWB
transceiver available off-the-shelf today. Nevertheless, new
alternatives are surfacing on the market. We argue that the
fundamental concept of concurrent ranging and the associated
techniques outlined here are of general validity, and therefore in
principle transferable to these new transceivers. Moreover, it is
our hope that the remarkable benefits we have shown may inspire new
UWB architectures that natively support concurrent ranging directly
in hardware.
\subsection{Performance with Static Targets}
\label{sec:crng-exp-static}
We report the results from experiments in a $6.4 \times 6.4$~m$^2$
area inside our office building, using 6~concurrent responders that
serve as localization anchors. We place the initiator in
18~different positions and collect 500~CIR signals at each of them,
amounting to 9,000~signals.
The choice of initiator positions is key to our analysis. As shown in
Figure~\ref{fig:crng-tosn:err-ellipse-disi}, we split the 18~positions
in two halves with different purposes. The 9~positions in the center
dashed square are representative of the positions of interest for most
applications, as they are farther from walls and enjoy the best
coverage w.r.t.\ responders, when these serve as anchors. Dually, the
remaining 9~positions can be regarded as a stress test of sorts. They
are very close to walls, yielding significant MPC; this is an issue
with conventional SS-TWR\xspace but is exacerbated in concurrent ranging\xspace, as it increases
the possibility of confusing MPC with the direct paths of
responders. Further, these positions are at the edge of the area
delimited by anchors, therefore yielding a more challenging geometry
for localization.
Hereafter, we refer to these two sets of
positions as \textsc{center}\xspace and \textsc{edge}\xspace, respectively, and analyze the
performance in the common case represented by \textsc{center}\xspace as well as in
the more challenging, and somewhat less realistic, case where
\emph{all} positions are considered.
In each position, we measure the ranging and localization performance
of concurrent ranging\xspace with both our ToA\xspace estimation algorithms
(\ref{sec:crng-tosn:toa-est}) and compare it, in the same setup,
against the performance of the two SS-TWR\xspace variants we consider.
\fakeparagraph{Ranging Accuracy}
Figure~\ref{fig:crng-tosn:center-rng-err-cdf} shows the CDF of the
ranging error \mbox{$\hat{d}_i - d_i$} obtained with concurrent ranging\xspace and SS-TWR\xspace
in \textsc{center}\xspace positions; Table~\ref{tab:rng-err-center} offers an
alternate view by reporting the values of the metrics we consider
(\ref{sec:exp-metrics}).
The performance of concurrent ranging\xspace in this setting, arguably the one of interest
for most applications, is remarkable and in line with the one of
SS-TWR\xspace. All variants achieve a similar centimeter-level median and
average error. Although SS-TWR\xspace exhibits a smaller $\sigma$, both concurrent ranging\xspace
and SS-TWR\xspace achieve decimeter-level precision. This is also reflected
in the absolute error, which is nonetheless very small. Both variants
of concurrent ranging\xspace achieve $\nth{99} = 28$~cm, only a few cm higher than plain
SS-TWR\xspace, while its drift compensated variant achieves a lower
$\nth{99} = 18$~cm. The latter SS-TWR\xspace variant is the technique that,
as expected, achieves the best results across the board.
Nevertheless, concurrent ranging\xspace measures the distance to the $N=6$
responders concurrently, reducing the number of two-way exchanges
from~6 to~1, therefore providing a significant reduction
in channel utilization and other evident benefits in terms
of latency, energy, and scalability.
Interestingly, the difference in accuracy and precision
between the two concurrent ranging\xspace variants considered is essentially negligible.
\begin{figure}[!t]
\centering
\subfloat[\textsc{center}\xspace positions.\label{fig:crng-tosn:center-rng-err-cdf}]{
\includegraphics{figs/disi-crng-disterr-cdf-t9.pdf}
}
\subfloat[All positions.\label{fig:crng-tosn:all-rng-err-cdf}]{
\includegraphics{figs/disi-crng-disterr-cdf-t20.pdf}
}
\caption{CDF of ranging error with static positions.}
\label{fig:crng-tosn:center-err-hist-disi}
\end{figure}
\begin{table}
\centering
\caption{Ranging error comparison across the 9 \textsc{center}\xspace positions
considered.}
\label{tab:rng-err-center}
\begin{tabular}{l ccc ccccc}
\toprule
& \multicolumn{3}{c}{ $\hat{d}_i - d_i$ [cm]}
& \multicolumn{5}{c}{ $|\hat{d}_i - d_i|$ [cm]}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-9}
{\bfseries Scheme} & Median & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 0.4 & 0.3 & 11.9 & 8 &14 & 19 &21 & 28\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 0.7 & 0.5 & 11.7 & 7 &12 & 18 &21 & 28\\
SS-TWR\xspace & -1.7 & -0.5 & 8.6 &5 &9 & 15 &19 & 22\\
SS-TWR\xspace Compensated & -0.3 & -0.3 & 6.9 &4 &8 & 12 &14 & 18\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Ranging error comparison across the 18 static positions
considered (both \textsc{center}\xspace and \textsc{edge}\xspace).}
\label{tab:rng-err-all}
\begin{tabular}{l ccc ccccc}
\toprule
& \multicolumn{3}{c}{ $\hat{d}_i - d_i$ [cm]}
& \multicolumn{5}{c}{ $|\hat{d}_i - d_i|$ [cm]}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-9}
{\bfseries Scheme} & Median & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 0.4 & 2.0 & 17.7 &9 &15 & 21 &28 & 81\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 0.1 & 3.1 & 20.4 &8 &14 & 23 &44 & 91\\
SS-TWR\xspace & 1.5 & 2.1 & 8.8 &6 &10 & 16 &19 & 23\\
SS-TWR\xspace Compensated & 0.4 & 0.2 & 6.9 &5 &8 & 12 &14 & 18\\
\bottomrule
\end{tabular}
\end{table}
Figure~\ref{fig:crng-tosn:all-rng-err-cdf} shows instead the CDF of
the ranging error across all positions, i.e.,\ both \textsc{center}\xspace and
\textsc{edge}\xspace, while Table~\ref{tab:rng-err-all} shows the values of the
metrics we consider. The difference in accuracy between the two concurrent ranging\xspace
variants is still negligible in aggregate terms, but slightly worse
for S{\footnotesize \&}S\xspace when considering the absolute error; this is balanced by a
higher reliability w.r.t.\ the threshold-based variant, as discussed
later. In general, the accuracy of concurrent ranging\xspace is still comparable to the
\textsc{center}\xspace case in terms of median and average error, although with
slightly worse precision. This is also reflected in the absolute
error, which remains very small and essentially the same as in the
\textsc{center}\xspace case until the \nth{75} percentile, but reaches
$\nth{99} = 91$~cm with S{\footnotesize \&}S\xspace. In contrast, the performance of both
variants of SS-TWR\xspace is basically unaltered.
These trends can also be observed in the alternate view of
Figure~\ref{fig:crng-tosn:err-hist-disi}, based on normalized
histograms. The distributions of concurrent ranging\xspace and SS-TWR\xspace are similar,
although the latter is slightly narrower. Nevertheless, concurrent ranging\xspace has a
small tail of positive errors, not present in SS-TWR\xspace, yielding higher
values of $\sigma$ and $\geq\nth{90}$ percentiles in
Table~\ref{tab:rng-err-all}. Further, these tails are also not present
in the case of \textsc{center}\xspace, whose distribution is otherwise essentially
the same, and therefore not shown due to space limitations.
\begin{figure}[!t]
\centering
\subfloat[Threshold-based ToA\xspace estimation.\label{fig:crng-tosn:err-hist-th}]{
\includegraphics{figs/crng-th-t20-disi-disterr-hist.pdf}
}
\subfloat[S{\footnotesize \&}S\xspace with $K = 3$ iterations.\label{fig:crng-tosn:err-hist-ssr3}]{
\includegraphics{figs/crng-ssr3-t20-disi-disterr-hist.pdf}
}\\
\subfloat[SS-TWR\xspace.\label{fig:crng-tosn:err-hist-sstwr}]{
\includegraphics{figs/sstwr-t20-disi-disterr-hist.pdf}
}
\subfloat[SS-TWR\xspace with drift compensation.\label{fig:crng-tosn:err-hist-sstwr-drift}]{
\includegraphics{figs/sstwr-drift-t20-disi-disterr-hist.pdf}
}
\caption{Normalized histogram of ranging error across all 18 static
positions (both \textsc{center}\xspace and \textsc{edge}\xspace).}
\label{fig:crng-tosn:err-hist-disi}
\end{figure}
This is to be ascribed to \textsc{edge}\xspace positions, in which the initiator
\begin{inparaenum}[\itshape i)]
\item is next to a wall, suffering from strong and closely-spaced MPC
near the direct path, and
\item is very close to one or two anchors and far from the others,
resulting in significantly different power loss across responses.
\end{inparaenum}
This setup sometimes causes the direct path of some responses to be
buried in MPC noise or even unable to cross the noise threshold
$\mathcal{T}$\xspace. As a result, our ToA\xspace algorithms erroneously select one
of the MPC peaks as the first path, yielding an incorrect distance
estimate. Nevertheless, as mentioned, the absolute error remains
definitely acceptable with both the threshold-based and S{\footnotesize \&}S\xspace ToA\xspace
algorithms.
\fakeparagraph{Localization Accuracy}
Figure~\ref{fig:crng-tosn:err-ellipse-disi} shows the localization
error and $3\sigma$ ellipses for each initiator position and both ToA\xspace
estimation algorithms, while
Tables~\ref{tab:loc-err-center} and~\ref{tab:loc-err-all} show the values
of the metrics we consider.
Coherently with the analysis of ranging accuracy, the standard
deviation $\sigma$ for concurrent ranging\xspace is significantly lower in the \textsc{center}\xspace
positions than in the \textsc{edge}\xspace ones. This is a consequence of the
distance overestimation we observed, which causes larger ellipses and
a small bias w.r.t.\ the true position in a few \textsc{edge}\xspace
positions. Interestingly, both ToA\xspace algorithms underperform in the
same positions, although sometimes with different effects, e.g.,\ in
positions $(1.6,-3.2)$ and $(3.2,-1.6)$.
The difference between SS-TWR\xspace and concurrent ranging\xspace is also visible in the longer
tails of the localization error CDF (Figure~\ref{fig:crng-tosn:cdf-disi-all}),
where it is further exacerbated by the fact that, in our setup,
the worst-case \textsc{edge}\xspace positions are \emph{as many as}
the common-case \textsc{center}\xspace ones.
Nevertheless, even in this challenging case,
Table~\ref{tab:loc-err-all} shows that concurrent ranging\xspace still achieves
decimeter-level accuracy, with the median\footnote{As the localization
error is always positive, unlike the ranging error, the median is
the same as the \nth{50} percentile.} nearly the same as plain
SS-TWR\xspace. The error is also quite small; $\nth{75}\leq 17$~cm
and $\nth{99} \leq 57$~cm, with the threshold-based approach
performing marginally better than S{\footnotesize \&}S\xspace, as in ranging. However, the
drift compensated SS-TWR\xspace is still the most accurate and precise.
\begin{figure}[!t]
\centering
\subfloat[Threshold-based ToA\xspace estimation.\label{fig:crng-err-ellipse-th}]{
\includegraphics{figs/crng-err-ellipses-demo-th.png}
}
\subfloat[S{\footnotesize \&}S\xspace with $K = 3$ iterations.\label{fig:crng-err-ellipse-ssr}]{
\includegraphics{figs/crng-err-ellipses-demo-ssr3.png}
}
\caption{$3\sigma$ error ellipses with concurrent ranging\xspace and six concurrent responders.
Blue dots represent position estimates, brown crosses are anchors.
The dashed light red square denotes the positions of interest.}
\label{fig:crng-tosn:err-ellipse-disi}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[\textsc{center}\xspace positions.\label{fig:crng-tosn:cdf-disi-center}]{
\includegraphics{figs/disi-crng-loc-cdf-t9.pdf}
}
\subfloat[All positions.\label{fig:crng-tosn:cdf-disi-all}]{
\includegraphics{figs/disi-crng-loc-cdf-t20.pdf}
}
\caption{CDF of localization error in static positions.}
\label{fig:crng-tosn:cdf-disi}
\end{figure}
The gap with SS-TWR\xspace further reduces in the more common
\textsc{center}\xspace positions, where the accuracy of concurrent ranging\xspace is very high,
as shown in Figure~\ref{fig:crng-tosn:err-ellipse-disi} and
Figure~\ref{fig:crng-tosn:cdf-disi-center}. Position estimates are
also quite precise, with $\sigma \leq 5$~cm. Further, the error
remains $\leq 16$~cm in $95\%$ of the cases, regardless of the ToA\xspace
estimation technique; the threshold-based and S{\footnotesize \&}S\xspace ToA\xspace algorithms
show only a marginal difference, with a \nth{99} percentile of $21$~cm
and $30$~cm, respectively.
\begin{table}
\centering
\caption{Localization error comparison across the 9 \textsc{center}\xspace positions
considered.}
\label{tab:loc-err-center}
\begin{tabular}{l cc ccccc}
\toprule
& \multicolumn{7}{c}{$\norm{\mathbf{\hat p} - \mathbf{p_r}}$ [cm]}\\
\cmidrule(lr){2-8}
{\bfseries Scheme} & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 9 & 4.9 &8 &12 & 14 &16 & 21\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 8.8 & 5 &8 &11 & 14 &16 & 30\\
SS-TWR\xspace & 6.9 & 2.7 &7 &9 & 10 &11 & 12\\
SS-TWR\xspace Compensated & 4.1 & 2.3 &4 &6 & 8 &8 & 10\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Localization error comparison across the 18 static positions
considered (both \textsc{center}\xspace and \textsc{edge}\xspace).}
\label{tab:loc-err-all}
\begin{tabular}{l cc ccccc}
\toprule
& \multicolumn{7}{c}{$\norm{\mathbf{\hat p} - \mathbf{p_r}}$ [cm]}\\
\cmidrule(lr){2-8}
{\bfseries Scheme} & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 12.9 & 11 &10 &14 & 28 &41 & 51\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 14.5 & 12.6 &10 &17 & 33 &42 & 57\\
SS-TWR\xspace & 8.6 & 3.4 &9 &11 & 13 &14 & 16\\
SS-TWR\xspace Compensated & 5 & 2.4 &5 &7 & 8 &9 & 11\\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Success Rate} Across the 9,000 CIR signals considered
in this section, concurrent ranging\xspace is able to extract a position estimate in 8,663
and 8,973 of them using our threshold-based and S{\footnotesize \&}S\xspace ToA\xspace estimation,
respectively, yielding a remarkable localization success rate of
$96.25\%$ and $99.7\%$. Across the successful estimates, 6~samples
included very large errors $\geq 10$~m. These could be easily
discarded with common filtering techniques~\cite{ukf-julier}. In the
\textsc{center}\xspace positions of interest, the localization success rate with
both ToA\xspace techniques is $99.7\%$.
Threshold-based ToA\xspace estimation is more susceptible to strong and late
MPC occurring at the beginning of the following CIR chunk, which
result in invalid distance estimates that are therefore discarded,
reducing the success rate.
As for S{\footnotesize \&}S\xspace, of the $27$ signals failing to provide an estimate, $21$
are caused by PHR errors where the DW1000 does not update the RX
timestamp. In the remaining 6~signals, S{\footnotesize \&}S\xspace was unable to detect the
first or last responder; these signals were therefore discarded, to
avoid a potential responder mis-identification
(\ref{sec:crng-tosn:toa-est}).
Regarding ranging, threshold-based estimation yields a success rate of
$95.98\%$ across the 54,000 expected estimates, while S{\footnotesize \&}S\xspace reaches
$99.58\%$, in line with the localization success rate.
\section{Evaluation}
\label{sec:crng-tosn:eval}
We evaluate our concurrent ranging\xspace prototype, embodying the techniques illustrated
in~\ref{sec:reloaded}. We begin by describing our experimental setup
(\ref{sec:crng-exp-setup}) and evaluation metrics
(\ref{sec:exp-metrics}). Then, we evaluate our
TX scheduling (\ref{sec:crng-tosn:exp-tx-comp}), confirming its
ability to achieve sub-ns precision. This is key to improve the
accuracy of ranging and localization, which we evaluate in
static positions (\ref{sec:crng-exp-static}) and via trajectories
generated by a mobile robot in an OptiTrack facility
(\ref{sec:optitrack}).
\subsection{Experimental Setup}\label{sec:crng-exp-setup}
We implemented concurrent ranging\xspace atop Contiki OS~\cite{contikiuwb} using the
EVB1000 platform~\cite{evb1000} as in~\ref{sec:questions}.
\fakeparagraph{UWB Radio Configuration} In all experiments, we set the
DW1000 to use channel~7 with center frequency $f_c = 6489.6$~MHz and
$900$~MHz receiver bandwidth. We use the shortest preamble length of
64~symbols with preamble code~17, the highest $\ensuremath{\mathit{PRF}}\xspace = 64$~MHz, and the
highest 6.8~Mbps data rate. Finally, we set the response delay
$\ensuremath{T_\mathit{RESP}}\xspace = $~\SI{800}{\micro\second} to provide enough time to compensate
for the TX scheduling uncertainty (\ref{sec:crng-tosn:txfix}).
\fakeparagraph{Concurrent Ranging Configuration}
Table~\ref{tab:crng-tosn:parameters} summarizes the default values of
concurrent ranging\xspace parameters. The time shift $\ensuremath{T_\mathit{ID}}\xspace = 128$~ns for \textsc{response}\xspace
identification (\ref{sec:crng-tosn:resp-id}) corresponds to a distance
of $38.36$~m, sufficiently larger than the maximum distance difference
($\approx 12$~m) among anchors in our setups. For ToA\xspace estimation
(\ref{sec:crng-tosn:toa-est}), we use a noise threshold
$\mathcal{T} = 11 \times \ensuremath{\sigma_n}\xspace$, computed as described
in~\ref{sec:crng-tosn:cir-proc}, and $\ensuremath{K}\xspace = 3$ iterations per
CIR chunk of the S{\footnotesize \&}S\xspace algorithm.
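To illustrate how $\ensuremath{T_\mathit{ID}}\xspace$ partitions the upsampled CIR into
per-responder chunks, consider the sketch below; the responder
identification procedure itself is described
in~\ref{sec:crng-tosn:resp-id}, and the hypothetical helper only
computes the index ranges in which each direct path is expected.
\begin{verbatim}
TS = 1.0016e-9   # CIR sampling period [s]
L = 30           # upsampling factor
T_ID = 128e-9    # per-responder time shift [s]

def responder_chunks(toa_r1_idx, n_resp=6):
    # Upsampled-index ranges expected to contain each responder's
    # direct path, relative to the ToA of R1 (illustrative only).
    step = int(round(T_ID / (TS / L)))  # ~3834 samples per chunk
    return [(toa_r1_idx + i * step, toa_r1_idx + (i + 1) * step)
            for i in range(n_resp)]
\end{verbatim}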
\begin{table}[!t]
\centering
\caption{Main parameters of concurrent ranging with default values.}
\label{tab:crng-tosn:parameters}
\begin{tabular}{llr}
\toprule
{\bfseries Symbol} & {\bfseries Description} & {\bfseries Default Value}\\
\midrule
$L$ & CIR upsampling factor & 30\\
$\ensuremath{T_\mathit{ID}}\xspace$ & Time shift for response identification & 128~ns\\
$\xi$ & Noise threshold for CIR re-arrangement & 0.14\\
$W$ & Window length for CIR re-arrangement & 228~samples\\
$\mathcal{T}$ & Noise threshold for ToA\xspace estimation algorithm & $11\times \ensuremath{\sigma_n}\xspace$\\
$\ensuremath{K}\xspace$ & Iterations (max. number of paths) of the S{\footnotesize \&}S\xspace ToA\xspace algorithm & 3\\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Infrastructure}
We run our experiments with a mobile testbed infrastructure
we deploy in the target environment.
Each testbed node consists of an EVB1000~\cite{evb1000} connected via USB to a
Raspberry~Pi (RPi)~v3, equipped with an ST-Link programmer enabling
firmware uploading. Each RPi reports its serial data via WiFi to a
server, which stores it in a log file.
Although our prototype supports runtime positioning,
hereafter we run our analysis offline.
In each test, we collect TX information from anchors and RX
information diagnostics and CIR signals from the initiator. We
collect a maximum of 8~CIR signals per second, as this requires
reading over SPI, logging over USB, and transmitting over WiFi the
4096B accumulator buffer (CIR) together with the rest of the measurements.
\fakeparagraph{Baseline: SS-TWR with and without Clock Drift Compensation}
We compare the performance of concurrent ranging\xspace against the
commonly-used SS-TWR scheme (\ref{sec:soa-sstwr}). We implemented
it for the EVB1000 platform atop Contiki OS using a response delay
$\ensuremath{T_\mathit{RESP}}\xspace = \SI{320}{\micro\second}$ to minimize the impact of clock
drift. Moreover, we added the possibility to compensate for the
estimated clock drift at the initiator based on the carrier frequency
offset (CFO) measured during the \textsc{response}\xspace packet RX as suggested by
Decawave\xspace~\cite{dw-cfo, dw1000-sw-api}. Hence, our evaluation results also
serve to quantitatively demonstrate the benefits brought by this
recent clock drift compensation mechanism.
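As a concrete reference for this baseline, the following Python
sketch computes the SS-TWR\xspace distance with optional CFO-based drift
compensation. It loosely follows the compensation suggested by
Decawave\xspace~\cite{dw-cfo}; all names and the sign convention of the
clock offset ratio are assumptions of this sketch.
\begin{verbatim}
C_AIR = 299702547.0  # speed of light in air [m/s]

def sstwr_distance(t1, t2, t3, t4, clock_offset_ratio=0.0):
    # t1/t4: POLL TX and RESPONSE RX timestamps at the initiator;
    # t2/t3: POLL RX and RESPONSE TX timestamps at the responder;
    # clock_offset_ratio: responder-vs-initiator frequency offset,
    # derived from the CFO measured during RESPONSE RX
    # (0.0 disables the compensation). Timestamps in seconds.
    round_trip = t4 - t1
    t_resp = t3 - t2
    tof = 0.5 * (round_trip - (1.0 - clock_offset_ratio) * t_resp)
    return tof * C_AIR
\end{verbatim}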
As for localization, we perform a SS-TWR\xspace exchange every 2.5~ms against
the $N$ responders deployed, in round-robin, yielding an estimate of
the initiator position every $N \times 2.5$~ms. We use the exact same
RF configuration as in concurrent ranging\xspace, for comparison.
\subsection{Metrics}
\label{sec:exp-metrics}
Our main focus is on assessing the ranging and localization accuracy
of concurrent ranging\xspace in comparison with SS-TWR\xspace. Therefore, we consider the
following metrics, for which we report the median, average $\mu$, and
standard deviation $\sigma$, along with various percentiles of the
absolute values:
\begin{itemize}
\item \emph{Ranging Error.} We compute it w.r.t.\ each responder $R_i$ as
$\hat{d}_{i} - d_{i}$, where $\hat{d}_{i}$ is the distance estimated
and $d_{i}$ is the known distance.
\item \emph{Localization Error.} We compute the absolute positioning
error as $\norm{\mathbf{\hat p} - \mathbf{p_r}}$, where
$\mathbf{\hat p}$ is the initiator position estimate and
$\mathbf{p_r}$ its known position.
\end{itemize}
Moreover, we consider the \emph{success rate}
as a measure of the reliability and robustness of concurrent ranging\xspace in real
environments. Specifically, we define the \textit{ranging success
rate} to responder \resp{i} and the \textit{localization success
rate} as the fraction of CIR signals where, respectively, we are able to
\begin{inparaenum}[\itshape i)]
\item measure the distance $d_i$ from the initiator to \resp{i} and
\item obtain enough information \mbox{($\geq 3$ ToA\xspace estimates)} to
compute the initiator position $\mathbf{\hat p}$.
\end{inparaenum}
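For completeness, the position estimate $\mathbf{\hat p}$ can be
obtained from $\geq 3$ ToA\xspace-derived distances via, e.g.,\ linearized
least squares, as in the Python sketch below; we do not claim this is
the exact solver used in our prototype.
\begin{verbatim}
import numpy as np

def position_from_ranges(anchors, dists):
    # Linearized least-squares multilateration sketch (2D).
    # Subtracting the first anchor's range equation from the others
    # yields the linear system A p = b, solved in least squares.
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
\end{verbatim}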
\subsection{Precision of TX Scheduling}
\label{sec:crng-tosn:exp-tx-comp}
\begin{figure}[!t]
\centering
\includegraphics{figs/crng-cir-mean-t2-final.pdf}
\caption{Average CIR amplitude and standard deviation per time delay
across 500 signals with the initiator in the left center position of
Figure~\ref{fig:crng-tosn:err-ellipse-disi}.}
\label{fig:crng-cir-meanstd}
\end{figure}
We begin by examining the ability of our TX compensation mechanism
(\ref{sec:crng-tosn:txfix}) to schedule transmissions
precisely, as this is crucial to improve the
accuracy of concurrent ranging and localization. To this end, we ran an
experiment with one initiator and six responders, collecting 500 CIR
signals for our analysis. Figure~\ref{fig:crng-cir-meanstd} shows the
average CIR amplitude and standard deviation after re-arranging the
CIRs (\ref{sec:crng-tosn:cir-rearrangement}) and aligning the
upsampled CIR signals based on the direct path of responder \resp{1}.
Across all time delays, the average CIR presents only minor amplitude
variations in the direct paths and MPC. Further, the precise scheduling of
\textsc{response}\xspace transmissions yields a high amplitude for the direct paths
of all signals; this is in contrast with the smoother and flatter
peaks we observed earlier (\ref{sec:questions},
Figure~\ref{fig:single-tx-cir-variations}) due to the TX uncertainty
$\epsilon \in [-8, 0)$~ns.
To quantitatively analyze the TX precision, we estimate the ToA\xspace of
each \textsc{response}\xspace and measure the time difference $\Delta t_{j, 1}$
between the ToA\xspace of responder \resp{j} and the one of \resp{1}, chosen
as reference, after removing the time delays $\delta_i$ used
for response identification. Then, we subtract the mean of the distribution
and look at the deviations of $\Delta t_{j, 1}$, which ideally
should be negligible.
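The analysis itself reduces to a few array operations, summarized in
the Python sketch below; the array layout (one row per CIR, one column
per responder) and the names are our assumptions.
\begin{verbatim}
import numpy as np

def tx_deviations(toa, delays):
    # toa[k, j]: estimated ToA of responder j in CIR k;
    # delays[j]: response identification shift of responder j.
    # Returns the deviations of Delta t_{j,1} from their means,
    # which would all be zero under ideal TX scheduling.
    toa = np.asarray(toa, dtype=float) - np.asarray(delays, dtype=float)
    dt = toa[:, 1:] - toa[:, :1]  # Delta t_{j,1}, one column per j
    return dt - dt.mean(axis=0)
\end{verbatim}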
Figure~\ref{fig:crng-ts-cdf} shows the CDF of the $\Delta t_{j, 1}$
variations from the mean, while Table~\ref{tab:crng-tx-dev} details
the percentiles of the absolute variations. All time differences
present a similar behavior with an aggregate mean error
$\mu = 0.004$~ns across the 2,500 $\Delta t_{j, 1}$ measurements, with
$\sigma = 0.38$~ns and a median of $0.03$~ns; the absolute \nth{90},
\nth{95}, and \nth{99} percentiles are 0.64, 0.77, and 1.09~ns,
respectively. These results confirm that our implementation is able
to \emph{reliably schedule transmissions with sub-ns precision}.
\begin{figure}[!t]
\centering
\includegraphics{figs/crng-ts-cdf.pdf}
\caption{Time difference deviation from the mean across 500 CIRs.}
\label{fig:crng-ts-cdf}
\end{figure}
\begin{table}[h]
\centering
\caption{Deviation percentiles for the absolute time difference
$\Delta t_{j,1}$ variations.}
\label{tab:crng-tx-dev}
\begin{tabular}{ccccccc}
\toprule
& \multicolumn{6}{c}{\bfseries Percentile [ns]}\\
\cmidrule(lr){2-7}
{\bfseries Time Difference} & {\nth{25}} & {\nth{50}} & {\nth{75}} & {\nth{90}}
& {\nth{95}} & {\nth{99}}\\
\midrule
$\Delta t_{2,1}$ & 0.15 & 0.32 & 0.52 & 0.72 &0.85 & 1.08\\
$\Delta t_{3,1}$ & 0.08 & 0.18 & 0.32 & 0.51 &0.68 & 1.12\\
$\Delta t_{4,1}$ & 0.06 & 0.13 & 0.20 & 0.30 &0.40 & 0.60\\
$\Delta t_{5,1}$ & 0.13 & 0.23 & 0.40 & 0.64 &0.74 & 0.90\\
$\Delta t_{6,1}$ & 0.24 & 0.39 & 0.54 & 0.76 &0.91 & 1.14\\
Aggregate & 0.10 & 0.23 & 0.42 & 0.64 &0.77 & 1.09\\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
\label{sec:intro}
A new generation of localization systems is rapidly gaining interest,
fueled by countless applications~\cite{benini2013imu,follow-me-drone,
museum-tracking, mattaboni1987autonomous, guo2016ultra, irobot-lawnmower,
fontana2003commercialization} for which global navigation satellite
systems do not provide sufficient reliability, accuracy, or update
rate. These so-called real-time location systems (RTLS) rely on
several technologies, including optical~\cite{optitrack, vicon},
ultrasonic~\cite{cricket, alps, ultrasonic-tdoa},
inertial~\cite{benini2013imu}, and radio frequency (RF). Among these,
RF is predominant, largely driven by the opportunity of exploiting
ubiquitous wireless communication technologies like WiFi and Bluetooth
also towards localization. Localization systems based on these radios
enjoy, in principle, wide applicability; however, they typically
achieve meter-level accuracy, enough for several use cases but
insufficient for many others.
Nevertheless, another breed of RF-based localization recently
re-emerged from a decade-long oblivion: ultra-wideband (UWB).
The recent availability of tiny, low-cost, and low-power
UWB transceivers has renewed interest in this technology,
whose peculiarity is to enable accurate distance
estimation (\emph{ranging}) along with high-rate communication. These
characteristics are rapidly placing UWB in a dominant position in the
RTLS arena, and defining it as a key enabler for several Internet of
Things (IoT) and consumer scenarios. UWB is currently not as
widespread as WiFi or BLE, but the fact that the latest
Apple iPhone~11 is equipped with a UWB transceiver suggests
that the trend may change dramatically in the near future.
The Decawave\xspace DW1000 transceiver~\cite{dw1000-datasheet} has been at
the forefront of this technological advancement, as it provides
centimeter-level ranging accuracy with a tiny form factor and a power
consumption an order of magnitude lower than its bulky UWB
predecessors. On the other hand, this consumption is still an order of
magnitude higher than other IoT low-power wireless radios;
further, its impact is exacerbated when ranging---the key
asset of UWB---is exploited, due to the long packet exchanges required.
\begin{figure}[!t]
\centering
\subfloat[Single-sided two-way ranging (SS-TWR\xspace).\label{fig:two-way-ranging}]{
\includegraphics[width=.49\textwidth, valign=t]{figs/sstwr-tosn.png}}
\hfill
\subfloat[Concurrent ranging.\label{fig:crng}]{
\includegraphics[width=.49\textwidth, valign=t]{figs/crng-tosn.png}}
\caption{In SS-TWR\xspace, the initiator transmits a unicast
\textsc{poll}\xspace to which a single responder replies with a \textsc{response}\xspace. In
concurrent ranging, the initiator transmits a \emph{broadcast}
\textsc{poll}\xspace to which responders in range reply concurrently.}
\label{fig:sstwr-crng-cmp}
\end{figure}
\fakeparagraph{UWB Two-way Ranging (TWR)}
Figure~\ref{fig:two-way-ranging} illustrates
single-sided two-way ranging (SS-TWR), the simplest
scheme, part of the IEEE~802.15.4-2011
standard~\cite{std154} and further illustrated in~\ref{sec:background}.
The \emph{initiator}\footnote{The IEEE
standard uses \emph{originator} instead of \emph{initiator}; we
follow the terminology used by the Decawave\xspace documentation.} requests
a ranging measurement via a \textsc{poll}\xspace packet; the responder, after a known
delay \ensuremath{T_\mathit{RESP}}\xspace, replies with a \textsc{response}\xspace packet containing the timestamps
marking the receipt of \textsc{poll}\xspace and the sending of \textsc{response}\xspace. This
information, along with the dual timestamps marking the sending of
\textsc{poll}\xspace and the receipt of \textsc{response}\xspace measured locally at the initiator,
enable the latter to accurately compute the
time~of~flight $\tau$
and estimate the distance from the responder as $d=\ensuremath{\tau}\xspace \times c$,
where $c$ is the speed of light in air.
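In the standard SS-TWR\xspace formulation, denoting with $t_1$ and $t_4$
the \textsc{poll}\xspace TX and \textsc{response}\xspace RX times measured at the initiator, and
with $t_2$ and $t_3$ the \textsc{poll}\xspace RX and \textsc{response}\xspace TX times measured at
the responder, the initiator computes
\begin{displaymath}
\ensuremath{\tau}\xspace = \frac{(t_4 - t_1) - (t_3 - t_2)}{2}
= \frac{(t_4 - t_1) - \ensuremath{T_\mathit{RESP}}\xspace}{2}.
\end{displaymath}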
Two-way ranging, as the name suggests, involves a \emph{pairwise}
exchange between the initiator and \emph{every} responder. In other
words, if the initiator must estimate its distance w.r.t.\ $N$ nodes,
$2\times N$ packets are required. The situation is even worse with
other schemes that improve accuracy by acquiring more timestamps via
additional packet transmissions, e.g.,\ up to $4\times N$ in
popular double-sided two-way ranging
(DS-TWR\xspace) schemes~\cite{dstwr, dw-dstwr-patent, dw-dstwr}.
\fakeparagraph{UWB Concurrent Ranging}
We propose a novel approach to ranging in which,
instead of \emph{separating} the pairwise exchanges necessary
for ranging, these \emph{overlap} in time (Figure~\ref{fig:crng}).
Its mechanics are extremely simple: when the single
(broadcast) \textsc{poll}\xspace sent by the initiator is received,
each responder sends back its \textsc{response}\xspace as if it were alone,
effectively yielding concurrent replies to the initiator.
This \emph{concurrent ranging} technique enables the initiator
to \emph{range with $N$ nodes at once by using only 2~packets},
i.e.,\ as if it were ranging against a single responder.
This significantly reduces latency and energy consumption,
increasing scalability and battery lifetime,
but causes the concurrent signals from different
responders to ``fuse'' in the communication channel, potentially
yielding a collision at the initiator.
This is precisely where the peculiarities of UWB communications
come into play. UWB transmissions rely on very
short ($\leq$2~ns) pulses, enabling very precise timestamping of
incoming radio signals. This is what makes UWB intrinsically more
amenable to accurate ranging than narrowband, whose reliance on
carrier waves that are more ``spread in time'' induces physical bounds
on the precision that can be attained in establishing a time reference
for an incoming signal.
Moreover, it is what enables our novel idea of
concurrent ranging. In narrowband, the fact that concurrent signals
are spread over time makes them very difficult to tell apart once
fused into a single signal. In practice, this is possible only if
detailed channel state information is available---usually not the case
on narrowband low-power radios, e.g.,\ the popular CC2420~\cite{cc2420}
and its recent descendants. In contrast, the reliance of UWB
on short pulses makes concurrent signals less likely to collide and combine,
therefore enabling, under certain conditions discussed later,
their identification if channel impulse response (CIR) information is available.
Interestingly, the DW1000
\begin{inparaenum}[\itshape i)]
\item bases its own operation precisely on the processing of the CIR, and
\item makes the CIR available also to the application layer (\ref{sec:background}).
\end{inparaenum}
\fakeparagraph{Goals and Contributions}
As discussed in~\ref{sec:crng}, a strawman implementation of concurrent ranging\xspace is
very simple. Therefore, using our prototype deployed in a small-scale
setup, we begin by investigating the \emph{feasibility} of concurrent ranging\xspace
(\ref{sec:questions}), given the inevitable degradation in accuracy
w.r.t.\ isolated ranging caused by the interference among the signals of
responders, in turn determined by their relative placement. Our
results, originally published in~\cite{crng},
offer empirical evidence that it is indeed possible to derive accurate
ranging information from UWB signals overlapping in time.
On the other hand, these results also point out the significant
\emph{challenges} that must be overcome to transform concurrent ranging\xspace from an
enticing opportunity to a practical system. Solving these
challenges is the specific goal of this paper, which extends our
original one~\cite{crng}, where we introduced the concept of
concurrent ranging and showed its feasibility for the first time in
the literature.
Among these challenges, a key one is the \emph{limited precision of scheduling
transmissions} in commercial UWB transceivers. For instance, the
popular Decawave\xspace DW1000 we use in this work can timestamp packet receptions (RX)
with a precision of $\approx$15~ps, but can schedule
transmissions (TX) with a precision of only $\approx$8~ns. This is
not an issue in conventional ranging schemes like SS-TWR\xspace; as mentioned
above, the responder embeds the necessary timestamps in the \textsc{response}\xspace
payload, allowing the initiator to correct for the limited TX
granularity. However, in concurrent ranging\xspace only one \textsc{response}\xspace is decoded,
if any; the timing information of the others must be
derived solely from the appearance of their corresponding signal
paths in the CIR. This process is greatly affected by the TX uncertainty,
which significantly reduces accuracy and consequently
hampers the practical adoption of concurrent ranging\xspace.
In this paper,
we tackle and solve this key challenge with a mechanism that
significantly improves the TX scheduling precision via a \emph{local}
compensation (\ref{sec:reloaded}). Indeed, both the precise and
imprecise information about TX scheduling are available at the
responder; the problem arises because the radio discards the less
significant 9~bits of the precise 40-bit timestamp. Therefore, the
responder can correct for the \emph{known} TX timing error when
preparing its \textsc{response}\xspace. We achieve this by fine-tuning the frequency
of the crystal oscillator entirely in firmware and locally to the
responder, i.e.,\ without additional hardware or external out-of-band
infrastructure. Purposely, the technique also compensates for the
oscillator frequency offset between initiator and responders,
significantly reducing the impact of clock drift, the main cause of
ranging error in SS-TWR\xspace.
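To make the source of the correction explicit, the sketch below
computes the known TX error from the scheduled 40-bit timestamp; the
names are ours, and the time unit follows from the $\approx$15.65~ps
time base of the DW1000.
\begin{verbatim}
DW1000_TIME_UNIT_S = 1.0 / (128 * 499.2e6)  # ~15.65 ps per unit

def known_tx_error_ns(scheduled_tx_ts):
    # The radio discards the 9 least-significant bits of the
    # scheduled 40-bit TX timestamp, so the frame leaves early by
    # up to ~8 ns. Both values are known at the responder, hence
    # the error is known *before* transmitting and can be
    # compensated locally (here it is only computed).
    discarded = scheduled_tx_ts & 0x1FF
    return discarded * DW1000_TIME_UNIT_S * 1e9  # in [0, 8) ns
\end{verbatim}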
Nevertheless, precisely scheduling transmissions
is not the only challenge of concurrent ranging\xspace. A full-fledged,
practically usable system also requires tackling
\begin{inparaenum}[\itshape i)]
\item the reliable identification of the concurrent responders, and
\item the precise estimation of the time of arrival (ToA\xspace) of their signals;
\end{inparaenum}
both are complicated by the intrinsic mutual interference of
concurrent transmissions. In this paper, we build upon techniques developed
by us~\cite{chorus} and other groups~\cite{crng-graz,snaploc} since we
first proposed concurrent ranging in~\cite{crng}. Nevertheless, we
\emph{adapt and improve} these techniques (\ref{sec:reloaded}) to
accommodate the specifics of concurrent ranging in general and
the TX scheduling compensation technique in particular.
Interestingly, our novel design significantly increases
not only the accuracy but also the \emph{reliability} of
concurrent ranging\xspace w.r.t.\ our original strawman design in~\cite{crng}.
The latter relied heavily
\begin{inparaenum}[\itshape i)]
\item on the successful RX of at least one \textsc{response}\xspace,
containing the necessary timestamps for
accurate time-of-flight calculation, and
\item on the ToA\xspace estimation of this \textsc{response}\xspace performed by the DW1000,
used to determine the difference in the signal ToA\xspace
(and therefore distance) to the other responders.
\end{inparaenum}
However, the fusion of concurrent signals may cause
the decoding of the \textsc{response}\xspace to be matched to the
wrong responder or fail altogether, yielding grossly incorrect
estimates or none at all, respectively. Thanks to the ability to
precisely schedule the TX of \textsc{response}\xspace packets, we
\begin{inparaenum}[\itshape i)]
\item remove the need to
decode at least one of them, and
\item enable distance estimation \emph{solely} based on the CIR.
\end{inparaenum}
We can actually \emph{remove the payload entirely} from \textsc{response}\xspace packets,
further reducing latency and energy consumption.
We evaluate concurrent ranging\xspace extensively (\ref{sec:crng-tosn:eval}). We first
show via dedicated experiments that our prototype can schedule TX with
$<1$~ns error. We then analyze the \emph{raw} positioning information
obtained by concurrent ranging\xspace, to assess its quality without the help of
additional filtering techniques~\cite{ukf-julier, ukf} that, as
shown in~\cite{atlas-tdoa, guo2016ultra, 7374232, ethz-one-way}, would
nonetheless improve performance. Our experiments in two environments,
both with static positions and mobile trajectories, confirm that the
near-perfect TX scheduling precision we achieve, along with our dedicated
techniques to accurately extract distance information from the CIR,
enable reliable decimeter-level ranging and positioning
accuracy---same as conventional schemes for UWB but at a fraction of
the network and energy cost.
These results, embodied in our prototype implementation, confirm that
UWB concurrent ranging is a concrete option, immediately applicable to
real-world applications where it strikes new trade-offs w.r.t.\ accuracy,
latency, energy, and scalability, offering a valid (and often more
competitive) alternative to established conventional methods,
as discussed in~\ref{sec:discussion}.
Finally, in~\ref{sec:relwork} we place concurrent ranging in the
context of related work, before ending in~\ref{sec:crng-tosn:conclusions}
with brief concluding remarks.
\subsection{Is Communication Even Possible?}
\label{sec:obs:prr}
Up to this point, we have implicitly assumed that the UWB transceiver
is able to successfully decode one of the concurrent TX with high
probability, similarly to what happens in narrowband, as exploited,
e.g.,\ by Glossy~\cite{glossy} and other protocols~\cite{lwb, chaos, crystal}.
However, this may not be the case, given the different radio PHY and
the different degree of synchronization (ns vs.\ $\mu$s) involved.
\begin{figure}[!tb]
\centering
\begin{tikzpicture}[xscale=0.6]
\tikzstyle{device} = [circle, thick, draw=black, fill=gray!30, minimum size=8mm]
\node[device] (a) at (0, 0) {$R_1$};
\node[device] (b) at (4, 0) {$I$};
\node[device] (c) at (10, 0) {$R_2$};
\draw[<->, thick] (a) -- (b) node[pos=.5,above]{$d_1$};
\draw[<->, thick] (b) -- (c) node[pos=.5,above]{$d_2 = D - d_1$};
\draw[<->, thick] (0, -.5) -- (10, -.5) node[pos=.5,below]{$D = 12$~m};
\end{tikzpicture}
\caption{Experimental setup to investigate the reliability and accuracy of
concurrent ranging (\ref{sec:obs:prr}--\ref{sec:obs:accuracy}). $I$ is the
initiator, $R_1$ and $R_2$ are the responders.}
\label{fig:capture-exp-deployment}
\end{figure}
Our first goal is therefore to verify this hypothesis. We run a series
of experiments with three nodes, one initiator $I$ and two concurrent
responders $R_1$ and $R_2$, placed along a line
(Figure~\ref{fig:capture-exp-deployment}). The initiator is placed
between responders at a distance $d_1$ from $R_1$ and $d_2 = D - d_1$
from $R_2$, where $D = 12$~m is the fixed distance between the
responders. We vary $d_1$ between 0.4~m and 11.6~m in steps of
0.4~m. By changing the distance between initiator and
responders we affect the chances of successfully receiving a packet
from either responder due to the variation in power loss and
propagation delay. For each initiator position, we perform 3,000
ranging exchanges with concurrent ranging\xspace, measuring the packet reception ratio
(\ensuremath{\mathit{PRR}}\xspace) of \textsc{response}\xspace packets along with the resulting ranging
estimates. As a baseline, we also performed 1,000 ranging exchanges
with each responder in isolation, yielding $\ensuremath{\mathit{PRR}}\xspace=100\%$ for all
initiator positions.
\begin{figure}[!tb]
\centering
\includegraphics{figs/ewsn/tosn-capture-test-node-pdr}
\caption{Packet reception rate (\ensuremath{\mathit{PRR}}\xspace) vs.\ initiator position $d_1$,
with two concurrent transmissions.}
\label{fig:capture-test-prr}
\end{figure}
Figure~\ref{fig:capture-test-prr} shows the $\ensuremath{\mathit{PRR}}\xspace_i$ of each responder
and the overall $\ensuremath{\overline{\mathit{PRR}}}\xspace = \ensuremath{\mathit{PRR}}\xspace_{1} + \ensuremath{\mathit{PRR}}\xspace_{2}$ denoting the case in
which a packet from either responder is received correctly. Among all
initiator positions, the worst overall \ensuremath{\overline{\mathit{PRR}}}\xspace~$=$~75.93\% is achieved
for $d_1 = 8$~m. On the other hand, placing the initiator close to one of the
responders (i.e.,\ $d_1 \leq 2$~m or $d_1 \geq 10$~m) yields
$\ensuremath{\overline{\mathit{PRR}}}\xspace \geq 99.9\%$.
We also observe strong fluctuations in the center area. For instance,
placing the initiator at $d_1 = 5.2$~m yields $\ensuremath{\mathit{PRR}}\xspace_{1}= 93.6$\% and
$\ensuremath{\mathit{PRR}}\xspace_{2}=2.7$\%, while nudging it to $d_1 = 6$~m yields
$\ensuremath{\mathit{PRR}}\xspace_{1}= 6.43$\% and $\ensuremath{\mathit{PRR}}\xspace_{2}=85.73$\%.
\fakeparagraph{Summary}
Overall, this experiment confirms the ability of the
DW1000 to successfully decode, with high probability,
one of the packets from concurrent transmissions.
\subsection{How Do Concurrent Transmissions Affect Ranging Accuracy?}
\label{sec:obs:accuracy}
We also implicitly assumed that concurrent transmissions do not affect
the ranging accuracy. In practice, however, the UWB wireless channel
is far from being as ``clean'' as in the idealized view of
Figure~\ref{fig:concurrent-uwb}. The first path is typically followed
by several multipath reflections, which effectively create a ``tail''
after the leading signal. Depending on its temporal and spatial
displacement, this tail may interfere with the first path of other
responders by
\begin{inparaenum}[\itshape i)]
\item reducing its amplitude, or
\item generating MPC that can be mistaken for the first path,
inducing estimation errors.
\end{inparaenum}
Therefore, we now ascertain whether concurrent transmissions degrade
ranging accuracy.
\fakeparagraph{Baseline: Isolated Responders} We first look at the
ranging accuracy for all initiator positions with each responder
\emph{in isolation}, using the same setup of
Figure~\ref{fig:capture-exp-deployment}. Figure~\ref{fig:rng-hist}
shows the normalized histogram of the resulting ranging error from
58,000 ranging measurements. The average error is $\mu = 1.7$~cm, with
a standard deviation $\sigma=10.9$~cm. The maximum absolute error is
37~cm. The median of the absolute error is 8~cm, while the \nth{99}
percentile is 28~cm. These results are in accordance with previously
reported studies~\cite{surepoint,polypoint} employing the DW1000
transceiver.
\begin{figure}[!tb]
\centering
\subfloat[Isolated responders.\label{fig:rng-hist}]{
\includegraphics{./figs/ewsn/tosn-capt-hist-rng-err}}
\hfill
\subfloat[Concurrent responders.\label{fig:rng-stx-hist}]{
\includegraphics{./figs/ewsn/tosn-stx-hist-rng-err}}
\caption{Normalized histogram of the ranging error with responders in
isolation (Figure~\ref{fig:rng-hist})
vs.\ two concurrent responders (Figure~\ref{fig:rng-stx-hist}).
In the latter, the initiator sometimes receives
the \textsc{response}\xspace from the farthest responder while estimating the
first path from the closest one, therefore increasing the absolute error.}
\label{fig:rng-error-hist}
\end{figure}
\begin{figure}[!tb]
\centering
\subfloat[Ranging Error $\in {[-7.5, -0.5]}$.\label{fig:rng-error-hist-zoom-1}]{
\includegraphics{./figs/ewsn/tosn-stx-hist-rng-err-z1}}
\hfill
\subfloat[Ranging Error $\in {[-0.5, 0.5]}$.\label{fig:rng-error-hist-zoom-2}]{
\includegraphics{./figs/ewsn/tosn-stx-hist-rng-err-z2}}
\caption{Zoomed-in views of Figure~\ref{fig:rng-stx-hist}.}
\label{fig:rng-error-hist-zoom}
\end{figure}
\fakeparagraph{Concurrent Responders: Impact on Ranging Accuracy}
Figure~\ref{fig:rng-stx-hist} shows the normalized histogram of the
ranging error of 82,519 measurements using instead two concurrent
responders\footnote{Note we do not obtain valid ranging measurements
in case of RX errors due to collisions.}. The median of
the absolute error is 8~cm, as in the isolated case, while the
\nth{25} and \nth{75} percentiles are 4~cm and 15~cm, respectively.
However, while the average error $\mu = -0.42$~cm is comparable, the
standard deviation $\sigma = 1.05$~m is significantly higher.
Further, the error distribution is clearly different w.r.t.\ the case of
isolated responders (Figure~\ref{fig:rng-hist}); to better appreciate
the trends, Figure~\ref{fig:rng-error-hist-zoom} offers a zoomed-in
view of two key areas of the histogram in
Figure~\ref{fig:rng-stx-hist}. Indeed, the latter has a long tail of
measurements with significant errors; for 14.87\% of the measured
samples the ranging error is $<-0.5$~m, while in the isolated case the
maximum absolute error only reaches 37~cm.
\setlength{\columnsep}{8pt}
\setlength{\intextsep}{4pt}
\begin{wrapfigure}{R}{5.6cm}
\centering
\includegraphics{figs/ewsn/tosn-capt-rng-err-stx.pdf}
\caption{Ranging error vs.\ initiator position.}
\label{fig:stx-rng-error-vs-pos}
\end{wrapfigure}
\fakeparagraph{The Culprit:
Mismatch between Received \textsc{response}\xspace and Nearest Responder}
To understand why, we study the ranging error when the initiator
is located in the center area ($4 \leq d_1 \leq 8$~m), the one with major \ensuremath{\mathit{PRR}}\xspace
fluctuations (Figure~\ref{fig:capture-test-prr}).
Figure~\ref{fig:stx-rng-error-vs-pos} shows the average
absolute ranging error of the packets received from each responder as a
function of the initiator position. Colored areas represent the standard
deviation.
The ranging error of $R_1$ and $R_2$ increases dramatically for $d_1 \geq 6$~m
and $d_2 \geq 6$~m, respectively. Moreover, the magnitude of the error exhibits
an interesting phenomenon. For instance, when the initiator is at
$d_1 = 6.8$~m, the average error for \textsc{response}\xspace packets
received from $R_1$ is 1.68~m, very close to the displacement between responders,
${\ensuremath{\Delta d}\xspace = |d_1 - d_2| = |6.8 - 5.2| = 1.6}$~m. Similarly, for
$d_1 = 5.2$~m and $\ensuremath{\Delta d}\xspace = 1.6$~m, the average error for the packets received
from $R_2$ is 1.47~m.
The observation that the ranging error approximates the displacement
\ensuremath{\Delta d}\xspace between responders points to the fact that these high errors
appear when the initiator receives the \textsc{response}\xspace from the farthest
responder but estimates the first path of the signal with the CIR
peak corresponding instead to the nearest responder. This phenomenon
explains the high errors shown in Figure~\ref{fig:rng-stx-hist}
and~\ref{fig:rng-error-hist-zoom-1},
which are the result of this mismatch between the successful responder
and the origin of the obtained first path. In fact, the higher
probabilities in Figure~\ref{fig:rng-error-hist-zoom-1}
correspond to positions where the responder farther from the
initiator achieves the highest $\ensuremath{\mathit{PRR}}\xspace_i$ in
Figure~\ref{fig:capture-test-prr}. For example, for $d_1 = 7.6$~m, the
far responder $R_1$ achieves $\ensuremath{\mathit{PRR}}\xspace_1 = 94.46$\% and an average ranging
error of $-3.27$~m, which again corresponds to $\ensuremath{\Delta d}\xspace = 3.2$~m and
also to the highest probability in Figure~\ref{fig:rng-error-hist-zoom-1}.
\fakeparagraph{The Role of TX Scheduling Uncertainty} When this
mismatch occurs, we also observe a relatively large standard deviation
in the ranging error. This is generated by the 8~ns TX
scheduling granularity of the DW1000 transceiver (\ref{sec:crng}). In
SS-TWR\xspace (Figure~\ref{fig:two-way-ranging}), responders insert in the
\textsc{response}\xspace the elapsed time \mbox{$\ensuremath{T_\mathit{RESP}}\xspace = \ensuremath{t_3}\xspace - \ensuremath{t_2}\xspace$}
between receiving the \textsc{poll}\xspace and sending the \textsc{response}\xspace.
The initiator uses \ensuremath{T_\mathit{RESP}}\xspace to precisely
estimate the time of flight of the signal. However, the 8~ns
uncertainty produces a discrepancy in \ensuremath{t_3}\xspace, and
therefore between the \ensuremath{T_\mathit{RESP}}\xspace the initiator
obtains from the successful \textsc{response}\xspace and the
\ensuremath{T_\mathit{RESP}}\xspace actually applied by the closest
responder, resulting in significant error variations.
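To quantify this effect: since
$d = c\,\ensuremath{\tau}\xspace = \frac{c}{2}\left((t_4 - t_1) - \ensuremath{T_\mathit{RESP}}\xspace\right)$,
a discrepancy $\epsilon$ between the applied and the reported
\ensuremath{T_\mathit{RESP}}\xspace shifts the distance estimate by
$-\frac{c\,\epsilon}{2}$, i.e., up to $\approx$1.2~m for the 8~ns TX
granularity.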
\fakeparagraph{Summary}
Concurrent transmissions can negatively affect ranging
by producing a mismatch between the successful responder and the
detected CIR path used to compute the time of flight.
However, we also note that 84.59\% of the concurrent ranging samples
are quite accurate, achieving an absolute error $< 30$~cm.
\subsection{Does the CIR Contain Enough Information for Ranging?}
\label{sec:cir-enough}
In~\ref{sec:crng} we have mentioned that the limitation on the
granularity of TX scheduling in the DW1000 introduces an 8~ns
uncertainty. Given that an error of 1~ns in estimating the time of
flight results in a corresponding error of $\approx$30~cm, this
raises the question of whether the information in the CIR is sufficient
to recover the timing information necessary for distance estimation.
\begin{figure}[!b]
\centering
\begin{tikzpicture}[xscale=0.6]%
\tikzstyle{device} = [circle, thick, draw=black, fill=gray!30, minimum size=8mm]
\node[device] (a) at (0, 0) {$I$};
\node[device] (b) at (4, 0) {$R_1$};
\node[device] (c) at (10, 0) {$R_2$};
\draw[<->, thick] (a) -- (b) node[pos=.5,above]{$d_1 = 4$~m};
\draw[<->, thick] (b) -- (c) node[pos=.5,above]{$\Delta d = d_2 - d_1$};
\draw[<->, thick] (0, -.5) -- (10, -.5) node[pos=.5,below]{$d_2$};
\end{tikzpicture}
\caption{Experimental setup to analyze the CIR resulting from concurrent
ranging (\ref{sec:cir-enough}).}
\label{fig:chorus-exp-deployment}
\end{figure}
We run another series of experiments using again three nodes but
arranged slightly differently (Figure~\ref{fig:chorus-exp-deployment}).
We set $I$ and $R_1$ at a fixed distance $d_1 = 4$~m,
and place $R_2$ at a distance $d_2 > d_1$ from $I$;
the two responders are therefore separated by a distance
$\ensuremath{\Delta d}\xspace = d_2 - d_1$. Unlike previous experiments, we increase $d_2$
in steps of 0.8~m; we explore $4.8 \leq d_2 \leq 12$~m, and therefore
$0.8 \leq \ensuremath{\Delta d}\xspace \leq 8$~m. For each position of $R_2$, we run the
experiment until we successfully receive 500~\textsc{response}\xspace packets,
i.e., valid ranging estimates; we measure the CIR on the
initiator after each received \textsc{response}\xspace.
\fakeparagraph{Baseline: Isolated Responders}
Before using concurrent responders, we first measured the CIR of
$R_1$ ($d_1=4$~m) in isolation. Figure~\ref{fig:single-tx-cir-variations}
shows the average amplitude and standard deviation across 500~CIR signals,
averaged by aligning them to the first path index (\texttt{FP\_INDEX}\xspace)
reported by the DW1000~(\ref{sec:dw1000}).
The measured CIR presents an evident direct path at 50~ns,
followed by strong multipath.
We observe that the CIR barely changes across the 500~signals,
exhibiting only minor variations in the MPCs (around 55--65~ns).
\fakeparagraph{Concurrent Responders: Distance Estimation} We now
analyze the effect of $R_2$ transmitting \textit{concurrently} with
$R_1$, and show how the distance of $R_2$ can be estimated. We focus
on a single distance $d_2 = 9.6$~m and on a single CIR
(Figure~\ref{fig:chorus-cir}), to analyze in depth the phenomena at
stake; we later discuss results acquired from 500~CIR signals
(Figure~\ref{fig:chorus-cir-variations}) and for other $d_2$ values
(Table~\ref{table:concurrent-ranging}).
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/tosn-d10-cir-avg}
\caption{Average amplitude and standard deviation of 500~CIR signals for an
isolated responder at $d_1=4$~m.}
\label{fig:single-tx-cir-variations}
\includegraphics{figs/ewsn/tosn-cir-comp-391-37}
\caption{Impact of concurrent transmissions on the CIR. The \textsc{response}\xspace TX from
\resp{2} introduces a second peak at a time shift $\Delta t = 38$~ns after
the direct path from \resp{1}.}
\label{fig:chorus-cir}
\includegraphics{figs/ewsn/tosn-d10-24-cir-avg}
\caption{Average amplitude and standard deviation of 500 CIR signals,
aligned based on the \texttt{FP\_INDEX}\xspace, for two concurrent responders
at distance ${d_1 = 4}$~m and ${d_2 = 9.6}$~m from the initiator.
}
\label{fig:chorus-cir-variations}
\end{figure}
Figure~\ref{fig:chorus-cir} shows that the \textsc{response}\xspace of $R_2$ introduces a
second peak in the CIR, centered around 90~ns. This is compatible with our
a-priori knowledge of $d_2 = 9.6$~m; the question is whether this distance can
be estimated from the CIR.
Locating the direct path of $R_2$ in time is a problem
per se. In the case of $R_1$, this estimation is performed
accurately and automatically by the DW1000, enabling an accurate
estimate of $d_1$. The same could be performed for $R_2$ if it were in
isolation, but not concurrently with $R_1$.
Therefore, here we estimate the direct path from $R_2$
as the CIR index whose signal amplitude is closest to $20\%$ of
the maximum amplitude of the peak---a simple technique used, e.g.,\
in~\cite{harmonium}. The offset between the CIR index and the one
returned by the DW1000 for $R_1$, for which a precise estimate is
available, returns the delay \ensuremath{\Delta t}\xspace between the responses of $R_1$
and $R_2$. We investigate more sophisticated and accurate
techniques in~\ref{sec:reloaded}.
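The Python sketch below illustrates this simple leading-edge
estimator; it assumes an (upsampled) CIR magnitude array and a
monotonic rising edge, and all names are ours.
\begin{verbatim}
def leading_edge_index(cir_mag, peak_idx, fraction=0.2):
    # Walk backwards from the CIR peak and return the index whose
    # amplitude is closest to the given fraction of the peak
    # amplitude (assuming a monotonic rising edge).
    target = fraction * cir_mag[peak_idx]
    i = peak_idx
    while i > 0 and cir_mag[i - 1] >= target:
        i -= 1
    if i > 0 and abs(cir_mag[i - 1] - target) < abs(cir_mag[i] - target):
        i -= 1
    return i
\end{verbatim}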
The value of \ensuremath{\Delta t}\xspace is induced by the propagation delay
caused by the difference $\ensuremath{\Delta d}\xspace = d_2 - d_1$ in the distance of the
responders from the initiator. Recall the basics of SS-TWR\xspace
(\ref{sec:toa}, Figure~\ref{fig:two-way-ranging}) and of concurrent ranging
(\ref{sec:crng}, Figure~\ref{fig:crng}). $R_2$ receives the
\textsc{poll}\xspace from $I$ slightly after $R_1$; the propagation of the \textsc{response}\xspace
back to $I$ incurs the same delay; therefore, the response from $R_2$
arrives at $I$ with a delay $\ensuremath{\Delta t}\xspace = 2 \times \frac{\ensuremath{\Delta d}\xspace}{c}$
w.r.t.\ $R_1$.
In our case, the estimate above from the CIR signal yields
$\ensuremath{\Delta t}\xspace=38$~ns, corresponding to
\mbox{$\ensuremath{\Delta d}\xspace \approx 5.6$~m}---indeed the displacement
of the two responders. Therefore, by knowing the distance $d_1$ between
$I$ and $R_1$, estimated precisely by the DW1000, we can easily estimate
the distance between $I$ and $R_2$ as $d_2 = d_1 + \ensuremath{\Delta d}\xspace$.
This confirms that a single concurrent ranging exchange
contains enough information to reconstruct both distance estimates.
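In code, the conversion from a CIR index offset to a distance offset
is a one-liner, as in the sketch below; the nominal $\approx$1.0016~ns
CIR tap spacing of the DW1000 and the names are assumptions of this
illustration.
\begin{verbatim}
C_AIR = 299702547.0  # speed of light in air [m/s]
T_TAP = 1.0016e-9    # nominal DW1000 CIR tap spacing [s]

def distance_offset(idx_r2, idx_r1, upsampling=1):
    # Delta_t between the two direct paths maps to a distance
    # offset as Delta_d = c * Delta_t / 2, since both POLL and
    # RESPONSE travel the extra distance once.
    dt = (idx_r2 - idx_r1) * T_TAP / upsampling
    return 0.5 * C_AIR * dt
\end{verbatim}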
\fakeparagraph{Concurrent Transmissions: Sources of Ranging Error}
Another way to look at Figure~\ref{fig:chorus-cir} is to compare it
against Figure~\ref{fig:concurrent-uwb}; while the latter provides an
\emph{idealized} view of what happens in the UWB channel,
Figure~\ref{fig:chorus-cir} provides a \emph{real} view.
Multipath propagation and interference among the different paths
of each signal affect the measured CIR;
it is therefore interesting to see whether this holds in general
and what the impact is on the (weaker) signal from $R_2$.
To this end, Figure~\ref{fig:chorus-cir-variations} shows the average
amplitude and standard deviation of 500~CIR signals aligned based on
the \texttt{FP\_INDEX}\xspace with $d_1 = 4$~m and $d_2 = 9.6$~m. We observe that the
first pulse, the one from the closer $R_1$, presents only
minor variations in the amplitude of the direct path and of MPC,
coherently with Figure~\ref{fig:single-tx-cir-variations}. In
contrast, the pulse from $R_2$ exhibits stronger variations, as shown
by the colored area between 80 and 110~ns representing the standard
deviation. However, these variations can be ascribed only marginally
to interference with the pulse from $R_1$; we argue, and provide
evidence next, that these variations are the result of small
time shifts of the observed CIR pulse, in turn caused by the
$\epsilon \in [-8,0)$~ns TX scheduling uncertainty.
\fakeparagraph{TX Uncertainty Affects Time Offsets}
Figure~\ref{fig:chorus-at-hist} shows the normalized histogram, for the same
500~CIR signals, of the time offset \ensuremath{\Delta t}\xspace between the times at which the
responses from $R_1$ and $R_2$ are received at $I$. The real value, computed
with exact knowledge of distances, is $\ensuremath{\Delta t}\xspace = 37.37$~ns; the average from the
CIR samples is instead $\ensuremath{\Delta t}\xspace = 36.11$~ns, with $\sigma = 2.85$~ns.
These values, and the trends in Figure~\ref{fig:chorus-at-hist}, are
compatible with the 8~ns uncertainty deriving from TX scheduling.
\fakeparagraph{Time Offsets Affect Distance Offsets} As shown in
Figure~\ref{fig:chorus-at-hist}, the uncertainty in time offset
directly translates into uncertainty in the distance offset, whose
real value is $\ensuremath{\Delta d}\xspace = 5.6$~m. In contrast, the average estimate is
$\ensuremath{\Delta d}\xspace = 5.41$~m, with $\sigma = 0.43$~m. The average error is
therefore $-18$~cm; the \nth{50}, \nth{75}, and \nth{99}
percentiles are 35~cm, 54~cm and 1.25~m, respectively. These results
still provide sub-meter ranging accuracy as long as the estimated
distance to $R_1$ is accurate enough.
\fakeparagraph{Distance Offsets Affect Ranging Error} Recall that the
distance $d_1$ from $R_1$ to $I$ is obtained directly from the
timestamps provided by the DW1000, while for $R_2$ is estimated as
$d_2 = d_1 + \ensuremath{\Delta d}\xspace$. Therefore, the uncertainty in the distance
offset \ensuremath{\Delta d}\xspace directly translates into an additional ranging error,
shown in Figure~\ref{fig:concurrent-ranging-error} for each responder.
$R_1$ exhibits a mean ranging error $\mu = 3.6$~cm with
$\sigma = 1.8$~cm and a \nth{99} percentile over the absolute error of
only 8~cm. Instead, the ranging error for $R_2$, computed indirectly
via \ensuremath{\Delta d}\xspace, yields $\mu = -15$~cm with $\sigma = 42.67$~cm.
The median of the absolute error of $R_2$ is 31~cm,
while the \nth{25}, \nth{75}, and \nth{99} percentiles
are 16~cm, 58~cm, and 1.18~m, respectively.
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/tosn-d10-24-at-ad-hists}
\caption{Normalized histograms of the time offset \ensuremath{\Delta t}\xspace and corresponding
distance offset \ensuremath{\Delta d}\xspace between the leading CIR pulses from $R_1$ and $R_2$.}
\label{fig:chorus-at-hist}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/tosn-d10-24-rng-hist}
\caption{Normalized histograms of the concurrent ranging error of each responder.}
\label{fig:concurrent-ranging-error}
\end{figure}
\fakeparagraph{Impact of Distance between Responders}
In principle, the results above demonstrate the feasibility of
concurrent ranging and its ability to achieve sub-meter accuracy.
Nevertheless, these results were obtained for a single value of $d_2$.
Table~\ref{table:concurrent-ranging} summarizes the results
obtained by varying this distance as described at the beginning of the
section. We only consider the \textsc{response}\xspace packets successfully sent by
$R_1$, since those received from $R_2$ produce the mismatch mentioned
in~\ref{sec:obs:accuracy}, increasing the error by $\approx$~\ensuremath{\Delta d}\xspace;
we describe a solution to this latter problem in~\ref{sec:reloaded}.
\input{rng-table1}
To automatically detect the direct path of \resp{2},
we exploit our a-priori knowledge of where it
should be located based on \ensuremath{\Delta d}\xspace, and therefore \ensuremath{\Delta t}\xspace.
We consider the slice of the CIR defined by $\ensuremath{\Delta t}\xspace \pm 8$~ns,
and detect the first peak in it,
estimating the direct path as the preceding index with the amplitude
closest to the $20\%$ of the maximum amplitude, as described earlier.
To abate false positives, we also enforce the additional constraints
that a peak has a minimum amplitude of 1,500 and that the minimum distance
between peaks is 8~ns.
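A possible realization of this constrained search is sketched below
with SciPy; the guard width and the conversion of the 8~ns constraints
into samples are assumptions of this illustration, and the
leading-edge estimator described earlier can then be applied to the
returned index.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

def r2_direct_path(cir_mag, expected_idx, guard,
                   min_height=1500.0, min_peak_dist=8):
    # Search the CIR slice around the expected delay for the first
    # peak with a minimum amplitude and a minimum inter-peak
    # distance (both in samples; time constraints must be converted
    # via the CIR tap spacing).
    lo = max(0, expected_idx - guard)
    hi = min(len(cir_mag), expected_idx + guard)
    peaks, _ = find_peaks(cir_mag[lo:hi],
                          height=min_height, distance=min_peak_dist)
    return lo + peaks[0] if peaks.size else None
\end{verbatim}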
As shown in Table~\ref{table:concurrent-ranging}, the distance to
$R_1$ is estimated with an average error $\mu<9$~cm and
$\sigma < 10$~cm for all tested $d_2$ distances. The \nth{99}
percentile absolute error is always $< 27$~cm. These results are in
line with those obtained in~\ref{sec:obs:accuracy}. As for $R_2$, we
observe that the largest error of the estimated $\Delta d$, and of
$d_2$, is obtained for the shortest distance $d_2 = 4.8$~m. In this
particular setting, the pulses from both responders are very close and
may even overlap in the CIR, increasing the resulting error,
$\mu=-43$~cm for $d_2$. The other distances exhibit $\mu\leq 25$~cm.
We observe that the error is significantly lower with
$\Delta d \geq 4$~m, achieving $\nth{75} <60$~cm for $d_2$. Similarly,
for all $\Delta d \geq 4$~m except $\Delta d = 5.6$~m, the \nth{99}
percentile is $< 1$~m. These results confirm that concurrent ranging
can achieve sub-meter ranging accuracy, as long as the distance
$\Delta d$ between responders is sufficiently large.
\fakeparagraph{Summary}
Concurrent ranging can achieve sub-meter accuracy, but requires
\begin{inparaenum}[\itshape i)]
\item a sufficiently large difference $\Delta d$ in distance (or
\ensuremath{\Delta t}\xspace in time) among concurrent responders, to distinguish the
responders first paths within the CIR, and
\item a successful receipt of the \textsc{response}\xspace packet from the closest
responder, otherwise the mismatch of responder identity increases
the ranging error to $\approx \Delta d$.
\end{inparaenum}
\subsection{What about More Responders?}
\label{sec:cir-multiple}
We conclude the experimental campaign with our strawman implementation
by investigating the impact of more than two concurrent responders,
and their relative distance, on \ensuremath{\mathit{PRR}}\xspace and ranging accuracy. If
multiple responders are at a similar distance from the initiator,
their pulses are likely to overlap in the CIR, hampering the
discrimination of their direct paths from MPC. Dually, if the distance
between the initiator and the nearest responder is much smaller w.r.t.\
the others, power loss may render the transmissions of farther responders too
faint to be detected at the initiator, due to the interference from
those of the nearest responder.
To investigate these aspects, we run experiments with five concurrent
responders arranged in a line (Figure~\ref{fig:five-responders-setup}),
for which we change the inter-node distance $d_i$.
For every tested $d_i$, we repeat the experiment until we obtain
500~successfully received \textsc{response}\xspace packets, as done earlier.
\begin{figure}[!b]
\centering
\begin{tikzpicture}[xscale=0.6]%
\tikzstyle{device} = [circle, thick, draw=black, fill=gray!30, minimum size=8mm]
\node[device] (a) at (0, 0) {$I$};
\node[device] (b) at (3, 0) {$R_1$};
\node[device] (c) at (6, 0) {$R_2$};
\node[device] (d) at (9, 0) {$R_3$};
\node[device] (e) at (12, 0) {$R_4$};
\node[device] (f) at (15, 0) {$R_5$};
\draw[<->, thick] (a) -- (b) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (b) -- (c) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (c) -- (d) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (d) -- (e) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (e) -- (f) node[pos=.5,above]{$d_i$};
\draw[<->, thick] (0, -.6) -- (15, -.6) node[pos=.5,below]{$D = 5 \times d_i$};
\end{tikzpicture}
\caption{Experimental setup to analyze the CIR resulting from five
concurrent responders (\ref{sec:cir-multiple}).}
\label{fig:five-responders-setup}
\end{figure}
\fakeparagraph{Dense Configuration} We begin by examining a very short
$d_i = 0.4$~m, yielding similar distances between each
responder and the initiator. In this setup, the overall
\ensuremath{\overline{\mathit{PRR}}}\xspace~$=$~99.36\%.
Nevertheless, recall that a time-of-flight difference of 1~ns
translates into a difference of $\approx30$~cm in distance and that
the duration of a UWB pulse is $\leq 2$~ns; pulses from neighboring
responders are therefore likely to overlap, as shown by the CIR in
Figure~\ref{fig:cir-d0-5tx}. Although we can visually observe
different peaks, discriminating the ones associated with responders from
those caused by MPC is very difficult, if not impossible, in the absence
of a-priori knowledge about the number of concurrent responders and/or
the environment characteristics. Even when these are present, in some
cases the CIR shows a wider pulse that ``fuses'' the pulses of one or
more responders with MPC. In essence, when the difference in distance
$\ensuremath{\Delta d}\xspace = d_i$ among responders is too small, concurrent ranging
cannot be applied with the strawman technique we employed thus far; we
address this problem in~\ref{sec:reloaded}.
\begin{figure}[!t]
\centering
\subfloat[$d_i = 0.4$~m. The peaks corresponding to each responder are not
clearly distinguishable; the distance from the initiator cannot be estimated.
\label{fig:cir-d0-5tx}]{
\includegraphics{figs/ewsn/cir-5tx-d0-83.pdf}}
\hfill
\subfloat[$d_i = 6$~m. The peaks corresponding to each responder are clearly
separated; the distance from the initiator can be estimated.\label{fig:cir-d15-5tx}]{
\includegraphics{figs/ewsn/cir-5tx-d15-17.pdf}}
\caption{Impact of the relative distance $d_i$ among 5~responders, analyzed
via the corresponding CIR.}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics{figs/ewsn/crng-tosn-5tx-cdf.pdf}
\caption{Impact of the relative distance $d_i$ among 5 responders: CDF of absolute ranging error.}
\label{fig:crng-ewsn-5tx-cdf}
\end{figure}
\fakeparagraph{Sparser Configurations: \ensuremath{\mathit{PRR}}\xspace} We now explore
${2 \leq d_i \leq 10}$~m, up to a maximum distance $D~=~50$~m
between the initiator $I$ and the last responder \resp{5}.
The experiment achieved an overall \ensuremath{\overline{\mathit{PRR}}}\xspace~$=$~96.59\%,
with the minimum \ensuremath{\overline{\mathit{PRR}}}\xspace~$=$~88.2\% for the maximum $d_i = 10$~m,
and the maximum \ensuremath{\overline{\mathit{PRR}}}\xspace~$=$~100\% for $d_i = 8$~m.
The closest responder $R_1$ achieved \mbox{$\ensuremath{\mathit{PRR}}\xspace_1 =90.56$\%}.
The \ensuremath{\mathit{PRR}}\xspace of the experiment is interestingly
high, considering that in narrowband technologies
increasing the number of concurrent transmitters sending different packets
typically decreases reliability due to the nature of the
capture effect~\cite{chaos,crystal}. In general, the
behavior of concurrent transmissions in UWB is slightly
different---and richer---than in narrowband; the interested reader
can find a more exhaustive treatment in~\cite{uwb-ctx-fire}. In this
specific case, the reason for the high \ensuremath{\mathit{PRR}}\xspace we observed is the closer
distance to the initiator of $R_1$ w.r.t.\ the other responders.
\fakeparagraph{Sparser Configurations: Ranging Error}
Figure~\ref{fig:crng-ewsn-5tx-cdf} shows the CDF of the ranging error
for all distances and responders. We use the same technique
of~\ref{sec:cir-enough} to detect the direct paths and, similarly,
only consider the exchanges (about 90\% in this case) where the
successfully received \textsc{response}\xspace is from the nearest responder
$R_1$, to avoid a mismatch (\ref{sec:obs:accuracy}).
We observe the worst performance for $d_i = 2$~m; peaks from different
responders are still relatively close to each other and affected by
the MPC of previously transmitted pulses. Instead,
Figure~\ref{fig:cir-d15-5tx} shows an example CIR for $d_i = 6$~m, the
intermediate value in the distance range considered. Five distinct
peaks are clearly visible, enabling the initiator to estimate the
distance to each responder. The time offset \ensuremath{\Delta t}\xspace between two
consecutive peaks is similar, as expected, given the same distance
offset $\ensuremath{\Delta d}\xspace = d_i$ between two neighboring responders. This yields
good sub-meter ranging accuracy for all $d_i\geq 4$, for which
the average error is $\mu\leq 40$~cm and the absolute error
$\nth{75}\leq 60$~cm.
\fakeparagraph{Summary} These results confirm that sub-meter
concurrent ranging is feasible even with multiple responders. However,
ranging accuracy is significantly affected by the relative distance
between responders, which limits practical applicability.
\subsection{Performance with Mobile Targets}
\label{sec:optitrack}
We now evaluate the ability of concurrent ranging\xspace to accurately determine the
position of a mobile node. This scenario is representative of several
real-world applications, e.g.,\ exploration in harsh
environments~\cite{thales-demo}, drone operation~\cite{guo2016ultra},
and user navigation in museums or shopping
centers~\cite{museum-tracking}.
To this end, we ran experiments with an EVB1000 mounted on a mobile
robot~\cite{diddyborg} in a $12 \times 8$~m$^2$ indoor area where we
placed 6~responders serving as localization anchors.
We compare both our concurrent ranging\xspace variants only against SS-TWR\xspace with
clock drift compensation since, as discussed earlier, it provides the
more challenging baseline.
The area is equipped with 14~OptiTrack cameras~\cite{optitrack}, which we
configured to output positioning samples with an update rate of
125~Hz and calibrated to obtain a mean 3D error $< 1$~mm,
therefore yielding reliable and accurate ground truth to
validate the UWB systems against. The mobile robot is controlled by an
RPi, enabling us to easily repeat trajectories by remotely driving the
robot over WiFi via a Web application on a smartphone. A second RPi enables the
flashing of the EVB1000 node with the desired binary and the upload of
serial output (CIRs and RX information) to our testbed server for
offline analysis.
\begin{figure}[!t]
\centering
\includegraphics{figs/optitrack/loc-path-ssr3-t5}
\hspace{5mm}
\includegraphics{figs/optitrack/loc-path-ssr3-t6}
\\
\includegraphics{figs/optitrack/loc-path-ssr3-t7}
\hspace{5mm}
\includegraphics{figs/optitrack/loc-path-ssr3-t9}
\caption{Localization with concurrent ranging\xspace across four trajectories using S{\footnotesize \&}S\xspace
with $K = 3$ iterations.}
\label{fig:crng-tosn:tracking}
\end{figure}
Before presenting in detail our evaluation,
Figure~\ref{fig:crng-tosn:tracking} offers the opportunity to visually
ascertain that our concurrent ranging\xspace prototype is able to \emph{continuously and
accurately} track the robot trajectory, by comparing it against the
ground truth obtained with OptiTrack. We observe a few position
samples with relatively high error, due to strong MPC; however, these
situations are rare and, in practice, easily handled with techniques
commonly used in position tracking, e.g.,\ extended or unscented Kalman
filters~\cite{ukf-julier}. Due to space constraints, the figure shows
only trajectories with S{\footnotesize \&}S\xspace because they are very similar to
threshold-based ones, as discussed next.
\fakeparagraph{Ranging Accuracy}
Across all samples, we compute the ranging error
$\hat{d}_{i} - d_{i}$ between the concurrent ranging\xspace or SS-TWR\xspace
estimate $\hat{d}_{i}$ for \resp{i} and the OptiTrack estimate $d_{i}$.
To obtain the latter, we interpolate the high-rate positioning traces
of OptiTrack to compute the exact robot position $\mathbf{p}$ at each
time instance of our concurrent ranging\xspace and SS-TWR\xspace traces and then estimate
the true distance $d_i = \norm{\mathbf{p} - \mathbf{p_i}}$, where
$\mathbf{p_i}$ is the known position of \resp{i}.
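Concretely, this boils down to a linear interpolation of the
OptiTrack trace at the UWB measurement times, as in the Python sketch
below (2D; names and array layout are assumed for illustration).
\begin{verbatim}
import numpy as np

def ground_truth_distances(t_uwb, t_opti, p_opti, p_anchor):
    # Interpolate the 125 Hz OptiTrack positions p_opti[n, 2] at
    # the UWB measurement times, then take the Euclidean distance
    # to the known anchor position.
    px = np.interp(t_uwb, t_opti, p_opti[:, 0])
    py = np.interp(t_uwb, t_opti, p_opti[:, 1])
    p = np.stack([px, py], axis=1)
    return np.linalg.norm(p - np.asarray(p_anchor), axis=1)
\end{verbatim}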
Table~\ref{tab:optitrack-rng-err} shows that the results exhibit the
very good trends we observed in the static case
(\ref{sec:crng-exp-static}). In terms of accuracy, the median and
average error are very small, and very close to SS-TWR\xspace.
However, SS-TWR\xspace is significantly more precise, while the
standard deviation $\sigma$ of concurrent ranging\xspace is in line with the one observed
with all 18~positions (Table~\ref{tab:rng-err-all}).
However, the absolute error is
$\nth{99} \leq 37$~cm, significantly lower than in the latter
case. Further, the ToA\xspace algorithm employed for concurrent ranging\xspace has only a
marginal impact on accuracy and precision.
\begin{figure}[!t]
\centering
\subfloat[Concurrent ranging: Threshold.\label{fig:crng-tosn:pergine-th-err-hist}]{
\includegraphics{figs/optitrack/crng-th-disterr-hist-perg.pdf}
}
\subfloat[Concurrent ranging: S{\footnotesize \&}S\xspace.\label{fig:crng-tosn:pergine-ss3-err-hist}]{
\includegraphics{figs/optitrack/crng-ssr3-disterr-hist-perg.pdf}
}
\subfloat[SS-TWR\xspace with drift compensation.\label{fig:crng-tosn:pergine-sstwr-err-hist}]{
\includegraphics{figs/optitrack/sstwr-disterr-hist-perg.pdf}
}
\caption{Normalized histogram of the ranging error across multiple
mobile trajectories.}
\label{fig:crng-tosn:pergine-rng-err-hist}
\end{figure}
\begin{table}[!t]
\centering
\caption{Ranging error comparison across multiple mobile trajectories.}
\label{tab:optitrack-rng-err}
\begin{tabular}{l ccc ccccc}
\toprule
& \multicolumn{3}{c}{ $\hat{d}_i - d_i$ [cm]}
& \multicolumn{5}{c}{ $|\hat{d}_i - d_i|$ [cm]}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-9}
{\bfseries Scheme} & Median & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 0.3 & -1.3 & 23.5 &8 &14 & 20 &25 & 37\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 0.2 & -1.4 & 21.6 &8 &13 & 20 &24 & 35\\
SS-TWR\xspace Compensated & -3.5 & -3.4 & 6.8 &5 &9 & 12 &15 & 19\\
\bottomrule
\end{tabular}
\end{table}
An alternate view confirming these observations is
offered by the normalized histograms in
Figure~\ref{fig:crng-tosn:pergine-rng-err-hist}, where
the long error tails observed in
Figure~\ref{fig:crng-tosn:err-hist-th}--\ref{fig:crng-tosn:err-hist-ssr3}
are absent in
Figure~\ref{fig:crng-tosn:pergine-th-err-hist}--\ref{fig:crng-tosn:pergine-ss3-err-hist}.
Overall, concurrent ranging\xspace follows closely the performance of SS-TWR\xspace with drift
compensation, providing a more scalable scheme with less overhead
and comparable accuracy. Notably, concurrent ranging\xspace measures the
distance to all responders simultaneously, an important factor
when tracking rapidly-moving targets to reduce the bias induced by
relative movements. Further, this aspect also enables a
significant increase of the attainable update rate.
\fakeparagraph{Localization Accuracy}
Figure~\ref{fig:crng-optitrack-cdf} compares the
CDFs of the localization error of the techniques under evaluation;
Table~\ref{tab:optitrack-loc-err} reports the value of the metrics
considered. The accuracy of SS-TWR\xspace is about 1~cm worse w.r.t.\ the static
case, while the precision is essentially unaltered. As for concurrent ranging\xspace, the
median error is also the same as in the static case, while the value
of the other metrics is by and large in between the case with all
positions and the one with only \textsc{center}\xspace ones. The precision is
closer to the case of all static positions
(Table~\ref{tab:loc-err-all}), which is mirrored in the slower
increase of the CDF for concurrent ranging\xspace variants w.r.t.\ SS-TWR\xspace
(Figure~\ref{fig:crng-optitrack-cdf}). Overall, the absolute error is
relatively small and closer to the case with \textsc{center}\xspace positions,
with $\nth{95}\leq 22$~cm. On the other hand, the \nth{99} percentile
is slightly higher than in Table~\ref{tab:loc-err-all},
possibly due to the different environment and the
higher impact of the orientation of the antenna relative to the
responders.
Another difference w.r.t.\ the static case is the
slightly higher precision and \nth{99} accuracy of S{\footnotesize \&}S\xspace vs.\
threshold-based estimation, in contrast with the opposite trend we observed
in~\ref{sec:crng-exp-static}. Again, this is likely to be
ascribed to the different environment and MPC profile.
In any case, this bears only
a minor impact on the aggregate performance, as shown in
Figure~\ref{fig:crng-optitrack-cdf}.
\fakeparagraph{Success Rate} Across the 4,015 signals from our
trajectories, concurrent ranging\xspace obtained 3,999 position estimates ($99.6\%$) with
both ToA\xspace techniques. Nevertheless, 43 of these are affected by an
error $\geq 10$~m and can be disregarded as outliers, yielding an
effective success rate of $98.8\%$, which nonetheless reasserts the
ability of concurrent ranging\xspace to provide reliable and robust localization.
Regarding ranging, threshold-based estimation yields a success rate of $93.18\%$
across the 24,090 expected estimates, while S{\footnotesize \&}S\xspace reaches $95.4\%$,
confirming its higher reliability. As expected, the localization
success rate is higher as the position can be computed even
if several $\hat d_i$ are lost.
\begin{figure}[!tb]
\centering
\includegraphics{figs/optitrack/crng-loc-cdf-perg.pdf}
\caption{Localization error CDF of concurrent ranging\xspace vs.\ compensated SS-TWR\xspace
across multiple trajectories.}
\label{fig:crng-optitrack-cdf}
\end{figure}
\begin{table}
\centering
\caption{Localization error comparison across multiple mobile trajectories.}
\label{tab:optitrack-loc-err}
\begin{tabular}{l cc ccccc}
\toprule
& \multicolumn{7}{c}{$\norm{\mathbf{\hat p} - \mathbf{p_r}}$ [cm]}\\
\cmidrule(lr){2-8}
{\bfseries Scheme} & {$\mu$} & {$\sigma$}
& \nth{50} & \nth{75} & \nth{90} & \nth{95} & \nth{99}\\
\midrule
Concurrent Ranging: Threshold & 12.1 & 17.2 &10 &14 & 18 &22 & 85\\
Concurrent Ranging: S{\footnotesize \&}S\xspace & 11 & 12.8 &9 &13 & 18 &20 & 60\\
SS-TWR\xspace Compensated & 5.8 & 2.3 &6 &7 & 9 &10 & 12\\
\bottomrule
\end{tabular}
\end{table}
\section{Feasibility and Challenges: Empirical Observations}
\label{sec:questions}
Although the idea of concurrent ranging is extremely simple and can be
implemented straightforwardly on the DW1000, several questions must be
answered to ascertain its practical feasibility. We discuss them
next, providing answers based on empirical observations.
\subsection{Experimental Setup}
\label{sec:ewsn:setup}
All our experiments employ the Decawave\xspace EVB1000 development
platform~\cite{evb1000}, equipped with the DW1000 transceiver, an
STM32F105 ARM Cortex-M3 MCU, and a PCB antenna.
\fakeparagraph{UWB Radio Configuration} We use a preamble length of
128~symbols and a data rate of 6.8~Mbps. Further, we use channel~4,
whose wider bandwidth provides better resolution in determining
the timing of the direct path and therefore better ranging estimates.
\fakeparagraph{Firmware} We program the behavior of initiator and responder
nodes directly atop Decawave\xspace libraries, without any OS layer, by adapting
towards our goals the demo code provided by Decawave\xspace. Specifically, we provide
support to log, via the USB interface,
\begin{inparaenum}[\itshape i)]
\item the packets transmitted and received,
\item the ranging measurements, and
\item the CIR measured upon packet reception.
\end{inparaenum}
\fakeparagraph{Environment} All our experiments are carried out in a
university building, in a long corridor whose width
is 2.37~m. This is arguably a challenging environment due to the
presence of strong multipath, but also very realistic to test the
feasibility of concurrent ranging\xspace, given that one of the main applications of
UWB is for localization in indoor environments.
\fakeparagraph{Network Configuration}
In all experiments, one initiator node and one or more responders
are arranged in a line, placed exactly in the middle of the
aforementioned corridor. This one-dimensional configuration
allows us to clearly and intuitively relate the temporal
displacements of the received signals to the spatial displacement of
their source nodes. For instance, Figure~\ref{fig:capture-exp-deployment}
shows the network used in~\ref{sec:obs:prr}; we change
the arrangement and number of nodes depending
on the question under investigation.
\section{Concurrent Ranging Reloaded}
\label{sec:reloaded}
The experimental campaign in the previous section confirms that
concurrent ranging is feasible, but also highlights several challenges
not tackled by the strawman implementation outlined in~\ref{sec:crng},
limiting the potential and applicability of our technique. In this
section, we overcome these limitations with a novel design
that, informed by the findings in~\ref{sec:questions}, significantly
improves the performance of concurrent ranging\xspace both in terms of accuracy and
reliability, bringing it in line with conventional methods but at a
fraction of the network and energy overhead.
We begin by removing the main source of inaccuracy, i.e.,\ the 8~ns
uncertainty in TX scheduling. The technique we present
(\ref{sec:crng-tosn:txfix}) not only achieves sub-ns precision, as
shown in our evaluation (\ref{sec:crng-tosn:exp-tx-comp}), but also
doubles as a mechanism to reduce the impact of clock drift, the main
source of error in SS-TWR\xspace (\ref{sec:soa-sstwr}). We then
present our technique to correctly associate responders with paths
in the CIR (\ref{sec:crng-tosn:resp-id}), followed by two necessary
CIR pre-processing techniques to discriminate the direct paths from MPC
and noise (\ref{sec:crng-tosn:cir-proc}). Finally, we illustrate two algorithms
for estimating the actual ToA\xspace of the direct paths and outline the overall
processing that, by combining all these steps, yields the final
distance estimation (\ref{sec:crng-tosn:time-dist}).
\section{Related Work}
\label{sec:relwork}
We place concurrent ranging in the context of other UWB ranging
schemes (\ref{sec:relwork:twr}), the literature on concurrent
transmissions in low-power wireless communications
(\ref{sec:relwork:glossy}), and techniques
that build upon the work~\cite{crng} in which we introduced the notion
of concurrent ranging for the first time (\ref{sec:relwork:crng}).
\subsection{Other UWB Ranging Schemes}
\label{sec:relwork:twr}
Although SS-TWR\xspace is a simple and popular scheme for UWB, several others
exist, focusing on improving different aspects of its operation.
A key issue is the linear relation between the ranging error and the
clock drift (\ref{sec:toa}). Some approaches extend SS-TWR\xspace by
\emph{adding} an extra packet from the initiator to the
responder~\cite{polypoint} or from the responder to the
initiator~\cite{ethz-sstwr-drift}. The additional packet
enables clock drift compensation.
Instead, double-sided two-way ranging (DS-TWR\xspace), also part of the
IEEE~802.15.4\xspace standard~\cite{std154}, includes a third packet from the
initiator to the responder in reply to its \textsc{response}\xspace, yielding a more
accurate distance estimate at the responder; a fourth, optional
packet back to the initiator relays the estimate to it. In the
classic \emph{symmetric} scheme~\cite{dstwr}, the response delay \ensuremath{T_\mathit{RESP}}\xspace
for the \textsc{response}\xspace is the same for the third packet from initiator to
responder.
This constraint reduces flexibility and increases development
complexity~\cite[p.~225]{dw1000-manual-v218}. In the alternative
\emph{asymmetric} scheme proposed by Decawave\xspace~\cite{dw-dstwr-patent,
dw-dstwr}, instead, the error does not depend on the delays of the
two packets; further, the clock drift is reduced to picoseconds,
making ToA estimation the main source of
error~\cite{dw1000-manual-v218}. However, DS-TWR\xspace has significantly
higher latency and energy consumption, requiring up to $4\times N$
packets (twice than SS-TWR\xspace) to measure the distance to $N$ nodes at
the initiator. We are currently investigating if and how concurrent
ranging can be extended towards DS-TWR\xspace.
PolyPoint~\cite{polypoint} and SurePoint~\cite{surepoint} improve
ranging and localization by using a custom-designed multi-antenna
hardware platform. These schemes exploit antenna and channel diversity,
yielding more accurate and reliable estimates; however,
this comes at the cost of a significantly higher latency
and energy consumption, decreasing scalability and battery lifetime.
Other schemes have instead targeted directly a reduction of the
packet overhead. The \emph{one-way ranging} in~\cite{ethz-one-way}
exploits \emph{sequential} transmissions from anchors to enable mobile
nodes to passively self-position, by precisely estimating the
time of flight and the clock drift. However, the update rate and
accuracy decrease as the number $N$ of anchors increases. Other
schemes replace the unicast \textsc{poll}\xspace of SS-TWR\xspace with a \emph{broadcast}
one, as in concurrent ranging. In N-TWR~\cite{ntwr}, responders send
their \textsc{response}\xspace \emph{sequentially}, to avoid collisions, reducing the
number of packets exchanged to $N + 1$.
An alternate scheme by Decawave\xspace~\cite[p.~227]{dw1000-manual-v218}
exploits a broadcast \textsc{poll}\xspace in asymmetric DS-TWR\xspace, rather than SS-TWR\xspace,
reducing the packet overhead to $2 + N$ or $2(N + 1)$ depending on
whether estimates are obtained at the responders or the initiator,
respectively.
In all these schemes, however, the number of packets required grows
linearly with $N$,
limiting scalability. In contrast, concurrent ranging measures the
distance to the $N$ nodes based on a \emph{single} two-way exchange,
reducing dramatically latency, consumption, and channel utilization,
yet providing similar accuracy as demonstrated
in~\ref{sec:crng-tosn:eval}.
\subsection{Concurrent Transmissions for Low-power Wireless
Communication}
\label{sec:relwork:glossy}
Our concurrent ranging technique was originally inspired by the body
of work on concurrent transmissions in narrowband low-power radios.
Pioneered by Glossy~\cite{glossy}, this technique exploits the
PHY-level phenomena of constructive interference and capture effect to
achieve unprecedented degrees of high reliability, low latency, and
low energy consumption, as shown by several follow-up
works~\cite{chaos,lwb,crystal}. However, these focus on IEEE~802.15.4\xspace
narrowband radios, leaving an open question about whether similar
benefits can be harvested for UWB radios.
In~\cite{uwb-ctx-fire} we ascertained empirically the conditions for
exploiting UWB concurrent transmissions for reliable communication,
exploring extensively the radio configuration space. The findings
serve as a foundation for adapting the knowledge and systems in
narrowband towards UWB and reaping similar benefits, as already
exemplified by~\cite{glossy-uwb}. Further, the work
in~\cite{uwb-ctx-fire} also examined the effect of concurrent
transmissions on ranging---a peculiarity of UWB not present in
narrowband---confirming our original findings in~\cite{crng}
(and~\ref{sec:questions}) and analyzing the radio configuration and
environmental conditions in more depth and breadth than what we can
report here.
\subsection{Concurrent Transmissions for Ranging and Localization}
\label{sec:relwork:crng}
We introduced the novel concept of concurrent ranging in~\cite{crng},
where we demonstrated the feasibility of exploiting UWB
concurrent transmissions together with CIR information for ranging;
\ref{sec:questions} contains an adapted account of the observations
we originally derived.
Our work was followed by~\cite{crng-graz}, which introduces the idea
of using pulse shapes and response position modulation to match CIR
paths with responders. We discarded the former
in~\ref{sec:crng-tosn:resp-id} and~\cite{chorus} as we verified
empirically that closely-spaced MPC can create ambiguity, and
therefore mis-identifications. Here, we resort to the latter as
in~\cite{chorus, snaploc}, i.e.,\ by adding a small time shift $\delta_i$
to each \textsc{response}\xspace, enough to separate the signals of each responder
throughout the CIR span. The work in~\cite{crng-graz} also suggested
a simpler version of Search \& Subtract for ToA\xspace estimation. Instead,
here we follow the original algorithm~\cite{dardari-toa-estimation}
but enforce that candidate paths reach a minimum peak amplitude, to
improve resilience to noise and MPC. Moreover, we introduce an
alternate threshold-based ToA\xspace algorithm that is significantly simpler
but yields similar results. Both preliminary works in~\cite{crng,
crng-graz} leave as open challenges the TX scheduling uncertainty
and the unreliability caused by packet loss. Here, we address these
challenges with the local compensation mechanism
in~\ref{sec:crng-tosn:txfix} and the other techniques
in~\ref{sec:reloaded}, making concurrent ranging not only accurate,
but also very reliable and, ultimately, usable in practice.
Decawave\xspace~\cite{dw:simulranging} filed a patent on ``simultaneous ranging''
roughly at the same time of our original work~\cite{crng},
similarly exploiting concurrent transmissions from responders.
The patent includes two variants:
\begin{inparaenum}[\itshape i)]
\item a \emph{parallel} version, where all responders transmit nearly simultaneously
as in~\ref{sec:crng}--\ref{sec:questions}, only aiming to measure
the distance to the closest responder, and
\item a \emph{staggered} version that exploits time shifts as
in~\ref{sec:crng-tosn:resp-id} to determine the distance to each
responder.
\end{inparaenum}
The latter, however, requires PHY-layer changes that will unavoidably
take time to be standardized and adopted by future UWB transceivers.
In contrast, the techniques we present here can be exploited with
current transceivers and can also serve as a reference for the
design and development of forthcoming UWB radios natively
supporting concurrent ranging.
Our original paper inspired follow-up work on concurrent
ranging~\cite{crng-graz,R3} but also on other techniques exploiting
concurrent transmissions for localization. Our own
Chorus~\cite{chorus} system and SnapLoc~\cite{snaploc}
realize a passive self-localization scheme supporting unlimited
targets. Both systems assume a known anchor infrastructure in which a
reference anchor transmits a first packet to which the others reply
concurrently. Mobile nodes in range listen for these concurrent
responses and estimate their own position based on time-difference
of arrival (TDoA\xspace) multilateration. In~\cite{chorus}, we modeled the accuracy of
estimation via concurrent transmissions if the TX uncertainty were to
be reduced, as expected in forthcoming UWB transceivers. This model is
applicable to concurrent ranging\xspace and, in fact, predicts the results we achieved
in~\ref{sec:crng-tosn:eval} by locally compensating for the TX
uncertainty (\ref{sec:crng-tosn:txfix}). SnapLoc instead proposed to
directly address the TX uncertainty with a correction that
requires either a wired backbone infrastructure that anchors exploit
to report their known TX error, or a reference anchor that receives
the \textsc{response}\xspace and measures each TX error from the CIR.
Both require an additional step to report the error to
mobile nodes, and introduce complexity in the deployment along with
communication overhead.
In contrast, the compensation
in~\ref{sec:crng-tosn:txfix} is \emph{entirely local} to the
responders, therefore imposing neither deployment constraints nor
overhead. Moreover, the compensation
in~\ref{sec:crng-tosn:txfix} can be directly incorporated
in Chorus and SnapLoc, improving their performance while simplifying
their designs.
Recently, these works have also inspired the use of UWB concurrent
transmissions with angle-of-arrival (AoA)
localization. In~\cite{crng-aoa}, a multi-antenna anchor sends a \textsc{poll}\xspace
to which mobile nodes in range reply concurrently, allowing the anchor
not only to measure their distance but also the AoA of their signals;
combining the two enables the anchor to estimate the position of each
node. The techniques we proposed in this paper (\ref{sec:reloaded})
addressing the TX uncertainty, clock drift, and unreliability caused
by packet loss, are applicable and likely beneficial also for this AoA
technique.
\subsection{Response Identification}
\label{sec:crng-tosn:resp-id}
As observed in~\ref{sec:questions}, if the distance between the
initiator and the responders is similar, their paths and MPC overlap
in the CIR, hindering responder identification and ToA\xspace estimation.
Previous work~\cite{crng-graz} proposed to assign a different pulse
shape to each responder and then use a matched filter to associate
paths with responders. However, this leads to responder
mis-identifications, as we showed in~\cite{chorus}, because
the channel cannot always be assumed to be separable, i.e.,\ the measured peaks in
the CIR can be a combination of multiple paths, and the
received pulse shapes can be deformed, creating ambiguity in the
matched filter output.
To reliably separate and identify responders, we resort to response
position modulation~\cite{crng-graz}, whose effectiveness has instead
been shown by our own work on Chorus~\cite{chorus} and by
SnapLoc~\cite{snaploc}. The technique consists of delaying each
\textsc{response}\xspace by \mbox{$\delta_i = (i - 1)\ensuremath{T_\mathit{ID}}\xspace$}, where $i \in \{1, \ldots, N\}$
is the responder identifier. The resulting CIR consists of an ordered
sequence of signals that are time-shifted based
on \begin{inparaenum}[\itshape i)]
\item the assigned delays $\delta_i$, and
\item the propagation delays $\tau_i$,
\end{inparaenum}
as shown in Figure~\ref{fig:crng-tosn:cir-arrangement}.
The constant \ensuremath{T_\mathit{ID}}\xspace must be set according to
\begin{inparaenum}[\itshape i)]
\item the CIR time span,
\item the maximum propagation time, as determined by the dimensions of
the target deployment area, and
\item the multipath profile in it.
\end{inparaenum}
Figure~\ref{fig:uwb-pdp} shows the typical power decay profile in
three different environments obtained from the {IEEE~802.15.4\xspace}a radio
model~\cite{molisch-ieee-uwb-model}. MPC with a time shift
$\geq 60$~ns suffer from significant power decay w.r.t.\ the direct
path. Therefore, by setting $\ensuremath{T_\mathit{ID}}\xspace = 128$~ns as
in~\cite{chorus,snaploc} we are unlikely to suffer from significant
MPC and able to reliably distinguish the responses. Moreover,
considering that the DW1000 CIR has a maximum time span of
$\approx 1017$~ns, we can accommodate up to 7~responders, leaving a
small portion of the CIR with only noise.
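As a back-of-the-envelope check (not part of our firmware), the
following Python sketch computes the per-responder delays $\delta_i$
and the number of responders that fit in the CIR span; the constants
mirror the values discussed above.
\begin{verbatim}
# Sketch: response position modulation capacity (illustrative only).
T_ID = 128e-9        # per-responder time shift [s]
CIR_SPAN = 1017e-9   # approximate DW1000 CIR time span [s]

max_responders = int(CIR_SPAN / T_ID)                    # -> 7
delays = [(i - 1) * T_ID for i in range(1, max_responders + 1)]
print(max_responders, [round(d * 1e9) for d in delays])
# 7 [0, 128, 256, 384, 512, 640, 768]
\end{verbatim}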
We observe that this technique relies on the correct
identification of the first and last responder to properly
reconstruct the sequence, and avoid mis-identifications;
our evaluation (\ref{sec:crng-tosn:eval}) shows that these rarely
occur in practice.
\begin{figure}[!t]
\centering
\includegraphics{figs/pdp.pdf}
\caption{Power decay profile in different environments according to the
{IEEE~802.15.4\xspace}a radio model~\cite{molisch-ieee-uwb-model}.}
\label{fig:uwb-pdp}
\end{figure}
Finally, although the technique is similar to the one
in~\cite{chorus,snaploc}, the different context in which it is applied
yields significant differences. In these systems, the time of flight
$\tau_i$ is \emph{known} and compensated for, based on the fixed and
known position of anchors.
In concurrent ranging\xspace, not only $\tau_i$ is not known a
priori, but it also has a twofold impact on the \textsc{response}\xspace RX
timestamps, making the problem more challenging. On the other hand,
concurrent ranging\xspace is more flexible as it does not rely on the known position of
anchors. Further, as packet exchanges are triggered by the initiator
rather than the anchors as in~\cite{chorus,snaploc}, the former
could determine time shifts on a per-exchange basis, assigning a
different $\delta_i$ to each responder via the broadcast \textsc{poll}\xspace. For
instance, in a case where responders $R_i$ and $R_{i+1}$ have a
distance $d_i \gg d_{i+1}$ from the initiator, a larger time shift
$\delta_{i+1}$ could help separate the pulse of $R_{i+1}$ from the
MPC of $R_i$. Similarly, when more responders are present than what
can be accommodated in the CIR, the initiator could dynamically
determine the responders that should reply and the delays $\delta_i$
they should apply. This adaptive, initiator-based time shift
assignment opens several intriguing opportunities, especially for
mobile, ranging-only applications; we are currently investigating them
as part of our ongoing work (\ref{sec:discussion}).
\subsection{From Time to Distance}
\label{sec:crng-tosn:time-dist}
Enabling concurrent ranging\xspace on the DW1000 requires a dedicated algorithm
(\ref{sec:crng-tosn:toa-est}) to estimate the ToA\xspace of each \textsc{response}\xspace
in the CIR. This timing information must then be translated into the
corresponding distances (\ref{sec:crng-tosn:dist-est}), used directly
or in the computation of the initiator position
(\ref{sec:soa:toa-loc}).
\input{toa-est}
\subsubsection{Distance Estimation}
\label{sec:crng-tosn:dist-est}
These ToA\xspace estimation algorithms determine the CIR indexes \ensuremath{\Tau_i}\xspace
marking the direct path of each \textsc{response}\xspace. These, however, are only
\emph{array indexes}; each must be translated into a radio timestamp
marking the \emph{time of arrival} of the corresponding \textsc{response}\xspace, and
combined with other timing information to reconstruct the distance
$d_i$ between initiator and responder.
In~\ref{sec:crng}--\ref{sec:questions} we relied on the fact that the
radio \emph{directly} estimates the ToA\xspace of the first responder
\resp{1} with high accuracy, enabling accurate distance estimation by
using the timestamps embedded in the payload. Then, by looking at the
time difference $\Delta t_{i, 1}$ between the first path
of \resp{1} and another responder $R_i$ we can determine its distance
from the initiator as $d_i = d_1 + c\frac{\Delta t_{i, 1}}{2}$. This
approach assumes that the radio
\begin{inparaenum}[\itshape i)]
\item places the direct path of \resp{1} at the \texttt{FP\_INDEX}\xspace and
\item successfully decodes the \textsc{response}\xspace from \resp{1},
containing the necessary timestamps to accurately determine $d_1$.
\end{inparaenum}
However, the former is not necessarily true
(Figure~\ref{fig:crng-tosn:cir-arrangement}); as for the latter, the
radio may receive the \textsc{response}\xspace packet from any responder or
none. Therefore, we cannot rely on the distance estimation of
\resp{1}.
Interestingly, the compensation technique to eliminate the TX
scheduling uncertainty (\ref{sec:crng-tosn:txfix}) is also key to
enable an alternate approach avoiding these issues and yielding
additional benefits. Indeed, this technique enables TX scheduling with
sub-ns accuracy (\ref{sec:crng-tosn:exp-tx-comp}). Therefore, the
response delay $\ensuremath{T_\mathit{RESP}}\xspace$ and the additional delay $\delta_i$ for
responder identification in concurrent ranging\xspace can be enforced with high accuracy,
without relying on the timestamps embedded in the \textsc{response}\xspace.
In more detail, the time of flight \ensuremath{\tau_\mathit{i}}\xspace from the initiator to
responder \resp{i} is estimated as
\begin{equation}\label{eq:crng-tosn:tof}
\ensuremath{\tau_\mathit{i}}\xspace = \frac{\ensuremath{T_\mathit{RTT,i}}\xspace - \ensuremath{T_\mathit{RESP,i}}\xspace}{2}
\end{equation}
and the corresponding distance as \mbox{$d_i = c \times \ensuremath{\tau_\mathit{i}}\xspace$}. As
shown in Figure~\ref{fig:computedistance}, \ensuremath{T_\mathit{RESP,i}}\xspace is the delay between
the RX of \textsc{poll}\xspace and the TX of \textsc{response}\xspace at responder \resp{i}. This
delay is computed as the addition of three factors
\mbox{$\ensuremath{T_\mathit{RESP,i}}\xspace = \ensuremath{T_\mathit{RESP}}\xspace + \delta_i + \ensuremath{A_\mathit{TX}}\xspace$}, where \ensuremath{T_\mathit{RESP}}\xspace is
the fixed response delay inherited from SS-TWR\xspace (\ref{sec:soa-sstwr}),
\mbox{$\delta_i = (i - 1)\ensuremath{T_\mathit{ID}}\xspace$} is the responder-specific delay
enabling response identification (\ref{sec:crng-tosn:resp-id}), and
\ensuremath{A_\mathit{TX}}\xspace is the known antenna delay obtained in a previous calibration
step~\cite{dw1000-antenna-delay}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.80\textwidth]{figs/crng-dist-comp.png}
\caption{Concurrent ranging time of flight $\tau_i$ computation.
To determine the distance
$d_i = c \times \tau_i$ to responder \resp{i}, we need to accurately measure
the actual \textsc{response}\xspace delay \mbox{$\ensuremath{T_\mathit{RESP,i}}\xspace = \ensuremath{T_\mathit{RESP}}\xspace + \delta_i + \ensuremath{A_\mathit{TX}}\xspace$}
and the round-trip time \ensuremath{T_\mathit{RTT,i}}\xspace of each responder based on our ToA\xspace estimation.}
\label{fig:computedistance}
\end{figure}
\ensuremath{T_\mathit{RTT,i}}\xspace is the round-trip time for responder \resp{i}, measured at the
initiator as the difference between the \textsc{response}\xspace RX timestamp and the
\textsc{poll}\xspace TX timestamp. The latter is accurately determined at the
\texttt{RMARKER}\xspace by the DW1000, in device time units of $\approx 15.65$~ps,
while the former must be extracted from the CIR via ToA\xspace
estimation. Nevertheless, the algorithms
in~\ref{sec:crng-tosn:toa-est} return only the CIR index \ensuremath{\Tau_i}\xspace at
which the first path of responder \resp{i} is estimated; this index
must therefore be translated into a radio timestamp, similar to the TX
\textsc{poll}\xspace one. To this end, we rely on the fact that the precise
timestamp \ensuremath{T_\mathit{FP}}\xspace associated to the \texttt{FP\_INDEX}\xspace in the CIR is
known. Therefore, it serves as the accurate time baseline w.r.t.\ which to
derive the \textsc{response}\xspace RX by
\begin{inparaenum}[\itshape i)]
\item computing the difference $\Delta\ensuremath{\Tau_\mathit{FP,i}}\xspace = \texttt{FP\_INDEX}\xspace - \ensuremath{\Tau_i}\xspace$
between the indexes in the CIR, and
\item obtaining the actual RX timestamp as
$\ensuremath{T_\mathit{FP}}\xspace - T_s\times\Delta\ensuremath{\Tau_\mathit{FP,i}}\xspace$, where $T_s$
is the CIR sampling period after upsampling (\ref{sec:crng-tosn:toa-est}).
\end{inparaenum}
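The following Python sketch summarizes this reconstruction
end-to-end; it is an idealized illustration, the variable names are
ours, and all timestamps are assumed to be already converted to
seconds on the initiator clock.
\begin{verbatim}
# Sketch: from CIR index to distance (idealized; names are ours).
C = 299792458.0   # speed of light [m/s]
T_S = 33.38e-12   # CIR sampling period after 30x upsampling [s]

def distance(t_poll_tx, t_fp, fp_index, toa_index,
             T_RESP, delta_i, A_TX):
    """Estimate d_i from the POLL TX timestamp, the timestamp T_FP
    associated with FP_INDEX, and the ToA index of responder i."""
    t_rx = t_fp - T_S * (fp_index - toa_index)  # RESPONSE RX time
    T_rtt = t_rx - t_poll_tx                    # round-trip time
    T_resp_i = T_RESP + delta_i + A_TX          # actual resp. delay
    tau_i = (T_rtt - T_resp_i) / 2.0            # time of flight
    return C * tau_i                            # estimated d_i [m]
\end{verbatim}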
In our experiments, we noticed that concurrent ranging\xspace usually underestimates
distance. This is due to the fact that the responder
estimates the ToA\xspace of \textsc{poll}\xspace with the DW1000
LDE algorithm, while the initiator estimates the ToA\xspace of each \textsc{response}\xspace
with one of the algorithms in~\ref{sec:crng-tosn:toa-est}. For
instance, S{\footnotesize \&}S\xspace measures the ToA\xspace at the beginning of the path, while
LDE measures it at a peak height related to the noise standard
deviation reported by the DW1000. This underestimation is nonetheless
easily compensated by a constant offset ($\leq 20$~cm) whose value
can be determined during calibration at deployment time.
Together, the steps we described enable accurate estimation of the
distance to multiple responders \emph{solely based on the CIR and the
(single) RX timestamp provided by the radio}. In the DW1000,
\begin{inparaenum}[\itshape i)]
\item the CIR is measured and available to the application even if RX
errors occur, and
\item the RX timestamp necessary to translate our ToA\xspace estimates to
radio timestamps is always\footnote{Unless a very rare PHR error
occurs~\cite[p.~97]{dw1000-manual-v218}.} updated,
\end{inparaenum}
therefore \emph{making our concurrent ranging\xspace prototype highly resilient to RX
errors}. Finally, the fact that we remove the dependency on
\resp{1} and therefore no longer need to embed/receive any timestamp
enables us to safely \emph{remove the entire payload from \textsc{response}\xspace
packets}. Unless application information is piggybacked on a
\textsc{response}\xspace, this can be composed only of preamble, SFD, and PHR,
reducing the length of the \textsc{response}\xspace packet, and therefore
the latency and energy consumption of concurrent ranging\xspace.
\subsubsection{Time of Arrival Estimation}
\label{sec:crng-tosn:toa-est}
To determine the first path of each responder, we use FFT to upsample
the re-arranged CIR signals by a factor $L = 30$, yielding a
resolution $T_s \approx 33.38$~ps. We then split the CIR into chunks
of length equal to the time shift \ensuremath{T_\mathit{ID}}\xspace used for responder
identification~(\ref{sec:crng-tosn:resp-id}), therefore effectively
separating the signals of each \textsc{response}\xspace. Finally, the actual ToA\xspace
estimation algorithm is applied to each chunk, yielding the CIR index
\ensuremath{\Tau_i}\xspace marking the ToA\xspace of each responder \resp{i}. We consider two
ToA\xspace estimation algorithms:
\begin{itemize}
\item \emph{Threshold-based.} This commonly-used algorithm simply
places the first path at the first index $i$ whose sampled
amplitude $A_i > \mathcal{T}$, where $\mathcal{T}$ is the noise
threshold (\ref{sec:crng-tosn:cir-proc}).
\item \emph{Search and Subtract (S{\footnotesize \&}S\xspace).} This well-known algorithm
has been proposed in~\cite{dardari-toa-estimation}; here, we use our
adaptation~\cite{chorus} to the case of concurrent
transmissions\footnote{Hereafter, we refer to this adaptation simply as
S{\footnotesize \&}S\xspace, for brevity.}. S{\footnotesize \&}S\xspace determines the $K$ strongest
paths, considering \emph{all} signal paths whose peak
amplitude $A_i > \mathcal{T}$. The first path is then
estimated as the one with the minimum
time delay, i.e.,\ minimum index in the CIR chunk.
\end{itemize}
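For concreteness, the Python sketch below implements a simplified
version of the threshold-based variant over the upsampled CIR, split
into chunks of \ensuremath{T_\mathit{ID}}\xspace samples as described
in~\ref{sec:crng-tosn:resp-id}; it takes the noise threshold as
given, omitting the estimation step of~\ref{sec:crng-tosn:cir-proc}.
\begin{verbatim}
# Sketch: threshold-based ToA per responder chunk (simplified).
import numpy as np

def toa_indexes(cir_amp, chunk_len, noise_threshold):
    """cir_amp: upsampled CIR amplitude; chunk_len: T_ID in samples.
    Returns the estimated first-path index per chunk (None if no
    sample exceeds the threshold)."""
    toas = []
    for start in range(0, len(cir_amp), chunk_len):
        chunk = cir_amp[start:start + chunk_len]
        above = np.nonzero(chunk > noise_threshold)[0]
        toas.append(start + int(above[0]) if above.size else None)
    return toas
\end{verbatim}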
These two algorithms strike different trade-offs w.r.t.\ complexity,
accuracy, and resilience to multipath. The threshold-based algorithm is very
simple and efficient but also sensitive to high noise. For instance,
if a late MPC from a previous chunk appears at the beginning of the next
one with above-threshold amplitude, it is selected as the first
path, yielding an incorrect ToA\xspace estimate. S{\footnotesize \&}S\xspace is more resilient,
as these late MPC from previous responses would need to be stronger
than the $K$ strongest paths from the current chunk to cause a mismatch.
Still, when several strong MPC are in the same chunk, S{\footnotesize \&}S\xspace may incorrectly select
one of them as the first path, especially if the latter is weaker than
MPC. Moreover, S{\footnotesize \&}S\xspace relies on a matched filter, which
\begin{inparaenum}[\itshape i)]
\item requires to determine the filter template by measuring the shape
of the transmitted UWB pulses, and
\item increases computational complexity, as $K$ discrete convolutions
must be performed to find the $K$ strongest paths.
\end{inparaenum}
We compare these ToA\xspace estimation algorithms in our evaluation
(\ref{sec:crng-tosn:eval}).
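A schematic sketch of our S{\footnotesize \&}S\xspace adaptation is
shown below; it is a rough illustration only, assuming a pulse
template normalized to unit peak amplitude, and it glosses over the
exact subtraction and alignment details
of~\cite{dardari-toa-estimation, chorus}.
\begin{verbatim}
# Sketch: simplified Search & Subtract over one chunk (schematic).
import numpy as np

def sns_first_path(chunk, template, K, noise_threshold):
    """Iteratively find up to K strong paths via matched filtering,
    then return the earliest one (minimum index), or None."""
    residual = np.asarray(chunk, dtype=float).copy()
    peaks = []
    for _ in range(K):
        corr = np.correlate(residual, template, mode='same')
        idx = int(np.argmax(np.abs(corr)))
        if residual[idx] <= noise_threshold:   # no strong path left
            break
        peaks.append(idx)
        # Subtract the (unit-peak) template scaled by the local
        # amplitude; alignment is approximate in this sketch.
        lo = max(0, idx - len(template) // 2)
        hi = min(len(residual), lo + len(template))
        residual[lo:hi] -= template[:hi - lo] * residual[idx]
    return min(peaks) if peaks else None
\end{verbatim}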
\subsection{Locally Compensating for TX Scheduling Uncertainty}
\label{sec:crng-tosn:txfix}
The DW1000 transceiver can schedule a TX in the future with a
precision of $4 / (499.2\times10^6)\approx 8$~ns, much less than the
signal timestamping resolution. SS-TWR\xspace responders circumvent this lack
of precision by embedding the necessary TX/RX timestamps in their
\textsc{response}\xspace. This is not possible in concurrent ranging\xspace, and an uncertainty
$\epsilon$ from a uniform distribution $U[-8, 0)$~ns directly affects
concurrent transmissions from responders. The empirical observations
in~\ref{sec:questions} show that mitigating this TX uncertainty is
crucial to enhance accuracy. This section illustrates a
technique, inspired by private communication with Decawave\xspace
engineers, that achieves this goal effectively.
A key observation is that both the accurate desired TX timestamp and
the inaccurate one actually used by the radio are \emph{known} at the
responder. Indeed, the DW1000 obtains the latter from the former by
simply discarding its 9~least significant bits.
Therefore, given that the responder knows beforehand the TX timing
error that will occur, it can \emph{compensate} for it while preparing
its \textsc{response}\xspace.
We achieve this by fine-tuning the frequency of the oscillator, an
operation that can be performed entirely in firmware and
\emph{locally} to the responder. In the technique described here, the
compensation relies on the ability of the DW1000 to \emph{trim} its
crystal oscillator frequency~\cite[p.~197]{dw1000-manual-v218} during
operation. The parameter accessible via firmware is the radio
\emph{trim index}, whose value determines the correction currently
applied to the crystal oscillator. By modifying the index by a given
negative or positive amount (\emph{trim step}) we can
increase or decrease the oscillator frequency (i.e.,\ clock
speed) and compensate for the aforementioned known TX timing
error. Interestingly, this technique can also be exploited to reduce
the relative \emph{carrier frequency offset} (CFO) between transmitter and
receiver, with the effect of increasing receiver sensitivity,
enhancing CIR estimation, and ultimately improving ranging
accuracy and precision.
\fakeparagraph{Trim Step Characterization} To design a compensation
strategy, it is necessary to first characterize the impact of a trim
step. To this end, we ran several experiments with a transmitter and a
set of 6~receivers, to assess the impact on the CFO. The transmitter
is initially configured with a trim index of~0, the minimum allowed,
and sends a packet every 10~ms. After each TX, a trim step of~+1 is
applied, gradually increasing the index until~31, the maximum allowed,
after which the minimum index of 0 is re-applied; increasing the trim index
reduces the crystal frequency. Receivers do not apply a trim step;
they use a fixed index of~15. For each received packet, we read the
CFO between the transmitter and the corresponding receiver from the
DW1000, which stores in the \texttt{DRX\_CONF} register the receiver
carrier integrator value~\cite[p.~80--81]{dw1000-sw-api} measured
during RX, and convert this value first to Hz and then to
parts-per-million (ppm).
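A sketch of this conversion is shown below; the constants follow our
reading of the DW1000 API documentation~\cite{dw1000-sw-api} for our
configuration (6.8~Mbps, channel~4) and should be treated as
indicative rather than authoritative.
\begin{verbatim}
# Sketch: carrier integrator -> CFO in ppm (constants indicative).
FREQ_OFFSET_MULT = 998.4e6 / 2 / 1024 / 131072  # Hz per LSB
F_CARRIER_HZ = 3993.6e6                         # channel 4 center

def cfo_ppm(carrier_integrator):
    offset_hz = carrier_integrator * FREQ_OFFSET_MULT
    return offset_hz / F_CARRIER_HZ * 1e6
\end{verbatim}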
Figure~\ref{fig:ppm-offset} shows the CFO measured for each receiver
as a function of the transmitter trim index, over $\geq$100,000
packets. If the CFO is positive (negative), the receiver local clock
is slower (faster) than the transmitter
clock~\cite[p.~81]{dw1000-sw-api}. All receivers exhibit a
quasi-linear trend, albeit with a different offset. Across many
experiments, we found that the average curve slope is
$\approx -1.48$~ppm per unit trim step. This knowledge is crucial to
properly trim the clock of the responders to match the frequency of
the initiator and compensate for TX uncertainty, as described next.
\begin{figure}[!t]
\centering
\includegraphics{figs/cfo.png}
\caption{CFO between a transmitter and a set of six receivers,
as a function of the transmitter trim index.}
\label{fig:ppm-offset}
\end{figure}
\fakeparagraph{CFO Adjustment} After receiving the broadcast \textsc{poll}\xspace,
responders obtain the CFO from their carrier integrator and trim their
clock to better match the frequency of the initiator. For instance, if
a given responder measures a CFO of $+3$~ppm, this means that its
clock is slower than the initiator, and its frequency must be
increased by applying a trim step of
$-\frac{\SI{3}{ppm}}{\SI{1.48}{ppm}} \approx -2$. Repeating this
adjustment keeps the absolute value of the CFO between initiator and
responders at $\leq 1$~ppm, reducing the impact of clock drift
and improving RX sensitivity. Moreover, it also improves CIR
estimation, enabling the initiator to better discern the signals from
multiple, concurrent responders and estimate their ToA\xspace more
accurately. Finally, this technique can be used to \emph{detune} the
clock (i.e.,\ alter its speed), key to compensating for TX uncertainty.
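A minimal sketch of this adjustment, using the $\approx -1.48$~ppm
per-step slope measured above and clamping the result to the valid
trim index range:
\begin{verbatim}
# Sketch: CFO-driven clock trimming (slope from our measurements).
SLOPE_PPM = 1.48   # |CFO change| per unit trim step

def adjust_trim(trim_index, cfo_ppm):
    step = round(-cfo_ppm / SLOPE_PPM)   # e.g. +3 ppm -> step -2
    return max(0, min(31, trim_index + step))
\end{verbatim}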
\fakeparagraph{TX Uncertainty Compensation} The DW1000 measures TX and
RX times at the \texttt{RMARKER}\xspace (\ref{sec:dw1000}) with \mbox{40-bit}
timestamps in radio time units of $\approx 15.65$~ps. However,
when scheduling transmissions, it ignores the lowest 9~bits of the
desired TX timestamp.
The \emph{known} value of these 9 discarded bits directly gives the TX error
$\epsilon \in [-8, 0)$~ns to be compensated for. The compensation
occurs by \emph{temporarily} altering the clock frequency via the trim
index only for a given \emph{detuning interval}, at the end of which
the previous index is restored. Based on the known error $\epsilon$
and the predefined detuning interval $\ensuremath{T_\mathit{det}}\xspace$, we can easily
compute the trim step
\mbox{$\ensuremath{\mathcal{S}}\xspace = \lfloor \frac{\epsilon}{\SI{1.48}{ppm}\times
\ensuremath{T_\mathit{det}}\xspace}\rceil$} to be applied to compensate for the TX
scheduling error. For instance, assume that a responder knows that
its TX will be anticipated by an error $\epsilon=-5$~ns; its clock
must be slowed down. Assuming a configured detuning interval
$\ensuremath{T_\mathit{det}}\xspace=\SI{400}{\micro\second}$, a trim step
\mbox{
$\ensuremath{\mathcal{S}}\xspace = \lfloor \frac{\SI{5}{ns}}{\SI{1.48}{ppm} \times
\SI{400}{\micro\second}} \rceil = \lfloor 8.45 \rceil = 8$ } must
be applied through the entire interval \ensuremath{T_\mathit{det}}\xspace. The rounding,
necessary to map the result on the available integer values of the trim index,
translates into a residual TX scheduling error. This can be easily
removed, after the trim step \ensuremath{\mathcal{S}}\xspace is determined, by recomputing
the detuning interval as
\mbox{$\ensuremath{T_\mathit{det}}\xspace = \frac{\epsilon}{\SI{1.48}{ppm} \times \ensuremath{\mathcal{S}}\xspace}$},
equal to \SI{422.3}{\micro\second} in our example. Indeed, the
duration of \ensuremath{T_\mathit{det}}\xspace can be easily controlled in firmware and with a
significantly higher resolution than the trim index, yielding a more
accurate compensation.
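The Python sketch below summarizes the computation of the trim step
\ensuremath{\mathcal{S}}\xspace and of the recomputed detuning
interval; it is a simplified illustration of the procedure, not our
actual firmware code.
\begin{verbatim}
# Sketch: TX error compensation via temporary detuning (simplified).
SLOPE = 1.48e-6   # fractional clock-speed change per trim step

def detune_plan(eps_s, T_det_s=560e-6):
    """eps_s: known TX error in [-8e-9, 0) s. Returns (trim step,
    exact detuning interval) compensating the error."""
    step = round(abs(eps_s) / (SLOPE * T_det_s))
    if step == 0:
        return 0, 0.0                 # error too small to matter
    return step, abs(eps_s) / (SLOPE * step)

# detune_plan(-5e-9, 400e-6) -> (8, ~422.3e-6), as in the example.
\end{verbatim}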
\fakeparagraph{Implementation} In our prototype, we determine the trim
step \ensuremath{\mathcal{S}}\xspace, adjust the CFO, and compensate the TX scheduling error
in a single operation. While detuning the clock, we set the data
payload and carry out the other operations necessary before TX,
followed by an idle loop until the detuning interval is over. We then
restore the trim index to the value determined during CFO adjustment
and configure the DW1000 to transmit the \textsc{response}\xspace at the desired
timestamp. To compensate for an error $\epsilon \in [-8, 0)$~ns
without a very large trim step (i.e.,\ abrupt changes of the trim
index) we set a default detuning interval
$\ensuremath{T_\mathit{det}}\xspace=\SI{560}{\micro\second}$ and increase the ranging response
delay to $\ensuremath{T_\mathit{RESP}}\xspace = \SI{800}{\micro\second}$. This value is higher than
the one ($\ensuremath{T_\mathit{RESP}}\xspace=\SI{330}{\micro\second}$) used in~\ref{sec:questions}
and, in general, would yield worse SS-TWR\xspace ranging accuracy due to a
larger clock drift (\ref{sec:toa}). Nevertheless, here we directly
limit the impact of the clock drift with the CFO adjustment, precisely
scheduling transmissions with $<1$~ns errors, as shown
in~\ref{sec:crng-tosn:exp-tx-comp}; therefore, in practice,
the minor increase in \ensuremath{T_\mathit{RESP}}\xspace bears little to no impact.
\section{Introduction}
Most stars form in protoclusters in dense molecular clumps
(\citealt{lada03}; \citealt{mckee07b}).
The energy and momentum injected by young stars then remove
the remaining interstellar material (ISM), thus ending further
star formation and reducing the gravitational binding energy
of the protoclusters.
This feedback limits the efficiency of star formation---the
ratio of final stellar mass to initial interstellar mass---to
only $20-30\%$, and leaves many protoclusters unbound, with
their constituent stars free to disperse.
Even those protoclusters that survive will lose some stars by
ISM removal and subsequent processes.
Two of the best probes of these formation and disruption
processes are the mass functions of molecular clouds and
young star clusters, defined as the number of objects
per unit mass, $\psi(M) \equiv dN/dM$.
For molecular clouds, the best-studied galaxies are the
Milky Way and the Large Magellanic Cloud (LMC), while for
star clusters, they are the Antennae and the LMC.
In these and other cases, the observed mass functions can
be represented by power laws, $\psi(M) \propto M^{\beta}$,
from $10^4M_{\odot}$ or below to $10^6M_{\odot}$ or above.
Giant molecular clouds (GMCs) identified in CO surveys have
$\beta \approx -1.7$ \citep{rosolowsky05b, blitz07a, fukui08a}.
This exponent is also found for massive self-gravitating clumps
within GMCs, the formation sites of star clusters, whether they
are identified by CO emission \citep{bertoldi92} or higher-density
tracers such as C$^{18}$O, $^{13}$CO, and thermal dust emission
\citep{reid06b, munoz07a, wong08a}.
Young star clusters have $\beta \approx -2.0$
\citep{elmegreen97a, mckee97, zhang99b, dowell08a,
fall09a, chandar09a}.
The similar exponents for clouds and clusters indicate
that the efficiency of star formation and probability
of disruption are at most weak functions of mass.
This conclusion is reinforced by the fact that $\beta$
is the same for $10^7-10^8$ yr-old clusters as it is for
$10^6-10^7$ yr-old clusters \citep{zhang99b, fall09a,
chandar09a}.
These empirical results may at first seem puzzling.
Low-mass protoclusters have lower binding energy per
unit mass and should therefore be easier to disrupt
than high-mass protoclusters.
Indeed, several authors have proposed that feedback would
cause a bend in the mass function of young clusters at
$M \sim 10^5M_{\odot}$, motivated in part by the well-known
turnover in the mass function of old globular clusters
\citep{kroupa02a, baumgardt08a, parmentier08a}.
For young clusters, such a feature is not observed (as noted
above), while for globular clusters, it arises from almost
any initial conditions as a consequence of stellar escape
driven by two-body relaxation over $\sim 10^{10}$~yr
\citep[and references therein]{fall01a, mclaughlin08a}.
Nevertheless, we are left with an important question:
What are the physical reasons for the observed similarity
of the mass functions of molecular clouds and young star
clusters?
The goal of this Letter is to answer this question.
In Section~\ref{energymomentum}, we derive some general
relations between the mass functions of clouds and
clusters.
In Section~\ref{sec:efficiency}, we review a variety of
specific feedback processes and estimate the star formation
efficiency for radiation pressure, the dominant process in
massive, compact protoclusters.
We summarize in Section~\ref{sec:conclusion}.
\section{Mass Functions}
\label{energymomentum}
The radiative losses inside protoclusters determine how much of
the energy input by stellar feedback is available for ISM removal.
This in turn depends on the cloud structure and the specific
feedback mechanisms involved, but two limiting regimes bracket
all realistic situations: energy-driven, with no radiative
losses, and momentum-driven, with maximum radiative losses.
We estimate the mass of stars $M_*$ and the corresponding
efficiency of star formation, ${\cal E} = M_*/M$, needed to
remove the ISM from protoclusters in these regimes as follows.
We characterize a protocluster by its mass $M$, half-mass radius
$R_h$, mean surface density $\Sigma$, velocity dispersion $V_m$
(including the orbital motions of the stars and the turbulent
and thermal motions of the interstellar particles), RMS escape
velocity $V_e$, and crossing time $\tau_c$.
For simplicity, we neglect rotation, magnetic support, and
external pressure (but see Section 3).
Then the properties of a protocluster are related by
$V_m^2 = 0.4 GM/R_h$, $V_e = 2V_m$, $\tau_c = R_h/V_m$
\citep{spitzer87a} and $\Sigma \approx (M/2)/(\pi R_h^2)$.
We also assume that the sizes and masses of protoclusters
are correlated, with a power-law trend, $R_h \propto M^{\alpha}$.
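For later reference, a short Python sketch (illustrative only, SI
units) collecting these relations:
\begin{verbatim}
# Sketch: protocluster relations used in this section (SI units).
import math
G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]

def protocluster(M, R_h):
    V_m = math.sqrt(0.4 * G * M / R_h)       # velocity dispersion
    V_e = 2.0 * V_m                          # RMS escape velocity
    tau_c = R_h / V_m                        # crossing time
    Sigma = (M / 2.0) / (math.pi * R_h**2)   # mean surface density
    return V_m, V_e, tau_c, Sigma
\end{verbatim}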
In Figure \ref{msplot}, we plot $\Sigma$ and $R_h$ against $M$
for star-forming molecular clumps in the Milky Way, based on
measurements of CS, C$^{17}$O, and 1.2 mm dust emission in three
independent surveys \citep{shirley03, faundez04, fontani05}.
These clumps were selected for their star-formation activity
(water masers, IRAS colors), not their surface density.
Evidently, there is a strong correlation between $R_h$ and $M$,
and almost none between $\Sigma$ and $M$, corresponding to
$\alpha \approx 1/2$.
The typical surface density is close to the value $\Sigma
\sim 1$~g~cm$^{-2}$ expected from theory \citep{mckee03a,
krumholz07a, krumholz08a}.\footnote{For reference, the
\citet{larson81} relation for CO-selected clouds corresponds
to a much lower surface density, $\Sigma \sim 0.02$~g~cm$^{-2}$.}
We assume that the Milky Way relations also hold in other
galaxies and extend up to $\sim 10^6 M_\odot$, although it is
conceivable that they break down above $\sim 10^5 M_\odot$.
Indeed, \citet{baumgardt08a} and \citet{parmentier08a} assume that
$R_h$ is not correlated with $M$ (corresponding to $\alpha = 0$),
based on observations of gas-free clusters \citep[e.g.][]{murray09b}.
However, since ISM removal necessarily occurs during the earlier,
gas-dominated phase, $\alpha \approx 1/2$ seems more appropriate
in the present context.
As we show here, $\alpha \approx 1/2$ is also needed to reconcile
the observed mass functions of molecular clouds and star clusters.
\begin{figure}
\plotone{msigma}
\caption{
\label{msplot}
Surface density $\Sigma$ and radius $R$ plotted against mass $M$ for
star-forming molecular clumps from measurements by Shirley et al.
(2003; circles, CS emission), Fa{\'u}ndez et al. (2004; triangles,
dust emission), and Fontani et al. (2005; squares, C$^{17}$O and dust
emission).
We exclude clouds with $M < 100 M_\odot$, since they cannot form clusters.
The lines are least-squares regressions ($\log R$ against $\log M$) with
$\alpha = 0.5$ fixed (solid) and $\alpha = 0.38 \pm 0.023$ (dashed).
The true uncertainty on $\alpha$ is undoubtedly larger than the quoted
one-sigma error.}
\end{figure}
The rates of energy and momentum input are proportional to the
stellar mass\footnote{This is a good approximation for all
feedback mechanisms except protostellar outflows, which inject
energy and momentum in proportion to the star formation rate. Outflows,
however, are non-dominant in massive protoclusters; see Table 1.}:
$\dot{E} \propto {\cal E} M$ and $\dot{P} \propto {\cal E} M$.
We assume that the timescale for ISM removal is a few crossing times:
$\Delta t \sim (1-10) \times \tau_c$ \citep{elmegreen00,
elmegreen07, hartmann01, tan06a, krumholz07e}.
Thus, the total energy and momentum input are
$E \approx \dot{E}{\Delta}t \propto {\cal E} M R_h/V_m$ and
$P \approx \dot{P}{\Delta}t \propto {\cal E} M R_h/V_m$.
These reach the critical values needed to remove the ISM,
$E_{\rm crit} = {\onehalf} M V_e^2$ and $P_{\rm crit} = M V_e$,
for
\begin{subequations}
\begin{eqnarray}
{\cal E} & \propto & V_e^3/R_h \propto M^{(3 - 5\alpha)/2}
\,\,\,\,\,\,\,\,\,\,
{\rm (energy-driven)},
\label{effic:energy}
\\
{\cal E} & \propto & V_e^2/R_h \propto M^{1 - 2\alpha}
\,\,\,\,\,\,
{\rm (momentum-driven)}.
\label{effic:momentum}
\end{eqnarray}
\end{subequations}
For $\alpha=1/2$, the efficiency has little or no dependence on mass:
${\cal E} \propto M^{1/4}$ in the energy-driven regime, ${\cal E}
= {\rm constant}$ in the momentum-driven regime.
For $\alpha=0$, the variation is much stronger: ${\cal E} \propto M^{3/2}$
and ${\cal E}\propto M$, respectively.
These relations are valid for ${\cal E} \la 0.5$.
Any dependence of ${\cal E}$ on $M$ will cause the mass functions of
star clusters $\psi_*(M_*)$ and molecular clouds $\psi(M)$ to have
different shapes.
For the moment, we confine our attention to clusters young enough
to be easily recognizable even if they are unbound and dispersing.
This limit is $\sim 10^7$~yr for extragalactic clusters such as
those in the Antennae \citep{fall05a}.
In this case, the mass functions of the clusters and clouds are
related by $\psi_*(M_*)dM_* \propto \psi(M)dM$ (with a coefficient
greater than unity if several clusters form within each cloud).
For $\psi(M) \propto M^{\beta}$ and ${\cal E} \propto M^{\gamma}$,
we have $\psi_*(M_*) \propto M_*^{\beta_*}$ with
$\beta_* = (\beta - \gamma)/(1 + \gamma)$.
Equations (\ref{effic:energy}) and (\ref{effic:momentum}) then imply
\begin{subequations}
\begin{eqnarray}
\beta_* & = & \frac{2\beta + 5\alpha - 3}{5(1 - \alpha)}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
{\rm (energy-driven)},
\label{beta1e}
\\
\beta_* & = & \frac{\beta + 2\alpha - 1}{2(1 - \alpha)}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,
{\rm (momentum-driven)}.
\label{beta1p}
\end{eqnarray}
\end{subequations}
These expressions give $\beta_* = \beta$ for $\alpha = 3/5$ and 1/2,
respectively.
Thus, the similarity of the mass functions of clusters
and clouds ($\beta_* \approx \beta$) {\it requires}
that the latter have approximately constant mean surface
density ($0.5 \la \alpha \la 0.6$), no matter what type of
feedback is involved.
Before proceeding, we make a small correction.
For clouds, the observed mass function $\psi_o(M)$ represents the
true mass function at formation $\psi(M)$ (i.e., the birthrate)
weighted by the lifetime: $\psi_o(M) \propto \psi(M)\tau_l(M)$.
We assume, as before, that lifetime is proportional to crossing
time: $\tau_l \propto \tau_c \propto M^{(3\alpha - 1)/2}$.
Then the exponents of the true and observed mass functions are
related by $\beta = \beta_o - (3\alpha - 1)/2$.
Inserting this into Equations (\ref{beta1e}) and (\ref{beta1p}),
we obtain
\begin{subequations}
\begin{eqnarray}
\beta_* & = & \frac{2(\beta_o + \alpha - 1)}{5(1 - \alpha)}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
{\rm (energy-driven)},
\label{betastar:energy}
\\
\beta_* & = & \frac{2\beta_o + \alpha - 1}{4(1 - \alpha)}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
{\rm (momentum-driven)}.
\label{betastar:momentum}
\end{eqnarray}
\end{subequations}
We now evaluate Equations (\ref{betastar:energy}) and
(\ref{betastar:momentum}) with $\beta_o = -1.7$, the
observed exponent of the mass function of molecular clouds
\citep{rosolowsky05b, reid06b, munoz07a, wong08a, fukui08a}.
For constant mean surface density ($\alpha = 1/2$), we find
$\beta_* = -1.8$ in the energy-driven regime and $\beta_* = -2.0$
in the momentum-driven regime.
These predictions agree nicely with the observed exponents of
the mass functions of young star clusters, $\beta_* \approx -2.0$
(with typical uncertainty $\Delta\beta_* \approx 0.2$).
Our model is clearly idealized, but the scalings, and thus the agreement
between the predicted and observed $\beta_*$, should be robust.
For constant size ($\alpha = 0$), however, we find $\beta_* = -1.1$
in both the energy-driven and momentum-driven regimes, in definite
conflict with observations.
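These values follow directly from Equations (\ref{betastar:energy})
and (\ref{betastar:momentum}); the short Python check below
(illustrative only) reproduces them.
\begin{verbatim}
# Sketch: evaluating beta_* for the observed beta_o = -1.7.
def beta_star_energy(beta_o, alpha):
    return 2.0 * (beta_o + alpha - 1.0) / (5.0 * (1.0 - alpha))

def beta_star_momentum(beta_o, alpha):
    return (2.0 * beta_o + alpha - 1.0) / (4.0 * (1.0 - alpha))

for a in (0.5, 0.0):
    print(a, round(beta_star_energy(-1.7, a), 2),
          round(beta_star_momentum(-1.7, a), 2))
# 0.5 -1.76 -1.95   (constant mean surface density)
# 0.0 -1.08 -1.1    (constant size)
\end{verbatim}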
The mass function of star clusters older than $\sim 10^7$~yr
depends on the proportion that remain gravitationally bound.
This in turn depends on the efficiency of star formation ${\cal E}$
and the timescale for ISM removal $\Delta t$ relative to the crossing
time $\tau_c$.
Both analytical arguments and $N$-body simulations indicate that
young clusters lose most of their stars for ${\cal E} \la 0.3$ and
${\Delta t} \ll \tau_c$ but retain most of them for ${\cal E} \ga 0.5$
or ${\Delta t} \gg \tau_c$ \citep{hills80, kroupa01b, kroupa02a,
baumgardt07a}.
Thus, as long as ${\cal E}$ and ${\Delta t}/\tau_c$ are, on average,
independent of $M$, as they are for protoclusters with constant
mean surface density ($\alpha = 1/2$) and momentum-driven feedback,
ISM removal will not alter the shape of the mass function
(although its amplitude will decline).
This is consistent with the observed exponents $\beta_* \approx -2.0$
for clusters both younger and older than $10^7$~yr in the Antennae
and LMC \citep{zhang99b, fall09a, chandar09a}.
In all other cases, ${\cal E}$ increases with $M$, and a higher
proportion of low-mass clusters is disrupted, causing a flattening
or a bend at ${\cal E} \approx 0.3 - 0.5$ in the mass function.
The exact shape depends on $\Delta t/\tau_c$, clumpiness within
protoclusters, and other uncertain factors.
If the efficiency has a weak dependence on mass, as it does for
constant mean surface density ($\alpha = 1/2$) and energy-driven
feedback (${\cal E} \propto M^{1/4}$), the predicted $\beta_*$
might be marginally consistent with observations over a limited
range of masses ($10^4 M_{\odot} \la M \la 10^6 M_{\odot}$).
However, for constant size ($\alpha = 0$), the variations are
so strong (${\cal E} \propto M^{3/2}$ and ${\cal E} \propto M$)
that we expect major differences between the mass functions of
clusters younger and older than $10^7$~yr, in clear contradiction
with observations.
Our simple analytical model agrees, at least qualitatively, with
the numerical calculations by \citet{baumgardt08a} and
\citet{parmentier08a}.
They present results for energy-driven feedback by supernovae in
protoclusters with uncorrelated sizes and masses.
In some cases, they find a bend in the mass function of young clusters
at $M \sim 10^5 M_{\odot}$, while in others, they find a flattened power
law with $\beta_* \approx -1$ (see Figure 4 of \citealt{baumgardt08a}).
As we have already noted, these results are expected for $\alpha = 0$,
and they are inconsistent with the observed mass functions of young
clusters.
\section{Star Formation Efficiency}
\label{sec:efficiency}
\begin{deluxetable*}{ccccc}
\tabletypesize{\scriptsize}
\tablecaption{Feedback Mechanisms}
\tablehead{ \colhead{Mechanism} & \colhead{Type} & \colhead{Limitation} &
\colhead{Threshold\tablenotemark{\dag}} &
\colhead{Evaluated\tablenotemark{\dag}} }
\startdata
Supernovae & Energy & Too late & $\tau_c \approx 1.8$ Myr &
$\Sigma_0 \approx 0.022 M_4^{1/3} $ \\
Main-sequence winds & Either\tablenotemark{a} & Relatively weak\tablenotemark{a}
& Never & \nodata \\
Protostellar outflows & Momentum & Confined in massive clusters\tablenotemark{b}
& $V_e \approx 7$ km s$^{-1}$ & $\Sigma_0 \approx 0.17 M_4^{-1}$ \\
Photoionized gas & Momentum & Crushed by $P_{\rm rad}$\tablenotemark{c} &
$S_{49} \approx 21 R_h/\mbox{pc}$ & $\Sigma_0 \approx 0.15 M_4^{-1}$ \\
Radiation pressure & Momentum &\nodata & Equations (\ref{efficiency}) and
(\ref{Sigma_crit}) & $\Sigma_0 \approx 1.2$
\enddata
\tablenotetext{\dag}{Parameters required for ${\cal E} = 0.5$.
Evaluations assume a fully-sampled stellar IMF. Notation:
$S_{49} \equiv S/10^{49}$~s$^{-1}$ (ionization rate),
$M_4 \equiv M/10^4 M_\odot$, $\Sigma_0 \equiv \Sigma/$g~cm$^{-2}$.}
\tablenotetext{a}{Stellar winds are energy-driven and dominant if trapped,
but are expected to leak, making them momentum-driven and weak.}
\tablenotetext{b}{Based on Equation (55) of \citet{matzner00},
updated with $f_w v_w = 80$ km s$^{-1}$ \citep{matzner07}.}
\tablenotetext{c}{Based on Equation (4) of \citet{krumholz09d}
for the blister case, with the coefficient reduced by a factor
of $2.2^2$ to correct an error in the published paper and
updated with $\langle L/M_*\rangle = 1140 L_{\odot} M_{\odot}^{-1}$ and
$\langle S/M_*\rangle = 6.3\times 10^{46}$ s$^{-1}$ $M_{\odot}^{-1}$
\citep{murray09c}.}
\label{Table}
\end{deluxetable*}
We now consider five specific feedback mechanisms:
supernovae, main-sequence winds, protostellar outflows,
photoionized gas, and radiation pressure.
For the first four, we review results from the literature.
Supernova feedback begins only after the $>3.6$ Myr lifetimes of
massive stars.
Unless turbulence within a protocluster is maintained by feedback
or external forcing, stars would form rapidly and consume its ISM,
with ${\cal E} \rightarrow 1$ in $1-2$ crossing times.
This implies that supernovae can dominate only for $2\tau_c \ga
3.6$ Myr unless another mechanism somehow keeps ${\cal E}$ small
without expelling much ISM \citep{krumholz09d}.
However, even in this contrived situation, supernovae would play
only a secondary role.
Main-sequence winds are not effective if their energy is able to
leak out of the bubbles they blow \citep{harper-clark09a}.
As a result of this leakage, winds simply provide an order-unity
enhancement to radiation pressure \citep{krumholz09d}.
Protostellar outflows can only remove the ISM from protoclusters
with escape velocities below about 7 kms$^{-1}$ \citep{matzner00}.
Photoionized gas is important as a feedback mechanism only when
its pressure exceeds that of radiation throughout most of an
H~\textsc{ii}\ region.
This in turn requires that the H~\textsc{ii}\ region be larger than the
radius $r_{\rm ch}$ at which $P_{\rm rad} = P_{\rm gas}$, a condition
harder to satisfy in massive, compact protoclusters
\citep{krumholz09d}.
\begin{figure}
\plotone{mechanisms}
\caption{
\label{Fig:ejection}
Feedback in protoclusters of mean surface density $\Sigma$ and mass $M$.
Radiation pressure is the dominant mechanism throughout the shaded region.
The lines show where each mechanism alone achieves ${\cal E} = 0.5$.
These allow for partial sampling of the stellar IMF and hence differ
slightly from the power laws in Table \ref{Table} (noticeable only for
$M \la 10^4 M_\odot$).}
\end{figure}
We summarize these results in Table \ref{Table} and Figure \ref{Fig:ejection}.
As the plot shows, the mechanisms discussed thus far are relatively ineffective
in protoclusters with $M \ga 10^4$ $M_{\odot}$ and $\Sigma \ga 0.1$ g cm$^{-2}$.
We therefore turn to radiation pressure.
This would be an energy-driven feedback mechanism if all photons, even those
re-radiated by dust grains, remained trapped within a protocluster.
However, this is possible only if the protocluster is so dense and
smooth that the covering fraction seen from its center exceeds
$\sim 90\%$ in the infrared \citep{krumholz09d}.
More realistically, the protocluster would be porous enough that photons
could escape after only a few interactions with dust grains, and radiation
pressure would then be a momentum-driven feedback mechanism.
The following analysis extends that of
\citet{elmegreen83}, \citet{scoville01}, \citet{thompson05},
\citet{krumholz09d}, and \citet{murray09a}.
We consider an idealized, spherical cloud of mass $M$ and outer
radius $R$, with an internal density profile $\rho\propto r^{-k}$
(hence $R_h=2^{-1/(3-k)} R$).
Radiation from young stars near the center of the cloud ionizes the gas
and drives the expanding outer shell of this H~\textsc{ii}\ region.
After a time $t$, the momentum imparted to the shell is
$ p_s = f_{\rm trap} L t/c$, where $L$ is the stellar luminosity
(assumed constant for simplicity), and $f_{\rm trap}\sim 2-5$
accounts for assistance from main-sequence winds and incomplete leakage
of starlight and wind energy \citep{krumholz09d}.
Neglecting gravity for the moment, the velocity and radius of the
shell are related by $v_s = \eta r_s/t$ with $\eta= 2/(4-k)$.
Thus, when the shell reaches the cloud surface ($r_s = R$), it has
swept up all the remaining ISM, with mass $M_g=(1-{\cal E})M$, and
has a velocity given by
\begin{equation} \label{vshell}
v_s^2(R) =\frac{\eta f_{\rm trap} L R}{c (1 - {\cal E}) M}.
\end{equation}
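For clarity, the implicit intermediate step is the momentum balance of the
swept-up shell (a brief sketch, assuming the shell carries the full imparted
momentum, $p_s = M_g v_s$, and eliminating $t = \eta R / v_s$ at $r_s = R$):
\begin{displaymath}
M_g\, v_s(R) = \frac{f_{\rm trap} L\, t}{c}
= \frac{f_{\rm trap} L}{c}\,\frac{\eta R}{v_s(R)}
\quad \Longrightarrow \quad
v_s^2(R) = \frac{\eta f_{\rm trap} L R}{c\, M_g},
\end{displaymath}
which is the expression above with $M_g = (1 - {\cal E})M$.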
We specify the condition for ISM removal against gravity of the
protocluster by $v_s^2(R) = \alpha_{\rm crit} G M/(5 R)$, where
$\alpha_{\rm crit}$ is a parameter of order unity that accounts
for magnetic support and other uncertain factors (discussed below).
The required luminosity, from Equation (\ref{vshell}), is
\begin{equation} \label{L_crit}
L = \frac{\alpha_{\rm crit} G c (1-{\cal E})M^2}{5 \eta f_{\rm trap} R^2}.
\end{equation}
The fundamental scaling $L\propto (M/R)^2 \propto V_m^4$ arises here in
the same way it does for the growth of supermassive black holes and
galactic spheroids \citep{fabian99, king03, murray05a}.
Rewriting Equation (\ref{L_crit}) in terms of $\Sigma = M/(\pi R^2)$
and $M_* = {\cal E} M$ and solving for ${\cal E}$, we obtain our basic
result
\begin{equation} \label{efficiency}
{\cal E} = \frac{\Sigma }{\Sigma + \Sigma_{\rm crit}},
\end{equation}
with
\begin{equation}\label{Sigma_crit}
\Sigma_{\rm crit} =\frac{ 5 \eta f_{\rm trap} (L/M_*) }{ \pi \alpha_{\rm crit} G c}
\approx 1.2 \left(\frac{f_{\rm trap}}{\alpha_{\rm crit}}\right) {\rm g\,cm^{-2}}.
\end{equation}
The coefficient in the last equation is based on $\eta = 2/3$ and
$L/M_* = 1140 L_\odot/M_\odot$ (see notes to Table \ref{Table}).
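As a quick numerical check of this coefficient (a minimal sketch of ours,
with standard SI constants; it is not part of the published analysis), one
can evaluate Equations (\ref{efficiency}) and (\ref{Sigma_crit}) directly,
here with $f_{\rm trap}/\alpha_{\rm crit} = 1$:
\begin{verbatim}
import math

G, c = 6.674e-11, 2.998e8           # SI: gravitational constant, light speed
Lsun, Msun = 3.828e26, 1.989e30     # W, kg

eta = 2.0 / 3.0                     # k = 1 density profile
l_over_m = 1140.0 * Lsun / Msun     # <L/M_*> in W/kg
Sigma_crit = 5 * eta * l_over_m / (math.pi * G * c)   # kg/m^2
print(Sigma_crit * 0.1)             # ~1.16 g/cm^2, the 1.2 quoted above

def efficiency(Sigma):              # Equation (efficiency), Sigma in g/cm^2
    return Sigma / (Sigma + Sigma_crit * 0.1)

print(efficiency(0.5))              # ~0.30
\end{verbatim}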
Regardless of the exact value of $f_{\rm trap}/\alpha_{\rm crit}$, we note that
${\cal E}$ depends on $M$ and $R$ only through $\Sigma$.
Thus, when $\Sigma$ is constant, ${\cal E}$ is independent of $M$, and the mass
functions of clusters and clouds have the same exponent ($\beta_* = \beta
\approx \beta_o$).
Figure \ref{sfeplot} shows ${\cal E}(\Sigma)$ computed from Equations
(\ref{efficiency}) and (\ref{Sigma_crit}).
Clearly, ${\cal E}$ increases monotonically with $\Sigma$ from 0 to 1, reaching
${\cal E} = 0.3$ for $\Sigma \sim 0.5 (f_{\rm trap}/\alpha_{\rm crit})$ g cm$^{-2}$.
We expect $f_{\rm trap} \sim \alpha_{\rm crit} \sim$~2--5.
The escape velocity from the surface of an unmagnetized cloud corresponds to
$\alpha_{\rm crit}=10$, while the internal velocity dispersion, possibly sufficient
for some ISM removal, corresponds to $\alpha_{\rm crit} \approx 1.3$.
A shell driven by a constant force requires $\alpha_{\rm crit}=2.3$ (for $k=1$;
see Equation (A17) of \citealt{matzner00}).
We consider $\alpha_{\rm crit}\approx 2$ to be plausible; under this
condition, a protocluster certainly boils violently and loses mass rapidly.
Our intent here is not to make a detailed comparison between the model
and observations. Given the simplicity of the former and the uncertainties
in the latter, it is gratifying that they agree even roughly with each other.
\begin{figure}
\plotone{sfe}
\caption{
\label{sfeplot}
Star formation efficiency ${\cal E}$ as a function of mean surface density
$\Sigma$, computed from Equations (\ref{efficiency}) and (\ref{Sigma_crit})
with the indicated values of $f_{\rm trap}/\alpha_{\rm crit}$.}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
This Letter contains two main results.
The first is the relation between the power-law exponents of the
mass functions of molecular clouds and young star clusters,
$\beta_o$ and $\beta_*$, in the limiting regimes in which
stellar feedback is energy-driven and momentum-driven, Equations
(\ref{betastar:energy}) and (\ref{betastar:momentum}), which
bracket all realistic cases.
The predicted $\beta_*$ depends significantly
on the initial size-mass relation of the protoclusters.
We find good agreement between the predicted and observed $\beta_*$,
especially for momentum-driven feedback, for $\Sigma \propto M/R_h^2
\approx {\rm constant}$,
the relation indicated by observations of gas-dominated protoclusters.
In this case, the star formation efficiency is independent of
protocluster mass, ensuring that the fraction of clusters that
remain gravitationally bound following ISM removal
is also independent of mass.
The second main result is an estimate of the star formation efficiency
in protoclusters regulated by radiation pressure,
Equations (\ref{efficiency}) and (\ref{Sigma_crit}).
This is likely to be the dominant feedback process in massive
protoclusters.
We show that ${\cal E}$ depends on $M$ and $R_h$ only through the
mean surface density $\Sigma$, which in turn guarantees
consistency between the observed power-law exponents of the mass
functions of molecular clouds and young star clusters according to
our general relations.
For $\Sigma \sim 1$~g~cm$^{-2}$, we estimate ${\cal E} \sim 0.3$,
in satisfactory agreement with observations.
\acknowledgements
We thank Bruce Elmegreen, Chris McKee, Dean McLaughlin, Norm Murray,
John Scalo, Nathan Smith, and the referee for helpful comments.
We are grateful for research grants from NASA (SMF, MRK, CDM), NSF
(MRK), Sloan Foundation (MRK), NSERC (CDM), and Ontario MRI (CDM).
\section{Introduction}
A molecular theoretical account of the free energetics of the
reactions
\begin{eqnarray}
\label{eq:hydro} [\mbox{Fe(H$_2$O)$_n$(OH)$_{(m-1)}$]$^{4-m}$} +
\mbox{H$_2$O} \rightarrow [\mbox{Fe(H$_2$O)$_{n-1}$(OH)$_{m}$]$^{3-m}$}
+ \mbox{H$_3$O$^+$}
\end{eqnarray}
in water can benefit from discrimination of the conformers that are
present under common conditions. Entropic contributions to the
solution thermodynamics may reflect multiple configurations that
occur. Thus information on the conformers present should assist in
accurately describing temperature variations of solvation properties.
In addition, theoretical study of the molecular structure of the
species participating in these reactions should teach us about the
molecular mechanisms involved and provide a benchmark of current
theoretical tools for modeling speciation of metal ions in
groundwaters\cite{Rustad:95,Rustad:96a,Rustad:96b}.
The condition of ferrous and ferric ions in solution has long been of
specific interest to explanations of electron exchange processes
\cite{RMarcus,Friedman,Jafri,Newton}. The hydrolysis product
Fe(H$_2$O)$_5$OH$^{2+}$ often contributes significantly to the rate of
ferric ion reduction in water because the specific rate constant for
ferric-ferrous electron exchange is about a thousand times larger for
the hydrolyzed compared to unhydrolyzed hexaaquoferric ion
\cite{Silverman}. The net standard free energy change (about
3~kcal/mol\cite{Flynn}) for the first hydrolysis reaction,
\begin{eqnarray}
\label{eq:hydro1} \mbox{Fe(H$_2$O)$_6$$^{3+}$} + \mbox{H$_2$O}
\rightarrow \mbox{Fe(H$_2$O)$_{5}$(OH)$^{2+}$} + \mbox{H$_3$O$^+$} ,
\end{eqnarray}
is small compared to the size of the various contributions that must
be considered in theoretical modeling of these solution species.
In contrast to the great volume of work on electron transfer involving
such species, reactions of particular interest here transfer a proton
from a water ligated to a Fe$^{3+}$ ion to a free water molecule. The
second hydrolysis,
\begin{eqnarray}
\label{eq:hydro2} \mbox{Fe(H$_2$O)$_5$OH$^{2+}$} + \mbox{H$_2$O}
\rightarrow \mbox{Fe(H$_2$O)$_{4}$(OH)$_2$$^+$} + \mbox{H$_3$O$^+$},
\end{eqnarray}
raises more seriously the possibility of participation by several
isomers in the mechanism and thermodynamics of these reactions.
This work takes ferric ion hydrolysis as a demonstration system for
determination of the utility of current theoretical tools for modeling
of speciation of metal ions in aqueous solution. At this stage we
have not considered multiple metal center aggregates. We report below
encouraging agreement with experimental thermochemistries but with
unexpected results that must be addressed for further progress in
describing these chemical systems in molecular terms.
\section{Approach}
The physical idea of treating an inner solvation shell on a different
footing from the rest of the solvent has considerable
precedent\cite{Friedman:73}. Friedman and Krishnan referred to these
approaches as {\it hybrid models} and criticized them on the grounds
that individual contributions to the thermodynamic properties of final
interest could not be determined with sufficient accuracy to draw new
conclusions from the final results. However, particularly for highly
charged and chemically complex ions, the hybrid approaches seem
indispensable \cite{Marcos}. Computational and conceptual progress in
the 25 years since that Friedman-Krishnan review has made these
approaches interesting once again. In what follows, we first elaborate
on the calculational techniques used. An appendix discusses some of the
statistical thermodynamic issues underlying these hybrid models.
\subsection{Electronic Structure Calculations}
Reliable estimation of free energies of dissociation in aqueous media
requires a similarly reliable determination of the process in the gas
phase. Given the well known difficulties associated with electron
correlation in transition metal complexes, and our intent to extend
this work to larger and more complex systems, we chose to explore this
problem with the B3LYP hybrid density functional theory
(DFT)\cite{b3lyp,G94}. This approximation is an effective compromise
between accuracy and computational expense, and has been shown to give
usefully accurate predictions of metal-ligand bond energies in a
number of molecules\cite{ricca:MCO,russo:MF}. In the present
application however, we were less interested in the metal-ligand bond
energy than in the energies for successive elimination of protons from
the H$_2$O ligands bound to Fe$^{3+}$. This may be a somewhat less
severe test of the functional. The details of our approach and
estimates of the errors follow.
The geometry and vibrational frequencies of the metal complex were
determined using the 6-311+G basis set for the metal ion and the 6-31G*
basis set for the ligands\cite{G94}. The former is contracted from
Wachters'\cite{wachters} primitive Gaussian basis (including the
functions describing the 4s and 4p atomic orbitals) and augmented with
the diffuse d-function of Hay\cite{hay-dif-d}. The ligand basis set
contains a polarization function of d-character on the oxygen. At the
optimum geometry, a single calculation was done to determine the energy
in a more extended basis (6-31++G**) that includes polarization and
diffuse functions on both the oxygen and hydrogen centers. Atomic
charges were determined in this extended basis as well using the
ChelpG\cite{chelpg} capability (R$_{Fe}$=2.02 \ang) in the Gaussian94
package\cite{G94}. All Fe species are high-spin (d$^5$) treated in the
spin-unrestricted formalism.
One might argue, on the basis of the OH$^-$ character expected in the
product complexes, that the geometry optimization should also be done
with a diffuse basis set for the ligands. This was found unnecessary
by explicit calculations on the partial reaction
\begin{eqnarray}
\mbox{Fe(H$_2$O)$_6$$^{3+}$} \rightarrow
\mbox{Fe(H$_2$O)$_5$(OH)$^{2+}$} + \mbox{H$^+$}. \label{partial}
\end{eqnarray}
The endothermicity of this reaction computed from a single point
calculation with the 6-31+G* basis at its optimum geometry differs by
only 0.2~kcal/mole from the result obtained with the smaller basis.
More important are additional polarization and diffuse functions on
the hydrogens; for example, neglecting zero-point corrections, the
B3LYP endothermicities are 29.2~kcal/mole (6-31G*),
25.4~kcal/mole (6-31+G*), and 28.1~kcal/mole (6-31++G**).
A final consideration regarding basis set convergence is the
importance of polarization (f-functions) on the metal center. This
was examined by augmenting the 6-311+G Fe basis with two f-functions
($\alpha$ = 0.25,0.75) split about a value ($\alpha$ = 0.4) optimized
for Fe(H$_2$O)$_6$$^{3+}$. In this larger basis an endothermicity of
27.9~kcal/mole results compared to the value 28.1~kcal/mole obtained
without the f-functions. The correction to the second hydrolysis is
only slightly larger, $\sim$ 0.5~kcal/mole. We thus conclude that the
B3LYP deprotonation energies are nearly converged with respect to
further improvements in the basis set. A conservative estimate might
be associated with the change observed between the 6-311+G/6-31+G*
results and the 6-311+G(2f)/6-31++G** results, or about 2-3~kcal/mole.
Convergence of the results with respect to basis set does not address
the accuracy expected with the B3LYP method. In so far as the water
molecules retain their identity when bound to the metal ion, we expect
the error in the hydrolysis reaction to be approximated by the error
observed for the analogous reaction for a single H$_2$O molecule. The
procedure outlined above was therefore used to determine the energies
and zero-point energies (ZPE) for a number of species associated with
the neutral and ionic dissociation channels of H$_2$O. They are
reported in Table~\ref{T_0}, and can be used to determine the
properties and reaction energies reported in Table~\ref{T_1}. The
H$_2$O thermochemistries shown there are in excellent agreement with
experiment. For example, the endothermicity of the reaction
\begin{eqnarray}
\mbox{H$_2$O} \rightarrow \mbox{H$^+$} + \mbox{OH$^-$}
\label{partial:h}
\end{eqnarray}
is underestimated by less than 2~kcal/mole. The errors in the other
dissociation channels are similar.
\subsection{Electrostatic Solvation Calculations}
Electrostatic interactions of hydrated ferric ions with the aqueous
environment are expected to be of first importance in this solution
chemistry. These hydrolysis reactions are written so that other
contributions to the net free energy, {\it e.g.\/} packing, might
balance reasonably. Thus, continuum dielectric solvation modeling is
a reasonable approach to studying these solution processes; it is
physical, computationally feasible, and can provide a basis for more
molecular theory \cite{Pratt:94,Tawa:94,Tawa:95,Hummer:95a,Corcelli,%
Hummer:96,Tawa:96,Hummer:97a,Pratt:97a,Hummer:97b,Hummer:97c}. The
dielectric model produces an approximation to the interaction part of
the chemical potential, $\mu^\ast$, of the solute; in the notation used
below, $\mu^\ast = -RT\ln \langle \exp(-\Delta U/RT) \rangle_0$, where for
this case $\Delta U$ is the solute-solvent electrostatic interaction
potential energy and the brackets indicate the average over the
thermal motion of the solvent uninfluenced by the charge distribution
of the solute\cite{Pratt:94,Tawa:94,Tawa:95,Hummer:95a,Corcelli,%
Hummer:96,Pratt:97a,Hummer:97c}. Free energy contributions due to
non-electrostatic interactions are neglected. We address free energy
changes due to atomic motions internal to the complexes on the basis
of gas phase determinations of the vibrational frequencies and
partition functions. The solvation calculation treats all species as
rigid; we comment later on the consequences of this approximation and
on the possibilities for relaxing it. The solute molecular surface
was the boundary between volume enclosed by spheres centered on all
atoms and the exterior. The sphere radii were those determined
empirically by Stefanovich \cite{Stefanovich}, except
R$_{Fe}$=2.08~\ang for the ferric ion. The ferric ion is well buried
by the ligands of these complexes and slight variations of this latter
value were found to be unimportant. The numerical solutions of the
Poisson equation were produced with a boundary integral procedure as
sketched in references \cite{Corcelli,Pratt:97a} typically employing
approximately 16K boundary elements. The accuracy of the numerical
solution of the dielectric model is expected to be better than a
kcal/mol for the electrostatic solvation free energy.
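As a toy illustration of the average $-RT\ln \langle \exp(-\Delta U/RT)
\rangle_0$ itself (a sketch of the formula only, not of the boundary-integral
solver; the Gaussian interaction-energy statistics assumed here are purely
illustrative), note that for Gaussian $\Delta U$ the exact answer is
$\langle \Delta U \rangle_0 - \sigma^2/2RT$:
\begin{verbatim}
import math, random

RT = 0.5925                         # kcal/mol at 298.15 K
mean, sigma = -10.0, 1.0            # assumed Delta-U statistics, kcal/mol
random.seed(1)
dU = [random.gauss(mean, sigma) for _ in range(200000)]
mu = -RT * math.log(sum(math.exp(-u / RT) for u in dU) / len(dU))
print(mu)                           # Monte Carlo estimate
print(mean - sigma**2 / (2 * RT))   # Gaussian closed form, ~ -10.84
\end{verbatim}
The dielectric model used here can be regarded as a physical approximation
to exactly this average.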
\section{Results and Discussion}
The pertinent energies for Fe(H$_2$O)$_6$$^{3+}$ and other species
involved in hydrolysis processes are compiled in Tables \ref{T_0} and
\ref{T_1}. The magnitudes of the various components entering into the
gas phase hydrolysis free energy are reported in Table~\ref{T_1A},
while Table~\ref{T_1B} adds the solvation contributions to the free
energy and compares the total with experiment. Table~\ref{T_2}
summarizes geometrical information, and partial atomic charges computed
from the electrostatic potential fit are presented in Table~\ref{T_3}.
\subsection{Fe(H$_2$O)$_6$$^{3+}$} The structure of the
Fe(H$_2$O)$_6$$^{3+}$ complex, T$_h$ symmetry, is shown in
Figure~\ref{fig1}. The B3LYP approximation gives an Fe-O distance of
2.061~\ang. By way of comparison, with the identical basis set the
Hartree-Fock approximation yields 2.066~\ang. These distances are
similar to the recent Hartree-Fock results of {\AA}kesson, {\it et
al.\/}\cite{akesson:3+}, R$_{FeO}$ = 2.062~\ang, and the
gradient-corrected DFT(BPW86) calculations of Li, {\it et
al.\/}\cite{li}, R$_{FeO}$= 2.067~\ang. These theory values cluster in
the upper end of the range of distances (1.97~\ang -- 2.05~\ang)
determined experimentally\cite{Neilson,Brunschwig,ohtaki}. Neutron
scattering measurements report 2.01~\ang\ in concentrated electrolyte
solutions \cite{Neilson}. The solution EXAFS result (1.98~\ang) is
close to the crystallographic determinations, 1.97~\ang --
2.00~\ang\cite{Brunschwig}. In their recent review, Ohtaki and Radnai
discuss a number of other experiments and conclude the distance lies
in the range of 2.01 -- 2.05~\ang\ \cite{ohtaki}.
Though these theory values are compatible with the upper end of this
range, they are far from the EXAFS result (1.98~\ang) that might be
the most reliable experimental determination. While investigating this
question, we found that the B3LYP value is stable with respect to
further improvements in the basis; if the basis is augmented with an
f-function on the metal and diffuse functions on the oxygens, the
equilibrium distance decreases only slightly (R$_{Fe-O}$ = 2.053~\ang).
As noted above, the B3LYP result is in close agreement with that from
the Hartree-Fock approximation. It would not be surprising, however,
if the HF approximation overestimated the bond length. {\AA}kesson, {\it
et al.\/}\cite{akesson:2+,akesson:more2+} have previously observed
that the HF bond lengths for a series of first row transition metal
hydrates are systematically too long. They find that correlation
effects and the influence of the second hydration shell each act to
reduce the bond length by about 0.01~\ang. Even with these corrections
the theoretical bond length is still much larger than 1.98~\ang. As an
additional data point, the local-density-approximation with the
present basis yields R$_{Fe-O}$ = 2.013~\ang, in better agreement with
the EXAFS and crystallographic determinations. However, while the LDA
generally gives reasonable geometries for transition metal
complexes\cite{ziegler,sosa}, Sosa, {\it et al.\/}\cite{sosa} find
that it tends to underestimate the lengths of dative bonds, such as
the one discussed here, and this might further support a bond length
toward the upper end of the range.
The atomic partial charges computed here for the hexaaquoferric ion are
given in Table ~\ref{T_3}. Note that the charge assigned to the ferric
ion is substantially less than 3e, and that the magnitudes of the
charges on the oxygen and hydrogen atoms are substantially greater than
is common with effective force fields used for simulation of liquid
water.
As discussed in the appendix, a value for the absolute free energy of
the ferric ion can be obtained from a free energy of formation of the
Fe(H$_2$O)$_6$$^{3+}$ complex from an isolated Fe$^{3+}$ ($^6S$) ion
and six water molecules (Table~\ref{T_1A}), provided that no solvation
contribution is included for the atomic ion and that the actual, not
standard state, species concentrations are used. That absolute free
energy in aqueous solution is -1020~kcal/mol. Experimental values
range from -1037~kcal/mol to -1019~kcal/mol
\cite{Rosseinsky,urban,Friedman:73,YMarcus}. We note that this
theoretical value takes no account of solvation effects due to
non-electrostatic interactions.
This agreement with experiment is encouraging. In computing the
absolute free energy there are a number of large contributions to the
final result. This contrasts to the hydrolysis reactions where we have
arranged things to encourage cancellation of errors. It is interesting
to examine the components contributing to the free energy in the gas
phase (Table~\ref{T_1A}). Not surprisingly, the zero-point correction,
+15~kcal/mole, is significant. Under the standard conditions,
hypothetical p=1~atm ideal gas with T=298.15~K, there is the large,
unfavorable differential entropy contribution, +47~kcal/mole. This
arises from the necessity of sequestering six water molecules in a
dilute gas. The net free energy found for the gas phase reaction is
-602~kcal/mole. Table~\ref{T_1B} addresses the solution phase aspects
of this thermochemistry. (See the Appendix also.) About half of the
ideal entropy penalty mentioned is regained in the liquid because of
the higher concentration of water molecules. The differential
solvation free energy adds another -391~kcal/mole of (favorable) free
energy for a net absolute free energy of hydration of -1020~kcal/mole.
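Collecting these contributions gives a compact check of the bookkeeping (the
$\approx 26$~kcal/mole standard-state term is the $6RT\ln(1354)$ concentration
correction derived in the Appendix):
\begin{displaymath}
\Delta\mu_{{\rm Fe}^{3+}} \approx -602 - 6RT\ln(1354) - 391
\approx -602 - 26 - 391 \approx -1020~\mbox{kcal/mole},
\end{displaymath}
reproducing the quoted value to within the rounding of the individual terms.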
This estimate of the absolute free energy of the ferric ion is close
to the value -1037~kcal/mole reported by Li, {\it et al.\/} \cite{li}
as a hydration {\it enthalpy.\/} The Li, {\it et al.,\/} value
contains some terms that would be appropriate if the hydration
enthalpy were sought and some terms that would contribute to the
hydration free energy; thus comparison of that previous value with the
present result is not straightforward. In fact, the most pragmatic
calculation of the hydration enthalpy within the dielectric continuum
model is nontrivial. The enthalpy would be then obtained by
determining a temperature derivative of the chemical potential. The
appropriate temperature derivative determines the enthalpy directly
or, alternatively, the solvation entropy so that the desired enthalpy
might be determined by differencing with the already known chemical
potential. That temperature derivative would generally involve also
the temperature variation of the
radii-parameters\cite{Pratt:94,Tawa:94,Tawa:95,Pratt:97a,Hummer:97c}.
The variations with thermodynamic state of the radii-parameters have
not been well studied\cite{Tawa:95}. There is good agreement, however,
in many of the components contributing to this hydration enthalpy. Li,
{\it et al.,\/} report a gas phase energy of formation of
-652.2~kcal/mole with the BPW86 gradient-corrected functional compared
to the present B3LYP result of -655.2~kcal/mole, and their solvation
free energy is -444~kcal/mole {\it vs.\/} the -441~kcal/mole found
here. Such close agreement is encouraging, although somewhat
fortuitous. Our gas phase energy of formation includes a correction,
not considered by Li, {\it et al.,\/} of some 15~kcal/mole for the
zero-point energies. It is encouraging, however, that the estimates
disagree by only about 15~kcal/mole, or some 1.5\%. Additionally, the
definition of the molecular surface adopted by Li, {\it et al.,\/} for
the solvation calculation was substantially different from that used
here, as were the radii-parameters used.
\subsection{First and second hydrolysis reactions} We turn now to the
first deprotonation reaction Eq.~\ref{eq:hydro1}. The structure found
for Fe(H$_2$O)$_5$OH$^{2+}$ is displayed in Figure~\ref{fig2}. The
Fe-O (hydroxide) distance is 1.76~\ang\ and the Fe-O (water) distances
lengthen to 2.10~\ang -- 2.15~\ang. Assembling the results for the
standard free energy of this reaction we find 2~kcal/mol, in
surprisingly good agreement with the experimental value of
3~kcal/mol\cite{Flynn}. This computed net free energy change is
composed of approximately -148~kcal/mol exothermic (favorable) change
in isolated molecule free energy and +150~kcal/mol (unfavorable) net
increase in solvation free energy. The solvation contribution favors
the reactant side here because it presents the most highly charged
ion. Changes in the radius assigned to the iron atom in the range
2.06~\ang$\le$R$_{Fe}\le$2.10~\ang lead to changes of $\pm$ 1~kcal/mol
in the predicted reaction free energy. This reaction was also
considered by Li, {\it et al.,\/}\cite{li} who find it to be
exothermic by 14~kcal/mole.
To treat the next hydrolysis Eq.~\ref{eq:hydro2} we must consider
Fe(H$_2$O)$_4$(OH)$_2$$^+$ species. Figure~\ref{fig3} shows the stable
structures found. Further lengthening of both the Fe-O(water) and
Fe-O(hydroxide) distances is noted (Table~\ref{T_2}) in the {\it cis\/}
and {\it trans\/} six-coordinate species by 0.07-0.14~\ang compared to
the Fe(H$_2$O)$_5$OH$^{2+}$. In the gas phase, the {\it cis\/}
structure is predicted to be the lowest energy conformer, slightly
(1~kcal/mole) below the {\it trans\/} isomer. This preference is
reversed in solution, where the {\it trans\/} isomer is predicted to be
slightly more stable. We note that the most current force fields applied
to simulation of ferric ions in solution place the {\it cis\/} structure
significantly higher in energy than the {\it trans\/}\cite{Rustad:96b}.
We were surprised to discover a stable {\it outer\/} sphere complex
during our search for the stable {\it trans\/} structure. The
structure is given in Figure ~\ref{fig3}. The distances between the
hydroxyl oxygens and the outer sphere water are typical of hydrogen
bonds. Explicit calculation of the vibrational frequencies show it to
be a true local minimum, lying less than a kcal/mol higher in energy
than the {\it cis\/} conformer.
The interaction of the outer sphere water with the remainder of the
ferric hydrate complex can be partially characterized by finding the
energy of that complex without the outer sphere partner. The
structure obtained for that penta-coordinate ferric ion is shown in
Figure~\ref{fig4}. The penta-coordinate complex can stably adopt a
conformation similar to that of Figure~\ref{fig3} also in the absence
of the outer sphere water. In terms of the zero-point corrected
electronic energy, this {\it outer\/} sphere complex is stable with
respect to loss of the H$_2$O by about 7~kcal/mole. Consideration of
the entropic contributions to the free energy finds the complex still
stable with respect to loss of H$_2$O, but the differential solvation
contributions reverse this conclusion. Thus, the dissociation of the
outer sphere complex is an essentially thermoneutral process. We
suspect such intermediates play an important role in the mechanism of
the ligand exchange with the solvent.
These three conformers lead to estimates of 16~kcal/mol, 16~kcal/mol,
and 18~kcal/mol for the reaction free energy of the second hydrolysis
for {\it trans\/}, {\it outer sphere\/}, and {\it cis\/} products,
respectively; the experimental value is approximately
5~kcal/mol\cite{Flynn,Meagher}. We note that as in the case of the
first hydrolysis, the gas phase predictions lead to an exothermic
reaction; it is the differential solvation free energy that tips the
scales in favor of endothermicity.
\subsection{Role of conformational entropy} The issue of the internal
motions of the complexes, and the near degeneracy of several
conformers, raises also the issue of conformational entropy of these
species. For example, in the second hydrolysis reaction, factors such
as $-RT\ln 3$ arise if the three isomers discussed here are considered
isoenergetic. [However, the multiplicity of isoenergetic states will
surely not be just 3 and, furthermore, entropy {\it differences\/} are
required here.] As another example, the T$_h$ hexaaquo complex surely
has a number of low-lying structures involving the rotation of the
plane of an individual H$_2$O. More generally, this conformational
entropy would be appropriately included by computing the solvation
contribution to the chemical potential, $\mu^\ast$, of the complex
according to
\begin{eqnarray}
\mu^\ast = -RT\ln \sum_c x_c^{(0)} e^{-\mu^\ast(c)/RT}
\label{average}
\end{eqnarray}
where the sum indicated is over conformations $c$ weighted by the
normalized population, the mole fractions $ x_c^{(0)} $ of conformers
when there is no interaction between solute and solvent, and further
the summand is the Boltzmann factor of the solvation free energies,
perhaps from a physical model such as the dielectric model used here,
for each conformation. [The treatment of the thermodynamics of
flexible complexes in solution is also discussed further in the
appendix and in Reference \cite{Pratt:97b}.] The solvation
contribution to the chemical potential Eq.~\ref{average} would then be
combined with the isolated cluster partition function to obtain the
free energies of the species involved and free energy changes for the
reactions\cite{Pratt:97b}. Finally, an entropic contribution,
including any conformational entropy, would be obtained by temperature
differentiation of the full chemical potential. The isolated cluster
partition functions would properly include an entropy associated with
the multiplicity of isoenergetic conformational states. It is
reasonable to expect that this conformational entropy increases
progressively with hydrolysis, {\it i.e.\/} the products here are
``less ordered,'' and have higher conformational entropy than their
reactants. It is thus significant that the predicted reaction free
energies are higher than the experiment; inclusion of conformational
entropy should lower these reaction free energies.
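A minimal sketch of how Eq.~\ref{average} combines near-degenerate
conformers may make this concrete (the numbers below are illustrative
assumptions, not computed values from our tables):
\begin{verbatim}
import math

RT = 0.5925                          # kcal/mol at 298.15 K

def conformer_mu(gas_energies, solvation_mu):
    # gas-phase populations x_c^(0) from the isolated-cluster energies
    boltz = [math.exp(-e / RT) for e in gas_energies]
    z = sum(boltz)
    x0 = [b / z for b in boltz]
    # Eq. (average): Boltzmann-average the per-conformer solvation terms
    return -RT * math.log(sum(x * math.exp(-mu / RT)
                              for x, mu in zip(x0, solvation_mu)))

# three near-degenerate conformers (cis, outer sphere, trans):
# assumed relative gas energies and solvation free energies, kcal/mol
print(conformer_mu([0.0, 0.8, 1.0], [-150.0, -148.5, -149.0]))
\end{verbatim}
Temperature differentiation of the resulting chemical potential would then
pick up the conformational contribution to the entropy automatically.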
We note that Li, {\it et al.\/}\cite{li} argue that inclusion of a
second solvation shell substantially improves the accuracy of the
thermochemical predictions. The present results do not seem to force
us to larger clusters. Although it is true that proper inclusion of
more water molecules should permit a more convincing treatment,
inclusion of more distant water molecules makes the neglect of
conformational entropy less tenable. In any case,
comparison of dielectric model treatments with thermochemical results
only permits limited conclusions because of the empirical
adjustability of radii-parameters.
\section{Conclusions}
Given the balance of the large contributions that must be considered,
the observed accuracy of the computed hydrolysis reaction free
energies is encouraging. Evidently many structural
possibilities will have to be treated for a full description of ferric
ion speciation in water, eventually considering higher aggregates and
anharmonic vibrational motions of the strongly interacting water
molecules and other ligands. These issues are probably best pursued
through development of a molecular mechanics force field to screen
structures rapidly, reserving electronic structure calculations for
verification of the important structures found and refinement of the
force field. The energetic ordering found here for isomers of
Fe(H$_2$O)$_4$(OH)$_2$$^+$ is {\it cis\/} (lowest), {\it outer
sphere\/}, and {\it trans\/} (highest), but all these energies are
within about 2~kcal/mol. Molecular mechanics force fields might be
reparameterized to account for these
results\cite{Rustad:95,Rustad:96a,Rustad:96b}. Identification of
prominent isomers should simplify and improve the modeling of the
temperature variations of thermodynamic properties. It deserves
emphasis also that substantial contributions to these reaction free
energies, and to the accumulated uncertainties, are associated with
the water partners in these reactions, {\it i.e.\/} the free energies
associated with solvation of H$_2$O and H$_3$O$^+$. A similar comment
would apply for other common oxy-acids and ligands in water, {\it
e.g.\/} carbonate, nitrate, sulfate, and phosphate. Molecular
descriptions of metal ion speciation will be incomplete without
accurate molecular characterizations of these species in aqueous
solution.
\section*{Acknowledgement} This work was supported by the LDRD
program at Los Alamos. LRP thanks Marshall Newton for helpful
discussions on these topics.
\section*{APPENDIX}
Here we specify with greater care some of the statistical
thermodynamic considerations relevant to solvation free energies
obtained from cluster calculations of the present variety\cite{G94}.
We note that these issues are of minor importance for the hydrolysis
reactions that are the focus of this paper. However, these
considerations become more important for the absolute free energy
reported for the aqueous ferric ion and for the free energy of
dissociation of the outer sphere complex.
\subsection*{Utilizing calculations on clusters} \noindent We first
specify more fully and explain the procedure for computing the absolute
free energy of the ferric ion given in Table~\ref{T_1B} on the basis of
cluster results obtained from density functional theory and the
dielectric continuum estimate of solvation contributions. The result of
the development here is a formula for the chemical potential of the
ferric ion:
\begin{eqnarray}
\mu_{{Fe^{3+}}} = RT\ln\lbrack{\rho_{{Fe^{3+}}} V\over q_M
\left\langle \left\langle e^{-\Delta U / RT}\right\rangle
\right\rangle _{0,M}} \rbrack - 6\mu_W,
\label{answer3}
\end{eqnarray}
Here $\rho_{{Fe^{3+}}}$ is the number density of ferric ions in the
solution, $V$ is the volume of the system, $ q_M $ is the partition
function of the isolated hexaaquoferric complex
(M=Fe(H$_2$O)$_6$$^{3+}$), $-RT\ln\langle\langle e^{-\Delta U /
RT}\rangle\rangle _{0,M} $ is the solvation free energy of the complex,
and $\mu_W$ is the chemical potential of the water. The conclusion
drawn from this formula is that the desired absolute free energy of the
hydrated ferric ion is obtained from the chemical potential change of
the reaction Fe$^{3+}$+6H$_2$O $\rightarrow$ Fe(H$_2$O)$_6$$^{3+}$
except with the additional provisions that the chemical potentials
should be evaluated on a fully molecular basis at the water
concentration of interest and no solvation contribution should be
included for the atomic ion. The subsequent section notes how the
isolated molecule partition functions should be used to obtain the
chemical potentials at the required concentrations.
As suggested above, the fundamental issue is the use of the cluster
calculation Fe(H$_2$O)$_6$$^{3+}$ to obtain the chemical potential,
$\mu_{Fe^{3+}}$, of the hydrated ferric ion, Fe$^{3+}$. It is clear on
physical grounds that such an approach should be advantageous when the
identification of a relevant cluster is physically obvious and when
the inner shell of the cluster requires a specialized treatment. What
happens when the relevant cluster is not so obvious? What about cases
when more than one cluster should be considered? How might this
approach be justified more fully and how might the calculations be
improved?
The statistical mechanical topic underlying the considerations here is
that of association equilibrium
\cite{Stillinger:63,Pratt:76a,Pratt:76b} and is often associated with
considerations\cite{hill} of `physical clusters.' A suitable
clustering definition\cite{Stillinger:63,LaViolette:83} is required
for these discussions to be explicit and heavier statistical
mechanical formalisms can be
deployed\cite{Stillinger:63,Pratt:76a,Pratt:76b}. However, the
treatment here aims for maximal simplicity. This argument is an
adaptation of the potential distribution theorem\cite{widom}.
In order to involve information on clusters, we express the density of
interest in terms of cluster concentrations. Thus, if the Fe$^{3+}$
ion appears only once in each cluster, {\it i.e.} if {\it mono}nuclear
clusters need be considered, then we would write
\begin{eqnarray}
\rho_{Fe^{3+}} = \sum_M \rho_M \label{stoichiometry}
\end{eqnarray}
where $M$ identifies a molecular cluster considered and the sum is over
all molecular clusters that can form. A satisfactory clustering
definition\cite{Stillinger:63,LaViolette:83} insures that each ferric
ion can be assigned to only one molecular cluster, {\it e.g.\/} that a
molecular cluster with one ferric ion and six water molecules is not
counted as six clusters of a ferric ion with five water molecules. The
calculations above assumed M = Fe(H$_2$O)$_6$$^{3+}$, and that was it.
In the more general case that not all clusters are mononuclear,
Eq.~\ref{stoichiometry} would involve the obvious stoichiometric
coefficients. The concentrations $\rho_M$ are obtained from
\begin{eqnarray} \rho_M = z_{{Fe^{3+}}}z_W^{n_M} (q_M/V) \left\langle
\left\langle e^{-\Delta U / RT}\right\rangle \right\rangle _{0,M} .
\label{pdt} \end{eqnarray}
$n_M$ is the number of water molecules in the cluster of type $M$,
(six in the example carried long here); $z_{{Fe^{3+}}}$ and $z_W$ are
the activity of the ferric ion and the water, respectively; {\it i.e.}
$z_\gamma = e^{\mu_\gamma/RT}$; $q_M=q_M(T)$ is a conventionally
defined canonical partition function for the cluster of type
$M$\cite{Stillinger:63,Pratt:76a,Pratt:76b,LaViolette:83}. The
indicated average utilizes the thermal distribution of cluster and
solvent under the conditions that there is no interaction between
them. $\Delta U$ is the potential energy of interaction between the
cluster and the solvent. In the example carried along here we do not
pay attention to any counter-ions since those issues are tangential to
the current considerations.
Eq.~\ref{pdt} is most conveniently derived by considering a grand
ensemble. Suppose we have a definite clustering criterion: a cluster
of a ferric ion and $n_M$ water molecules is formed when exactly $n_M$
water molecules are within a specified distance $d$ of a ferric ion.
In the example we have been carrying along, water molecules with
oxygens within about $2.2$~\AA\ of a ferric ion are in chemical
interaction with the ferric ion. It would be natural to specify
$d\le2.2$~\AA\ for clustered ferric-water(oxygen) distances. The
average number $\langle N_M \rangle$ of such clusters is composed as
\begin{eqnarray}
\Xi(z_{{Fe^{3+}}},z_W,T,V)\,\langle N_M \rangle & = & z_{{Fe^{3+}}}z_W^{n_M} \nonumber \\
& \times & \sum_{N_{{Fe^{3+}}} \ge 1, N_W \ge n_M}
N_{{Fe^{3+}}} z_{{Fe^{3+}}}^{N_{{Fe^{3+}}}-1} { N_W \choose n_M }
z_W^{N_W-n_M} Q({\bf N},V,T \vert n_M+1) \label{derive}
\end{eqnarray}
Here $\Xi(z_{{Fe^{3+}}},z_W,T,V) $ is the grand canonical partition
function; $ N_{{Fe^{3+}}}$ is the number of ferric ions in the systems
and $N_W $ is the number of water molecules; $Q({\bf N},V,T \vert
n_M+1) $ is the canonical ensemble partition function with one
specific ferric ion and $n_M$ specific water molecules constrained to
be clustered. The binomial coefficient $ { N_W \choose n_M }$
provides the number of $n_M$-tuples of water molecules that can be
selected from $N_W$ water molecules. Because of the particle number
factors in the summand, the partition function there can also be
considered to be the partition function for $N_{{Fe^{3+}}}-1$ ferric ions and
$N_W-n_M$ water molecules but with an extra $n_M+1$ objects that
constitute the cluster of interest. A reasonable distribution of
those $n_M+1$ extraneous objects is the distribution they would have
in an ideal gas phase; the Boltzmann factor for that distribution
appears already in the integrand of the $Q({\bf N},V,T \vert n_M+1) $
and the normalizing denominator for that distribution is $n_M!
q_M(T)$. The acquired factor $n_M!$ cancels the remaining part of the
combinatorial $ { N_W \choose n_M }$. Adjusting the dummy summation
variables then leads to Eq.~\ref{pdt}.
If we were to identify an activity for an M-cluster as $z_M =
z_{{Fe^{3+}}}z_W^{n_M} $ we would obtain from Eq.~\ref{pdt} the
general statistical thermodynamic formulae of Reference
\cite{Pratt:97b}.
A virtue of the derivation of Eq.~\ref{pdt} sketched here is that the
primordial activities $z_{{Fe^{3+}}} $ and $z_W $ are clear from the
beginning. This helps in the present circumstance where
concentrations and chemical potentials of many other species and
combinations will be of interest also.
Combining our preceding results, we obtain
\begin{eqnarray}
{\rho_{{Fe^{3+}}}\over z_{{Fe^{3+}}}} = \sum_M z_W^{n_M} (q_M/V)
\left\langle \left\langle e^{-\Delta U / RT}\right\rangle
\right\rangle_{0,M}.
\label{answer1}
\end{eqnarray}
This is a reexpression according to clusters of a basic result known
both within the context of the potential distribution
theorem\cite{widom} and diagrammatic (mathematical) cluster
expansions. For the latter context see Eq.~2.7 of
Reference~\cite{Pratt:76b}. Rearranging, we obtain the desired
chemical potential
\begin{eqnarray}
\mu_{{Fe^{3+}}} = -RT\ln\lbrack \sum_M ({z_W^{n_M}q_M\over
\rho_{{Fe^{3+}}} V}) \left\langle \left\langle e^{-\Delta U /
RT}\right\rangle \right\rangle _{0,M} \rbrack . \label{answer2}
\end{eqnarray}
This is the result that was sought. If higher-order clusters had been
considered with a suitable clustering definition, the final result
would have involved a more general polynomial of $z_{{Fe^{3+}}} $;
higher powers of $z_{{Fe^{3+}}} $ would appear in the sums that would
replace Eq.~\ref{stoichiometry} because of the presence of higher
powers of $z_{{Fe^{3+}}} $ in some instances of Eq.~\ref{pdt}. A
virtue of the Eq.~\ref{answer2} result is that the thermodynamic
activity of the water appears explicitly and that contribution may be
included in a variety of convenient ways, perhaps utilizing
experimental results. Furthermore, we see that there is no question
whether a particular standard state for the water is relevant or, for
example, whether only the excess part of the chemical potential of the
water is required.
Notice that if the cluster definition had been restrictive enough that
only the atomic M=Fe$^{3+}$ were present in appreciable concentration,
{\it i.e.\/} $n_M$=0 for all clusters that need be considered, then
Eq.~\ref{answer2} produces the previously known general answer for the
chemical potential of a ferric ion in solution
\cite{Pratt:97b,widom}:
\begin{eqnarray} {{\mu _{Fe^{3+}}}/ RT}=\ln [\rho _{Fe^{3+}}V/q
_{Fe^{3+}}]-\ln \left\langle {e^{-\Delta U / RT}} \right\rangle _0
\label{fun1}
\end{eqnarray}
It would be natural to choose the electronic energy of the atomic
ferric ion as the zero of energy and to anticipate that the degeneracy
of the electronic degrees of freedom will be physically irrelevant.
Then it would be sufficient to put $V/q _{Fe^{3+}} = \Lambda
_{Fe^{3+}}^3 = (h^2 / 2 \pi m_{Fe^{3+}} RT)^{3/2} $, the cube of the
deBroglie wavelength of the ferric ion. $ m_{Fe^{3+}} $ is the mass
of the ion, and $h$ is the Planck constant. In any case, we define
the absolute free energy of the hydrated ferric ion as the second term
on the right $\Delta \mu_{Fe^{3+}} = -RT\ln \left\langle {e^{-\Delta U
/ RT}} \right\rangle _0 $. In view of Eq.~\ref{answer3} we then have
\begin{eqnarray}
\Delta\mu_{{Fe^{3+}}} = RT\ln\lbrack{q_{{Fe^{3+}}} \over q_M}\rbrack
-RT\ln\left\langle \left\langle e^{-\Delta U /
RT}\right\rangle\right\rangle _{0,M} - 6\mu_W,
\label{answer4}
\end{eqnarray}
where M=Fe(H$_2$O)$_6$$^{3+}$. This Eq.~\ref{answer4} is the formula
that was used.
A generalization of interest is the case that the solvent contains more
than one species that may complex with a specific metal ion. For
example, suppose that ammonia may be present in addition to water, that
mixed complexes may form with ferric ion, and that these complexes have
been studied as clusters in the same way that the hexaaquoferric ion
complex was studied above. Then the result Eq.~\ref{answer2} is
straightforwardly generalized by including the proper combinations of
activities of the additional ligands possible.
\subsection*{Standard State Modifications} \noindent Many {\it ab
initio} electronic structure packages, such as the Gaussian94 package
used here, can produce molecular (or cluster) partition functions
$q_M=q_M(T)$ and on this basis free energies for the species and
reactions considered. It should be emphasized that these are
typically applicable to a hypothetical ideal gas at concentrations
corresponding to pressure p=1 atm, see Table~\ref{T_1A}. Thus, for
example, these results determine the ideal chemical potential $ \mu_W
= RT\ln\lbrack
\rho_W V/q_W(T)\rbrack$. However, because of the choice of hypothetical
p=1~atm ideal gas standard state, those results use $\rho_W = p/RT $
with p=1~atm. In Gaussian94, for example, this logarithmic concentration
dependence is considered a translational entropy contribution and,
therefore, the entropic contribution to the reaction free energy of the
first reaction in Table~\ref{T_1A} is substantial and unfavorable.
Because of the dilute reaction medium associated with p=1~atm, the water
molecules written as reactants in this reaction have more freedom before
complexation than they do after. This is an entirely expected physical
effect but inappropriate in solution. To obtain results applicable at
the concentration of liquid water we determine that pressure parameter
$p=\rho_W RT$ from the experimental density of liquid water, $\rho_W =
997.02$~kg/m$^3$. The required value is $p=1354$~atm, as indicated in
Table~\ref{T_1B}. When this value is utilized in the expression for the
translational entropy contributions, the translational entropy penalty of
Table~\ref{T_1A} for the first reaction is about half recovered.
Because the second through fifth reactions of Table~\ref{T_1A} were
written to have the same numbers of molecules for reactants and products,
this translational entropy contribution is very minor in those cases.
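The arithmetic behind these numbers is simple enough to record explicitly
(a minimal sketch; the constants are standard values):
\begin{verbatim}
import math

R, T = 8.31446, 298.15              # J/(mol K), K
rho_W, M_W = 997.02, 18.0153e-3     # kg/m^3 and kg/mol for liquid water
atm = 101325.0                      # Pa

p = (rho_W / M_W) * R * T           # pressure reproducing liquid density
print(p / atm)                      # ~1354 atm, as in Table T_1B

dmu = R * T * math.log(p / atm) / 4184.0   # per-water shift, kcal/mol
print(dmu, 6 * dmu)                 # ~4.3 and ~26 kcal/mol; the latter is
                                    # about half of the +47 penalty
\end{verbatim}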
We note also that the experimental value for the absolute free energy of
the hydrated ferric ion (-1037~kcal/mol) quoted in Table~\ref{T_1B} from
the tabulation of Reference~\cite{Friedman:73} requires an adjustment of
about 1.9 kcal/mol \cite{Hummer:96,YMarcus} for similar reasons. This
adjustment is insignificant for that property with the methods used here.
\section{Introduction}
Consider a scenario in which a government in some country has to be populated, e.g. by assigning the minister of health, the minister of education, etc. Usually the process for such an assignment is non-participatory.
Here we describe a participatory process for electing the government. In essence, we would like each of the citizens of the country to describe her preferences regarding the assignment of alternatives to each office.
While the above setting is indeed quite imaginary, as this is not the way it is usually done in practice, our particular motivation for this work comes from populating governments in coalitional systems; indeed, in coalitional systems, following coalition negotiations, each party in the coalition is allocated some set of offices that, in turn, have to be populated with ministers.
Specifically, we are interested in the process by which a party that got allocated some offices through such a negotiation shall decide internally, via a democratic vote of its members, how to assign ministers to each of its allocated offices.
We view this process as a social choice setting and first observe that one natural and simple way to approach it is to view it as $k$ independent elections, where $k$ is the number of offices allocated to the party. For example, each voter can select a set of alternatives for each of the offices and, for each office independently, the elected minister is the one that got the highest number of votes.
Observe, however, that such a process might disregard the preferences of minorities; in particular, if a strict majority votes for some alternatives for each of the offices, then only the alternatives voted for by the majority would be selected, and none of the alternatives of the minority would be, even if the minority consists of $50 - \epsilon$ percent of the votes.
To overcome this weakness, we view the setting as a whole, in which we have one election whose output would be the complete assignment of alternatives to the full set of $k$ offices, and we aim to find a process that would guarantee some sort of proportionality.
To this end, we offer an adaptation of the sequential variant of Proportional Approval Voting (in short, SPAV) to this setting and show, via computer-based simulations, that indeed, in many cases, it guarantees proportional representation to minorities.
\section{Related Work}
To the best of our knowledge, the specific setting we are considering in this paper is new.
However, the model suggested by Aziz and Lee~\cite{aziz2018sub},
while using a different jargon, generalizes our setting, mainly by allowing several candidates to be selected for each office.
Furthermore, Conitzer et al.~\cite{conitzer2017fair} study a different generalization of our model, using general cardinal utilities and not approval ballots; moreover, they study different axioms and different rules, concentrating on fairness notions adapted from the fair division literature.
We also mention the literature on voting on combinatorial domains~\cite{lang2016voting} that formally captures our setting as well; that literature, however, concentrates on interconnections and logical relations between the different ``domains'' (the different offices, in our jargon).
Further apart, below we mention some more related models studied in the social choice literature.
First,
we mention the work of Boehmer et al.~\cite{boehmer2020line} that considers an assignment social choice problem, but differs from our model in that voters provide numerical utilities and alternatives can run for several offices in parallel (so the output decision must take into account the suitability of alternatives to offices, while we derive the suitability directly from the votes).
Generally speaking, our social choice task is of selecting a committee, and thus is related to the extensive work on committee selection and multiwinner elections~\cite{mwchapter}. In our setting, however, we do not aim simply at selecting $k$ alternatives, but at selecting an assignment to $k$ offices.
Relatedly, the line of work dealing with committee selection with diversity constraints (e.g., see~\cite{izsak2017working,aziz2019rule}) has some relation to our work, in particular, as one can choose a quota of ``at most one health minister'', ``at most one education minister'', and so on.
\section{Formal Model}
We model our situation as follows:
We have a set of $k$ offices and $k$ corresponding disjoint sets of alternatives, $A_j$, $j \in [k]$, so that each candidate runs for at most one office. Let us denote the set of all alternatives by $A := \cup_{j \in [k]} A_j$.
Here we consider the approval setting, so, in particular, we have a set $V = \{v_1, \ldots, v_n\}$ of $n$ votes, where $v \subseteq A$ for each $v \in V$.
Then, an aggregation method for our setting takes as input such an instance $(A, V)$ and outputs one alternative $a_j \in A_j$ for each $j \in [k]$.
\begin{example}\label{example:toy}
Consider the $k$ sets of alternatives being $A_1 = \{a, b\}$, $A_2 = \{c, d\}$, and $A_3 = \{e, f, g\}$, and the set of votes being $v_1 = \{a, c, e\}$, $v_2 = \{a, c, f\}$, and $v_3 = \{a, d, f\}$.
An output of an aggregation method might be $\{a, c, g\}$, corresponding to alternative $a$ being selected for the first office, alternative $c$ being selected for the second office, and alternative $g$ being selected for the third office.
\end{example}
\section{The Problem with Independence}
Perhaps the most natural and simple solution would be to view the setting as running $k$ independent elections; for example, selecting for each office the alternative that got the highest number of approvals.
This, however, would be problematic; in particular, it would not be proportional.
\begin{example}\label{example:one}
Consider a society in which a strict majority votes for $a_j \in A_j$ for each $j \in [k]$. Then, disregarding how the other voters vote, $a_j$, $j \in [k]$, would be selected.
In particular, even a minority of $49\%$ would not be represented in the government.
\end{example}
\section{Adapting SPAV}
To overcome the difficulty highlighted above, here we aim at identifying a voting rule for our setting that does not completely disregard minorities.
To this end, we adapt SPAV.
SPAV is used for multiwinner elections and is known to be proportional for that setting. It works as follows:
Initially, each voter has a weight of $1$; the rule works in $k$ iterations (as the task in standard multiwinner elections is to select a set of $k$ alternatives), where in each iteration one alternative will be added to the initially-empty committee.
In particular, in each iteration, the alternative with the highest total weight from voters approving it is selected, and then the weight of all voters who approve this alternative is reduced; the reduction follows the harmonic series, so that a voter whose weight is reduced $i$ times will have a weight of $1 / (i + 1)$ (e.g., initially the weight is $1$; then, a voter reduced once would have a weight of $1/2$, then of $1/3$, and so on).
In the proposed adaptation of SPAV to our setting of electing an executive branch, in each iteration, we again select the alternative with the highest weight from approving voters; say this is some $a_j \in A_j$. Now, we fix the $j$th office to be populated by $a_j$; then, as it is fixed, we remove all other $a \ne a_j \in A_j$ from further consideration (as the $j$th office is already populated) and reweight approving voters as described above (in the description of SPAV for the standard setting of multiwinner elections).
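A minimal sketch of the adapted rule follows (an illustrative implementation
of ours; ties are broken by a fixed order over alternatives):
\begin{verbatim}
from fractions import Fraction

def spav_offices(offices, votes):
    # offices: list of disjoint sets of alternatives;
    # votes: list of sets of approved alternatives.
    # Returns a dict mapping office index -> elected alternative.
    weight = [Fraction(1)] * len(votes)
    reductions = [0] * len(votes)
    winners = {}
    for _ in range(len(offices)):
        best, best_score = None, -1
        for j, A in enumerate(offices):
            if j in winners:
                continue                 # office j already populated
            for a in sorted(A):          # fixed tie-breaking order
                score = sum(w for w, v in zip(weight, votes) if a in v)
                if score > best_score:
                    best, best_score = (j, a), score
        j, a = best
        winners[j] = a                   # fix office j; its other
                                         # alternatives drop out
        for i, v in enumerate(votes):    # harmonic reweighting
            if a in v:
                reductions[i] += 1
                weight[i] = Fraction(1, reductions[i] + 1)
    return winners

offices = [{"a", "b"}, {"c", "d"}, {"e", "f", "g"}]
votes = [{"a", "c", "e"}, {"a", "c", "f"}, {"a", "d", "f"}]
print(spav_offices(offices, votes))      # {0: 'a', 1: 'c', 2: 'f'}
\end{verbatim}
On the instance of Example~\ref{example:toy}, this reproduces the assignment
derived step by step in the example below.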
\begin{example}
Consider again the election described in Example~\ref{example:toy}, consisting of the $k = 3$ sets of alternatives $A_1 = \{a, b\}$, $A_2 = \{c, d\}$, and $A_3 = \{e, f, g\}$, and $3$ voters: $v_1 = \{a, c, e\}$, $v_2 = \{a, c, f\}$, and $v_3 = \{a, d, f\}$.
In the first iteration of SPAV, we will select alternative $a$ to populate the first office; then we reweight all votes to be $1/2$ (as all voters approve $a$). In the next iteration we will select either $c$ or $f$ (as both have a total weight of $1$); say that our tie-breaking selects $c$.\footnote{We do not discuss tie breaking as it technically clutters the presentation; say that we do it arbitrarily following some predefined order over all alternatives.} Then, we reweight $v_1$ and $v_2$ to be both $1/3$. In the third and last iteration, alternative $e$ has weight $1/3$, while alternative $f$ has $1/3 + 1/2 = 5/6$, so we select $f$. Thus, SPAV assigns $a$ to the first office, $c$ to the second office, and $f$ to the third office.
\end{example}
\subsection{The Merits of SPAV}
Our main aim is to achieve some sort of proportionality, in that minorities would not be completely disregarded when populating the offices.
Consider first the following example.
\begin{example}
Consider a toy society instantiating the general situation presented in Example~\ref{example:one}, with $3$ offices and $3$ voters, $v_1$, $v_2$, and $v_3$, voting as follows:
For each office $i$, $i \in \{1, 2, 3\}$, $v_1$ votes for $a_i$, while $v_2$ and $v_3$ vote for $b_i$.
Note that $v_2$ and $v_3$ indeed form a cohesive majority. SPAV first selects $b_1$ for the first office (it has total weight $2$, against $1$ for $a_1$), which reduces the weight of $v_2$ and $v_3$ to $1/2$ each; as a consequence, at least one of $a_2$ and $a_3$ is subsequently selected, so the minority is also taken into account.
\end{example}
We ran computer-based simulations to evaluate the extent of such minority representation. To this end, we generated artificial voter profiles and checked, for each of them, whether the following property holds: for each group $V'$ of $n / k$ voters such that, for each office, there is at least one alternative approved by all voters of $V'$, there is at least one office for which the selected alternative is approved by at least one voter from $V'$. We view satisfying this property as meaning that the voting rule does not completely disregard the preferences of large enough, cohesive minorities.
We first generated completely random profiles, in particular, profiles in which each voter has some probability to approve each of the alternatives.
For such profiles with, e.g., $30$ voters, $10$ offices, and $10$ alternatives in each office, and for $p \in \{0.25, 0.5, 0.75\}$, where $p$ is the probability that a voter approves a given alternative, the property mentioned above is indeed satisfied in each of $1000$ generated random profiles.
We then generated profiles differently: we pick a group of $n / k$ voters, set all of them to approve some randomly-chosen alternative for each office, and generate all remaining $n - n / k$ voters randomly as described above. For this distribution of profiles, again, we get $100\%$ satisfaction of the property described above over $1000$ profiles, each with $9$ offices, $9$ alternatives in each office, and $27$ voters, for $p \in \{0.25, 0.5, 0.75\}$.
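As an illustration of this experimental setup, the following sketch (assuming the \texttt{adapted\_spav} function above; the profile generator is our simplified reading of the first distribution) checks the property directly. Note that the group size $n/k$ is small in the parameter ranges used, so enumerating all groups is feasible.
\begin{verbatim}
import itertools, random

def satisfies_property(offices, voters, winners):
    # every cohesive group of n/k voters (one commonly approved
    # alternative per office) must approve at least one winner
    n, k = len(voters), len(offices)
    for group in itertools.combinations(range(n), n // k):
        cohesive = all(any(all(a in voters[v] for v in group)
                           for a in offices[j]) for j in range(k))
        if cohesive and not any(winners[j] in voters[v]
                                for j in range(k) for v in group):
            return False
    return True

def random_profile(n, k, m, p, seed=0):
    # each voter approves each alternative independently with probability p
    rng = random.Random(seed)
    offices = [{"a%d_%d" % (j, i) for i in range(m)} for j in range(k)]
    voters = [{a for A in offices for a in A if rng.random() < p}
              for _ in range(n)]
    return offices, voters
\end{verbatim}
A planted cohesive group, as in the second distribution, is obtained by additionally setting $n/k$ chosen voters to approve one fixed alternative per office before running the check.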
While there might indeed be profiles for which SPAV does not satisfy the proportionality property described above, we view this preliminary experimental evaluation as giving reason to believe that, in practice, SPAV would indeed not violate the property, thus providing sufficient representation to large enough, cohesive minorities.
\section{Declined Candidates}\label{section:declined}
We briefly discuss how to deal with candidates who are selected to an office but decline to serve in the office:
In particular, assume that, for a given instance, there is a candidate $c$ that is selected to some office $A_j$ as a winner; however, when the day comes, $c$ refuses to populate the $j$th office (say, e.g., that $c$ accepts a different career).
A simple solution would be to simply run the aggregation method again, after removing $c$ from the election. However, when using SPAV, it might be the case that, as a result, other offices will get different candidates as winners. As this might be unacceptable (as it means, e.g., that if the foreign minister declines to accept then we change the environmental minister), we offer a different option, as follows.
In particular, we can keep the other $k - 1$ ministers intact, remove $c$ from the election, and run a single further iteration of SPAV, resulting in a different candidate to be selected for the office that $c$ was originally elected for. This ensures that the other winners are kept as they were, while the weights of all voters are calculated properly, and a different candidate is being selected for that office.
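A sketch of this replacement step, in the conventions of the code above (the helper name is ours); voter weights are recomputed from the $k-1$ kept winners so that they are, as described, calculated properly:
\begin{verbatim}
def replace_declined(offices, voters, winners, j_declined, declined):
    # remove the declined candidate and keep the other k - 1 winners
    offices = [set(A) for A in offices]
    offices[j_declined].discard(declined)
    reductions = [sum(1 for j, a in winners.items()
                      if j != j_declined and a in appr) for appr in voters]
    # one further SPAV iteration, restricted to the vacated office
    best = max(sorted(offices[j_declined]),
               key=lambda a: sum(1.0 / (reductions[v] + 1)
                                 for v, appr in enumerate(voters)
                                 if a in appr))
    return {**winners, j_declined: best}
\end{verbatim}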
An unexpected beneficial byproduct of this method is that the replacement of an impeached minister is not known in advance, before calculating it from the ballot. This reduces the motivation of any particular candidate or their supporters to initiate an unjustified impeachment or recall elections for a minister.
\section{Paper-and-Pencil Realization}
An essential property of a voting rule is that it can be easily explained to the voter. Another important property is that its realization does not require trusting external elements (e.g., hardware and software). Fortunately, SPAV was invented before computers and hence must have been realized initially without them.
For completeness, clarity, and ease of implementation, we describe here a paper-and-pencil
realization of our SPAV protocol for electing the executive branch, including for determining replacements for declined candidates.
The basic process is as follows:
\begin{enumerate}
\item Before the vote commences, there is a finite list of candidates and a finite list of voters. Each candidate name is associated with one office.
\item During the vote, every voter writes a list of names on a note, places the note in an envelope and then in the ballot box.
\item All envelopes are collected and opened. If there is a limit on the number of names a voter can vote for, then all excess names on a note, as well as names of non-candidates, are stricken with X's. If any name in the note is stricken with a line, then it is stricken again with X's.
\item The \emph{weight} of a name in a note is $1/(s+1)$, where $s$ is the number of names stricken with a line in that note, if the name appears in the note, and zero otherwise.
\item Before vote counting commences, all offices are vacant and no name on any note is stricken with a line.
\item Counting proceeds in rounds until all offices are occupied (or no vacant office has a candidate named in a note) as follows: In each round, the combined weight of each name in all notes is computed. The highest-weighted name for a vacant office is elected, occupies its office, and the name is stricken with a line from each note it appears in.
\end{enumerate}
This completes the description of the basic voting process. In case an elected minister cannot fill her office, a replacement is needed (as described in Section~\ref{section:declined}).
To realize the concept described there (in Section \ref{section:declined}), the following simple procedure is followed:
The name of the declining minister-elect is stricken from all notes with X's, the highest-weight candidate for the vacated office is elected, and her name is stricken with a line from all notes. The resulting notes can be kept for calculating any future replacements.
\section{Conclusions}
We have described the setting of selecting the executive branch via direct democracy. For this setting we suggest the use of an adaptation of SPAV and show, via computer-based simulations, that it indeed does not disregard minorities in many cases.
\bibliographystyle{plain}
|
1,116,691,498,393 | arxiv | \section{Introduction}
\label{sec:introduction}
The {\it persistence} is a quantitative measure of the memory of a reference state, typically chosen to be the
initial condition after an instantaneous quench, that a stochastic process
keeps along its evolution. It is a very general notion related to first passage times. Research on this
topic has been intense in the last two decades and it has been thoroughly summarised in a recent review article~\cite{BMS}.
For an unbiased random walker on a line that starts from a positive position at the initial time $t=0$, the persistence is the probability that its
position remains positive up to time $t$~\cite{math}. For spin systems the zero-temperature local persistence simply
equals the fraction of spins that have never flipped at time $t$ since a zero-temperature quench performed
at time $t=0$. Equivalently, it is the probability that a single spin has never flipped between the initial time $t=0$ and
time $t$ under these conditions. Many other physical and mathematical problems where the persistence ideas can be explored are described in~\cite{BMS}. It is a particularly interesting quantity
as it depends on the whole history of the system.
In many relevant extended systems the persistence probability, or persistence in short, decreases algebraically at long times
\begin{equation}
P(t, L\to\infty)\sim t^{-\theta}
\end{equation}
(in the infinite size limit) with $\theta$ a non-trivial dynamical exponent.
Derrida, Bray and Godr\`eche defined and computed $P(t,L)$ numerically in the zero-temperature Glauber Ising chain
evolved from a random initial condition~\cite{DBG0}. They later
calculated $P(t,L\to\infty)$ in the one-dimensional time-dependent Ginzburg-Landau equation with an initial condition with
a finite density of domain walls~\cite{DBG}.
More generally, the persistence was evaluated in the $d$-dimensional $Q$-state Potts model's Glauber evolution
after a zero-temperature quench from infinite temperature~\cite{DBG0,Stauffer}. The exponent $\theta$ turns out to be
$Q$ and $d$ dependent. The
exact value of $\theta$ was obtained explicitly in $d=1$ for all values of $Q$ by mapping the domain walls to particles
which diffuse and annihilate after collision~\cite{derrida_exact_1995,derrida_exact_1996}. No such exact results exist in
higher dimensions and one has to rely on numerical estimates of $\theta$. In~\cite{DBG0}, an estimate $\theta = 0.22 (3) $ was obtained for the
Ising model in dimension two and this result
was confirmed in~\cite{Stauffer} where the dimensions one to five were considered.
These first numerical estimates were followed by the
analytic value $\theta=0.19$ obtained by Majumdar and Sire~\cite{MS,sire_analytical_2000}
with a perturbation scheme around the
Gaussian and Markovian stochastic process that mimics curvature driven domain growth~\cite{Ohta}.
In~\cite{yurke_experimental_1997} Yurke \textit{et al.} measured $\theta=0.19(3)$ in a $2d$
liquid crystal sample in the same universality class as the $2d$IM with non-conserved order parameter dynamics. Other
numerical estimates are $\theta = 0.21$~\cite{manoj_persistence_2000}, $\theta = 0.209(4)$~\cite{jain_zero-temperature_1999}
and $\theta = 0.22$~\cite{Drouffe-Godreche}. These
various estimates are roughly compatible with each other. Still, as we will explain in the main text, they are not fully satisfactory and there is room
for improvement in the determination of $\theta$.
The focus of our paper is the precise numerical evaluation of the $\theta$ exponent in the $2d$ Ising model with zero-temperature dynamics
that do not conserve the local magnetization.
As detailed in the previous paragraph, the persistence in this problem
has been mostly studied by using infinite temperature, \textit{i.e.} uncorrelated, initial
conditions. We will first present a more accurate numerical estimate of $\theta$ for this kind of initial states, and we will later analyze the
persistence for critical Ising, that is to say long-range correlated, initial conditions. We will study the model on square and triangular lattices with
free and periodic boundary conditions to check for universality. We will pay special attention to finite time and finite size effects.
The paper is organised as follows. In Sec.~\ref{sec:model}
we define the model and numerical method. We also discuss the system sizes and time-scales to be used numerically.
Section~\ref{sec:persistence} is devoted to the presentation of our results. In Sec.~\ref{sec:outlook} we discuss some
lines for future analytic and numeric research on persistence in spin models.
\section{Model and simulations}
\label{sec:model}
We consider the ferromagnetic finite dimensional Ising model
\begin{equation}
H=-J \sum_{\langle ij\rangle} S_i S_j,
\label{H}
\end{equation}
where the spin variables $S_i=\pm 1$ sit on each site of a two dimensional lattice,
the sum over $\langle ij \rangle$ is restricted to the nearest neighbours on the lattice,
and $J$ is a positive parameter that we fix to take the value $J=1$.
With this choice, the model undergoes a second order phase transition at
$\beta^{sq} = 1/(k_BT) = {1\over 2} \log{(1+\sqrt{2})}$ for the square
lattice and $\beta^{tr} = 1/(k_BT) = {1\over 4} \log{3}$ for the triangular lattice.
We will consider both types of lattices with $N=L\times L$ spins and either
free boundary conditions (FBC) or periodic boundary conditions (PBC).
The choice of boundary conditions can have an influence on the final state after a quench
to zero-temperature~\cite{SKR}--\cite{BCCP}. More precisely, the dynamics at low temperature are dominated by
the coarsening of domains with a linear length scale that grows in time as $t^{1/z}$ with the dynamic exponent $z=2$ for non-conserved
order parameter dynamics~\cite{Bray,Malte}, the kind of evolution we use here. Thus, there is a characteristic
equilibration time, $t_{eq} \simeq L^2$, in this problem. After this time, most of the finite domains have disappeared
and the configuration is either completely magnetised, such that all the spins take the same value, or in a striped state
with interfaces crossing the lattice. Depending on the geometry of the lattice and the choice of boundary conditions,
these striped states can be stable or not. In the latter case, there is some additional evolution taking place on a longer time scale.
In particular, for PBC on the square lattice, there exist diagonal stripe states with a characteristic time
$t^d_{eq} \simeq L^{3.5}$~\cite{SKR2}, while these do not exist for FBC or the triangular lattice.
In our simulations, a quench from infinite temperature is mimicked by a random configuration at $t=0$
that corresponds to a totally uncorrelated paramagnetic state.
Such a configuration is obtained by choosing each spin at random taking the values
$S_i = +1 $ or $S_i = -1$ with probability a half. We will also compare our results to the case
in which the system is constrained such that the total magnetisation is strictly zero.
This state is obtained by starting with all the spins $S_i = +1$ and then choosing at random $L^2/2$ among them
to be reversed ($L^2$ has to be even).
The critical Ising initial states were generated by equilibrating the
samples with a standard cluster algorithm.
Next we evolved the system at zero-temperature. At this temperature the dynamics are particularly simple.
After choosing at random one site, the spin
on this site is oriented along the sign of the local field (which is the sum of the nearest
neighbour spins). If this local field is zero, the value of the spin is chosen randomly.
$L^2$ such operations correspond to an increase of time $\delta t = 1$.
Quite naturally, the number of flippable spins under this rule decreases in time
and testing all the possible spins in the sample results in a waste of computer time.
It is much faster to consider only the spins which can be actually reversed. Therefore, in order to accelerate
our numerical simulations, we used the Continuous Time Monte Carlo (CTMC) method~\cite{Bortz}. In the following,
the time unit is given in terms of the equivalent Monte Carlo time step.
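For concreteness, the following is a minimal (and deliberately unoptimized) Python sketch of this single-spin-flip dynamics together with the persistence measurement, on a square lattice with PBC; the production runs use the CTMC method instead, and the function name is ours.
\begin{verbatim}
import numpy as np

def quench_persistence(L, t_max, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))    # T = infinity initial state
    flipped = np.zeros((L, L), dtype=bool)  # spins that flipped at least once
    P = []
    for _ in range(t_max):
        for _ in range(L * L):              # one time unit = L^2 updates
            i, j = rng.integers(L, size=2)
            h = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                 + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            new = np.sign(h) if h != 0 else rng.choice([-1, 1])
            if new != s[i, j]:
                s[i, j] = new
                flipped[i, j] = True
        P.append(1.0 - flipped.mean())      # fraction of never-flipped spins
    return np.array(P)                      # P(t) for t = 1, ..., t_max
\end{verbatim}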
\vspace{0.25cm}
\begin{table}[h]
\begin{center}
\begin{tabular}{ | l || c | c | c |}
\hline
\; Lattice type \; & \ $L^M_{eq}$ \ & \ $N_M$ \ & \ $L^M_{ne}$ \ \\
\hline
\; Square Lattice with PBC \; & \ $362$ \ & \ $10^6$ \ & \ $8192$ \ \\
\; Square Lattice with FBC \; & \ $512$ \ & \ $10^7$ \ & \ $8192$ \ \\
\; Triangular Lattice with PBC \; & \ $1024$ \ & \ $10^6$ \ & \ $4096$ \ \\
\hline
\end{tabular}
\end{center}
\caption{Largest size $L^M_{eq}$ at which we equilibrated $N_M$ samples, and largest size
$L^M_{ne}$ for which samples were run up to $10^7$ time steps without necessarily reaching equilibrium, for each type of lattice and boundary conditions.}
\label{Table1}
\end{table}
The sizes and number of configurations that we can simulate are limited by the computation time.
For example, for PBC on the square lattice, we simulated systems until they reached a stable state for sizes up to $L=362$.
We also simulated larger systems with a linear size up to $L=8192$ but with a maximum running time $t = 10^7$ at which the
system is not necessarily blocked yet.
In Table~\ref{Table1}, we specify the largest size $L^M_{eq}$ up to which we equilibrated $N_{M}$ samples.
We also indicate the largest size $L^M_{ne}$ for which we ran the code up to a finite time $t = 10^7$. For the largest sizes, the number of
samples $N$ has been chosen such that $N \times L^2 \simeq 2 \times 10^{11}$.
\section{Persistence}
\label{sec:persistence}
In this Section we present our numerical results.
The possible finite size dependence of the persistence probability, $P(t,L)$, is made explicit by the second argument in this function.
In a coarsening process, the persistence
is expected to decay algebraically, $P(t,L) \sim t^{-\theta}$, as long as the growing length $\xi(t) \simeq t^{1/z}$ remains
shorter than the system size $L$, and to saturate at an $L$-dependent value, $P(t,L) \sim L^{-z\theta}$, at later times,
provided $z\theta <d$ with $d$ the space dimension. The crossover between the two regimes should be captured by the scaling
form
\begin{eqnarray}
P(t,L) \simeq L^{-z\theta} \ f\left( \frac{t}{L^z} \right)
\qquad
\mbox{with}
\qquad
f(x) \sim
\left\{
\begin{array}{ll}
x^{-\theta} \qquad & x \ll 1\; ,
\\
\mbox{cst} \qquad & x \gg 1 \; .
\end{array}
\right.
\label{eq:scaling-P}
\end{eqnarray}
For $z\theta>d$ the crossover should be pushed to infinity and the persistence
should decay to zero for all $L$~\cite{manoj_spatial_2000}. In the present case $d=2$, $z=2$ and $\theta$ will turn out to be smaller than one. Therefore,
there will be a non-trivial time and size dependence of $P$ that we analyze in detail.
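The scaling form (\ref{eq:scaling-P}) can be checked by a standard data collapse, along the following lines (a sketch; \texttt{data} is assumed to map each size $L$ to arrays of times and measured persistences):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def collapse_plot(data, theta, z=2.0):
    # data: dict {L: (t, P)}; if Eq. (eq:scaling-P) holds, the rescaled
    # curves L^{z theta} P(t, L) vs t / L^z fall on one master curve f
    for L, (t, P) in sorted(data.items()):
        plt.loglog(np.asarray(t) / L**z,
                   L**(z * theta) * np.asarray(P), label="L = %d" % L)
    plt.xlabel("t / L^z"); plt.ylabel("L^{z theta} P(t, L)")
    plt.legend(); plt.show()
\end{verbatim}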
\subsection{Infinite temperature initial condition}
\label{subsec:Tinfty}
In the main panel in Fig.~\ref{Pers} we show numerical results for the persistence probability as a function of time in the ferromagnetic $2d$IM
instantaneously quenched from infinite to zero-temperature. This figure contains data for the square lattice with $L=1024$ and both FBC and PBC.
At first sight, it seems that both cases have the same algebraic decay for $t > 10$.
The best fit of the data for PBC on the interval $t \in [100:10000]$ gives $\theta =0.2218 (4)$, in excellent agreement
with previous results~\cite{DBG0,Stauffer}. This fit is shown in Fig.~\ref{Pers} with a thin solid line.
For longer times, the data saturate, \textit{i.e.} $P(t,L=1024) \simeq \mathrm{cst}$ for
$t \gtrsim 10^6 \simeq L^2$.
This is expected since for $t \simeq L^2$, the
system gets close to equilibrium and the persistence reaches a finite $L$-dependent value due to finite size effects.
We obtain that $\lim_{t\rightarrow \infty} P(t,L) = P_{\infty}(L) \sim L^{-2\theta}$
with $2\theta = 0.45 (1)$. This is also consistent with
previous results in the literature~\cite{manoj_persistence_2000}.
The other two smaller panels on the right display the scaling plots of $P(t,L)$ according to
Eq.~(\ref{eq:scaling-P}). From these plots one could conclude that the scaling is very good and
that the value of $\theta$ used is the correct one. We will see below that this is not the case.
\begin{figure}[h]
\begin{center}
\epsfxsize=450pt\epsfysize=300pt{\epsffile{Plot_pers1.ps}}
\end{center}
\caption{The persistence for the ferromagnetic $2d$IM on a square lattice
with PBC and FBC (red and blue data, respectively, see the key) quenched at $t=0$ from a $1/T \to 0$
initial state to $T=0$.
Main panel: raw data for $P(t,L=1024) \; vs. \; t$ in double logarithmic scale.
The line is the best power-law fit to the PBC data on the interval $t\in [100:10000]$ that
yields $\theta \simeq 0.2218(4)$.
Secondary panels on the right: scaling plots for PBC (above) and FBC (below) using this value of $\theta$ and $z=2$. The
system sizes are given in the key and the symbol (and color) code is the same as in all other figures in the paper.}
\label{Pers}
\end{figure}
\subsubsection{Finite time effects}
\paragraph{Systems with PBC.}
Although the time and size dependence of the data in Fig.~\ref{Pers} seems to agree well with the power-law expectations and the
value of the exponent $\theta$ found in the past by other authors, we would like to examine these dependencies more carefully.
Indeed, though the fit in
Fig.~\ref{Pers} looks good, in fact it is not. The reduced chi-squared for the power law fit
is $\simeq 6000$, which means that it is actually a very poor fit.
(The data used for the plot in Fig.~\ref{Pers} contain one million samples.)
By changing the fitting region, we observe that the value of the exponent
changes slightly, remaining in the range $\theta = 0.20 - 0.225$, though always with a very large reduced chi-squared.
In order to improve our analysis, in the first panel of Fig.~\ref{Pers2}, we show the same data as in Fig.~\ref{Pers}
after rescaling $P(t,L)$ by the power $t^\theta$ with $\theta=0.2218$, the value obtained from the analysis
done in Fig.~\ref{Pers}.
For $t/L^2 \ll 1$ this quantity would be constant if the decay were well described by this value of the exponent $\theta$.
This is clearly not the case. We observe different regimes as a function of time that are characterized by different
effective exponents $\theta_{\rm eff}(t,L)$ represented as a function of time in the right panel in the same figure.
$\theta_{\mathrm{eff}}(t,L)$ was obtained from a fit
of the persistence probability $P(t,L)$ to a power law in the range $[t/3,3t]$.
At short times, $ 10 \lesssim t \lesssim 100$, $\theta_{\rm eff} \simeq 0.2214$.
In Stauffer's work~\cite{Stauffer}, measurements were done on very short time scales,
up to $t=200$, thus in this first regime of our analysis.
Next, for $ 100 \lesssim t \lesssim 1000 $ the exponent increases towards $\theta_{\rm eff} \simeq 0.2241$. For still
longer times, $t \gtrsim 10^5$, it decreases back to $\theta_{\rm eff} \simeq 0.207$.
We also note in the left panel that the persistence does not depend on the size of the system
until times of the order of $t \simeq L^2/10$. For each size, we observe a drop of $P$ beyond this time
(this drop is followed by a rapid increase that, for clarity, we removed from the presentation since it
corresponds to the saturation due to the finite system size) which signals the approach to equilibrium.
The existence of finite size effects at $t \gtrsim L^2/10$ was already observed in~\cite{Stauffer}.
\vspace{0.5cm}
\begin{figure}[h]
\begin{center}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_persPBC.ps}}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_exp_persPBC.ps}}
\end{center}
\caption{The ferromagnetic $2d$ Ising model on a square lattice
with PBC quenched from $T\to\infty$ to $T=0$ at $t=0$. The system sizes are given in the
keys and the symbol (and color) code is the same as in all other figures in the paper.
Left panel: $ t^\theta P(t,L) \; vs. \; t$ with $\theta=0.2218$. The line is a power law $t^{x}$ with $x=0.2218-0.2069=0.0149$
($0.2069$ is the value of the effective exponent in the last time-interval).
Right panel: the effective exponent $\theta_{\rm eff} \; vs. \; t$. The line is the fit (\ref{falpha}) with parameters
$\theta_0 = 0.198 (3)$, $\theta_1 \simeq 0.07$ and $\overline\beta \simeq 0.15$.
}
\label{Pers2}
\end{figure}
\vspace{0.5cm}
From the right panel one sees that for times
$1000 \lesssim t \lesssim L^2/10$, the effective exponent
slowly decreases with time and it is well described by a fit to the form
\begin{equation}
\label{falpha}
\theta_{\rm eff}(t,L) = \theta_0 + \theta_1 t ^{-\overline\beta}
\end{equation}
with $\theta_0 = 0.198 (3)$, $\theta_1 \simeq 0.07$, and $\overline\beta \simeq 0.15$ (shown with a solid line). In conclusion, we obtain the following numerical
estimate of the persistence exponent in the ferromagnetic $2d$IM on the square lattice
with PBC,
\begin{equation}
\theta^{\rm PBC}_{\square} = 0.198 (3)
\; ,
\end{equation}
in the large size and long time limits.
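The effective exponent and the extrapolation (\ref{falpha}) can be reproduced along the following lines (a sketch with function names of our own; \texttt{t} and \texttt{P} are arrays of times and measured persistences):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def theta_eff(t, P, t0):
    # local power-law exponent: fit P ~ t^{-theta} on [t0/3, 3 t0]
    m = (t >= t0 / 3) & (t <= 3 * t0)
    slope, _ = np.polyfit(np.log(t[m]), np.log(P[m]), 1)
    return -slope

def extrapolate(t_vals, theta_vals):
    # fit theta_eff(t) = theta0 + theta1 * t^(-beta), Eq. (falpha)
    f = lambda t, th0, th1, beta: th0 + th1 * t**(-beta)
    popt, _ = curve_fit(f, t_vals, theta_vals, p0=(0.2, 0.07, 0.15))
    return popt                      # (theta0, theta1, beta)
\end{verbatim}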
In Fig.~\ref{Pers3}, we show the same quantities for the triangular lattice with PBC.
\begin{figure}
\begin{center}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_TR_persPBC.ps}}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_exp_TR_persPBC.ps}}
\end{center}
\caption{The ferromagnetic $2d$ Ising model on a triangular lattice
with PBC quenched from $T\to\infty$ to $T=0$ at $t=0$. The system sizes are given in the
keys.
Left panel: $t^\theta P(t,L) \; vs. \; t$ with $\theta=0.2218$. The line is the power-law $t^x$ with $x=0.2218-0.2065=0.0153$
($0.2065$ is the value of the effective exponent in the last time-interval).
Right panel: the effective exponent $\theta_{\rm eff} \; vs. \; t$. The line is the fit (\ref{falpha})
with parameters $\theta_0 = 0.200 (3)$, $\theta_1 \simeq 0.04$ and $\overline\beta \simeq 0.15$.
}
\label{Pers3}
\end{figure}
In the left panel we rescale the persistence with the value $\theta=0.2218$. The curves for different sizes
coincide at a given time, but after this rescaling the persistence is still not constant, indicating again that there are finite time effects.
We computed the effective exponent $\theta_{\rm eff}$ as explained above and we plotted it as a function of time
for different sizes $L$ in the right panel of Fig.~\ref{Pers3}. The situation is similar to the one on the square lattice.
To obtain a good estimate we also use a good quality fit to the form (\ref{falpha}) with $\theta_0 = 0.200 (3)$,
$\theta_1 \simeq 0.04$, and $\overline\beta \simeq 0.15$.
The asymptotic value
\begin{equation}
\theta_{\triangle}^{\rm PBC} = 0.200 (3)
\end{equation}
is compatible with the one obtained on the square lattice $\theta_{\square}^{\rm PBC}=0.198 (3)$. We think that these
estimates are more reliable and accurate than the ones obtained with a fit of the persistence over the whole time interval or
over just short times as done in previous studies.
\paragraph{Systems with FBC.}
Now we turn to the case of FBC on the square lattice. Going back to Fig.~\ref{Pers}, we can observe that in this case
there is a deviation from scaling behaviour for $t > 10\,000$.
A fit to a power law also gives a very large reduced chi-squared, but the fit improves as we move to
longer times. This is better seen in Fig.~\ref{Pers4}.
\begin{figure}[t]
\begin{center}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_persFBC.ps}}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_exp_persFBC.ps}}
\end{center}
\caption{The ferromagnetic $2d$ Ising model on a square lattice
with FBC quenched from $T\to\infty$ to $T=0$ at $t=0$. The system sizes are given in the
keys with the same symbol (and color) code as in all other figures.
Left panel: $t^\theta P(t,L) \; vs. \; t$ with $\theta=0.2218$. The thin line is the power-law $t^x$ with $x=0.2218-0.195=0.0268$
that is in better agreement with the data sets.
Right panel: the effective exponent $\theta_{\rm eff} \; vs. \; t$. Note the spreading of data for different $L$
and the finite length plateau at a size-dependent height close to 0.195 (see the text for a discussion). In the inset we show $\theta_{\rm eff}$ as a function
of $t/L^2$.
}
\label{Pers4}
\end{figure}
In the left panel in Fig.~\ref{Pers4}, we plot $t^\theta P(t,L)$
as a function of $t$ with $\theta=0.2218$.
The plot does not approach a constant showing that this value of $\theta$ is not the
definitive one. We observe that $t^\theta P(t,L)$ actually goes to a power law $t^x$ with $x \simeq 0.2218-0.195=0.0268$ for long
times, just before reaching equilibration, suggesting that $\theta$ is close to $0.195$ in this case.
We also note that $t^\theta P(t,L)$ still depends on the system size, contrary to what was observed for PBC
({\it cf.} the left panels in Figs.~\ref{Pers2} and \ref{Pers3}).
This fact can also be seen in the right panel of Fig.~\ref{Pers4} where we plot the effective exponent
$\theta_{\rm eff}$ as a function of time. For each size, we observe a plateau at times of the order of $t \simeq L^2/10$.
Thus, for FBC, finite size effects are much stronger than for PBC.
We found that the value of $\theta$ at long times, read off as
the height of the (finite-length) plateau in the right panel,
depends weakly on the system size.
We measured $\theta=0.1950 (1) $ for $L=256$, $\theta=0.1953 (1)$ for $L=512$ and
$\theta=0.1954 (1)$ for $L=1024$. This last value is obtained from only one million samples. For large sizes,
we have far fewer samples and the precision deteriorates.
Still, it seems that a large size extrapolation will be compatible with the values obtained
for PBC, $\theta_\square= 0.198 (3)$ or $\theta_\triangle=0.200(3)$.
\subsubsection{Finite size effects}
\paragraph{Difference between FBC and PBC.}
Let us come back to the behaviour of the effective persistence exponent
$\theta_{\rm eff}$
with the system size for PBC and FBC. Note that while for
FBC $\theta_{\rm eff}$ has a constant value for long times and finite $L$,
a similar result is obtained
for PBC in the long time {\it and} large size limit only.
When the PBC effective persistence exponent is plotted as a function of
time, all the curves fall on the same master curve. Thus, for PBC, $\theta_{\rm eff}(t,L) \simeq f(t)$ with no
apparent dependence on the system size. On the contrary, for FBC, this is not the case
and one can show that $\theta_{\rm eff}(t,L) \simeq f(t/L^2)$ which is a more usual form of scaling, see the inset in Fig.~\ref{Pers4}. We have no explanation for the
intriguing difference between these two cases.
\paragraph{Saturated values.}
We now turn to the determination of the exponent $\theta$ from the final value of the persistence,
$P(t\to\infty, L) = P_{\infty}(L) \simeq L^{-2\theta}$
once the system is equilibrated. The exponent $2\theta$ can be determined in this case by a two-point fit over
successive sizes $L$. The measurement of $P_{\infty}(L)$ is in fact very time consuming since we
need to truly equilibrate all samples ({\it i.e.} eliminate all diagonal stripes in them).
As a consequence, for this measurement, the largest systems that we considered are much smaller
than for the finite time analysis.
The largest sizes are given in Table~\ref{Table1} except for the square lattice with FBC in which case we have
data up to $L=1024$ but with only $10^6$ samples.
The results for $2\theta$ are shown in Fig.~\ref{Pers5} for FBC and PBC on the square and triangular lattices.
The effective exponent between two systems with sizes $L$ and $L'$ is
\begin{equation}
2\theta_{\rm eff}\left(\frac{L+L'}{2}\right) = - \frac{\log \left(\displaystyle{\frac{P_\infty(L) }{P_\infty(L') }} \right) } { \log\left( \displaystyle{\frac{L}{L'} }\right) } \; .
\label{eq:eff-theta}
\end{equation}
We take $L'=2L$.
In all cases, we observe that there are strong finite size corrections.
The value of $2\theta_{\rm eff}$ decreases with size but it is hard to extrapolate an asymptotic value
from these data points.
We also show in Fig.~\ref{Pers5} data on a triangular lattice with PBC.
There are two advantages with this lattice. First, the diagonal crossing states are absent~\cite{BP}
and we can therefore reach equilibrium for relatively large sizes, up to $L=1024$.
Second, the results for PBC have smaller finite size corrections than the ones for FBC. Moreover, we observe
that the finite size corrections on the PBC triangular lattice are much smaller than on the PBC square lattice
for no obvious reason. Thus, the results on the former case are much more accurate.
A large size extrapolation of these data points gives a prediction in the range
$2\theta_{\rm eff} = 0.40 - 0.41$ which is compatible with our previous estimates obtained from the time
evolution. However, since the sizes are more limited, and it is harder to
fit these data, we conclude that this is not the optimal way to determine
$\theta$. In Fig.~\ref{Pers5} we also show the values of $2\theta_{\rm eff}$ obtained by starting from configurations with
strictly zero magnetisation. This constraint does not seem to change the behaviour. These are also subject to strong finite size corrections.
\begin{figure}[t]
\begin{center}
\epsfxsize=280pt\epsfysize=220pt{\epsffile{plot_exp_pers.ps}}
\end{center}
\caption{The effective exponent $2 \theta_{\rm eff}$ defined in Eq.~(\ref{eq:eff-theta}) from the saturated value of $P$ {\it vs.} $L$
in the ferromagnetic $2d$IM
with different boundary conditions and on different lattices (see the key for the symbol code) quenched from $T\to\infty$ to $T=0$ at $t=0$.}
\label{Pers5}
\end{figure}
\subsection{Critical Ising initial conditions}
We will now concentrate on a zero-temperature quench from a critical Ising initial condition. As for the infinite temperature
initial states, we can either follow the behaviour of the persistence with time, $P(t,L\to\infty) \simeq t^{-\theta_c}$
or check the size dependence of the asymptotic,
$t\to\infty$, value, $P_\infty(L) \simeq L^{-2\theta_c}$.
Both cases are shown in Fig.~\ref{Perstc}. The left panel shows $P(t,L)$ versus $t$ for the triangular lattice with PBC.
As is the case for a quench from infinite temperature,
the study of the time dependence is tricky, as
the long time regime where the true persistence decay dominates is only attained for large systems ($L\ge 1000$).
In this way one measures $\theta_c=0.033 (2)$, leading to the line $t^{-0.033}$ also shown in the main part of the plot.
The inset displays $t^{\theta_c} P(t,L)$ with $\theta_c=0.033$; the curves do not really
have a flat plateau (the upturning parts are due to the finite size saturation), which demonstrates that this
determination of $\theta_c$ is not fully reliable.
In the right panel, we show the effective exponent $2 \theta_{\rm eff}$ versus $L$ for the square lattice (with one million samples) and
the triangular lattice (with ten million samples) and PBC. The solid line shows a fit of the triangular lattice data to the form $2\theta_{\rm eff} = 2 \theta_0 + 2\theta_1 L^{-\overline\beta}$
that yields $2 \theta_0 = 0.066 (2)$, $\theta_1 \simeq 0.015$ and $\overline\beta \simeq 0.25$.
Therefore, we estimate $\theta_c = 0.033(1)$ which is in good agreement
with the value measured from the long time limit. It is likely that the value obtained from the fit of $P_\infty(L)$ is more accurate than the one
extracted from the time dependence.
\begin{figure}
\begin{center}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_TR_perstcPBC.ps}}
\epsfxsize=220pt\epsfysize=150pt{\epsffile{Plot_ee_Tr_cPBC.ps}}
\end{center}
\caption{The ferromagnetic $2d$ Ising model on a triangular lattice
with PBC quenched from $T_c$ to $T=0$ at $t=0$. The system sizes are given in the
key. Left panel: $P(t,L) \; vs. \; t$ in double logarithmic scale.
The solid line is a best fit to $t^{-\theta_c}$ with $\theta_c \simeq 0.033$. The inset is $t^{\theta_c} P(t,L) \; vs. \; t$ with $\theta_c=0.033$.
Right panel: the effective exponent $2 \theta_{\rm eff} \; vs. \; L$ on the square and the triangular lattices with PBCs.
The solid line is a fit of the triangular lattice data to the form $2 \theta_{\rm eff} = 2 \theta_0 + 2 \theta_1 L^{-\overline\beta}$
with parameters $2 \theta_0 = 0.066 (2)$, $\theta_1 \simeq 0.015$ and $\overline\beta \simeq 0.25$.
}
\label{Perstc}
\end{figure}
One can derive a bound on the persistence exponent from
geometrical arguments. It was shown in~\cite{BP} that the number of spanning (for FBC) or wrapping (for PBC) clusters does not change
under a quench of the $2d$IM from its critical point up to zero-temperature. But one can refine the analysis and
show that, for each dynamic run, there is a one-to-one correspondence between the spanning or wrapping clusters in
the initial and final configurations (there is no coalescence nor breaking of domains)~\cite{ABCS,BCCP}.
Thus, the spanning or wrapping clusters survive the coarsening process while all
other clusters shrink and eventually disappear. This means that the only spins contributing to $P_\infty(L)$
have to belong to the initial spanning clusters. In the simplest case where one cluster spans both directions for a
system with FBC, the initial spanning cluster is certainly the largest one and its mass scales as $\sim L^{D}$ with
$D$ its fractal dimension. Thus, the persistence $P_\infty(L)$ must decay at least as fast as the
fraction of spins in the initial spanning cluster, $L^{D-d}$. As $P_\infty(L)\sim L^{-2\theta_c}$,
we have $L^{D-d} \leq L^{-2\theta_c}$ and
\begin{equation}
\label{eq:bound_pers}
2\theta_c \ge d-D
\; .
\end{equation}
Remembering that $D=d-(\beta/\nu)_{tri}$ where $(\beta/\nu)_{tri}=5/96$ are the exponents\footnote{These exponents are the same as the ones of the magnetisation of the tricritical $Q=1$ Potts model \cite{ck,SV}.}
associated to the size of the biggest spin cluster, one can rewrite
Eq.~(\ref{eq:bound_pers}) as $2\theta_c\ge (\beta/\nu)_{tri}$.
The exponent that we measured complies with this inequality as
$2\theta_c = 0.066$ and $(\beta/\nu)_{tri}=5/96\simeq 0.052$. We must note that
we have assumed that all the persistent spins in the final state belong to the initial spanning cluster.
During evolution the persistent sites can also belong to finite size clusters, so that we cannot apply a
similar argument at finite times.
One may rewrite the inequality (\ref{eq:bound_pers}) in another way by considering the spatial distribution of persistent sites. Several authors
have shown that the persistent sites of a system quenched from infinite temperature have a non trivial spatial
distribution~\cite{manoj_scaling_2000,manoj_spatial_2000,jain_scaling_2000,manoj_persistence_2000,ray_persistence_2004}.
Indeed, persistent sites form fractal clusters of dimension $D_p=d-z\theta$ with $z=2$, as can easily be seen from the
infinite time value of the persistence, $P_\infty(L) \sim L^{-2\theta}$: the number of
persistent sites thus behaves as $L^{d-2\theta}$. While we have not checked that the persistent sites have a fractal
structure in the case of a critical initial condition, they quite likely do in analogy to the previous case. We will
therefore assume that the persistent clusters have a fractal dimension $D_p=d-z\theta_c$, which is compatible with the
value of the persistence in the final state. The inequality~(\ref{eq:bound_pers}) can then be rewritten as $D_p\le D$.
This is quite natural since the persistent sites in the final state are a subset of the initial largest cluster and their
fractal dimension must be less than the one of the largest initial cluster.
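With the values measured above, this inequality is satisfied but nearly saturated:
\[
D_p = d - z\theta_c \simeq 2 - 0.066 = 1.934 \le D = d - (\beta/\nu)_{tri} = 2 - 5/96 \simeq 1.948 \; .
\]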
\section{Outlook}
\label{sec:outlook}
From the time-dependent analysis of systems with PBC instantaneously quenched from infinite to zero-temperature
we estimated $\theta$ to be $\theta^{\rm PBC}_\square = 0.198(3)$ and $\theta^{\rm PBC}_\triangle=0.200(3)$. Taking the mean
between these two values we have
\begin{equation}
\theta=0.199(2)
\; .
\end{equation}
The analysis of systems with FBC yielded results that are compatible with this value.
The value of the persistence exponent in a zero-temperature quench from critical Ising initial conditions
that we measured,
\begin{equation}
\theta_c\simeq 0.033(1)
\; ,
\end{equation}
is clearly different from the value measured, and computed analytically,
for a quench from infinite temperature initial conditions. The difference between
the two situations lies in the absence of long-range ferromagnetic correlations in the infinite temperature initial conditions, in contrast to the existence of
long-range correlations of this type in the critical Ising ones. The absence of correlations
is one of the assumptions made in the analytical derivation of $\theta$, \textit{i.e.} that the initial correlations are
short-ranged so that they play no role in the universal behaviour of the dynamics~\cite{MS}.
We have seen from the simulations that $\theta_c \le \theta$.
This inequality seems quite natural as one expects that fewer spins will flip when the initial condition is better preserved, as is the
case for the critical initial states compared to the infinite temperature ones. One could check this idea in at least two other
cases.
Spatial correlations can be included in the initial conditions such that $\langle s_i(0)s_{i+r}(0)\rangle\sim
1/r^{d+\sigma}$ with $\sigma=\eta-2$. We expect that
there should be a special value $\sigma_c$ of the exponent of the initial correlations such that for correlations decaying
faster than $r^{-(d+\sigma_c)}$ we recover the short-range case. This prediction could be tested numerically.
One could also try to extend the analysis in~\cite{MS}, possibly combined with the one in~\cite{Bray-Humayun},
to obtain an analytical approximation for $\theta$ in this case.
Another interesting route is to study persistence after a waiting-time at zero-tem\-per\-a\-ture.
Let us define the generalised local persistence $P(t_w,t;L)$ with $t>t_w$ as the
fraction of spins which have never flipped between time $t_w$ and $t$.
As the interval $[0,t]$ can be split into two disjoint intervals $[0,t_w]$ and $[t_w, t]$ one has
$P(0,t;L)=P(0,t_w;L)P(t_w,t;L)$. Using the scaling laws for $P(0,t;L)$ and $P(0,t_w; L)$
one obtains that the generalised persistence should decay as:
\begin{equation}
P(t_w,t;L) = \frac{P(0,t;L)}{P(0, t_w; L)} \sim \frac{f(t/L^z)}{f(t_w/L^z)}
\; .
\end{equation}
Choosing $t_w \ll L^z$ one has two possible limiting expressions for $P(t_w,t;L)$:
\begin{eqnarray}
P(t_w, t; L) \simeq
\left\{
\begin{array}{l}
\left( \displaystyle{\frac{t}{t_w}} \right)^{-\theta}
\qquad
\mbox{for}
\qquad
t\ll L^z
\\
\left( \displaystyle{\frac{L^z}{t_w}} \right)^{-\theta}
\qquad
\mbox{for}
\qquad
t\gg L^z
\end{array}
\right.
\end{eqnarray}
Taking now $t_w \simeq L^\alpha$ with $\alpha < z$
\begin{eqnarray}
P(t_w,t;L) \simeq
L^{-\theta (z-\alpha)}
\qquad
\mbox{for}
\qquad
t_w \simeq L^\alpha \ll L^z \ll t
\label{eq:gen_pers}
\end{eqnarray}
We checked Eq.~(\ref{eq:gen_pers}) numerically by putting the law $P(t_w, t\gg L^2; L)\sim
L^{-2\theta(\alpha)}$ to the test. The data are compatible with an exponent
$\theta(\alpha)$ that decreases linearly with $\alpha$, $\theta(\alpha)=
(1-\alpha/z)\theta(0)$, though we see a weak deviation from linearity
for $\alpha \simeq \alpha_p$, with $\alpha_p=1/2$ on the square lattice and $\alpha_p=1/3$ on the
triangular lattice. Indeed,
we showed in~\cite{BCCP} that a $2d$IM ferromagnet quenched from infinite
temperature approaches critical percolation after a time $t_p \simeq L^{\alpha_p}$ with $\alpha_p$ taking these
values on the two lattices. The deviation should then be due to the existence of percolating states in the system.
Finally, one could extend this analysis to the evolution at finite temperature by using, {\it e.g.}, the numerical methods proposed in~\cite{Derrida} and
\cite{Drouffe-Godreche} to measure persistence under thermal fluctuations. A careful analysis of pre-asymptotic and finite size effects should help settling the issue
about the dependence or independence
of the local persistence exponent on temperature in the low-temperature phase~\cite{Derrida}. One could also measure the deviations from the scaling relation $z\theta = \lambda-d+1 - \eta/2$,
with $\theta$ the global persistence exponent, $\eta$ the static anomalous dimension and $\lambda$ the dynamic short-time exponent, expected at criticality
beyond the Gaussian approximation~\cite{Majumdar96}.
\vspace{0.5cm}
\noindent{\large\bf Acknowledgments}
\vspace{0.5cm}
LFC is a member of the Institut Universitaire de France.
|
1,116,691,498,394 | arxiv | \section{Introduction}
In his proof of the non-integrability of the restricted three-body problem, \citet{Poincare1899} first identified the possibility of dynamical chaos in the motion of planetary systems.
This result cast doubt on Laplace and Lagrange's "proof" of the solar system's stability \citep{Laskar2012Stable}.
Eventually, the development of KAM theory \citep{Kolmogorov:430016,arnold1963small,arnold1963proof,moser1973stable} led to the understanding that the phase spaces of conservative dynamical systems like the $N$-body problem are generally an intricate mix of quasi-periodic and chaotic trajectories.
However, deducing when particular planetary systems are chaotic or not remains an unsolved problem; the rigorous mathematical results of KAM theory are typically of little practical use when applied to realistic astrophysical cases.
One solution is to turn to numerical simulations:
the determination of the solar system's chaotic nature was finally made possible with the advent of the computers capable of running simulations spanning billions of years {\citep[e.g.,][]{1988Sci...241..433S,1989Natur.338..237L,wisdom_holman1991,SussmanWisdom1992,BatyginLaughlin2008,2009Natur.459..817L}}.
A fairly comprehensive global picture of the regular and chaotic regions of phase-space for test-particle orbits in the solar system has since been established by numerical means \citep{Robutel2001}.
The discovery of thousands of exoplanetary systems over the past few decades has renewed interest in understanding chaos and dynamical stability in planetary systems.
Most general studies of stability in multi-planet systems have focused on fitting empirical relations to large ensembles of $N$-body simulations \citep[e.g.,][]{1996Icar..119..261C,2007MNRAS.382.1823F,Smith:2009eu,Petrovich2015,PuWu2015,2016ApJ...832L..22T,Obertas2017}.
However, such numerical studies suffer some limitations: the large parameter space of the problem, six dynamical degrees of freedom plus a mass for each planet, severely restricts the extent of any numerical explorations.
Additionally, the ages of many exoplanet systems, as measured in planet orbital periods, are frequently orders of magnitude larger than what can feasibly be integrated on a computer so that it is often necessary to extrapolate such numerical results.
Perhaps most importantly, empirical fits do not reveal the underlying dynamical mechanisms responsible for chaos and instability.
Therefore, analytic results are desirable as a complement to such numerical studies.
The resonance overlap criterion, proposed by \citet{1979PhR....52..263C} \citep[and also ][]{WalkerFord69}, provides one of the few analytic tools for predicting chaos in conservative systems.
The heuristic criterion states that large-scale chaos arises in the phase space of conservative systems when domains of resonant motion overlap with one another.
The criterion was first applied to celestial mechanics by \citet{Wisdom80}, who derived a criterion for the onset of chaotic motion of a closely spaced test particle in the restricted circular three-body problem.
Wisdom's criterion is based on the overlap of {\it first-order} mean motion resonances (MMRs).
Since then, the criterion has found numerous applications in planetary dynamics \citep[e.g.,][]{Holman:1996tu,1997AJ....114.1246M,Murray:1999ff,2006ApJ...639..423M,Quillen:2006ez,2008LNP...760...59M,2011ApJ...739...31L,2011MNRAS.418.1043Q,QuillenFrench2014,2015ApJ...799..120B,Ramos:2015vla,StorchLai2015,Petit2017}.
\citet{Wisdom80}'s overlap criterion has been extended to test particles perturbed by an eccentric planet \citep{Quillen:2006ez} and the case of two massive planets on nearly circular orbits \citep{Deck2013overlap}.
\citet{MustillWyatt2012} derive an analytic criterion for the onset of chaos for an eccentric test particle ($\delta a /a \propto \mu^{1/5} e^{1/5}$), again based on the overlap of first-order MMR's.
{
The aforementioned works considered the overlap of MMRs only.\footnote{\citet{Ramos:2015vla} refine \citet{Wisdom80}'s overlap criterion by considering the presence of second-order resonances, though they do not account for the finite width of these resonances. {\citet{2008LNP...760...59M} develops a criterion for the overlap of $N$:1 resonances in the general three-body problem to predict chaos in eccentric systems in the widely spaced regime (period ratios $P'/P>2$), complementary to the closely spaced regime we consider in this paper.}}
As \citet{Wisdom80} originally demonstrated, the widths of first-order resonances increase with increasing eccentricity so that resonance overlap and chaos is expected to occur at wider spacings for eccentric planets than for nearly circular planets.
However, as we demonstrate in this paper, accounting for the contribution of higher-order resonances beyond first order is essential for correctly predicting the onset of chaos when planets have nonnegligible eccentricities.}
This paper is organized as follows.
We analytically predict {the onset of} chaos based on the overlap of resonances in Section \ref{sec:two_planet_overlap} and compare analytic predictions with numerical integrations in Section \ref{sec:numerical_compare}.
We compare the newly derived resonance overlap criterion with other stability criteria in Section~\ref{sec:criteria_compare} and {numerically explore} the relationship between chaos and instability in Section~\ref{sec:chaos_vs_stability}. We conclude in Section \ref{sec:conclusion}.
\section{A Theory for the onset of chaos}
\label{sec:two_planet_overlap}
Here we derive the main result of this paper: the resonance overlap criterion that predicts
the critical eccentricity for the onset of chaos, as a
function of planet mass and separation.
To simplify our discussion, we initially restrict our considerations to an eccentric test-particle subject to a massive exterior perturber on a circular orbit (Sections \ref{sec:two_planet_overlap:widths}-- \ref{sec:two_planet_overlap:analytic_expression}).
We then generalize to two planets of arbitrary mass and eccentricity (Section \ref{sec:two_planet_overlap:generalize}). This generalization turns out to be surprisingly simple. It is
based on the discovery by one of us \citep{Hadden_inprep} of a simple approximation to the general two-planet Hamiltonian near resonance.
\subsection{Resonance Widths}
\label{sec:two_planet_overlap:widths}
The dynamics of a test-particle near the $j$:$j-k$ MMR of an exterior circular planet can be approximated by the Hamiltonian
\begin{align}
H(&\lambda,\gamma;\delta\Lambda,\Gamma) \approx \delta\Lambda -\frac{3}{4}\delta\Lambda^2 \nonumber \\&+ 2\alpha\mu'S_{j,k}(\alpha,e)\cos[(j-k)(t-\lambda) + k\gamma]~, \label{eq:ham_ctp_simple}
\end{align}
which has canonical coordinates $\lambda, \gamma$ and momenta $\delta \Lambda, \Gamma$. The variables are defined as follows: $\delta\Lambda=2\left(\sqrt{\frac{a}{a_\text{res}}}-1\right)\approx \frac{a-a_\text{res}}{a_\text{res}}$ where $a$ is the test particle's semimajor axis (and henceforth un-primed orbital elements refer to the test particle); $a_\text{res}$
is that of nominal resonance, i.e.,
\begin{eqnarray}
a_\text{res}=\left(\frac{j-k}{j}\right)^{2/3}a' \approx \left(1-{\frac{2k}{3j}} \right) a' \label{eq:mya}
\end{eqnarray}
where $a'$ is the planet's semimajor axis, and the latter approximation assumes close spacing; $\lambda$ is the mean longitude;
$\Gamma=2\sqrt{\frac{a}{a_\text{res}}}(1-\sqrt{1-e^2})$;
$\gamma=-\varpi$, where $\varpi$ is the longitude of perihelion; $e$ is the eccentricity; $\alpha=a/a'$; $\mu'$ is the ratio of the planet's mass to that of the star; and time units are chosen so that the planet's mean longitude is
$\lambda'= {j-k\over j}t$ (or equivalently $\frac{d\lambda}{dt}=1$ when $\delta\Lambda=0$ and $\mu'\rightarrow 0$).
The above Hamiltonian is standard \citep[e.g.,][]{2000ssd..book.....M}.
{As is common, we approximate the coefficient of the cosine term, $2\mu'\alpha S_{j,k}(\alpha,e)$, as being temporally constant.
This ``pendulum'' approximation \citep{2000ssd..book.....M} shows good agreement with exact resonance widths computed via numerical averaging methods \citep[e.g.,][see Appendix \ref{sec:appendix:pendulum_vs_exact} for comparison]{Morbidelli:1995jf} with one notable exception:
it does not adequately capture the resonant width of $k=1$ resonances at low $e$ {\citep[$\lesssim (\mu'/j)^{1/3}$; see][]{Wisdom80}.}
We discuss the consequences of this shortcoming of the pendulum model below.
}
The Hamiltonian in Equation \eqref{eq:ham_ctp_simple} can now be transformed with the type-2 generating function $F_2= [(j-k)(t-\lambda) + k\gamma]I + \gamma K$ to the new Hamiltonian
\begin{eqnarray}
H'(\phi;I) =-\frac{3}{4}(j-k)^2 I^2 + 2\alpha\mu'S_{j,k}(\alpha,e)\cos\phi~. \label{eq:res_ham_transformed}
\end{eqnarray}
where $I=\delta\Lambda/(k-j)$ and $\phi=(j-k)(t-\lambda)+k\gamma$. The Hamiltonian $H'$ describes a pendulum with a maximal libration half-width
\begin{equation}
\Delta I = \sqrt{\frac{16\alpha \mu' |S_{j,k}(\alpha,e)|}{3(j-k)^2}} \label{eq:res_width}
\end{equation}
or, in terms of semi-major axis,
\begin{equation}
\frac{\Delta a}{a_\text{res}} = (j-k)\Delta I = \sqrt{\frac{16\alpha\mu'|S_{j,k}(\alpha,e)|}{3}}~.\label{eq:da_ctp}
\end{equation}
\subsection{{The ``Close Approximation'' for $S_{j,k}$}}
\label{sec:close_approx}
The cosine amplitude $S_{j,k}(\alpha,e)$ is often replaced with its leading-order approximation $\propto e^k$, which is valid at low $e$
\citep{2000ssd..book.....M}. However, we will consider eccentricities up to
\begin{equation}
e_{\rm cross}\equiv {a'-a\over a} \ ,
\end{equation}
which is the eccentricity at which the particle's orbit crosses the planet's. The leading-order approximation is inadequate at such high $e$, as we quantify below.
Therefore in Appendix \ref{sec:appendix}, we derive a more accurate approximation by proceeding as follows:
first, we derive an {\it exact} expression for $S_{j,k}$ in the form of a one-dimensional definite integral (Eq. \ref{eq:Sjk_with_M}). However, this integral is both cumbersome and numerically challenging to evaluate at high $k$.
Therefore, we derive a simpler expression under the approximation that the test particle is close to the planet ($a'/a-1 \ll 1$).
Under this ``close approximation'',
the integral simplifies considerably, and furthermore it only depends on $\alpha$, $e$, and $j$ in the combination ${e\over e_{\rm cross}}\approx {3j\over 2k}e$. We thereby find
(Eq. \ref{eq:Sk_def})
\begin{multline}
S_{j,k}(\alpha,e)\approx s_k\!\left({e\over e_\text{cross}}\right)
\equiv\\ {1\over\pi^2}
\int_0^{2\pi}K_0\left[\frac{2k}{3}\left(1+{e\over e_\text{cross}}\cos M\right)\right]\cos\left[k\left(M+\frac{4}{3} {e\over e_\text{cross}}\sin M\right)\right] {dM}
\label{eq:Sjk_approx2}
\end{multline}
where $K_0$ is a modified Bessel function.
Equation \eqref{eq:Sjk_approx2} provides an
adequate approximation when planet period ratios are $P'/P\lesssim 2$, generally predicting resonance widths via Equation \eqref{eq:da_ctp} with $\lesssim 20\%$ fractional errors.
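For reference, a direct numerical evaluation of Equations \eqref{eq:Sjk_approx2} and \eqref{eq:da_ctp} can be sketched as follows (Python; the function names are ours, the expressions assume $e < e_\text{cross}$, and the oscillatory integrand may require care at large $k$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def s_k(k, x):
    # close approximation s_k(x), x = e / e_cross, Eq. (eq:Sjk_approx2)
    f = lambda M: (k0(2.0 * k / 3.0 * (1.0 + x * np.cos(M)))
                   * np.cos(k * (M + 4.0 * x / 3.0 * np.sin(M))))
    val, _ = quad(f, 0.0, 2.0 * np.pi, limit=200)
    return val / np.pi**2

def half_width(mu, alpha, k, x):
    # resonance half-width Delta a / a_res, Eq. (eq:da_ctp)
    return np.sqrt(16.0 * alpha * mu * abs(s_k(k, x)) / 3.0)
\end{verbatim}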
\subsection{Resonance Overlap}
\label{sec:two_planet_overlap:optical_depth}
Our criterion for chaos is the overlap of resonances \citep{1979PhR....52..263C,Wisdom80}.
With a formula for resonance widths in hand (Equation \ref{eq:da_ctp}), we examine under what conditions resonances overlap and motion is chaotic.
The top panel of Figure \ref{fig:optical_depth_schematic} plots the locations and widths for all resonances with order $k\leq 7$ between the 3:2 and 4:3 MMR's.
Resonance widths in Figure \ref{fig:optical_depth_schematic} are computed using Equation \eqref{eq:da_ctp} with $S_{j,k}$ computed via Equation \eqref{eq:Sjk_with_M}.
At low $e$, the resonances are narrow, and there is no overlap. As $e$ increases, the resonances widen ($\Delta a\propto e^{k/2}$) and overlap everywhere.
At a given $a$ (or $P/P'$), there is a critical $e$ at which the test particle comes under the influence of two resonances simultaneously and hence becomes chaotic.
Of course, to determine the critical $e$, one should include the overlap
between resonances of all orders, not just $k\leq 7$.
However, resonances with very high $k$'s will have little effect on the critical $e$ {even though there are an infinite number of them}. That is because the widths decrease exponentially
with increasing $k$, whereas the number of resonances increases only algebraically.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{analytic_overlap_low_k.pdf}
\includegraphics[width=0.45\textwidth]{analytic_overlap_high_k.pdf}
\caption{\label{fig:optical_depth_schematic}
Structure of resonances and their overlap between the 3:2 and 4:3 MMRs for a test-particle perturbed by a $\mu'=10^{-5}$ planet on a circular orbit.
Blue separatrices are plotted for resonances up to seventh order.
Each panel shows a grid of $e/e_\text{cross}$ versus $P/P'$, the ratio of periods of the test particle to the planet;
note that $e_\text{cross}\approx 0.25$ at these periods.
Grid points are colored according to the number of resonances they fall within, accounting for all resonance up to order $k_\text{max}=7$ in the top panel and $k_\text{max}=30$ in the bottom panel.
Resonance widths are computed with $S_{j,k}=s_k$ (Eq. \ref{eq:Sjk_approx2}) for $k>7$ in the bottom panel.
Points falling in $N_\text{overlap}\ge 2$ resonances are predicted to be chaotic based on the resonance overlap criterion.
The solid red line marks the $e/e_\text{cross}$ value where the covering fraction of resonances is unity, i.e., $\tau_\text{res}=1$ according to Equation \eqref{eq:tau_sum}.
The dashed red line marks the estimate of our fitting formula, Equation \eqref{eq:ecrit_approx_2}, for $\tau_\text{res}=1$.
Note that the widths of the two $k=1$ resonances at the left and right edges of the figure are represented incorrectly near $e=0$ due to our adoption of the pendulum approximation. In reality, they are wider than shown at {$e\lesssim (\mu'/j)^{1/3}$, i.e., at $e/e_\text{cross}\lesssim 0.05$ in this figure.}}
\end{figure}
The bottom panel of Figure \ref{fig:optical_depth_schematic} illustrates this by repeating the top panel, but with $k\leq30$.
We see that the regions where resonances are significantly overlapped are similar in the two panels.
We estimate the critical $e$ for significant overlap by first evaluating the covering fraction (or ``optical depth'' $\tau_{\rm res}$) of resonances in a range $\delta a$
of semimajor axes, as a function of $e$. The threshold for overlap will then be the $e$ at which $\tau_{\rm res}=1$.
{(This ``optical depth" construction is similar to \citet{2011MNRAS.418.1043Q}'s method for estimating the density of three-body resonances in systems of three planets.)}
Now, to determine a convenient range $\delta a$, we examine the pattern of non-commensurate MMR's in Figure \ref{fig:res_locations}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{res_locations.pdf}
\caption{\label{fig:res_locations}
The location and order of the non-commensurate mean motion resonances up to seventh order, illustrating how the pattern repeats itself relative to
each first-order MMR.
Note that each differently colored cluster
can be enumerated up to any $k$
using the `Farey sequence' $F_k$, which is the sequence of reduced fractions between 0 and 1 that have denominators less than or equal to $k$, e.g., $F_2=\left\{0,1/2,1\right\}$ and $F_3=\left\{0,1/3,1/2,2/3,1\right\}$.
The resonant period ratios within a cluster that contains the first order $J:J-1$ MMR
occur at $P/P' = \frac{J-1+r}{J+r}$ for $r\in F_k$.
When plotting $P/P'$ elsewhere in this paper we stretch the horizontal scale to assign equal measure to each group of resonances associated with a single $J$.
This is done by setting plots' horizontal coordinates uniformly in $J=(1-P/P')^{-1}$.
}
\end{figure}
We see that the pattern repeats itself relative to each first-order MMR. Therefore, we choose $\delta a$
to be the distance between neighboring first-order MMR's, or from Equation \eqref{eq:mya}
\begin{eqnarray}
{\delta a\over a'} \approx {2\over 3J^2} \approx {3\over 2}\left({a'-a\over a'} \right)^2
\end{eqnarray}
where $J$ in the above refers to that of the first order $J:J-1$ MMR's.
To evaluate the covering fraction of resonances within this semi-major axis range, we assume that the planet and particle are sufficiently close that we can treat $e_\text{cross}$ as constant over the range.
Taking the resonant width $\Delta a$ from Equation \eqref{eq:da_ctp} and using the close approximation of Equation (\ref{eq:Sjk_approx2}) yields
\begin{eqnarray}
\tau_\text{res}&\approx&
{1\over\delta a }\sum_{k=1}^\infty\phi(k) \Delta a \\
&\approx& \frac{8}{3\sqrt{3}}\fracbrac{a'}{a'-a}^2\sqrt{\alpha\mu'}\sum_{k=1}^\infty \phi(k)|s_{k}(e/e_\text{cross})|^{1/2}\label{eq:tau_sum}
\end{eqnarray}
where $\phi(k)$ (called `Euler's totient function') gives the number of $k$th order resonances contained in $\delta a$. The Euler totient function is defined as the number of integers up to $k$ that are relatively prime to $k$.
To see that this is equivalent to the number of $k$th-order resonances within $\delta a$ consider the following:
the period ratios of all $k$th order resonances between the $J$:$J-1$ and $J+1$:$J$ first-order resonances (inclusive) can be written as $\frac{P}{P'}=\frac{(J-1)k+l}{Jk+l}$ with $0\le l\le k$.
{Therefore, of the $k+1$ possible values for $l$, we should only retain those that are relatively prime to $k$; otherwise, the numerator and denominator are commensurate, and the period ratio is
the same as one of lower-order.}
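The truncated optical-depth sum is equally simple to evaluate; a minimal sketch, building on the \texttt{s\_k} routine above and using \texttt{sympy} for $\phi(k)$, is:
\begin{verbatim}
from sympy import totient  # Euler's totient function phi(k)

def tau_res(mu_p, alpha, e_ratio, k_max=64):
    # Resonance optical depth, Eq. (eq:tau_sum), truncated at
    # k_max. alpha = a/a' is the semimajor-axis ratio, so
    # a'/(a'-a) = 1/(1 - alpha); s_k() is the sketch above.
    prefac = (8.0/(3.0*np.sqrt(3.0)))*(1.0/(1.0 - alpha))**2 \
        * np.sqrt(alpha*mu_p)
    return prefac*sum(int(totient(k))*abs(s_k(k, e_ratio))**0.5
                      for k in range(1, k_max + 1))
\end{verbatim}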
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{tau_compuare.pdf}
\caption{
The critical eccentricity at which $\tau_\text{res}=1$ and the onset of chaos occurs as a function of $\mu'\fracbrac{a'}{a'-a}^4$.
The solid red line shows the critical eccentricity computed by numerically solving Equation \eqref{eq:tau_sum} for $\tau_\text{res}=1$. The gray line shows the approximation Equation \eqref{eq:ecrit_approx_1} valid for $\mu'\fracbrac{a'}{a'-a}^4\gtrsim0.03$.
The dashed red line shows our numerically fit approximation, Equation \eqref{eq:ecrit_approx_2}.
}
\label{fig:tau_compare}
\end{figure}
Our resonance overlap criterion, $\tau_{\rm res}=1$, provides the critical $e/e_\text{cross}$ for the onset of chaos
as a function of $\mu'$ and $a'/a$.
Figure \ref{fig:tau_compare} (thick red line) plots the critical eccentricity at which $\tau_{\rm res}=1$.\footnote{To evaluate the sum in Equation (\ref{eq:tau_sum}) we truncate at a finite value $k_{\rm max}$ such that, for each $e/e_\text{cross}$, the sum increases by no more than 1\% upon doubling the number of terms. We find that $k_{\rm max}\leq 1024$ is sufficient for eccentricities $e<0.99e_\text{cross}$. }
Both spacing and mass determine the critical eccentricity, but only in the combination $\fracbrac{a'-a}{a'}/\mu'^{1/4}$;
in other words, the relevant spacing is in units of
$\mu'^{1/4}a'$ rather than, e.g., number of Hill radii ($\mu'^{1/3}a'$).
As is physically plausible, when the planet's mass is very small ($\mu'\rightarrow 0$), the test particle's $e$ must be
very close to $e_{\rm cross}$ before
chaos is triggered---regardless of spacing.
Figure \ref{fig:validity_regions} (red lines) shows the spacing and mass dependencies separately.
From this figure, we see, for example, that planetary systems that have $(a'-a)/a'\sim 0.1$ and $\mu'\sim 10^{-5}$ (typical values for systems discovered
by the {\it Kepler} telescope) have a
critical $e$ for chaos of $\sim 0.35e_\text{cross}\sim 0.035$. (Making both bodies massive---rather than working in the test particle limit---changes this number by a factor of order unity; see Sec. \ref{sec:two_planet_overlap:generalize}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ValidityLines_shaded.pdf}
\caption{
\label{fig:validity_regions}
Onset of resonance overlap, and hence chaos, as a function of perturber mass and planet spacing.
The red lines show our overlap criterion from setting $\tau_{\rm res}=1$ in Equation \eqref{eq:tau_sum} at three values of critical eccentricity, as labeled.
The purple line shows the onset of chaos due to the overlap of first-order mean motion resonances at small eccentricity according to \citet{Wisdom80}'s $\mu^{2/7}$ criterion.
Our overlap criterion, which predicts the critical eccentricity for the onset of chaos, applies below this line.
The black dashed line indicates the 2:1 resonance. Since our result for the critical eccentricity shown in Fig. \ref{fig:tau_compare} adopts the ``close approximation'' (see subsection \ref{sec:close_approx}), it becomes quantitatively incorrect
beyond the 2:1.
}
\end{figure}
As mentioned above, our overlap criterion ignores the finite width of first-order resonances at small eccentricity.
\citet{Wisdom80} shows that these resonances overlap when
$\frac{a'-a}{a'}<1.46\mu'^{2/7}$.
We plot this critical spacing as the purple line in Figure \ref{fig:validity_regions}.
Our criterion for the critical eccentricity is only valid below this line, i.e., for masses $\mu'\le 0.27\fracbrac{a'-a}{a'}^{7/2}$.
For larger masses, chaos from first-order resonance overlap is expected at all eccentricities.
\subsection{Analytical Expressions for the Critical $e$}
\label{sec:two_planet_overlap:analytic_expression}
While it is straightforward to numerically evaluate Equation \eqref{eq:tau_sum}, it is useful to have an explicit formula
for the critical $e$.
To that end, we expand $s_k$ at low $e$, in which limit
\begin{equation}
|s_{k}(e/e_\text{cross})| \approx \frac{\sqrt{3}\exp(k/3)}{\pi k}\fracbrac{e}{e_\text{cross}}^k \label{eq:sk_leading_order}
\end{equation}
(see Eq. \eqref{eq:wk_approx2}
in Appendix \ref{sec:appendix}) after neglecting higher-order terms in $e/e_\text{cross}$. Using the approximation $\phi(k)\approx \frac{6k}{\pi^2}$ (valid at large $k$),
the sum in Equation \eqref{eq:tau_sum} becomes
\begin{eqnarray}
\sum_{k=1}^\infty \phi(k)|s_{k}(e/e_\text{cross})|^{1/2} &\approx& { 3^{1/4} 6\over\pi^{5/2}} \sum_{k=1}^\infty \sqrt{k}e^{k/6}\left( {e\over e_\text{cross}} \right)^{k/2} \label{eq:ksum} \\
&\approx& \frac{3^{1/4}6}{\pi^{5/2}}\int_{0}^{\infty} \sqrt{k}x^{k}dk\nonumber \\
&=&\frac{3^{5/4}}{\pi ^2 |\log \left(x\right)|^{3/2}}~,\label{eq:tau_sum_approx}
\end{eqnarray}
after
defining $x\equiv\sqrt{\frac{e}{e_\text{cross}}}\exp(1/6)$, and replacing the sum with an integral.\footnote{
{Note that the $k$'s that dominate the integral leading to Equation (\ref{eq:tau_sum_approx}) are $k\sim {1\over \ln (e_\text{cross}/e)}$. Hence, low-order resonances are the most important ones when $e\ll e_\text{cross}$, and higher orders become the dominant ones at higher $e$.}}
Inserting this
into Equation \eqref{eq:tau_sum} and solving for the critical value of $e$ that gives $\tau_\text{res}=1$, we find
\begin{equation}
e\approx 0.72e_\text{cross} \exp\left[-1.4\mu'^{1/3}\fracbrac{a'}{a'-a}^{4/3}\right]~.\label{eq:ecrit_approx_1}
\end{equation}
Equation \eqref{eq:ecrit_approx_1} is compared with the numerically computed critical eccentricity in Figure \ref{fig:tau_compare}. We see that they agree well at $e/e_\text{cross}\lesssim 0.6$, or equivalently for $\mu'\fracbrac{a'}{a'-a}^4\gtrsim0.03$.
However, beyond this limit the agreement is poor. For example, in the limit $\mu'\rightarrow 0$ Equation \eqref{eq:ecrit_approx_1} mistakenly predicts that the onset of chaos occurs at $e=0.72e_\text{cross}$ rather than the expected limit, $e=e_\text{cross}$.
The error arises because Equation \eqref{eq:sk_leading_order} over-predicts $|s_k|$, and hence resonance widths, for large $k$ when $e\gtrsim0.6e_\text{cross}$.
Nonetheless, we obtain an adequate fit to numerical results over the full range of $0<e<e_\text{cross}$ by adopting the functional form of Equation \eqref{eq:ecrit_approx_1} but dropping the factor of $0.72$ so that the appropriate $\mu'\rightarrow 0$ limit is recovered. Fitting for a new numerical constant in the exponential, we find that the formula
\begin{equation}
e\approx e_\text{cross} \exp\left[-2.2\mu'^{1/3}\fracbrac{a'}{a'-a}^{4/3}\right] , \label{eq:ecrit_approx_2}
\end{equation}
plotted in Figure \ref{fig:tau_compare}, provides an acceptable approximation for the critical eccentricity yielding relative errors $<10\%$ when $\fracbrac{a'}{a'-a}^{4}\mu'<0.1$.
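In code, the critical eccentricity follows either from root-finding on $\tau_\text{res}=1$ or directly from the fit. The sketch below builds on the routines above; it assumes the root is bracketed, i.e., parameters lie below the first-order overlap regime discussed in Section \ref{sec:two_planet_overlap:optical_depth}:
\begin{verbatim}
from scipy.optimize import brentq

def e_crit(mu_p, alpha):
    # Critical e/e_cross where tau_res = 1 (uses tau_res() above);
    # assumes the root lies within the bracket.
    return brentq(lambda x: tau_res(mu_p, alpha, x) - 1.0,
                  1e-3, 0.99)

def e_crit_fit(mu_p, alpha):
    # Fitting formula, Eq. (eq:ecrit_approx_2), in units of e_cross.
    return np.exp(-2.2*mu_p**(1.0/3.0)
                  * (1.0/(1.0 - alpha))**(4.0/3.0))
\end{verbatim}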
\subsection{Generalization to two massive planets}
\label{sec:two_planet_overlap:generalize}
We generalize our result for the threshold of chaos to the case of two massive planets,
each of which may be eccentric.
As will be shown in detail in \citet{Hadden_inprep}, the resonant dynamics of a massive planet pair can be cast in terms of a pendulum model almost identical to the one used in Section \ref{sec:two_planet_overlap:widths}.
The key step is the surprising fact that, to an excellent approximation, the resonant dynamics only depend on a single linear combination of the planet pairs' complex eccentricities, $e_1e^{i\varpi_1}$ and $e_2e^{i\varpi_2}$, where $e_i$ and $\varpi_i$ are the eccentricity and longitude of perihelion of the inner ($i=1$) and outer ($i=2$) planet.
{This represents a nontrivial generalization of a previously-derived result for first-order resonances \citep{Sessin1984,wisdom1986,Batygin2013,Deck2013overlap}.}
To oversimplify the results of Hadden slightly for the sake of clarity, the resonance dynamics depends
only on the difference in complex eccentricities:
\begin{equation}
{\cal Z}={1\over\sqrt{2}}\left( e_2e^{i\varpi_2} - e_1e^{i\varpi_1}\right)~. \label{eq:zdiff}
\end{equation}
We will refer to ${\cal Z}$ as the complex relative
eccentricity and its magnitude as the relative eccentricity, which we will write as $Z\equiv|{\cal Z}|$.\footnote{\label{ft:zdef}
A more precise statement of Hadden's result is as follows:
if we define the complex quantities
\begin{eqnarray}
\begin{pmatrix}{\cal Z} \\ {\cal W} \end{pmatrix} = \begin{pmatrix} \cos\theta &~-\sin\theta \\ \sin\theta &~\cos\theta \end{pmatrix} \begin{pmatrix} e_2e^{i\varpi_2} \\ e_1e^{i\varpi_1} \end{pmatrix}
\label{eq:zwdef}
\end{eqnarray}
where $\theta = \arctan[(a_1/a_2)^{0.37}]$ then the dynamics of nearby resonances will depend
almost entirely on $\cal Z$ and are essentially independent of ${\cal W}$. Throughout this paper $\cal Z$ really refers to that in the above matrix equation {rather than the oversimplified form of Equation \ref{eq:zdiff}};
but note that
for period ratios interior to the 2:1 resonance, $\theta$ differs from $\pi/4$ by no more than $10\%$ so that ${\cal Z}\approx \frac{1}{\sqrt{2}}\left(e_2e^{i\varpi_2} - e_1e^{i\varpi_1}\right)$ provides an adequate approximation for most purposes. We will refer to ${\cal W}\approx \frac{1}{\sqrt{2}}\left(e_2e^{i\varpi_2} + e_1e^{i\varpi_1}\right)$ as the average complex eccentricity.
}
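In practice, ${\cal Z}$ and ${\cal W}$ are obtained directly from the planets' orbital elements; a minimal sketch of Equation \eqref{eq:zwdef} (the function name is ours; \texttt{np} is \texttt{numpy}) is:
\begin{verbatim}
def relative_eccentricities(e1, pomega1, e2, pomega2, a1, a2):
    # Z and W of Eq. (eq:zwdef) from the planets' complex
    # eccentricities e_i * exp(i*pomega_i).
    theta = np.arctan((a1/a2)**0.37)
    x1 = e1*np.exp(1j*pomega1)
    x2 = e2*np.exp(1j*pomega2)
    Z = np.cos(theta)*x2 - np.sin(theta)*x1
    W = np.sin(theta)*x2 + np.cos(theta)*x1
    return Z, W
\end{verbatim}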
Resonant widths scale with mass and eccentricity in essentially the same way as in the test-particle case, Equation \eqref{eq:da_ctp}, after replacing $e \rightarrow \sqrt{2}Z$ and $\mu'\rightarrow\mu_1+\mu_2$.
Proceeding through exactly the same resonance optical depth formulation presented in Section \ref{sec:two_planet_overlap:optical_depth}
yields
\begin{multline}
\tau_\text{res}\approx \frac{8}{3\sqrt{3}}\fracbrac{a_2}{a_2-a_1}^2\sqrt{\alpha(\mu_1+\mu_2)}\sum_{k=1}^\infty \phi(k)\left|s_{k}\fracbrac{\sqrt{2}Z}{e_\text{cross}}\right|^{1/2}
\label{eq:zcrit_exact}
\end{multline}
as the generalization of Equation \eqref{eq:tau_sum} when both planets are massive and/or eccentric.
Similarly,
\begin{equation}
Z \approx \frac{e_\text{cross}}{\sqrt{2}}\exp\left[-2.2(\mu_1+\mu_2)^{1/3}\left(\frac{a_2}{a_2-a_1}\right)^{4/3}\right]
\label{eq:zcrit_approx}
\end{equation}
provides an approximate formula for the critical $Z$ for the onset of chaos as the generalization of Equation \eqref{eq:ecrit_approx_2}.
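Expressed as code, Equation \eqref{eq:zcrit_approx} reads (a sketch; the function name is ours):
\begin{verbatim}
def z_crit(mu1, mu2, a1, a2, e_cross):
    # Approximate critical relative eccentricity for the onset of
    # chaos, Eq. (eq:zcrit_approx).
    return (e_cross/np.sqrt(2.0)) \
        * np.exp(-2.2*(mu1 + mu2)**(1.0/3.0)
                 * (a2/(a2 - a1))**(4.0/3.0))
\end{verbatim}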
{When both planets are massive and/or eccentric, $Z$ is not a strictly conserved quantity, but rather can vary on secular timescales. In principle, this means that planet pairs can evolve secularly from regions of phase space where resonances are initially not overlapped into overlapped regions. In practice, however, secular variations in $Z$ are generally negligible because the linear combination of complex eccentricities that defines $Z$ (Equation \ref{eq:zdiff}) is nearly identical to one of the secular eigen-modes of the two-planet system. The secular evolution of $Z$ will be explored further by \citet{Hadden_inprep}. }
\section{Comparison with Numerical Results}
\label{sec:numerical_compare}
\begin{figure}
\includegraphics[width=1\columnwidth]{test_particle_compare_EX.pdf}
\includegraphics[width=1\columnwidth]{test_particle_compare_EX_otherside.pdf}
\centering
\caption{\label{fig:chaos_compare}
Chaotic structure of phase space as a function of period ratio and $e/e_\text{cross}$ for a test-particle subject to an exterior circular perturber with mass $\mu'=10^{-5}$.
The color scale indicates the Lyapunov time of chaotic trajectories computed for a grid of numerical integrations (see text for initial conditions).
Integrations that led to a close encounter (within one Hill radius) were stopped early and are marked in bright yellow.
Initial conditions determined to have $t_\text{Ly}>1300P'$ are assumed regular and marked as gray.
The blue separatrices and our red overlap criteria are repeated from Figure \ref{fig:optical_depth_schematic}.
}
\end{figure}
We compare the prediction of our resonance overlap criterion with the results of numerical integrations in Figure \ref{fig:chaos_compare}.
All numerical integrations are done with the WHFast integrator \citep{RTwhfast2015} { based on the symplectic mapping algorithm of \citet{wisdom_holman1991} and} implemented in the REBOUND code \citep{RL12}. {Integration step sizes are set to 1/30th of the orbital period of the inner planet unless stated otherwise. To ensure that our results are not driven by numerical artifacts of our integration method, in Appendix \ref{sec:appendix:integrator_compare} we compare results derived using the WHFast integrator with results obtained using the high-order, adaptive time step IAS15 routine \citep{RS15} implemented in the REBOUND code. The two methods show excellent agreement, indicating that our results are not affected by numerical artifacts.}
Initial conditions for {the top panel} are chosen {to be} $\lambda=\lambda'=0$ and {$\varpi=0$}; {the bottom panel is the same, but with $\varpi=\pi$}. The integrations lasted for 3000 planet orbits.
To compute Lyapunov times we used
the MEGNO chaos indicator \citep{2003PhyD..182..151C} built into REBOUND.
The MEGNO of a chaotic trajectory grows linearly at a rate of $1/t_\text{Ly}$, where $t_\text{Ly}$ is the Lyapunov time, while for regular trajectories it asymptotically approaches a value of 2.
Throughout the paper we report $t_\text{Ly}$ values estimated by simply dividing integration runtimes by MEGNO values.
In the figure, trajectories with $t_\text{Ly}>1300P'$ are considered regular and plotted in gray.
We are unable to detect chaos for initial conditions with longer Lyapunov times given the limited duration of our integrations.
However, we find that longer integration runtimes, up to $10^6$ orbits, do not significantly change the number of simulations classified as chaotic.
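For orientation, a single grid point in Figure \ref{fig:chaos_compare} corresponds to an integration of roughly the following form. This is a minimal sketch using the REBOUND Python interface; the particle parameters shown are illustrative, and call names may differ slightly between REBOUND versions:
\begin{verbatim}
import numpy as np
import rebound

sim = rebound.Simulation()
sim.add(m=1.0)                        # central star
sim.add(m=1e-5, a=1.0)                # circular perturber
sim.add(m=0.0, a=0.8, e=0.05,         # test particle
        pomega=0.0, l=0.0)
sim.move_to_com()
sim.integrator = "whfast"
sim.dt = sim.particles[2].P/30.0      # 1/30 of inner orbital period
sim.init_megno()
T = 3000.0*sim.particles[1].P         # 3000 perturber orbits
sim.integrate(T)
megno = sim.calculate_megno()
t_ly = T/megno                        # crude Lyapunov-time estimate
\end{verbatim}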
Figure \ref{fig:chaos_compare} shows that the analytic overlap criterion ($\tau_{\rm res}=1$) broadly agrees with the $N$-body results, predicting the transition to large-scale chaos as a function of eccentricity in the period range shown.
{The boundaries between regular and chaotic orbits in the top and bottom panels are similar, demonstrating that the onset of resonance overlap does not depend strongly on the initial orbital phase.}
There are at least two caveats to our overlap criterion: first,
non-chaotic regions extend above the predicted overlap region in the top panel, most prominently for the first-order 3:2 and 4:3 resonances, but also at other odd-ordered MMRs.
Our choice of initial conditions in the top panel of Figure \ref{fig:chaos_compare} places the test particle near stable fixed points of these odd-order MMRs and regular regions of phase-space clearly remain near these fixed points even when the resonances are overlapped.
Second, the curve $\tau_{\rm res}=1$ is not a sharp boundary. A mixture of chaotic and regular trajectories is generically expected in regions of marginal resonance overlap and the boundary between regular and chaotic phase-space exhibits fractal structure \citep[e.g.,][]{LLBook}.
Nonetheless, the heuristic resonance overlap criterion provides an excellent prediction for onset of chaos from a coarse-grained perspective.
Figure \ref{fig:w_compare} shows results for two massive planets.
As we have argued, the threshold for chaos should depend on planet eccentricities only through the {\it relative} complex eccentricity $\cal Z$, and not on the {\it average} complex eccentricity $\cal W$ (see footnote \ref{ft:zdef}).
To test this, each panel of Figure \ref{fig:w_compare} displays numerical results on a grid computed from initial conditions that are identical except in their initial value of $\cal W$. In all three cases, the boundary of chaos agrees quite well with the theoretical prediction. Even
when ${\cal W}=0.3$, which is significantly bigger than the relative eccentricity in the plot, there is only a modest effect on the stability boundary seen in the simulations. {Because the planets can have significant eccentricities when ${\cal W}$ is large, we use reduced time steps for the integrations shown in Figure \ref{fig:w_compare}. The step size is chosen based on the initial eccentricity of the inner planet to be $\frac{1}{30}\left({2\pi}/{\dot{f}_p}\right)$, where $\dot{f}_p$ is the time derivative of the planet's true anomaly at pericenter. }
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{w_grid_NEWNEW_1.pdf}
\includegraphics[width=0.33\textwidth]{w_grid_NEWNEW_2.pdf}
\includegraphics[width=0.33\textwidth]{w_grid_NEWNEW_3.pdf}
\caption{
\label{fig:w_compare} Chaotic phase space structure for two planets of mass $m_1=m_2=3\times10^{-5}M_*$ with different values of ${\cal W}$ (Equation \ref{eq:zwdef}) integrated for 3000 orbits of the outer planet.
{The planets are initialized with $\lambda_1=\lambda_2=0$ and $\text{Im}{\cal Z}=0$.}
Chaotic trajectories are shown in white while regular trajectories appear black.
{More precisely}, the background consists of a gray-scale spanning a narrow range of MEGNO values ranging from MEGNO$\le 2$ (black; $t_\text{Ly}\gtrsim 1500 P_2$) to MEGNO$\ge 6$ (white; $t_\text{Ly}\lesssim 500 P_2$).
The narrow gray-scale range emphasizes the sharp transition from regular to chaotic orbits.
Integrations that led to a close encounter within one Hill radius were stopped early and are also marked in white.
The vertical axis shows the real component of the initial complex relative eccentricity ${\cal Z}$; the imaginary component is zero.
The onset of chaos predicted by Equations \eqref{eq:zcrit_exact} and \eqref{eq:zcrit_approx} are shown as red solid and dashed lines, respectively.
The predicted onset of chaos does not depend on ${\cal W}$ and is the same in all three panels.
}
\end{figure*}
Figures \ref{fig:optical_depth} and \ref{fig:optical_depth_2} compare our overlap criterion with suites of numerical simulations with a wide range of planet masses and spacings.
The planets are equal mass and initial conditions are chosen so that $\lambda=\lambda'=0$, $\arg{{\cal Z}}=0$, and ${\cal W}=0$.
The transition to chaos is measured from numerical simulations by computing period-ratio/eccentricity grids similar to those shown in Figures \ref{fig:chaos_compare} and \ref{fig:w_compare} and identifying the minimum value of initial $Z$ for a given period ratio that yields chaos (taken to mean MEGNO$>5$ after a 3000 orbit integration, though our results are not sensitive to the choice of MEGNO threshold).
Figure \ref{fig:optical_depth} shows that the onset of chaos occurs at $Z$s that are a decreasing fraction of the orbit-crossing value as the planets' masses are increased.
In all cases, the numerical results broadly agree with our prediction.
Figure \ref{fig:optical_depth_2} confirms that the scaling of the critical eccentricity with planet mass predicted by the optical depth method holds over a wide range of planet masses and spacings. Note that in Figure \ref{fig:optical_depth_2} points computed from a wide range of masses are plotted at every value of $(a/\Delta a)^4(\mu_1+\mu_2)$.
This figure also shows excellent overall agreement with the analytic predictions of Equations \eqref{eq:zcrit_exact} and \eqref{eq:zcrit_approx}.
\begin{figure}
\includegraphics[width=\columnwidth]{optical_depth_prediction.pdf}
\centering
\caption{\label{fig:optical_depth} Critical eccentricity as a function of planet period ratio for two equal mass planets using the $\tau_\text{res}=1$ criterion (see main text).
Points show the transition to chaos computed from $N$-body simulations.
Analytic predictions given by Equation \eqref{eq:zcrit_approx} are plotted as dashed lines.
Curves and points are colored according to planet mass.
The value of $Z$ corresponding to crossing orbits is indicated by the black dashed line.
}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{zcrit_analytic_curve_withK36.pdf}
\centering
\caption{\label{fig:optical_depth_2}
Critical $Z$ as a fraction of $e_\text{cross}/\sqrt{2}$, plotted versus the quantity $(a/\Delta a)^4(\mu_1+\mu_2)$ computed from numerical simulations over a wide range of planet spacings and masses in the range $[10^{-10}-10^{-3}]M_*$.
Points are colored according to planet mass.
The square markers and error bars indicate the median and $1\sigma$ range of binned numerical results for planet masses $>10^{-6}M_*$ (red) and $<10^{-6}M_*$ (blue).
Quantities for the planet pair Kepler-36 b and c are indicated by the black circle (see Section \ref{sec:chaos_vs_stability}).
}
\end{figure}
\section{Onset of Chaos and Long-term Stability}
\label{sec:stability}
\subsection{Comparison with Other Stability Criteria}
\label{sec:criteria_compare}
We compare our result for the onset of chaos in two-planet systems with some of the other stability criteria that appear
in the literature (Figure \ref{fig:criteria_compare}).
\citet{Wisdom80} derived a criterion for the onset of chaos based on resonance overlap for a test-particle subjected to a planetary perturber, both of which are {nearly} circular. When $e$ {is sufficiently small}, only first-order resonances have non-vanishing width (see Section \ref{sec:two_planet_overlap:widths}). Therefore Wisdom considered only the overlap of first-order MMR's to derive his well-known $\mu^{2/7}$ criterion. \citet{Deck2013overlap} extended Wisdom's result to the case of two massive (but still circular) planets, predicting that the transition to chaos occurs at a critical spacing $(a_2-a_1)/a_1\approx1.46(\mu_1+\mu_2)^{2/7}$. The vertical purple line at $P/P'$ slightly greater than 7/8 in Figure \ref{fig:criteria_compare} shows \citet{Deck2013overlap}'s $\mu^{2/7}$ prediction. As seen in that figure, their criterion works well at $Z=0$. By contrast, since we ignore the peculiar low-$e$ behavior of first-order MMR's, our formula does not recover this result. Therefore our threshold for chaotic onset (Equation \eqref{eq:zcrit_approx}) should be restricted to separations $(a_2-a_1)/a_1 \gtrsim (\mu_1+\mu_2)^{2/7}$. \citet{Deck2013overlap} also account for eccentricities in their overlap criterion by {generalizing the results of \citet{MustillWyatt2012} to include} the eccentricity-dependence of first-order MMR widths.
{\citet{MustillWyatt2012}'s criterion for the critical eccentricity can be stated as $e\propto \fracbrac{a'-a}{a'}^{4}\mu^{-1}e_\text{cross}$ \citep[see also ][]{cutler:2005}. Their result, based on the overlap of first-order resonances, can be recovered by considering only the $k=1$ term in the sum in Equation \eqref{eq:tau_sum} and noting that $s_1\propto e/e_\text{cross}$ when eccentricity is small. }
Their prediction, {as generalized by \citet{Deck2013overlap}}, is plotted as the non-vertical part of the purple curve in Figure \ref{fig:criteria_compare}; however, it significantly over-predicts the critical $e$ because it ignores MMR's with $k> 1$.
{In particular, the $k>1$ terms in Equation \eqref{eq:tau_sum} defining $\tau_\text{res}$ represent a fractional correction to the leading $k=1$ term of $>50\%$ for $e>0.09e_\text{cross}$.}
{
A somewhat common practice in the literature is to presume that stability criteria derived for circular orbits can be applied to eccentric systems by simply replacing the critical semi-major axis separation, $a_2-a_1$, with the closest approach distance, $a_2(1-e_2)-a_1(1+e_1)$.
For example, \citet{Giuppone:2013iw} propose such a `semi-empirical' stability criterion as an extension of \citet{Wisdom80}'s overlap criterion to eccentric planet pairs.
Specifically, \citet{Giuppone:2013iw} posit that a pair of anti-aligned orbits will be unstable if $\frac{a_2(1-e_2)-a_1(1+e_1)}{a_2(1-e_2)}<\delta$ where $\delta=1.57[\mu_1^{2/7}+\mu_2^{2/7}]$.
Their empirical criterion (slightly modified here to $\frac{a_2(1-e_2)-a_1(1+e_1)}{a_2(1-e_2)}<1.46(\mu_1+\mu_2)^{2/7}$ so as to match \citet{Deck2013overlap}'s prediction at $Z=0$) is plotted as a green curve in Figure \ref{fig:criteria_compare} and provides a fair approximation for the transition to chaos.
Figure \ref{fig:criteria_compare_approach} compares our resonance overlap prediction (red curves) to contours of constant closest approach distance for an eccentric test-particle subject to an exterior perturber for three different perturber masses.
The figure shows that, while our prediction matches the simulation results quite well, it cannot be reduced simply to a threshold on closest-approach distance that is independent of mass.}
{As described in the introduction, a number of empirical studies have derived relationships to predict the stability of multi-planet systems.
These empirical relations are generally derived for systems of three or more planets and cast as predictions for the timescale for instability to occur as a function of planet spacings measured in mutual Hill radii.
Directly comparing our analytic resonance overlap criterion to these empirical studies is difficult since our analytic criterion only applies to two-planet systems and yields a binary classification of systems as chaotic or regular without any instability timescale information.
Nonetheless, we can make a couple of qualitative comparisons: first, we showed in Section \ref{sec:two_planet_overlap} that the resonance optical depth and onset of chaos depends on planet spacing measured in units of ${\mu^{1/4}a}$.
Presuming that mean-motion resonance overlap is responsible for chaos in higher-multiplicity systems,\footnote{In systems of three or more planets, overlap of secular resonances and/or three-body resonances could also play a significant role in determining dynamical stability. Since three-body resonances arise from combinations of two-body resonances their density should also depend on planet spacing measured in units of ${\mu^{1/4}a}$.
Indeed, \citet{2011MNRAS.418.1043Q} predicts that the degree of overlap of three-body resonances in three-planet systems scales with the planets' spacing measured in units of ${\mu^{1/4}a}$.} planet separation measured in units of ${\mu^{1/4}a}$ should be a better predictor of systems' stability than separations measured in Hill radii.
Second, while most of the studies mentioned in the introduction focus on circular planetary systems, \citet{PuWu2015} explore the eccentricity-dependence of stability lifetimes.
They find that more eccentric systems require slightly larger closest-approach distances in units of Hill radii to maintain the same stability lifetime as more circular systems.
This trend is consistent with the prediction of our overlap criterion shown in Figure \ref{fig:criteria_compare_approach}: more eccentric systems require greater closest-approach distances to maintain regularity.}
Whether or not a pair of planets is chaotic, their ultimate fate can sometimes be constrained by angular momentum and energy
conservation laws.
If those conservation laws forbid the pair from experiencing close encounters, the system is called {\it Hill stable}.
In the circular restricted three-body problem, Hill stability is determined by the Jacobi constant. When a particle's Jacobi constant is greater than the Jacobi constant of a particle at the $L_1$ Lagrange point, then close encounters between the particle and perturbing mass are prohibited \citep{2000ssd..book.....M}.
{The consequences of Hill stability are evident in the distribution of initial conditions leading to close encounters in Figure \ref{fig:criteria_compare_approach}.}
A generalization of Hill stability exists for systems of three massive, gravitationally interacting bodies \citep[e.g.,][]{1982CeMec..26..311M}.
For two-planet systems with total energy $E$ and angular momentum $L$, if the product $L^2 E$ is greater than some critical value then close approaches between the planets are forbidden.
Importantly, Hill stability does not preclude substantial changes in the planets' semi-major axes or even their ejection from the system.
\citet{1993Icar..106..247G} provides an analytic criterion for Hill stability, formulated in terms of the orbital elements of a planet pair.
The solid orange line in Figure \ref{fig:criteria_compare} shows his result
(from his Equation 21).
The threshold for Hill stability is quite different from that for chaotic onset, as we discuss below.
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{criteria_compare_NEW.pdf}
\caption{\label{fig:criteria_compare}
Chaotic phase space structure for a two-planet system with the same parameters as in the middle panel of Figure \ref{fig:w_compare}.
Different colored curves show the predictions of different stability criteria (see text). The gray curve shows orbit-crossing eccentricity values.
Note that the background grid of white and black points are plotted at their {\it initial} $P/P'$ and $e$.
That is why the black resonant tongues toward the right of the figure do not line up with the nominal positions of the first-order MMR's. (They do line up when we plot mean, rather than initial, orbital elements.)
The resonance-overlap predictions have been corrected to account {for} the difference between the osculating and mean period ratio, using a correction based on 0th order resonances.
}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.3\textwidth]{eX_compare_1E-8.pdf}
\includegraphics[width=0.3\textwidth]{eX_compare_1E-6.pdf}
\includegraphics[width=0.3\textwidth]{eX_compare_1E-4.pdf}
\caption{\label{fig:criteria_compare_approach}
Chaotic phase space structure for a test-particle subject to external perturbers of three different masses.
Lines of constant closest approach distance, $a'-a(1+e)$, in units of Hill radii, $R_\text{Hill}={a'(\mu'/3)^{1/3}}$ are shown in blue.
Various stability criteria are also plotted: {our resonance overlap boundary (Eq. \ref{eq:tau_sum}) in red, the Hill stability boundary in orange, and \citet{Wisdom80}'s resonance overlap boundary, interior to which chaos is predicted for all eccentricity values, indicated by a purple arrow.}
The semi-major axis separation, $a'-a$, is shown by ticks on the top of each panel in units of $R_\text{Hill}$.
The background gray-scale is the same as in Figure \ref{fig:w_compare} with the addition that initial conditions leading to a close encounter (within a Hill radius) are marked in red.
The Hill stability boundary separates initial conditions that could experience such an encounter from those that cannot based on conservation of the Jacobi constant.
}
\end{figure*}
\subsection{Long-term stability}
\label{sec:chaos_vs_stability}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{da_by_a_w0.pdf}
\includegraphics[width=1\textwidth]{da_by_a_w1.pdf}
\caption{\label{fig:stability_compare}
Comparison between onset of chaos and long-term evolution.
The black/white background maps are reproduced from Figure \ref{fig:w_compare}.
Red `x's and colored squares show the outcome of long-term numerical simulations that lasted
$10^7$ orbits of the outer planet.
Red `x's denote integrations that were stopped early after experiencing a close encounter.
The purple/green color scale indicates, for systems that did not experience close encounters, the maximum fractional deviation in the inner planet's semimajor axis.
In the top panel systems are initialized with ${\cal W}=0$ and in the bottom panel with ${\cal W}=0.1$.
Orange curves indicate the Hill stability boundary from \citet{1993Icar..106..247G}.
Our resonance overlap boundary (Equation \eqref{eq:zcrit_exact}) is plotted in red.
}
\end{figure*}
We explore the relationship between the onset of chaos and long-term stability with two suites of numerical simulations run for $10^7$ outer planet orbits.
Figure \ref{fig:stability_compare} demonstrates
that most systems that do not cross the threshold for chaos (i.e., that fall within our predicted red curves)
exhibit little change in
$a$ over the course of the simulation. In contrast, orbits which are chaotic can experience two different fates: many of them experience close encounters (red `x's), but many also exhibit relatively large changes in $a$ throughout the duration
of the simulations without ever experiencing close encounters or ejections (yellow-green squares).
We attribute the boundary between the latter two behaviors to Hill stability: the yellow-green squares
are Hill stable, and the red crosses are not.
Note that this true Hill stability boundary is roughly coincident with the prediction of Gladman but there is some discrepancy,
which is presumably due to some of Gladman's approximations.\footnote{
Specifically, Gladman's formula is given to leading order in $(\mu_1+\mu_2)^{1/3}$, neglecting terms of order $\propto \mu^{2/3}$ and higher in planet-star mass ratios. For the planets shown in Figure \ref{fig:stability_compare} this corresponds to a fractional error of $(3\times 10^{-5})^{1/3}\approx 0.03$.
This estimated error is consistent with the percent-level deviation in period ratio between Hill-stability boundary predicted by the orange curve in Figure \ref{fig:stability_compare} for ${\cal Z}=0$ ($P/P'\approx 7/8$) and the last yellow-green square near ${\cal Z}=0$ ($P/P'\approx 8/9$).
}
Systems such as the yellow-green squares that are chaotic yet do not experience close encounters have been referred to as ``Lagrange unstable" \citep[e.g.,][]{Deck2013overlap}. (More precisely, ``Lagrange unstable" refers to systems that experience significant semi-major axis variations, irrespective of whether or not they are Hill stable.)
The Kepler-36 system provides an illustrative example \citep{2012Sci...337..556C,deck2012kep36}:
the two sub-Neptune planets, b and c, exhibit chaos with a Lyapunov time of only $\sim$10 years.
\citet{deck2012kep36} find that, over the course of $\sim 10^7$ years, $\sim$75\% of their integrations exhibit $>10\%$ variations in the planets' semi-major axes.
The planet pair has presumably survived for a substantially longer time, suggesting it is Hill stable and protected from close encounters.
We plot Kepler-36 b/c on Figure \ref{fig:optical_depth_2} using the planets' masses and eccentricities measured from transit timing variations in \citet{2017AJ....154....5H}.
The planet pair lies very near our prediction for the onset of chaos.
{Finally, we note that failing the Hill criterion does not necessarily mean a planet pair is doomed to experience a close encounter; the system must be chaotic as well.
Both panels of Figure \ref{fig:stability_compare} show regions of stable initial conditions that fail the Hill criterion {(i.e., lie to the right of the orange curves)} but are protected from close encounters by first-order resonances. Additionally, the bottom panel contains a significant swath of regular points with small $Z$s that fail the Hill criterion but remain stable.
}
\section{Summary and Conclusions}
\label{sec:conclusion}
We derived a new criterion for the threshold of instability in two-planet systems.
The derivation was based on the idea that the onset of chaos, and therefore instability, occurs where resonances overlap in phase space.
Our prediction for the test-particle eccentricity at which chaos first occurs in the restricted three-body problem is given by Equation \eqref{eq:tau_sum} at $\tau_{\rm res}=1$ and is depicted as the solid red line in Figure \ref{fig:tau_compare}; Equation \eqref{eq:ecrit_approx_2} gives an adequate fitting formula for this critical eccentricity.
Our prediction for the onset of chaos generalizes from the restricted problem to two massive and eccentric planets in a straightforward manner: one simply replaces the test particle's eccentricity with the planets' relative eccentricity (Eq. \ref{eq:zdiff}) and the perturber's mass with the sum of the planets' masses. This yields
Equation \eqref{eq:zcrit_exact} with $\tau_{\rm res}=1$ as our criterion for the onset of chaos in two-planet systems with an adequate approximation given by Equation \eqref{eq:zcrit_approx}.
This work extends the past overlap criteria developed by \citet{Wisdom80} and \citet{Deck2013overlap} for {nearly} circular planets to eccentric planet pairs.
The `optical depth' method adopted in Section \ref{sec:two_planet_overlap:optical_depth} allowed us to consider resonances at all orders and extend these past criteria, which treated only first-order resonances.
{Figure \ref{fig:validity_regions} shows how our new criterion extends the range of parameters under which a two-planet system becomes chaotic.}
The analytic overlap predictions were shown to successfully predict the onset of chaos seen in numerical simulations in Section \ref{sec:numerical_compare} {(Figures \ref{fig:chaos_compare}--\ref{fig:optical_depth_2})}.
{We also used the simulations to explore the conditions under which a chaotic system leads to planetary collisions.}
The parameter regime studied in this paper, closely spaced planets with moderate eccentricities, is motivated by the observed exoplanet population.
The results of this work serve as a starting point for better understanding the sources of chaos and instability in realistic systems.
While our overlap criterion was derived assuming strictly coplanar planets, we expect that our criterion still approximately predicts the onset of chaos when inclinations (measured in radians) are {small compared to eccentricities.
In this regime, the disturbing-function terms associated with any particular MMR will be dominated by the eccentricity-dependent terms.
Additional {development is} likely necessary to predict the onset of chaos when inclinations are comparable in size to eccentricities.}
Finally, we expect the formulae for resonance widths and the `optical depth' formulation of resonance overlap derived in this paper will prove to be useful tools for understanding the onset of chaos in more complicated systems hosting three or more planets.
\acknowledgments
We thank {Matt Holman, Jacques Laskar, Rosemary Mardling, Alessandro Morbidelli, Matt Payne, Alice Quillen, Dan Tamayo, and Yanqin Wu for helpful discussions and comments. We thank Jack Wisdom for his careful referee report and insightful comments.} S.H. acknowledges support from the NASA Earth and Space Science Fellowship program, grant No. NNX15AT51H. Y.L. acknowledges NSF grant AST-1352369 and NASA grant NNX14AD21G.
This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
\software{REBOUND, python, Mathematica}
\bibliographystyle{yahapj}
\section{Introduction}\label{Intro}
\subsection{Background}
During the last several decades, computational advances, as well as new insights that reduce the computational cost of methods, have made many problems feasible to solve. This has had a great impact on uncertainty and sensitivity analysis as a means of reliability engineering, where a huge number of evaluations needs to be carried out to thoroughly investigate model behavior and to identify critical model parameters. In recent years, substantial development has advanced the landscape of stochastic computational problems. Surrogate modeling has become a very popular method to efficiently map complex input-output relationships and to tremendously reduce the computational cost of subsequent uncertainty and sensitivity evaluations. The generalized Polynomial Chaos Expansion (GPCE) \cite{Wiener.1938, Xiu.2002, LeMaitre.2010} has proven to be a versatile method to predict the behavior of systems whose direct evaluation is cumbersome and very time consuming \cite{Eldred.2012}. GPCE allows physical and engineering systems to be investigated in terms of various Quantities Of Interest (QOI) as functions of uncertain model parameters. The goal lies in characterizing the sensitivities of the outputs with respect to the system inputs, which can be done by performing a Sobol decomposition \cite{Sobol.1999} or by gradient-based measures \cite{Xiu.2009}. A lot of research can be found on this process of uncertainty quantification (UQ) using the GPCE model \cite{Ghanem.2003, Xiu.2002, LeMaitre.2010}. It has found applications in non-destructive material testing \cite{Weise.2016}, neuroscience \cite{Codecasa.2016, Weise.2015, Saturnino.2019b, Weise.2020b}, mechanical engineering \cite{Sepahvand.2012}, aerospace engineering \cite{Hosder.2007}, electrical engineering \cite{Kaintura.2018, Diaz.2018}, fluid dynamics \cite{Najm.2009}, and various other fields. However, compared to engineering, there is a relative lack of targeted applications of GPCE in the life sciences \cite{Burk.2020, Grignard.2022, Hu.2018, Massoud.2019, Pepper.2019, Son.2020}. In this branch of science, strong assumptions often have to be made about the parameter values to be chosen. Moreover, the analysis of individual subjects requires the model behavior to be studied with stochastic rather than deterministic parameter definitions.
\subsection{Recent advancements}
The majority of modern research on GPCE has focused on non-intrusive approaches, where the problem at hand can be treated as a black-box system. There exist a number of possibilities to further improve the basic GPCE approach. On one hand, it is possible to modify the assembly process of the basis functions by identifying and choosing the most suitable order \cite{Sachdeva.2006}, splitting the GPCE problem into a multi-element GPCE (ME-GPCE) \cite{Wan.2006}, or applying an adaptive algorithm to extend the number of samples and basis functions iteratively \cite{Blatman.2011, Novak.2021}. This reduces the number of unknown GPCE coefficients and the number of required model evaluations.
\subsection{Improved sampling strategies}
On the other hand, the potential of a more efficient GPCE approximation lies in the selection of the sampling locations prior to any GPCE approximation. To take a closer look at this topic, we thoroughly investigate GPCE optimized sampling schemes compared to standard random sampling. We will (i) show the performance of standard Monte Carlo methods in the framework of polynomial chaos, (ii) improve them with space-filling sampling designs using state-of-the-art Latin Hypercube Sampling (LHS) schemes \cite{McKay.1979, Helton.2003}, (iii) apply principles of Compressed Sensing (CS) and optimal sampling for L1-minimization \cite{Donoho.2006, Rauhut.2012, Hampton.2015b}. We investigate the performance and reliability of the sampling schemes in a comprehensive numerical study, which consists of three representative test cases and one practical example.
On one hand, LHS has seen a lot of usage in the fields of reliability engineering and uncertainty analysis since it is equipped with good space-filling properties \cite{Shu.2011, Robinson.1999}. The basic principle was improved by Jin et al. (2003) \cite{Jin.2003}, who proposed an optimization scheme based on the Enhanced Stochastic Evolutionary algorithm (ESE), which maximizes the Maximum-Minimal Distance criterion \cite{Johnson.1990} to reliably construct sample sets with a very even spread. It can be reasonably assumed that GPCE can benefit significantly from this, since it ensures to some extent that the parameter space is scanned evenly and thus all features of the transfer function can be found.
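To illustrate the basic construction, the following minimal Python sketch builds a maximin-optimized LHS design; a brute-force restart search stands in here for the ESE algorithm of Jin et al. \cite{Jin.2003}, and all function names are ours:
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist

def lhs(m, d, rng):
    # One Latin Hypercube design on [0, 1]^d: one sample per
    # stratum, strata shuffled independently in every dimension.
    x = (np.arange(m)[:, None] + rng.random((m, d)))/m
    for j in range(d):
        x[:, j] = x[rng.permutation(m), j]
    return x

def maximin_lhs(m, d, n_restarts=200, seed=0):
    # Keep the design with the largest minimum pairwise distance
    # (brute-force restarts instead of the ESE algorithm).
    rng = np.random.default_rng(seed)
    return max((lhs(m, d, rng) for _ in range(n_restarts)),
               key=lambda x: pdist(x).min())
\end{verbatim}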
On the other hand, CS emerged in the field of efficient data recovery to reconstruct signals with a much smaller number of samples than the Shannon-Nyquist criterion would suggest \cite{Donoho.2006, Candes.2008, Candes.2006, Elad.2007}. This has been applied in a number of cases where the number of samples available is limited \cite{Lustig.2005, Paredes.2007, Ender.2010}. Because it is not possible to select the required basic functions in advance, most GPCE dictionaries are over-complete, which leads to sparse coefficient vectors. Using these properties, compressive sampling recently became popular in the framework of UQ. Another appealing fact is that in computational UQ, it is possible to freely draw additional samples, thus enabling a multitude of new possibilities compared to real data acquisition, where the number of measurements is limited and possibly even restricted. This provided a new subcategory of sparse Polynomial Chaos Expansions \cite{Blatman.2011, Karagiannis.2014, Jakeman.2015, Deman.2016} where generally solvers like Least Angle Regression (LARS) \cite{Tibshirani.2004} or Orthogonal Matching Pursuit (OMP) \cite{Pati.1993} are used to determine the sparse coefficient vectors. Better GPCE recoverability has also been shown by designing a unique sampling method for Legendre Polynomials using the Chebyshev measure \cite{Rauhut.2012} or by extending this with a distinct coherence parameter and sampling from an altered input variable distribution by using a Monte-Carlo Markov-Chain algorithm \cite{Hampton.2015b}. Those methods, however, are restricted to problems with a low number of random variables employing a high polynomial order. Progress has also been made in defining criteria such as the mutual coherence $\mu$, the RIP-constant \cite{Candes.2008b}, and a number of correlation-constants to categorize measurement matrices and quantify their possible recovery. In this paper, we focus on evaluating the mutual coherence parameter as a global measure for a minimization objective and on a combination of different local criteria. We adopted the proposed "near optimal" sampling method of Alemazkoor and Meidani \cite{Alemazkoor.2018}, which uses a greedy algorithm to ensure a more stable recovery. We additionally use the same framework to create mutual coherence optimal GPCE matrices. This is meant to serve as a comparative example. Additionally, we propose a global approach to create an L1-optimal design by using an iterative algorithm to maximize the local and global optimality criteria.
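Since the mutual coherence $\mu$ recurs below as a global optimality measure, we note that it is cheap to evaluate for a given GPCE matrix; a minimal sketch (continuing the \texttt{numpy} import above, with \texttt{psi} denoting the matrix $\bm{\Psi}$) is:
\begin{verbatim}
def mutual_coherence(psi):
    # mu(Psi): largest absolute inner product between any two
    # normalized columns of the GPCE matrix.
    g = psi/np.linalg.norm(psi, axis=0)
    c = np.abs(g.T @ g)
    np.fill_diagonal(c, 0.0)
    return c.max()
\end{verbatim}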
Given those two approaches to constructing the set of sampling points, we propose a hybrid design that is partially created using LHS and then expanded according to a chosen L1-optimality criterion, or vice versa. This aims to give a broad overview not only of the effectiveness of the two branches, but also of possible enhancements arising from their interaction.
In this paper, we are going to compare the aforementioned sampling strategies on a set of test problems with varying order and dimension. We investigate their error convergence over different sample sizes. We also test their applicability on a practical example, which consists of an electrode-impedance model used to characterize the impedance of brain tissue. All algorithms are implemented in the open source python package "pygpc" \cite{Weise.2020}, and the scripts to run the presented benchmarks are provided in the Supplemental Material.
\subsection{Content}
The remainder of the paper is structured as follows. The theoretical background of GPCE is revisited in Section \ref{sec:PolynomialChaosExpansion}. It is followed by an introduction of the different sampling schemes, namely standard random sampling in Section \ref{sec:RandomSampling}, LHS in Section \ref{sec:Space-fillingoptimalsamplingapproaches}, and CS-optimal sampling in Section \ref{sec:CompressiveSamplingapproaches}. An overview of the test problems to which the sampling schemes are applied is given in Section \ref{sec:Results} together with the benchmark results. Finally, the results are discussed in Section \ref{sec:Discussion}.
\section{Polynomial Chaos Expansion}
\label{sec:PolynomialChaosExpansion}
In GPCE, the $d$ parameters of interest, which are assumed to be subject to a distinct level of uncertainty, are modeled as a $d$-variate random vector denoted by $\bm{\xi} = (\xi_1, \, \xi_2, \, ... \xi_d)$, following some probability density function (pdf) $p_i(\xi_i)$, with $i=1,...,d$. The random parameters are defined in the probability space $(\Theta, \Sigma, P)$. The event or random space $\Theta$ contains all possible events. $\Sigma$ is a $\sigma$-Algebra over $\Theta$, containing sets of events, and $P$ is a function assigning the probabilities of occurrence to the events. The number of random variables $d$ determines the dimension of the uncertainty problem. It is assumed that the parameters are statistically mutually independent. In order to perform a GPCE expansion, the random variables must have a finite variance, which defines the problem in the $L_2$-Hilbert space.
The quantity of interest (QOI), which will be analyzed in terms of the random variables $\bm{\xi}$, is $y(\mathbf{r})$. It may depend on some external parameters $\mathbf{r}=(r_{0},\,...,\,r_{R-1})$, such as space, where $R=3$, or any other dependent parameters. Those are treated as deterministic and are not considered in the uncertainty analysis.
The basic concept of GPCE is to find a functional dependence between the random variables $\bm{\xi}$ and the solutions $y(\mathbf{r},\bm{\xi})$ by means of an orthogonal polynomial basis $\Psi(\bm{\xi})$. In its general form, it is given by:
\begin{equation}
y(\mathbf{r},\bm{\xi}) = \sum_{\bm{\alpha}\in\mathcal{A}} c_{\bm{\alpha}}(\mathbf{r}) \Psi_{\bm{\alpha}}(\bm{\xi}).
\label{eq:ua:gPC_coeff_form}
\end{equation}
A separate GPCE expansion must be performed for every considered parameter set $\mathbf{r}$. The discrete number of QOIs is denoted as $N_y$.
The terms are indexed by the multi-index $\bm{\alpha}=(\alpha_0,...,\alpha_{d-1})$, which is a $d$-tuple of non-negative integers $\bm{\alpha}\in\mathbb{N}_0^d$. The sum is carried out over the multi-indices, contained in the set $\mathcal{A}$.
The functions $\Psi_{\bm{\alpha}}(\bm{\xi})$ are the polynomial basis functions of GPCE. They are composed of univariate polynomials $\psi_{\alpha_i}(\xi_i)$:
\begin{equation}
\Psi_{\bm{\alpha}}(\bm{\xi}) = \prod_{i=1}^{d} \psi_{\alpha_i}(\xi_i)
\label{eq:ua:Psi}
\end{equation}
The polynomials $\psi_{\alpha_i}(\xi_i)$ are defined for each random variable separately according to the corresponding input pdf $p_i(\xi_i)$. They must be chosen to be orthogonal with respect to the pdfs of the random variables, e.g. Jacobi polynomials for beta-distributed random parameters or Hermite polynomials for normal-distributed random variables.
The family of polynomials for an optimal basis of continuous probability distributions is given by the Askey scheme \cite{Askey.1985}. The index of each polynomial denotes its order (or degree). In this way, the multi-index $\bm{\alpha}$ corresponds to the orders of the individual polynomials forming the joint basis function.
In general, the set $\mathcal{A}$ of multi-indices can be freely chosen according to the problem under investigation. In practical applications, the \emph{maximum order} GPCE is frequently used. In this case, the set $\mathcal{A}$ includes all polynomials whose total order does not exceed a predefined order $p$.
In the present work, the concept of \emph{maximum order} GPCE is extended by introducing the \emph{interaction} order $p_i$. An interaction order $p_i(\bm{\alpha})$ can be assigned to each multi-index $\bm{\alpha}$. It quantifies the number of random variables that enter the joint basis function $\Psi_{\bm{\alpha}}(\bm{\xi})$ with a non-zero power:
\begin{align}
p_i(\bm{\alpha}) = \lVert\bm{\alpha}\rVert_0,
\end{align}
where $\lVert\bm{\alpha}\rVert_0 = \#(i:\alpha_i>0)$ is the zero (semi)-norm, quantifying the number of non-zero index entries. The reduced set of multi-indices is then constructed by the following rule:
\begin{equation}
\mathcal{A}(p, p_i) := \left\{ \bm{\alpha} \in \mathbb{N}_0^d\, : \lVert \bm{\alpha} \rVert_1 \leq p \wedge \lVert\bm{\alpha}\rVert_0 \leq p_i \right\}
\label{eq:ua:multi_index_set_reduced}
\end{equation}
It includes all elements from a total order GPCE with the restriction of the interaction order $p_i$. Reducing the number of basis functions is advantageous especially in case of high-dimensional problems. This is supported by observations in a number of studies, where the magnitude of the coefficients decreases with increasing order and interaction \cite{Hampton.2015}. Besides that, no hyperbolic truncation was applied to the basis functions \cite{Blatman.2011}.
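For illustration, the reduced multi-index set can be constructed by filtering the full tensor grid of indices; the following minimal Python sketch (ours, not the pygpc implementation) does so:
\begin{verbatim}
import itertools
import numpy as np

def multi_indices(d, p, p_i):
    # Reduced multi-index set A(p, p_i) of
    # Eq. (eq:ua:multi_index_set_reduced): total order <= p and
    # at most p_i non-zero entries per index.
    return np.array([a for a in
                     itertools.product(range(p + 1), repeat=d)
                     if sum(a) <= p
                     and sum(ai > 0 for ai in a) <= p_i])
\end{verbatim}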
After constructing the polynomial basis, the corresponding GPCE-coefficients $c_{\bm{\alpha}}(\mathbf{r})$ must be determined for each output quantity. In this regard, the output variables are projected from the $d$-dimensional probability space $\Theta$ into the $N_c$-dimensional polynomial space $\mathcal{P}_{N_c}$. This way, an analytical approximation of the solutions $y(\mathbf{r},\bm{\xi})$ as a function of its random input parameters $\bm{\xi}$ is derived, and a very computationally efficient investigation of its stochastics is made possible.
The GPCE from (\ref{eq:ua:gPC_coeff_form}) can be written in matrix form as:
\begin{align}\label{eq:gPC_system}
\mathbf{Y} = \bm{\Psi}\mathbf{C}
\end{align}
Depending on the sampling strategy, one may define a diagonal positive-definite matrix $\mathbf{W}$ whose diagonal elements $\mathbf{W}_{i,i}$ are given by a function of sampling points $w(\bm{\xi}^{(i)})$.
\begin{align}\label{eq:gPC_system_weighted}
\mathbf{W}\mathbf{Y} = \mathbf{W}\bm{\Psi}\mathbf{C}
\end{align}
The GPCE coefficients for each QOI (columns of $\mathbf{C}$) can then be found by using solvers that minimize either the L1 or the L2 norm of the residual, depending on the expected sparsity of the coefficient vectors. Each row in (\ref{eq:gPC_system_weighted}) corresponds to a distinct sampling point $\bm{\xi}^{(i)}$. For this reason, the choice of the sampling points has a considerable influence on the characteristics and solvability of the equation system.
Complex numerical models can be very computationally intensive. To enable uncertainty and sensitivity analysis of such models, the number of sampling points must be reduced to a minimum. This may lead to a situation where there are fewer observations $M$ than unknown coefficients $N_c$, i.e. $M < N_c$, resulting in an under-determined system of equations with infinitely many solutions for $\mathbf{c}$. In the spirit of compressive sampling, we seek the sparsest solution $\mathbf{c}$, formulating the recovery problem as:
\begin{equation}\label{eq:l0}
\min_{\mathbf{c}} ||\mathbf{c}||_0 \quad \textrm{subject to} \quad \mathbf{\Psi} \mathbf{c} = \mathbf{u}
\end{equation}
where $||.||_0$ denotes the $\ell_0$-norm, i.e. the number of non-zero entries in $\mathbf{c}$. This optimization problem is NP-hard and non-convex. The lack of convexity can be overcome by reformulating it using the L1-norm:
\begin{equation}\label{eq:l1_gpc}
\min_{\mathbf{c}} ||\mathbf{c}||_1 \quad \textrm{subject to} \quad \mathbf{\Psi} \mathbf{c} = \mathbf{u}
\end{equation}
It has been shown that if $\mathbf{\Psi}$ is sufficiently incoherent and $\mathbf{c}$ is sufficiently sparse, the solution of the $\ell_0$ minimization is unique and equal to the solution of the L1 minimization \cite{Bruckstein.2009}. The minimization in (\ref{eq:l1_gpc}) is called basis pursuit \cite{Chen.2001} and can be solved using linear programming.
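As an illustration, the basis pursuit problem (\ref{eq:l1_gpc}) can be cast as a linear program by splitting $\mathbf{c}$ into non-negative parts. The following sketch uses \texttt{scipy.optimize.linprog}; it is a minimal illustration and not the LARS-Lasso solver employed later in this work:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Psi, u):
    # min ||c||_1 s.t. Psi @ c = u, with c = c_pos - c_neg and
    # c_pos, c_neg >= 0, so that sum(c_pos + c_neg) = ||c||_1
    M, K = Psi.shape
    cost = np.ones(2 * K)
    A_eq = np.hstack([Psi, -Psi])      # Psi @ (c_pos - c_neg) = u
    res = linprog(cost, A_eq=A_eq, b_eq=u, bounds=(0, None),
                  method="highs")
    return res.x[:K] - res.x[K:]
\end{verbatim}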
\section{Sampling techniques}
\subsection{Standard Monte Carlo sampling}
\label{sec:RandomSampling}
The most straightforward sampling method is to draw samples according to the input distributions. In this case, one proceeds with a Monte Carlo method and samples the random domain without any sophisticated process for choosing the sampling locations. While the random samples are independent by construction, small sample sets (the situation we are targeting) may cover the random domain very unevenly. For instance, the sampling points can be concentrated in certain regions and miss important features of the model's behavior, thus significantly degrading the overall quality of the GPCE approximation.
\subsection{Coherence-optimal sampling}
\label{sec:COSampling}
Coherence-optimal (CO) sampling aims to improve the stability of the coefficient estimates when solving (\ref{eq:gPC_system_weighted}). It was introduced by Hampton and Doostan in the framework of the GPCE in \cite{Hampton.2015}. The Gram matrix (also referred to as the Gramian or information matrix) and its properties play a central role when determining the GPCE coefficients. Coherence-optimal sampling has been the building block for a number of sampling strategies that aim for an efficient sparse recovery of the PC \cite{Alemazkoor.2018, Hadigol.2018}. It generally outperforms random sampling by a large margin on problems whose order exceeds their dimensionality ($p \geq d$) and has been claimed to perform well on any given problem when incorporated in compressive sampling approaches \cite{Hampton.2015b}. The Gram matrix is defined by:
\begin{equation}\label{eq:gramian}
\mathbf{G_\Psi} = \frac{1}{N_g}\mathbf{\Psi^T}\mathbf{\Psi}
\end{equation}
CO sampling seeks to minimize the spectral matrix norm between the Gram matrix and the identity matrix, i.e. $||\mathbf{G_\Psi}-\mathbf{I}||$, by minimizing the coherence parameter $\mu$:
\begin{equation}\label{eq:coherence}
\mu = \sup_{\bm{\xi}\in\Omega} \sum_{j=1}^P \left|w(\bm{\xi})\psi_j(\bm{\xi})\right|^2
\end{equation}
This can be done by sampling the input parameters with an alternative distribution:
\begin{equation}\label{eq:alternative_distribution}
P_{\mathbf{Y}}(\bm{\xi}) := c^2 P(\bm{\xi}) B^2(\bm{\xi}),
\end{equation}
where $c$ is a normalization constant, $P(\bm{\xi})$ is the joint probability density function of the original input distributions, and $B(\bm{\xi})$ is an upper bound of the PC basis:
\begin{equation}\label{eq:B2}
B(\bm{\xi}):= \sqrt{\sum_{j=1}^P|\psi_j(\bm{\xi})|^2}
\end{equation}
To avoid computing the normalization constant $c$, a Markov chain Monte Carlo approach using a Metropolis-Hastings sampler \cite{Hastings.1970} is used to draw samples from $P_{\mathbf{Y}}(\bm{\xi})$ in (\ref{eq:alternative_distribution}). The Metropolis-Hastings sampler requires a suitable candidate distribution. For coherence-optimal sampling according to (\ref{eq:coherence}), this is realized by a proposal distribution $g(\bm{\xi})$ \cite{Hampton.2015}. By sampling from a distribution other than $P(\bm{\xi})$, however, the columns of $\mathbf{\Psi}$ are no longer orthonormal in expectation. Therefore, $\mathbf{W}$ needs to be a diagonal positive-definite matrix of weight functions $w(\bm{\xi})$. In practice, $\mathbf{W}$ can be computed with:
\begin{equation}\label{eq:weighting_matrix}
w_i = w(\bm{\xi}^{(i)}) = \frac{1}{B(\bm{\xi}^{(i)})}
\end{equation}
A detailed description of the technique can be found in \cite{Hampton.2015}.
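As a one-dimensional illustration, the following sketch draws coherence-optimal samples for a uniform input on $[-1,1]$ with orthonormal Legendre polynomials $\psi_j = \sqrt{2j+1}\,P_j$, using an independent Metropolis-Hastings sampler with a uniform proposal, in which case the acceptance ratio reduces to $B^2(\xi')/B^2(\xi)$. The function names are ours, not part of any library:
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

def B2(xi, p):
    # B(xi)^2 = sum_j psi_j(xi)^2, with psi_j = sqrt(2j + 1) * P_j
    return sum((2 * j + 1) * legval(xi, [0] * j + [1]) ** 2
               for j in range(p + 1))

def mh_coherence_optimal(n, p, seed=0):
    # independent MH sampler for P_Y ~ P(xi) * B(xi)^2
    rng = np.random.default_rng(seed)
    x, samples = rng.uniform(-1, 1), []
    while len(samples) < n:
        x_new = rng.uniform(-1, 1)                  # proposal g(xi)
        if rng.random() < B2(x_new, p) / B2(x, p):  # accept/reject
            x = x_new
        samples.append(x)
    samples = np.array(samples)
    w = 1.0 / np.sqrt([B2(s, p) for s in samples])  # weights w = 1/B
    return samples, w
\end{verbatim}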
\subsection{Optimal design of experiment}
\label{sec:DSampling}
A judicious choice of sampling points $\{\bm{\xi}^{(i)}\}_{i=1}^{N_g}$ allows us to improve the properties of the Gramian without any prior knowledge about the model under investigation. The selection of an appropriate optimization criterion derived from $\mathbf{G_\Psi}$ and the identification of the corresponding optimal sampling locations is the core concept of optimal design of experiment (ODE). The most popular criterion is $D$-optimality, where the goal is to increase the information content of a given number of sampling points by minimizing the determinant of the inverse of the Gramian:
\begin{equation}\label{eq:det_Dopt}
\phi_D = \left|\mathbf{G_\Psi}^{-1}\right|^{1/N_c},
\end{equation}
$D$-optimal designs are focused on a precise estimation of the coefficients. Besides $D$-optimal designs, there exist many other alphabetic optimal designs, such as $A$-, $E$-, $I$-, or $V$-optimal designs, with different goals and criteria. An overview of these designs can be found in \cite{Pukelsheim.2006, Atkinson.2007}.
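The $D$-criterion can be evaluated directly from the GPCE matrix; a minimal sketch, using the log-determinant for numerical stability, could look as follows:
\begin{verbatim}
import numpy as np

def phi_D(Psi):
    # det(G^-1)^(1/Nc) = exp(-logdet(G)/Nc); smaller is better.
    # G is singular (phi_D infinite) if Psi has fewer rows than columns.
    Ng, Nc = Psi.shape
    G = Psi.T @ Psi / Ng
    sign, logdet = np.linalg.slogdet(G)
    return np.exp(-logdet / Nc)
\end{verbatim}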
Hadigol and Doostan investigated the convergence behavior of $A$-, $D$- and $E$-optimal designs \cite{Hadigol.2018} in combination with coherence-optimal sampling in the framework of least squares GPCE. They found that those designs clearly outperform standard random sampling. Their analysis was restricted to cases where the number of sampling points is larger than the number of unknown coefficients ($N_g > N_c$). In contrast, our analysis focuses on the convergence properties of $D$-optimal and $D$-coherence-optimal designs in combination with L1 minimization, where $N_g < N_c$.
\subsection{Space-filling optimal sampling}
\label{sec:Space-fillingoptimalsamplingapproaches}
In order to overcome the disadvantages of standard random sampling for low sample sizes, one may use sampling schemes that improve the coverage of the random space. Early work on this topic focused on pseudo-random sampling that optimizes distinct distance and correlation criteria between the sampling points. Designs optimizing the maximum-minimal distance \cite{Johnson.1990, Morris.1995, Jin.2003} or Audze-Eglais designs \cite{Audze.1977, Bates.2004} proved to be both more efficient and more reliable than standard random sampling schemes. Space-filling optimal sampling such as Latin Hypercube Sampling (LHS) is nowadays frequently used in the framework of the GPCE \cite{Choi.2004, Hosder.2007, Hadigol.2018}. In the following, we briefly introduce two prominent distance criteria used in our space-filling optimal sampling approaches.
\subsubsection{Space-filling optimality criteria}
\paragraph*{Maximum-Minimal distance criterion:}
The maximum-minimal distance criterion is a space-filling optimality criterion. A design is called maximum-minimal distance optimal if it maximizes the minimum inter-site distance \cite{Johnson.1990}:
\begin{equation}\label{eq:inter_cite_dist}
\max \; \min_{1 \leq i < j\leq n} d(\mathbf{x}_{i},\mathbf{x}_{j}) \quad \textrm{with} \quad d(\mathbf{x}_{i},\mathbf{x}_{j}) = d_{ij} = \left( \sum_{k=1}^{m}|x_{ik}-x_{jk}|^t\right)^\frac{1}{t}
\end{equation}
where $d(\mathbf{x}_{i},\mathbf{x}_{j})$ is the distance between two sampling points $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$, and $t = 1$ or $2$. A design optimized with respect to its minimum inter-site distance yields well-distributed sampling points. For a low number of sampling points, however, the design may be heavily biased towards the edges of the sampling space, because the distance criterion pushes the sampling points outwards and away from possible features close to the center of the sampling space \cite{Morris.1995}.
\paragraph*{The $\varphi_{p}$ criterion:}
To counteract the shortcomings of the plain inter-site distance in (\ref{eq:inter_cite_dist}), the equivalent $\varphi_{p}$ criterion has been proposed \cite{Morris.1995}. A $\varphi_{p}$-optimal design can be constructed by setting up a distance list $(d_{1},...,d_{s})$, obtained by sorting the distinct inter-site distances $d_{ij}$ in ascending order, $d_{1}<d_{2}<...<d_{s}$, together with a corresponding list $(J_{1}, ..., J_{s})$, where $J_{i}$ is the number of pairs of sites in the design separated by $d_{i}$. A design is then called $\varphi_{p}$-optimal if it minimizes:
\begin{equation}\label{eq:phi_p}
\varphi_{p} = \left(\sum_{i=1}^{s}J_{i}d_{i}^{-p}\right)^\frac{1}{p}
\end{equation}
We empirically set $p=10$ in the numerical construction of LHS designs.
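Both criteria are straightforward to evaluate for a given design $\mathbf{X}\in[0,1]^{n\times m}$. The following sketch uses \texttt{scipy.spatial.distance.pdist}; summing $d_{ij}^{-p}$ over all pairs is equivalent to the grouped sum over distinct distances in (\ref{eq:phi_p}):
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist

def maximin_criterion(X, t=2):
    # minimum inter-site distance; a maximin design maximizes this
    return pdist(X, metric="minkowski", p=t).min()

def phi_p_criterion(X, p=10, t=2):
    # Morris-Mitchell phi_p criterion; smaller is better
    d = pdist(X, metric="minkowski", p=t)
    return np.sum(d ** (-float(p))) ** (1.0 / p)
\end{verbatim}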
\subsection{Limitations of the distance criterion for low sampling sizes}\label{subsec:phi-limit}
Sparse reconstruction typically involves functions with a large number of variables, and its very goal is to reduce the sampling size. In this regime, an important caveat applies to the optimization of criteria based on the distances $d_{i}$ and $d_{ij}$, i.e. the maximum-minimal distance and the $\varphi_p$ criterion: both show a systematic bias that breaks the uniformity sought by the optimization algorithm in section \ref{sec:ESE}. This problem was identified recently by Vořechovský and Eliáš \cite{Vorechovsky.2020, Elias.2020} and becomes apparent in efficient optimization. They introduced a new distance measure called the periodic distance:
\begin{equation}\label{eq:periodic-d}
\bar{d}_{ij} = \left( \sum_{k=1}^{m} \left(\min \left(|x_{ik}-x_{jk}|,\, 1 - |x_{ik}-x_{jk}| \right) \right)^t \right)^\frac{1}{t}
\end{equation}
With $\bar{d}_{ij}$, the periodic maximum-minimal distance and the periodic $\varphi_p$ criterion can be calculated as $\min_{1 \leq i, j\leq n, i\neq j} \bar{d}_{ij}$ and $\varphi_p(\bar{d}_{ij})$, respectively, simply by using the periodic distance instead of the conventional Euclidean distance metric. Furthermore, for the $\varphi_p$ criterion, the same authors and Mašek showed that specifying the $p$-exponent based on investigations of the potential energy of the design can improve the space-filling and projection properties as well as decrease the discrepancy. For a successful application of LHS, they recommend $p = N_{var} + 1$, where $N_{var}$ is the number of variables of the given function \cite{Vorechovsky.2020b}.
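A sketch of the periodic variant for a design on the unit hypercube:
\begin{verbatim}
import numpy as np

def periodic_phi_p(X, p=10, t=2):
    # phi_p with the wrap-around distance of eq. (periodic-d)
    diff = np.abs(X[:, None, :] - X[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)       # wrap around the boundary
    D = (diff ** t).sum(axis=-1) ** (1.0 / t)
    d = D[np.triu_indices(len(X), k=1)]       # distinct pairs only
    return np.sum(d ** (-float(p))) ** (1.0 / p)
\end{verbatim}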
\subsubsection{Standard Latin Hypercube Sampling (LHS)}
In LHS, the range of each of the $d$ random variables is segmented into $n$ equally probable subregions, where $n$ is the number of sampling points to be drawn. LHS designs ensure that, in every dimension, each subregion is sampled exactly once. This can be expressed mathematically by creating a matrix of sampling points $\mathbf{\Pi}$ with:
\begin{equation}\label{eq:lhs}
\pi_{i, j} = \frac{p_{i, j} - u_{i,j}}{n},
\end{equation}
where $\mathbf{P}$ is a matrix whose columns are random permutations of the row indices $1,...,n$, and $\mathbf{U}$ is a matrix of independent uniformly distributed random numbers $u \in [0, 1]$.
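A minimal implementation of (\ref{eq:lhs}):
\begin{verbatim}
import numpy as np

def lhs(n, d, seed=None):
    # one sample per equiprobable subinterval in each dimension
    rng = np.random.default_rng(seed)
    P = np.column_stack([rng.permutation(n) + 1 for _ in range(d)])
    U = rng.random((n, d))
    return (P - U) / n
\end{verbatim}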
\subsubsection{$\varphi_{p}$-optimal Latin Hypercube Sampling}
The space-filling properties of LHS designs can be improved by optimizing the $\varphi_{p}$ criterion. A pseudo-optimal design can be determined by creating a pool of $n_i$ standard LHS designs and choosing the one with the best $\varphi_{p}$ value. As $n_i$ tends to infinity, the design approaches space-filling optimality. In this study we used $n_i=100$ candidate designs, which we found to be an efficient trade-off between computational cost and $\varphi_{p}$-optimality.
\subsubsection{Enhanced Stochastic Evolutionary Algorithm LHS}\label{sec:ESE}
The Enhanced Stochastic Evolutionary Algorithm Latin Hypercube Sampling (LHS-ESE) is a very stable space-filling optimal algorithm designed by Jin et al. (2003) \cite{Jin.2003}. The resulting designs aim for a specified $\varphi_p$ value, which is achieved by multiple element-wise exchanges of an initial Latin hypercube design in an inner loop, while the respective $\varphi_p$ values are tracked in an outer loop. This process shows a far smaller variance in the space-filling criteria of the created sample sets.
However, we observed that the LHS-ESE scheme often undersamples the boundaries of the random domain, which is disadvantageous for transfer functions with high gradients close to the parameter boundaries. This is less apparent for a high number of samples but becomes a serious drawback when the number of sampling points is low; the effect can be linked to the systematic bias pointed out in \cite{Vorechovsky.2020}. In order to overcome this problem, we modified the LHS-ESE algorithm by shrinking the first and last subregion of each interval to a fraction of their original sizes while keeping the remaining intermediate $n-2$ subregions equally spaced. The procedure is illustrated in Fig. \ref{fig:SC_ESE_scheme}. The initial matrix of the Latin hypercube design $\mathbf{\Pi}$ is then modified according to:
\begin{equation}\label{eq:sese_1}
\pi_{i, j} = \alpha\,\frac{p_{i, j} - u_{i,j}}{n} \quad \text{for } j \in \{1, n\},
\end{equation}
where $\alpha$ is the fraction to which the size of the border interval is decreased, $i \in [1, d]$ indexes the dimensions, and $j \in \{1, n\}$ addresses the two subregions at the edges. Our empirical studies showed that a reduction to $\alpha=\frac{1}{4}$ counteracts the aforementioned undersampling close to the border. The indices $j=1$ and $j=n$ are targeted because, after the normalization by $n$, they correspond to the values closest to $0$ and $1$, respectively, i.e. to the borders of the sampled domain. The center is then stretched by:
\begin{equation}\label{eq:sese_2}
\pi_{i, j_c} = \begin{cases}
\frac{p_{i, j_c} - u_{i,j_c}}{n} - \frac{(1-\alpha) p_{i, j_c}}{n -2} &\text{for $j_c \leq \frac{n}{2}$}\\
\frac{p_{i, j_c} - u_{i,j_c}}{n} + \frac{(1-\alpha) p_{i, j_c}}{n -2} &\text{else,}
\end{cases}
\end{equation}
with $j_c$ being the indices $j$ without the border domains $1$ and $n$, i.e. $j_c \in \{2,...,n-1\}$. After this alteration, the elements of each column of $\mathbf{\Pi}$ can be randomly permuted to proceed with the construction of the Latin hypercube design just as in (\ref{eq:lhs}). If $\alpha$ is made smaller, the size of the guaranteed sampling region at the border also becomes smaller, thus forcing the sampling point to be chosen closer to the border, as illustrated in Fig. \ref{fig:SC_ESE_scheme}. To the best of our knowledge, the Enhanced Stochastic Evolutionary LHS algorithm has not yet been studied in the context of the GPCE.
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.49\textwidth]{pics/sc-ese_dec.png}
\caption[SC-ESE]{Schematic representation of the SC-ESE, where the outer area is cut to an $\alpha$ fraction of its original size and the center is stretched outward}
\label{fig:SC_ESE_scheme}
\end{figure*}
\subsection{Compressive Sampling}
\label{sec:CompressiveSamplingapproaches}
Compressive sampling is a method, first introduced in the field of signal processing, that allows the recovery of signals from significantly fewer samples, provided that the signals are sparse, i.e. a large portion of the coefficients is zero, so that the coefficient vector $\mathbf{c}$ can be well approximated with only a small number of non-vanishing terms. A coefficient vector $\mathbf{c}$ is called $s$-sparse if it obeys:
\begin{equation}\label{eq:sparsity:2}
||\mathbf{c}||_0 \leq s, \quad s \ll N_c
\end{equation}
The locations of the sampling points have a profound impact on the reconstruction quality because they determine the properties of the GPCE matrix. There are several criteria, evaluated exclusively on the basis of the GPCE matrix, that may favor the reconstruction. It has been shown that optimization of those criteria leads to designs that promote successful reconstruction \cite{Gang.2013, Alemazkoor.2018}. In the following, we give a brief overview of the different criteria considered in this study.
\subsubsection{L1-optimality criteria}
\paragraph*{Mutual Coherence}
The mutual coherence (MC) of a matrix measures the cross-correlations between its columns by evaluating the largest absolute and normalized inner product between different columns. It can be evaluated by:
\begin{equation}\label{eq:L1:mc}
\mu(\mathbf{\Psi}) = \max_ {1 \leq i, j\leq N_c, j\neq i} \quad \frac{|\psi_i^T \psi_j|}{||\psi_i||_2||\psi_j||_2}
\end{equation}
The objective is to select sampling points that minimize $\mu(\mathbf{\Psi})$ to obtain an L1-optimal design. It is noted that minimizing the mutual coherence considers only the worst-case scenario and does not necessarily improve compressive sampling performance in general \cite{Elad.2007}.
\paragraph*{Average Cross-Correlation}
It has been shown in [39, 45, 46, 47] that the robustness and accuracy of signal recovery can be increased by minimizing the distance between the Gram matrix $\mathbf{G_\mathbf{\Psi}}$ and the identity matrix $\mathbf{I}_{N_c}$:
\begin{equation}\label{eq:L1:cc}
\gamma(\mathbf{\Psi}) = \frac{1}{N} ||\mathbf{I}_{N_c} - \mathbf{G_\mathbf{\Psi}}||^2_F
\end{equation}
where $||\cdot||_F$ denotes the Frobenius norm and $N := N_c (N_c - 1)$ is the total number of column pairs. Note that optimizing only the average cross-correlation can result in a large mutual coherence and is regularly prone to inaccurate recovery. In this context, Alemazkoor and Meidani (2018) \cite{Alemazkoor.2018} proposed a hybrid optimization criterion, which minimizes both the average cross-correlation $\gamma(\mathbf{\Psi})$ and the mutual coherence $\mu(\mathbf{\Psi})$:
\begin{equation}\label{key}
\argmin\left(f(\mathbf{\Psi})\right) = \argmin\left(\left(\frac{\mu_{i} -\min(\boldsymbol\mu)}{\max(\boldsymbol\mu) - \min(\boldsymbol\mu)}\right)^2 + \left(\frac{\gamma_i -\min(\boldsymbol\gamma)}{\max(\boldsymbol\gamma) - \min(\boldsymbol\gamma)}\right)^2\right)
\end{equation}
with $\boldsymbol\mu = (\mu_{1}, \mu_{2}, ..., \mu_{i})$ and $\boldsymbol\gamma = (\gamma_1, \gamma_2, ..., \gamma_i)$ collecting the criteria of the candidate sampling sets.
\subsection{Greedy algorithm to determine optimal sets of sampling points}
We used a greedy algorithm as shown in Algorithm \ref{alg:greedy} to determine L1-optimal sets of sampling points. In this algorithm, we generate a pool of $M_p$ samples and randomly pick an initial sample. In the next iteration, we successively add a sampling point and calculate the respective optimization criteria. After evaluating all possible candidates, we select the sampling point yielding the best criterion and append it to the existing set. This process is repeated until the sampling set has the desired size $M$.
\begin{algorithm}
\caption{Greedy algorithm to determine L1-optimal sets of sampling points}\label{alg:greedy}
\begin{algorithmic}[1]
\State $\textit{create a random pool of } M_p \textit { samples}$
\State $\textit{create the measurement matrix }\mathbf{\Psi_{pool}} \textit { of the samples}$
\State $\textit{initiate } \mathbf{\Psi_{opt}} \textit{ as a random row } r \textit{ of } \mathbf{\Psi_{pool}}$
\State $\textit{add row } r \textit{ to the added rows } r_{added} $
\For{$i$ in (2, M)}
\For{$j$ in $(1, M_p \textit{ without } r_{added})$}
\State $\mathbf{\Psi_j} = \textit{row-concatenate } (\mathbf{\Psi_{opt}}, r_j)$
\State $\textit{evaluate } f_j = f(\mathbf{\Psi_j})$
\EndFor
\State $j_{best} = \textit{argmin}_j \, f_j$
\State $\textit{add } r_{j_{best}} \textit{ to } r_{added}$
\State $\mathbf{\Psi_{opt}} = \textit{row-concatenate } (\mathbf{\Psi_{opt}}, r_{j_{best}})$
\EndFor
\State$\textit{Return } \mathbf{\Psi_{opt}} \textit{ and the selected sampling points } r_{added}$
\end{algorithmic}
\end{algorithm}
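A compact Python version of Algorithm \ref{alg:greedy}, with the optimization criterion $f$ (e.g. the mutual coherence of (\ref{eq:L1:mc}) or the hybrid criterion) passed as a function of the candidate matrix; this is a sketch of the procedure rather than the pygpc implementation:
\begin{verbatim}
import numpy as np

def greedy_rows(Psi_pool, M, f, seed=None):
    # greedily pick M rows of Psi_pool that minimize the criterion f
    rng = np.random.default_rng(seed)
    added = [int(rng.integers(len(Psi_pool)))]
    while len(added) < M:
        best_j, best_f = None, np.inf
        for j in range(len(Psi_pool)):
            if j in added:
                continue
            fj = f(Psi_pool[added + [j]])  # criterion with row j added
            if fj < best_f:
                best_j, best_f = j, fj
        added.append(best_j)
    return added
\end{verbatim}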
\section{Results}
\label{sec:Results}
The respective performances of the sampling schemes are thoroughly investigated based on four different scenarios. We compare the accuracy of the resulting GPCE approximation with respect to the original model and investigate the convergence properties and recoverability of the different sampling schemes. Following comparable studies \cite{Alemazkoor.2018}, we used uniformly distributed random variables in all examples and constructed GPCE bases using Legendre polynomials. This is the most general case, as any other input distribution can (in principle) be emulated by modifying the post-processing stage of the GPCE. The examples compare the sampling schemes on three theoretical test functions: (i) The Ishigami Function representing a low-dimensional problem that requires a high approximation order; (ii) the six-dimensional Rosenbrock Function representing a problem of medium dimension and approximation order; and (iii) the 30-dimensional Linear Paired Product (LPP) Function \cite{Alemazkoor.2018} using a low-order approximation. Finally, we consider a practical example, which consists of an electrode model used to measure the impedance of biological tissues for different frequencies. All sampling schemes were implemented in the open-source python package pygpc \cite{Weise.2020}, and the corresponding scripts to run the benchmarks are provided in the supplemental material. The sparse coefficient vectors were determined using the LARS-Lasso solver from scipy \cite{Virtanen.2020}.
A summary of the GPCE parameters for each test case is given in Table \ref{tab:Testfunctions:Overview}. For each test function, we successively increased the approximation order, using a very high number of sampling points, until an NRMSD of $\varepsilon<10^{-5}$ was reached. In this way, we eliminated approximation order effects in order to focus on the convergence with respect to the number of sampling points.
For each sampling scheme and test case, we created a large set of sampling points that can be segmented into subsets of different sizes. For each subset, we computed the associated GPCE approximation and calculated the normalized root mean square deviation (NRMSD) between the GPCE approximation $\tilde{y}$ and the solution of the original model $y$ using an independent test set containing $N_t = 10{,}000$ random sampling points. The NRMSD is given by:
\begin{equation}
\varepsilon=\frac{\sqrt{\frac{1}{N_t}\sum_{i=1}^{N_t}\left(y_i-\tilde{y}_i\right)^2}}{\max(\mathbf{y}) - \min(\mathbf{y})}
\end{equation}
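In code, this error measure translates directly to:
\begin{verbatim}
import numpy as np

def nrmsd(y, y_tilde):
    # normalized RMS deviation between model and GPCE surrogate
    y, y_tilde = np.asarray(y), np.asarray(y_tilde)
    rmsd = np.sqrt(np.mean((y - y_tilde) ** 2))
    return rmsd / (np.max(y) - np.min(y))
\end{verbatim}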
We evaluated the average convergence of the NRMSD together with the success rate of each sampling scheme by considering $30$ repetitions. In addition, we quantified the convergence of the first two statistical moments, i.e. the mean and the standard deviation. The results are presented in the supplemental material. Reference values for the mean and standard deviation were obtained for each test function from $N=10^7$ evaluations of the original model functions.
\begin{table}[t]
\caption{Overview of numerical examples.}
\label{tab:Testfunctions:Overview}
\renewcommand{\arraystretch}{1.3}
\centering
\begin{tabular}{c c c c c c}
\hline
\textbf{Function} & \textbf{Problem} & \textbf{Dim.} & \textbf{Order} & \textbf{Int. Order} & \textbf{Basis functions} \\ \hline
Ishigami & Low dim., high order & $2$ & $12$ & $2$ & $91$ \\
Rosenbrock & Med. dim., med. order & $6$ & $5$ & $2$ & $181$ \\
LPP & High dim., low order & $30$ & $2$ & $2$ & $496$ \\
Electrode & Application example & $7$ & $5$ & $3$ & $596$ \\ \hline
\end{tabular}
\end{table}
\subsection{Low-dimensional high-order problem (Ishigami function)}
As a first test case, we investigate the performance of the different sampling schemes considering the Ishigami function \cite{Ishigami.1990}. It is often used as an example for uncertainty and sensitivity analysis because it exhibits strong nonlinearity and nonmonotonicity. It is given by:
\begin{equation}\label{eq:Ishigami}
y = \sin(x_1) + a \sin(x_2)^2 + b x_3^4 \sin(x_1)
\end{equation}
This example represents a low-dimensional problem requiring a high polynomial order to provide an accurate surrogate model. We defined $x_1$ and $x_2$ as uniformly distributed random variables and set $x_3=1$. The remaining constants are $a=7$ and $b=0.1$ according to \cite{Crestaux.2007} and \cite{Marrel.2009}. The approximation order was set to $p=12$, resulting in $N_c=91$ basis functions. We investigated the function in the interval $(-\pi, \pi)^2$ as shown in Fig. \ref{fig:Testfunctions:Ishigami}.
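For reproducibility, the test function reads:
\begin{verbatim}
import numpy as np

def ishigami(x1, x2, x3=1.0, a=7.0, b=0.1):
    # eq. (Ishigami); x3 is fixed to 1 in this study
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)
\end{verbatim}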
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.49\textwidth]{pics/Fig_Ishigami.pdf}
\caption[Ishigami]{Two-dimensional Ishigami function used to investigate the performance of different sampling schemes.}
\label{fig:Testfunctions:Ishigami}
\end{figure*}
The convergence results for the different sampling schemes are shown in Fig. \ref{fig:nrmsd:Ishigami}. It shows the dependence of the NRMSD $\varepsilon$ on the number of sampling points $N$ for the best sampling schemes from the LHS category (Fig. \ref{fig:nrmsd:Ishigami}(a)) and the L1-optimal category (Fig. \ref{fig:nrmsd:Ishigami}(b)). The error convergence graphs consist of box-plots with whiskers of the error over different sampling sizes, connected by lines representing the median. Standard random sampling has the largest boxes and a black line on top for reference. We defined a target error level of $10^{-3}$, indicated by a horizontal red line, which corresponds to a relative error between the GPCE approximation and the original model function of 0.1\%. Additionally, the mutual coherence of the sampling sets is shown in Fig. \ref{fig:nrmsd:Ishigami}(c) and (d). The success rates of the best sampling schemes from both categories and of standard random sampling are shown in Fig. \ref{fig:nrmsd:Ishigami}(e). The table in Fig. \ref{fig:nrmsd:Ishigami}(f) reports the median number of sampling points required to reach the target error, $\hat{N}_{\varepsilon}$, together with its standard deviation. We also evaluated the median number of sampling points required by the random sampling scheme when the GPCE coefficients are determined in the L2 sense using the Moore-Penrose pseudo-inverse. All other evaluations were performed using the LARS-Lasso solver. The sampling schemes with the lowest 99\% recovery sampling size $\hat{N}_{\varepsilon}^{(99\%)}$ of each category are marked in bold.
The sparsity of the model function greatly influences the reconstruction properties and hence the effectiveness of the sampling schemes. A GPCE approximation of the Ishigami function with an accuracy of $<10^{-5}$ requires $k=12$ out of the available $N_c=91$ coefficients ($13\%$).
Considering standard random sampling, the use of a conventional L2 solver requires $127$ sampling points to achieve a GPCE approximation with an error of less than $10^{-3}$ (see first row of Table in Fig. \ref{fig:nrmsd:Ishigami}(f)). In contrast, by using the L1 based LARS-Lasso solver, the number of required sampling points reduces to $31$, which serves as a baseline to compare the performance of the investigated sampling schemes.
By using the LHS (SC-ESE) sampling scheme, the number of sampling points could be reduced to $25$, a substantial relative saving of $13$\% compared to standard random sampling. In the category of L1-optimal sampling, D-coherence-optimal sampling schemes performed best, although they required slightly more samples to converge than standard random sampling.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics/Fig_results_Ishigami.pdf}
\caption{(a) and (b) Convergence of the NRMSD $\varepsilon$ with respect to the number of sampling points $N$ considering the Ishigami function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$; (f) average number of sampling points needed to reach the error threshold of $0.1\%$, a success rate of $95\%$, $99\%$, and the associated p-value comparing if the grids require significantly lower number of sampling points than standard random sampling.}
\label{fig:nrmsd:Ishigami}
\end{figure*}
In addition to the average convergence of the sampling schemes, their reliability was evaluated to assess their practical applicability. We quantified reliability by the number of sampling points required to achieve success rates of $95\%$ and $99\%$ in reaching the target error of $10^{-3}$, where the success rate is the fraction of the repetitions that reach the target error. Finally, we tested the hypothesis that the number of sampling points needed to reach the target error is significantly lower than for standard random sampling. The Shapiro-Wilk test indicated that the error threshold distributions are not normally distributed. For this reason, we used the one-tailed Mann-Whitney U-test to compute the corresponding p-values. The generally good performance of LHS (SC-ESE) sampling is underpinned by a p-value of $4.3\cdot10^{-5}$. D-coherence-optimal grids show a similar success rate and outperform standard random sampling by $9$ and $8$ sampling points (as measured by the numbers of sampling points required to achieve success rates of 95\% and 99\%, respectively), signifying higher stability compared to standard random sampling on the Ishigami function.
Alongside the NRMSD, we calculated the mutual coherence of the GPCE matrix for each sampling scheme, shown in Fig. \ref{fig:nrmsd:Ishigami}(c) and (d). It can be seen that the mutual coherence is very stable at around 0.7 for all LHS schemes at a sampling size of 25. In contrast, L1-optimal grids show large variation. The coherence-optimal designs yield GPCE matrices with a much higher mutual coherence, as defined in (\ref{eq:L1:mc}), than standard random sampling. D-coherence-optimal sampling reduces the mutual coherence the most at 25 samples; however, it shows very non-linear behavior for larger sampling sets, where the coherence increases strongly above the level of random sampling.
\subsection{Medium-dimensional medium-order problem (Rosenbrock function)} \label{sec:Rosenbrock}
As a second test case, we used the $d$-dimensional generalized Rosenbrock function, also referred to as the Valley or Banana function \cite{Dixon.1978}. It is given by:
\begin{equation}\label{eq:Rosenbrock}
y = \sum_{i=1}^{d-1} 100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2
\end{equation}
The Rosenbrock function is a popular test problem for gradient-based optimization algorithms \cite{Molga.2015, Picheny.2013}.
In Fig. \ref{fig:Testfunctions:Rosenbrock}, the function is shown in its two-dimensional form. The function is unimodal, and the global minimum lies in a narrow, parabolic valley. Even though this valley is easy to approximate, the function has more complex behavior close to the boundaries. To be consistent with our definitions of "low", "medium", and "high" dimensions and approximation orders in the current work, we classify this problem as medium-dimensional, requiring a moderate approximation order to yield an accurate surrogate model. Accordingly, we defined the number of dimensions to be $d=6$ and set the approximation order to $p=5$ to ensure an approximation with an NRMSD of $\varepsilon<10^{-5}$ when using a high number of samples. This results in $N_c=181$ basis functions. The generalized Rosenbrock function is also used by Alemazkoor and Meidani \cite{Alemazkoor.2018} to compare the performance of different L1-optimal sampling strategies. We used the same test function to make the results comparable and to better integrate our study into the previous literature.
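For reference, a direct implementation of (\ref{eq:Rosenbrock}):
\begin{verbatim}
import numpy as np

def rosenbrock(x):
    # d-dimensional generalized Rosenbrock function
    x = np.asarray(x)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                  + (x[:-1] - 1.0) ** 2)
\end{verbatim}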
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.49\textwidth]{pics/Fig_Rosenbrock.pdf}
\caption[Rosenbrock]{Rosenbrock function in its two-dimensional form. In the present analysis, the Rosenbrock function of dimension $d=6$ is investigated.}
\label{fig:Testfunctions:Rosenbrock}
\end{figure*}
The results of the error convergence for the investigated sampling schemes are shown in Fig. \ref{fig:nrmsd:Rosenbrock}(a) and (b). The mutual coherence of these algorithms is shown in Fig. \ref{fig:nrmsd:Rosenbrock}(c) and (d). The success rates of the best-performing sampling schemes from each category are shown in Fig. \ref{fig:nrmsd:Rosenbrock}(e), and the statistics are summarized in Table \ref{fig:nrmsd:Rosenbrock}(f).
The Rosenbrock function can be exactly replicated by the polynomial basis functions of the GPCE using $k=23$ out of the $N_c=181$ available coefficients ($13\%$).
Random sampling in combination with L2 minimization requires $190$ sampling points to reach the target error of $10^{-3}$. In contrast, only $76$ samples are required when using the L1-based LARS-Lasso solver, which again serves as the baseline for comparison. The LHS (SC-ESE) algorithm is substantially less efficient than the other two LHS designs (STD and MM) in this test case. With LHS (MM), a reduction of sampling points by roughly 8\% compared to standard random sampling is achieved. Of all investigated sampling schemes, MC-CC optimal grids performed best and required 13\% fewer sampling points than standard random grids. D-optimal sampling follows closely with a reduction of 8\%. For D-optimal sampling, however, there is a strong caveat that renders the value given in the table unreliable: the median of the NRMSD increases again by orders of magnitude once a sampling size of 75 is reached. It eventually drops below the error threshold again for sampling sizes close to 80, as seen in Fig. \ref{fig:nrmsd:Rosenbrock}(b), but this strong irregularity invalidates any statement regarding the significance of the error convergence for D-optimal sampling. The irregularity can also be observed in the mutual coherence, as discussed in the following.
In terms of success rate, the sampling schemes differ considerably. Standard random sampling requires $N_{sr}^{(95\%)}=92.5$ and $N_{sr}^{(99\%)}=112.5$ sampling points to achieve success rates of $95\%$ and $99\%$, respectively. Standard LHS designs are more stable and require only $N_{sr}^{(95\%)}=84.5$ and $N_{sr}^{(99\%)}=84.9$ sampling points, respectively. D-optimal grids are very reliable and require only $N_{sr}^{(95\%)}=73.5$ and $N_{sr}^{(99\%)}=78.4$ sampling points. MC-CC grids, however, are able to surpass all other L1-optimal grids by achieving $N_{sr}^{(99\%)}=78.1$.
The mutual coherence of the measurement matrices for each algorithm is shown in Fig. \ref{fig:nrmsd:Rosenbrock}(c) and (d). The LHS sampling schemes show the same behavior as in the case of the Ishigami function. This time, the sampling size of interest is larger, at about $80$ samples for the convergence of random sampling. In this region, LHS (SC-ESE) shows the lowest mutual coherence at about $0.45$. Except for D-optimal sampling, all L1 sampling schemes are able to reduce the mutual coherence below the level of random sampling. This time, MC-CC sampling emerges as the leading design in that regard, followed by MC and D-coherence-optimal designs. D-optimal designs show very irregular behavior for sampling sizes between $62$ and $68$ samples and between $72$ and $78$ samples, as can be seen in Fig. \ref{fig:nrmsd:Rosenbrock}(d). In both of these intervals, the mutual coherence drops briefly by about $0.3$ and then increases again to the initial level. Those two intervals also show tremendously lower NRMSD values than the surrounding sampling sizes in Fig. \ref{fig:nrmsd:Rosenbrock}(b).
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics/Fig_results_Rosenbrock.pdf}
\caption{(a) and (b) Convergence of the NRMSD $\varepsilon$ with respect to the number of sampling points $N$ considering the Rosenbrock function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$; (f) average number of sampling points needed to reach the error threshold of $0.1\%$, a success rate of $95\%$, $99\%$, and the associated p-value comparing if the grids require significantly lower number of sampling points than standard random sampling.}
\label{fig:nrmsd:Rosenbrock}
\end{figure*}
\subsection{High-dimensional low-order problem (LPP function)}
As a third test case, we used the $d$-dimensional Linear Paired Product (LPP) function \cite{Alemazkoor.2018}, which couples consecutive dimensions by pairwise products:
\begin{equation}\label{eq:LPP}
y = \sum_{i=1}^{d-1}x_i x_{i+1}
\end{equation}
It is a smooth polynomial of total degree two. In the present context, it is investigated with $d=30$ dimensions and an approximation order of $p=2$, resulting in $N_c=496$ basis functions. This test case represents high-dimensional problems requiring a low approximation order. The same test function is used by Alemazkoor and Meidani (2018) \cite{Alemazkoor.2018}, but with $d=20$ random variables.
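For reference, a direct implementation of (\ref{eq:LPP}):
\begin{verbatim}
import numpy as np

def lpp(x):
    # sum of products of consecutive coordinates;
    # d - 1 = 29 non-zero terms for d = 30
    x = np.asarray(x)
    return np.sum(x[:-1] * x[1:])
\end{verbatim}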
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.49\textwidth]{pics/Fig_Lin2Coupled.pdf}
\caption[LPP]{Linear Paired Product (LPP) function in its two-dimensional form. In the present analysis, we investigated it with $d=30$ dimensions.}
\label{fig:Testfunctions:Lin2Coupled}
\end{figure*}
The error convergence for the different sampling schemes is shown in Fig. \ref{fig:nrmsd:Lin2Coupled}(a) and (b). The mutual coherence is visualized in Fig. \ref{fig:nrmsd:Lin2Coupled}(c) and (d), the success rates of the best-performing sampling schemes from each category are shown in Fig. \ref{fig:nrmsd:Lin2Coupled}(e), and the statistics are summarized in the table in Fig. \ref{fig:nrmsd:Lin2Coupled}(f).
This test function can be exactly replicated by the polynomial basis functions of the GPCE using $k=29$ out of $N_c=496$ available coefficients ($6\%$).
In this case, random sampling requires $190$ sampling points using L2 minimization and $110$ samples using L1 minimization. LHS designs showed similar convergence behavior to standard random sampling, with no improvement for LHS (MM) and LHS (STD) sampling and an increase in samples for LHS (SC-ESE). The L1-optimal designs also did not manage to reduce the sampling count, with MC-CC and coherence-optimal sampling showing the best convergence rates; only D-optimal and D-coherence-optimal sampling increased the required number of samples by more than $4\%$, indicating very little variability between the sampling schemes. It can be observed that the variance in the range between $110$ and $120$ sampling points is very high for all sampling methods. The reason for this is that the LPP function is very sparse, and the convergence is mainly determined by the L1 solver. An additional sampling point can lead to an abrupt reduction of the approximation error and a "perfect" recovery, which is often observed with L1 minimization.
To achieve the $95\%$ and $99\%$ success rates, standard random sampling requires $N_{sr}^{(95\%)}=122.5$ and $N_{sr}^{(99\%)}=128.5$ samples. The LHS (MM) algorithm performs slightly better and requires $N_{sr}^{(95\%)}=119.4$ and $N_{sr}^{(99\%)}=127$ samples. L1-optimal sampling schemes show considerably weaker stability for this test function. Here, only D-optimal designs reach the range of standard random sampling and the LHS variants, with $N_{sr}^{(95\%)}=128.1$ and $N_{sr}^{(99\%)}=129.6$.
The mutual coherences of the measurement matrices for this test case are shown in Fig. \ref{fig:nrmsd:Lin2Coupled}(c) and (d). LHS (SC-ESE) shows the lowest coherence in the category of LHS grids, very similar to the previous example. L1 (MC) and L1 (MC-CC) designs display the lowest mutual coherence among the L1-optimal designs, while the remaining L1-optimal sampling schemes are densely packed in the region slightly below random sampling.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics/Fig_results_Lin2Coupled.pdf}
\caption{(a) and (b) Convergence of the NRMSD $\varepsilon$ with respect to the number of sampling points $N$ considering the Linear Paired Product (LPP) function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$; (f) average number of sampling points needed to reach the error threshold of $0.1\%$, a success rate of $95\%$, $99\%$, and the associated p-value comparing if the grids require significantly lower number of sampling points than standard random sampling.}
\label{fig:nrmsd:Lin2Coupled}
\end{figure*}
\subsection{Practical example (Probe impedance model)}
The last test case is an application example from electrical engineering. The aim is to estimate the sensitivity of the intrinsic impedance of a probe used to measure the impedance of biological tissues at different frequencies. The model is shown in Fig. \ref{fig:Testfunctions:Electrode}(a) and consists of a Randles circuit that was modified according to the coaxial geometry of the electrode. The lumped parameters model the different contributions of the physical phenomena. The resistance $R_s$ models the contribution of the serial resistance of the electrolyte into which the electrode is dipped. The constant phase element $Q_{dl}$ models the distributed double layer capacitance of the electrode. The resistance $R_{ct}$ models the charge transfer resistance between the electrode and the electrolyte. The elements $Q_d$ and $R_d$ model the diffusion of charge carriers and other particles towards the electrode surface. The constant phase elements $Q_{dl}$ and $Q_d$ have impedances of $1/\left(Q_{dl}(j\omega)^{\alpha_{dl}}\right)$ and $1/\left(Q_{d}(j\omega)^{\alpha_{d}}\right)$, respectively. The electrode impedance, according to the Randles circuit shown in Fig. \ref{fig:Testfunctions:Electrode}(a), is given by:
\begin{equation}\label{eq:electrode}
\bar{Z}(\omega) = R_s + \left(Q_{dl}(j\omega)^{\alpha_{dl}} + \frac{1}{R_{ct}+\frac{R_d}{1+R_d Q_d (j\omega)^{\alpha_d}}}\right)^{-1}
\end{equation}
The impedance is complex-valued and depends on the angular frequency $\omega=2 \pi f$, which acts as the equivalent of the deterministic parameter $\mathbf{r}$ from (\ref{eq:ua:gPC_coeff_form}). A separate GPCE is constructed for each frequency. In this analysis, the frequency is varied between $1$ Hz and $1$ GHz with $1000$ logarithmically spaced points. The real and imaginary parts are treated independently. The application example thus consists of $2000$ QOIs, for each of which a separate GPCE is performed. The approximation error is estimated by averaging the NRMSD over all QOIs. The impedance of the equivalent circuit depends on seven parameters, which are treated as uncertain: $(R_s, R_{ct}, R_d, Q_d, \alpha_d, Q_{dl}, \alpha_{dl})$. They are modeled as uniformly distributed random variables with a deviation of $\pm10$\% from their estimated mean values, with the exception of $R_s$, which was defined between $0\,\Omega$ and $1\,\mathrm{k}\Omega$. The parameters were estimated by fitting the model to impedance measurements from a serial dilution experiment of KCl with different concentrations. The parameter limits are summarized in Table \ref{tab:Testfunctions:Electrode}.
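For reference, (\ref{eq:electrode}) translates directly to code (the function name is ours):
\begin{verbatim}
import numpy as np

def electrode_impedance(f, Rs, Rct, Rd, Qd, alpha_d, Qdl, alpha_dl):
    # complex impedance of the modified Randles circuit at frequency f
    jw = 1j * 2.0 * np.pi * f
    Z_diff = Rd / (1.0 + Rd * Qd * jw ** alpha_d)   # diffusion branch
    return Rs + 1.0 / (Qdl * jw ** alpha_dl + 1.0 / (Rct + Z_diff))
\end{verbatim}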
In preliminary investigations, we successively increased the approximation order until we reached an accurate surrogate model with an NRMSD of $\varepsilon<10^{-5}$. It was found that the parameters in this test problem interact strongly with each other, which explains the rather high approximation order compared to the smooth progression of the real and imaginary parts in the cross sections shown in Fig. \ref{fig:Testfunctions:Electrode}. When, for example, five parameters of first order interact with each other, the maximum GPCE order is reached, and the corresponding coefficient can be as significant as, for example, a fifth-order approximation of a single parameter.
\begin{table}[t]
\caption{Estimated mean values of the electrode impedance model, determined from calibration experiments, and limits of parameters.}
\label{tab:Testfunctions:Electrode}
\renewcommand{\arraystretch}{1.3}
\centering
\begin{tabular}{c c c c}
\hline
\textbf{Parameter} & \textbf{Min.} & \textbf{Mean} & \textbf{Max.} \\ \hline
$R_s$ & $0$ $\Omega$ & $0.5$ k$\Omega$ & $1$ k$\Omega$ \\
$R_{ct}$ & $9$ k$\Omega$ & $10$ k$\Omega$ & $11$ k$\Omega$ \\
$R_d$ & $108$ k$\Omega$ & $120$ k$\Omega$ & $132$ k$\Omega$ \\
$Q_{d}$ & $3.6 \cdot 10^{-10}$ F & $4.0 \cdot 10^{-10}$ F & $4.4 \cdot 10^{-10}$ F \\
$Q_{dl}$ & $5.4 \cdot 10^{-7}$ F & $6 \cdot 10^{-7}$ F & $6.6 \cdot 10^{-7}$ F \\
$\alpha_d$ & $0.855$ & $0.95$ & $1.0$ \\
$\alpha_{dl}$ & $0.603$ & $0.67$ & $0.737$ \\ \hline
\end{tabular}
\end{table}
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.9\textwidth]{pics/Fig_Electrode.pdf}
\caption[Electrode model]{Electrode impedance model: (a) Randles circuit; (b) Real part and (c) imaginary part of the electrode impedance as a function of $R_s$ and $\alpha_{dl}$. The remaining parameters are set to their respective mean values.}
\label{fig:Testfunctions:Electrode}
\end{figure*}
The results of the error convergence are shown in Fig. \ref{fig:nrmsd:ElectrodeModel}(a) and (b), and the corresponding mutual coherences are shown in Fig. \ref{fig:nrmsd:ElectrodeModel}(c) and (d). The success rates of the best-performing sampling schemes from each category are shown in Fig. \ref{fig:nrmsd:ElectrodeModel}(e), and the statistics are summarized in the table in Fig. \ref{fig:nrmsd:ElectrodeModel}(f).
The practical example consisting of the probe impedance model can be considered as non-sparse. It requires $k=500$ out of $N_c=596$ coefficients ($84\%$) to reach an accuracy of $<10^{-5}$.
Random sampling requires $269$ samples to construct an accurate surrogate model using conventional L2 minimization. With L1 minimization, the number of samples reduces to $82$. The LHS (SC-ESE) sampling scheme shows very good performance, requiring only $70.2$ samples to reach the target error, which corresponds to a decrease of $14$\%. Among the L1-optimal sampling schemes, only MC-CC sampling managed to improve the median, by $2$ samples, yet these schemes display a severe lack of stability, as discussed next.
Random grids require $N_{sr}^{(95\%)}=90.7$ and $N_{sr}^{(99\%)}=93.8$ samples to reach the desired success rates. Besides their good average convergence, LHS (SC-ESE) grids show significantly better stability, requiring only $N_{sr}^{(95\%)}=71.9$ and $N_{sr}^{(99\%)}=84.4$ samples, which corresponds to a decrease of $10$\% for both. A general lack of robust recovery is found for the pure L1-optimal sampling schemes: the numbers of samples they require to reach the success-rate targets exceed those of random sampling, much like their medians, rendering them inefficient on this test case.
Fig. \ref{fig:nrmsd:ElectrodeModel}(c) and (d) show the mutual coherences for the electrode impedance model. LHS (SC-ESE) still has the lowest mutual coherence in its category and may even show a lower mutual coherence than any L1-optimal design in single test runs. As seen previously, the lowest coherence among the L1-optimal schemes is observed for MC-CC and MC sampling. Both of them form the lower envelope of L1-optimal sampling in the region of $80$ samples; MC sampling rises above D- and D-coherence-optimal sampling for larger sampling sizes.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics/Fig_results_Electrode.pdf}
\caption{(a) and (b) Convergence of the NRMSD $\varepsilon$ with respect to the number of sampling points $N$ considering the probe impedance model. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) mutual coherence of the gPC matrix; (e) success rate of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$; (f) average number of sampling points needed to reach the error threshold of $0.1\%$, a success rate of $95\%$, $99\%$, and the associated p-value comparing if the grids require significantly lower number of sampling points than standard random sampling.}
\label{fig:nrmsd:ElectrodeModel}
\end{figure*}
\subsection{Average performance over all test problems}
The examined sampling schemes showed different strengths and weaknesses depending on the test problem. In order to make general statements about their performance, we weighted the error-threshold crossings $\hat{N}_{\varepsilon}$ and the success-rate measures $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ with the corresponding values of random sampling and averaged the results over all investigated test problems. The results are shown in Table \ref{tab:results:average}. It can be observed that LHS (SC-ESE) grids outperform random sampling in terms of average convergence and success rate for two of the test functions while maintaining a very competitive $N_{sr}^{(99\%)}$ on all test functions. They share this quality with the other LHS grids; however, the SC-ESE variant achieves the largest sample reduction of $14.6$\% with respect to the $\hat{N}_{\varepsilon}$ measure and about $35$\% on the two success-rate measures, closely trailed by the other two LHS sampling schemes. For the two higher-dimensional examples, however, LHS (STD) and LHS (MM) clearly show the most stable recovery success, as seen in the $N_{sr}^{(99\%)}$ values, with a sample reduction of $15$\% on average. L1-optimal sampling schemes achieved a significant sample reduction for the Rosenbrock function; specifically, MC-CC sampling is unrivalled with respect to the $\hat{N}_{\varepsilon}$ decrease, although its success-rate measures are paralleled by LHS sampling. In terms of stability, only D-coherence-optimal sampling shows some robustness, as its largest increase in samples for the $N_{sr}^{(99\%)}$ measure is $52$\%, observed for the electrode model. D-optimal sampling is excluded from this discussion since it showed a deeper-lying irregularity in the error convergence for the Rosenbrock function, as discussed in Section \ref{sec:Rosenbrock}. Surprisingly, the remaining L1-optimal grids, namely MC, MC-CC and coherence-optimal sampling, all exhibit sample increases of $100$\% or more in the $N_{sr}^{(99\%)}$ measure for at least one of the tested examples. For the test functions investigated with a high-order GPCE approximation, no sampling scheme except MC-CC showed a reduction of samples, and all of them showed large increases in the number of samples needed to reach the success-rate targets.
\begin{table}[!htbp]
\caption{Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach an NRMSD of $10^{-3}$ with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach success rates of $95\%$ and $99\%$, respectively.}
\label{tab:results:average}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all testfunctions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.869 & 0.615 & 0.674 & 1.246 & 1.068 & 0.887 & 1.018 & 1 & 1 & 0.856 & 0.898 & 0.900 & 0.997 & 0.895 & 0.865 \\
LHS (MM) & 0.872 & 0.660 & 0.712 & 0.919 & 0.827 & 0.691 & 1 & 0.972 & 0.966 & 1.027 & 1.019 & 0.999 & 0.954 & 0.870 & 0.842 \\
LHS (STD) & 0.872 & 0.649 & 0.706 & 0.921 & 0.798 & 0.670 & 1 & 0.987 & 0.983 & 1.006 & 1.030 & 1.017 & 0.950 & 0.866 & 0.844 \\ \hline\hline
L1 (MC) & 1.131 & 0.887 & 0.956 & 0.961 & 1.049 & 1.378 & 1.037 & 1.282 & - & 1.039 & 2.226 & - & 1.042 & 1.361 & 1.167 \\
L1 (MC-CC) & 1.257 & 0.969 & 1.000 & 0.868 & 0.789 & 0.670 & 1.028 & 1.235 & 1.449 & 0.972 & 1.223 & 2.187 & 1.031 & 1.054 & 1.327 \\
L1 (D) & 1.503 & 1.026 & 1.048 & 0.92 & 0.784 & 0.698 & 1.091 & 1.079 & 1.082 & 1.429 & 1.427 & 1.433 & 1.236 & 1.079 & 1.065 \\
L1 (D-COH) & 1.076 & 0.820 & 0.847 & 0.999 & 0.941 & 0.797 & 1.082 & 1.154 & 1.164 & 1.414 & 1.526 & 1.518 & 1.143 & 1.110 & 1.082 \\
L1 (CO) & 1.882 & 2.010 & 2.183 & 0.955 & 0.892 & 0.751 & 1.009 & 1.132 & 1.160 & 1.05 & 1.289 & 1.351 & 1.224 & 1.331 & 1.361 \\ \hline
\end{tabular}
\end{table}
\section{Discussion}\label{sec:Discussion}
We thoroughly investigated the convergence properties of space-filling sampling schemes, L1-optimal sampling schemes minimizing the mutual coherence of the GPCE matrix, hybrid versions of both, and optimal designs of experiment by considering different classes of problems. We compared their performance against standard random sampling and found great differences between the sampling schemes. To the best of our knowledge, this study is currently the most comprehensive comparison of different sampling schemes for the GPCE.
Very consistently, a great reduction in the number of sampling points was observed for all test cases when using L1 minimization compared to least-squares approximations. The reason for this is that the GPCE basis in its traditional form of construction is almost always over-complete, and oftentimes not all basis functions and parameter interactions are required or appropriate for modeling the underlying transfer function. For this reason, the use of L1 minimization algorithms in the context of GPCE is strongly recommended as long as the approximation error is verified by an independent test set or leave-one-out cross validation.
The first three test cases can be considered ``sparse'' in the framework of GPCE. However, their ratios of non-zero coefficients are high in comparison to typical values in signal processing and do not strictly meet the definition of sparse signals, where $||\mathbf{c}||_0 \ll N_c$. The sparse character is reflected in the shape of the convergence curves, which show a steep drop in the approximation error once a certain number of sampling points is reached. Non-sparse solutions (as in the case of the probe impedance model) instead show a gradual exponential decrease of the approximation error.
The Ishigami function represents a class of problems in which the QOI exhibits comparatively complex behavior within the parameter space. LHS designs, and especially LHS (SC-ESE), outperform all other investigated sampling schemes by exploiting their regular and space-filling properties, which ensure that the model characteristics are covered over the whole sampling space. Similar benefits were observed for the probe impedance model.
D-optimal and D-coherence optimal designs were investigated in \cite{Hadigol.2018} by comparing their performance to standard random sampling using L2 minimization. In that context, the number of chosen sampling points had to be considerably larger than the number of basis functions, i.e. $N_g \gg N_c$. In the present analysis, we relaxed this constraint and decreased the number of sampling points below the number of basis functions, $N_g < N_c$. We observed improved performance for the first three test cases (Ishigami function, Rosenbrock function, linear paired product function), where the relative number of non-zero coefficients is between $6$--$13\%$. In the case of the non-sparse probe impedance model, with a ratio of non-zero coefficients of $84\%$, D-optimal and D-coherence optimal designs were less efficient than standard random sampling.
In our analysis, we did not find a relationship between mutual coherence and error convergence. Good examples are the excellent convergence of the LHS algorithms despite their comparatively high mutual coherence in the case of the Ishigami function (Fig. \ref{fig:nrmsd:Ishigami}) and the comparatively late convergence of the mutual-coherence-optimized sampling schemes in the case of the Rosenbrock function (Fig. \ref{fig:nrmsd:Rosenbrock}). This is in accordance with the observations reported by Alemazkoor et al. (2018) \cite{Alemazkoor.2018}. Consistent with this, it has been observed that minimizing the maximum cross-correlation does not necessarily improve the recovery accuracy of compressive sampling.
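For completeness, the mutual coherence used throughout this comparison can be computed directly from the GPCE matrix. The following sketch (our notation, assuming a real-valued matrix with non-zero columns) shows the quantity minimized by the MC-optimal schemes:
\begin{verbatim}
import numpy as np

def mutual_coherence(psi: np.ndarray) -> float:
    # Largest absolute inner product between distinct, normalized
    # columns of the measurement matrix -- the quantity minimized by
    # the MC-optimal sampling schemes.
    cols = psi / np.linalg.norm(psi, axis=0)
    gram = np.abs(cols.T @ cols)
    np.fill_diagonal(gram, 0.0)
    return float(gram.max())
\end{verbatim}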
It can be concluded that both the properties of the transfer function and the sparsity of the model function greatly influence the reconstruction properties and hence the effectiveness of the sampling schemes.
All numerical examples used uniformly distributed random inputs and Legendre polynomials, while in many real-world applications different distributions may have to be assigned to each random variable. This requires the use of different polynomial basis functions, which would change the properties of the GPCE matrix and can have a major influence on the performance of L1-optimal sampling schemes. In contrast, we expect fewer differences for LHS based grids because they only depend on the shape of the input pdfs and not additionally on the basis functions.
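As a brief illustration of this dependence (a sketch of the Wiener--Askey correspondence, not code used in this study), uniform inputs pair with Legendre polynomials, whereas normally distributed inputs pair with probabilists' Hermite polynomials, yielding structurally different GPCE matrices:
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legvander
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)
order = 5

# Uniform inputs -> Legendre basis (used throughout this study).
x_uniform = rng.uniform(-1.0, 1.0, 200)
psi_uniform = legvander(x_uniform, order)

# Gaussian inputs -> probabilists' Hermite basis; the column
# statistics (and hence, e.g., the coherence) of the GPCE matrix
# change accordingly.
x_gauss = rng.standard_normal(200)
psi_gauss = hermevander(x_gauss, order)
\end{verbatim}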
A large interest in the field of compressed sensing lies in identifying a lower bound on the sampling size needed for an accurate reconstruction. This bound depends on two factors: the properties of the measurement matrix and the sparsity of the solution. The formulation of general statements for GPCE matrices, which are constructed dynamically and depend on the number of random variables and the type of input pdfs, requires substantial preliminary analysis of their structure and type, as well as additional work on the topic of sparsity estimation. Both topics are the subject of current and future research.
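A classic example of such a statement is the coherence-based recovery condition (see, e.g., \cite{Elad.2007}), which guarantees recovery of an $s$-sparse coefficient vector whenever $s < (1 + 1/\mu)/2$; for structured GPCE matrices this worst-case guarantee is typically very pessimistic. A one-line sketch:
\begin{verbatim}
def guaranteed_sparsity(mu: float) -> float:
    # Coherence-based worst-case bound: recovery of an s-sparse
    # coefficient vector is guaranteed for s < (1 + 1/mu) / 2.
    return 0.5 * (1.0 + 1.0 / mu)
\end{verbatim}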
The sampling methods analyzed here are based on the premise that the entire grid is created prior to the calculations. The importance of covering the relevant features of the quantity of interest in the sampling space suggests the development of adaptive sampling methods. An iterative construction of the set of sampling points could exploit the information contained in the already calculated function values; for example, the gradients at the sampling points could be used to refine regions with high spatial frequencies. We believe that such methods have great potential to further reduce the number of sampling points; a minimal sketch of this idea is given below.
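The following sketch outlines one possible gradient-driven refinement loop (an assumption about how such a method could look, not an implemented scheme). It assumes a one-dimensional parameter domain and a hypothetical helper \texttt{surrogate\_fit} that returns a callable surrogate for the current data:
\begin{verbatim}
import numpy as np

def adaptive_refine(model, surrogate_fit, x_init, n_add, rng):
    # Sketch: repeatedly add the candidate point where the current
    # surrogate varies fastest (finite-difference gradient magnitude
    # as a cheap proxy for regions with high spatial frequencies).
    x = list(x_init)
    y = [model(xi) for xi in x]
    eps = 1e-3
    for _ in range(n_add):
        surrogate = surrogate_fit(np.array(x), np.array(y))
        cand = rng.uniform(-1.0 + eps, 1.0 - eps, 256)
        grad = np.abs(surrogate(cand + eps)
                      - surrogate(cand - eps)) / (2 * eps)
        x.append(cand[np.argmax(grad)])
        y.append(model(x[-1]))
    return np.array(x), np.array(y)
\end{verbatim}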
Overall, the convergence rate as well as the reliability could be increased considerably by using LHS or D-coherence optimal sampling schemes. Even though LHS (SC-ESE) was enhanced in this paper to perform better in corner regions, it still performs suboptimally when function features lie close to the borders of the sampling region. In further investigations, it may become crucial to address the systematic bias in the optimization criterion of the ESE algorithm by using the periodic distance, as shown in section \ref{subsec:phi-limit}, to fully remedy this shortcoming.
The advantages of the more advanced sampling methods over standard random sampling were even more pronounced when considering the first two statistical moments, i.e., the mean and standard deviation (see supplemental material).
We minimized the maximum cross-correlation of the measurement matrix $\mathbf{\Psi}$, but this yielded few benefits in reducing the number of sampling points needed to determine a GPCE approximation \cite{Li.1997, Elad.2007}. We could not confirm that L1-optimal sampling schemes are superior to their competitors. It has also been repeatedly shown that $\mu$ only controls recovery in a worst-case scenario and therefore provides a pessimistic bound on the expected error \cite{Elad.2007}; in this sense, we also could not observe a direct relationship between mutual coherence and error convergence. From our results, we conclude that the sampling points should be chosen to capture all properties of the model function under investigation rather than to optimize the properties of the GPCE matrix; the former yields an accurate surrogate model more efficiently. This contrasts with the results reported by Alemazkoor et al. (2018) \cite{Alemazkoor.2018} for the LPP function. We could reproduce their results for mutual-coherence-optimal sampling, but random sampling performed much better in our study than they reported. On that basis, they had argued for applying L1-optimized sampling schemes to high-dimensional sparse functions; they investigated the LPP function considering $20$ dimensions, whereas we considered $30$ random variables. Even so, we could not reproduce their results for random sampling in this test case. A possible reason is the use of a different L1 solver, which can change the effectiveness of individual algorithms and should therefore be taken into consideration when comparing results.
\section*{Acknowledgments}
This work was supported by the German Science Foundation (DFG) (WE 59851/2); The NVIDIA Corporation (donation of one Titan Xp graphics card to KW). We acknowledge support for the publication costs by the Open Access Publication Fund of the Technische Universit\"at Ilmenau.
\section{NRMSD convergence results considering a 10\% error threshold}
In the following, the convergence of the NRMSD and the success rates are examined considering an error threshold of 10\%.
\subsection{Low-dimensional high-order problem (Ishigami function)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of 10\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 20.5$\pm$7.1 & 40.5 & 41.7 & 1 \\ \hline
LHS (SC-ESE) & 14.9$\pm$1.8 & 18.2 & 18.9 & $1.3\cdot 10^{-08}$ \\
LHS (MM) & 16.8$\pm$3.0 & 20.8 & 23.1 & $2.6\cdot 10^{-05}$ \\
LHS (STD) & 17.6$\pm$3.3 & 24.0 & 25.7 & $2.9\cdot 10^{-03}$ \\ \hline
L1 (MC) & 21.2$\pm$4.7 & 30.0 & 33.4 & 0.72 \\
L1 (MC-CC) & 21.9$\pm$4.7 & 30.5 & 33.8 & 0.82 \\
L1 (D) & 25.3$\pm$4.0 & 34.3 & 34.9 & 1.0 \\
L1 (D-COH) & 20.7$\pm$2.5 & 23.2 & 23.8 & 0.21 \\
L1 (CO) & 41.4$\pm$9.9 & 46.5 & 65.2 & 1 \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Medium-dimensional medium-order problem (Rosenbrock function)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of $10$\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 51.3$\pm$8.9 & 62.0 & 64.7 & 1.0 \\ \hline
LHS (SC-ESE) & 50.7$\pm$5.3 & 58.5 & 60.4 & 0.36 \\
LHS (MM) & 46.1$\pm$4.7 & 54.5 & 58.8 & $1.2\cdot 10^{-02}$ \\
LHS (STD) & 47.3$\pm$4.3 & 52.8 & 55.1 & $2.9\cdot 10^{-02}$ \\ \hline
L1 (MC) & 49.7$\pm$11.0 & 65.3 & 67.7 & 0.31 \\
L1 (MC-CC) & 41.5$\pm$6.4 & 50.8 & 51.7 & $1.4\cdot 10^{-04}$ \\
L1 (D) & 41.0$\pm$5.4 & 50.8 & 51.7 & $6.6\cdot 10^{-05}$ \\
L1 (D-COH) & 37.6$\pm$5.1 & 44.0 & 49.9 & $9.3\cdot 10^{-07}$ \\
L1 (CO) & 45.9$\pm$7.7 & 56.5 & 61.2 & $2.1\cdot 10^{-02}$ \\ \hline
\end{tabular}
\end{table}
\subsection{High-dimensional low-order problem (LPP function)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of 10\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 75.4$\pm$8.8 & 82.5 & 87.9 & 1.0 \\ \hline
LHS (SC-ESE) & 70.0$\pm$10.4 & 88.5 & 89.7 & 0.49 \\
LHS (MM) & 72.3$\pm$11.5 & 86.0 & 90.8 & 0.17 \\
LHS (STD) & 73.8$\pm$9.6 & 86.3 & 89.3 & 0.30 \\ \hline
L1 (MC) & 74.4$\pm$12.3 & 90.2 & 96.6 & 0.62 \\
L1 (MC-CC) & 71.5$\pm$12.2 & 90.5 & 100.4 & 0.21 \\
L1 (D) & 67.4$\pm$10.1 & 84.8 & 89.2 & $3.4\cdot 10^{-02}$ \\
L1 (D-COH) & 67.3$\pm$12.0 & 90.5 & 95.1 & $5.6\cdot 10^{-02}$ \\
L1 (CO) & 73.7$\pm$11.9 & 90.5 & 92.4 & 0.52 \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Practical example (electrode impedance model)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of 10\% considering the electrode impedance model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:ElectrodeModel_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 19.4$\pm$1.2 & 21.0 & 24.1 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 20.0$\pm$1.7 & 23.8 & 24.7 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 18.8$\pm$0.3 & 19.0 & 19.7 & $4.3\cdot 10^{-04}$ \\
LHS (STD) & 19.0$\pm$0.3 & 19.5 & 19.9 & $8.1\cdot 10^{-03}$ \\ \hline
L1 (MC) & 19.8$\pm$4.6 & 31.5 & 33.4 & $9.9\cdot 10^{-01}$ \\
L1 (MC-CC) & 19.5$\pm$7.5 & 39.5 & 49.1 & $8.8\cdot 10^{-01}$ \\
L1 (D) & 19.7$\pm$3.9 & 29.8 & 31.4 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 19.8$\pm$4.1 & 31.0 & 31.8 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 19.5$\pm$4.3 & 31.5 & 33.4 & $8.7\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\subsection{Average performance over all test problems (10\%)}
\begin{table}[!htbp]
\caption{Number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes needed to reach an NRMSD of $10\%$, given with respect to standard random sampling using the LARS-Lasso solver (L1), for each test function and averaged over all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the relative number of samples needed to reach success rates of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.728 & 0.451 & 0.452 & 0.987 & 0.944 & 0.934 & 0.928 & 1.073 & 1.020 & 1.029 & 1.135 & 1.025 & 0.932 & 0.901 & 0.858 \\
LHS (MM) & 0.818 & 0.514 & 0.554 & 0.898 & 0.879 & 0.909 & 0.959 & 1.042 & 1.033 & 0.971 & 0.904 & 0.817 & 0.912 & 0.835 & 0.828 \\
LHS (STD) & 0.859 & 0.593 & 0.616 & 0.921 & 0.851 & 0.851 & 0.979 & 1.045 & 1.015 & 0.981 & 0.929 & 0.826 & 0.935 & 0.855 & 0.844 \\ \hline\hline
L1 (MC) & 1.030 & 0.741 & 0.801 & 0.968 & 1.05 & 1.046 & 0.987 & 1.094 & 1.099 & 1.018 & 1.500 & 1.386 & 1.001 & 1.096 & 1.083 \\
L1 (MC-CC) & 1.065 & 0.753 & 0.811 & 0.809 & 0.819 & 0.799 & 0.949 & 1.097 & 1.142 & 1.006 & 1.881 & 2.037 & 0.957 & 1.138 & 1.215 \\
L1 (D) & 1.232 & 0.846 & 0.836 & 0.798 & 0.819 & 0.799 & 0.894 & 1.027 & 1.015 & 1.016 & 1.417 & 1.303 & 0.985 & 1.027 & 0.988 \\
L1 (D-COH) & 1.007 & 0.574 & 0.572 & 0.733 & 0.710 & 0.771 & 0.893 & 1.097 & 1.082 & 1.022 & 1.476 & 1.320 & 0.914 & 0.964 & 0.921 \\
L1 (CO) & 2.015 & 1.148 & 1.564 & 0.894 & 0.911 & 0.946 & 0.978 & 1.097 & 1.051 & 1.005 & 1.500 & 1.386 & 1.223 & 1.164 & 1.237 \\ \hline
\end{tabular}
\end{table}
\section{NRMSD convergence results considering a 1\% error threshold}
In the following, the convergence of the NRMSD and the success rates are examined considering an error threshold of 1\%.
\subsection{Low-dimensional high-order problem (Ishigami function)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of 1\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 24.7$\pm$8.3 & 44.0 & 48.0 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 23.1$\pm$2.8 & 24.8 & 27.8 & $7.1\cdot 10^{-06}$ \\
LHS (MM) & 24.1$\pm$4.0 & 28.6 & 29.7 & $2.9\cdot 10^{-03}$ \\
LHS (STD) & 24.5$\pm$3.0 & 29.3 & 29.9 & $1.3\cdot 10^{-02}$ \\ \hline
L1 (MC) & 27.5$\pm$5.9 & 38.5 & 41.8 & $8.9\cdot 10^{-01}$ \\
L1 (MC-CC) & 29.7$\pm$6.2 & 39.5 & 44.2 & $1.0\cdot 10^{0}$ \\
L1 (D) & 39.3$\pm$5.6 & 43.5 & 45.7 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 24.7$\pm$4.4 & 35.0 & 37.4 & $6.5\cdot 10^{-01}$ \\
L1 (CO) & 49.3$\pm$12.2 & 73.0 & 86.1 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\subsection{Medium-dimensional medium-order problem (Rosenbrock function)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of 1\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 72.2$\pm$10.7 & 90.0 & 108.1 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 91.4$\pm$4.9 & 97.6 & 98.7 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 65.8$\pm$4.5 & 75.5 & 76.7 & $2.1\cdot 10^{-04}$ \\
LHS (STD) & 69.3$\pm$3.8 & 72.8 & 74.4 & $4.7\cdot 10^{-03}$ \\ \hline
L1 (MC) & 71.7$\pm$18.3 & 93.0 & 143.7 & $4.7\cdot 10^{-01}$ \\
L1 (MC-CC) & 65.4$\pm$3.9 & 69.2 & 69.8 & $2.8\cdot 10^{-07}$ \\
L1 (D) & 61.3$\pm$6.2 & 68.9 & 75.3 & $1.7\cdot 10^{-06}$ \\
L1 (D-COH) & 74.9$\pm$5.9 & 82.5 & 85.8 & $6.5\cdot 10^{-01}$ \\
L1 (CO) & 69.6$\pm$6.6 & 81.5 & 83.4 & $7.3\cdot 10^{-02}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{High-dimensional low-order problem (LPP function)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of 1\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 109.4$\pm$4.8 & 117.0 & 119.4 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 111.7$\pm$5.1 & 117.0 & 119.4 & $9.5\cdot 10^{-01}$ \\
LHS (MM) & 109.4$\pm$3.9 & 113.8 & 115.4 & $3.8\cdot 10^{-01}$ \\
LHS (STD) & 109.5$\pm$4.2 & 115.5 & 117.4 & $8.0\cdot 10^{-01}$ \\ \hline
L1 (MC) & 113.7$\pm$12.0 & 147.5 & - & $9.9\cdot 10^{-01}$ \\
L1 (MC-CC) & 112.7$\pm$15.8 & 141.0 & 167.0 & $9.3\cdot 10^{-01}$ \\
L1 (D) & 119.1$\pm$6.7 & 125.2 & 128.2 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 118.7$\pm$10.0 & 133.5 & 137.1 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 110.5$\pm$11.1 & 133.5 & 138.7 & $7.6\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\subsection{Practical example (electrode impedance model)}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a NRMSD of 1\% considering the electrode impedance model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:ElectrodeModel_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 42.0$\pm$4.0 & 48.8 & 50.4 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 39.4$\pm$1.5 & 42.7 & 43.7 & $3.2\cdot 10^{-03}$ \\
LHS (MM) & 39.3$\pm$3.4 & 46.5 & 46.9 & $3.6\cdot 10^{-02}$ \\
LHS (STD) & 39.4$\pm$3.1 & 46.2 & 46.8 & $3.4\cdot 10^{-02}$ \\ \hline
L1 (MC) & 43.8$\pm$7.2 & 60.0 & 66.1 & $8.5\cdot 10^{-01}$ \\
L1 (MC-CC) & 39.6$\pm$26.0 & 72.0 & 149.0 & $2.8\cdot 10^{-01}$ \\
L1 (D) & 44.5$\pm$8.1 & 61.0 & 66.2 & $9.9\cdot 10^{-01}$ \\
L1 (D-COH) & 48.6$\pm$17.4 & 90.0 & 91.7 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 44.6$\pm$7.8 & 60.0 & 69.7 & $8.6\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Average performance over all test problems (1\%)}
\begin{table}[!htbp]
\caption{Number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes needed to reach an NRMSD of $1\%$, given with respect to standard random sampling using the LARS-Lasso solver (L1), for each test function and averaged over all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the relative number of samples needed to reach success rates of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.934 & 0.564 & 0.579 & 1.266 & 1.084 & 0.913 & 1.021 & 1 & 1 & 0.938 & 0.875 & 0.867 & 1.040 & 0.881 & 0.840 \\
LHS (MM) & 0.976 & 0.650 & 0.619 & 0.911 & 0.839 & 0.710 & 1 & 0.972 & 0.966 & 0.935 & 0.954 & 0.931 & 0.956 & 0.854 & 0.806 \\
LHS (STD) & 0.992 & 0.665 & 0.622 & 0.961 & 0.809 & 0.688 & 1.001 & 0.987 & 0.983 & 0.938 & 0.949 & 0.930 & 0.973 & 0.866 & 0.844 \\ \hline\hline
L1 (MC) & 1.114 & 0.875 & 0.871 & 0.993 & 1.033 & 1.329 & 1.039 & 1.261 & - & 1.043 & 1.231 & 1.312 & 1.032 & 1.100 & 1.171 \\
L1 (MC-CC) & 1.201 & 0.898 & 0.921 & 0.906 & 0.769 & 0.646 & 1.030 & 1.205 & 1.399 & 0.942 & 1.477 & 2.956 & 1.020 & 1.087 & 1.481 \\
L1 (D) & 1.590 & 0.989 & 0.952 & 0.849 & 0.766 & 0.697 & 1.088 & 1.071 & 1.074 & 1.059 & 1.251 & 1.313 & 1.147 & 1.019 & 1.009 \\
L1 (D-COH) & 1.002 & 0.795 & 0.779 & 1.038 & 0.917 & 0.794 & 1.085 & 1.141 & 1.148 & 1.156 & 1.846 & 1.819 & 1.070 & 1.175 & 1.127 \\
L1 (CO) & 1.998 & 1.659 & 1.794 & 0.964 & 0.906 & 0.772 & 1.010 & 1.141 & 1.162 & 1.060 & 1.231 & 1.383 & 1.258 & 1.234 & 1.278 \\ \hline
\end{tabular}
\end{table}
\section{Convergence of mean and standard deviation}
Besides the NRMSD, which quantifies the overall agreement of the GPCE surrogates with the original model function, we quantified the convergence of the first two statistical moments, i.e. the mean and the standard deviation. Reference values were obtained for each test function by calculating both the mean and the standard deviation from $N=10^7$ model evaluations.
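For an orthonormal GPCE basis, both moments follow directly from the expansion coefficients, so the relative errors reported below can be evaluated against the Monte Carlo reference without further sampling. A minimal sketch under this orthonormality assumption:
\begin{verbatim}
import numpy as np

def moment_errors(coeffs, mean_ref, std_ref):
    # For an orthonormal basis, the GPCE mean is the zeroth coefficient
    # and the variance is the sum of the squared higher-order ones.
    coeffs = np.asarray(coeffs)
    mean_gpce = coeffs[0]
    std_gpce = np.sqrt(np.sum(coeffs[1:] ** 2))
    return (abs(mean_gpce - mean_ref) / abs(mean_ref),
            abs(std_gpce - std_ref) / std_ref)
\end{verbatim}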
\subsection{Ishigami function}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics_supplement/Fig_results_Ishigami_supplement.pdf}
\caption{(a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the Ishigami function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.}
\label{fig:mean_std:Ishigami}
\end{figure*}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 0.1\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_mean_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 28.3$\pm$10.4 & 54.0 & 59.4 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 24.8$\pm$4.0 & 29.4 & 29.9 & $5.9\cdot 10^{-04}$ \\
LHS (MM) & 24.4$\pm$3.5 & 29.5 & 29.9 & $4.9\cdot 10^{-05}$ \\
LHS (STD) & 24.8$\pm$4.1 & 29.9 & 32.8 & $7.6\cdot 10^{-04}$ \\ \hline
L1 (MC) & 38.7$\pm$9.3 & 52.0 & 64.2 & $1.0\cdot 10^{0}$ \\
L1 (MC-CC) & 37.3$\pm$7.5 & 48.0 & 49.7 & $1.0\cdot 10^{0}$ \\
L1 (D) & 28.2$\pm$6.6 & 39.8 & 40.7 & $6.2\cdot 10^{-01}$ \\
L1 (D-COH) & 25.8$\pm$5.3 & 39.2 & 39.9 & $3.5\cdot 10^{-01}$ \\
L1 (CO) & 67.9$\pm$38.6 & 308.8 & 348.2 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 1\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_mean_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 24.4$\pm$7.5 & 39.5 & 45.6 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 18.2$\pm$5.2 & 26.5 & 28.7 & $7.0\cdot 10^{-05}$ \\
LHS (MM) & 19.0$\pm$2.7 & 23.3 & 23.9 & $1.4\cdot 10^{-05}$ \\
LHS (STD) & 19.3$\pm$5.1 & 26.5 & 28.5 & $2.2\cdot 10^{-04}$ \\ \hline
L1 (MC) & 28.5$\pm$7.0 & 38.9 & 39.7 & $9.9\cdot 10^{-01}$ \\
L1 (MC-CC) & 29.8$\pm$6.6 & 39.8 & 44.9 & $1.0\cdot 10^{0}$ \\
L1 (D) & 24.6$\pm$4.0 & 31.0 & 36.5 & $6.0\cdot 10^{-01}$ \\
L1 (D-COH) & 24.6$\pm$4.0 & 32.0 & 37.1 & $6.9\cdot 10^{-01}$ \\
L1 (CO) & 49.4$\pm$15.0 & 68.0 & 88.6 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 10\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_mean_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 14.4$\pm$5.3 & 22.3 & 25.8 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 9.7$\pm$2.4 & 13.6 & 13.9 & $5.0\cdot 10^{-04}$ \\
LHS (MM) & 10.7$\pm$3.2 & 14.8 & 15.7 & $6.6\cdot 10^{-03}$ \\
LHS (STD) & 12.6$\pm$2.3 & 14.9 & 17.1 & $2.3\cdot 10^{-02}$ \\ \hline
L1 (MC) & 21.5$\pm$7.2 & 32.5 & 34.4 & $1.0\cdot 10^{0}$ \\
L1 (MC-CC) & 21.6$\pm$10.5 & 33.0 & 35.4 & $9.0\cdot 10^{-01}$ \\
L1 (D) & 18.6$\pm$4.0 & 22.2 & 22.9 & $9.8\cdot 10^{-01}$ \\
L1 (D-COH) & 19.5$\pm$5.1 & 22.8 & 23.7 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 36.8$\pm$13.6 & 47.5 & 73.9 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 0.1\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_std_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 29.9$\pm$11.1 & 54.5 & 61.8 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 29.1$\pm$3.8 & 35.5 & 36.7 & $2.2\cdot 10^{-02}$ \\
LHS (MM) & 28.1$\pm$2.8 & 29.9 & 33.5 & $9.8\cdot 10^{-04}$ \\
LHS (STD) & 28.3$\pm$4.0 & 35.2 & 37.7 & $1.4\cdot 10^{-02}$ \\ \hline
L1 (MC) & 48.2$\pm$22.2 & 162.5 & 198.1 & $1.0\cdot 10^{0}$ \\
L1 (MC-CC) & 37.3$\pm$7.5 & 48.0 & 49.7 & $9.7\cdot 10^{-01}$ \\
L1 (D) & 39.6$\pm$12.2 & 59.5 & 65.1 & $9.9\cdot 10^{-01}$ \\
L1 (D-COH) & 39.5$\pm$9.4 & 48.5 & 67.2 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 86.0$\pm$46.0 & 292.2 & 297.6 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 1\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_std_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 24.9$\pm$11.0 & 49.0 & 55.9 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 24.6$\pm$4.3 & 29.5 & 29.9 & $5.3\cdot 10^{-02}$ \\
LHS (MM) & 23.8$\pm$6.2 & 27.0 & 28.6 & $4.8\cdot 10^{-04}$ \\
LHS (STD) & 24.1$\pm$6.8 & 28.9 & 29.7 & $1.4\cdot 10^{-02}$ \\ \hline
L1 (MC) & 29.3$\pm$11.5 & 39.6 & 39.9 & $6.5\cdot 10^{-01}$ \\
L1 (MC-CC) & 29.8$\pm$6.6 & 39.8 & 44.9 & $9.8\cdot 10^{-01}$ \\
L1 (D) & 24.7$\pm$7.8 & 37.5 & 38.7 & $7.1\cdot 10^{-02}$ \\
L1 (D-COH) & 24.8$\pm$7.0 & 37.5 & 38.7 & $3.1\cdot 10^{-01}$ \\
L1 (CO) & 49.6$\pm$17.8 & 91.2 & 94.2 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 10\% considering the Ishigami function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Ishigami_std_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 15.3$\pm$7.6 & 25.5 & 27.4 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 9.5$\pm$7.2 & 22.6 & 26.5 & $2.0\cdot 10^{-01}$ \\
LHS (MM) & 12.8$\pm$5.8 & 18.5 & 22.5 & $8.1\cdot 10^{-02}$ \\
LHS (STD) & 11.3$\pm$6.9 & 22.5 & 23.7 & $1.3\cdot 10^{-01}$ \\ \hline
L1 (MC) & 9.5$\pm$11.3 & 37.2 & 37.8 & $5.8\cdot 10^{-01}$ \\
L1 (MC-CC) & 21.6$\pm$10.5 & 33.0 & 35.4 & $9.6\cdot 10^{-01}$ \\
L1 (D) & 15.8$\pm$7.6 & 23.5 & 31.0 & $6.1\cdot 10^{-01}$ \\
L1 (D-COH) & 15.3$\pm$7.1 & 25.5 & 29.1 & $7.9\cdot 10^{-01}$ \\
L1 (CO) & 36.2$\pm$18.4 & 51.5 & 70.4 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Rosenbrock function}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics_supplement/Fig_results_Rosenbrock_supplement.pdf}
\caption{(a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the Rosenbrock function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.}
\label{fig:mean_std:Rosenbrock}
\end{figure*}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 0.1\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_mean_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 76.0$\pm$10.9 & 92.5 & 112.5 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 94.8$\pm$5.0 & 99.1 & 99.8 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 70.0$\pm$5.1 & 75.8 & 77.4 & $8.8\cdot 10^{-04}$ \\
LHS (STD) & 69.9$\pm$3.3 & 73.5 & 75.4 & $1.8\cdot 10^{-04}$ \\ \hline
L1 (MC) & 72.0$\pm$9.0 & 97.0 & 125.6 & $2.6\cdot 10^{-01}$ \\
L1 (MC-CC) & 65.8$\pm$6.2 & 71.5 & 73.7 & $3.3\cdot 10^{-09}$ \\
L1 (D) & 78.9$\pm$7.5 & 88.1 & 89.6 & $7.5\cdot 10^{-01}$ \\
L1 (D-COH) & 76.0$\pm$6.8 & 86.3 & 89.3 & $6.9\cdot 10^{-01}$ \\
L1 (CO) & 72.4$\pm$6.4 & 84.2 & 84.8 & $4.4\cdot 10^{-02}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 1\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_mean_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 72.7$\pm$12.7 & 93.5 & 110.1 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 89.1$\pm$24.1 & 97.2 & 97.8 & $9.9\cdot 10^{-01}$ \\
LHS (MM) & 68.6$\pm$5.3 & 75.5 & 75.9 & $1.8\cdot 10^{-02}$ \\
LHS (STD) & 69.4$\pm$3.4 & 73.0 & 74.7 & $4.3\cdot 10^{-03}$ \\ \hline
L1 (MC) & 71.6$\pm$10.5 & 94.0 & 134.2 & $5.6\cdot 10^{-01}$ \\
L1 (MC-CC) & 59.8$\pm$9.3 & 66.7 & 67.7 & $3.0\cdot 10^{-08}$ \\
L1 (D) & 77.9$\pm$7.7 & 86.5 & 87.7 & $8.1\cdot 10^{-01}$ \\
L1 (D-COH) & 75.7$\pm$6.6 & 85.5 & 88.4 & $8.5\cdot 10^{-01}$ \\
L1 (CO) & 69.3$\pm$8.4 & 81.0 & 83.7 & $5.3\cdot 10^{-02}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 10\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_mean_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 59.5$\pm$13.4 & 67.5 & 72.9 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 22.2$\pm$7.6 & 37.5 & 38.7 & $7.7\cdot 10^{-1}$ \\
LHS (MM) & 54.9$\pm$10.0 & 60.5 & 61.7 & $1.2\cdot 10^{-02}$ \\
LHS (STD) & 55.3$\pm$9.7 & 62.5 & 62.9 & $4.0\cdot 10^{-02}$ \\ \hline
L1 (MC) & 49.3$\pm$26.6 & 78.0 & 131.6 & $6.4\cdot 10^{-03}$ \\
L1 (MC-CC) & 34.6$\pm$12.4 & 53.0 & 56.1 & $1.6\cdot 10^{-07}$ \\
L1 (D) & 42.4$\pm$7.8 & 50.0 & 52.4 & $1.3\cdot 10^{-06}$ \\
L1 (D-COH) & 53.9$\pm$21.9 & 66.5 & 68.7 & $2.2\cdot 10^{-02}$ \\
L1 (CO) & 51.6$\pm$14.0 & 64.0 & 70.2 & $3.6\cdot 10^{-03}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 0.1\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_std_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 76.0$\pm$10.9 & 92.5 & 112.5 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 95.0$\pm$5.1 & 99.1 & 99.8 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 70.0$\pm$5.2 & 75.8 & 77.4 & $1.0\cdot 10^{-03}$ \\
LHS (STD) & 70.0$\pm$3.3 & 73.5 & 75.4 & $2.6\cdot 10^{-04}$ \\ \hline
L1 (MC) & 72.0$\pm$9.2 & 97.5 & 126.6 & $4.0\cdot 10^{-01}$ \\
L1 (MC-CC) & 66.0$\pm$4.2 & 73.0 & 75.4 & $5.8\cdot 10^{-08}$ \\
L1 (D) & 79.0$\pm$7.5 & 88.1 & 89.6 & $8.1\cdot 10^{-01}$ \\
L1 (D-COH) & 76.0$\pm$6.8 & 86.3 & 89.3 & $4.8\cdot 10^{-01}$ \\
L1 (CO) & 73.6$\pm$6.7 & 84.0 & 84.8 & $1.3\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 1\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_std_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 75.7$\pm$16.8 & 91.8 & 110.8 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 94.6$\pm$5.0 & 99.8 & 100.0 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 69.9$\pm$11.9 & 75.8 & 76.7 & $2.1\cdot 10^{-03}$ \\
LHS (STD) & 69.8$\pm$15.6 & 73.5 & 75.4 & $1.8\cdot 10^{-04}$ \\ \hline
L1 (MC) & 71.8$\pm$22.8 & 92.0 & 96.8 & $1.3\cdot 10^{-01}$ \\
L1 (MC-CC) & 65.8$\pm$17.3 & 71.5 & 74.4 & $1.2\cdot 10^{-07}$ \\
L1 (D) & 78.5$\pm$7.3 & 87.1 & 88.6 & $7.8\cdot 10^{-01}$ \\
L1 (D-COH) & 75.7$\pm$6.6 & 85.5 & 88.4 & $5.1\cdot 10^{-01}$ \\
L1 (CO) & 71.8$\pm$6.5 & 83.0 & 84.7 & $8.1\cdot 10^{-02}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 10\% considering the Rosenbrock function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Rosenbrock_std_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 66.3$\pm$32.5 & 85.5 & 98.2 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 6.3$\pm$42.8 & 96.2 & 96.8 & $6.0\cdot 10^{-01}$ \\
LHS (MM) & 62.4$\pm$29.4 & 71.5 & 72.7 & $1.0\cdot 10^{-01}$ \\
LHS (STD) & 62.0$\pm$29.4 & 69.8 & 72.1 & $1.1\cdot 10^{-01}$ \\ \hline
L1 (MC) & 34.1$\pm$34.5 & 84.0 & 90.5 & $2.7\cdot 10^{-01}$ \\
L1 (MC-CC) & 52.4$\pm$27.7 & 65.5 & 66.7 & $8.8\cdot 10^{-03}$ \\
L1 (D) & 56.5$\pm$17.2 & 67.2 & 70.8 & $6.3\cdot 10^{-02}$ \\
L1 (D-COH) & 53.9$\pm$21.9 & 66.5 & 68.7 & $1.6\cdot 10^{-02}$ \\
L1 (CO) & 35.7$\pm$31.0 & 75.5 & 80.2 & $8.8\cdot 10^{-02}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{LPP function}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics_supplement/Fig_results_Lin2Coupled_supplement.pdf}
\caption{(a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the linear paired product function. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.}
\label{fig:mean_std:Lin2Coupled}
\end{figure*}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 0.1\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_mean_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
LHS (STD) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\ \hline
L1 (MC) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (MC-CC) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (D) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 5.0$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 1\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_mean_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
LHS (STD) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\ \hline
L1 (MC) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (MC-CC) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (D) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 4.9$\pm$0.0 & 5.0 & 5.0 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 10\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_mean_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\
LHS (MM) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\
LHS (STD) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\ \hline
L1 (MC) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\
L1 (MC-CC) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\
L1 (D) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 4.5$\pm$0.0 & 4.0 & 4.0 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 0.1\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_std_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 114.0$\pm$9.4 & 127.5 & 137.0 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 110.0$\pm$5.1 & 117.0 & 119.4 & $1.5\cdot 10^{-02}$ \\
LHS (MM) & 110.0$\pm$5.8 & 117.0 & 119.4 & $3.8\cdot 10^{-03}$ \\
LHS (STD) & 110.0$\pm$5.2 & 115.8 & 118.8 & $2.2\cdot 10^{-02}$ \\ \hline
L1 (MC) & -$\pm$- & - & - & - \\
L1 (MC-CC) & 113.0$\pm$17.0 & 145.0 & 174.0 & $3.2\cdot 10^{-01}$ \\
L1 (D) & -$\pm$- & - & - & - \\
L1 (D-COH) & -$\pm$- & - & - & - \\
L1 (CO) & -$\pm$- & - & - & - \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 1\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_std_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 113.9$\pm$9.4 & 127.5 & 137.0 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 109.9$\pm$5.1 & 117.0 & 119.4 & $2.8\cdot 10^{-02}$ \\
LHS (MM) & 109.8$\pm$5.8 & 117.0 & 119.4 & $6.6\cdot 10^{-03}$ \\
LHS (STD) & 109.9$\pm$5.3 & 115.8 & 118.8 & $4.1\cdot 10^{-02}$ \\ \hline
L1 (MC) & -$\pm$- & - & - & - \\
L1 (MC-CC) & 112.9$\pm$16.5 & 144.0 & 171.3 & $3.9\cdot 10^{-01}$ \\
L1 (D) & -$\pm$- & - & - & - \\
L1 (D-COH) & -$\pm$- & - & - & - \\
L1 (CO) & -$\pm$- & - & - & - \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 10\% considering the LPP function. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:LPP_std_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 113.3$\pm$9.5 & 128.5 & 134.6 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 108.6$\pm$19.5 & 116.0 & 118.4 & $1.8\cdot 10^{-02}$ \\
LHS (MM) & 108.0$\pm$6.1 & 116.0 & 118.4 & $6.9\cdot 10^{-03}$ \\
LHS (STD) & 108.5$\pm$5.5 & 114.8 & 117.1 & $4.4\cdot 10^{-02}$ \\ \hline
L1 (MC) & 114.2$\pm$12.6 & 138.0 & 150.9 & $9.2\cdot 10^{-01}$ \\
L1 (MC-CC) & 111.2$\pm$9.8 & 123.5 & 127.5 & $2.6\cdot 10^{-01}$ \\
L1 (D) & 119.2$\pm$6.9 & 126.8 & 128.6 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 120.2$\pm$10.7 & 137.5 & 147.1 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 112.5$\pm$11.9 & 133.8 & 138.0 & $6.6\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Practical example (electrode impedance model)}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.85\textwidth]{pics_supplement/Fig_results_Electrode_supplement.pdf}
\caption{(a) and (b) Convergence of the relative error of the mean with respect to the number of sampling points $N$ considering the electrode impedance model. For reference, the convergence of the random sampling scheme is shown as a black line in each plot. (abbreviations: SC-ESE: stretched center enhanced stochastic evolutionary algorithm; MM: maximum-minimal distance; STD: standard; MC: mutual coherence; CC: cross-correlation; D: determinant optimal; D-COH: determinant-coherence optimal.); (c) and (d) convergence of the relative error of the standard deviation; (e) and (f) success rates of the best converging grids for error thresholds of $0.1\%$, $1\%$, and $10\%$ for the mean and the standard deviation, respectively.}
\label{fig:mean_std:Electrode}
\end{figure*}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 0.1\% considering the Electrode Probe Model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Electrode_mean_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 48.0$\pm$6.0 & 56.0 & 59.1 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 39.7$\pm$1.8 & 43.8 & 45.4 & $1.1\cdot 10^{-05}$ \\
LHS (MM) & 39.8$\pm$3.8 & 49.0 & 51.4 & $1.4\cdot 10^{-04}$ \\
LHS (STD) & 40.0$\pm$3.9 & 47.8 & 50.8 & $1.9\cdot 10^{-04}$ \\ \hline
L1 (MC) & 48.6$\pm$6.8 & 64.2 & 64.8 & $8.4\cdot 10^{-01}$ \\
L1 (MC-CC) & 47.6$\pm$24.1 & 70.0 & 147.2 & $4.0\cdot 10^{-01}$ \\
L1 (D) & 39.8$\pm$5.1 & 53.0 & 57.7 & $7.2\cdot 10^{-04}$ \\
L1 (D-COH) & 39.9$\pm$11.1 & 66.0 & 79.2 & $5.4\cdot 10^{-02}$ \\
L1 (CO) & 51.6$\pm$6.7 & 60.8 & 63.1 & $9.9\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 1\% considering the Electrode Probe Model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Electrode_mean_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 26.4$\pm$5.4 & 34.5 & 36.4 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 26.2$\pm$3.8 & 31.5 & 32.7 & $4.0\cdot 10^{-01}$ \\
LHS (MM) & 19.9$\pm$2.3 & 23.5 & 29.9 & $8.6\cdot 10^{-07}$ \\
LHS (STD) & 19.9$\pm$1.9 & 25.5 & 26.7 & $2.9\cdot 10^{-06}$ \\ \hline
L1 (MC) & 32.1$\pm$6.4 & 36.5 & 37.7 & $9.9\cdot 10^{-01}$ \\
L1 (MC-CC) & 30.1$\pm$9.5 & 48.0 & 59.7 & $9.6\cdot 10^{-01}$ \\
L1 (D) & 24.5$\pm$6.2 & 35.5 & 36.7 & $3.9\cdot 10^{-01}$ \\
L1 (D-COH) & 20.0$\pm$5.1 & 32.3 & 35.8 & $9.2\cdot 10^{-03}$ \\
L1 (CO) & 33.0$\pm$6.2 & 37.7 & 38.7 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 10\% considering the Electrode Probe Model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Electrode_mean_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 18.2$\pm$0.2 & 18.5 & 18.9 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 18.3$\pm$0.1 & 18.0 & 18.0 & $5.6\cdot 10^{-01}$ \\
LHS (MM) & 18.1$\pm$0.1 & 18.0 & 18.0 & $1.0\cdot 10^{-06}$ \\
LHS (STD) & 18.1$\pm$0.1 & 18.0 & 18.0 & $3.6\cdot 10^{-06}$ \\ \hline
L1 (MC) & 18.4$\pm$0.3 & 18.9 & 19.0 & $9.8\cdot 10^{-01}$ \\
L1 (MC-CC) & 18.3$\pm$5.1 & 24.0 & 36.0 & $9.6\cdot 10^{-01}$ \\
L1 (D) & 18.2$\pm$0.3 & 18.8 & 18.9 & $5.1\cdot 10^{-01}$ \\
L1 (D-COH) & 18.2$\pm$0.2 & 18.0 & 18.7 & $1.5\cdot 10^{-02}$ \\
L1 (CO) & 18.4$\pm$0.3 & 18.9 & 19.7 & $1.0\cdot 10^{0}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 0.1\% considering the Electrode Probe Model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Electrode_std_p1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 97.4$\pm$12.5 & 117.5 & 131.3 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 77.4$\pm$4.8 & 82.5 & 83.7 & $5.0\cdot 10^{-11}$ \\
LHS (MM) & 88.1$\pm$8.5 & 97.0 & 107.7 & $5.2\cdot 10^{-05}$ \\
LHS (STD) & 89.4$\pm$6.4 & 95.5 & 103.0 & $7.5\cdot 10^{-05}$ \\ \hline
L1 (MC) & 99.6$\pm$20.4 & 158.0 & 253.8 & $6.7\cdot 10^{-01}$ \\
L1 (MC-CC) & 86.7$\pm$28.4 & 117.5 & 202.3 & $5.6\cdot 10^{-04}$ \\
L1 (D) & 115.0$\pm$5.3 & 120.0 & 128.7 & $1.0\cdot 10^{0}$ \\
L1 (D-COH) & 114.2$\pm$12.3 & 132.5 & 135.8 & $1.0\cdot 10^{0}$ \\
L1 (CO) & 95.5$\pm$10.5 & 114.0 & 117.1 & $2.6\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 1\% considering the Electrode Probe Model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Electrode_std_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 54.7$\pm$10.0 & 75.0 & 79.8 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 45.9$\pm$3.4 & 48.8 & 49.7 & $1.2\cdot 10^{-06}$ \\
LHS (MM) & 49.2$\pm$3.9 & 53.9 & 55.4 & $3.0\cdot 10^{-03}$ \\
LHS (STD) & 49.0$\pm$3.5 & 55.0 & 57.4 & $1.3\cdot 10^{-02}$ \\ \hline
L1 (MC) & 57.1$\pm$8.2 & 72.5 & 77.5 & $9.5\cdot 10^{-01}$ \\
L1 (MC-CC) & 52.5$\pm$11.2 & 78.5 & 94.1 & $2.9\cdot 10^{-01}$ \\
L1 (D) & 46.0$\pm$5.5 & 56.0 & 58.4 & $2.6\cdot 10^{-05}$ \\
L1 (D-COH) & 47.3$\pm$12.7 & 79.0 & 86.5 & $2.4\cdot 10^{-03}$ \\
L1 (CO) & 59.0$\pm$7.1 & 67.5 & 74.3 & $9.2\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\caption{Estimated number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 10\% considering the Electrode Probe Model. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively. The bold rows highlight the best performing sampling schemes in each category.}
\label{tab:results:Electrode_std_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.672}
\centering
\scriptsize
\begin{tabular}{ |l|l|l|l|l| }
\hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & p-value \\ \hline\hline
Random & 32.7$\pm$5.0 & 36.8 & 37.7 & $1.0\cdot 10^{0}$ \\ \hline
LHS (SC-ESE) & 29.6$\pm$2.6 & 32.5 & 33.7 & $2.7\cdot 10^{-02}$ \\
LHS (MM) & 23.5$\pm$4.6 & 33.5 & 34.7 & $1.0\cdot 10^{-05}$ \\
LHS (STD) & 28.2$\pm$4.0 & 32.2 & 32.9 & $8.8\cdot 10^{-04}$ \\ \hline
L1 (MC) & 33.6$\pm$5.5 & 37.7 & 39.4 & $9.1\cdot 10^{-01}$ \\
L1 (MC-CC) & 33.4$\pm$7.0 & 47.5 & 50.4 & $8.4\cdot 10^{-01}$ \\
L1 (D) & 32.7$\pm$4.6 & 35.2 & 35.9 & $2.7\cdot 10^{-01}$ \\
L1 (D-COH) & 32.9$\pm$4.5 & 36.8 & 37.7 & $6.8\cdot 10^{-01}$ \\
L1 (CO) & 34.3$\pm$3.9 & 39.0 & 40.7 & $9.8\cdot 10^{-01}$ \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Average performance over all test problems (mean 0.1\%)}
\begin{table}[!htbp]
\caption{Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 0.1\% with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_mean_01}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.877 & 0.544 & 0.503 & 1.248 & 1.071 & 0.887 & 1.000 & 1.000 & 1.000 & 0.828 & 0.781 & 0.768 & 0.988 & 0.849 & 0.790 \\
LHS (MM) & 0.864 & 0.546 & 0.503 & 0.921 & 0.819 & 0.688 & 1.000 & 1.000 & 1.000 & 0.829 & 0.875 & 0.870 & 0.904 & 0.810 & 0.765 \\
LHS (STD) & 0.877 & 0.554 & 0.552 & 0.921 & 0.795 & 0.670 & 1.000 & 1.000 & 1.000 & 0.835 & 0.854 & 0.860 & 0.908 & 0.801 & 0.770 \\ \hline\hline
L1 (MC) & 1.367 & 0.963 & 1.081 & 0.947 & 1.049 & 1.116 & 1.000 & 1.000 & 1.000 & 1.013 & 1.147 & 1.097 & 1.082 & 1.040 & 1.074 \\
L1 (MC-CC) & 1.321 & 0.889 & 0.837 & 0.867 & 0.773 & 0.655 & 1.000 & 1.000 & 1.000 & 0.991 & 1.250 & 2.491 & 1.045 & 1.978 & 1.246 \\
L1 (D) & 0.996 & 0.738 & 0.685 & 1.039 & 0.953 & 0.797 & 1.000 & 1.000 & 1.000 & 0.829 & 0.946 & 0.976 & 0.966 & 1.896 & 0.905 \\
L1 (D-COH) & 0.912 & 0.727 & 0.671 & 1.000 & 0.932 & 0.793 & 1.000 & 1.000 & 1.000 & 0.832 & 1.179 & 1.340 & 0.936 & 0.960 & 0.955 \\
L1 (CO) & 2.403 & 5.719 & 5.861 & 0.953 & 0.911 & 0.754 & 1.000 & 1.000 & 1.000 & 1.076 & 1.085 & 1.068 & 1.358 & 2.179 & 2.171 \\ \hline
\end{tabular}
\end{table}
\subsection{Average performance over all test problems (mean 1\%)}
\begin{table}[!htbp]
\caption{Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 1\% with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_mean_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.745 & 0.671 & 0.629 & 1.226 & 1.040 & 0.889 & 1.000 & 1.000 & 1.000 & 0.992 & 0.913 & 0.898 & 0.991 & 0.906 & 0.854 \\
LHS (MM) & 0.779 & 0.589 & 0.523 & 0.944 & 0.807 & 0.689 & 1.000 & 1.000 & 1.000 & 0.756 & 0.681 & 0.821 & 0.870 & 0.769 & 0.758 \\
LHS (STD) & 0.790 & 0.671 & 0.625 & 0.955 & 0.781 & 0.678 & 1.000 & 1.000 & 1.000 & 0.756 & 0.739 & 0.734 & 0.875 & 0.798 & 0.759 \\ \hline\hline
L1 (MC) & 1.171 & 0.984 & 0.871 & 0.985 & 1.005 & 1.219 & 1.000 & 1.000 & 1.000 & 1.219 & 1.058 & 1.036 & 1.094 & 1.012 & 1.031 \\
L1 (MC-CC) & 1.223 & 1.008 & 0.985 & 0.824 & 0.714 & 0.615 & 1.000 & 1.000 & 1.000 & 1.143 & 1.391 & 1.640 & 1.048 & 1.028 & 1.060 \\
L1 (D) & 1.009 & 0.785 & 0.800 & 1.073 & 0.925 & 0.797 & 1.000 & 1.000 & 1.000 & 0.931 & 1.029 & 1.008 & 1.003 & 0.935 & 0.901 \\
L1 (D-COH) & 1.008 & 0.810 & 0.814 & 1.042 & 0.914 & 0.803 & 1.000 & 1.000 & 1.000 & 0.758 & 0.935 & 0.984 & 0.952 & 0.915 & 0.900 \\
L1 (CO) & 2.027 & 1.722 & 1.943 & 0.954 & 1.252 & 1.094 & 1.063 & 1.000 & 1.000 & 1.076 & 1.085 & 1.068 & 1.308 & 1.170 & 1.192 \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Average performance over all test problems (mean 10\%)}
\begin{table}[!htbp]
\caption{Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the mean of 10\% with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_mean_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.670 & 0.610 & 0.539 & 0.373 & 0.556 & 0.531 & 1.000 & 1.000 & 1.000 & 1.001 & 0.973 & 0.952 & 0.761 & 0.785 & 0.756 \\
LHS (MM) & 0.739 & 0.667 & 0.609 & 0.923 & 0.896 & 0.846 & 1.000 & 1.000 & 1.000 & 0.993 & 0.973 & 0.952 & 0.914 & 0.884 & 0.852 \\
LHS (STD) & 0.873 & 0.669 & 0.663 & 0.928 & 0.926 & 0.863 & 1.000 & 1.000 & 1.000 & 0.993 & 0.973 & 0.952 & 0.948 & 0.892 & 0.869 \\ \hline\hline
L1 (MC) & 1.493 & 1.461 & 1.333 & 0.829 & 1.156 & 1.805 & 1.000 & 1.000 & 1.000 & 1.006 & 1.020 & 1.004 & 1.082 & 1.159 & 1.285 \\
L1 (MC-CC) & 1.497 & 1.483 & 1.372 & 0.581 & 0.785 & 0.770 & 1.000 & 1.000 & 1.000 & 1.005 & 1.297 & 1.905 & 1.021 & 1.145 & 1.262 \\
L1 (D) & 1.293 & 1.000 & 0.886 & 0.712 & 0.741 & 0.719 & 1.000 & 1.000 & 1.000 & 0.999 & 1.014 & 1.003 & 1.001 & 0.939 & 0.902 \\
L1 (D-COH) & 1.354 & 1.026 & 0.919 & 0.906 & 0.985 & 0.942 & 1.000 & 1.000 & 1.000 & 0.996 & 0.973 & 0.989 & 1.064 & 0.996 & 0.973 \\
L1 (CO) & 2.553 & 2.135 & 2.864 & 0.867 & 0.948 & 0.963 & 1.000 & 1.000 & 1.000 & 1.010 & 1.024 & 1.042 & 1.358 & 1.277 & 1.467 \\ \hline
\end{tabular}
\end{table}
\subsection{Average performance over all test problems (std 0.1\%)}
\begin{table}[!htbp]
\caption{Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 0.1\% with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_std_01}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.970 & 0.651 & 0.594 & 1.250 & 1.071 & 0.887 & 0.965 & 0.918 & 0.872 & 0.794 & 0.702 & 0.637 & 0.988 & 0.849 & 0.790 \\
LHS (MM) & 0.937 & 0.549 & 0.542 & 0.921 & 0.819 & 0.688 & 0.965 & 0.918 & 0.872 & 0.904 & 0.826 & 0.820 & 0.904 & 0.810 & 0.765 \\
LHS (STD) & 0.946 & 0.647 & 0.610 & 0.921 & 0.795 & 0.670 & 0.965 & 0.908 & 0.867 & 0.918 & 0.813 & 0.784 & 0.908 & 0.801 & 0.770 \\ \hline\hline
L1 (MC) & 1.609 & 2.982 & 3.206 & 0.947 & 1.054 & 1.125 & - & - & - & 1.022 & 1.345 & 1.933 & 1.082 & 1.040 & 1.074 \\
L1 (MC-CC) & 1.247 & 0.881 & 0.804 & 0.868 & 0.789 & 0.670 & - & - & - & 0.889 & 1.000 & 1.541 & 1.045 & 1.978 & 1.246 \\
L1 (D) & 1.324 & 1.092 & 1.053 & 1.039 & 0.953 & 0.797 & - & - & - & 1.180 & 1.021 & 0.980 & 0.966 & 1.896 & 0.905 \\
L1 (D-COH) & 1.318 & 0.890 & 1.087 & 1.000 & 0.932 & 0.793 & - & - & - & 1.172 & 1.128 & 1.034 & 0.936 & 0.960 & 0.955 \\
L1 (CO) & 2.871 & 5.362 & 4.816 & 0.969 & 0.908 & 0.754 & - & - & - & 0.980 & 0.970 & 0.892 & 1.358 & 2.179 & 2.171 \\ \hline
\end{tabular}
\end{table}
\subsection{Average performance over all test problems (std 1\%)}
\begin{table}[!htbp]
\caption{Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 1\% with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_std_1}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.989 & 0.602 & 0.535 & 1.250 & 1.088 & 0.902 & 0.964 & 0.918 & 0.872 & 0.839 & 0.650 & 0.623 & 1.010 & 0.814 & 0.733 \\
LHS (MM) & 0.956 & 0.551 & 0.512 & 0.923 & 0.826 & 0.692 & 0.964 & 0.918 & 0.872 & 0.898 & 0.718 & 0.694 & 0.935 & 0.753 & 0.692 \\
LHS (STD) & 0.968 & 0.590 & 0.531 & 0.922 & 0.801 & 0.681 & 0.964 & 0.908 & 0.867 & 0.895 & 0.733 & 0.719 & 0.937 & 0.758 & 0.733 \\ \hline\hline
L1 (MC) & 1.175 & 0.809 & 0.714 & 0.948 & 1.003 & 0.874 & - & - & - & 1.042 & 0.967 & 0.971 & 1.041 & 0.945 & 0.890 \\
L1 (MC-CC) & 1.197 & 0.813 & 0.803 & 0.869 & 0.779 & 0.671 & - & - & - & 0.959 & 1.047 & 1.179 & 1.006 & 0.910 & 0.913 \\
L1 (D) & 0.992 & 0.765 & 0.692 & 1.037 & 0.950 & 0.800 & - & - & - & 0.839 & 0.747 & 0.732 & 0.967 & 0.865 & 0.806 \\
L1 (D-COH) & 0.997 & 0.765 & 0.692 & 1.000 & 0.932 & 0.798 & - & - & - & 0.863 & 1.053 & 1.084 & 0.965 & 0.938 & 0.894 \\
L1 (CO) & 1.989 & 1.862 & 1.686 & 0.948 & 0.905 & 0.764 & - & - & - & 1.077 & 0.900 & 0.931 & 1.254 & 1.135 & 1.095 \\ \hline
\end{tabular}
\end{table}
\newpage
\subsection{Average performance over all test problems (std 10\%)}
\begin{table}[!htbp]
\caption{Relative and average number of grid points $\hat{N}_{\varepsilon}$ of different sampling schemes to reach a relative error of the std of 10\% with respect to standard random sampling using the LARS-Lasso solver (L1) considering all test functions. The columns for $N_{sr}^{(95\%)}$ and $N_{sr}^{(99\%)}$ show the number of samples needed to reach a success rate of $95\%$ and $99\%$, respectively.}
\label{tab:results:average_std_10}
\setlength{\arrayrulewidth}{.2mm}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.85}
\centering
\scriptsize
\begin{tabular}{ |l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l| }
\hline
& \multicolumn{3}{l}{Ishigami} \vline & \multicolumn{3}{l}{Rosenbrock} \vline & \multicolumn{3}{l}{LPP} \vline & \multicolumn{3}{l}{Electrode} \vline & \multicolumn{3}{l}{Average (all test functions)}\vline \\ \hline
Grid & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ & $\hat{N}_{\varepsilon}$ & $N_{sr}^{(95\%)}$ & $N_{sr}^{(99\%)}$ \\ \hline\hline
Random (L1) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline\hline
LHS (SC-ESE) & 0.620 & 0.887 & 0.967 & 0.096 & 1.126 & 0.986 & 0.959 & 0.903 & 0.880 & 0.905 & 0.884 & 0.894 & 0.645 & 0.950 & 0.932 \\
LHS (MM) & 0.841 & 0.725 & 0.821 & 0.942 & 0.836 & 0.740 & 0.954 & 0.903 & 0.880 & 0.721 & 0.912 & 0.920 & 0.865 & 0.868 & 0.840 \\
LHS (STD) & 0.740 & 0.882 & 0.865 & 0.936 & 0.816 & 0.734 & 0.958 & 0.893 & 0.870 & 0.863 & 0.878 & 0.871 & 0.874 & 0.867 & 0.835 \\ \hline\hline
L1 (MC) & 0.624 & 1.461 & 1.381 & 0.515 & 0.982 & 0.922 & 1.008 & 1.074 & 1.121 & 1.027 & 1.027 & 1.045 & 0.794 & 1.136 & 1.117 \\
L1 (MC-CC) & 1.412 & 1.294 & 1.292 & 0.790 & 0.766 & 0.679 & 0.982 & 0.961 & 0.947 & 1.022 & 1.293 & 1.337 & 1.052 & 1.078 & 1.064 \\
L1 (D) & 1.032 & 0.922 & 1.131 & 0.853 & 0.787 & 0.721 & 1.052 & 0.986 & 0.955 & 1.001 & 0.959 & 0.951 & 0.984 & 0.966 & 0.940 \\
L1 (D-COH) & 1.001 & 1.000 & 1.062 & 0.814 & 0.778 & 0.700 & 1.061 & 1.070 & 1.093 & 1.008 & 1.000 & 1.000 & 0.971 & 0.962 & 0.964 \\
L1 (CO) & 2.366 & 2.020 & 2.569 & 0.540 & 0.883 & 0.817 & 0.993 & 1.041 & 1.025 & 1.049 & 1.061 & 1.080 & 1.237 & 1.256 & 1.373 \\ \hline
\end{tabular}
\end{table}
\end{document} |
\IEEEPARstart{I}{t} has been
well known for many years that the derivation of the
rate--distortion function of a given source and distortion measure
does not lend itself to closed form expressions, even
in the memoryless case, except for
a few very simple examples
\cite{Berger71},\cite{CT06},\cite{CK81},\cite{Gray90}.
This has triggered the derivation of some upper and lower bounds,
both for memoryless sources and for sources with memory.
One of the most important lower bounds on the rate--distortion function,
which is applicable for difference distortion measures (i.e., distortion
functions that depend on their two arguments only through the difference between
them), is the Shannon lower bound in its different forms, e.g., the discrete
Shannon lower bound, the continuous Shannon lower bound, and the
vector Shannon lower bound. This family of bounds is especially
useful for semi-norm--based distortion measures \cite[Section 4.8]{Gray90}.
The Wyner--Ziv lower bound
\cite{WZ71} for a source with memory is a
convenient bound, which is based on the rate--distortion function
of the memoryless source formed from the product measure pertaining to the single--letter
marginal distribution of the original source and it may be combined elegantly
with the Shannon lower bound.
The autoregressive lower bound
asserts that the
rate--distortion function
of an autoregressive source
is lower bounded by the rate--distortion function
of its innovation process, which is, again, a memoryless source.
Upper bounds are conceptually easier to derive, as they may result from the
performance analysis of a concrete coding scheme, or from
random coding with respect to
(w.r.t.) an arbitrary random coding distribution, etc.
One well known example is the Gaussian upper bound, which upper bounds the
rate--distortion
function of an arbitrary memoryless (zero--mean) source w.r.t.\ the
squared error distortion measure by the rate--distortion
function of the Gaussian source with the same second moment.
If the original source has memory, then the same principle generalizes
with the corresponding Gaussian source having the same autocorrelation function
as the original source \cite[Section 4.6]{Berger71}.
In this paper, we focus on
a simple general
parametric representation of the rate--distortion function
which seems to set the stage for the derivation
of a rather wide family of both
upper bounds and lower bounds on the rate--distortion function.
In this parametric representation, both the rate and the distortion
are given by integrals whose integrands include
the minimum mean square error (MMSE) in estimating the distortion based
on the source symbol, with respect to a certain joint distribution of these two
random variables. More concretely, given a memoryless source designated by a random
variable (RV) $X$, governed by a probability function\footnote{Here, and
throughout the sequel,
the term ``probability function'' refers to a probability mass function
in the discrete case and to a probability density function in
the continuous case.} $p(x)$,
a reproduction variable $Y$, governed by a probability function $q(y)$,
and a distortion measure $d(x,y)$, the rate and the distortion can be
represented parametrically via a real parameter $s\in[0,\infty)$ as follows:
\begin{eqnarray}
\label{distortion}
D_s&=&D_0-\int_0^s\mbox{d}\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&=&D_\infty+\int_s^\infty\mbox{d}\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)
\end{eqnarray}
and
\begin{eqnarray}
\label{rate}
R_q(D_s)&=&\int_0^s\mbox{d}\hs\cdot\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&=&R_q(D_\infty)-\int_s^\infty\mbox{d}\hs\cdot\hs\cdot\mbox{mmse}_{\hs}(\Delta|X),
\end{eqnarray}
where $D_s$ is the distortion pertaining to parameter value $s$,
$R_q(D_s)$ is the rate--distortion function w.r.t.\ reproduction
distribution $q$, computed at $D_s$,
$\Delta=d(X,Y)$, and $\mbox{mmse}_s(\Delta|X)$ is the MMSE of estimating
$\Delta$ based on $X$, where the joint probability function of $(X,\Delta)$ is induced
by the following joint probability function of $(X,Y)$:
\begin{equation}
p_s(x,y)=p(x)\cdot w_s(y|x)=p(x)\cdot\frac{q(y)e^{-sd(x,y)}}{Z_x(s)}
\end{equation}
where $Z_x(s)$ is a normalization constant, given by
$\int\mbox{d}yq(y)e^{-sd(x,y)}$ in the continuous case, or
$\sum_yq(y)e^{-sd(x,y)}$ in the discrete case.
At first glance, eq.\ (\ref{rate}) looks somewhat similar to the
I--MMSE relation of \cite{GSV05},
which relates the mutual information between the input and the output
of an additive white Gaussian noise (AWGN)
channel and the MMSE of estimating the channel input based
on the noisy channel output.
As we discuss later on, however,
eq.\ (\ref{rate}) is actually very different from the I--MMSE relation
in many respects. In this context, it is important to emphasize that
a relation analogous to (\ref{rate}) applies also to channel capacity,
as will be discussed in the sequel.
The relations (\ref{distortion}) and
(\ref{rate}) have actually already been raised in a
companion paper \cite{Merhav09a}
(see also \cite{Merhav09b} for a conference version).
Their derivation there was triggered and inspired by certain analogies
between the rate--distortion problem and statistical mechanics, which were
the main theme of that work. However, the significance and the usefulness
of these rate--distortion--MMSE
relations were not explored in \cite{Merhav09a} and \cite{Merhav09b}.
It is the purpose of the present work to study these
relations more closely and to demonstrate their utility,
which is, as said before, in deriving upper and lower bounds.
The underlying idea is that bounds on $R_q(D)$
(and sometimes also on $R(D)=\min_qR_q(D)$)
may be obtained via relatively simple bounds on
the MMSE of $\Delta$ based on $X$. These bounds can either
be simple technical bounds
on the expression of the MMSE itself, or bounds that stem
from pure estimation--theoretic considerations. For example, upper bounds
may be derived by analyzing the mean square error of a certain sub-optimum estimator, e.g.,
a linear estimator, which is easy to analyze.
Lower bounds can be taken from the available plethora of lower bounds
offered by estimation theory, e.g., the Cram\'er--Rao lower bound.
Indeed, an important part of this work is a section of examples,
where it is demonstrated how to use the proposed relations and
derive explicit bounds from them. In one of these
examples, we derive two sets of
upper and lower bounds, one for a certain range of low distortions and
the other, for high distortion values. At both edge-points of the
interval of distortion values of interest, the corresponding upper and
lower bound asymptotically approach the limiting value with the same
leading term, and so, they sandwich the exact asymptotic behavior of the
rate--distortion function, both in the low distortion limit and in the
high distortion limit.
The outline of this paper is as follows. In Section II,
we establish notation conventions. In Section III,
we formally present the main result, prove it, and discuss its
significance from the above--mentioned aspects. In Section IV, we
provide a few examples that demonstrate the usefulness of the MMSE
relations. Finally, in Section V, we summarize and conclude.
\section{Notation Conventions}
\label{notation}
Throughout this paper,
RV's will be denoted by capital
letters, their sample values will be denoted by
the respective lower case letters, and their alphabets will be denoted
by the respective calligraphic letters.
For example, $X$ is a random variable, $x$ is a specific realization
of $X$, and $\calX$ is the alphabet in which $X$ and $x$ take on values.
This alphabet may be finite, countably infinite, or a continuum, like the
real line $\reals$ or an interval $[a,b]\subset\reals$.
Sources and channels will be denoted generically by the letter $p$, or $q$,
which will designate also their corresponding probability functions, i.e.,
a probability density function (pdf) in the continuous case, or a probability mass
function (pmf) in the discrete case.
Information--theoretic quantities, like entropies and mutual
informations, will be denoted according to the usual conventions
of the information theory literature, e.g., $H(X)$, $I(X;Y)$,
and so on. If a RV is continuous--valued, then its differential entropy
and conditional differential entropy
will be denoted with $h$ instead of $H$, i.e., $h(X)$ is the
differential entropy of $X$, $h(X|Y)$ is the conditional differential entropy of $X$
given $Y$, and so on. The expectation operator will be denoted, as usual,
by $\bE\{\cdot\}$.
Given a source RV $X$, governed by a probability function $p(x)$, $x\in\calX$,
a reproduction RV $Y$, governed by a probability function $q(y)$, $y\in\calY$,
and a distortion measure $d:\calX\times\calY\to \reals^+$, we define the
rate--distortion function of $X$ w.r.t.\ distortion measure $d$ and
reproduction distribution $q$ as
\begin{equation}
R_q(D)\dfn\min I(X;Y),
\end{equation}
where $X\sim p$ and the minimum is across
all channels $\{w(y|x),~x\in\calX,~y\in\calY\}$
that satisfy $\bE\{d(X,Y)\}\le D$ and $\bE\{w(y|X)\}=q(y)$ for all
$y\in\calY$. Clearly, the rate--distortion function, $R(D)$, is given
by $R(D)=\inf_qR_q(D)$. We will also use the notation $\Delta\dfn d(X,Y)$.
Obviously, since $X$ and $Y$ are RV's, then so is $\Delta$.
\section{MMSE Relations: Basic Result and Discussion}
Throughout this section, our definitions will assume that both $\calX$ and
$\calY$ are finite alphabets. Extensions to continuous alphabets will be obtained
by a limit of fine quantizations, with summations eventually being replaced by integrations.
Referring to the notation defined in Section \ref{notation},
for a given positive real $s$, define the conditional probability function
\begin{equation}
w_s(y|x)\dfn\frac{q(y)e^{-sd(x,y)}}{Z_x(s)}
\end{equation}
where
\begin{equation}
Z_x(s)\dfn\sum_{y\in\calY}q(y)e^{-sd(x,y)}
\end{equation}
and the joint pmf
\begin{equation}
p_s(x,y)=p(x)w_s(y|x).
\end{equation}
Further, let
\begin{eqnarray}
\mbox{mmse}_s(\Delta|X)&=&\bE_s\{[\Delta-\bE_s\{\Delta|X\}]^2\}\nonumber\\
&=&\bE_s\{[d(X,Y)-\bE_s\{d(X,Y)|X\}]^2\}
\end{eqnarray}
where $\bE_s\{\cdot\}$ is the expectation operator w.r.t.\ $\{p_s(x,y)\}$,
and defining $\psi(x)$ as the conditional expectation $\bE_s\{d(x,Y)|X=x\}$
w.r.t.\ $\{w_s(y|x)\}$, $\bE_s\{d(X,Y)|X\}$ is defined as $\psi(X)$.
Our main result, in this section, is the following (the proof appears in the
Appendix):
\noindent
\begin{theorem}
\label{thm1}
The function $R_q(D)$ can be represented parametrically via the parameter
$s\in[0,\infty)$ as follows:
\begin{itemize}
\item[(a)]
The distortion is obtained by
\begin{eqnarray}
\label{dist1}
D_s&=&D_0-\int_0^s\mbox{d}\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&=&D_\infty+\int_s^\infty\mbox{d}\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)
\end{eqnarray}
where
\begin{equation}
D_0=\sum_{x,y}p(x)q(y)d(x,y)
\end{equation}
and
\begin{equation}
D_{\infty}=\sum_xp(x)\min_yd(x,y).
\end{equation}
\item[(b)]
The rate is given by
\begin{eqnarray}
\label{rate1}
& &R_q(D_s)\nonumber\\
&=&\int_0^s\mbox{d}\hs\cdot\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&=&R_q(D_\infty)-\int_s^\infty\mbox{d}\hs\cdot\hs\cdot\mbox{mmse}_{\hs}(\Delta|X).
\end{eqnarray}
\end{itemize}
\end{theorem}
In the remaining part of this section, we discuss the significance
and the implications of Theorem 1 from several aspects.\\
\noindent
{\it Some General Technical Comments}\\
The parameter $s$ has the geometric meaning of the negative local slope
of the function $R_q(D)$. This is easily seen by taking the derivatives
of (\ref{dist1}) and (\ref{rate1}), i.e.,
$\mbox{d}R_q(D_s)/\mbox{d}s=s\cdot\mbox{mmse}_s(\Delta|X)$ and
$\mbox{d}D_s/\mbox{d}s=-\mbox{mmse}_s(\Delta|X)$, whose ratio is $R_q'(D_s)=-s$.
This means also that the parameter $s$
plays the same role as in the well known parametric representations
of \cite{Berger71} and \cite{Gray90}, which is to say that it can also be thought
of as the Lagrange multiplier of the minimization of
$[I(X;Y)+s\bE\{d(X,Y)\}]$ subject to the reproduction distribution constraint.
On a related note, we point out that
Theorem \ref{thm1} is based on the following representation of
$R_q(D)$:
\begin{equation}
\label{lgd1}
R_q(D)=-\min_{s\ge 0}\left[sD+\sum_{x\in\calX}p(x)\ln Z_x(s)\right],
\end{equation}
which we prove in the Appendix as the first step in the proof of Theorem 1.
It should be emphasized that the pmf $q$, that plays a role in the
definition of $w_{\hs}(y|x)$ (and hence also the definition of
$\mbox{mmse}_{\hs}(\Delta|X)$) should be kept {\it fixed} throughout the
integration, independently of the integration variable $\hs$, since it is the same
pmf as in the definition of $R_q(D)$. Thus, even if $q$ is known to
be optimum for a given target distortion $D$ (and then it yields $R(D)$), the pmf $q$
must be kept unaltered throughout the integration, in spite of the fact that
for other values of $\hs$ (which correspond to other distortion levels),
the optimum reproduction pmf might be different. In particular, note
that the marginal of $Y$, that is induced from the joint pmf $p_s(x,y)$, may not
necessarily agree with $q$. Thus, $p_{\hs}(x,y)$ should only be considered as an
auxiliary joint distribution that defines $\mbox{mmse}_{\hs}(\Delta|X)$.\\
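Before moving on, we pause for a quick numerical illustration of
Theorem 1. The following Python sketch (a toy sanity check only,
assuming NumPy is available; the pmfs and the distortion matrix are
arbitrary choices) evaluates $\mbox{mmse}_{\hs}(\Delta|X)$ on a grid of
$\hs$ values, computes $D_s$ and $R_q(D_s)$ by trapezoid integration of
(\ref{dist1}) and (\ref{rate1}), and cross--checks the rate against the
Legendre form (\ref{lgd1}), evaluated at the same parameter value:
\begin{verbatim}
import numpy as np

p = np.array([0.3, 0.7])             # source pmf p(x) (toy choice)
q = np.array([0.5, 0.5])             # reproduction pmf q(y), fixed
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # distortion matrix d(x,y)

def mmse(s):
    w = q[None, :] * np.exp(-s * d)  # unnormalized w_s(y|x)
    w /= w.sum(axis=1, keepdims=True)
    cm = (w * d).sum(axis=1)         # E_s{d(x,Y) | X=x}
    cv = (w * (d - cm[:, None])**2).sum(axis=1)
    return (p * cv).sum()            # mmse_s(Delta|X)

s = np.linspace(0.0, 8.0, 4001)
m = np.array([mmse(t) for t in s])
ds = np.diff(s)
D0 = (p[:, None] * q[None, :] * d).sum()
D = D0 - np.concatenate(([0.0],
        np.cumsum(ds * 0.5 * (m[1:] + m[:-1]))))
R = np.concatenate(([0.0],
        np.cumsum(ds * 0.5 * ((s * m)[1:] + (s * m)[:-1]))))

for i in (1000, 2500, 4000):         # Legendre cross-check
    lnZ = np.log((q[None, :] * np.exp(-s[i] * d)).sum(axis=1))
    print(R[i], -(s[i] * D[i] + (p * lnZ).sum()))
\end{verbatim}
The two printed columns agree to within the accuracy of the numerical
integration, as they should.\\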
\noindent
{\it Using Theorem 1 for Bounds on $R_q(D)$}\\
As was briefly explained in the Introduction (and
will also be demonstrated in the next section), Theorem \ref{thm1}
may set the stage for the derivation of upper and lower bounds
to $R_q(D)$ for a general
reproduction distribution $q$, and hence also for the
rate--distortion function $R(D)$ when the optimum $q$ happens
to be known or is easily derivable
(e.g., from symmetry and convexity considerations).
The basic underlying
idea is that bounds on $R_q(D)$ may be induced from bounds on
$\mbox{mmse}_{\hs}(\Delta|X)$ across the integration interval. The bounds
on the MMSE may either be derived from purely technical considerations,
upon analyzing the expression of the MMSE directly, or by using
estimation--theoretic tools. In the latter case, lower bounds may be
obtained from fundamental lower bounds to the MMSE, like the Bayesian
Cram\'er--Rao bound, or more advanced lower bounds available
from the estimation theory literature, for example,
the Weiss--Weinstein bound
\cite{WW85},\cite{Weiss85}, whenever applicable. Upper bounds may be
obtained by analyzing the mean square error (MSE) of a specific
(sub-optimum) estimator, which is relatively easy to analyze, or more
generally by analyzing the performance of the best estimator within
a certain limited class of estimators, like the class of estimators
that are linear in the `observation' $X$, or in a certain fixed function of $X$.
In Theorem 1 we have deliberately presented two integral forms for
both the rate and the distortion. As $D_s$ is
monotonically decreasing and $R_q(D_s)$ is
monotonically increasing in $s$, the integrals at the first lines of
both eqs.\ (\ref{dist1}) and (\ref{rate1}),
which include relatively small values of $\hs$,
naturally lend themselves to
derivation of bounds in the low--rate (high distortion) regime, whereas the
second lines of these equations are more suitable in the low--distortion
(high--resolution) region. For example, to derive an upper
bound on $R_q(D)$ in the high--distortion range, one would need
a lower bound on $\mbox{mmse}_{\hs}(\Delta|X)$ to be used in the
first line of (\ref{dist1}) and
an upper bound on $\mbox{mmse}_{\hs}(\Delta|X)$ to be substituted into the
first line of (\ref{rate1}). If one can then derive, from the former, an upper bound on
$s$ as a function of $D$, and substitute it into the upper bound on
the rate in terms of $s$,
then this will result in an upper bound to $R_q(D)$.
A similar kind of reasoning is applicable to
the derivation of other types of bounds. This point will be demonstrated
mainly in Examples C and D in the next section.\\
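To make this recipe concrete, the following schematic Python fragment
mechanizes the high--distortion argument just described (a sketch only:
\texttt{mmse\_lo} and \texttt{mmse\_hi} are hypothetical, user--supplied
callables that lower-- and upper--bound $\mbox{mmse}_{\hs}(\Delta|X)$,
respectively, and the inversion of the distortion bound is carried out
by a simple grid search over a grid assumed long enough):
\begin{verbatim}
import numpy as np

def rate_upper_bound(D, D0, mmse_lo, mmse_hi,
                     s_max=50.0, n=20001):
    # Lower-bounding the MMSE in the distortion equation yields
    # an upper bound s_hi(D) on s; upper-bounding the MMSE in the
    # rate equation then yields an upper bound on R_q(D).
    s = np.linspace(0.0, s_max, n)
    ds = s[1] - s[0]
    lo = np.array([mmse_lo(t) for t in s])
    hi = np.array([mmse_hi(t) for t in s])
    g = D0 - np.concatenate(([0.0],
            np.cumsum(ds * 0.5 * (lo[1:] + lo[:-1]))))
    i = int(np.searchsorted(-g, -D))  # first index with g[i] <= D
    f = s * hi
    Rhi = np.concatenate(([0.0],
            np.cumsum(ds * 0.5 * (f[1:] + f[:-1]))))
    return Rhi[min(i, n - 1)]
\end{verbatim}
Lower bounds on $R_q(D)$, as well as bounds in the low--distortion
regime, are obtained in the same manner, upon interchanging the roles of
the two MMSE bounds and/or using the second lines of (\ref{dist1}) and
(\ref{rate1}).\\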
\noindent
{\it Comparison to the I--MMSE Relations}\\
In the more conceptual level,
item (b) of Theorem 1 may remind the familiar reader
of the well--known results due to Guo, Shamai and Verd\'u \cite{GSV05},
which are referred to as I--MMSE relations (as well as later works
that generalize these relations). The similarity between eq.\ (\ref{rate1})
and the I--MMSE relation (in its basic form)
is that in both cases a mutual information
is expressed as an integral whose integrand includes the MMSE of a certain
random variable (or vector) given some observation(s). However, to the
best of our judgment, this is the only similarity.
In order to sharpen
the comparison between the two relations, it is instructive to look at
the special case where all random variables are Gaussian and the distortion
measure is quadratic: In the
context of Theorem 1, consider $Y$ to be a zero--mean Gaussian RV with
variance $\sigma_y^2$,
and let $d(x,y)=(x-y)^2$. As will be seen in Example B of the next section,
this then means that $w_s(y|x)$ can be described by the additive Gaussian
channel $Y=aX+Z$, where $a=2s\sigma_y^2/(1+2s\sigma_y^2)$ and $Z$ is a
zero--mean Gaussian RV, independent of $X$, and with variance
$\sigma_y^2/(1+2s\sigma_y^2)$. Here, we have $\Delta=(Y-X)^2=[Z-(1-a)X]^2$.
Thus, the integrand of (\ref{rate1}) includes the MMSE in estimating
$[Z-(1-a)X]^2$ based on the {\it channel input} $X$. It is therefore about
estimating a certain
function of $Z$ and $X$, where $X$ is the observation at hand and
$Z$ is independent of $X$.
This is very different from the paradigm of the
I--MMSE relation: there the channel is $Y=\sqrt{\mbox{\sl snr}}X+Z$, where $Z$ is
standard normal, the integration variable is $\mbox{\sl snr}$, and the estimated
RV is $X$ (or equivalently, $Z$) based on the {\it channel output}, $Y$. Also,
by comparing the two channels, it is readily seen that the
integration variable $s$, in our setting, can be related to the integration
variable, $\mbox{\sl snr}$, of the I--MMSE relation
according to
\begin{equation}
\mbox{\sl snr}=\frac{4s^2\sigma_y^2}{1+2s\sigma_y^2},
\end{equation}
and so, the relation between the two integration variables is highly
non--linear. We therefore observe that the two MMSE results are fairly different.\\
\noindent
{\it Analogous MMSE Formula for Channel Capacity}\\
Eq.\ (\ref{lgd1}) can be understood conveniently
as an achievable rate using a simple random
coding argument (see Appendix):
The coding rate $R$ should be (slightly larger than) the large
deviations rate
function of the probability of the event $\{\sum_{i=1}^n
d(x_i,Y_i)\le nD\}$, where $(x_1,\ldots,x_n)$ is a typical source sequence
and $(Y_1,\ldots,Y_n)$
are drawn i.i.d.\ from $q$. As is well known, a similar random coding argument applies to
channel coding (see also \cite{Merhav08}): Channel capacity can be obtained as
the large deviations rate function of
the event $\{\sum_{i=1}^n
d(X_i,y_i)\le nD\}$, where
now $(y_1,\ldots,y_n)$ is a channel output sequence typical to $q$,
$(X_1,\ldots,X_n)$ are drawn i.i.d.\ according to a given
input pmf $\{p(x)\}$, the distortion measure is
chosen to be $d(x,y)=-\ln w(y|x)$ ($\{w(y|x)\}$ being the channel transition
probabilities) and $D=H(Y|X)$. Thus, the analogue of (\ref{lgd1}) is
\begin{equation}
C_p=-\min_{s\ge 0}\left[sH(Y|X)+\sum_{y\in\calY}q(y)\ln Z_y(s)\right]
\end{equation}
where
\begin{equation}
Z_y(s)=\sum_{x\in\calX}p(x)w^s(y|x)
\end{equation}
and the minimizing $s$ is always $s^*=1$.
Consequently, the corresponding integrated MMSE formula would read
\begin{equation}
\label{cp}
C_p=\int_0^1\mbox{d}s\cdot s\cdot\mbox{mmse}_s[\ln w(Y|X)|Y],
\end{equation}
where $\mbox{mmse}_s[\ln w(Y|X)|Y]$ is defined w.r.t.\ the joint pmf
\begin{equation}
q_s(x,y)=q(y)v_s(x|y)=q(y)\cdot\frac{p(x)w^s(y|x)}{Z_y(s)}.
\end{equation}
Eq.\ (\ref{cp}) seems to be less useful than the analogous rate--distortion formulas,
for a very simple reason:
Since the channel is given, then once the input pmf $p$ is given
too (which is required for the use of (\ref{cp})), one can simply compute
the mutual information, which is easier than applying (\ref{cp}). This is
different from the situation in the rate--distortion problem, where even if
both $p$ and $q$ are given, in order to compute $R_q(D)$ in the direct way,
one still needs to minimize the mutual information
w.r.t.\ the channel between $X$ and $Y$. Eq.\ (\ref{cp}) is therefore
presented here merely for the purpose of drawing the duality.\\
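Nevertheless, eq.\ (\ref{cp}) is easily confirmed numerically. The
following Python sketch (a toy check, assuming NumPy; the binary
symmetric channel with crossover parameter $0.1$ and the uniform input
pmf are arbitrary choices) evaluates the integral in (\ref{cp}) and
compares it with the mutual information, which is $\ln 2-h_2(0.1)$,
$h_2(u)=-u\ln u-(1-u)\ln(1-u)$ being the binary entropy function (in
nats):
\begin{verbatim}
import numpy as np

eps = 0.1
p = np.array([0.5, 0.5])                 # input pmf
w = np.array([[1 - eps, eps],
              [eps, 1 - eps]])           # channel w(y|x)
q = p @ w                                # output marginal q(y)
lw = np.log(w)                           # estimated RV: ln w(y|x)

def mmse(s):
    v = p[:, None] * w**s                # unnormalized v_s(x|y)
    v /= v.sum(axis=0, keepdims=True)
    cm = (v * lw).sum(axis=0)            # E_s{ln w(y|X) | Y=y}
    cv = (v * (lw - cm[None, :])**2).sum(axis=0)
    return (q * cv).sum()

s = np.linspace(0.0, 1.0, 2001)
m = np.array([mmse(t) for t in s])
C = np.sum(np.diff(s) * 0.5 * ((s * m)[1:] + (s * m)[:-1]))
h2 = lambda u: -u*np.log(u) - (1 - u)*np.log(1 - u)
print(C, np.log(2.0) - h2(eps))          # the two numbers agree
\end{verbatim}
\noindent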
\noindent
{\it Analogies With Statistical Mechanics}\\
As was shown in \cite{Rose94} and further advocated in \cite{Merhav08},
the Legendre relation (\ref{lgd1}) has a natural statistical--mechanical
interpretation, where $Z_x(s)$ plays the role of a partition function
of a system (indexed by $x$), $d(x,y)$ is an energy function (Hamiltonian)
and $s$ plays the role of inverse temperature (normally denoted by $\beta$
in the Physics literature). The minimizing $s$ is then the equilibrium inverse
temperature when $|\calX|$ systems (each indexed by $x$, with $n(x)=np(x)$
particles and Hamiltonian $\calE_x(y)=d(x,y)$) are brought into thermal
contact and a total energy of $nD$ is split among them. In this case,
$-R_q(D)$ is the thermodynamical entropy of the combined system and the
MMSE, which is $\mbox{d}D_s/\mbox{d}s$, is intimately related to the
heat capacity of the system.
An alternative, though similar, interpretation was given in
\cite{Merhav09a},\cite{Merhav09b},
where the parameter $s$ was interpreted
as being proportional to a generalized force acting on the
system (e.g., pressure or magnetic field), and the distortion variable is
the conjugate physical quantity influenced by this force (e.g., volume in the case of
pressure, or magnetization in the case of a magnetic field). In this case, the
minimizing $s$ means the equal force that each one of the various subsystems is applying
on the others when they are brought into contact and they equilibrate (e.g.,
equal pressures between two volumes of a gas separated by piston which is free
to move). In this
case, $-R_q(D)$ is interpreted as the free energy of the system, and the
MMSE formulas are intimately related to the fluctuation--dissipation theorem
in statistical mechanics.
More concretely, it was shown in \cite{Merhav09a} that given a source
distribution and a distortion measure, we can describe (at least
conceptually) a concrete physical system
that emulates the rate--distortion
problem in the following manner:
When no force is applied to
the system, its total length is $nD_0$,
where $n$ is the number of particles in the system
(and also the block length in the rate--distortion problem),
and $D_0$ is as defined above.
If one applies to the system a contracting force, that increases
from zero to some final value
$\lambda$, such that the length of the system shrinks to
$nD$, where $D < D_0$ is
analogous to a prescribed distortion level,
then the following two facts hold true:
(i) An {\it achievable lower bound}
on the total amount of
mechanical work that must be carried out
by the contracting force in order to shrink the system to length $nD$, is
given by
\begin{equation}
W\ge nkTR_q(D),
\end{equation}
where $k$ is Boltzmann's constant and
$T$ is the temperature.
(ii) The final force
$\lambda$ is related to $D$ according to
$\lambda=kTR_q'(D)$, where $R_q'(\cdot)$ is the derivative of $R_q(\cdot)$.
Thus, the rate--distortion function plays the role of a fundamental limit,
not only in Information Theory, but in Physics as well.
\section{Examples}
In this section, we provide a few examples for the use of Theorem 1.
The first two examples are simple and well known, and their purpose
is just to demonstrate how to use
this theorem in order to calculate rate--distortion
functions. The third example is aimed to demonstrate how Theorem 1
can be useful as a new method to
evaluate the behavior of a certain rate--distortion
function (which is apparently not straightforward
to derive otherwise) at both the low distortion (a.k.a.\ high
resolution) regime and the high distortion regime. Specifically,
we first derive, for this example, upper and lower bounds on $R(D)$, which
are applicable in certain ranges of high--distortion.
These bounds have the same asymptotic behavior as $D$
tends to its maximum possible value, and so, they sandwich the
exact high--distortion asymptotic behavior
of the true rate--distortion function. A similar analysis is then carried out in
the low distortion range, and again, the two bounds have the same limiting
behavior in the very low distortion limit.
In the fourth and last example, we show how Theorem 1
can easily be used to
evaluate the high--resolution behavior of the rate--distortion
function for a general power--law distortion measure of the form
$d(x,y)=|x-y|^r$.
\subsection{Binary Symmetric Source and Hamming Distortion}
Perhaps the simplest example is that of
the binary symmetric source (BSS) and the Hamming distortion
measure. In this case, the optimum $q$ is also symmetric. Here $\Delta=d(X,Y)$
is a binary RV with
\begin{equation}
\mbox{Pr}\{\Delta=1|X=x\}=\frac{e^{-s}}{1+e^{-s}}
\end{equation}
independently of
$x$. Thus, the MMSE estimator of $d(X,Y)$ based on $X$ is
\begin{equation}
\hat{\Delta}=\frac{e^{-s}}{1+e^{-s}},
\end{equation}
regardless of $X$,
and so the resulting MMSE (which
is simply the variance in this case) is easily found to be
\begin{equation}
\mbox{mmse}_s(\Delta|X)=\frac{e^{-s}}{(1+e^{-s})^2}.
\end{equation}
Accordingly,
\begin{equation}
D=\frac{1}{2}-\int_0^s\frac{e^{-\hs}\mbox{d}
\hs}{(1+e^{-\hs})^2}=\frac{e^{-s}}{1+e^{-s}}
\end{equation}
and
\begin{eqnarray}
R(D)&=&\int_0^s\frac{\hs e^{-\hs}
\mbox{d}\hs}{(1+e^{-\hs})^2}\nonumber\\
&=&\ln 2 + \frac{se^{s}}{1+e^{s}}-\ln(1+e^{s})\nonumber\\
&=&\ln 2-h_2\left(\frac{e^{s}}{1+e^{s}}\right)\nonumber\\
&=&\ln 2-h_2(D),
\end{eqnarray}
where $h_2(u)=-u\ln u-(1-u)\ln(1-u)$ is the binary entropy function.
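As a quick numerical sanity check (assuming NumPy), one may integrate
the MMSE expression above on a grid of $s$ values and verify that the
resulting parametric pairs $(D_s,R(D_s))$ indeed fall on the curve
$\ln 2-h_2(D)$:
\begin{verbatim}
import numpy as np

s = np.linspace(0.0, 10.0, 5001)
m = np.exp(-s) / (1.0 + np.exp(-s))**2   # mmse_s(Delta|X)
ds = np.diff(s)
D = 0.5 - np.concatenate(([0.0],
        np.cumsum(ds * 0.5 * (m[1:] + m[:-1]))))
R = np.concatenate(([0.0],
        np.cumsum(ds * 0.5 * ((s * m)[1:] + (s * m)[:-1]))))
h2 = lambda u: -u*np.log(u) - (1.0 - u)*np.log(1.0 - u)
print(np.abs(R - (np.log(2.0) - h2(D))).max())  # tiny
\end{verbatim}
The maximal deviation printed in the last line is of the order of the
trapezoid integration error.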
\subsection{Quadratic distortion and Gaussian Reproduction}
Another classic example concerns a general source with $\sigma_x^2=\bE\{X^2\} <
\infty$, the quadratic distortion $d(x,y)=(x-y)^2$, and a Gaussian reproduction
distribution, namely, $q(y)$ is the pdf of a zero--mean Gaussian RV with
variance $\sigma_y^2=\sigma_x^2-D$, for
a given $D < \sigma_x^2$. In this case, it is well known that
$R_q(D)=\frac{1}{2}\ln\frac{\sigma_x^2}{D}$ (even without assuming that
the source $X$ is Gaussian).
We now demonstrate how this
result is obtained from the MMSE formula of Theorem 1.\footnote{We are not
arguing here that this is the simplest way to calculate $R_q(D)$ in this
example, the purpose is merely to demonstrate how Theorem 1 can be used.}
First, observe that since $q(y)$ is the pdf pertaining to
$\calN(0,\sigma_x^2-D)$,
then
\begin{equation}
w_s(y|x)=\frac{q(y)e^{-s(y-x)^2}}{\int_{-\infty}^{+\infty}\mbox{d}y'q(y')e^{-s(y'-x)^2}}
\end{equation}
is easily found to correspond to the Gaussian additive channel
\begin{equation}
Y= \frac{2s(\sigma_x^2-D)}{1+2s(\sigma_x^2-D)}\cdot X+Z
\end{equation}
where $Z$ is a zero--mean Gaussian RV with variance
$\sigma_z^2=(\sigma_x^2-D)/[1+2s(\sigma_x^2-D)]$, and $Z$ is independent of $X$.
Now,
\begin{eqnarray}
\Delta&=&(Y-X)^2\nonumber\\
&=&\left[Y-\frac{2s(\sigma_x^2-D)}{1+2s(\sigma_x^2-D)}\cdot X
-\frac{X}{1+2s(\sigma_x^2-D)}\right]^2\nonumber\\
&=&(Z-\alpha X)^2\nonumber\\
&=&Z^2-2\alpha XZ+\alpha^2X^2
\end{eqnarray}
where $\alpha\dfn 1/[1+2s(\sigma_x^2-D)]$.
Thus, the MMSE estimator of $\Delta$ given $X$ is obtained by
\begin{eqnarray}
\hat{\Delta}&=&\bE\{\Delta|X\}\nonumber\\
&=&\bE\{Z^2|X\}-2\alpha X\bE\{Z|X\}+\alpha^2X^2\nonumber\\
&=&\bE\{Z^2\}-2\alpha X\bE\{Z\}+\alpha^2X^2\nonumber\\
&=&\bE\{Z^2\}+\alpha^2X^2\nonumber\\
&=&\sigma_z^2+\alpha^2X^2,
\end{eqnarray}
which yields
\begin{eqnarray}
& &\mbox{mmse}_s(\Delta|X)\nonumber\\
&=&
\bE\{(\hat{\Delta}-\Delta)^2\}\nonumber\\
&=&\bE\{(\sigma_z^2+\alpha^2X^2-Z^2+
2\alpha XZ-\alpha^2X^2)^2\}\nonumber\\
&=&2\sigma_z^4+4\alpha^2\sigma_x^2\sigma_z^2\nonumber\\
&=&\frac{2(\sigma_x^2-D)^2}{[1+2s(\sigma_x^2-D)]^2}+
\frac{4\sigma_x^2(\sigma_x^2-D)}{[1+2s(\sigma_x^2-D)]^3}.
\end{eqnarray}
Now, in our case, $D_0=\sigma_x^2+\sigma_y^2=2\sigma_x^2-D$, and so,
for $s=1/(2D)$, we get
\begin{eqnarray}
D_s&=&D_0-\int_0^s\mbox{d}\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&=&2\sigma_x^2-D-\nonumber\\
& &2(\sigma_x^2-D)^2\int_0^{1/2D}\frac{\mbox{d}\hs}{[1+2\hs(\sigma_x^2-D)]^2}-\nonumber\\
& &4\sigma_x^2(\sigma_x^2-D)\int_0^{1/2D}\frac{\mbox{d}\hs}{[1+2\hs(\sigma_x^2-D)]^3}\nonumber\\
&=&2\sigma_x^2-D+\nonumber\\
& &(\sigma_x^2-D)\left[\frac{1}{1+2s(\sigma_x^2-D)}\right]_0^{1/2D}+\nonumber\\
& &\sigma_x^2\left\{\frac{1}{[1+2s(\sigma_x^2-D)]^2}\right\}_0^{1/2D}
\end{eqnarray}
which, after some straightforward algebra, gives $D_s=D$. I.e.,
$s$ and $D$ are indeed related by $s=1/(2D)$, or $D=1/(2s)$.
Finally,
\begin{eqnarray}
R_q(D)&=&\int_0^s\mbox{d}\hs\cdot\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&=&2(\sigma_x^2-D)^2\int_0^{1/2D}\frac{\hs\mbox{d}\hs}{[1+2\hs(\sigma_x^2-D)]^2}+\nonumber\\
& &4\sigma_x^2(\sigma_x^2-D)\int_0^{1/2D}
\frac{\hs\mbox{d}\hs}{[1+2\hs(\sigma_x^2-D)]^3}\nonumber\\
&=&\frac{1}{2}\left\{\ln[1+2s(\sigma_x^2-D)]+\right.\nonumber\\
& &\left.\frac{1}{1+2s(\sigma_x^2-D)}\right\}_0^{1/2D}+\nonumber\\
& &\frac{\sigma_x^2}{\sigma_x^2-D}
\left[\frac{1}{2[1+2s(\sigma_x^2-D)]^2}-\right.\nonumber\\
& &\left.\frac{1}{1+2s(\sigma_x^2-D)}\right]_0^{1/2D}
\end{eqnarray}
which yields, after a simple algebraic manipulation,
$R_q(D)=\frac{1}{2}\ln\frac{\sigma_x^2}{D}$.
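Here, too, the two integrals are easily checked numerically (a sketch,
assuming NumPy; $\sigma_x^2=1$ and $D=1/4$ are arbitrary choices):
\begin{verbatim}
import numpy as np

sx2, Dt = 1.0, 0.25                  # sigma_x^2, target distortion
b = sx2 - Dt                         # sigma_y^2 = sigma_x^2 - D
s = np.linspace(0.0, 1.0 / (2.0 * Dt), 20001)
m = 2*b**2/(1 + 2*s*b)**2 + 4*sx2*b/(1 + 2*s*b)**3
ds = np.diff(s)
D = (sx2 + b) - np.sum(ds * 0.5 * (m[1:] + m[:-1]))
R = np.sum(ds * 0.5 * ((s * m)[1:] + (s * m)[:-1]))
print(D, R, 0.5 * np.log(sx2 / Dt))  # D ~ 0.25, R ~ 0.5 ln 4
\end{verbatim}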
\subsection{Quadratic Distortion and Binary Reproduction}
In this example, we again assume the quadratic distortion measure, but now, instead
of Gaussian reproduction codewords, we impose binary reproduction, $y\in\{-a,+a\}$,
where $a$ is a given constant.\footnote{The derivation, in this example, can be
extended to apply also to larger finite reproduction alphabets.}
Clearly, if the pdf of the source $X$ is
symmetric about the origin, then the best output distribution is also
symmetric, i.e.,
$q(+a)=q(-a)=1/2$. Thus, $R_q(D)=R(D)$ for every $D$, given this choice of $q$.
The channel $w_s(y|x)$ is now given by
\begin{equation}
w_s(y|x)=\frac{e^{-s(y-x)^2}}{e^{-s(x-a)^2}+e^{-s(x+a)^2}}=\frac{e^{2sxy}}{2\cosh(2asx)}.
\end{equation}
Note that in this case, the minimum possible distortion
(obtained for $s\to\infty$) is given by
$D_{\infty}=\bE\{[X-a\mbox{sgn}(X)]^2\}$. Thus, the rate--distortion function
is actually defined only for $D\ge D_{\infty}$. The maximum distortion of
interest is $D_0=\sigma_x^2+a^2$, pertaining to the choice $s=0$, where $X$ and $Y$ are
independent.
To the best of our knowledge, there is no closed form expression for $R(D)$ in
this example. The parametric representation of $D_s$ and $R(D_s)$, both as
functions of $s$, does not seem to lend itself to an explicit formula of
$R(D)$. The reason is that
\begin{eqnarray}
D_s&=&\bE\{(Y-X)^2\}\nonumber\\
&=&\sigma_x^2+a^2-2\bE\{XY\}\nonumber\\
&=&\sigma_x^2+a^2-2\bE\{X\cdot\bE\{Y|X\}\}\nonumber\\
&=&\sigma_x^2+a^2-2a\bE\{X\tanh(2asX)\}
\end{eqnarray}
and there is no apparent closed--form expression of $s$ as a function of $D$,
which can be substituted into the expression of $R(D_s)$.
Consider the MMSE estimator of $\Delta=(Y-X)^2=X^2+a^2-2XY$:
\begin{eqnarray}
\hat{\Delta}&=&\bE\{(Y-X)^2|X\}\nonumber\\
&=&X^2+a^2-2X\bE\{Y|X\}\nonumber\\
&=&X^2+a^2-2aX\tanh(2asX).
\end{eqnarray}
The MMSE is then
\begin{eqnarray}
\mbox{mmse}_s(\Delta|X)&=&\bE\{[2X(Y-a\tanh(2asX))]^2\}\nonumber\\
&=&4a^2[\sigma_x^2-\bE\{X^2\tanh^2(2asX)\}].
\end{eqnarray}
We first use this expression to obtain upper and lower bounds on $R(D)$
which are asymptotically exact in the
range of high distortion levels (small $s$).
Subsequently, we do the same for the range of low distortion (large $s$).\\
\noindent
{\it High Distortion.}
Consider first the high distortion regime.
For small $s$, we can safely upper bound $\tanh^2(2asX)$ by $(2asX)^2$ and get
\begin{eqnarray}
\mbox{mmse}_s(\Delta|X)&\ge&
4a^2(\sigma_x^2-4a^2s^2\bE\{X^4\})\nonumber\\
&=&4a^2\sigma_x^2-16a^4\rho_x^4s^2
\end{eqnarray}
where $\rho_x^4\dfn\bE\{X^4\}$.
This results in the following lower bound to $R(D_s)$:
\begin{eqnarray}
R(D_s)&=&\int_0^s\mbox{d}\hs\cdot\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&\ge&\int_0^s\mbox{d}\hs\cdot\hs[4a^2\sigma_x^2-16a^4\rho_x^4\hs^2]\nonumber\\
&=&2a^2\sigma_x^2s^2-4a^4\rho_x^4s^4\dfn r(s).
\end{eqnarray}
To get a lower bound to $D_s$, we need an upper bound to the MMSE.
An obvious upper bound (which is tight for small $s$) is given by
$4a^2\sigma_x^2$, which yields:
\begin{eqnarray}
D_s&=&D_0-\int_0^s\mbox{d}\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&\ge&D_0-\int_0^s\mbox{d}\hs\cdot(4a^2\sigma_x^2)\nonumber\\
&=&D_0-4a^2\sigma_x^2s
\end{eqnarray}
or
\begin{equation}
s\ge \frac{D_0-D_s}{4a^2\sigma_x^2}.
\end{equation}
Consider now the range $s\in[0, \sigma_x/(2a\rho_x^2)]$, which is
the range where $r(s)$ is monotonically increasing as a function of $s$.
In this range, a lower bound on $s$ would yield a lower bound on $r(s)$,
and hence a lower bound to $R(D_s)$. Specifically, for $s\in[0,
\sigma_x/(2a\rho_x^2)]$, we get
\begin{eqnarray}
R(D_s)&\ge&r(s)\nonumber\\
&\ge&r\left(\frac{D_0-D_s}{4a^2\sigma_x^2}\right)\nonumber\\
&=&\frac{(D_0-D_s)^2}{8a^2\sigma_x^2}-\frac{\rho_x^4(D_0-D_s)^4}{64a^4\sigma_x^8}.
\end{eqnarray}
In other words, we obtain the lower bound
\begin{equation}
\label{lowerbound}
R(D)\ge
\frac{(D_0-D)^2}{8a^2\sigma_x^2}-\frac{\rho_x^4(D_0-D)^4}{64a^4\sigma_x^8}\dfn
R_L(D),
\end{equation}
for the range of distortions $D\in[D_0-2a\sigma_x^3/\rho_x^2,D_0]$.
It is obvious that, at least in some range of high distortion levels,
this bound is better than the Shannon lower bound,
\begin{equation}
R_S(D)=h(X)-\frac{1}{2}\ln(2\pi e D),
\end{equation}
where $h(X)$ is the differential entropy of $X$. This can be seen right away
from the fact that $R_S(D)$ vanishes at $D=(2\pi
e)^{-1}e^{2h(X)}\le\sigma_x^2$, whereas the bound $R_L(D)$ of
(\ref{lowerbound}) vanishes at $D_0=\sigma_x^2+a^2$, which is strictly
larger.
By applying the above--mentioned upper bound to the MMSE in the rate equation,
and the lower bound to the MMSE in the distortion equation, we can also get an upper
bound to $R(D)$ in the high--distortion range, in a similar manner.
Specifically,
\begin{equation}
R(D_s)\le\int_0^s\mbox{d}\hs\cdot\hs(4a^2\sigma_x^2)=2a^2\sigma_x^2s^2,
\end{equation}
and
\begin{eqnarray}
D_s&\le&D_0-\int_0^s\mbox{d}\hs(4a^2\sigma_x^2-16a^4\rho_x^4\hs^2)\nonumber\\
&=&D_0-4a^2\sigma_x^2s+\frac{16}{3}a^4\rho_x^4s^3\dfn \delta(s).
\end{eqnarray}
Considering again the range $s\in[0, \sigma_x/(2a\rho_x^2)]$, where
$\delta(s)$ is monotonically decreasing, the inverse function $\delta^{-1}(D)$
is monotonically decreasing as well, and so an upper bound on $R(D)$ will be
obtained by substituting $\delta^{-1}(D)$ instead of $s$ in the bound on the
rate, i.e., $R(D)\le 2a^2\sigma_x^2[\delta^{-1}(D)]^2$. To obtain an explicit
expression for $\delta^{-1}(D)$, we need to solve a cubic equation in $s$ and
select the relevant solution among the three. Fortunately, since this cubic equation has no
quadratic term, the expression of the solution can be found
trigonometrically and it is relatively simple (see, e.g., \cite[p.\
9]{mathhandbook}): Specifically, the cubic equation $s^3+As+B=0$ has solutions of the
form $s=m\cos\theta$, where $m=2\sqrt{-A/3}$ and $\theta$ is any solution to
the equation $\cos(3\theta)=\frac{3B}{Am}$. In other words,
the three solutions to the above cubic equation are
$s_i=m\cos\theta_i$, where
\begin{equation}
\theta_i=\frac{1}{3}\cos^{-1}\left(\frac{3B}{Am}\right)+\frac{2\pi
(i-1)}{3},~~~~~~i=1,2,3,
\end{equation}
with $\cos^{-1}(t)$ being
defined as the unique solution to the equation $\cos\alpha=t$
in the range $\alpha\in[0,\pi]$.
In our case,
\begin{equation}
A=-\frac{3\sigma_x^2}{4a^2\rho_x^4},~~~B=\frac{3(D_0-D)}{16a^4\rho_x^4},
\end{equation}
and so, the relevant solution for $s$ (i.e., the one that tends to
zero as $D\to D_0$),
which is $\delta^{-1}(D)$, is given by
\begin{eqnarray}
& &\delta^{-1}(D)\nonumber\\
&=&\frac{\sigma_x}{a\rho_x^2}
\cos\left[\frac{1}{3}\cos^{-1}\left(
\frac{3\rho_x^2(D-D_0)}{4a\sigma_x^3}\right)+\frac{4\pi}{3}\right]\nonumber\\
&=&\frac{\sigma_x}{a\rho_x^2}\cos\left[
\frac{1}{3}\left(\frac{\pi}{2}+\sin^{-1}\left(
\frac{3\rho_x^2(D_0-D)}{4a\sigma_x^3}\right)\right)+\frac{4\pi}{3}\right]
\nonumber\\
&=&\frac{\sigma_x}{a\rho_x^2}\sin\left[
\frac{1}{3}\sin^{-1}\left(
\frac{3\rho_x^2(D_0-D)}{4a\sigma_x^3}\right)\right],
\end{eqnarray}
where $\sin^{-1}(t)$ is defined as the unique solution to the equation
$\sin\alpha =t$ in the range $\alpha\in[-\pi/2,\pi/2]$.
This yields the upper bound
\begin{eqnarray}
R(D)&\le&\frac{2\sigma_x^4}{\rho_x^4}\sin^2\left[
\frac{1}{3}\sin^{-1}\left(
\frac{3\rho_x^2(D_0-D)}{4a\sigma_x^3}\right)\right]\nonumber\\
&\dfn&R_U(D),
\end{eqnarray}
for the range of distortions $D\in[D_0-4a\sigma_x^3/(3\rho_x^2),D_0]$.
For very small $s$, the upper and the lower bound
to the MMSE asymptotically coincide (namely,
$\mbox{mmse}_s(\Delta|X)\approx 4a^2\sigma_x^2$), and so both $R_U(D)$ and
$R_L(D)$ exhibit the same behavior near $D=D_0$, and hence so does the
true rate--distortion function, $R(D)$, which is
\begin{equation}
R(D)\approx \frac{(D_0-D)^2}{8a^2\sigma_x^2}
\end{equation}
or, stated more rigorously,
\begin{equation}
\lim_{D\uparrow D_0}\frac{R(D)}{(D_0-D)^2}=\frac{1}{8a^2\sigma_x^2}.
\end{equation}
Note that the high--distortion behavior of $R(D)$ depends on the
pdf of $X$ only via its second
order moment $\sigma_x^2$. On the other hand, the upper and lower
bounds, $R_U(D)$ and
$R_L(D)$,
depend only on $\sigma_x^2$ and the fourth order moment, $\rho_x^4$.
In Fig.\ \ref{bounds}, we display the upper bound $R_U(D)$
(solid curve) and the lower bound $R_L(D)$ (dashed curve) for
the choice $\sigma_x^2=a^2=1$
(hence $D_0=\sigma_x^2+a^2=2$)
and $\rho_x^4=3$, which is suitable for
the Gaussian source. The range of displayed distortions, $[1.25,2]$,
is part of the range where both bounds are valid in this numerical example.
As can be seen, the
functions $R_L(D)$ and $R_U(D)$ are very close
throughout the interval $[1.7,2]$, which is a fairly
wide range of distortion levels.
The corresponding Shannon lower bound,
in this case, which is $R_S(D)=\max\{0,\frac{1}{2}\ln\frac{1}{D}\}$,
vanishes for all $D\ge 1$ and hence also in the range displayed in the
graph.\\
\begin{figure}[h!t!b!]
\centering
\includegraphics[width=8.5cm, height=8.5cm]{graph4.eps}
\caption{The upper bound $R_U(D)$ (solid curve)
and the lower bound $R_L(D)$ (dashed curve) in the high--distortion regime
for $\sigma_x^2=a^2=1$ and $\rho_x^4=3$. The Shannon lower bound
vanishes in this distortion range.}
\label{bounds}
\end{figure}
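For completeness, the two bounds displayed in Fig.\ \ref{bounds} can be
reproduced by the following short Python fragment (assuming NumPy; the
parameter values are those of the figure):
\begin{verbatim}
import numpy as np

sx2, a, rx4 = 1.0, 1.0, 3.0        # sigma_x^2, a, rho_x^4
sx, rx2 = np.sqrt(sx2), np.sqrt(rx4)
D0 = sx2 + a**2
D = np.linspace(1.25, D0, 400)     # displayed distortion range
R_L = ((D0 - D)**2 / (8 * a**2 * sx2)
       - rx4 * (D0 - D)**4 / (64 * a**4 * sx2**4))
R_U = (2 * sx2**2 / rx4) * np.sin(
        np.arcsin(3 * rx2 * (D0 - D) / (4 * a * sx**3)) / 3.0)**2
print(bool((R_L <= R_U + 1e-12).all()))  # lower <= upper everywhere
\end{verbatim}
The final line verifies that the lower bound indeed lies below the upper
bound throughout the displayed range.\\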
\noindent
{\it Low Distortion.}
We now consider the small distortion regime, where $s$ is very large.
Define the function
\begin{equation}
f(u)=\left(\frac{1-u}{1+u}\right)^2 ~~~u\in[0,1)
\end{equation}
and consider the Taylor series expansion of $f(u)$ around $u=0$,
which, for the sake of convenience, will be represented as
\begin{equation}
f(u)=1-\sum_{n=1}^\infty \phi_nu^n
\end{equation}
The coefficients $\{\phi_n\}$ will be determined explicitly in the sequel.
Now, clearly, $\tanh^2(2asx)\equiv f(e^{-4as|x|})$, and so we have
\begin{eqnarray}
& &\mbox{mmse}_s(\Delta|X)\nonumber\\
&=&4a^2\left[\sigma_x^2-\bE\{X^2f(\exp\{-4as|X|\})\}\right]\nonumber\\
&=&4a^2\left[\sigma_x^2-\bE\left\{X^2\left(1-\sum_{n=1}^\infty
\phi_ne^{-4ans|X|}\right)\right\}\right]\nonumber\\
&=&4a^2\sum_{n=1}^\infty\phi_n\bE\left\{X^2
e^{-4ans|X|}\right\}.
\end{eqnarray}
To continue from this point,
we will have to let $X$ assume a certain pdf.
For convenience, let us select $X$ to have the Laplacian
pdf with parameter $\theta$, i.e.,
\begin{equation}
p(x) = \frac{\theta}{2}e^{-\theta|x|}.
\end{equation}
We then obtain
\begin{eqnarray}
\mbox{mmse}_s(\Delta|X)&=&2a^2\theta\sum_{n=1}^\infty\phi_n\int_{-\infty}^{+\infty}
x^2e^{-(\theta+4ans)|x|}\mbox{d}x\nonumber\\
&=&8a^2\theta\sum_{n=1}^\infty\frac{\phi_n}{(\theta+4ans)^3}.
\end{eqnarray}
Thus,
\begin{eqnarray}
& &R(D_s)\nonumber\\
&=&R(D_\infty)-\int_s^\infty
\mbox{d}\hs\cdot\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)\nonumber\\
&=&1-8a^2\theta\sum_{n=1}^\infty\phi_n\cdot\int_s^\infty
\frac{\mbox{d}\hs\cdot\hs}{(\theta+4an\hs)^3}\nonumber\\
&=&1-\frac{\theta}{2}\sum_{n=1}^\infty\frac{\phi_n}{n^2}\left[
\frac{1}{\theta+4ans}-\frac{\theta}{2(\theta+4ans)^2}\right].
\end{eqnarray}
Thus far, our derivation has been exact. We now make an approximation that
applies for large $s$ by neglecting the terms proportional to
$(\theta+4ans)^{-2}$ and by neglecting $\theta$ compared to $4ans$ in the
denominators of $1/(\theta+4ans)$. This results in the approximation
\begin{equation}
R(D_s)\approx
\tilde{R}(D_s)\dfn 1-\frac{\theta}{8as}\sum_{n=1}^\infty\frac{\phi_n}{n^3}.
\end{equation}
Let us denote $C\dfn\frac{\theta}{8a}\sum_{n=1}^\infty\frac{\phi_n}{n^3}$.
Then, $\tilde{R}(D_s)=1-C/s$.
Applying a similar calculation to
$D_s=D_{\infty}+\int_s^\infty\mbox{d}\hs\cdot\mbox{mmse}_{\hs}(\Delta|X)$,
yields, in a similar manner, the approximation
\begin{equation}
D_s\approx
\tilde{D}_s\dfn D_\infty+\frac{C}{2s^2}.
\end{equation}
It is easy now to express $s$ as a function of $D$ and substitute into the
rate equation to obtain
\begin{equation}
\label{highres}
R(D)\approx 1-\sqrt{2C(D-D_\infty)}.
\end{equation}
Finally, it remains to determine the coefficients $\{\phi_n\}$ and then the
constant $C$. The coefficients can easily be obtained by using the identity
$(1+u)^{-1}=\sum_{n=0}^\infty(-1)^nu^n$ ($u\in[0,1)$),
which yields, after simple algebra, $\phi_n=4n(-1)^{n+1}$.
Thus,
\begin{equation}
C=\frac{\theta}{2a}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^2}=\frac{\pi^2\theta}{24a}.
\end{equation}
We have thus obtained a precise characterization of $R(D)$ in the
high--resolution regime:
\begin{equation}
\label{highreslim}
\lim_{D\downarrow
D_\infty}\frac{1-R(D)}{\sqrt{D-D_\infty}}=\sqrt{2C}=
\frac{\pi}{2}\cdot\sqrt{\frac{\theta}{3a}}.
\end{equation}
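Both the coefficients and the constant are easy to validate numerically. The following sketch compares the truncated series $1-\sum_n\phi_nu^n$ with $f(u)$, and the partial sums of $C$ with $\pi^2\theta/(24a)$ (for simplicity, the illustrative values $\theta=a=1$ are assumed):
\begin{verbatim}
import numpy as np

n = np.arange(1, 2001)
phi = 4.0 * n * (-1.0)**(n + 1)        # claimed Taylor coefficients

u = 0.1
print(1.0 - np.sum(phi * u**n), ((1 - u) / (1 + u))**2)  # f(u) check

C = 0.125 * np.sum(phi / n**3)         # theta/(8a) * sum, theta = a = 1
print(C, np.pi**2 / 24.0)              # closed-form check
\end{verbatim}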
By applying a somewhat more refined analysis, one obtains
(similarly to the above derivation in the high--distortion regime) upper and lower
bounds on $R(D_s)$ and $D_s$, this time as polynomials in $1/s$.
These again lend themselves to the derivation of upper and lower bounds
on $R(D)$, which are applicable in certain intervals of low distortion.
Specifically, the resulting upper bound is
\begin{equation}
R(D)\le 1-\sqrt{2C(D-D_{\infty})}+C_1(D-D_{\infty}),
\end{equation}
where $C_1=\frac{9\theta}{\pi^2a}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^3}$,
and it is valid in the range $D\in[D_\infty, D_{\infty}+C/(2C_1^2)]$.
The obtained lower bound is
\begin{equation}
R(D)\ge 1-\frac{\sqrt{6C(D-D_{\infty})}}{2\cos\left[\frac{1}{3}\sin^{-1}\left(
2C_1\sqrt{\frac{6(D-D_{\infty})}{C}}\right)+\frac{\pi}{6}\right]},
\end{equation}
and it applies to the range $D\in[D_\infty,D_\infty+C/(12C_1^2)]$.
Both bounds have the same leading term in asymptotic behavior,
which supports eq.\ (\ref{highreslim}).
The details of this derivation are omitted since they are very similar to
those of the high--distortion analysis.
\subsection{High Resolution for a General $L^r$ Distortion Measure}
Consider the case where the distortion measure is given by the $L^r$ metric,
$d(x,y)=|x-y|^r$ for some fixed $r > 0$. Let the reproduction symbols
be selected independently at random according to the uniform pdf
\begin{equation}
q(y)=\left\{\begin{array}{ll}
\frac{1}{2A} & |y|\le A\\
0 & \mbox{elsewhere}\end{array}\right.
\end{equation}
Then
\begin{equation}
w_s(y|x)=\frac{e^{-s|y-x|^r}}{\int_{-A}^{+A}\mbox{d}y'\cdot e^{-s|y'-x|^r}}
\end{equation}
and so
\begin{eqnarray}
D_s&=&\int_{-\infty}^{+\infty}\mbox{d}xp(x)\cdot
\frac{\int_{-A}^{+A}\mbox{d}y\cdot |x-y|^re^{-s|y-x|^r}}
{\int_{-A}^{+A}\mbox{d}y\cdot e^{-s|y-x|^r}}\nonumber\\
&=&-\int_{-\infty}^{+\infty}\mbox{d}xp(x)\cdot\frac{\partial}{\partial
s}\ln\left[\int_{-A}^{+A}\mbox{d}y\cdot e^{-s|y-x|^r}\right].
\end{eqnarray}
Now, in the high--resolution limit, where $s$ is very large, the
integrand $e^{-s|y-x|^r}$ decays very rapidly as $y$ takes values
away from $x$, and so, for every $x\in(-A,+A)$ (which for large
enough $A$, is the dominant interval for
the outer integral over $p(x)\mbox{d}x$), the boundaries,
$-A$ and $+A$, of the inner integral can be extended to $-\infty$ and $+\infty$
within a negligible error term
(whose derivative w.r.t. $s$
is negligible too). Having done this, the inner integral no longer
depends on $x$, which also means that the outer integration over $x$ becomes
superfluous.
This results in
\begin{eqnarray}
D_s&=&-\frac{\partial}{\partial s}\ln\left[\int_{-\infty}^{+\infty}
\mbox{d}y\cdot e^{-s|y|^r}\right]\nonumber\\
&=&-\frac{\partial}{\partial s}\ln\left[s^{-1/r}\int_{-\infty}^{+\infty}
\mbox{d}(s^{1/r}y)e^{-|s^{1/r}y|^r}\right]\nonumber\\
&=&-\frac{\partial}{\partial s}\ln\left[s^{-1/r}\int_{-\infty}^{+\infty}
\mbox{d}t\cdot e^{-|t|^r}\right]\nonumber\\
&=&-\frac{\partial}{\partial s}\ln(s^{-1/r})\nonumber\\
&=&\frac{1}{rs}.
\end{eqnarray}
Thus,
\begin{equation}
\mbox{mmse}_s(\Delta|X)=-\frac{\mbox{d}D_s}{\mbox{d}s}=\frac{1}{rs^2},
\end{equation}
which yields
\begin{equation}
\frac{\mbox{d}R_q(D_s)}{\mbox{d}s}=s\cdot\mbox{mmse}_s(\Delta|X)=\frac{1}{rs}
\end{equation}
and so
\begin{eqnarray}
R_q(D_s)&=&K+\frac{1}{r}\ln s\nonumber\\
&=&K+\frac{1}{r}\ln\left(\frac{1}{rD_s}\right)
\end{eqnarray}
where $K$ is an integration constant. We have therefore
obtained that in the high--resolution limit, the
rate--distortion function w.r.t.\ $q$ behaves according to
\begin{equation}
R_q(D)=K'-\frac{1}{r}\ln D,
\end{equation}
with $K'=K-(\ln r)/r$. While this simple derivation does not yet determine the
constant $K'$, it does provide the correct characteristics
of the dependence of $R_q(D)$ upon $D$
for small $D$. For the case of quadratic distortion, where $r=2$, one easily
identifies the familiar factor of $1/2$ in front of the log--distortion term.
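The key step $D_s=1/(rs)$ above can also be verified by direct numerical integration; a minimal sketch (numerical quadrature plus a central difference for the log-derivative):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def D_s(s, r, h=1e-4):
    # -d/ds ln( int exp(-s|y|^r) dy ), computed numerically
    lnZ = lambda t: np.log(quad(lambda y: np.exp(-t * np.abs(y)**r),
                                -np.inf, np.inf)[0])
    return -(lnZ(s + h) - lnZ(s - h)) / (2.0 * h)

s = 50.0
for r in [1.0, 2.0, 3.0]:
    print(r, D_s(s, r), 1.0 / (r * s))   # the two should nearly agree
\end{verbatim}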
The exact constant $K$ (or $K'$) can be determined by returning to the original expression of
$R_q(D)$ as the Legendre transform of the log--moment generating function of
the distortion (eq.\ (\ref{lgd1})),
and setting there $s=1/(rD)$ as the minimizing $s$ for the given $D$.
The resulting expression turns out to be
\begin{equation}
K'=\ln\left[\frac{rA}{\Gamma(1/r)}\right]-\frac{1}{r}\ln(er).
\end{equation}
\section{Conclusion}
In this paper, we derived relations between the rate--distortion
function $R_q(D)$ and the MMSE in estimating the distortion given the source
symbol. These relations have been discussed from several aspects,
and it was demonstrated how they can be used to obtain upper and
lower bounds on $R_q(D)$, as well as the exact asymptotic behavior in very
high and very low distortion.
The bounds derived in our examples were
obtained directly from purely mathematical bounds on the expression of the MMSE.
We have not explored, however, examples of bounds on $R_q(D)$
that stem from estimation--theoretic bounds on the MMSE, as was described in
Section III. In future work, it would be interesting to explore the usefulness
of such bounds as well. Another interesting direction for further work would be
to make an attempt to extend our results to rate--distortion functions
pertaining to more involved settings, such as successive refinement coding,
and situations that include side information.
\section*{Appendix}
\noindent
{\it Proof of Theorem 1.}\\
Consider a random selection of a codebook of $M=e^{nR}$ codewords, where the
various codewords are drawn independently, and each codeword,
$\bY=(Y_1,\ldots,Y_n)$, is drawn according to the product measure
$Q(\by)=\prod_{i=1}^nq(y_i)$. Let $\bx=(x_1,\ldots,x_n)$ be a typical source
vector, i.e., the number of times each symbol $x\in\calX$ appears in
$\bx$ is (very close to) $np(x)$. We now ask: what
is the probability of the event
$\{\sum_{i=1}^nd(x_i,Y_i)\le nD\}$? As this is a large deviations event
whenever $D < \sum_{x,y}p(x)q(y)d(x,y)$, this probability must decay
exponentially
with some rate function $I_q(D) > 0$, i.e.,
\begin{equation}
I_q(D)=\lim_{n\to\infty}\left[-\frac{1}{n}\ln\mbox{Pr}\left\{\sum_{i=1}^nd(x_i,Y_i)\le
nD\right\}\right].
\end{equation}
The function $I_q(D)$ can be determined in two ways.
The first is by the method of types \cite{CK81}, which easily yields
\begin{equation}
\label{mot}
I_q(D)=\min[I(X;Y')+D(q'\|q)],
\end{equation}
where $Y'$ is an auxiliary random variable governed by
$q'(y)=\sum_{x\in\calX}p(x)w(y|x)$ and the minimum is over all
conditional pmf's $\{w(y|x)\}$ that satisfy the inequality
$\sum_{x\in\calX}p(x)\sum_{y\in\calY}w(y|x)d(x,y)\le D$.
The second method is based on large deviations theory \cite{DZ93} (see also
\cite{Merhav08}), which yields
\begin{equation}
\label{ldt}
I_q(D)=-\min_{s\ge 0}\left[sD+\sum_{x\in\calX}p(x)\ln Z_x(s)\right].
\end{equation}
We first argue that $I_q(D)=R_q(D)$.
The inequality $I_q(D)\le R_q(D)$ is
obvious, as $R_q(D)$ is obtained by confining the minimization over the
channels in (\ref{mot}) so as to comply with the additional constraint that
$\sum_{x\in\calX}p(x)w(y|x)=q(y)$ for all $y\in\calY$.
The reversed
inequality,
$I_q(D)\ge R_q(D)$, is obtained by the following coding argument:
On the one hand, a trivial extension of the converse to the rate--distortion
coding theorem \cite[p.\ 317]{CT06}, shows that
$R_q(D)$ is a lower bound to the rate--distortion performance
of any code that satisfies $\frac{1}{n}\sum_{i=1}^n\mbox{Pr}\{Y_i=y\}=q(y)$ for all
$y\in\calY$.\footnote{To see why this is true,
consider the functions $\delta_{k}(y)$, $y,k\in\calY$ (each
of which
is defined to equal one for
$y=k$ and zero otherwise) as $|\calY|$ distortion measures, indexed by
$k\in\calY$,
and consider the rate--distortion function w.r.t.\ the usual distortion constraint
and the $|\calY|$ additional
``distortion constraints'' $\bE\{\delta_k(Y)\}\le q(k)$ for all $k\in\calY$,
which, when satisfied, must all be achieved with equality (since they must sum
to unity). The
rate--distortion function w.r.t.\ these $|\calY|+1$ constraints, which is
exactly $R_q(D)$, is easily shown (using the standard method) to be jointly convex in $D$ and $q$.}
On the other hand, we next show that $I_q(D)$ is an achievable rate
for codes in this class.
Consider the random coding mechanism described in
the first paragraph of this proof, with $R=I_q(D)+\epsilon$,
with $\epsilon >0$
being arbitrarily small. Since the
probability that a single randomly drawn codeword satisfies
$\sum_{i=1}^nd(x_i,Y_i)\le nD$ is of the exponential order of
$e^{-nI_q(D)}$, the random selection of a codebook
of size $e^{n[I_q(D)+\epsilon]}$ constitutes
$e^{n[I_q(D)+\epsilon]}$ independent trials of an experiment whose probability
of success is of the exponential order of $e^{-nI_q(D)}$. Using standard random coding
arguments, the probability that at least one codeword, in that codebook, would fall within
distance $nD$ from the given typical $\bx$ becomes overwhelmingly large as
$n\to\infty$. Since
this randomly selected codebook also satisfies $\frac{1}{n}\sum_{i=1}^n\mbox{Pr}\{Y_i=y\}\to q(y)$
in probability (as $n\to\infty$)
for all $y\in\calY$ (by the weak law of large numbers), $I_q(D)$ is
an achievable rate within the class of codes that satisfy
$\frac{1}{n}\sum_{i=1}^n\mbox{Pr}\{Y_i=y\}\to q(y)$ for all $y\in\calY$.
Thus, $I_q(D)\ge R_q(D)$, which together with the reversed inequality proved
above, yields the equality $I_q(D)=R_q(D)$. Consequently, according to eq.\ (\ref{ldt}),
we have established the relation\footnote{
Eq.\ (\ref{lgdr}) appears also in \cite[p.\ 90, Corollary 4.2.3]{Gray90},
with a completely different proof, for the special case where $q$ minimizes
both sides of the equation (and hence it refers to $R(D)$). However, the
extension of that proof to a generic $q$ is not apparent to be straightforward because
here the minimization over the channels is limited by the reproduction
distribution constraint.}
\begin{equation}
\label{lgdr}
R_q(D)=-\min_{s\ge 0}\left[sD+\sum_{x\in\calX}p(x)\ln Z_x(s)\right].
\end{equation}
As this minimization problem is a convex problem ($\ln Z_x(s)$ is convex in
$s$), the minimizing $s$ for a given $D$ is obtained by taking the derivative
of the r.h.s., which leads to
\begin{eqnarray}
D&=&-\sum_{x\in\calX}p(x)\cdot\frac{\partial\ln Z_x(s)}{\partial s}\nonumber\\
&=&\sum_{x\in\calX}p(x)\cdot\frac{\sum_{y\in\calY}q(y)d(x,y)e^{-sd(x,y)}}
{\sum_{y\in\calY}q(y)e^{-sd(x,y)}}.
\end{eqnarray}
This equation yields the distortion level $D$ for a given value of the
minimizing $s$ in eq.\ (\ref{lgdr}). Let us then denote
\begin{equation}
\label{ds}
D_s=\sum_{x\in\calX}p(x)\cdot\frac{\sum_{y\in\calY}q(y)d(x,y)e^{-sd(x,y)}}
{\sum_{y\in\calY}q(y)e^{-sd(x,y)}}.
\end{equation}
This notation obviously means that
\begin{equation}
\label{rds}
R_q(D_s)=-sD_s-\sum_{x\in\calX}p(x)\ln Z_x(s).
\end{equation}
Taking the derivative of (\ref{ds}), we readily obtain
\begin{eqnarray}
\frac{\mbox{d}D_s}{\mbox{d}s}&=&\sum_{x\in\calX}p(x)\frac{\partial}{\partial
s}\left[\frac{\sum_{y\in\calY}q(y)d(x,y)e^{-sd(x,y)}}
{\sum_{y\in\calY}q(y)e^{-sd(x,y)}}\right]\nonumber\\
&=&-\sum_{x\in\calX}p(x)
\left[\frac{\sum_{y\in\calY}q(y)d^2(x,y)e^{-sd(x,y)}}{\sum_{y\in\calY}q(y)e^{-sd(x,y)}}-\right.\nonumber\\
& &\left.\left(\frac{\sum_{y\in\calY}q(y)d(x,y)e^{-sd(x,y)}}
{\sum_{y\in\calY}q(y)e^{-sd(x,y)}}\right)^2\right]\nonumber\\
&=&-\sum_{x\in\calX}p(x)\cdot\mbox{Var}_s\{d(x,Y)|X=x\}\nonumber\\
&=&-\mbox{mmse}_s(\Delta|X),
\end{eqnarray}
where $\mbox{Var}_s\{d(x,Y)|X=x\}$ is the variance of $d(x,Y)$ w.r.t.\
the conditional pmf $\{w_s(y|x)\}$. The last line follows from the fact that the
expectation of $\mbox{Var}_s\{d(X,Y)|X\}$ w.r.t.\ $\{p(x)\}$ is exactly the
MMSE of $d(X,Y)$ based on $X$.
The integral forms of this equation
are then precisely as in part (a) of the theorem with the corresponding integration
constants. Finally, differentiating both sides of eq.\ (\ref{rds}), we get
\begin{eqnarray}
\frac{\mbox{d}R_q(D_s)}{\mbox{d}s}&=&-s\cdot\frac{\mbox{d}D_s}{\mbox{d}s}-D_s-
\sum_{x\in\calX}p(x)\cdot\frac{\partial\ln Z_x(s)}{\partial s}\nonumber\\
&=&-s\cdot\frac{\mbox{d}D_s}{\mbox{d}s}-D_s+D_s\nonumber\\
&=&-s\cdot\frac{\mbox{d}D_s}{\mbox{d}s}\nonumber\\
&=&s\cdot\mbox{mmse}_s(\Delta|X),
\end{eqnarray}
which when integrated back, yields part (b) of the theorem.
This completes the proof of Theorem \ref{thm1}.
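Note, finally, that eqs.\ (\ref{ds}) and (\ref{rds}) provide a convenient parametric recipe for computing $R_q(D)$ in concrete cases. As an illustration only, the following sketch assumes a hypothetical binary source and reproduction with Hamming distortion (where the uniform $q$ happens to be optimal, so the sweep traces $R(D)=\ln 2-h_2(D)$ in nats):
\begin{verbatim}
import numpy as np

p = np.array([0.5, 0.5])           # source pmf p(x)
q = np.array([0.5, 0.5])           # reproduction pmf q(y)
d = 1.0 - np.eye(2)                # Hamming distortion d(x,y)

for s in [0.5, 1.0, 2.0, 4.0]:
    W = q[None, :] * np.exp(-s * d)            # q(y)exp(-s d(x,y))
    Z = W.sum(axis=1)                          # Z_x(s)
    Ds = np.sum(p * (W * d).sum(axis=1) / Z)   # eq. (ds)
    R = -s * Ds - np.sum(p * np.log(Z))        # eq. (rds)
    print(s, Ds, R)
\end{verbatim}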
\section{Introduction}
Consider an oligopolistic market, where the data of the production cost functions and constraints
of each producer/firm are available to all his rivals. In such a case each producer can compute his
optimal non-cooperative Cournot-Nash strategy by solving the corresponding variational inequality,
see \cite{MSS} and \cite{OKZ}. It may happen, however, that in the course of time some external
parameters change, e.g., the prices of the inputs or the parameters of the inverse demand function
describing the behavior of the customers. In such a case, the strategies should be adjusted but, as
thoroughly analyzed in \cite{Fl}, each change is generally associated with some expenses, called
{\em costs of change.} Thus, given a certain {\em initial strategy profile} (productions of all
firms), we face then a different equilibrium model, in which the costs of change enter the
objectives of some (or all) producers. Since these costs are typically nonsmooth, the
respective variational inequality, describing the new non-cooperative equilibrium, becomes
substantially more complicated, both from the theoretical as well as from the numerical point of
view. One can imagine that such updates of strategies are performed repetitively. This leads to a
discrete-time evolution process, where the firms respond to changing conditions by repetitive
solution of the mentioned rather complicated variational inequality (with updated data). As
discussed in \cite[Chapter 12]{OKZ}, it may also happen that one of the producers, having an advantage over the others, takes over the role of a Leader and switches to the {\em Stackelberg
strategy}, whereas the remaining firms continue to play non-cooperatively with each other on the lower level as
Followers. In this case, our discrete-time evolution process, considered from the point of view of
the Leader, amounts to repetitive solution of {\em mathematical programs with equilibrium
constraints} (MPECs) with a nonsmooth objective and the above mentioned variational inequality
among the constraints.
Further, it is interesting to note that the above-described model has, in the case of positively homogeneous costs of change, a similar structure to
some infinite-dimensional variational systems used in continuum mechanics to model a class of {\em rate-independent processes} cf., e.g., \cite{MR} or \cite{FKV}.
The plan of the paper is as follows. In the preliminary Section 2 we collect the necessary background from variational analysis. Section 3 consists of two parts. In the first one we introduce a general parameter-dependent non-cooperative equilibrium problem which is later used for the modeling of the considered oligopolistic market. By employing standard arguments, existence of the respective solutions (equilibria) is shown. In the second part we then consider a parameter-dependent variational system which encompasses the mentioned equilibrium problem and is amenable to advanced tools of variational analysis. In this way one obtains useful stability results concerning the respective {\em solution map}. They are used in the sequel but are also of interest in their own right.
Thereafter, in Section 4, this equilibrium problem is specialized to a
form, corresponding to the oligopolistic market model from \cite{MSS}.
In this case, the solution map is even single-valued and locally Lipschitzian. In Section 5 we then consider a modification of the 5-firm example from \cite{MSS} with the aim to illustrate the role of costs of change and to describe a possible numerical approach to the computation of the respective equilibria. Whereas Section 5.1 deals with noncooperative Cournot-Nash equilibrium, Section 5.2 concerns
the situation, when one of the firms prefers to apply the Stackelberg strategy.
In both cases our main numerical tool is the so-called Gauss-Seidel method from \cite{Ka} tailored to the considered type of problems.\\
The following notation is employed. Given a vector $x \in \mathbb{R}^{n}$, $[x]$ denotes the linear subspace generated by $x$ and $[x]^{\perp}$ stands for its orthogonal complement. For a multifunction $F: \mathbb{R}^{n} \rightrightarrows \mathbb{R}^{m}$, $\mathrm{gph}\, F$ signifies the graph of $F$, $\delta_{A}$ is the indicator function of a set $A$ and $\overline{\mathbb{R}} = \mathbb{R} \cup \{+ \infty \}$ is the extended real line. $\mathbb{B}$ stands for the unit ball and, for a cone $K$, $K^{\circ}$ denotes its (negative) polar.
\section{Background from variational analysis}
Throughout the whole paper, we will make an extensive use of the following basic notions of modern variational analysis.
\begin{definition}\label{Def.1}
Let $A$ be a closed set in $\mathbb{R}^{n}$ and $\bar{x}\in A$. Then
\[
T_{A}(\bar{x}):=\mathop{{\rm Lim}\,{\rm sup}}\limits_{t \searrow 0} \frac{A-\bar{x}}{t}
\]
is the {\em tangent (contingent, Bouligand)} cone to $A$ at $\bar{x}$,
$$\widehat{N}_{A}(\bar{x}):= (T_{A}(\bar{x}))^{\circ}
$$
is the {\em regular (Fr\'{e}chet) normal cone} to $A$ at $\bar{x}$, and
$$
N_{A}(\bar{x}):= \mathop{{\rm Lim}\,{\rm sup}}\limits_{x \stackrel{A}{\rightarrow} \bar{x}} \widehat{N}_{A}(x) =
\{x^{*}\,|\,\exists\, x_{i} \stackrel{A}{\rightarrow} \bar{x},\ x^{*}_{i}\in \widehat{N}_{A}(x_{i}) \mbox{ such that } x^{*}_{i} \rightarrow x^{*}\}
$$
is the {\em limiting (Mordukhovich) normal cone} to $A$ at $\bar{x}$.
\end{definition}
In this definition "$\mathop{{\rm Lim}\,{\rm sup}}$" stands for the Painlev\'{e}-Kuratowski {\em outer set limit}.
If $A$ is convex, then $\widehat{N}_{A}(\bar{x})=N_{A}(\bar{x})$ amounts to the classical normal cone in the sense of convex analysis and we write $N_{A}(\bar{x})$.
In the sequel, we will also employ the so-called critical cone. In the setting of Definition \ref{Def.1} with a given normal $d^{*}\in \widehat{N}_{A}(\bar{x})$, the cone
\[
\mathcal{K}_{A}(\bar{x},d^{*}):= T_{A}(\bar{x})\cap [d^{*}]^{\perp}
\]
is called the {\em critical cone} to $A$ at $\bar{x}$ with respect to $d^{*}$.
The above listed cones enable us to describe the local behavior of set-valued maps via various generalized derivatives. Consider a closed-graph multifunction $F$ and the point $(\bar{x},\bar{y})\in \mathrm{gph}\, F$.
\begin{definition}\label{Def.2}
\begin{enumerate}
\item [(i)] The multifunction $DF (\bar{x},\bar{y}):\mathbb{R}^{n} \rightrightarrows \mathbb{R}^{m}$, defined by
\[
DF(\bar{x},\bar{y})(u):=\{v \in \mathbb{R}^{m} \,|\, (u,v)\in T_{\mathrm{gph}\, F}(\bar{x},\bar{y})\}, \quad u \in \mathbb{R}^{n},
\]
is called the {\em graphical derivative} of $F$ at $(\bar{x},\bar{y})$;
\item [(ii)] The multifunction $D^{*}F(\bar{x},\bar{y}): \mathbb{R}^{m} \rightrightarrows \mathbb{R}^{n}$, defined by
\[
D^{*}F(\bar{x},\bar{y})(v^{*}):=\{u^{*} \in \mathbb{R}^{n} | (u^{*},-v^{*})\in N_{\mathrm{gph}\, F}(\bar{x},\bar{y})\}, v^{*} \in \mathbb{R}^{m},
\]
is called the {\em limiting (Mordukhovich) coderivative} of $F$ at $(\bar{x},\bar{y})$.
\end{enumerate}
\end{definition}
Next we turn our attention to a proper convex, lower-semicontinuous (lsc) function $q:\mathbb{R}^{n}\rightarrow\overline{\mathbb{R}}$. Given an $\bar{x} \in \mbox{\rm dom}\, q$, by $\partial q(\bar{x})$ we denote the classical Moreau-Rockafellar {\em subdifferential} of $q$ at $\bar{x}$. In this case, for the {\em subderivative} function $dq(\bar{x}):\mathbb{R}^{n}\rightarrow\overline{\mathbb{R}}$ (\cite[Definition 8.1]{RW}) it holds that
\[
dq(\bar{x})(w)= q^{\prime}(\bar{x};w) := \lim\limits_{\tau\searrow 0}\frac{q(\bar{x}+\tau w)-q(\bar{x})}{\tau}
\mbox{ for all }~~w \in \mathbb{R}^{n}.
\]
In Section 3 we will employ also second-order subdifferentials and second-order subderivatives of $q$.
\begin{definition}\label{Def.3}
Let $\bar{v}\in \partial q(\bar{x})$. The multifunction $\partial^{2}q(\bar{x},\bar{v}):\mathbb{R}^{n} \rightrightarrows\mathbb{R}^{n}$ defined by
\[
\partial^{2}q(\bar{x},\bar{v})(v^{*}):= D^{*}\partial q(\bar{x},\bar{v})(v^{*}), ~~ v^{*}\in \mathbb{R}^{n},
\]
is called the {\em second-order subdifferential} of $q$ at $(\bar{x},\bar{v})$.
\end{definition}
If $q$ is separable, i.e., $q(x)=\sum\limits^{n}_{i=1}q_{i}(x_{i})$ with some proper convex, lsc functions $q_{i} : \mathbb{R}\rightarrow\overline{\mathbb{R}} $, then
\[
\partial^{2} q(\bar{x},\bar{v})(v^{*}) =
\left [ \begin{array}{c}
\partial^{2} q_{1}(\bar{x}_{1},\bar{v}_{1})(v^{*}_{1})\\
\vdots \\
\partial^{2} q_{n}(\bar{x}_{n},\bar{v}_{n})(v^{*}_{n})
\end{array}
\right ],
\]
where $\bar{v}_{i}, v^{*}_{i}$ are the $i$th components of the vectors $\bar{v}$ and $v^{*}$, respectively. \\
Concerning second-order subderivatives (\cite[Definition 13.3]{RW}), we confine ourselves to the case when $q$ is, in addition, {\em piecewise linear-quadratic}. This means that $\mbox{\rm dom}\, q$ can be represented as the union of finitely many polyhedral sets, relative to each of which $q(x)$ is given in the form $\frac{1}{2}\langle x,Ax \rangle + \langle a,x \rangle + \alpha $ for some scalar $\alpha \in \mathbb{R}$, vector $a \in \mathbb{R}^{n}$ and a symmetric $[n \times n]$ matrix $A$, cf. \cite[Definition 10.20]{RW}.
In this particular case it has been proved in \cite[Proposition 13.9]{RW} that, with $\bar{v}\in \partial q(\bar{x})$ and $w \in \mathbb{R}^{n}$, the second-order subderivative $d^{2}q(\bar{x} | \bar{v})$ is proper, convex and piecewise linear-quadratic, and
\begin{equation}\label{eq-102}
d^{2}q(\bar{x} | \bar{v})(w)= q^{\prime \prime} (\bar{x};w)+ \delta_{K(\bar{x},\bar{v})}(w),
\end{equation}
where
\[
q^{\prime \prime} (\bar{x};w) := \lim\limits_{\tau\searrow 0}
\frac{q(\bar{x}+\tau w)-q(\bar{x}) - \tau q^{\prime}(\bar{x};w)}
{\frac{1}{2}\tau^{2}}
\]
is the {\em one-sided second directional derivative} of $q$ at $\bar{x}$ in direction $w$
and $K(\bar{x},\bar{v}):=\{w | q^{\prime}(\bar{x};w)= \langle \bar{v},w\rangle \}$. For a general theory of second-order subderivatives (without our restrictive requirements) the interested reader is referred to \cite[Chapter 13 B]{RW}.
We conclude now this section with the definitions of two important Lipschitzian stability notions for multifunctions which will be extensively employed in the sequel.
\begin{definition}\label{Def.4}
Consider a multifunction $S:\mathbb{R}^{m} \rightrightarrows \mathbb{R}^{n}$ and a point $(\bar{u},\bar{v})\in \mathrm{gph}\, S$.
\begin{enumerate}
\item [(i)]
$S$ is said to have the {\em Aubin property} around $(\bar{u},\bar{v})$, provided there are neighborhoods $\mathcal{U}$ of $\bar{u}$, $\mathcal{V}$ of $\bar{v}$ along with a constant $\eta \geq 0$ such that
\[
S(u_{1}) \cap \mathcal{V} \subset S(u_{2})+\eta \| u_{1}-u_{2}\| \mathbb{B} \mbox{ for all } u_{1},u_{2}\in \mathcal{U}.
\]
\item [(ii)]
We say that $S$ has a
{\em single-valued and Lipschitzian localization} around $(\bar{u},\bar{v})$, provided there are neighborhoods $\mathcal{U}$ of $\bar{u}$, $\mathcal{V}$ of $\bar{v}$ and a Lipschitzian mapping $s:\mathcal{U} \rightarrow \mathbb{R}^{n}$ such that $s(\bar{u})=\bar{v}$ and
\[
S(u)\cap \mathcal{V} = \{s (u)\} \mbox{ for all } u \in \mathcal{U}.
\]
\end{enumerate}
\end{definition}
\noindent Further important stability notions can be found, e.g., in \cite{DR}.
\section{General equilibrium model: Existence and stability}
Consider a non-cooperative game of $l$ players, each of which solves the optimization problem
\begin{equation}\label{eq-1}
\begin{array}{ll}
\mbox{ minimize } & f_{i}(p,x_{i},x_{-i}) + q_{i}(x_{i})\\
\mbox{ subject to } & \\
& x_{i} \in A_{i},
\end{array}
\end{equation}
$i = 1,2,\ldots, l$. In (\ref{eq-1}), $x_{i}\in \mathbb{R}^{n}$ is the {\em strategy} of the
$i$th player,
$$x_{-i} := (x_{1}, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{l})
\in (\mathbb{R}^{n})^{l-1}$$ is the
{\em strategy profile} of the remaining players and $p \in \mathbb{R}^{m}$ is a parameter, common
for all players. Further, the functions
$$f_{i}:\mathbb{R}^{m}\times (\mathbb{R}^{n})^{l}\rightarrow
\mathbb{R} \quad
\mbox{ and }
\quad q_{i}: \mathbb{R}^{n}\rightarrow \mathbb{R}, \quad i = 1,2,\ldots, l,$$ are continuously
differentiable and convex continuous, respectively, and the sets of {\em admissible strategies}
$A_{i}, i=1,2,\ldots, l$, are closed and convex. The objective in (\ref{eq-1}) is thus the sum of a
smooth function depending on the whole strategy profile $x:=(x_{1},x_{2},\ldots, x_{l})$ and a
convex (not necessarily smooth) function depending only on $x_{i}$. Let us recall that, given a
parameter vector $\bar{p}$, the strategy profile $\bar{x}=(\bar{x}_{1}, \bar{x}_{2}, \ldots,
\bar{x}_{l})$ is a corresponding {\em Nash equilibrium} provided
\[
\bar{x}_{i} \in \mathop{\rm arg\,min}\limits_{x_{i}\in A_{i}}
\big[ f_{i}(\bar{p},x_{i},\bar{x}_{-i})+q_{i}(x_{i}) \big] \mbox{ for all } i.
\]
Denote by $S:\mathbb{R}^{m}\rightrightarrows (\mathbb{R}^{n})^{l}$ the solution mapping which assigns to
each $p$ the corresponding (possibly empty) set of Nash equilibria.
The famous Nash Theorem \cite[Theorem 12.2]{Au} yields the next statement.
\begin{theorem}\label{Thm.1}
Given $\bar{p} \in \mathbb{R}^{m}$, assume that
\begin{enumerate}
\item [(A1)]
for all admissible values of $x_{-i}$
the functions $f_{i}(\bar{p}, \cdot, x_{-i}), i=1,2,\ldots, l$, are convex, and
\item [(A2)]
sets $A_{i}, i=1,2,\ldots, l$, are bounded.
\end{enumerate}
Then $S(\bar{p})\neq \emptyset$.
\end{theorem}
Suppose from now on that (A1) holds true for all $p$ from an open set $\mathcal{B}\subset \mathbb{R}^{m}$.
Then one has that $\mathcal{B}\subset \mbox{\rm dom}\, S$ and for $p \in \mathcal{B}$
\begin{equation}\label{eq-2}
S(p)=\{x | \, 0 \in F(p,x)+Q(x)\},
\end{equation}
where
\[
\begin{split}
& F(p,x)= \left[ \begin{array}{c}
F_{1}(p,x) \\
\vdots \\
F_{l}(p,x)
\end{array}\right ]
\mbox{ with } F_{i}(p,x)=\nabla_{x_{i}}f_{i}(p,x_{i},x_{-i}), \, i=1,2,\ldots, l, \,\,\mbox{ and
}\\[2ex]
& Q(x)= \partial \tilde{q}(x) \mbox{ with } \tilde{q}(x)=\sum\limits^{l}_{i=1} \tilde{q}_{i}(x_{i}) \mbox{ and } \tilde{q}_{i}(x_{i})= q_{i}(x_{i})+\delta_{A_{i}}(x_{i}), \, i = 1,2,\ldots, l.
\end{split}
\]
This follows immediately from the fact that under the posed assumptions the solution set of
(\ref{eq-1}) is characterized by the first-order condition
\[
0 \in \nabla_{x_{i}}f_{i}(p, x_{i},x_{-i}) + \partial \tilde{q}_{i}(x_{i}), ~ i = 1,2,\ldots, l.
\]
\if{
Next we state two results concerning the local stability and sensitivity of $S$ around a given
reference point. They are not indispensable for the numerical technique developed in Section 4, but
the first one will be helpful in the ImP approach of Section 5.
Consider a parameter $\bar{p}\in \mathcal{B}$ and an $\bar{x} \in S(\bar{p})$. Further, put
$\bar{v}:= -F(\bar{p},\bar{x})$ and $\bar{v}_{i}= -F_{i}(\bar{p},\bar{x}), ~ i=1,2,\ldots, l$.
\begin{theorem}\label{Thm.1}
Assume that the {\em adjoint} GE
\begin{equation}\label{eq-3}
0 \in \sum\limits^{l}_{i=1}\nabla_{x}F_{i}(\bar{p},\bar{x})^{T}u_{i}+
\left[ \begin{array}{c}
\partial^{2}\tilde{q}_{1}(\bar{x}_{1},\bar{v}_{1})(u_{1}) \\
\vdots\\
\partial^{2}\tilde{q}_{l}(\bar{x}_{l},\bar{v}_{l})(u_{l})
\end{array}\right]
\end{equation}
in variable $u=(u_{1},u_{2}, \ldots, u_{l})\in (\mathbb{R}^{n})^{l}$ has only the trivial
solution $u=0$. Then $S$ has the Aubin property around $(\bar{p},\bar{x})$.
\end{theorem}
\proof
The statement follows immediately from \cite[Section 4.4]{M1} taking into account that, thanks to
the separability of $Q$,
\[
D^{*}Q(\bar{x},\bar{v})(u)=
\left[ \begin{array}{c}
D^{*}\partial\tilde{q}_{1}(\bar{x}_{1},\bar{v}_{1})(u_{1}) \\
\vdots\\
D^{*} \partial\tilde{q}_{l}(\bar{x}_{l},\bar{v}_{l})(u_{l})
\end{array}\right] =
\left[ \begin{array}{c}
\partial^{2}\tilde{q}_{1}(\bar{x}_{1},\bar{v}_{1})(u_{1}) \\
\vdots\\
\partial^{2}\tilde{q}_{l}(\bar{x}_{l},\bar{v}_{l})(u_{l})
\end{array}\right].
\]
\endproof
In applications one has thus to compute the second-order subdifferentials of functions
$\tilde{q}_{i}$. If $n=1$, this can be done on the basis of the results from \cite{MorOut01}.
Denote by $\Sigma$ the mapping which assigns a pair $(\xi,p)\in (\mathbb{R}^{n})^{l}\times
\mathbb{R}^{m}$ the set of solutions to the GE
\[
\xi \in F(p,x)+Q(x).
\]
Then the condition of Theorem \ref{Thm.1} is necessary and sufficient for the Aubin property of
$\Sigma$ around $(0,\bar{p},\bar{x})\in \mathrm{gph}\, \Sigma$.
\begin{theorem}\label{Thm.2}
Let the assumption of Theorem \ref{Thm.1} be fulfilled. Further assume that the functions $q_{i},
i=1,2,\ldots, l$, are piecewise linear quadratic, cf. \cite[Definition 10.20]{RW}. Then for all
directions $h \in \mathbb{R}^{m}$ one has
\begin{equation}\label{eq-4}
DS(\bar{p},\bar{x})(h)=\left\{k ~|~ 0 \in \nabla_{p}F(\bar{p},\bar{x})h + \nabla_{x}F(\bar{p},\bar{x})k
+
\left[ \begin{array}{c}
\partial \varphi_{1}(k_{1}) \\
\vdots \\
\partial \varphi_{l}(k_{l})
\end{array}\right ]
\right\},
\end{equation}
where $\varphi_{i}(k)= \frac{1}{2}d^{2}\tilde{q}_{i} (\bar{x}_{i}, \bar{v}_{i})(k), ~
i=1,2,\ldots, l$.
\end{theorem}
\proof
In terms of GE (\ref{eq-2}) the condition imposed in Theorem \ref{Thm.1} implies the fulfillment
of the standard qualification condition (cp. \cite[Theorem 6.14]{RW})
\begin{equation}\label{eq-5}
\left. \begin{split}
& 0= \nabla_{p}F(\bar{p},\bar{x})^{T}u\\ & 0 \in \nabla_{x}F(\bar{p},\bar{x})^{T}u +
D^{*}Q(\bar{x},\bar{v})(u)
\end{split}\right\}\Rightarrow u =0.
\end{equation}
Clearly,
\begin{equation}\label{eq-201}
\mathrm{gph}\, S = \left\{(p,x) \left|
\left[ \begin{split}
& x \\ - & F(p,x)
\end{split}\right ] \in \mathrm{gph}\, Q \right. \right\}
\end{equation}
and implication (\ref{eq-5}) ensures that the constraint system on the right-hand side of
(\ref{eq-201}) fulfills the Abadie constraint qualification at $(\bar{p},\bar{x})$, see
\cite[Proposition 1]{HO}. It follows that
\[
T_{\mathrm{gph}\, S}(\bar{p},\bar{x})= \left\{(h,k) \left|
\left[ \begin{split}
& k \\ - & \nabla F(\bar{p},\bar{x})(h,k)
\end{split}\right ] \in T_{\mathrm{gph}\, Q}(\bar{x}, -F(\bar{p},\bar{x}))
\right.\right\}
\]
or, equivalently,
\[
DS(\bar{p},\bar{x})(h)= \{k \,|\, 0 \in \nabla F(\bar{p},\bar{x})(h,k)+ DQ(\bar{x},\bar{v})(k)\}
\]
for all $h \in \mathbb{R}^{m}$. So it remains just to compute $DQ(\bar{x},\bar{v})(k)$. To this
aim observe first that, thanks to the assumptions imposed on $q_{i}$ and $A_{i}$, functions
$\tilde{q}_{i}$ are {\em fully amenable} \cite[Example 10.24]{RW}. This implies that they are {\em
twice epi-differentiable, prox-regular} and {\em subdifferentially continuous} \cite[Theorem 13.14
and Proposition 13.32]{RW} and so we may invoke \cite[Theorem 13.40]{RW}, according to which the
sets $\mathrm{gph}\, \partial \tilde{q}_{i}, i = 1,2,\ldots, l$, are {\em geometrically derivable}. Since
$\mathrm{gph}\, Q = \XX\limits^{l}_{i=1} \mathrm{gph}\, \partial \tilde{q}_{i}$, the inclusion in \cite[Proposition
6.41]{RW} becomes equality
\[
T_{\mathrm{gph}\, Q}(\bar{x},\bar{v})= \XX\limits^{l}_{i=1}T_{\mathrm{gph}\,}\partial \tilde{q}_{i}
(\bar{x}_{i},\bar{v}_{i}),
\]
which implies in turn that
\begin{equation}\label{eq-14}
DQ(\bar{x},\bar{v})(k)=
\left[ \begin{array}{c}
D\partial\tilde{q}_{1}(\bar{x}_{1},\bar{v}_{1})(k_{1}) \\
\vdots\\
D \partial\tilde{q}_{l}(\bar{x}_{l},\bar{v}_{l})(k_{l})
\end{array}\right], ~ k \in \mathbb{R}^{s}.
\end{equation}
Next we recall \cite[Theorem 13.40]{RW} once more and conclude that the graphical derivatives on
the right-hand side of (\ref{eq-14}) can be expressed in terms of second subderivatives via the
relations
\begin{equation}\label{eq-15}
D \partial\tilde{q}_{i}(\bar{x}_{i},\bar{v}_{i}) = \partial \varphi_{i} \mbox{ for } \varphi_{i} =
\frac{1}{2}d^{2}\tilde{q}_{i} (\bar{x}_{i}, \bar{v}_{i}), ~ i=1,2,\ldots, l.
\end{equation}
The proof is complete.
\endproof
The satisfaction of the assumption of Theorem \ref{Thm.1} can be sometimes ensured in a very simple
way.
\begin{corollary}\label{Cor.1}
Assume in the setting of Theorem \ref{Thm.1} that the $[nl \times nl]$ matrix
$\nabla_{x}F(\bar{p},\bar{x})$ is positive definite. Then $S$ has the Aubin property around
$(\bar{p},\bar{x})$. Moreover, if $q_{i}$ are piecewise linear-quadratic, $i=1,2,\ldots, l$, then
$\bar{x}$ is locally unique in $S(\bar{p})$.
\end{corollary}
\proof
Let us premultiply GE (\ref{eq-3}) from the left-hand side by $u=(u_{1}, u_{2}, \ldots, u_{l})$
which yields the relation
\begin{equation}\label{eq-16}
0 \in \langle \nabla_{x}F(\bar{p},\bar{x})u,u \rangle + \sum\limits^{l}_{i=1}
\langle u_{i}, \partial^{2}\tilde{q}_{i}(\bar{x}_{i},\bar{v}_{i}) (u_{i})\rangle .
\end{equation}
The second term on the right-hand side of (\ref{eq-16}) amounts to the product $\langle u,D^{*}
\partial\tilde{q}(\bar{x},\bar{v}) u\rangle $, where $\tilde{q}(x)= \sum\limits^{l}_{i=1}
\tilde{q}_{i}
(x_{i})$. Since $\tilde{q}$ is convex by the assumptions, the mapping $\partial\tilde{q}:
\mathbb{R}^{nl} \rightrightarrows \mathbb{R}^{nl}$ is maximal monotone \cite[Theorem 12.17]{RW}. We
can thus invoke \cite[Theorem 2.1]{PR} according to which this term is nonnegative. It follows that
(\ref{eq-16}) possesses only the trivial solution $u=0$, which proves the first assertion. To prove
the second one, observe that, thanks to \cite[Theorem 13.57]{RW}, one has
\[
DQ(\bar{x},\bar{v})(u)\subset D^{*}Q(\bar{x},\bar{v})(u)
\]
for all $u \in (\mathbb{R}^{n})^{l}$. This implies by the same argument as above that the
graphical derivative
\[
DS(\bar{p},\bar{x})(0)= \{k | 0 \in \nabla_{x}F(\bar{p},\bar{x})k+DQ(\bar{x},\bar{v})(k)\}
\]
has only the trivial solution $k = 0$. From the Levy-Rockafellar criterion \cite[Theorem
4C.1]{DR} it follows now that $S$ possesses also the isolated calmness property at
$(\bar{p},\bar{x})$ which entails the local uniqueness of $\bar{x}$ in $S(\bar{p})$. The statement
is proved.
\endproof
Let us illustrate the results of Theorem \ref{Thm.1} and Corollary \ref{Cor.1} via a simple
academic example.
\begin{example}
Put $s=2, n=l=1$ and consider the GE
\[
0 \in p_{1} + p_{2}x + \partial \tilde{q}(x),
\]
where $\tilde{q}(x)= |x| + \delta_{A}(x)$ with $A = [0,1]$. Consider the reference pair
$((\bar{p}_{1}, \bar{p}_{2}),\bar{x})= ((-1,1),0)$ so that $\bar{v}=1$. Since
$\nabla_{x}F(\bar{p},\bar{x})=\bar{p}_{2}=1$, Corollary \ref{Cor.1} can be applied and we conclude
that $S$ has the Aubin property around $(\bar{p},\bar{x})$. Moreover, the absolute value function
is piecewise linear-quadratic, and so $\bar{x}$ is locally unique in $S(\bar{p})$. To compute
$DS(\bar{p},\bar{x})$ we recall \cite[Definition 13.3]{RW} according to which
\[
\varphi(k):=d^{2}q(\bar{x},\bar{v})(k)= \delta_{\mathbb{R}_{+}}(k),
\]
and thus
\begin{equation}\label{eq-17}
DS(\bar{p},\bar{x})(h_{1},h_{2})=\{k | 0 \in h_{1}+k+ N_{\mathbb{R}_{+}}(k)\}.
\end{equation}
The GE in (\ref{eq-17}) amounts to the complementarity problem
\[
k \geq 0, ~ h_{1}+k \geq 0, ~ \langle h_{1}+k,k\rangle =0
\]
and we may conclude that
\[
DS(\bar{p},\bar{x})(h_{1},h_{2})=
\left \{ \begin{array}{cl}
- h_{1} & \mbox{ if } h_{1}< 0\\ 0 & \mbox{ otherwise },
\end{array}\right.
\]
see Fig.\ \ref{mesh}.\\
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figure_S}
\includegraphics[width=0.48\textwidth]{figure_DS}
\caption{The set of equilibria $S(p_1, p_2)$ (left) and the graphical derivative $DS(h_1,h_2)$ evaluated at the point $p_1=-1, p_2=1$ of Example 1.}
\label{mesh}
\end{figure}
\end{example}
On the basis of Corollary \ref{Cor.1} we can now prove the following non-local result.
\begin{theorem}\label{Thm.3}
Assume that (A1) is fulfilled for all $p$ from some open set $\mathcal{B}$, (A2) holds true and
$\nabla_{x}F(p,x)$ is positive definite over $\mathcal{B} \times \XX\limits^{l}_{i=1} A_{i}$. Then
$S$ is single-valued and locally Lipschitz on $\mathcal{B}$.
\end{theorem}
\proof
As already mentioned, under the posed assumptions, $S$ is nonempty-valued on $\mathcal{B}$. Next
we observe that, by virtue of \cite[Theorem 3.43]{OR}, for each $p \in \mathcal{B}$ the operator
$F(p,\cdot)$ is strictly monotone over $\XX\limits^{l}_{i=1} A_{i}$. Since the multifunction $Q$
amounts to the subdifferential of the proper, convex function $\tilde{q}_{1}(x_{1})+ \ldots +
\tilde{q}_{l}(x_{l}), Q$ is monotone (\cite[Theorem 12.17]{RW}). It follows by \cite[Exercise
12.4c]{RW} that for each $p \in \mathcal{B}$ the multifunction
\[
G(p, \cdot):=F(p, \cdot) + Q(\cdot)
\]
is strictly monotone over $\XX\limits^{l}_{i=1} A_{i}$. This enables us to invoke a complement
to
\cite[Theorem 12.51]{RW} (mentioned at the end of Section 12G) and state that $S$ is in fact
single-valued on $\mathcal{B}$.
From Corollary \ref{Cor.1} we know that $S$ has the Aubin property around all points $(p,x)\in
\mathrm{gph}\, S \cap (\mathcal{B} \times (\mathbb{R}^{n})^{l})$. Since this property amounts to the local
Lipschitz continuity for single-valued maps, the statement has been established.
\endproof
\fi{
\begin{remark}\label{Rem.1}
The GE in (\ref{eq-2}) can be equivalently written down in the form:
For a given $\bar{p}$ find $\bar{x}$ such that
\[
\langle F(\bar{p},\bar{x}), x - \bar{x}\rangle + \tilde{q}(x)-\tilde{q}(\bar{x})\geq 0 ~ {\rm for ~all} ~ x.
\]
Our equilibrium is thus governed by a variational inequality (VI) of the second kind, cf. \cite{KS}, \cite[page 96]{FP}.
\end{remark}
Next we will concentrate on the (local) analysis of $S$ (given by (\ref{eq-2})) under the less restrictive assumptions that, with $s:= ln$,
\begin{description}
\item [(i)]
$F:\mathbb{R}^{m}\times\mathbb{R}^{s}\rightarrow\mathbb{R}^{s}$ is continuously differentiable, and
\item [(ii)]
$Q(\cdot)= \partial \tilde{q} (\cdot)$ for a proper convex, lsc function $\tilde{q}:\mathbb{R}^{s} \rightarrow \overline{\mathbb{R}}$.
\end{description}
In this way, the obtained results will be applicable not only to the equilibrium problem stated above, but to a broader class of parametrized VIs of the second kind. Note that stability of the {\em generalized equation} (GE)
\begin{equation}\label{eq-50}
0 \in F(p,x)+\partial \tilde{q}(x)
\end{equation}
has been investigated, among other works, in \cite[Chapter 13]{RW} even without any convexity assumptions imposed on $\tilde{q}$. As proved in \cite[Theorem 13.48]{RW}, $S$ has the Aubin property around $(\bar{p},\bar{x}) \in \mathrm{gph}\, S$ provided the {\em adjoint} GE
\begin{equation}\label{eq-51}
0 \in \nabla_{x}F(\bar{p},\bar{x})^{T}u + \partial^{2}\tilde{q}(\bar{x},-F(\bar{p},\bar{x}))(u)
\end{equation}
in variable $u \in \mathbb{R}^{s}$ has only the trivial solution $u = 0$.
This condition is automatically fulfilled provided $\nabla_{x}F(\bar{p},\bar{x})$ is positive definite. Indeed, due to the assumptions imposed on $\tilde{q}$, the mapping $\partial \tilde{q}$ is maximal monotone \cite[Theorem 12.17]{RW}. We can thus invoke \cite[Theorem 2.1]{PR}, according to which
\[
\langle u,v \rangle \geq 0~ \mbox{ for all } ~v \in \partial^{2}\tilde{q} (\bar{x},-F(\bar{p},\bar{x}))(u).
\]
Thus, any solution $u$ of (\ref{eq-51}) satisfies
\[
\langle u,\nabla_{x}F(\bar{p},\bar{x})^{T}u\rangle \leq 0,
\]
which, by the positive definiteness of $\nabla_{x}F(\bar{p},\bar{x})$, implies $u=0$.
Let us now derive conditions ensuring the existence of a single-valued and Lipschitzian localization of $S$ around $(\bar{p},\bar{x})$. To this purpose we employ \cite[Theorem 3G.4]{DR}, according to which this property of $S$ is implied by the existence of a single-valued and Lipschitzian localization of the associated partially linearized mapping $\Sigma : \mathbb{R}^{s}\rightrightarrows \mathbb{R}^{s}$ defined by
\begin{equation}\label{eq-52}
\Sigma(w):=\left \{ x | w \in F(\bar{p},\bar{x})+\nabla_{x}F(\bar{p},\bar{x})(x-\bar{x})+\partial \tilde{q}(x)\right \}
\end{equation}
around $(0,\bar{x})$. This implication leads immediately to the next statement.
\begin{proposition}\label{Prop.1}
Assume that $\nabla_{x}F(\bar{p},\bar{x})$ is positive definite. Then $S$ has a single-valued and Lipschitzian localization around $(\bar{p},\bar{x})$.
\end{proposition}
\proof
Observe first that, by \cite[Theorem 13.48]{RW}, $\Sigma$ has the Aubin property around $(0,\bar{x})$ if and only if $(\ref{eq-51})$ has only the trivial solution $u=0$ which, in turn, is ensured by the positive definiteness of $\nabla_{x}F(\bar{p},\bar{x})$. So the assertion follows from \cite[Theorem 3G.5]{DR} provided the mapping
\[
\Phi(x):= F(\bar{p},\bar{x}) + \nabla_{x}F(\bar{p},\bar{x})(x-\bar{x})+\partial \tilde{q}(x)
\]
is {\em locally monotone} at $(\bar{x},0)$, i.e., for some neighborhood $\mathcal{U}$ of $(\bar{x},-F(\bar{p},\bar{x}))$, one has
\[
\langle x^{\prime}-x, \nabla_{x}F(\bar{p},\bar{x})(x^{\prime}-x) \rangle +
\langle x^{\prime}-x, y^{\prime}-y\rangle \geq 0 ~ \forall ~
(x,y),(x^{\prime}, y^{\prime})\in \mathrm{gph}\, \partial \tilde{q} \cap \mathcal{U}.
\]
This holds trivially due to the posed assumptions and we are done.
\hfill$\square$
\endproof
In some situations the assumption of positive definiteness of $\nabla_{x}F(\bar{p},\bar{x})$ can be weakened.
\begin{proposition}\label{Prop.2}
Assume that $\tilde{q}$ is convex, piecewise linear-quadratic and the mapping
\begin{equation}\label{eq-53}
\Xi(w):=\left \{ k \in \mathbb{R}^{s} \,|\, w \in \nabla_{x}F(\bar{p},\bar{x}) k + \partial \varphi (k)\right \}
\end{equation}
with $\varphi(k):=\frac{1}{2} d^{2}\tilde{q} (\bar{x}| -F(\bar{p},\bar{x}))(k)$ is single-valued on $\mathbb{R}^{s}$. Then $S$ has a single-valued and Lipschitzian localization around $(\bar{p},\bar{x})$.
\end{proposition}
\proof
Again, by virtue of \cite[Theorem 3G.4]{DR} it suffices to
show that the single-valuedness of $\Xi$ implies the existence of a single-valued and Lipschitzian localization of $\Sigma$ around $(0,\bar{x})$. Clearly,
\[
\mathrm{gph}\, \Sigma = \left \{ (w,x) \left | \left [ \begin{split}
& x - \bar{x}\\
& w - \nabla_{x}F(\bar{p},\bar{x})(x-\bar{x})
\end{split}\right ]
\in \mathrm{gph}\, \partial \tilde{q} -\left [ \begin{split}
& \bar{x}\\
& - F(\bar{p},\bar{x})
\end{split}\right ] \right. \right \}
\]
so that $\Sigma$ is a polyhedral multifunction due to our assumptions imposed on $\tilde{q}$, cf. \cite[Theorem 12.30]{RW}. It follows from \cite{Rob76} (see also \cite[Cor.2.5]{OKZ}) that due to the polyhedrality of $\Sigma$, it suffices to ensure the single-valuedness of $\Sigma(\cdot) \cap \mathcal{V}$ on $\mathcal{U}$, where $\mathcal{U}$ is a convex neighborhood of $0 \in \mathbb{R}^{s}$ and $\mathcal{V}$ is a neighborhood of $\bar{x}$. Let us select these neighborhoods in such a way that
\[
\mathrm{gph}\,\partial \tilde{q} -
\left [ \begin{split}
& \qquad \bar{x}\\
& - F(\bar{p},\bar{x})
\end{split}\right ] = T_{\mathrm{gph}\, \partial \tilde{q}} (\bar{x}, - F(\bar{p},\bar{x})),
\]
which is possible due to the polyhedrality of $\partial \tilde{q}$. Then one has
\[
\mathrm{gph}\, \Sigma \cap (\mathcal{U} \times \mathcal{V})=\{(w,\bar{x} +k)\in \mathcal{U} \times \mathcal{V} | w \in \nabla_{x}F(\bar{p},\bar{x})k+D\partial \tilde{q}(\bar{x},- F(\bar{p},\bar{x}))(k)\}.
\]
Under the posed assumptions, for any $k \in \mathbb{R}^{s}$,
\[
D\partial \tilde{q}(\bar{x},- F(\bar{p},\bar{x}))(k) = \partial \varphi(k),
\]
cf. \cite[Theorem 13.40]{RW}, so that
$\mathrm{gph}\, \Sigma \cap (\mathcal{U} \times \mathcal{V})=\{(w,k+\bar{x})\in \mathcal{U} \times \mathcal{V} | (w,k)\in \mathrm{gph}\, \Xi \}$. Since $D \partial \tilde{q}(\bar{x},- F(\bar{p},\bar{x}))(\cdot)$ is positively homogeneous, $\partial \varphi(\cdot)$ is positively homogeneous as well and so the single-valuedness of
$\Sigma(\cdot)\cap \mathcal{V}$ on $\mathcal{U}$ amounts exactly to the single-valuedness of $\Xi$ on $\mathbb{R}^{s}$.
\hfill$\square$
\endproof
Note that, by virtue of \cite[Theorem 4.4]{GO3}, in the setting of Proposition \ref{Prop.2} one has
\begin{equation}\label{eq-300}
DS(\bar{p},\bar{x})(h)= \{k | 0 \in \nabla_{p}F(\bar{p},\bar{x})h + \nabla_{x}F(\bar{p},\bar{x})k+\partial \varphi (k)\}
\end{equation}
for all $h \in \mathbb{R}^{m}$.
If $\tilde{q} = \delta_{A}$ for a convex polyhedral set $A$, then $\partial \tilde{q}(x)=N_{A}(x)$ and, by virtue of (\ref{eq-102}), $\varphi(k)=\delta_{\mathcal{K}_{A}(\bar{x},-F(\bar{p},\bar{x}))}(k)$. It follows that the GE in (\ref{eq-53}) attains the form
\[
w \in \nabla_{x}F(\bar{p},\bar{x})k + N_{\mathcal{K}_{A}(\bar{x},-F(\bar{p},\bar{x}))}(k).
\]
This is in agreement with \cite[Theorem 5.3]{OKZ} and \cite[Theorem 4F.1]{DR}. Let us illustrate the general case of Proposition \ref{Prop.2} via a simple academic example.\\
\begin{example}
Put $m=2, l=1, n=1$ and consider the GE (\ref{eq-50}), where $F(p,x)=p_{1}+p_{2} x, \tilde{q}(x)=
|x| + \delta_{A}(x)$ with $A=[0,1]$ and the reference pair $(\bar{p},\bar{x})=((-1,1),0)$. Clearly, $\bar{v}:= -F(\bar{p},\bar{x}) = 1$, and we may again employ formula (\ref{eq-102}). One has
$K(\bar{x},\bar{v})=\mathbb{R}_{+}$, $\tilde{q}^{\prime \prime} (\bar{x};w)=0$ for any $w\in \mathbb{R}_{+}$, and so we obtain that
\[
\varphi(k)=\frac{1}{2} d^{2} \tilde{q}(\bar{x}|- F(\bar{p},\bar{x}))(k)=\delta_{\mathbb{R}_{+}}(k).
\]
It follows that
\begin{multline}
\Xi(w)=\{k|w\in k+\partial\delta_{\mathbb{R}_{+}}(k)\}=\{k|w \in k+N_{\mathbb{R}_{+}}(k)\}=
\{k \geq 0|k-w\geq 0, \langle k,k-w\rangle =0\}.
\end{multline}
Since $\Xi$ is clearly single-valued on $\mathbb{R}$, we may conclude that the respective $S$ has indeed the single-valued and Lipschitzian localization around $(\bar{p},\bar{x})$.
By virtue of (\ref{eq-300})
\[
DS (\bar{p},\bar{x})(h)=\{k | 0 \in h_{1} + k + N_{\mathbb{R}_{+}}(k)\}.
\]
Both mappings $S$ and $DS (\bar{p},\bar{x})$ are depicted in Fig.\ \ref{mesh}.
\end{example}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figure_S_and_DS}
\caption{The set of equilibria $S(p_1, p_2)$ (left) and the graphical derivative $DS(\bar{p},\bar{x})(h_1,h_2)$ (right) of Example 1.}
\label{mesh}
\end{figure}
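The single-valued and Lipschitzian behavior in this example can also be observed computationally. Since, for $p_{2}>0$, $F(p,\cdot)$ is the gradient of the strongly convex quadratic $\frac{1}{2}p_{2}x^{2}+p_{1}x$, the GE amounts to minimizing $\frac{1}{2}p_{2}x^{2}+p_{1}x+|x|$ over $[0,1]$; a minimal sketch:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def S(p1, p2):
    # unique solution of 0 in p1 + p2*x + d(|x| + delta_[0,1])(x)
    pot = lambda x: 0.5 * p2 * x**2 + p1 * x + abs(x)
    return minimize_scalar(pot, bounds=(0.0, 1.0), method='bounded').x

for p1 in [-1.2, -1.1, -1.0, -0.9]:
    print(p1, S(p1, 1.0), max(0.0, min(1.0, -(p1 + 1.0))))
\end{verbatim}
The last column is the closed form $\max\{0,\min\{1,-(p_{1}+1)/p_{2}\}\}$ (here $p_{2}=1$), which makes the kink at $p_{1}=-1$ and the local Lipschitz continuity apparent.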
In some cases one can apply the following criterion based on Proposition \ref{Prop.2} and \cite[Theorem 13.9]{RW}.
\begin{proposition}\label{Prop.3}
Assume that $\tilde{q}$ is convex, piecewise linear-quadratic, $ \tilde{q}^{\prime\prime}(\bar{x};\cdot)$ is convex and, with
\[
K:= \{k \,|\, \tilde{q}^{\prime}(\bar{x}; k)=\langle -F(\bar{p},\bar{x}), k\rangle \},
\]
one has
\[
\partial \frac{1}{2}d^{2}\tilde{q}(\bar{x}| -F(\bar{p},\bar{x}))(k)= \partial \frac{1}{2} \tilde{q}^{\prime\prime}(\bar{x};k)+N_{K}(k).
\]
Further suppose that the matrix $\nabla_{x}F(\bar{p},\bar{x})$ is copositive with respect to $K$.
Then $S$ has a single-valued and Lipschitzian localization around $(\bar{p},\bar{x})$.
\end{proposition}
\proof
We show that the assumptions of Proposition \ref{Prop.2} are fulfilled. To this aim consider
the quantity
\[
V:=\langle \nabla_{x} F(\bar{p},\bar{x})(k_{1}-k_{2}), k_{1}-k_{2}\rangle + \langle \xi_{1}-\xi_{2}, k_{1}-k_{2} \rangle + \langle \eta_{1}-\eta_{2}, k_{1}-k_{2}\rangle
\]
with $\xi_{1}\in \partial \frac{1}{2} \tilde{q}^{\prime \prime}(\bar{x}; k_{1}), \xi_{2}\in \partial \frac{1}{2} \tilde{q}^{\prime \prime}(\bar{x}; k_{2}), \eta_{1} \in N_{K}(k_{1}) \mbox{ and } \eta_{2} \in N_{K}(k_{2})$.
It follows that $k_{1}, k_{2} \in K$ so that, by the imposed assumptions, there is a $\sigma > 0$ such that $V \geq\sigma \| k_{1}-k_{2} \|^{2}$. Hence the operator on the right-hand side of
\[
\xi \in \nabla_{x}F(\bar{p},\bar{x})k + \partial \frac{1}{2} \left[\tilde{q}^{\prime \prime}(\bar{x}; k)\right]+N_{K}(k)
\]
is strongly monotone. This implies, by virtue of \cite[Proposition 12.54]{RW}, the single-valuedness of the respective mapping $\Xi$ and we are done. \hfill$\square$
\endproof
\section{Optimal strategies of producers}
As stated in the Introduction, our motivation for a study of mapping (\ref{eq-2}) came from an
attempt to optimize the production strategies of firms with respect to changing external
parameters like input prices, parameters of inverse demand functions etc. These parameters evolve
in time and the corresponding adjustments of production strategies have (at least at some
producers) to take into account the already mentioned costs of change. The appropriate variant of
the GE in (\ref{eq-2}) (depending on the considered type of market) has thus to be solved at each
time step with the updated values of the parameters. In this section we will analyze from this
point of view a standard oligopolistic market described thoroughly in \cite{MSS} and in \cite[Chapter
12]{OKZ}. So, in the framework (\ref{eq-1}) we will assume that $n$ is the number of produced homogeneous commodities, $p=(p_{1},p_{2})\in \mathbb{R}^{m_{1}} \times
\mathbb{R}^{m_{2}}, m_{1}+m_{2}=m$ and
\begin{equation}\label{eq-100}
f_{i} (p,x_{i}, x_{-i})= c_{i}(p_{1},x_{i}) - \langle x_{i}, \pi (p_{2},T)\rangle
\end{equation}
with $T=\sum\limits^{l}_{i=1} x_{i}$.
Functions $c_{i}: \mathbb{R}^{m_{1}} \times \mathbb{R}^{n} \rightarrow
\mathbb{R}$
represent the {\em production costs} of the $i$th producer and $\pi: \mathbb{R}^{m_{2}} \times \mathbb{R}^{n}
\rightarrow \mathbb{R}$ is the {\em inverse demand function} which assigns to each value of the
parameter $p_{2}$ and each overall production vector $T$ the price at which the (price-taking) consumers
are willing to demand the quantity $T$. Additionally, we assume that, with some non-negative reals $\beta_{i}$,
\begin{equation}\label{eq-101}
q_{i}(x_{i})= \beta_{i}\|x_{i}-a_{i} \|, \quad i=1,2,\ldots, l,
\end{equation}
where $||\cdot ||$ stands for an arbitrary norm in $ \mathbb{R}^{n} $.
Sets $A_{i} \subset \mathbb{R}^{n}$ specify the {\em sets of feasible productions} and functions $q_{i}$ represent the
costs of change associated with the change of production from a given vector $a_{i}$ to $x_{i}$.
Thus
$$a_{i}\in A_{i}, \quad i=1,2,\ldots, l,$$ are ``previous'' productions which have to be changed taking into account the ``new'' values of the parameters $p_{1},p_{2}$.
Clearly, one could also work with more complicated functions $q_i$.
Let us denote the total costs (negative profits) of the single firms by
\[
J_{i}(p,x_{i}, x_{-i}):= f_{i}(p,x_{i}, x_{-i})+ q_{i} (x_{i}), \quad i=1,2,\ldots,l.
\]
In accordance with \cite{MSS} and \cite{OKZ} we will now assume for brevity that $n=1$ (so that $s=l$) and impose
the following assumptions:
\begin{enumerate}
\item [(S1)]
$\exists$ an open set $\mathcal{B}_{1}\subset \mathbb{R}^{m_{1}} $ and open sets
$\mathcal{D}_{i} \supset A_{i}$ such that for $i=1,2,\ldots, l$
\begin{itemize}
\item
$c_{i}$ are twice continuously differentiable on $\mathcal{B}_{1} \times
\mathcal{D}_{i}$;
\item
$c_{i}(p_{1}, \cdot)$ are convex for all $p_{1} \in \mathcal{B}_{1}$.
\end{itemize}
\item [(S2)]
$\exists$ an open set $\mathcal{B}_{2}\subset \mathbb{R}^{m_{2}}$ such that
\begin{itemize}
\item
$\pi$ is twice continuously differentiable on $\mathcal{B}_{2} \times {\rm int\,}
\mathbb{R}_{+}$;
\item
$\vartheta \pi (p_{2},\vartheta)$ is a concave function of $\vartheta$ for all $p_{2}\in
\mathcal{B}_{2}$.
\end{itemize}
\item [(S3)]
Sets $A_{i}\subset \mathbb{R}_{+}$ are closed bounded intervals and at least one of them
belongs to ${\rm int\,} \mathbb{R}_{+}$.
\end{enumerate}
Note that thanks to (S3) one has that $T > 0$ for any feasible production profile
$$(x_{1},x_{2},\ldots, x_{l})\in \mathbb{R}^{l} $$ and hence the second term in (\ref{eq-100})
(representing the revenue) is well-defined.
By virtue of \cite[Lemmas 12.1 and 12.2]{OKZ} we conclude that, with $F_{i}$ and $q_{i}$
given by (\ref{eq-100}) and (\ref{eq-101}), respectively, and $\mathcal{B} = \mathcal{B}_{1}
\times \mathcal{B}_{2}$, the assumptions of Proposition \ref{Prop.1} are fulfilled. This means in
particular that
for all vectors $(a_{1},a_{2},\ldots, a_{l})\in A_{1} \times A_{2} \times \ldots \times A_{l}$ the respective mapping $S:(p_{1},p_{2})\mapsto x$ has a single-valued and Lipschitzian localization around any triple $(p_{1},p_{2},x)$, where $(p_{1},p_{2})\in \mathcal{B}_{1}
\times \mathcal{B}_{2}$ and $x \in S(p_{1},p_{2})$. Under the posed assumptions, however, a stronger statement can be established.
\begin{theorem}\label{Thm.2}
Let $a \in A_{1}\times A_{2}\times \ldots \times A_{l}$.
Under the posed assumptions (S1)-(S3) the solution mapping $S$ is single-valued and Lipschitzian over $\mathcal{B}_{1}
\times \mathcal{B}_{2}$.
\end{theorem}
\proof
Given the vectors $a_{i}, i=1,2,\ldots, l$, and the parameters $p_{1}, p_{2}$, the GE in (\ref{eq-2}) attains the form
\begin{equation}
\begin{split}
\label{eq-200}
0 \in
\left[ \begin{array}{c}
\nabla_{x_{1}}c_{1}(p_{1},x_{1})-x_{1}\nabla_{x_{1}}\pi(p_{2},T)-\pi(p_{2},T) \\
\vdots\\
\nabla_{x_{l}}c_{l}(p_{1},x_{l})-x_{l}\nabla_{x_{l}}\pi(p_{2},T)-\pi(p_{2},T)
\end{array}\right]
+ \left[ \begin{array}{c}
\Lambda_{1}(x_{1}-a_{1}) \\
\vdots\\
\Lambda_{l}(x_{l}-a_{l})
\end{array}\right]
\\ \\
+ ~~ N_{A_{1}}(x_{1})\times \ldots \times N_{A_{l}}(x_{l}),
\end{split}
\end{equation}
where
\[
\Lambda_{i} (x_{i}-a_{i})= \left\{
\begin{array}{cl}
\beta_{i} & \mbox{ if } x_{i} > a_{i},\\
-\beta_{i} & \mbox{ if } x_{i} < a_{i},\\
{[-\beta_{i},\beta_{i}]} & \mbox{ otherwise. }
\end{array}
\right.
\]
From \cite[Lemma 12.2]{OKZ} and \cite[Proposition 12.3]{RW} it follows that for any $(p_{1},p_{2}) \in \mathcal{B}_{1}
\times \mathcal{B}_{2}$ the first operator on the right-hand side of (\ref{eq-200}) is strictly monotone in variable $x$. Moreover, the second one, as the subdifferential of a proper convex function, is monotone (\cite[Theorem 12.17]{RW}). Their sum is strictly monotone by virtue of \cite[Exercise 12.4(c)]{RW} and so we may recall \cite[Example 12.48]{RW} according to which $S(p_{1}, p_{2})$ can have no more than one element for any $(p_{1},p_{2}) \in \mathcal{B}_{1}
\times \mathcal{B}_{2}$. This, combined with Theorem \ref{Thm.1} and the Lipschitzian stability of $S$ mentioned above, proves the result.
\hfill$\square$
\endproof
In the next section we will be dealing with the mapping $Z:\mathbb{R}\rightrightarrows \mathbb{R}^{l-1}$ which, for given fixed values of $a, p_{1}$ and $p_{2}$, assigns to each $x_{1} \in A_{1}$ a solution $(x_{2}, \ldots, x_{l})$ of the GE
\begin{equation}\label{eq-400}
\begin{split}
0 \in
\left[ \begin{array}{c}
\nabla_{x_{2}}c_{2}(p_{1},x_{2})-\langle x_{2}\nabla_{x_{2}}\pi(p_{2},T)\rangle -\pi(p_{2},T) \\
\vdots\\
\nabla_{x_{l}}c_{l}(p_{1},x_{l})-\langle x_{l}\nabla_{x_{l}}\pi(p_{2},T)\rangle -\pi(p_{2},T)
\end{array}\right]
+ \left[ \begin{array}{c}
\Lambda_{2}(x_{2}-a_{2}) \\
\vdots\\
\Lambda_{l}(x_{l}-a_{l})
\end{array}\right]\\
\\
+ ~~ N_{A_{2}}(x_{2})\times \ldots \times N_{A_{l}}(x_{l}).
\end{split}
\end{equation}
Variable $x_{1}$ enters GE (\ref{eq-400}) via $T(=\sum\limits^{l}_{i=1} x_{i})$. Using the same arguments as in the proof of Theorem \ref{Thm.2}, we obtain the following result.
\begin{theorem}\label{Thm.3}
Let $a_{i}\in A_{i}$ for $i=1,2,\ldots, l$, $p_{1}\in \mathcal{B}_{1}$ and $p_{2}\in \mathcal{B}_{2}$. Then, under the assumptions of Theorem \ref{Thm.2}, the mapping $Z$ is single-valued and Lipschitzian over $A_{1}$.
\end{theorem}
This statement enables us to consider the situation when the first producer decides to replace the non-cooperative strategy by the Stackelberg strategy, cf. \cite[page 220]{OKZ}. In this case, to maximize his profit, he has, for given values of $a, p_{1}$ and $p_{2}$, to solve the MPEC
\begin{equation}\label{eq-401}
\begin{array}{cl}
\mbox{ minimize } & c_{1}(p_{1},x_{1})-\langle x_{1}, \pi(p_{2},T)\rangle + \beta_{1} | x_{1}-a_{1} |\\
x_{1} & \\
\mbox{ subject to } & \\
& x_{-1} = Z(x_{1})\\
& \quad x_{1} \in A_{1}.
\end{array}
\end{equation}
Thanks to Theorem \ref{Thm.3} problem (\ref{eq-401}) can be replaced by the (nonsmooth) minimization problem
\begin{equation}\label{eq-402}
\begin{array}{ll}
\mbox{ minimize } & \Theta (x_{1})\\
\mbox{ subject to } & \\
& x_{1} \in A_{1}
\end{array}
\end{equation}
in variable $x_{1}$. In \eqref{eq-402}, $\Theta: \mathbb{R}\rightarrow\mathbb{R}$ is the composition defined by
\begin{equation}\label{eq-403}
\Theta (x_{1}) = c_{1}(p_{1},x_{1})-\langle x_{1}, \pi(p_{2},x_{1}+\mathcal{L}(Z(x_{1})))
\rangle + \beta_{1} | x_{1}-a_{1} |,
\end{equation}
where the mapping $\mathcal{L}:\mathbb{R}^{l-1} \rightarrow\mathbb{R}$ is defined by
\[
\mathcal{L}(x_{2}, x_{3}, \ldots, x_{l}) = \sum\limits^{l}_{i=2} x_{i}.
\]
Problem (\ref{eq-402}) is thus a minimization of a locally Lipschitzian function to which various numerical approaches can be applied.
\section{Numerical experiments}
We consider an example from \cite[Section 12.1]{OKZ} enhanced by a nonsmooth term reflecting the cost of change.
We have five firms (i.e., $l=5$) supplying production quantities (productions)
$$ x_1, x_2, \dots, x_5 $$
of one (i.e., $n=1$) homogeneous commodity to a common market with the inverse demand function
$$ \pi(\gamma, T)=5000^{1/ \gamma} T^{-1/ \gamma}, $$
where $\gamma$ is a positive parameter termed {\em demand elasticity}.
In our tests, however, this parameter will be fixed ($\gamma =1 $). The resulting inverse demand function is depicted in Figure \ref{fig_p_and_c} (left).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figure_p_and_c_1}
\caption{The inverse demand function $\pi=\pi(\gamma, T)$ with $\gamma = 1$ (left) and production cost functions $c_i=c_i(x_i), i=1,\dots,5$, for $t=1$ (right).}
\label{fig_p_and_c}
\end{figure}
The production cost functions have the form
$$c_i(b_{i},x_i)=b_i x_i + \frac{\delta_i}{\delta_i+1} {K_i}^{-1/ \delta_i} x_i^{(1 + \delta_i)/ \delta_i}, $$
where $b_i, \delta_i, K_i, i=1, \dots, 5$, are positive parameters. For brevity we assume that only the parameters $b_i$, reflecting the impact of the input prices on the production costs, evolve in time, whereas the parameters $\delta_i, K_i$ attain the same constant values as in \cite[Table 12.1]{OKZ}. The cost of change arises only at Firms 1, 2 and 3, with different multiplicative constants
$$\beta_1=0.5, \quad \beta_2=1, \quad \beta_3=2.$$
At the remaining firms any change of production does not incur additional costs ($\beta_4=\beta_5=0$). We will study the behaviour of the market over three time intervals,
$t \in \{1, 2, 3 \}$
with the initial productions (at $t=0$)
$$a_1=47.81, \quad a_2=51.14, \quad a_3=51.32, \quad a_4=48.55, \quad a_5 = 43.48, $$
corresponding to the standard Cournot-Nash equilibrium with the parameters taken over from \cite{MSS}.
The evolution of parameters $b_i$ is displayed in Table \ref{tab1} and the production cost functions for $t=1$ are depicted in Figure \ref{fig_p_and_c} (right).
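To make the model concrete, the following minimal Python sketch implements the model primitives used throughout this section -- the inverse demand function $\pi$, the production cost functions $c_i$, and the resulting total cost $J_i$ of a single firm (production cost plus cost of change minus revenue). The values of $\delta_i$ and $K_i$ below are placeholders, since the actual constants are taken from \cite[Table 12.1]{OKZ}.
\begin{verbatim}
import numpy as np

def price(T, gamma=1.0):
    # inverse demand function pi(gamma, T) = 5000^(1/gamma) * T^(-1/gamma)
    return 5000.0 ** (1.0 / gamma) * T ** (-1.0 / gamma)

def cost(x, b, delta, K):
    # production cost c_i(b_i, x_i)
    return b * x + delta / (delta + 1.0) * K ** (-1.0 / delta) \
           * x ** ((1.0 + delta) / delta)

def total_cost(xi, i, x, b, delta, K, beta, a):
    # J_i: production cost + cost of change - revenue of firm i,
    # with the remaining productions in x kept fixed
    T = x.sum() - x[i] + xi
    return (cost(xi, b[i], delta[i], K[i])
            + beta[i] * abs(xi - a[i]) - xi * price(T))

beta = np.array([0.5, 1.0, 2.0, 0.0, 0.0])
a = np.array([47.81, 51.14, 51.32, 48.55, 43.48])
b = np.array([9.0, 7.0, 3.0, 4.0, 2.0])   # values for t = 1
delta = np.ones(5)                         # placeholder values
K = np.full(5, 5.0)                        # placeholder values
\end{verbatim}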
\subsection{Cournot-Nash equilibria}
\newcolumntype{b}{X}
\newcolumntype{d}{>{\hsize=1\hsize}X}
\newcolumntype{s}{>{\hsize=.55\hsize}X}
\begin{table}
\normalsize
\begin{tabularx}{\linewidth}{s s d d d s s}
\hline
& $i$ & 1 & 2 & 3 & 4 & 5 \\
\hline
t=1 & $b_i$ & 9 & 7 & 3 & 4 & 2 \\
t=2 & $b_i$ & 10 & 8 & 5 & 4 & 2 \\
t=3 & $b_i$ & 11 & 9 & 8 & 4 & 2 \\
\hline
\end{tabularx}
\caption{Time dependent input parameters for the production costs.} \label{tab1}
\normalsize
\begin{tabularx}{\linewidth}{s s d d d s s}
\hline
& $\;i$ & 1 & 2 & 3 & 4 & 5 \\
\hline
t=0 & $\,a_i$ & 47.81& 51.14& 51.32& 48.55& 43.48\\
\hline
t=1 & $\;x_i$ & 49.41& 51.14& 54.24& 48.05& 43.09\\
& $-J_i$ & 377.23 (-0.80)& 459.95 & 639.95 (-5.83)& 503.44& 507.09\\
\hline
t=2 & $\;x_i$ & 49.41& 51.14& 54.24& 48.05& 43.09\\
& $-J_i$ & 328.62 & 408.81 & 537.30 & 503.44& 507.09\\
\hline
t=3 & $\;x_i$ & 45.71& 51.14& 51.58& 48.76& 43.64\\
& $-J_i$ & 286.75 (-1.85)& 379.76 & 386.92 (-5.31)& 527.22& 527.81\\
\hline
\end{tabularx}
\caption{Cournot-Nash equilibria.} \label{tab:Cournot}
\normalsize
\begin{tabularx}{\linewidth}{s s d d d s s}
\hline
& $\;i$ & 1 & 2 & 3 & 4 & 5 \\
\hline
t=0 & $\;a_i$ & 47.81& 51.14& 51.32& 48.55& 43.48\\
\hline
t=1 & $\;x_i$ & 54.95& 51.14& 53.59& 47.52& 42.68\\
& $-J_i$ & 380.49 (-3.57)& 443.52 & 619.80 (-4.54)& 486.00& 491.88\\
\hline
t=2 & $\;x_i$ & 53.09& 51.14& 53.59& 47.72& 42.84\\
& $-J_i$ & 329.49 (-0.93)& 398.58 & 523.65 & 492.55& 497.60\\
\hline
t=3 & $\;x_i$ & 53.05& 50.46& 50.77& 48.11& 43.14\\
& $-J_i$ & 289.65 (-0.02)& 356.57 (-0.68)& 364.33 (-5.64)& 505.29& 508.71\\
\hline
\end{tabularx}
\caption{Stackelberg-Cournot-Nash equilibria. Firm 1 is a leader.} \label{tab:Stackelberg}
\end{table}
For the computation of the respective Cournot-Nash equilibria we make use of a ``nonsmooth'' variant of the Gauss-Seidel method described in \cite[Algorithm 2]{Ka}. Using the notation of Section 4, in the main step S.2 one computes the iteration $x^{k}=(x^{k}_{1}, \ldots, x^{k}_{l})$ via $l$ consecutive nonsmooth (but convex) minimizations
\begin{equation}\label{eq-1000}
x^{k}_{i} \in \mathop{\rm arg\,min}\limits_{x_{i}\in A_{i}}f_{i}(p, x^{k}_{1}, \ldots, x^{k}_{i-1}, x_{i}, x^{k-1}_{i+1}, \ldots, x^{k-1}_{l})
+q_{i}(x_{i}), \quad i=1,2,\ldots, l,
\end{equation}
where $p$ is the given parameter and $ x^{k-1}=(x^{k-1}_{1}, \ldots,x^{k-1}_{l})$ is the previous iteration.\\
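A minimal Python sketch of this step is given below; it cyclically minimizes each firm's nonsmooth total cost over its interval $A_i$ with a bounded scalar solver based on Brent's method. The function \verb+J+ stands for the composite objective $f_{i}+q_{i}$, e.g., the \verb+total_cost+ sketch above; the simple successive-iterate test used here is a stand-in for the optimality-condition test discussed below.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def gauss_seidel(J, x0, bounds, tol=1e-8, max_iter=1000):
    # J(xi, i, x): total cost of firm i producing xi,
    # with the other entries of x kept fixed
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(len(x)):
            res = minimize_scalar(lambda xi: J(xi, i, x),
                                  bounds=bounds[i], method="bounded")
            x[i] = res.x
        if np.max(np.abs(x - x_prev)) < tol:  # successive-iterate test
            break
    return x
\end{verbatim}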
A respective modification of the convergence result \cite[Theorem 6.1]{Ka} takes the following form.
\begin{theorem}\label{Thm.4}
In addition to the assumptions posed in Section 4 suppose that the sequence $x^{k}$
converges for $k \rightarrow \infty$ to some $x^{*}=(x^{*}_{1}, \ldots, x^{*}_{l})\in \mathbb{R}^{l}$. Then $x^{*}$ is a Cournot-Nash equilibrium.
\end{theorem}
\proof
Observe that for $i=1,2,\ldots, l$ the points $x^{k}_{i}$ fulfill the optimality condition
\[
0 \in \nabla_{x_{i}}
f_{i}(p, x^{k}_{1}, \ldots, x^{k}_{i}, x^{k-1}_{i+1}, \ldots, x^{k-1}_{l}) + \partial q_{i}(x^{k}_{i})+ N_{A_{i}}(x^{k}_{i}).
\]
Thus, there are elements $\xi^{k}_{i}\in \partial q_{i}(x^{k}_{i})$ and $\eta^{k}_{i}\in N_{A_{i}}(x^{k}_{i})$ such that, by taking subsequences (without relabeling) if necessary, $\xi^{k}_{i} \rightarrow \xi^{*}_{i}, \eta^{k}_{i} \rightarrow \eta^{*}_{i}$, satisfying the conditions
\begin{equation}\label{eq-1001}
0 \in \nabla_{x_{i}}
f_{i}(p, x^{*}_{1}, \ldots, x^{*}_{l} ) + \xi^{*}_{i}+ \eta^{*}_{i}, \xi^{*}_{i} \in \partial q_i(x^{*}_{i}), \eta^{*}_{i} \in N_{A_{i}}(x^{*}_{i}),
\end{equation}
$i=1,2,\ldots, l$. This follows from the outer semicontinuity of the subdifferential mapping and from the uniform boundedness of the subdifferentials $\partial q_{i}$. Conditions (\ref{eq-1001}) say that $x^{*}$ is a Cournot-Nash equilibrium and we are done.
\hfill$\square$
\endproof
As the stopping rule we employ an approximate version of the optimality conditions (\ref{eq-1001}).
The obtained results are summarized in Table \ref{tab:Cournot} and show the productions and profits (negative total costs) of all firms at the individual time instances. In parentheses we display the costs of change (with negative signs), which decrease the profits of Firms 1, 2, 3 in case of any change of their production strategy.
Note that Firms 1 and 3 significantly increased their production at $t=1$ and decreased it at $t=3$, whereas Firm 2, due to the cost of change, kept its production unchanged over the whole period.
Figure \ref{cournot_optima} shows the total cost functions $J_i$ at time 1. Note that the equilibrium production of Firm 2 lies at a kink point, where its change of production (and hence its incurred cost of change) is zero. As expected, the functions $J_i$ are smooth for $i=4, 5$.\\
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{figure_Cournot_1}
\caption{The total cost functions and Cournot-Nash equilibria for $t=1$.}
\label{cournot_optima}
\end{figure}
\subsection{Stackelberg-Cournot-Nash equilibria}
Next we consider the same market as in the previous section where, however, the first producer now decides to replace the non-cooperative strategy by the Stackelberg strategy. This leads to the minimization of $\Theta$ over $A_1$,
cf. \cite[page 220]{OKZ}.
MPEC \eqref{eq-401} is, however, more complicated than the MPECs solved with this approach in \cite{OKZ}, because the presence of the costs of change makes the lower-level equilibrium problem much more difficult and the objective is not continuously differentiable.
On the other hand, since $n=1$, we may use for the minimization of $\Theta$ any suitable routine for nonsmooth constrained univariate minimization, and the same routine may be used inside the Gauss-Seidel method for the solution of the ``lower-level'' problem, needed in the computation of the values of $\Theta$.
To this aim we used the built-in Matlab function \verb+fminbnd+ (\cite{Br}).
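For illustration, a hypothetical Python analogue of this two-level computation could look as follows; \verb+solve_lower_level+ stands for any routine evaluating $Z(x_{1})$, e.g., the Gauss-Seidel sketch above applied to firms $2,\ldots,l$. As with \verb+fminbnd+, the bounded Brent-type solver is applied to a nonsmooth objective, so the result should be checked for stationarity.
\begin{verbatim}
from scipy.optimize import minimize_scalar

def theta(x1, solve_lower_level, c1, price, beta1, a1):
    # leader's total cost: evaluate the followers' response Z(x1)
    # first, then the profit at the induced total production T
    x_rest = solve_lower_level(x1)
    T = x1 + sum(x_rest)
    return c1(x1) - x1 * price(T) + beta1 * abs(x1 - a1)

def stackelberg(solve_lower_level, c1, price, beta1, a1, A1):
    # bounded scalar minimization, the analogue of Matlab's fminbnd
    res = minimize_scalar(
        lambda x1: theta(x1, solve_lower_level, c1, price, beta1, a1),
        bounds=A1, method="bounded")
    return res.x, res.fun
\end{verbatim}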
\begin{remark}
Numerical results of both Subsections 5.1 and 5.2 were generated by our own Matlab code, freely available for downloading and testing at:
\begin{center}
{\em \url{https://www.mathworks.com/matlabcentral/fileexchange/72771}} .
\end{center}
The code is flexible and allows for easy modifications to different models.
\end{remark}
The obtained results are summarized in Table \ref{tab:Stackelberg}. They are quite different from their counterparts in Table \ref{tab:Cournot} and show that, by switching to the Stackelberg strategy, Firm 1 substantially improves its profit. In contrast to the noncooperative strategy, it now has to change its production at each time step, and also Firm 2, which preserved the same production over the whole time in the Cournot-Nash case, is now forced to change it at $t=3$. Of course, our data are purely academic and can hardly be used for economic interpretations. On the other hand, the results are sound and show the potential of the suggested techniques to be applied also in some more realistic situations.\\
\section*{Conclusion}
\noindent In the first half of the paper we have studied a parametrized variational inequality of the second kind. In this form, one can write down, for example, a condition which characterizes solutions of some parameter-dependent Nash equilibrium problems. By using standard tools of variational analysis, sufficient conditions have been derived ensuring the existence of a single-valued and Lipschitzian localization of the respective solution mapping. Apart from post-optimal analysis, the obtained results can be used in the computation of the respective equilibria for given values of the parameter via continuation (\cite{AG}) or Newton-type methods (\cite{GO8}).
The second part of the paper has been inspired, on the one hand, by the successful theory of rate-independent processes (\cite{MR},\cite{FKV}) and, on the other hand, by the important economic paper \cite{Fl}. It turns out that in some market models the cost of change of the production strategy can be viewed as the economic counterpart of the dissipation energy arising in rate-independent dissipative models of the nonlinear mechanics of solids. The cost of change (dissipation energy) also occurs, e.g., in modeling the behavior of some national banks which try to regulate the inflation rate, among other instruments, via buying or selling suitable amounts of the domestic currency on international financial markets \cite{R}.
The considered market model, obtained by augmenting a standard model from \cite{MSS} with the cost of change, possesses very strong stability properties and is amenable to various numerical approaches. Both in the case of the Cournot-Nash equilibrium as well as in the two-level (Stackelberg) case we have used rather simple techniques based on the ``nonsmooth'' line search from \cite{Br}. In the case of multiple commodities and/or more complicated costs of change one could employ more sophisticated approaches based either on the second-order subdifferentials or on the second subderivatives discussed in Section 3. This will be the subject of our future work.
\section{Introduction}
Due to the high resolution with which NOG samples the dominating structures
of the nearby Universe, this catalog is more suitable than
IRAS-selected galaxy
samples for mapping the galaxy density field on quite small scales ($<2$ Mpc).
However, the high-density sampling rate achieved with the NOG selection
criteria ($B_T^c \leq 14$, $cz \leq 6000$ km s$^{-1}$, $|b| \geq 20^{\circ}$)
is counteracted by systematic effects arising
from the cutoffs in the selection parameters or from non-uniformities in
the original catalogs which may not have been properly homogenized in our
sample. We have therefore tried to correct and minimize these
biases by testing the sample completeness by means of a count-magnitude
analysis and by deriving the appropriate
luminosity and redshift selection functions.
Historically, redshift surveys have provided the raw basis for investigating
the three dimensional nature of the Universe.
However, a large and complete sample of galaxy {\em distances}
would represent a
marked improvement over redshift surveys for measuring the properties of the
galaxy 3D distribution, since the density map in redshift space can be a
systematically distorted version of the real picture.
We have therefore carried out the task of deriving accurate ``true distance''
estimates for all the galaxies of the sample for which redshift
information was available.
This has been carried out (Marinoni et al. 1998) by modelling the Doppler
perturbations induced by peculiar motions and disentangling
the cosmological component of the redshift, which is the one acting as
a distance indicator.
\begin{figure}
\plotfiddle{marinonic1.eps}{7.5cm}{0}{85}{42.5}{-268}{-60}
\caption{Plots showing the velocity field in the CMB frame for the
modified cluster dipole model ({\em left}) and the multi-attractor model
fitted to the Mark III data set ({\em right}). The vectors shown are
projections of the 3D velocity field onto the Supergalactic plane (SGX, SGY).
The contours correspond to constant velocity vector modulus; contour spacing
is 100 km s$^{-1}$, with the heavy contours marking 100 km s$^{-1}$ and 200
km s$^{-1}$ for the two models, respectively.}
\end{figure}
\section{Distance reconstruction}
In order to correct raw redshift-distances we used two basic models of the
peculiar velocity field. These two models are meant to be representative of
the two competing and most popular pictures of the $z=0$ kinematics. As a matter of fact, they describe the velocity field by giving two opposite
interpretations of the amplitude and the coherence length scale of the
motions.
The first model is the optical cluster 3D-dipole reconstruction scheme of
Branchini \& Plionis (1996) that we modify with the inclusion of a local
model of Virgocentric infall. This model shows a region where the flow
bifurcates towards the Great Attractor and towards the Perseus-Pisces complex.
The second description of the peculiar velocity field has been worked out
using a multi-attractor model fitted to the Mark III peculiar velocity
catalogue (Willick et al. 1997).
This is a collection of homogenized distances for a sample of
galaxies of different morphological type and distributed in a nearly
isotropic way in the sky.
In applying this reconstruction scheme we have adopted
a King density profile for characterizing the mass distribution
of each attractor
(i.e. Virgo cluster, the Great Attractor, the Perseus-Pisces
and Shapley superclusters) and a weakly non-linear series expansion
by Reg\"os \& Geller (1989), to relate the peculiar velocity field and the
mass fluctuations.
The emerging picture is one in which the principal feature of the velocity field in the PP region is a coherent streaming flow in the general direction
of the GA and Shapley superclusters.
Inverting the non-linear redshift--distance relations predicted by the
above-mentioned velocity field models, we derive the distances
of galaxies.
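As an illustration, the inversion can be carried out numerically by root finding. The sketch below assumes a single-valued relation along the line of sight (near strong attractors the relation may become triple-valued and the choice of root requires extra care); \verb+u_los+ is a hypothetical function returning the model line-of-sight peculiar velocity at distance $r$, with $r$ and $cz$ both expressed in km/s.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def distance_from_redshift(cz, u_los, r_max=8000.0):
    # invert cz = r + u_los(r) for the cosmological distance r
    f = lambda r: r + u_los(r) - cz
    return brentq(f, 1.0, r_max)  # assumes f changes sign exactly once

# toy example: infall pattern around an attractor at r = 4000 km/s
u_los = lambda r: -0.2 * (r - 4000.0) * np.exp(-((r - 4000.0) / 1000.0) ** 2)
print(distance_from_redshift(3500.0, u_los))
\end{verbatim}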
The use of different velocity field models allows us to check to what
extent differences in the description of the peculiar flows influence
the estimate of galaxy distances in the nearby universe.
We note that these differences turn out to be more prominent at the largest
and smallest distances rather than for intermediate distances (i.e.,
for $2000 < r < 4000$ km/s, where $r$ is the distance expressed in
km/s).
We have also studied the stability of the luminosity function and
of the derived selection function against variations in the adopted
peculiar velocity field models. Following the lines described in
Marinoni et al. 1999, we found that peculiar motion effects are of the order
of statistical uncertainties and cause at most variations of 1 $\sigma$ in
$\alpha$ and 2$\sigma$ in $M^{*}_B$.
\section{Density Reconstruction}
The galaxy distribution is intrinsically a point process.
The problem of reconstructing the density fluctuation $\delta({\bf r})$
is connected with finding the best transformation scheme for {\em diluting}
the point distribution into a continuous density field. After having
devised an algorithm to infer real distances from measured redshifts,
the remaining problems we have to overcome are:
\begin{itemize}
\item the number density of galaxies
in a flux-limited redshift sample is a decreasing
function of distance, and a small error in the selection function, used
to recover the real population of objects, causes
a systematic error in the density field;
\item the mean interparticle spacing of a redshift catalog is an
increasing function of distance, with correspondingly ever-increasing
shot noise. Correcting for this effect introduces a lack of statistical
similarity between the nearby and faraway parts of the catalog;
\item 34\% of the sky is not covered by the NOG catalog.
\end{itemize}
We address all these issues by smoothing with a normalized Gaussian filter
whose smoothing length is a properly defined
increasing function of distance. Moreover, we assign each galaxy a weight
given by the inverse of the sample selection function in order to properly
calibrate the median value of the density.
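A minimal numerical sketch of this estimator (with a crude mean-density normalization, for illustration only) could be:
\begin{verbatim}
import numpy as np

def density_contrast(pos, sel, grid, r_s):
    # pos : (N,3) galaxy positions, sel : (N,) selection function values,
    # grid: (M,3) grid points,      r_s : (M,) smoothing lengths
    #                                     (increasing with distance)
    w = 1.0 / sel                     # weight = inverse selection function
    rho = np.empty(len(grid))
    for j in range(len(grid)):
        d2 = np.sum((pos - grid[j]) ** 2, axis=1)
        s2 = r_s[j] ** 2
        kern = np.exp(-0.5 * d2 / s2) / (2.0 * np.pi * s2) ** 1.5
        rho[j] = np.sum(w * kern)     # weighted Gaussian-smoothed density
    return rho / rho.mean() - 1.0     # crude delta = rho/<rho> - 1
\end{verbatim}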
The specific features of the galaxy distribution field
are shown in figure 2, where we plot the contours, spaced by 0.5, of
the galaxy density contrast $\delta$ in the Supergalactic Plane.
It is clear how NOG can constrain the shape and dimensionality of
high-amplitude, nearby structures such as the so-called Supergalactic Plane.
It is also clear, from a first visual impression, how irregular
the shapes of the major structures are and how roughly symmetric
the distribution of high- and low-density regions is.
A full description of the density peaks and voids
characterizing the whole volume of the catalog will be presented in
a forthcoming paper.
\begin{figure}
\plotfiddle{marinonic2.eps}{8.5cm}{0}{53}{53}{-183}{-90}
\caption{The {\em real space} density field of NOG galaxies in the
Supergalactic Plane.
A Gaussian filter with an average smoothing length of 500 km/s has been
applied. Dashed contours represent negative values of $\delta$,
i.e., underdense regions with respect to the average density.
Some prominent structures dominating the local volume
such as the Hydra-Centaurus-GA complex, Virgo, Perseus-Pisces, Cetus Wall
and Sculptor void are clearly visible.
}
\end{figure}
\section{Introduction}
Artificial intelligence methods are playing an increasingly important role in global economics. The growing importance and, at the same time, the risks associated with AI are driving a vibrant discussion about the responsible development of artificial intelligence. Examples of negative consequences resulting from black-box models show that interpretability, transparency, safety, and fairness are essential yet sometimes overlooked components of AI systems.
Efforts to secure the responsible development of AI systems are ongoing at many levels and in many communities, both policymakers and academics \citep{gill_responsible_2020, barredo_arrieta_explainable_2020, baniecki_dalex_2020}.
Naturally, national strategies for the development of responsible AI, sector regulations related to the safe use of AI, as well as academic research related to new methods that ensure the transparency and verifiability of models are all interrelated. Strategies are based on discussions in the scientific community and are often sources of inspiration for subsequent research work. The need for regulation stems from risks, often identified by the research community, but when regulations are created, they become a powerful tool for developing methods to meet expectations. Scientific work in AI is particularly strongly connected to the economy, which means that a large part of it responds to the threads identified in regulations and strategies.
Although this impact is strong, we know little about the dynamics and structure of this impact. Analyses of AI-related policies are carried out by the OECD AI Policy Observatory\footnote{\url{https://www.oecd.ai/}}, and by the European Commission's AI Watch\footnote{\url{https://knowledge4policy.ec.europa.eu/ai-watch_en}} at the European level. However, academics working in responsible AI are most often locked in an information bubble of articles on XAI that are discussed at prestigious conferences, journals, and on preprint servers such as arXiv\footnote{\url{https://arxiv.org/}}. This interaction is complicated by the different aims of the stakeholders developing AI solutions versus the stakeholders that use AI solutions, who typically do not understand their limitations.
We know that there is a gap between the expectations (enshrined in strategies and regulations) and the reality (presented in research papers) related to AI \citep{krafft_defining}. And due to the increasing number of documents, it is close to impossible to analyze this gap by manually analyzing all source documents.
\subsection{Our Contribution}
To address this problem, we need a standardized knowledge base that can be processed in an automated way. In this paper, we present the concept and implementation of such a framework. To achieve that, we build a set of tools for scraping, filtering, and preprocessing relevant documents. Our system extracts information from documents using Natural Language Processing (NLP). The proposed framework processes not only AI regulations, which have been developed relatively recently, but also guidelines, whitepapers, and academic articles. To study the dynamics of influence between academia and policymakers, we must detect interconnections between papers and policy documents, both explicit (citations, references) and implicit (similarities in approach to concepts, same author affiliations). The tools developed in this study allow us to follow the process of institutionalizing ideas on how technologies associated with artificial intelligence should be regulated.
The described system is, to our knowledge, the first such solution that combines research papers, strategies, and regulations with rich annotations. In the second part of the paper, we showcase a set of analyses that such a system can perform. However, this is by no means an exhaustive list of use cases. In this work, we focused on XAI papers, but the proposed method could be used more broadly to analyze any subfield of AI. We believe that this work creates the foundation for future analyses of cross-dependencies between strategies, articles, and regulations.
\subsection{Relation to social sciences}
Studying the development of regulations regarding AI by using automated AI systems is interesting not only for practical reasons. This framework not only allows discovering the directions in which regulations develop; it can also be used to study a topic that has always been important for social science -- the relation between humans and technology. At least since the research conducted by \cite{Ogburn}, scholars have been interested in studying how, and how quickly, culture embraces new technology: how much time do we need to develop norms and rules telling us how we should understand the new technology, how we should use it, and how we should be punished for not following the rules? Therefore, no matter how extraordinary the technologies associated with AI may seem to us, the type of problems they pose is, at a general level, not new. What is new is the possibility of using new technology to study how it is being culturally embraced.
Social science offers a variety of ways of studying relations between technology and culture -- the system presented in this paper is built around a take on this issue coming from political science, more specifically, the studies on public policies. On the one hand, our system can extract the significant characteristics of AI policies. On the other hand, it can grasp the dynamics behind the process of shaping these policies. It allows studying policy design~\citep{siddiki_2020} and policy process~\citep{Weible} at the same time. The system enables us to analyze which sets of AI experts influence policies towards AI and, at the same time, to study the characteristics of these policies using Institutional Grammar (IG). There have been attempts at automated IG tagging using NLP~\citep{Rice2021}, but the code is not available.
\subsection{Related work}
\label{sec:related_work}
Using text processing techniques to tackle political science issues has been common for a while, but only recently have modern NLP methods been adopted~\citep{glavas_computational_2019,hollibaugh_use_2019}. A recent example of automated policy text analysis is~\cite{linder_text_2020}, where information extraction methods were used to mine similarities between public policy texts.
There are also recent examples of combining network analysis and NLP in political science. Namely, \cite{zaytsev_entity_2019} used named entity recognition and the Chinese Whispers algorithm in a quantitative approach to identifying coalitions of actors influencing policymaking.
Studying dynamics of machine learning research has been catching interest lately~\citep{martinez-plumed_research_2021}; however, there has been no record of quantitatively analyzing such dynamics between academic papers and public policies. The topic of the influence of research over policy has been studied using traditional methodologies~\citep{newman_policy_2016}.
There were attempts at analyzing the relationship between policymakers and academia in the context of XAI \citep{krafft_defining, LANGER2021103473}; however, the methodology of such studies never included analyzing documents produced both by academia and policymakers.
\subsection{IG as a novel approach to information extraction}
\label{sec:ig}
The Institutional Grammar (IG) was created to settle the debate regarding one of the crucial issues in social science -- the nature of institutions~\citep{ig_ostrom}, more specifically, how institutions regulate human behavior. However, it is now used mainly as an analytical tool in policy design studies. This type of research is focused on "the purposeful, functional, and normative qualities of public policies" \cite[p.~1]{siddiki_2020}, and it is especially concerned with "the content of policies and how this content is organized" \citep{siddiki_2020, SchneiderIngram}. Because the content of policies is expressed mainly through legal regulations, IG is mainly used to analyze legal regulations. The tool not only allows developing research in political science, but has also found applications in computer science~\citep{Frantz2016}. IG's attractive feature seems to be its ability to transfer legal text into a computer-readable format.
IG has been developing since its creation by \citeauthor{ig_ostrom}. Its most current version -- IG 2.0 -- is presented in a codebook written in cooperation between political and computer scientists ~\citep{ig_codebook}.
The basic unit of IG analysis is a statement. There are two types of statements: constitutive ones and regulative ones. Constitutive statements define crucial elements of a particular policy, e.g.,~\textit{For the purpose of this Regulation, ‘provider’ means a legal person that develops an AI system}, whereas regulative statements provide information on which activities are allowed, forbidden, or obligatory in a particular policy setting, e.g.,~\textit{The European Data Protection Supervisor may impose administrative fines on Union institutions.} Each type of statement can be parsed using the proper IG components (see Table~\ref{tab:ig_components}). In the case of our examples, they should be parsed as follows:~\textit{For the purpose of this Regulation (AC), ‘provider’ (E) means (F) a legal person that develops an AI system (P)} and~\textit{The European Data Protection Supervisor (A) may (D) impose (I) administrative fines (B).}
\begin{table}
\begin{center}
\caption{IG main components depending on statement type (regulative or constitutive) based on~\cite[pp.~10-11]{ig_codebook}.}
\label{tab:ig_components}
\begin{tabular}{ |p{2cm}|p{3cm}|p{2.2cm}|p{3cm}| }
\hline
Regulative statements & Description & Constitutive statements & Description\\
\hline \hline
Attribute (A)& The addressee of the statement. & Constituted Entity (E)& The entity being defined.\\
\hline
Aim (I)& The action of addressee regulated by the statement. & Constitutive Function (F)& A verb used to define Constituted Entity.\\
\hline
Deontic (D)& An operator determining the level of discretion or constraint associated with Aim. & Modal (M)& An operator determining the level of necessity and possibility of defining Constituted Entity.\\
\hline
Object (B)& The receiver of the action described by Aim & Constituting Properties (P)& The entity against which Constituted Entity is defined.\\
\hline
Activation Condition (AC)& The setting to which the statements apply. & Activation Condition (AC)& The setting to which the statements apply.\\
\hline
Execution Constraint (EC)& Quality of action described by Aim & Execution Constraint (EC)& Quality of Constitutive Function.\\
\hline
\end{tabular}
\end{center}
\vspace{-0.7cm}
\end{table}
The scope of an IG implementation into research depends on its goals. Our study on AI regulations follows those analyses where only some IG components were identified in legal texts and examined~\citep{Heikkila2018}.
\section{The architecture of the MAIR framework}\label{sec:sect2}
To automatically analyze AI regulations' dynamics, we must first gather policy documents and academic papers, enrich them with relevant meta-information, and find interconnections between them.
The ideas described in sections \ref{sec:related_work} and \ref{sec:ig} inspired the development of the MAIR (Monitoring of AI Regulations, strategies, and research papers) framework. The architecture of this framework is shown in Fig.~\ref{fig:system_pipline}. The framework is fed with documents retrieved from four sources: OECD AI Policy Observatory\footnote{OECD AI Policy Observatory website: \url{https://oecd.ai/}, last download date: 19 Mar 2021.} and NESTA AI Governance Database\footnote{NESTA AI Governance Database website: \url{https://www.nesta.org.uk/data-visualisation-and-interactive/ai-governance-database/}, last download date: 19 Mar 2021.} for policy documents, and arXiv \citep{clement2019arxiv} as well as the Semantic Scholar Research Corpus (S2ORC) \citep{lo-wang-2020-s2orc} for research papers. These documents are usually available as pdf files and are scraped with Beautiful Soup\footnote{Beautiful Soup library is available on PyPi: \url{https://pypi.org/project/beautifulsoup4/}.}.
The MAIR system automatically detects certain sections of the texts, such as headers and bibliographies, in order to later extract citations and affiliations only from those parts of the text.
We extract and collect metadata, such as authors, source websites, and others, for later processing along with the content of the documents. Then we run a series of information extraction processes -- we determine the policy document function, extract deontic sentences along with Institutional Grammar attributes, determine authors and affiliations, and find cross-citations between documents and other relevant data. All of these processes are described in detail in Sect.~\ref{sec.inf_extr}. All data gathering, processing, and extraction steps are managed by the DVC pipeline~\citep{ruslan-kuprieiev}, which allows for an easy update of all results in case new data become available.
The source code of the framework on the open GPL-3 license is available in the GitHub MAIR repository\footnote{GitHub MAIR repository: \url{https://github.com/ModelOriented/MAIR}.}.
In the framework, we use two corpora of articles from arXiv:
\begin{itemize}
\item \verb'arXiv.AI' that consists of all AI-related papers. These papers are identified based on the categories identified by authors (see Appendix \ref{arxiv-categories-appendix}). Due to its volume, this corpus contains only metadata. Today there are 164,105 documents in this corpus. This resource is used to identify papers referenced in policy documents in Sect. \ref{citation-network-construction}. \label{arxiv-AI}
\item \verb'arXiv.XAI' that consists of a subset of the above related specifically to the domain of Explainable Artificial Intelligence and Interpretable Machine Learning, filtered by combinations of domain keywords (see Appendix \ref{xai-keywords}). It contains 742 papers with full texts along with metadata. Additionally, we extract a citation network by calling Semantic Scholar API.
\label{arxiv-XAI}
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{diagrams/system.pdf}
\caption{The process of acquiring and enriching documents for the MAIR database. The first level indicated by cloud icons identifies data sources, different for regulations and strategies and different for research papers. Subsequent components with a white background refer to the technical processing of the retrieved documents. The grey background indicates system elements that enrich documents with additional annotations, create additional meta-data or links between documents. The enriched documents are then stored in three databases describing different types of extracted data.}
\label{fig:system_pipline}
\end{figure}
\clearpage
\section{Tools for knowledge extraction}
\label{sec.inf_extr}
This section describes the NLP methods used to extract various characteristics from policy documents and academic papers. We extract document-wide qualities (document function, affiliations), links between documents, and a list of deontic sentences tagged with Institutional Grammar (IG) for every document. Scraped metadata, such as the document issuing year, are stored together with the extracted pieces of information and can later be used for the analysis of various aspects of the dynamics.
\subsection{Classification of document function}
Policy documents from Nesta and OECD fall into several categories based on their function. However, the classification provided by the authors is ambiguous and inconsistent between the two sources. We develop a more systematized classification system that we use as metadata in further analysis. Specifically, we define categories, perform manual annotation and train an NLP model for the automatic categorization.
For each document, we assign one of the following categories:
\begin{enumerate}
\item \textbf{Diagnosis} -- reports, and other documents describing the current state of AI;
\item \textbf{Principles} -- sets of ethical rules regarding AI;
\item \textbf{Strategies} -- documents describing actions that should be taken towards AI;
\item \textbf{Pre-regulations} -- proposals of legal regulations addressing AI;
\item \textbf{Regulations} -- legal regulations addressing AI;
\item \textbf{Body} -- documents establishing AI-related organizations.
\end{enumerate}
In the manual labeling process, we achieve 77\% agreement between two trained annotators and resolve conflicts with a third annotator (the main author of this paper).
To classify documents, we use a few-shot learning model based on Task-Aware Representation of Sentences \citep{halder_task-aware_2020}, implemented in flairNLP \citep{akbik2019flair}.
We achieve 80.8\% accuracy on the holdout set. A detailed breakdown of accuracy per class is provided in Appendix \ref{sec:function_class_performance}.
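For illustration, a minimal zero-shot variant of this classification, assuming the TARS interface of the flairNLP library, could look as follows (in our actual pipeline the model is additionally tuned in a few-shot fashion on the manually labeled documents):
\begin{verbatim}
from flair.data import Sentence
from flair.models import TARSClassifier

# pretrained task-aware model shipped with flairNLP
tars = TARSClassifier.load("tars-base")

labels = ["diagnosis", "principles", "strategies",
          "pre-regulations", "regulations", "body"]

doc = Sentence("This white paper surveys the current state of AI uptake.")
tars.predict_zero_shot(doc, labels)
print(doc.labels)   # predicted function of the policy document
\end{verbatim}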
\subsection{Extraction of Institutional Grammar (IG) tags}
In our solution, we focused on extracting four IG components from texts -- Attributes, Aims, Deontics, and Objects. Knowing that automating sentence tagging according to IG is not an easy task~\citep{Rice2021}, we simplified our approach to this analytical tool~\citep{Heikkila2018}. First of all, we tagged only those sentences which contain modal verbs, because rights, obligations, and restrictions are usually expressed through this type of sentence. Secondly, we treated all selected sentences as regulative statements. By doing this, we lose the information whether the activities associated with a deontic describe essential functions of the entities to which the statements are addressed or only potential actions they are capable of performing. Thirdly, we did not want to analyze very complex sentence structures, and focused only on main clauses. Therefore our implementation of IG was tailored to the specific needs of our research.
The algorithm is based on dependency trees. The first step, after initial preprocessing, is splitting texts into sentences and parsing them with the spacy dependency parser \citep{spacy}. Then, we locate sentences with deontics from a closed list. The algorithm uses dependency relationships to locate the verb (Aim), then subjects (Attribute) or passive subjects (Object). If there is no direct subject, we search the tree for a clausal subject\footnote{For the definition of clausal subject, see \url{https://universaldependencies.org/u/dep/csubj.html}} subsentence and extract the subject of such a subsentence. Then, any additional Objects are identified. We recursively repeat this procedure for every verb that is in conjunction with the parent verb of the deontic. In the end, we add every subject conjugated with any of the previously found subjects (and do the same for objects). If any of the found subjects is a pronoun, we perform an additional step of coreference resolution to find the entity to which the pronoun refers. For this, we used the Neuralcoref Spacy extension\footnote{Github repository with neuralcoref code: \url{https://github.com/huggingface/neuralcoref}}, which implements the method presented in \citep{clark_deep_2016}.
Every deontic is then mapped onto one of three categories: \textit{shall}, \textit{must} or \textit{can}, and every Object, Attribute, and Aim is lemmatized to simplify the further analysis. The details of the tagging algorithm are presented in the Appendix~\ref{sec:algorithm}.
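A heavily simplified sketch of the core of this procedure (without conjunctions, clausal subjects and coreference resolution, which are handled in the full algorithm) is shown below:
\begin{verbatim}
import spacy

nlp = spacy.load("en_core_web_sm")
DEONTICS = {"shall", "must", "may", "should", "can", "might", "could"}

def tag_statement(sentence):
    # returns (Attributes, Deontic, Aim, Objects) for the main clause
    doc = nlp(sentence)
    for tok in doc:
        if tok.lemma_ in DEONTICS and tok.dep_ == "aux":
            aim = tok.head                       # governing verb = Aim
            attrs = [c.text for c in aim.children
                     if c.dep_ == "nsubj"]       # subjects -> Attribute
            objs = [c.text for c in aim.children
                    if c.dep_ in ("dobj", "nsubjpass")]  # Objects
            return attrs, tok.text, aim.lemma_, objs
    return None

print(tag_statement("Designers should submit documentation."))
# (['Designers'], 'should', 'submit', ['documentation'])
\end{verbatim}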
\begin{figure}
\centering
\includegraphics[width=\textwidth]{diagrams/parsed_trees.pdf}
\label{fig:dependency}
\caption{Examples of dependency-parsed deontic sentences. Arrows represent dependencies, arrow labels are dependency relation types, and the bottom words are part-of-speech tags -- all three are produced by the spacy parser and are used in the IG tagging algorithm. Colors indicate IG tags assigned to words. In the first sentence, the recognized Deontic (the starting point for the algorithm) is "should" (which is translated in our nomenclature to "shall"). Our algorithm recognizes 3 Attributes ("designers", "builders", "manufacturers"), 1 Aim ("submit") and 2 Objects ("details", "documentation").
The second example is a passive sentence. The starting point for the algorithm is "must"; there are two recognized Aims ("logged" and "retained") and 1 Object ("any decisions").}
\end{figure}
\subsection{Extraction of authors' affiliations }
One of the angles of our analysis is to discover players that actively impact the discourse and shape the AI regulations. To do this, we decided to extract the affiliations of the authors of papers. Here, we use an arXiv XAI dataset with sources of 742 papers (described in Sect. \ref{arxiv-XAI}).
Since arXiv collects affiliation metadata only through an optional field and the submitter's information, it is too sparse for any further application. For this reason, we extract it from the paper itself. Specifically, we choose to work on the LaTeX sources, which come in a structured format. It is, however, not straightforward to extract affiliations from this format, as multiple tags may enclose this information. Additionally, there is no standard format for placing the affiliations in the paper, so they often mix with the authors' names or exact addresses. As a simplification, we do not intend to link specific authors with their institutions but instead find the set of affiliations for the article.
Overall our pipeline consists of four steps:
\begin{enumerate}
\item \textbf{Locate} the rough position of the affiliation in the text. We do it based on a list of identified LaTeX tags. Thereby we avoid extracting institutions that are referred to in the text but are not authors' affiliations.
\item \textbf{Extract} the names of the institutions.
\item \textbf{Match} different names of one organization into one, for example, a university name with/without the department.
\item \textbf{Classify} the organization as either academia or business.
\end{enumerate}
For various steps, we explored several options, including SpaCy \citep{spacy} Named Entity Recognition (NER) for extraction, utilizing the email domain as a proxy identifier of an institution for matching, and rule-based classification.
We then used Named Entity Linking (NEL)~\citep{6823700} for matching affiliations names with the external database. Specifically, we use the tool called Babelfy~\citep{moro-etal-2014-entity} which extracts entities and matches them against a DBpedia knowledge graph. Finally, we classify the affiliations based on their DBpedia entry tags.
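The Locate step can be illustrated by the following sketch operating on the raw LaTeX source (the tag list here is abridged and illustrative; the production list is longer):
\begin{verbatim}
import re

# a few LaTeX tags that commonly wrap affiliation text (non-exhaustive)
AFFIL_TAGS = [r"\\affiliation\{([^}]*)\}",
              r"\\institute\{([^}]*)\}",
              r"\\affil(?:\[[^\]]*\])?\{([^}]*)\}"]

def locate_affiliations(latex_source):
    # return raw affiliation strings found in a paper's LaTeX source
    hits = []
    for pattern in AFFIL_TAGS:
        hits += re.findall(pattern, latex_source)
    return [h.strip() for h in hits]

src = r"\author{J. Doe} \affiliation{Dept. of CS, Example University}"
print(locate_affiliations(src))
# ['Dept. of CS, Example University']
\end{verbatim}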
A demonstration of the affiliation extractor is shown in Fig.~\ref{fig:affiliation_extraction}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{diagrams/affiliation_extraction.pdf}
\caption{The process of affiliation extraction. In the Locate step, we use LaTeX tags to identify the potential location of the affiliations (left). Extract and Match stages performed with Named Entity Linking (NEL) result in matching with a corresponding DBpedia entry (middle). Classify step outputs type of affiliation based on DBpedia metadata (right).}
\label{fig:affiliation_extraction}
\end{figure}
\subsection{Construction of citation network}
\label{citation-network-construction}
We constructed a citation network coupling research papers (arXiv) and policy documents (OECD, NESTA) focusing on references in policy documents pointing to academic papers. Our data consists of 196 policy documents and the \verb'arXiv.AI' dataset of 164,105 papers (described in detail in Sect.~\ref{arxiv-AI}).
Policy documents do not follow any structured referencing format and are provided as PDF files, which results in a lack of metadata about citations and prevents us from using existing tools that assume a consistent format of references.
We apply techniques from the field of Information Extraction to tackle issues of document linking~\citep{sil-etal-2012-linking} using metadata such as title, author names~\citep{shoaib2020author} in a free text \citep{essay73817}.
In this case, we pair each policy document with each paper and determine a match by either a paper's arXiv \verb'id' or a pair of (\verb'title', \verb'author') -- a demonstration of the link extraction method is shown in Fig.~\ref{fig:link_extractor}.
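A minimal sketch of this matching step could be (the metadata field names are illustrative):
\begin{verbatim}
import re

def find_links(policy_text, papers):
    # papers: iterable of dicts with keys 'id', 'title',
    # 'first_author_last'; returns ids of matched papers
    text = policy_text.lower()
    links = []
    for p in papers:
        id_hit = re.search(r"arxiv[:\s/]*" + re.escape(p["id"]), text)
        meta_hit = (p["title"].lower() in text
                    and p["first_author_last"].lower() in text)
        if id_hit or meta_hit:
            links.append(p["id"])
    return links
\end{verbatim}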
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{diagrams/link_extractor.pdf}
\caption{In the policy documents (left), we mine references pointing to research papers (right). A link is identified if there is a match in the metadata. In this example, the top of two papers is matched by a combination of title and author last name.}
\label{fig:link_extractor}
\end{figure}
As a result, we obtain a bipartite graph of 202 links -- 37 policy documents citing 146 papers.
\section{Analysis of the MAIR corpus of documents}
Section \ref{sec:sect2} introduces the MAIR framework for collecting strategies, regulations, and research papers about XAI. Section \ref{sec.inf_extr} describes a set of techniques to enrich this corpus with additional meta-information extracted with advanced NLP tools. Such a corpus may serve many dedicated analyses related to the interdependencies between different stakeholders such as countries, IT companies, or academics. In this section, we present some example analyses of this corpus. It is not our aim to solve a specific research question but rather to show the versatility and usefulness of the developed resource.
The first example focuses on the temporal analysis of citations between scientific articles and their cross-connections with policy documents; the second brings to the front inter-dependencies among different players -- in particular academia and industry. The third shows the use of deontic information to track differences in attitudes towards the human-AI relationship. Each of these examples describes an independent research problem.
\subsection{XAI papers citation network}
\label{sec.arxiv_cit_an}
In the first case, we filter the \verb'arXiv.XAI' network of 742 arXiv papers (described in Sect. \ref{arxiv-XAI}) so that we take into account only nodes that cite (out-connections) or are cited by (in-connections) at least one of the 742 papers. As a result, we restrict ourselves to a directed citation graph $G_c$ of $N_c=525$ nodes and $E_c=1919$ edges, with no correlations among nodes' in- and out-degrees (Pearson's $r=-0.03$), consisting of one giant component with 505 nodes and 7 small clusters. The graph is shown in Fig.~\ref{fig:netc}A with color-coded nodes reflecting affiliation type. However, due to its rather high density, the picture does not bring any specific insights. On the other hand, the analysis of the proportion of incoming and outgoing links reveals that the majority of connections go to papers with affiliations identified both from academia and industry (see Table~\ref{tab:tabc}). There is also a significant difference in the profile of incoming and outgoing links, e.g., although industry-affiliated papers have a very similar number of in- and out-connections (85 vs 87), it is almost three times less likely that an industry paper cites an academia one than the opposite. Nonetheless, this picture might be dimmed by the significant number of uncategorized papers.
\begin{table}[]
\centering
\caption{Breakdown of links in the citation graph $G_c$: rows give the number of outgoing connections while columns represent incoming ones. "Academia \& industry" means papers with affiliations from both academia and industry.}
\label{tab:tabc} \begin{tabular}{|c|cccc|c|}
\hline
out \textbackslash ~in & academia & academia \& industry & industry & none & $\Sigma$\\
\hline\hline
academia & 69 & 156 & 17 & 166 & 408\\
academia \& industry & 94 & 221 & 20 & 227 & 562\\
industry & 6 & 32 & 3 & 46 & 87\\
none & 118 & 314 & 45 & 385 & 862\\
\hline
$\Sigma$ & 287 & 723 & 85 & 824 & 1919\\
\hline
\end{tabular}
\end{table}
To find the most influential nodes in $G_c$ we have used the \texttt{igraph} \texttt{R} package implementation of the Page Rank algorithm \citep{igraph} -- Fig.~\ref{fig:netc}B presents the 20 top-ranked nodes marked on an in-degree vs publication time plot. As expected, Page Rank (which, for a given node, is the higher the more high-Page-Rank nodes point towards it) generally promotes nodes representing earlier papers, as they tend to accumulate citations, reflected by the number of incoming links. In the following step, we have identified in $G_c$ 16 nodes that are cited in total 23 times by 7 different OECD and NESTA policy documents -- they are marked with orange circles in Fig.~\ref{fig:netc}B, their size representing the number of citations obtained from different policy documents.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{diagrams/fig_net_c.pdf}
\caption{Analysis of the XAI citation network $G_c$. A) XAI citation network with colour-coded nodes reflecting affiliation type (see legend); B) node in-degree $k_{in}$ vs publication time (as a log-linear plot is being used we plot $k_{in} + 1$ on the Y-axis); each black dot represents a single paper, the 20 most influential papers according to the Page Rank score are marked in green with numbers giving their rank, the 16 papers cited by OECD and NESTA policy documents are marked with orange circles, their size being proportional to the number of such references; C) a box-plot of 10000 samples, each obtained by randomly choosing nodes in $G_c$ (preserving the time constraints imposed by the citing policy documents) and summing their Page Rank scores; the blue circle reflects the actual sum of Page Rank scores obtained for the cited papers.}
\label{fig:netc}
\end{figure}
As nearly half of the cited nodes are among the most influential ones, this allows for setting a hypothesis stating that policy documents tend to point to important scientific papers rather than selecting articles not fully recognized in the field. To test this hypothesis, we define $PR$ as the sum of the Page Rank scores of 23 randomly selected nodes in $G_c$, keeping the time constraints imposed by the publication dates of the policy documents (i.e., if a given policy document was published in 2020, we take into account only arXiv papers published prior to that year). The results of 10000 repetitions are shown in Fig.~\ref{fig:netc}C in the form of a box-plot, as compared to the actual sum of Page Rank scores obtained for the cited papers (blue circle in Fig.~\ref{fig:netc}C), proving that the cited articles are, in fact, much more influential than a random set.
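The permutation test can be sketched as follows (here with \texttt{networkx} instead of \texttt{igraph}; \verb+cited+ and \verb+years+ are hypothetical data structures):
\begin{verbatim}
import random
import networkx as nx

def permutation_test(G, cited, years, n_rep=10000):
    # G: directed citation graph; years: dict node -> publication year;
    # cited: list of (policy_year, node) pairs actually referenced
    pr = nx.pagerank(G)
    observed = sum(pr[v] for _, v in cited)
    samples = []
    for _ in range(n_rep):
        total = 0.0
        for policy_year, _ in cited:
            pool = [v for v in G if years[v] < policy_year]
            total += pr[random.choice(pool)]   # time-constrained draw
        samples.append(total)
    p_value = sum(s >= observed for s in samples) / n_rep
    return observed, samples, p_value
\end{verbatim}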
\begin{figure}
\centering
\includegraphics[width=\textwidth]{diagrams/fig_net_b.pdf}
\caption{Aspects of XAI bibliographic coupling network. A) Explanation of the bibliographic coupling: two papers (here denoted as 1 and 5) cite, respectively, articles 2,3,4,8,9,10 and 7,6,8,9,10, only three papers out of 8 overlap, thus $w_{15}=3/8$; B) Histogram of the link weight $w_{ij}$ in $G_b$; C) Size of the giant component $N^S_b(\theta)$ normalized by the total number of nodes $N_b(\theta)$ for a given threshold value $\theta$; D)--H) Bibliographic coupling networks for different values of threshold $\theta$, respectively, 0.2, 0.25, 0.3, 0.35, 0.5. Node colors represent affiliation type as in Fig. \ref{fig:netc}A. The size of the node scales logarithmically with the Page Rank score obtained for the smaller set analysis, and the node shape informs if the paper has been cited by a document (rectangle) or not (circle); I) Percentage of homogeneous links academia--academia (blue) and academia \& industry--academia \& industry (violet) versus threshold $\theta$.}
\label{fig:netb}
\end{figure}
\subsection{XAI bibliographic coupling network}
The graph described in the previous section represents actual links among arXiv papers. However, such a network is usually sparse and comes with a simple binary answer: either a paper is cited or not. To modify this approach we chose the so-called bibliographic coupling introduced by \cite{Kessler1963}, which is simply the Jaccard index of the out-neighborhoods $n_i^{out}$ and $n_j^{out}$ of two papers $i$ and $j$ \citep{Steinert2016}, i.e.,
\begin{equation}
w_{ij} = \frac{|n_i^{out} \cap n_j^{out}|}{|n_i^{out} \cup n_j^{out}|}.
\end{equation}
The idea of bibliographic coupling is depicted in Fig.~\ref{fig:netb}A: in this way, we can take into account all information available in the references of each arXiv paper, unlike in the previous case. Additionally, we deal with a network where each link connecting nodes $i$ and $j$ is characterized with a weight $w_{ij} \in [0;1]$ reflecting the similarity between two papers. The resulting bibliographic coupling undirected graph $G_b$ consists of $N_b=725$ nodes and $E_b=85598$ edges (in fact there should be $262450$ edges but we omit those carrying $w_{ij}=0$) with a weight distribution roughly following an exponential function (see Fig. \ref{fig:netb}B). It follows that we are now able to use the concept of the {\it weight threshold} \citep[e.g.,][]{Chmiel2007} by keeping only such edges $w_{ij}$ whose weights fulfill the condition $w_{ij} \ge \theta$, where $\theta$ is a threshold parameter and $\theta \in [0,1]$. Such a procedure simply transforms the weighted graph $G_b$ into a set of unweighted networks $G_b(\theta)$, each constructed for a given parameter $\theta$, that are then subject to further analysis \citep{Sienkiewicz2018}. In particular, for some specific (critical) value of $\theta = \theta_c$ the network, initially percolated (i.e., constructed in such a way that it is possible to arrive from any node $i$ at any other node $j$), breaks down into several small components. To track this phenomenon in a quantitative way, for each $G_b(\theta)$ we calculate the size of its giant component $N^S_b(\theta)$ (i.e., the largest cluster in the network) and divide it by the relevant graph size $N_b(\theta)$. Figure~\ref{fig:netb}C allows localizing the breakdown at roughly $\theta_c \approx 0.25$, which can be visualized by a set of graphs in Fig.~\ref{fig:netb}D--H that not only reflect this process but also present other properties of the network such as affiliation (node color), importance (node size) or relation to policy documents (node shape). By increasing $\theta$ we bring to the front the strongest connections in the network (e.g., Fig.~\ref{fig:netb}H), which tend to be in the majority homogeneous, as seen in Fig.~\ref{fig:netb}I where the shares of academia and academia \& industry in-links are plotted against $\theta$. Contrary to that, homogeneous industry links are seldom observed and not likely to survive the introduction of high thresholds.
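For completeness, a small sketch of computing the coupling weights and applying the threshold (using \texttt{networkx}) reads:
\begin{verbatim}
import networkx as nx

def bibliographic_coupling(refs):
    # refs: dict paper -> set of cited papers; returns weighted G_b
    G = nx.Graph()
    papers = list(refs)
    for i, u in enumerate(papers):
        for v in papers[i + 1:]:
            union = refs[u] | refs[v]
            if union:
                w = len(refs[u] & refs[v]) / len(union)  # Jaccard index
                if w > 0:
                    G.add_edge(u, v, weight=w)
    return G

def apply_threshold(G, theta):
    # keep only edges with w_ij >= theta, then report the giant component
    H = nx.Graph()
    H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                     if d["weight"] >= theta)
    comps = list(nx.connected_components(H))
    return H, max(comps, key=len) if comps else set()
\end{verbatim}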
The citation network analysis presented in Sect. \ref{sec.arxiv_cit_an} suggests that key players in the extracted arXiv papers network are recognized as relevant from the policy documents perspective. On the other hand, when we turn to bibliographic coupling graphs, we can spot the persistence of homogeneous links among academia-like nodes that overtake the graph when the weakest connections are filtered out.
\subsection{Deontic analysis}
In this section, we present an example of a deontic analysis of documents from the MAIR database. We processed both legal documents and scientific papers so as to extract Attributes, Aims, Deontics, and Objects from individual sentences.
Panel A in Fig.~\ref{fig:deontic-panels1} shows how often the analysis of legal documents and academic papers identified a particular word as an Object according to institutional grammar. Although the global sizes of both text corpora were comparable, we can find objects which were much more frequently identified in the case of scientific publications (\verb'model', \verb'method', \verb'explanation', \verb'agent', \verb'feature') as well as those which are much more frequently encountered in legal documents (\verb'sector', \verb'government', \verb'agency'). A particularly interesting situation concerns the words \verb'driver' and \verb'vehicle', which appear very often in regulations and other legal documents, but much less frequently in scientific publications. This may suggest that the topic of autonomous cars has a much stronger impact on the imagination of policymakers, since many legal documents are devoted to it. For the XAI research community, it is not a foreground topic.
Based on the frequency of occurrence, we identified eight objects that underwent further deontic analysis (agent, machine, human, ai, people, algorithm, user, system). For each object, we determined whether it is accompanied by a term from the can / shall / must group (sentences in which negations occurred were few in number and were excluded from this analysis). The normalized frequency of each deontic in relation to the object is then presented in panel B of Fig.~\ref{fig:deontic-panels1}. Normalizations were carried out separately for scientific articles and for legal documents and were intended to remove the effect of the different frequencies of deontics in each group of texts. The ternary plots show the relative frequency of each deontic with a given object for both scientific articles and legal texts. Interestingly, in the case of legal documents, the word "AI" occurred more frequently near the deontic "can", the words "agent" or "human" near the deontic "must", and the word "user" near the deontic "shall". Such a shallow analysis allows for orientation in the area of global attitudes towards specific objects. At the same time, we see that the same objects occur in other contexts in the case of scientific papers. The object "user" definitely occurs more frequently in the context of the deontic "can", as do "machine" and "agent". We can see that scientific articles emphasize capabilities more often than strategies do. At the same time, we observe an opposite trend for the word "AI", which in the case of scientific papers more often occurs with the deontic "shall".
Shallow global analysis of objects and deontics suggests which kinds of objects are interesting to analyze. Having selected interesting phrases, we can use word trees to show the context in which these phrases occur. Figure \ref{fig:deontic-panels2} shows word trees for selected phrases from research papers and legal documents. This type of interactive data mining allows for the analysis of well-defined questions; to identify interesting questions in the first place, institutional grammar is a useful tool.
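For illustration, the per-corpus normalization described above can be sketched as follows (a minimal Python sketch; representing the tagger output as (object, deontic) pairs, as well as all data, is a hypothetical placeholder):
\begin{verbatim}
from collections import Counter

DEONTICS = ("can", "shall", "must")

def deontic_profile(statements, obj):
    """One reading of the normalization: divide per-object deontic counts
    by the corpus-wide deontic counts, then renormalize to sum to one.

    `statements` is an iterable of (object, deontic) pairs extracted
    from a single corpus (papers or legal documents)."""
    corpus = Counter(d for _, d in statements if d in DEONTICS)
    with_obj = Counter(d for o, d in statements if o == obj and d in DEONTICS)
    ratios = {d: with_obj[d] / corpus[d] for d in DEONTICS if corpus[d]}
    total = sum(ratios.values()) or 1
    return {d: ratios.get(d, 0.0) / total for d in DEONTICS}

# Hypothetical example, giving coordinates suitable for a ternary plot:
# papers = [("user", "can"), ("user", "can"), ("user", "shall")]
# print(deontic_profile(papers, "user"))
\end{verbatim}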
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{diagrams/occurences_papers_strategies.png}
\includegraphics[width=0.49\textwidth]{diagrams/triangle_papers_strategies.pdf}
\caption{Panel A describes the frequency of occurrence of each Object in research articles vs strategies; only objects with more than 40 occurrences are shown. Panel B presents, for the selected eight objects, the normalized context in which they are found in scientific articles (red dots) and strategies (blue dots). For the objects 'human', 'machine', 'agent', we can see a shift from 'must' in strategies to 'can' in scientific articles. For the object 'user', we have a shift along the dimension 'shall'.}
\label{fig:deontic-panels1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{diagrams/word_tree_papers_strategies.png}
\caption{Contexts of selected words in the analysed texts. Panels B and D show the context from scientific articles and panels A and C from strategies.}
\label{fig:deontic-panels2}
\end{figure}
\clearpage
\section{Conclusions and Discussion}
The number of policy documents related to AI, such as strategies and regulations, is growing rapidly, and the number of research papers dedicated to interpretable and explainable AI is increasing at an even higher rate.
Literature related to the XAI field is divided into several polarised communities, ranging from advocates of solutions that explain any black-box model to researchers arguing that XAI should not be used for high-stakes decisions. Researchers from companies offering ML products present a different perspective than researchers who use these solutions and bear responsibility for errors in their work. The number and variety of these documents make it almost impossible to keep track of them continuously. Yet understanding the relationship between available methods and regulators' expectations is critical to implementing responsible AI.
This paper introduces a novel framework for the automated analysis of documents related to trustworthy AI. This system integrates a set of state-of-the-art solutions from the fields of natural language processing (NLP), institutional grammar (IG), and network analysis (NA).
Each of these solutions is used to enrich raw text documents with relevant meta-information.
In this work, we have also shown a collection of focused analyses that can be performed on enriched documents, allowing us to draw on deontic information, authors' affiliations, and the graph of references between documents.
As the interest in regulating XAI increases, it is essential to monitor how well the reception and understanding of XAI by policymakers align with the visions of XAI methods' creators. In the future, such a system can be used to contribute to better cooperation between XAI researchers and AI policymakers; for example, one could quickly assess whether public policies are strongly influenced by methods developed in papers written by specific opinion leaders.
\subsection{Limitations and future research}
Our system currently uses a very simplified version of the Institutional Grammar tagger and performs only a shallow analysis of documents. In future research, we would like to extend it by improving its ability to deeply analyze sentences with all their internal complexities. We would also like to be able to distinguish regulative statements from constitutive ones.
Another limitation of our system is the relevance of policy documents -- we gathered them from databases that are manually updated, which limits our ability to draw conclusions from the analysis. To overcome these limitations, we should gather documents directly from relevant websites. Moreover, to comprehensively analyze and understand the process of setting up regulations on XAI and AI, the system should also gather and process private companies' ethical guidelines on AI and even newspaper articles regarding XAI and AI. Such documents are often cited in policy documents and influence the formulation of rules on new technologies.
Let us also mention that the analysis of both types of networks is directly affected by the affiliation and reference extraction methods. As a result, as can be seen in Table~\ref{tab:tabc}, the affiliations of several nodes are labeled as ``none'', which introduces high uncertainty into the analysis of link homogeneity seen in Fig.~\ref{fig:netb}. Similarly, the references of XAI papers are limited to arXiv papers only, which can influence both the weight distribution and the relations among nodes in $G_b$. Future plans for the use of complex network analysis include identifying relation types among the papers based on the way they appear in the text \citep{Catalini2015} and examining different types of nodes' influence \citep{Lu2016} and citation measures \citep{Steinert2016}.
\begin{acknowledgements}
Work on this project is financially supported by the NCN Sonata Bis-9 grant 2019/34/E/ST6/00052 \\
We are grateful to Anna Wróblewska for helpful discussions, and to Hubert Baniecki, Tomasz Stanisławek, and Krzysztof Kowalczyk for providing feedback on an early version of this paper.
\end{acknowledgements}
\bibliographystyle{spbasic}
\begin{document}
\title{
MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence}
\author[2,1]{\orcidlink{0000-0002-3695-9809}Stanisław Giziński}
\author[2,1]{\orcidlink{0000-0002-9181-0126}Michał Kuźba}
\author[3]{\orcidlink{0000-0003-2664-2135}Bartosz Pieliński}
\author[4]{\\\orcidlink{0000-0003-2097-1499}Julian Sienkiewicz}
\author[5]{\orcidlink{0000-0003-0563-7855}Stanisław Łaniewski}
\author[1,2]{\orcidlink{0000-0001-8423-1823}Przemysław Biecek}
\affil[1]{ MI$^2$ Data Lab, Faculty of Mathematics and Information Science, Warsaw University of Technology}
\affil[2]{ Faculty of Mathematics, Informatics and Mechanics, University of Warsaw}
\affil[3]{ Faculty of Political Science and International Studies, University of Warsaw}
\affil[4]{ Faculty of Physics, Warsaw University of Technology }
\affil[5]{ Quantitative Psychology and Economics, Faculty of Economics, University of Warsaw}
\maketitle
\captionsetup[figure]{labelfont={bf},name={Fig.},labelsep=period, font=small}
\captionsetup[table]{labelfont={bf},name={Table},labelsep=period, font=small}
\input{content}
\end{document}
\section{Introduction}
Blockchain was first used in a peer-to-peer electronic cash system \cite{nakamoto2008bitcoin} to provide immutability through the chain data structure, consensus, and redundant storage. Clients can query and modify the state of the blockchain through full and light nodes. A full node keeps a complete blockchain ledger; it uses its resources to synchronize blockchain data, broadcast transactions, verify transactions, and update data in real time. A light node maintains only the block headers of the ledger and can validate transactions by interacting with full nodes. Since blockchain is an application protocol built on the Internet, clients rely on continuous network connectivity to interact with blockchain nodes \cite{cong2021dtnb}.
However, 2.9 billion people in the world have never used the Internet, and some of those who have used the Internet have only occasional access to it \cite{ituit}. Although blockchain technologies have developed rapidly in recent years, \textit{how to conduct trusted interactions between the blockchain and offline clients remains a key challenge}.
A feasible solution exploits satellites to relay offline transactions to blockchain nodes \cite{blockstream}\cite{spacechain}. Clients buy specific transmission devices to connect to satellites and initiate transactions. However, this approach has the following problems. \textit{First, Expensive Costs.} It requires blockchain solutions to cooperate with centralized satellite companies to provide access services, and the costs of these connections are passed on to clients. Moreover, clients must purchase expensive equipment to use the service. \textit{Second, Difficult Interoperability.} \cite{blockstream} and \cite{smartsmesh} build offline networks for Bitcoin and Ethereum, respectively. It is difficult to adapt such solutions to different blockchains quickly, and it is also impossible to achieve interoperability between them. \textit{Third, Limited Computation.} Bitcoin's blockchain alone is 385GB as of March 2022 \cite{statista}. On-chain data is becoming a valuable asset, yet offline clients currently cannot use on-chain data to perform complicated computations.
To deal with the challenges above, we propose BcMON, a blockchain middleware for offline networks. BcMON is composed of three components: the Offline Blockchain Service (OFBS), the Cross-chain Blockchain Service (CCBS), and the Computing Blockchain Service (CPBS). OFBS realizes the interaction between offline clients and the blockchain. Based on OFBS, CCBS realizes the interoperability between offline clients and multiple blockchains. Based on OFBS and CCBS, CPBS enables offline clients to perform complex queries and computations on the blockchains.
The main contributions of this paper are summarized as follows.
\begin{itemize}
\item We propose a blockchain middleware for offline networks. To the best of our knowledge, this is the first work on blockchain middleware for offline networks. We believe this is a timely study, as the blockchain is widely used in many scenarios.
\item We design SMS-based OFBS to reduce the costs of offline transactions accessing the blockchain. To protect the integrity of offline transactions, we propose a reliable interaction mechanism based on offline channels.
\item We propose CCBS to validate the authenticity of offline cross-chain transactions. Moreover, we propose a two-phase consensus to protect the atomicity and integrity of offline cross-chain transactions.
\item We propose CPBS to support offline clients to perform complex queries and computations on the blockchains. And we design a threshold signature-based interaction mechanism to help offline clients validate results.
\end{itemize}
The rest of the paper is organized as follows. Offline blockchain service design is introduced in Section \uppercase\expandafter{\romannumeral2}. Cross-chain blockchain service design is presented in Section \uppercase\expandafter{\romannumeral3}. Computing blockchain service design is elaborated in Section \uppercase\expandafter{\romannumeral4}. The proposed method is evaluated in Section \uppercase\expandafter{\romannumeral5}. Related work is discussed in Section \uppercase\expandafter{\romannumeral6}, and the paper is concluded in Section \uppercase\expandafter{\romannumeral7}.
\section{Offline Blockchain Service Design}
\label{ofbs-section}
This section mainly describes the architecture of BcMON, as shown in Figure \ref{architecture}.
\subsection{Architecture}
The architecture is divided into four layers: the user layer, channel layer, BcMON layer, and blockchain layer. And the BcMON layer includes OFBS, CCBS, and CPBS.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{img/architecture.pdf}
\caption{Architecture of BcMON}
\label{architecture}
\end{figure}
\paragraph{User Layer} The user layer is composed of offline clients in a weak communication environment. Offline clients send data packets containing transactions to the BcMON layer, which serves as their access point to the blockchain network.
Offline clients are required to own accounts on the blockchain before they can use the services supported by the blockchain middleware. Offline clients who have occasional opportunities to connect to the Internet can independently generate public keys, private keys, and account addresses. Offline clients who have no chance to connect to the Internet can entrust a trusted third party, such as a mobile phone operator, to escrow the public and private keys.
\paragraph{BcMON Layer} The BcMON layer consists of three components: OFBS, CCBS, and CPBS. The underlying network architecture of the three components is a peer-to-peer network, whose nodes are infrastructures with strong computing and storage capabilities, like cell towers.
OFBS receives SMS messages from offline clients and forwards them to the appropriate middleware according to the demand. If offline clients initiate transactions with a single blockchain, OFBS forwards the data packets to the specific blockchain; see details in Section \ref{ofbs_workflow}. If offline clients initiate transactions with multiple blockchains, OFBS forwards the data packets to CCBS, which performs the operations; see details in Section \ref{ccbs_workflow}. If offline clients initiate a transaction with complex computation on on-chain data, OFBS forwards the data packets to CPBS, which performs the operations; see details in Section \ref{cpbs_workflow}.
\paragraph{Channel Layer} The channel layer consists of virtual links between offline clients. Since offline clients have limited resources and some blockchains have limited throughput, frequent interactions with blockchains via OFBS are inefficient and waste energy. Therefore, OFBS sets up virtual links between offline clients, supported by smart contracts and SMS services. Transactions initiated over the virtual links are not immediately submitted to the blockchain; they are held until the number of transactions reaches the maximum buffer size or other conditions are met.
\paragraph{Blockchain Layer} The blockchain layer consists of multiple blockchains, each an independent decentralized network. Blockchains interact with offline clients through OFBS, while CCBS provides cross-chain interoperability and CPBS provides complex query and computing power.
\subsection{Workflow}
\label{ofbs_workflow}
Offline clients have little ability to connect to the Internet, which leaves them no chance to interact with the blockchains. This section introduces the workflow of SMS-based OFBS, which achieves interaction between offline clients and blockchains. OFBS is divided into on-chain and off-chain parts. For a better demonstration of OFBS, a sequence diagram is detailed in Fig. \ref{workflow_ofbs}. \textit{Challenge 1: If a tree falls in the forest and no one is around to hear it, does it make a sound \cite{poon2016bitcoin}?}
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in, height=4in]{process/process_1.pdf}
\caption{Workflow of OFBS}
\label{workflow_ofbs}
\end{figure}
\paragraph{Initialize} SMS service providers initialize infrastructures (like cell towers) as relay nodes of the peer-to-peer network. The initialization of the infrastructure includes bandwidth, buffer size, wakeup time, timeout, and other required information. Offline clients and relayers initialize private keys, public keys, account addresses, and certificates. The private key is used to sign transactions, the account address is used to uniquely identify clients and relayers, and the digital certificate is used to access OFBS and blockchains. Moreover, SMS service providers are required to deploy channel contracts in advance on the consortium blockchains.
The channel contracts are the intermediary that transfers on-chain state changes off-chain in order to reduce the frequency of interaction with the blockchain. The functions of the contract include opening, updating, and closing the channel. The channel used here is different from the state channel \cite{poon2016bitcoin}: the state channel requires clients to be online, while the channel of OFBS can help offline clients access blockchains.
\paragraph{OpenChannel} Offline clients package short messages according to the SMS protocol specification. The data packet includes the message content, destination number, encoding format, type, and other information, and the message content consists of the destination address, amount, signature, and timestamp. The data packet is sent to relayers.
In the traditional SMS process, after a relayer receives the data packet, it forwards it to the Short Message Center (SMC). The SMC then delivers it to the base station where the destination number is located, and that base station forwards it to the connected device.
In OFBS, the SMS-based data packet is forwarded to relay nodes. Since relay nodes form a peer-to-peer network, they need to execute consensus to maintain data integrity. Besides, the data packet is signed with the client's private key, so relay nodes cannot modify its content. After the relayers reach consensus, they invoke OPENCHANNEL(), and the data packet is written into the channel contract running on the blockchain for on-chain consensus. The channel contract escrows the balances of offline clients for subsequent transactions. After blockchain consensus, the channel contract emits an OpenChannel event. Relay nodes monitor events and call the results back to offline clients. See details in Algorithm \ref{contract}; a simple Python model of these state transitions is sketched after the algorithm.
\begin{algorithm}[!t]
\caption{{Channel Contract}}
\begin{algorithmic}[1]
\Function{OpenChannel}{client, relay, balance}
\State channel[relay] = \{relay,balance, nonce=0, txs,...\}
\State emit open channel event
\EndFunction
\State
\Function{UpdateChannel}{relay, balance, tx, serial}
\State channel[relay] = \{relay, balance, nonce, txs, ...\}
\State channel.Update += 1
\If{channel.Update == threshold}
\State Aggregate(channel[relay])
\State channel.Update = 0
\State emit update channel event
\EndIf
\EndFunction
\State
\Function{CloseChannel}{replayer, balance}
\If {Aggregate(channel[relay])}
\State refund and delete channel
\State emit close channel event
\EndIf
\EndFunction
\end{algorithmic}
\label{contract}
\end{algorithm}
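As a complement to Algorithm \ref{contract}, the following minimal Python model mirrors the contract's state transitions; it is an illustrative sketch rather than deployable chain code, and the field names are simplified assumptions:
\begin{verbatim}
class ChannelContract:
    """Python model of the channel contract's state transitions."""

    def __init__(self, aggregation_threshold):
        self.channels = {}                  # relay -> channel state
        self.threshold = aggregation_threshold

    def open_channel(self, client, relay, balance):
        # Escrow the client's balance for subsequent off-chain transactions.
        self.channels[relay] = {"relay": relay, "balance": balance,
                                "nonce": 0, "txs": [], "updates": 0}
        return "OpenChannelEvent"

    def update_channel(self, relay, balance, tx, serial):
        ch = self.channels[relay]
        ch["balance"], ch["nonce"] = balance, serial
        ch["txs"].append(tx)
        ch["updates"] += 1
        if ch["updates"] == self.threshold:
            ch["updates"] = 0               # aggregate and reset the counter
            return "UpdateChannelEvent"
        return None

    def close_channel(self, relay):
        # Refund the escrowed balance and delete the channel.
        refund = self.channels.pop(relay)["balance"]
        return ("CloseChannelEvent", refund)
\end{verbatim}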
\paragraph{Off-chain Interaction} Once the channel is opened by OFBS, offline clients can directly interact with each other without mutual trust. An offline client initiates an SMS packet signed with its private key to the relay nodes. Relay nodes execute consensus among themselves and select a leader. The leader broadcasts the message to the workers for votes. Nodes that receive the message insert it into the database and push it onto the queue. If the time since the last wakeup exceeds the timeout and the number of queued messages exceeds the buffer size, relay nodes submit the messages to the blockchain to update the channel state, and the wakeup time is reset to the current moment. See details in Algorithm \ref{interaction}; the buffering policy is also modeled in the Python sketch after the algorithm.
\begin{algorithm}[!t]
\caption{{Off-Chain Interaction}}
\begin{algorithmic}[1]
\State add modem(comport, baudrate, devid)
\State add worker(modem, buffer, wakeup, timeout)
\State add node(worker, mutex, requestPool, msgQueue)
\State deploy channel contract
\For {each node in nodes \textbf{in parallel}}
\State send SMS request(uuid, mobile, message)
\State execute consensus among base stations
\State worker.EnqueueMessage(message, insertToDB=true)
\If{wakeup $>$ timeout \& len(messages) $>$ buffer}
\State modem.SendSMS(mobile, message)
\State worker.EnqueueMessage(message, false)
\State proceed message to on-chain consensus
\State node.worker.wakeup = now
\EndIf
\EndFor
\end{algorithmic}
\label{interaction}
\end{algorithm}
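The buffering policy of Algorithm \ref{interaction} can be modeled in a few lines of Python (an illustrative sketch; the class and field names are simplified assumptions):
\begin{verbatim}
import time
from collections import deque

class Worker:
    """Queue messages and flush them to the chain only when both the
    timeout and the buffer-size conditions hold."""

    def __init__(self, buffer_size, timeout):
        self.queue = deque()
        self.buffer_size = buffer_size
        self.timeout = timeout
        self.wakeup = time.monotonic()

    def enqueue(self, message):
        self.queue.append(message)          # also inserted into the database

    def maybe_flush(self, submit_on_chain):
        if (time.monotonic() - self.wakeup > self.timeout
                and len(self.queue) > self.buffer_size):
            batch = list(self.queue)
            self.queue.clear()
            submit_on_chain(batch)          # proceed to on-chain consensus
            self.wakeup = time.monotonic()  # reset the wakeup time
\end{verbatim}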
Since offline clients cannot directly query and modify the balance on the chain, it is hard for them to check the validity of transactions. Therefore, relay nodes need to ensure that the transfer account has a sufficient balance to cover the transfer amount, check the validity of the nonce to prevent replay attacks, and validate the signature of each transaction.
\paragraph{Update Channel} OFBS sets the buffer pool and wakeup timeout to reduce the cost of frequent interaction with blockchains. If the update condition is met, the queued messages are packed by relay nodes and written into the blockchain. We provide an on-chain aggregation method, while the off-chain aggregation method is described in Section \ref{ccbs_workflow}. Since we exploit consortium blockchains that provide high throughput and storage capabilities, reducing on-chain consumption costs is no longer our goal. Relay nodes invoke UPDATECHANNEL(), update the balances of offline clients, and emit an UpdateChannel event. Relay nodes monitor events and call the results back to offline clients. See details in Algorithm \ref{contract}.
We suppose there are $\mathcal N$ relay nodes, of which $\mathcal F$ are malicious, and that the aggregation threshold is $\mathcal T$. We use $\mathcal X$ as a random variable denoting the number of malicious nodes among the $\mathcal T$ selected aggregators. The probability that at least $x$ of the selected aggregators are malicious, and hence that the system is faulty once $x$ exceeds the tolerance of the consensus, is as follows.
\begin{equation}
P[\mathcal X \geq x] = \sum_{\mathcal X=x}^{\mathcal F} \frac{C_{\mathcal N - \mathcal F}^{\mathcal T - \mathcal X}\, C_{\mathcal F}^{\mathcal X}}{C_{\mathcal N}^{\mathcal T}}
\end{equation}
\paragraph{Close Channel} When offline clients decide to stop the quick interaction with other clients, they can choose to close the channel. An offline client initiates an SMS request signed with its private key to the relay nodes. Relay nodes execute consensus and forward the message for on-chain consensus. Relay nodes invoke CLOSECHANNEL() to submit the latest off-chain states and aggregate them; the contract then refunds the balances, deletes the channel, and emits a CloseChannel event. After monitoring the events, relay nodes call the results back to offline clients.
\section{Cross-chain Blockchain Service Design}
\label{ccbs_workflow}
Section \ref{ofbs-section} elaborates how offline clients interact with the blockchain. Next, we introduce CCBS to validate the authenticity of offline cross-chain transactions and propose a two-phase consensus to protect atomicity and integrity. \textit{Challenge 2: How do offline clients interact with Hyperledger Fabric and Xuperchain at the same time?}
\subsection{Overview}
\paragraph{CCBS} CCBS is a blockchain middleware for cross-chain transactions. It builds on OFBS to help offline clients forward cross-chain transactions and receive callbacks. CCBS interacts with the blockchain by monitoring proxy contracts deployed on the blockchain, including SourceContract and DestContract. CCBS has three types of nodes. Relay nodes are responsible for off-chain consensus and aggregation. The leader node, selected from the relay nodes, monitors, broadcasts, and aggregates cross-chain transactions. In addition to the duties of relay nodes, monitoring nodes act as regulators of the blockchains; these nodes can access real-time transactions for auditing.
\paragraph{Destination Blockchain (DestChain)} DestChain is the destination of cross-chain transactions. A cross-chain transaction can involve more than one DestChain. Since CCBS exploits contracts as interfaces, it can redirect a single cross-chain transaction to many blockchains. Moreover, a DestChain can also act as a SourceChain, depending on the relative relationship between blockchains.
\paragraph{Proxy Contract (SourceContract and DestContract)} The proxy contract is an interface for CCBS. For DestChain, the proxy contract is DestContract. Since one cross-chain transaction can be forwarded to multiple blockchains, there may be more than one DestContract deployed in blockchains. For SourceChain, the proxy contract is SourceContract. SourceContract receives cross-chain transactions, pushes them to the pending queue, and deletes them after the callback.
\paragraph{Source Blockchain (SourceChain)} SourceChain is the source of cross-chain transactions. Offline clients first initiate a cross-chain request through OFBS to the SourceContract on the SourceChain. CCBS monitors the SourceContract and proceeds with the corresponding operations. CCBS also monitors the DestContract on the DestChain and calls the related results back to the SourceChain.
\subsection{Workflow}
The workflow of CCBS includes four main stages. For a better demonstration of CCBS, a sequence diagram is detailed in Fig. \ref{workflow_ccbs}. The main symbols used in this paper are shown in Table \ref{symbol}.
\begin{table}[!t]
\caption{MAIN SYMBOLS USED IN THIS PAPER}
\begin{center}
\resizebox{\linewidth}{!}{
\Huge
\begin{tabular}{|c|c|}
\hline
\textbf{Denotion} & \textbf{Description}\\
\hline
\hline
$event_{k}$& Events emitted by the proxy contracts\\
\hline
$sig$& Elliptic curve signatures\\
\hline
$apub_{th}$& Aggregated threshold public keys of all nodes\\
\hline
$subsig_{th}, subpub_{th}$ & Aggregated threshold signature and public key of a subset of nodes\\
\hline
$S_b, D_b,C_p$ & SourceChain, DestChain, and Proxy Contract \\
\hline
$R_{i}, L_j$& Relay and Leader in CCBS\\
\hline
$g_1,g_2$ &Generators of multiplicative cyclic groups $G_1$ and $G_2$ \\
\hline
$e$ &The bilinear map: $G_1 \times G_2 \rightarrow G_3$ \\
\hline
$bsk_i,bpk_i, bsig_i$& The private key, public key and signature of nodes in Bilinear Aggregate Signature\\
\hline
$m,H$ & The message and hash function of messages\\
\hline
\end{tabular}}
\label{symbol}
\end{center}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in, height=4in]{process/process_2.pdf}
\caption{Workflow of CCBS}
\label{workflow_ccbs}
\end{figure}
\paragraph{Initialize} Offline clients initialize their public key, private key, account, and certificate from blockchains or OFBS. Relay nodes $R_i$ of CCBS initialize the private key $bsk_i$ and public key $bpk_i$ of the Bilinear Aggregate Signature, and other required keys. The Proxy Contract $C_p$ should be deployed in advance on the involved blockchains. Each $R_i$ broadcasts $bpk_i$ to the other relay nodes in order to aggregate $apub_{th}$. The aggregated public key $apub_{th}$ is written into $C_p$ in advance and is used to verify the authenticity of results from the $R_i$.
\paragraph{Request Stage} An offline client initiates a cross-chain transaction $request_k\mbox{(from, to, amount, data, ...)}$ through OFBS. If OFBS determines that the type of $request_k$ is a cross-chain transaction, it forwards $request_k$ to CCBS. CCBS first invokes CROSSQUERY() to write $request_k$ into the SourceContract of the SourceChain $C_p^{S_b}$. The $request_k$ is added to the pending queue, and the contract emits a RequestEvent.
\paragraph{Off-chain Stage} Relay nodes $R_i$ elect a leader $L_j$, and $L_j$ listens to the RequestEvent of $C_p^{S_b}$. Once $L_j$ observes a non-empty message, it extracts $request_k$ from the RequestEvent and broadcasts $request_k$ to the other $R_i$ for the off-chain consensus.
\begin{algorithm}[!t]
\caption{{Workflow}}
\begin{algorithmic}[1]
\Require $request_{0...n}$
\State $L_j \gets$ select from $R_i$ in consensus
\State CROSSQUERY($request_{k}$) $\gets$ OFBS
\State RequestEvent $\leftarrow $ Triggered by $C_p^{S_b}$
\State $L_j$ listen to RequestEvent
\For{each $request_{k} \in request_{0...n}$ \textbf{in parallel}}
\State $L_j \rightarrow$ HANDLEREQUEST(payload)
\For{$R_i$ \textbf{in parallel}}
\State $R_i\rightarrow$ HANDLEPREPARE()
\State $R_i\rightarrow$ HANDLEAGGREGATE()
\State $R_i\rightarrow$ HANDLEREPLY()
\EndFor
\State $L_j \rightarrow$ CROSSACCEPT() in $C_p^{D_b}$
\State $L_j \rightarrow$ CROSSCALLBACK() in $C_p^{S_b}$
\EndFor
\end{algorithmic}
\label{negotiation}
\end{algorithm}
The off-chain consensus process is divided into four stages: \textit{Request, Prepare, Aggregate, and Reply}. Algorithm \ref{oracle_consensus} shows the specific process, and a toy numerical illustration of the aggregation algebra is given after the list below. It should be noted that the following process can be adapted to multiple DestChains, as shown in Fig. \ref{workflow_ccbs}; for convenience, we only show a single DestChain in the following.
\begin{algorithm}[!t]
\caption{{Off-chain Stage}$\quad\triangleright \textrm{Run on CCBS}$}
\begin{algorithmic}[1]
\Function{HandleRequest}{payload}
\State $request_k \gets $ RequestEvent
\For{each $request_{k} \in request$ \textbf{in parallel}}
\State prepareMsg $\gets$ $request_k$
\State Broadcast(Sign(prepareMsg, $sk_l$))
\EndFor
\EndFunction
\State
\Function{HandlePrepare}{payload}
\For{$R_i \in$ Relay Nodes \textbf{in parallel}}
\If{Verify(prepareMsg,$sig_{l}, pk_{l}$)}
\State aggregateMsg $\gets$ Params($request_k$)
\State Broadcast(Sign(aggregateMsg, $bsk_i$)) to $R_i$
\EndIf
\EndFor
\EndFunction
\State
\Function{HandleAggregate}{payload}
\For{$R_i \in$ Relay Nodes \textbf{in parallel}}
\State collect $bsig_i$ of aggregateMsg
\If{$e(g_1,bsig_i) == e(bpk_i, H)$}
\State log.append(aggregateMsg, $bsig_i$, $bpk_i$)
\EndIf
\State $ subsig_{th} \gets \prod_{i=1}^{w}bsig_i, subpub_{th} \gets \prod_{i=1}^{w}bpk_i$
\EndFor
\EndFunction
\State
\Function{HandleReply}{payload}
\State provide ($request_k, subsig_{th}, subpub_{th}$)
\State invoke CROSSACCEPT() of $C_p^{D_b}$
\EndFunction
\end{algorithmic}
\label{oracle_consensus}
\end{algorithm}
\begin{itemize}
\item \textit{Request:} $L_j$ executes HANDLEREQUEST() and broadcasts prepareMsg to the $R_i$. $L_j$ signs prepareMsg with its private key so that the $R_i$ can verify its authenticity with the public key of $L_j$.
\item \textit{Prepare:} $R_i$ executes HANDLEPREPARE() to verify prepareMsg from $L_j$. If verified, $R_i$ signs aggregateMsg with $bsk_i$ and broadcasts aggregateMsg to the other $R_i$, as shown in \eqref{prepare-1}-\eqref{prepare-3}.
\begin{equation}
g_1^{bsk_i} \rightarrow bpk_i
\label{prepare-1}
\end{equation}
\begin{equation}
H(m) \rightarrow H, H^{bsk_i} \rightarrow bsig_i \in G_2
\label{prepare-2}
\end{equation}
\begin{equation}
e(g_1,bsig_i) = e(g_1^{bsk_i},H)=e(bpk_i, H)
\label{prepare-3}
\end{equation}
\item \textit{Aggregate:} $R_i$ collects the $bsig_i$ of aggregateMsg and verifies their authenticity through $bpk_i$. If $R_i$ receives enough aggregateMsgs with the same result, $R_i$ executes HANDLEAGGREGATE() to aggregate the $bsig_i$ and obtain the aggregated signature $subsig_{th}$ and public key $subpub_{th}$, as shown in \eqref{agg-1}-(\ref{agg-3}).
\begin{equation}
H_i=H(m_i),i=1,2,…,w
\label{agg-1}
\end{equation}
\begin{equation}
subsig_{th} = \prod_{i=1}^{w}bsig_i
\label{agg-2}
\end{equation}
\begin{equation}
e(g_1,subsig_{th})=\prod_{i=1}^{w}e(g_1^{bsk_i},H_i)
\label{agg-3}
\end{equation}
\item \textit{Reply:} $R_i$ executes HANDLEREPLY() to invoke CROSSACCEPT() of the DestContract on the DestChain $C_p^{D_b}$, providing $request_k$, $subsig_{th}$ and $subpub_{th}$ to $C_p^{D_b}$.
\end{itemize}
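In practice the checks in \eqref{prepare-1}-\eqref{agg-3} require a pairing-friendly curve (e.g., BLS12-381). Purely to illustrate the bookkeeping of the Aggregate step, the following toy Python sketch represents every group element by its discrete logarithm, so the ``pairing'' reduces to modular multiplication of exponents; it is insecure by construction and merely stands in for a real BLS library:
\begin{verbatim}
import hashlib

P = 2**61 - 1  # toy prime modulus; real BLS uses pairing-friendly curves

def H(message: bytes) -> int:
    """Hash a message to a toy group element (kept as its exponent)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % P

def sign(h: int, sk: int) -> int:
    # bsig_i = H^{bsk_i}; with exponents, exponentiation becomes h * sk.
    return (h * sk) % P

def aggregate(sigs):
    # subsig_th = prod(bsig_i); a product of group elements adds exponents.
    return sum(sigs) % P

def verify_aggregate(agg_sig, sks, hs):
    # e(g1, subsig_th) == prod e(bpk_i, H_i); in this toy model bpk_i is
    # represented by its exponent bsk_i, so both sides are sums of sk * h.
    return agg_sig == sum((sk * h) % P for sk, h in zip(sks, hs)) % P

# sks = [11, 22, 33]; h = H(b"request_k")
# sigs = [sign(h, sk) for sk in sks]
# assert verify_aggregate(aggregate(sigs), sks, [h] * len(sks))
\end{verbatim}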
\begin{algorithm}[!t]
\caption{{Proxy Contract}$\quad\triangleright \textrm{Run on blockchains}$}
\begin{algorithmic}[1]
\Function{CROSSQUERY}{payload}
\State Accept $request_{k}$ from OFBS
\State $apub_{th} \gets$ constructor()
\State $pending[req] \gets (request_{k}, apub_{th})$
\State Emit RequestEvent
\EndFunction
\State
\Function{CROSSACCEPT}{payload}
\State Accept $(request_{k}, subsig_{th}, subpub_{th}, mask)$ from $R_i$
\State $apub_{th} \gets$ constructor()
\State $\mbox{multiSig} \gets \mbox{bls.Multisig}(subsig_{th}, subpub_{th}, mask) $
\If{$\mbox{multiSig.Verify}(apub_{th}, request_k)$}
\State Accept $request_{k}$ and modify state
\EndIf
\State Emit \textit{AccpetEvent}
\EndFunction
\State
\Function{CROSSCALLBACK}{payload}
\State Accept $(result_{k}, subsig_{th}, subpub_{th}, mask)$ from $R_i$
\State $(request_{k}, apub_{th}) \gets pending[req]$
\State $\mbox{multiSig} \gets \mbox{bls.Multisig}(subsig_{th}, subpub_{th}, mask) $
\If{$\mbox{multiSig.Verify}(apub_{th}, result_k)$}
\State Delete $request_{k}$ from $pending[req]$
\EndIf
\State Emit \textit{CallbackEvent}
\EndFunction
\end{algorithmic}
\label{proxy_contract}
\end{algorithm}
\paragraph{Execution Stage} $C_p^{D_b}$ exploits the aggregated public key of all relay nodes, $apub_{th}$, to verify $(request_k, subsig_{th}, subpub_{th})$, as shown in \eqref{agg-4}-(\ref{agg-5}). The mask indicates which $R_i$ actually signed. If the verification passes, $C_p^{D_b}$ executes the operations and emits an AcceptEvent.
\begin{equation}
\mbox{multiSig} \gets \mbox{bls.Multisig}(subsig_{th}, subpub_{th}, mask)
\label{agg-4}
\end{equation}
\begin{equation}
\mbox{multiSig.Verify}(apub_{th}, request_k)
\label{agg-5}
\end{equation}
\paragraph{Callback Stage} $L_j$ listens to the AcceptEvent of $C_p^{D_b}$ and extracts the callback result $result_k$ from $event_k$. Then $L_j$ broadcasts $result_k$ to the $R_i$ and the off-chain stage is executed again. $R_i$ executes HANDLEREPLY() to invoke CROSSCALLBACK() of $C_p^{S_b}$, providing $result_k$, $subsig_{th}$ and $subpub_{th}$ to $C_p^{S_b}$. $C_p^{S_b}$ verifies the authenticity of $result_k$, deletes $request_k$ from the pending queue, and emits a CallbackEvent. OFBS listens to $event_k$ and responds with $result_k$ to the offline client.
The above process is shown in Algorithms \ref{negotiation}, \ref{oracle_consensus} and \ref{proxy_contract}, and the request lifecycle enforced by the proxy contract is modeled in the Python sketch below.
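To summarize the lifecycle enforced by the proxy contract, the following Python model tracks a request through CROSSQUERY(), CROSSACCEPT(), and CROSSCALLBACK(); signature verification is abstracted into a callable, and all names are illustrative rather than actual contract code:
\begin{verbatim}
class ProxyContract:
    """Model of the proxy contract's request lifecycle."""

    def __init__(self, verify_multisig):
        self.pending = {}               # req id -> pending request payload
        self.verify = verify_multisig   # checks (payload, subsig, subpub, mask)

    def cross_query(self, req, request_k):
        self.pending[req] = request_k
        return "RequestEvent"

    def cross_accept(self, request_k, subsig, subpub, mask):
        if self.verify(request_k, subsig, subpub, mask):
            # accept the request and modify the destination-chain state
            return "AcceptEvent"
        return None

    def cross_callback(self, req, result_k, subsig, subpub, mask):
        if self.verify(result_k, subsig, subpub, mask):
            del self.pending[req]       # the cross-chain round trip is done
            return "CallbackEvent"
        return None
\end{verbatim}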
\section{Computing Blockchain Service Design (CPBS)}
\label{cpbs_workflow}
In this section, we introduce CPBS to help offline clients implement complex queries and computation on the blockchains. \textit{Challenge 3: How can offline clients analyze account activity using on-chain data from the past few months?}
\subsection{Overview}
\paragraph{CPBS} CPBS is a blockchain middleware implementing complex queries and computations for offline clients. CPBS is composed of multiple relay nodes and forms a peer-to-peer network. It builds on OFBS to forward tasks for offline clients and call back results, and on CCBS to call back results from multiple blockchains. Therefore, when offline clients specify a data source involving only one chain, only OFBS and CPBS need to be combined; otherwise, OFBS, CCBS, and CPBS must be combined to obtain results.
\textit{Single On-Chain Source} Offline clients initiate tasks to OFBS, and OFBS forwards them to CPBS. After the request, execute, aggregate, and reply stages, the relay nodes of CPBS call the results back to OFBS. Finally, OFBS calls the results back to the offline clients. See details in Section \ref{ocbs-workflow}.
\textit{Multiple On-Chain Sources} The process for multiple on-chain sources is similar to that for a single on-chain source. The difference is that when the data sources of a task involve multiple blockchains, relay nodes exploit CCBS to extract data from the multiple blockchains.
\paragraph{CompChain \& CompContract} CompContract $C_c$ is an interface to receive tasks from offline clients, while CompChain $C_b$ is used for executing CompContract. Relay nodes of CPBS monitor events of CompContract to see if there are unresolved tasks.
\paragraph{DestChain} DestChain is the data source specified by offline clients. Offline clients can specify multiple blockchains as data sources. Therefore, there may be more than one DestChain.
\paragraph{Offline device} Offline clients issue tasks through OFBS. For example, they can command CPBS to analyze specified account activity using on-chain data from the past few months.
\subsection{Workflow}
\label{ocbs-workflow}
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in, height=4in]{process/process_3.pdf}
\caption{Workflow of CPBS}
\label{workflow_cpbs}
\end{figure}
The workflow of CPBS includes four stages. For a better demonstration of CPBS, a sequence diagram is detailed in Fig. \ref{workflow_cpbs}.
\paragraph{Request} Offline clients initiate $task_k$ to OFBS. Since offline clients interact with OFBS through SMS, we can predefine some classic tasks so that offline clients can enter numbers to select tasks. For example, offline clients can enter number 1 to analyze account activities. Then OFBS forwards $task_k$ to CompContract $C_c$ and CompChain $C_b$ emits RequestEvent.
\paragraph{Execute} CPBS listens to the RequestEvent and broadcasts $task_k$ to the relay nodes $R_i$. $R_i$ obtains the data from the specified data source after parsing $task_k$; if the specified data source involves multiple blockchains, $R_i$ exploits CCBS to obtain the data. After $R_i$ obtains the data, it executes $task_k$ and outputs $result_k$. $R_i$ signs $result_k$ with $bsk_i$ and broadcasts $result_k$, $bsig_i$ and $bpk_i$ to the other $R_i$.
\paragraph{Aggregate} $R_i$ collects $result_k$, $bsig_i$ and $bpk_i$. If $R_i$ receives enough results, it aggregates them and obtains $subpub_{th}$ and $subsig_{th}$, as shown in \eqref{agg-1}-(\ref{agg-3}).
\paragraph{Reply} $R_i$ writes $result_k$, $subpub_{th}$ and $subsig_{th}$ into $C_c^{C_b}$. $C_c^{C_b}$ verifies the results, as shown in \eqref{agg-4}-(\ref{agg-5}). If verified, $C_c^{C_b}$ emits a CallbackEvent. OFBS listens to the CallbackEvent and responds with $result_k$ to the offline clients.
\section{Experimental Evaluations}
This section evaluates the performance of BcMON. First, we conduct a security analysis. Second, we evaluate the query performance of blockchain clients under poor network connections. Third, we evaluate the overhead of OFBS, CCBS, and CPBS.
\subsection{Security Analysis}
\paragraph{Illegal Transaction} Since offline clients have little chance to connect to the Internet, it is hard to determine whether an offline client has enough balance to pay the transfer amount, to verify the signatures of transactions, and to check that a transaction is not a replay. BcMON constructs three types of blockchain middleware to connect clients and blockchains, and the middleware is required to execute consensus when submitting results.
\paragraph{Modify transactions of offline clients} Relay nodes of OFBS may act maliciously and modify the transactions of offline clients. However, since transactions are signed by offline clients, a tampered transaction will not pass verification on the blockchain.
\paragraph{Ignore transactions of offline clients} Relay nodes of OFBS may intentionally ignore the transactions of some offline clients. In general, relay nodes are represented by cell towers, which have no incentive to ignore the transactions of offline clients. Should this nevertheless happen, offline clients can switch SMS providers to change relay nodes, or they can leave the network when an occasional Internet connection is available.
\paragraph{Update old states of OFBS} Since OFBS exploits the channel to reduce interactions with blockchains, relay nodes of OFBS may submit stale states to the channel contracts. To prevent this issue, OFBS uses on-chain aggregation for consensus among relay nodes, so the final state of the channel is always the latest one.
\paragraph{Atomicity and Consistency of CCBS} It is hard for offline clients to determine whether a transaction succeeds or fails on the blockchains, and they cannot ensure consistency across blockchains by themselves. Therefore, CCBS constructs a two-phase consensus to keep atomicity and consistency.
\paragraph{Correctness and completeness of CPBS} Malicious relay nodes of CPBS may return incorrect results. CPBS is different from an oracle \cite{mammadzada2019blockchain}, whose external data sources may yield different results: since the data sources of CPBS are blockchains, the data are deterministic. Therefore, correct results can be aggregated as long as a sufficient number of honest nodes return the same correct result, which verifies correctness and completeness.
\subsection{Query Performance Under Poor Connection}
\label{exp_query}
We exploited Xuperchain V3.10\footnote{https://github.com/xuperchain/xuperchain.git} to construct a local blockchain network. We queried accounts, blocks, and transactions through the blockchain client named xclient; each xclient holds a gRPC connection. Xuperchain is deployed on macOS Catalina 10.15.4, CPU 2.3 GHz Intel Core i5 with two cores, 16 GB 2133 MHz LPDDR3, 304.2 Mbit/s, and Go1.17.1. We initiated query requests concurrently through goroutines to calculate the throughput and total time. Besides, we adjusted the network connection parameters (device, latency $\tau$, bandwidth $\beta$, and packet loss $\iota$) through comcast\footnote{https://github.com/tylertreat/comcast}. Based on the above settings, we simulated four network states (DEFAULT, WIFI, EDGE, and GPRS) to evaluate the impact of network performance on the blockchain. The parameters are shown in Table \ref{parameters}.
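The measurement loop can be sketched as follows (a minimal Python sketch of the same idea; the query callable is a hypothetical placeholder for an xclient gRPC call):
\begin{verbatim}
import time
from concurrent.futures import ThreadPoolExecutor

def measure_qps(query, clients, max_count):
    """Issue max_count queries over `clients` workers.

    Returns (queries per second, total elapsed time); `query` stands in
    for whatever gRPC call is being benchmarked."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        list(pool.map(query, range(max_count)))
    elapsed = time.perf_counter() - start
    return max_count / elapsed, elapsed

# qps, total = measure_qps(query=lambda i: None, clients=10, max_count=120)
\end{verbatim}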
\begin{table}[!t]
\centering
\caption{Simulation Parameters}
\label{parameters}
\begin{tabular}{c|c}
\hline
Parameter & Value \\
\hline
\hline
DEFAULT & Default configuration\\
\hline
WIFI & $\tau$=40, $\beta$=30000, $\iota$=0.2 \\
\hline
EDGE & $\tau$=300, $\beta$=250, $\iota$=1.5 \\
\hline
GPRS & $\tau$=500, $\beta$=50, $\iota$=2 \\
\hline
Concurrent clients & DEFAULT=100, WIFI=20, EDGE=10, GPRS=5 \\
\hline
\multirow{4}{*}{MaxCount} & DEFAULT={[}1000,10000,1000{]} \\
& WIFI={[}40,200,40{]}, \\
& EDGE={[}40,120,20{]} \\
& GPRS={[}9,45,9{]} \\
\hline
\multirow{4}{*}{Throughput} & DEFAULT={[}100000,1000000,100000{]} \\
& WIFI={[}800,4000,800{]}, \\
& EDGE={[}400,1200,200{]} \\
& GPRS={[}45,225,45{]} \\
\hline
Target protocol & tcp,udp,icmp \\
\hline
Device & eth0 \\
\hline
\hline
\end{tabular}
\end{table}
The number of asynchronous query requests exceeds the QPS (Queries Per Second) by two orders of magnitude, avoiding the impact of boundary situations. Fig. \ref{xclient_network_throughput} and \ref{xclient_network_time} show the throughput and query time under different network conditions. The QPS attainable under DEFAULT (Fig. \ref{xclient_network_throughput} and Fig. \ref{xclient_network_time} (a)) is unbearable for WIFI, EDGE and GPRS (Fig. \ref{xclient_network_throughput} and Fig. \ref{xclient_network_time} (b) (c) (d)). Moreover, the network resources spent in querying blocks and accounts are about 50\% of those spent in querying transactions. In summary, under such network conditions blockchain technology cannot reach the vast majority of people worldwide.
\begin{figure}[!t]
\centering
\subfigure[DEFAULT]{\includegraphics[width=1.5in]{figure/xlientquery/tps_xclient_query.pdf}}
\subfigure[WIFI]{\includegraphics[width=1.5in]{figure/xclientnetwork/tps_xclient_query_40_30000.pdf}}
\subfigure[EDGE]{\includegraphics[width=1.5in]{figure/xclientnetwork/tps_xclient_query_300_250.pdf}}
\subfigure[GPRS]{\includegraphics[width=1.5in]{figure/xclientnetwork/tps_xclient_query_500_50.pdf}}
\caption{Throughput}
\label{xclient_network_throughput}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[DEFAULT]{\includegraphics[width=1.5in]{figure/xlientquery/time_xclient_query.pdf}}
\subfigure[WIFI]{\includegraphics[width=1.5in]{figure/xclientnetwork/time_xclient_query_40_30000.pdf}}
\subfigure[EDGE]{\includegraphics[width=1.5in]{figure/xclientnetwork/time_xclient_query_300_250.pdf}}
\subfigure[GPRS]{\includegraphics[width=1.5in]{figure/xclientnetwork/time_xclient_query_500_50.pdf}}
\caption{Query Time}
\label{xclient_network_time}
\end{figure}
\subsection{Overhead of OFBS}
\label{exp_ofbs}
The configuration related to Xuperchain is the same as in Section \ref{exp_query}. We exploited contract-sdk-go to implement the channel contract, and sqlite3 as the database for SMS. OFBS was deployed on macOS Catalina 10.15.4, CPU 2.3 GHz Intel Core i5 with two cores, 16 GB 2133 MHz LPDDR3, and 304.2 Mbit/s. We exploited PBFT \cite{castro1999practical} for the off-chain consensus of OFBS. The on-chain consensus of Xuperchain is the default SINGLE consensus. Since OFBS is blockchain middleware, its performance is independent of the particular on-chain consensus; moreover, Xuperchain supports pluggable consensus. Therefore, we chose the simplest on-chain consensus for the experiment.
We divided the experiments into an off-chain part and an on/off-chain part to evaluate the performance of OFBS. The off-chain part concerns OFBS alone: it evaluates the off-chain service time as the numbers of relay nodes and concurrent clients increase. Since the scenarios in which OFBS is applied are limited, we only used a small number of relay nodes, [4, 9, 1] (i.e., from 4 to 9 in steps of 1). The on/off-chain part includes both the off-chain part and the on-chain part: it evaluates the total service time for different numbers of relay nodes and concurrent clients and different contract methods.
Fig. \ref{chain-time} (a) shows the off-chain service time when the number of relay nodes is [4, 9, 1] and the number of concurrent clients is [10, 80, 10]. As can be seen from the figure, as the number of relay nodes increases, the average service time per transaction increases. The reason is that the off-chain SMS-based relay nodes need to access the database frequently. Fig. \ref{chain-time} (b) and (c) show the total service time when the number of relay nodes is [4, 9, 1], the number of concurrent clients is [10, 80, 10], and the contract methods are OpenChannel and UpdateChannel. The concurrent clients initiate OpenChannel and UpdateChannel requests to modify the state of the blockchain, and the relay nodes of OFBS forward the requests to the blockchain after the off-chain consensus. As can be seen from the figure, OpenChannel consumes more time than UpdateChannel. Moreover, combined with Fig. \ref{chain-time} (a), the service time is mainly spent processing on-chain transactions.
\begin{figure}[!t]
\centering
\subfigure[Off-Chain]{\includegraphics[width=1in]{figure/smsstatechannel/off-chain.pdf}}
\subfigure[OpenChannel]{\includegraphics[width=1in]{figure/smsstatechannel/OpenChannel.pdf}}
\subfigure[UpdateChannel]{\includegraphics[width=1in]{figure/smsstatechannel/UpdateChannel.pdf}}
\caption{Service Time}
\label{chain-time}
\end{figure}
\subsection{Overhead of CCBS}
The configuration of Xuperchain in OFBS is the same as in Section \ref{exp_ofbs}. We simulated CCBS in Ubuntu 16.04 and shared the resources of a 2.0 GHz 8-vCPU machine with 16 GB of memory and Go1.16.4 linux/amd64. We exploited homogeneous (FISCO BCOS and FISCO BCOS) and heterogeneous (Ethereum Ropsten and FISCO BCOS) blockchain pairs to support seamless interactions. FISCO BCOS was also deployed in Ubuntu 16.04 and configured with four nodes and one organization. We connected to Ethereum Ropsten through Infura. Unless otherwise stated, the number of relay nodes of CCBS is eight. We compared with Swap \cite{tian2021enabling} to demonstrate our efficiency, since \textit{Swap} also makes use of the proxy contract as a pivot.
Fig. \ref{Cross consensus time} shows the off-chain consensus time when the number of concurrent transactions [50, 300, 50] and relay nodes [4, 8, 1] increases. As can be seen from the figure, the off-chain consensus time of CCBS is reduced by about 30\% compared to Swap in both the homogeneous and heterogeneous groups. Moreover, the off-chain consensus time of CCBS in the two groups is almost the same, while that of Swap differs noticeably. The reason is that CCBS has fewer interactions with blockchains than Swap.
\begin{figure}[!t]
\centering
\subfigure[Heto of CCBS]{
\includegraphics[width=1.5in]{figure/crosschain/origin/heto-consensus.pdf}
}
\subfigure[Heto of Swap]{
\includegraphics[width=1.5in]{figure/crosschain/comparison/heto-consensus.pdf}
}
\subfigure[Homo of CCBS]{
\includegraphics[width=1.5in]{figure/crosschain/origin/homo-consensus.pdf}
}
\subfigure[Homo of Swap]{
\includegraphics[width=1.5in]{figure/crosschain/comparison/homo-consensus.pdf}
}
\caption{Consensus Time}
\label{Cross consensus time}
\end{figure}
Fig. \ref{Cross validate time} shows the validation time when the number of concurrent transactions [50, 300, 50] and relay nodes [4, 8, 1] increases. As can be seen from the figure, the validation time of CCBS is reduced by 25\% compared to Swap in both the homogeneous and heterogeneous groups, for the same reasons as above. Moreover, the different deployment approaches cause the time difference within CCBS, since the homogeneous groups were deployed locally while part of the heterogeneous groups were deployed in the cloud.
\begin{figure}[!t]
\centering
\subfigure[Heto of CCBS]{
\includegraphics[width=1.5in]{figure/crosschain/origin/heto-validate.pdf}
}
\subfigure[Heto of Swap]{
\includegraphics[width=1.5in]{figure/crosschain/comparison/heto-validate.pdf}
}
\subfigure[Homo of CCBS]{
\includegraphics[width=1.5in]{figure/crosschain/origin/homo-validate.pdf}
}
\subfigure[Homo of Swap]{
\includegraphics[width=1.5in]{figure/crosschain/comparison/homo-validate.pdf}
}
\caption{Validation Time}
\label{Cross validate time}
\end{figure}
Fig. \ref{Cross process time} shows the performance of CCBS under large-scale networks when the number of concurrent transactions is [10, 60, 10], the number of relay nodes is [10, 60, 10], and the data volumes are up to 10MB. It can be seen from the figure that CCBS performs well in large-scale networks.
\begin{figure}[!t]
\centering
\subfigure[(Node, Concurrent)]{
\includegraphics[width=1in]{figure/crosschain/add/node_request.pdf}
}
\subfigure[(Node, Size)]{
\includegraphics[width=1in]{figure/crosschain/add/node_size.pdf}
}
\subfigure[(Concurrent, Size)]{
\includegraphics[width=1in]{figure/crosschain/add/request_size.pdf}
}
\caption{Process Time}
\label{Cross process time}
\end{figure}
\subsection{Overhead of CPBS}
The configuration of Xuperchain in CPBS is the same as in Section \ref{exp_ofbs}. Fig. \ref{Latency of ocbs} shows the overhead of CPBS when the numbers of concurrent transactions and relay nodes are [10, 80, 10]. The blue curve in Fig. \ref{Latency of ocbs}(a) is the total overhead of the on-chain and off-chain interaction, and the green curve is the overhead of the off-chain interaction alone. Since the overhead of the off-chain interaction is very small, we amplify it by 500 times; the off-chain interaction thus accounts for only a tiny fraction of the total, reflecting the efficiency of CPBS. Fig. \ref{Latency of ocbs}(b) shows the overhead of CPBS when the data volume increases. It can be seen that the number of relay nodes has little effect on the latency of CPBS.
\begin{figure}[!t]
\centering
\subfigure[Interaction]{
\includegraphics[width=3cm]{figure/ocbs/time.pdf}
}
\subfigure[Data Size]{
\includegraphics[width=3cm]{figure/ocbs/consensus.pdf} }
\caption{Overhead}
\label{Latency of ocbs}
\end{figure}
\section{Related Work}
Blockchain has spawned much middleware to support different applications. This section briefly reviews the most related state-of-the-art, including 1) blockchain middleware for offline networks, 2) cross-chain middleware for offline networks, and 3) computing middleware for offline networks.
\subsection{Blockchain middleware for offline networks}
Yuntao W. \emph{et al.} \cite{wang2021disaster} proposed a lightweight blockchain-based collaborative framework for space-air-ground integrated networks, with a delegated proof-of-stake consensus to share spare computing resources. Ming F. \emph{et al.} \cite{feng2019msnet} proposed a blockchain-based satellite communication network to protect safety; this approach can quickly detect and defend against cyber attacks. Chakrabarti C. \emph{et al.} \cite{chakrabarti2019blockchain} proposed a blockchain-based incentive scheme for delay-tolerant networks to establish emergency communication networks. Kongrath S. \emph{et al.} \cite{suankaewmanee2018performance} proposed MobiChain, a blockchain for mobile commerce, which connects the blockchain with Sync Gateway \cite{ostrovsky2014synchronizing} via a local direct connection or an Internet connection. However, those solutions do not work in offline networks.
Blockstream \cite{blockstream} exploited satellites to broadcast the Bitcoin blockchain around the world for free. SpaceChain \cite{spacechain} is building a blockchain-based satellite network to deploy an Ethereum node. Kryptoradio \cite{kryptoradio} exploits DVB-T to broadcast blockchain data without relying on the Internet. However, these solutions require specific equipment for receiving the signals of a specific blockchain, which is expensive. Moreover, the above solutions cannot process cross-chain transactions or complex computations for offline clients across multiple blockchains.
\subsection{Cross-chain middleware for offline networks}
The mainstream cross-chain middleware includes Notary, Sidechain, Hash Locking, and Relay Chain \cite{buterin2016chain}.
Polkadot \cite{wood2016polkadot} proposed a relay-chain-based cross-chain framework. Cosmos \cite{kwon2018network} proposed that all blockchains share a cross-chain hub supported by Tendermint to complete data exchange. However, those solutions rely on a predefined system, which forces other blockchains to adapt to it. Moreover, not all blockchains can connect to the relay chain, since they need to bid for slots.
Hyperledger Cactus \cite{cactus2020} proposed a blockchain integration solution based on hash locking \cite{hashlock2019}; the solution encapsulates a gateway layer and the interaction between validators and blockchains. Lys, L. \textit{et al.} \cite{lys2020atomic} proposed an atomic cross-chain interaction scheme based on relayers and hash locking. Qi M. \textit{et al.} \cite{qi2020acctp} proposed a cross-chain transaction platform for high-value assets based on hash locking. However, those solutions are limited to transferring assets between blockchains, which restricts their applications.
Jin H. \textit{et al.} \cite{jin2018towards} proposed a passive cross-chain method based on monitor multiplexing reading, which monitors the state of the network through a listener. Zhuotao L. \textit{et al.} \cite{liu2019hyperservice} proposed a secure interaction protocol for cross-chain transactions. Pillai, B. \textit{et al.} \cite{pillai2020cross} proposed an agent-based cross-chain interoperability protocol, which implements three-phase interactions between users and blockchains. However, those solutions do not consider malicious nodes in the protocol. Rui H. \cite{han2021vassago} proposed a cross-chain query method to implement authentic provenance queries. Tian H. \textit{et al.} \cite{tian2021enabling} proposed a cross-chain asset transaction protocol based on validators and proxy contracts in Ethereum; however, this protocol elects validators based on proof of work, which leads to excessively long cross-chain transaction times. The above solutions all require clients to connect to the Internet, which is not suitable for offline clients.
\subsection{Computing Middleware for offline networks}
Harry K. \emph{et al.} \cite{kalodner2020blocksci} proposed BlockSci, a blockchain analysis platform based on an in-memory database. Weihui Y. \emph{et al.} \cite{yang2020ldv} proposed LDV, a method based on a directed acyclic graph and historical data pruning to reduce the storage overhead of blockchain. Xiaohai D. \emph{et al.} \cite{dai2020lvq} proposed LVQ, a lightweight verifiable query approach for Bitcoin, which is based on a Bloom filter integrated Merkle Tree, the Merkle Tree, and the Sorted Merkle Tree. Haotian W. \emph{et al.} \cite{wu2021vql} proposed VQL, an efficient and verifiable cloud query service for blockchain. Cheng X. \emph{et al.} \cite{xu2018query} proposed APP, an access-policy-preserving grid-tree query structure based on the Merkle Tree. Yijing L. \emph{et al.} \cite{lin2022novel} proposed a decentralized learning method based on oracles, which uses the data of off-chain producers to provide consumers with highly credible computing results. Muhammad M. \emph{et al.} \cite{muzammal2019renovating} proposed ChainSQL, a blockchain-based database system that simultaneously achieves data modification on the blockchain and the query speed of distributed databases. Saide Z. \emph{et al.} \cite{zhu2019zkcrowd} proposed zkCrowd, which distributes tasks through a DPoS-based public chain and dynamically executes tasks through a PBFT-based private subchain. Weilin Z. \emph{et al.} \cite{zheng2019nutbaas} proposed NutBaaS, a blockchain-as-a-service platform that lowers development thresholds through log-based real-time network monitoring. However, the above solutions all require clients to connect to the Internet, which is not suitable for offline clients.
\section{Conclusions and Future Works}
We introduce BcMON, including OFBS for offline clients to access the blockchain, CCBS for offline clients to access multiple blockchains, and CPBS for offline clients to perform complex on-chain computations. To the best of our knowledge, BcMON is the first blockchain middleware for offline networks. A prototype of BcMON has been implemented to evaluate the performance of the blockchain middleware. In future work, we will focus on modifying the network protocol to provide a more efficient offline network.
\section*{ACKNOWLEDGEMENT}
This work is supported by National Natural Science Foundation of China (62072049), BUPT Excellent Ph.D. Students Foundation (CX2021133) and BUPT Innovation and Entrepreneurship Support Program (2022-YC-A112).
\section{Introduction}
In \cite{I3} we considered an
inverse boundary value problem for the heat equation in one space
dimension. First we recall the problem. Let $a>0$ and $T>0$.
Let $u=u(x,t)$
be a solution of the problem:
$$\begin{array}{c}
\displaystyle
u_t=u_{xx}\,\,\mbox{in}\,]0,\,a[\times]0,\,T[,\\
\\
\displaystyle
u_x(a,t)=0\,\,\mbox{for}\,t\in\,]0,\,T[,\\
\\
\displaystyle
u(x,0)=0\,\,\mbox{in}\,]0,\,a[.
\end{array}
\tag {1.1}
$$
\noindent Then the problem considered therein is: extract $a$ from
a {\it single set} of the data $u(0,t)$ and $u_x(0,t)$ for $0<t<T$.
\noindent This is a simplest one space dimension version of the
problem of domain determination which is a typical inverse
boundary value problem for the heat equation and related to the
thermal imaging of unknown {\it discontinuity} such as cavity,
defect or inclusion inside a heat conductive body. There are
extensive studies for the uniqueness and stability issues of this
type of problems in multi dimensions. See \cite{BC, CRV} and
references therein for several results on the issues. However, in
our opinion, seeking an analytical formula that directly connects
information about discontinuity with the data also yields another
view for understanding the problems.
Recently, some
new analytical methods were introduced for inverse problems of this
type in which the governing equations are elliptic. In particular,
the {\it probe method} (\cite{I0, I4}) and {\it factorization
method} (\cite{K, K2}) gave ways of extracting
unknown discontinuity from the data that are given by
the Dirichlet-to-Neumann map in inverse boundary value problems;
the far field operator or the restriction of the
scattered fields onto a sphere surrounding unknown discontinuity
that are exerted by infinitely many point sources located on the
sphere in inverse obstacle scattering problems.
\noindent
These methods require infinitely many data.
By the way, the {\it enclosure method} also gives a way of extracting
the {\it convex hull} of unknown discontinuity from the
Dirichlet-to-Neumann map (\cite{Ie}). However, in some cases, using
the idea of the enclosure method, one can give a way of extracting
unknown discontinuity by a single set of the Dirichlet and Neumann
data (see \cite{I1, Ie2}). In particular, the result in
\cite{Ie2} gave a constructive proof of a uniqueness theorem
established in \cite{FI} and the numerical implementation of a
reconstruction algorithm of the convex hull of unknown polygonal
inclusion or cavity has been done in \cite{IO, IO2}. Thus it is
quite interesting whether the enclosure method still works for the
inverse problem for the heat equation mentioned above. In
\cite{I3} we found a simple extraction formula of $a$ by using the
idea of the enclosure method. Needless to say, this result gives a
new, constructive and simple proof of the uniqueness theorem: the
data $u(0,t)$ and $u_x(0,t)$ for $0<t<T$ uniquely determine $a$
(under a suitable condition on $u_x(0,t)$). However, it should be
pointed out that the result in \cite{I3} runs counter to past
experience in the study of inverse boundary value problems for
elliptic equations. Here we review the result and point out the
difference.
\noindent
Let $c$ be an arbitrary positive number.
Set
$$\displaystyle
z=-c\tau\left(1+i\sqrt{1-\frac{1}{c^2\tau}}\,\right),\,\,\tau>c^{-2}.
\tag {1.2}
$$
Let
$$\displaystyle
v(x,t)=e^{-z^2t}\,e^{xz}.
\tag {1.3}
$$
\noindent
Given $s\in\Bbb R$ we introduce the {\it indicator function}
$I_{c}(\tau;s)$:
$$\displaystyle
I_{c}(\tau;s)
=e^{\tau s}
\int_0^T\left(-v_x(0,t)u(0,t)+u_x(0,t)v(0,t)\right)dt,\,\,\tau>c^{-2}
$$
where $u$ satisfies (1.1) and $v$ is the function given by (1.3).
\noindent
Assume that we know a positive
number $M$ such that $M\ge a$. Let $c$ be an arbitrary positive
number satisfying $2Mc<T$. Assume that $u_x(0,t)=1$ (note that this
is just for simplicity of description). Then we have the formula
$$\displaystyle
\lim_{\tau\longrightarrow\infty}
\frac{\displaystyle\log\vert I_{c}(\tau;0)\vert}
{\tau}
=-2ca.
\tag {1.4}
$$
and the following statements are true:
if $s\le 2ca$, then
$\displaystyle\lim_{\tau\longrightarrow\infty}\vert I_{c}(\tau;s)\vert=0$;
if $s>2ca$, then
$\displaystyle\lim_{\tau\longrightarrow\infty}\vert I_{c}(\tau;s)\vert=\infty$.
\noindent
Why is this interesting? The reason consists of two points.
The first point is an unexpected asymptotic behaviour of the
indicator function. More precisely,
integration by parts gives
$$
\displaystyle
I_{c}(\tau;s)
=-e^{\tau s}\int_0^Tu(a,t)v_x(a,t)dt+O(e^{-\tau(T-s)}).
$$
The function $e^{\tau s}v$ has the special character
\noindent
$\bullet$ if $s<cx+t$, then $\lim_{\tau\longrightarrow\infty}e^{\tau s}\vert v(x,t)\vert=0$
\noindent
$\bullet$ if $s>cx+t$, then $\lim_{\tau\longrightarrow\infty}e^{\tau s}\vert v(x,t)\vert=\infty$.
\noindent Therefore if $T>ca>s$, then $\vert I_c(\tau;s)\vert$ is
exponentially decaying as $\tau\longrightarrow\infty$. Since
$e^{\tau s}\vert v_x(a,t)\vert$ is exponentially growing in the
case when $T>s>ca$ and $0<t<s-ca$, one usually expects $\vert
I_c(\tau;s)\vert$ to grow exponentially as
$\tau\longrightarrow\infty$. Past experience then suggests that
the right hand side of (1.4) should give $-ca$ which has the
meaning:
$$
-ca=\sup\{\left(\begin{array}{c}\displaystyle x
\\
\displaystyle
t\end{array}\right)\cdot
\left(\begin{array}{c}\displaystyle -c\\
\displaystyle
-1\end{array}\right)\,\vert\,(x,t)\in\,]a,\,\infty[\,\times\,]0,\,T[\}.
$$
This is nothing but the value of the support function for the unknown
domain $]a,\,\infty[\,\times\,]0,\,T[$ at the direction $(-c,\,-1)^T$.
However, in fact, we obtained $-2ca$ which has the meaning:
$$
-2ca=\sup\{\left(\begin{array}{c}\displaystyle x\\
\displaystyle
t\end{array}\right)\cdot
\left(\begin{array}{c} \displaystyle
-c\\
\displaystyle
-1\end{array}
\right)\,\vert\,(x,t)\in\,]2a,\,\infty[\,\times\,]0,\,T[\}.
$$
Since the $u$ of (1.1) can be extended as a solution of the heat
equation onto $]a,\,2a[$ by the reflection $x\longmapsto 2a-x$,
one may think that this is because of the simple Neumann boundary
condition at $x=a$. However, in \cite{I3} we saw the same
phenomenon to the case of the Robin boundary condition at $x=a$.
So we guess that this is a {\it universal phenomenon}. Based on this
belief, therein we gave another interpretation of
$2ca$. It is the {\it travel time} of a {\it virtual} signal with an arbitrary
fixed propagation speed $1/c$ that starts at the known boundary
$x=0$ and the initial time $t=0$, reflects at another unknown
boundary $x=a$ and returns to the original boundary $x=0$.
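\noindent
Since $\mbox{Re}\,z=-c\tau$ and $\mbox{Re}\,z^2=\tau$ exactly, one has the identity $e^{\tau s}\vert v(x,t)\vert=e^{\tau(s-cx-t)}$, which is the source of the dichotomy above. The following short Python sketch (assuming only NumPy; the parameter values are ours and purely illustrative) checks the identity on both sides of the line $s=cx+t$.
\begin{verbatim}
# Check that exp(tau*s)*|v(x,t)| = exp(tau*(s - c*x - t)), which is the
# source of the decay/growth dichotomy across the line s = c*x + t.
import numpy as np

c, tau = 1.0, 50.0
z = -c*tau*(1.0 + 1j*np.sqrt(1.0 - 1.0/(c**2*tau)))     # (1.2)
v = lambda x, t: np.exp(-z**2*t)*np.exp(x*z)            # (1.3)

for x, t, s in [(0.3, 0.2, 0.4), (0.3, 0.2, 0.6)]:      # s < c*x+t, s > c*x+t
    lhs = np.exp(tau*s)*abs(v(x, t))
    rhs = np.exp(tau*(s - c*x - t))
    print(s > c*x + t, lhs, rhs)  # equal; small in the first case, large in the second
\end{verbatim}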
The second point is: needless to say, the formula (1.4) yields a new, simple, constructive proof
of the uniqueness theorem.
$\bullet$ Let
$T$ be a fixed arbitrary positive number.
For $j=1,2$ let $u_j$ satisfy (1.1) with $a=a_j(>0)$ and
$(u_j)_x(0,t)=1$ for all $t\in\,]0,\,T[$.
If $u_1(0,t)=u_2(0,t)$ for all $t\in\,]0,\,T[$, then
$a_1=a_2$.
\noindent
A standard proof of this type of uniqueness theorem
(see, e.g., \cite{BC})
proceeds by contradiction
and starts by assuming, say, $a_1<a_2$.
Then the uniqueness of the lateral Cauchy problem for the heat equation
gives $u_1(x,t)=u_2(x,t)$ for $(x,t)\in\,]0,\,a_1[\,\times\,]0,\,T[$. This yields that $u_2$ satisfies
$(u_2)_x(a_1,t)=(u_1)_x(a_1,t)=0$. Since $u_2$ satisfies the heat equation in $]a_1,\,a_2[\,\times\,]0,\,T[$,
the Neumann boundary conditions at $x=a_1,\,a_2$ and the initial condition $u_2(x,0)=0$, we obtain
$u_2(x,t)=0$ in $]a_1,\,a_2[\,\times\,]0,\,T[$. Then the unique continuation theorem for the solution
of the heat equation gives $u_2(x,t)=0$ in $]0,\,a_2[\,\times\,]0,\,T[$ and this thus yields
$(u_2)_x(0,t)=0$. Contradiction.
\noindent Clearly this argument can relax the condition on
$u_x(0,t)$ and tells us the importance of the uniqueness of the
lateral Cauchy problem or the unique continuation theorem for the
heat equation. However, this type of proof gives no information about
how to extract unknown $a$ from the data $u(0,t)$ and $u_x(0,t)$.
\noindent
It should be pointed out that in \cite{BC} another argument
is introduced for the case when $T=\infty$ and one
has the data $u(0,t)$ and $u_x(0,t)$ for all $t\in\,]0,\,\infty[$.
Starting by assuming $a_1<a_2$, we see that
$u_2$ satisfies the Neumann boundary conditions at $x=a_1,\,a_2$.
This is same as above. However, here they do not make use of the
initial condition $u_2(x,0)=0$. Instead they have the identity
$$\displaystyle
\frac{d}{dt}\int_{a_1}^{a_2}u_2(x,t)dx=\int_{a_1}^{a_2}(u_2)_{xx}(x,t)dx=0.
$$
On the other hand, using the eigenfunction expansion, they show that
$$\displaystyle
\lim_{t\longrightarrow\infty}\vert\int_{a_1}^{a_2}u_2(x,t)dx\vert=\infty.
$$
Contradiction. The proof is quite interesting, however, clearly in
this argument the assumption $T=\infty$ is essential and the same
comment works also for this proof.
\noindent Summing up, one should find another type of proof that
tells us the information about how to extract unknown $a$ from the
data $u(0,t)$ and $u_x(0,t)$ for $t\in\,]0,\,T[$ with $T<\infty$.
We think that our proof presented in this paper gives an answer to
this natural question.
The aim of this paper is: to confirm further that the interpretation
in the first point still works, at least, in one space dimensional case
by considering
three typical inverse boundary value problems for the heat
equations. Those three problems are:
extracting an unknown interface in a conductive material,
an unknown boundary in a layered material or a material with a smooth
conductivity.
\noindent In a future study we will consider the multidimensional
version of those problems.
\noindent
{\bf\noindent Remark 1.1.}
In this paper we always consider
the solutions of the heat equations in the context of a
variational formulation. In particular, every solution in this paper
belongs to the space $W(0,\,T;H^1(\Omega), (H^1(\Omega))')$ with $u_x(0,t), u_x(a,t)\in L^2(0,T)$
where $\Omega=]0,\,a[$ and satisfies the governing equation in a weak sense.
We refer the reader to \cite{DL} for the details.
\section{Statement of the results}
{\bf\noindent 2.1. Extracting interface}
Let $0<b<a$.
Define
$$
\gamma(x)=\left\{
\begin{array}{lr}
\displaystyle \gamma_1,&\quad \mbox{if $0<x<b$,}\\
\\
\displaystyle \gamma_2,&\quad \mbox{if $b<x<a$}
\end{array}
\right.
$$
where $\gamma_1$ and $\gamma_2$ are positive constants satisfying $\gamma_2\not=\gamma_1$.
Let $u$ be an arbitrary solution of the problem:
$$\begin{array}{c}
\displaystyle
u_t=(\gamma\,u_{x})_{x}\,\,\mbox{in}\,]0,\,a[\times\,]0,\,T[,\\
\\
\displaystyle
u(x,0)=0\,\mbox{in}\,\,]0,\,a[.
\end{array}
\tag {2.1}
$$
We assume that $\gamma_1$ is {\it known}
and that $a$, $b$ and $\gamma_2$ are all {\it unknown}.
{\bf\noindent Inverse Problem A.} Extract $b$ from $u(0,t)$ and
$\gamma_1\,u_x(0,t)$ for $0<t<T$.
\noindent
Let $c$ be an arbitrary positive number. Let
$$\displaystyle
v(x,t)=e^{-z^2t}\Psi(x,z)
\tag {2.2}
$$
where $\Psi(x,z)=e^{x\,z_1}$, $z_1=z/\sqrt{\gamma_1}$ and $z$ is given by (1.2).
\noindent
The function $v$ is a complex valued function and satisfies the backward heat equation $v_t+\gamma_1\,v_{xx}=0$
in the whole space-time.
{\bf\noindent Definition 2.1.}
Given $c>0$
define the {\it indicator function}
$I_{c}(\tau)$ by the formula
$$\displaystyle
I_{c}(\tau)
=\int_0^T\left(-\gamma_1\,v_x(0,t)u(0,t)+\gamma_1\,u_x(0,t)v(0,t)\right)dt,\,\,\tau>c^{-2}
$$
where $u$ satisfies (2.1) and $v$ is the function given by (2.2).
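\noindent
In practice the indicator function is evaluated from the measured boundary data by quadrature in $t$. A minimal Python sketch (assuming NumPy; the data arrays below are placeholders, since genuine data come from (2.1)) could read as follows; note that $\Psi(0)=1$ and $\Psi'(0)=z_1$.
\begin{verbatim}
# Quadrature evaluation of I_c(tau) from sampled boundary data
# u(0,t) and u_x(0,t); Psi(0) = 1 and Psi'(0) = z/sqrt(gamma_1).
import numpy as np

def indicator(t, u0, ux0, gamma1, c, tau):
    z = -c*tau*(1.0 + 1j*np.sqrt(1.0 - 1.0/(c**2*tau)))
    v0 = np.exp(-z**2*t)                    # v(0,t)
    vx0 = (z/np.sqrt(gamma1))*v0            # v_x(0,t)
    return np.trapz(gamma1*(-vx0*u0 + ux0*v0), t)

t = np.linspace(0.0, 1.0, 4001)             # T = 1
u0, ux0 = np.sin(t)**2, np.ones_like(t)     # placeholder data only
print(indicator(t, u0, ux0, gamma1=1.0, c=1.0, tau=40.0))
\end{verbatim}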
Define
$$\displaystyle
w(x)=w(x,\tau)=\int_0^T\,e^{-z^2\,t}\,u(x,t)dt,\,\,0<x<a.
$$
Then this $w$ satisfies
$$\displaystyle
(\gamma\,w')'-z^2\,w=e^{-z^2\,T}\,u(x,T)\,\,\mbox{in}\,]0,\,a[
\tag {2.3}
$$
and
$$
\displaystyle
w'(0)=\int_0^Te^{-z^2\,t}u_x(0,t)dt,\,\,
w'(a)=\int_0^Te^{-z^2\,t}u_x(a,t)dt.
$$
Our first result is the following theorem:
\proclaim{\noindent Theorem 2.1.}
Assume that we know a positive number $M$ such that
$M\ge 2b/\sqrt{\gamma_1}$. Let $c$ be an arbitrary positive number satisfying
$Mc<T$.
Assume that
$$\displaystyle
\lim_{\tau\longrightarrow\infty}\,\vert\frac{w'(a)}{w'(0)}\vert
\exp\,\left(\displaystyle c\,\tau\,(\frac{b}{\sqrt{\gamma_1}}-\frac{(a-b)}{\sqrt{\gamma_2}})\right)=0
\tag {2.4}
$$
and there exist a positive constant $C$, a positive
number $\tau_0$ and a real number $\mu$ such that, for all
$\tau>\tau_0$
$$
\displaystyle
C\,\tau^{\mu}\,\le\vert w'(0)\vert.
\tag {2.5}
$$
Then the formula
$$\displaystyle
\lim_{\tau\longrightarrow\infty}
\frac{\displaystyle\log\vert I_{c}(\tau)\vert}
{\tau}
=-2\,\frac{c\,b}{\sqrt{\gamma_1}},
$$
is valid.
\em \vskip2mm
\noindent Note that the boundary condition at $x=a$ is not
specified. However, the condition (2.4) implicitly restricts a
possible boundary condition at $x=a$. The situation dramatically
changes in the case when (a)
$b/\sqrt{\gamma_1}<(a-b)/\sqrt{\gamma_2}$ or (b)
$b/\sqrt{\gamma_1}>(a-b)/\sqrt{\gamma_2}$.
\noindent In case (a) the condition (2.5) automatically ensures
that (2.4) is valid since we always have $w'(a)=O(1)$ as
$\tau\longrightarrow\infty$ (Remark 1.1). This is reasonable under
our interpretation since the signal started from $x=a$ at $t=0$
arrives at $x=0$ after the arrival of the signal started at $x=0$
at $t=0$ (see Figure 1).
\begin{figure}
\begin{center}
\epsfxsize=9cm
\epsfysize=9cm
\epsfbox{Fig1.eps}
\caption{(a) $b/\sqrt{\gamma_1}<(a-b)/\sqrt{\gamma_2}$.}
\end{center}
\end{figure}
\noindent However, in case (b), to ensure (2.4) $w'(a)$ has to decay
exponentially as $\tau\longrightarrow\infty$. This is a strong
restriction and can be interpreted as a condition that kills a
signal started from $x=a$ at $t=0$ (see Figure 2).
\begin{figure}
\begin{center}
\epsfxsize=9cm
\epsfysize=9cm
\epsfbox{Fig2.eps}
\caption{(b) $b/\sqrt{\gamma_1}>(a-b)/\sqrt{\gamma_2}$.}
\end{center}
\end{figure}
The condition (2.5) gives a restriction on the behaviour of the flux $u_x(0,t)$ as $t\downarrow 0$.
Let $\delta$ satisfy $0<\delta<T$ and let $m\ge 0$ be an integer.
Since
$$\displaystyle
z^2=\tau+i2c^2\tau^2\sqrt{1-\frac{1}{c^2\tau}},
$$
a change of variable yields, as $\tau\longrightarrow\infty$
$$\begin{array}{c}
\displaystyle
\int_0^{\delta}e^{-z^2 t}t^m dt
=\frac{1}{\tau^{m+1}}\int_0^{\tau\delta}e^{\displaystyle-\xi(1+i2c^2\tau\sqrt{1-\frac{1}{c^2\tau}}\,)}\xi^md\xi\\
\\
\displaystyle
=\frac{1}{\tau^{m+1}}\int_0^{\infty}e^{\displaystyle-c^2\tau(1+i\sqrt{1-\frac{1}{c^2\tau}}\,)^2\xi}\xi^md\xi
+O(\tau^{-\infty})
\\
\\
\displaystyle
=\frac{1}{\tau^{2(m+1)}}
\int_0^{\infty}e^{\displaystyle-c^2(1+i\sqrt{1-\frac{1}{c^2\tau}}\,)^2 t}t^mdt
+O(\tau^{-\infty}).
\end{array}
$$
Integration by parts gives
$$
\displaystyle
\int_0^{\infty}e^{\displaystyle-c^2(1+i\sqrt{1-\frac{1}{c^2\tau}}\,)^2
t}t^mdt =m!K(\tau)^{m+1}
$$
where
$$\displaystyle
K(\tau)=\frac{1}{\displaystyle
c^2(1+i\sqrt{1-\frac{1}{c^2\tau}})^2} \longrightarrow
\frac{-i}{2c^2}.
$$
Therefore we obtain, as $\tau\longrightarrow\infty$
$$
\displaystyle
\tau^{2(m+1)}\int_0^{\delta}e^{-z^2 t}t^mdt
\longrightarrow m!(-\frac{i}{2c^2})^{m+1}\not=0.
$$
It is easy to see that if $f\in L^2(0,\,T)$ and for a suitable positive constant $C$,
$\vert f(t)\vert\le Ct^{m'}$ a.e. in $]0,\,\delta[$, then,
as $\tau\longrightarrow\infty$
$$\displaystyle
\int_0^Te^{-z^2 t}f(t)dt=O(\tau^{-(m'+1)}).
$$
Now assume that, for some $\delta$ and a positive constant $C$, we have
$$\displaystyle
\vert u_x(0,t)-(g_lt^l+g_{l+1}t^{l+1}+\cdots+g_nt^n)\vert\le Ct^{m'+1},\,\,\mbox{a.e. in $]0,\,\delta[$}
$$
where $n$, $l$ and $m'$ are integers satisfying $n\ge l\ge 0$ and $m'>2l+1$; $g_l$,
$\cdots$, $g_n$ are constants with $g_l\not=0$. Then from the
computation above we see that, as $\tau\longrightarrow\infty$
$$\displaystyle
\tau^{2(l+1)}\int_0^{T}e^{-z^2 t}u_x(0,t)dt
\longrightarrow g_ll!(-\frac{i}{2c^2})^{l+1}.
$$
This implies that $w'(0)$ satisfies (2.5) with $\mu=-2(l+1)$.
In particular, the condition (2.5) is satisfied if
$u_x(0,t)$ satisfies one of the following conditions:
$\bullet$ $u_x(0,t)\in C^{2}([0,\,\delta])$
and $u_x(0,0)\not=0$
$\bullet$ $u_x(0,t)\in C^{2l+2}([0,\,\delta])$ with $l\ge 1$
and $t=0$ is the zero point of $u_x(0,t)$ with order $l$.
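\noindent
The limit computed above is also visible numerically. Since $\int_0^{\infty}e^{-z^2t}t^m\,dt=m!/z^{2(m+1)}$ and the part of the integral beyond $\delta$ is $O(\tau^{-\infty})$, the convergence of $\tau^{2(m+1)}\int_0^{\delta}e^{-z^2 t}t^m\,dt$ to $m!(-i/2c^2)^{m+1}$ can be checked from the closed form, as in the following Python sketch (NumPy only; parameters are ours).
\begin{verbatim}
# tau^{2(m+1)} * m!/z^{2(m+1)}  ->  m! * (-i/(2 c^2))^{m+1}  as tau -> infinity;
# the difference between the integral over [0,delta] and over [0,infinity)
# is O(tau^{-infinity}) and is ignored here.
import numpy as np
from math import factorial

c, m = 1.0, 1
for tau in (1.0e2, 1.0e3, 1.0e4):
    z = -c*tau*(1.0 + 1j*np.sqrt(1.0 - 1.0/(c**2*tau)))
    print(tau, tau**(2*(m+1))*factorial(m)/z**(2*(m+1)))
print(factorial(m)*(-0.5j/c**2)**(m+1))      # the limit value
\end{verbatim}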
{\bf\noindent 2.2. Extracting unknown boundary. Layered material}
Let $0=b_0<b_1<b_2<\cdots<b_m=a$. Define
$$
\gamma(x)=\left\{
\begin{array}{lr}
\displaystyle\gamma_1,&\quad \mbox{if $b_0<x<b_1$,}\\
\\
\displaystyle\gamma_2,&\quad \mbox{if $b_1<x<b_2$,}\\
\\
\vdots\\
\\
\displaystyle\gamma_m, &\quad\mbox{if $b_{m-1}<x<b_m$}
\end{array}
\right.
$$
where $\displaystyle\gamma_1,\gamma_2,\cdots,\gamma_m$ are positive constants and $m\ge 1$.
\noindent Let $a>0$. Let $u=u(x,t)$ be an arbitrary solution of
the problem:
$$\begin{array}{c}
\displaystyle
u_t=(\gamma\, u_{x})_{x}\,\,\mbox{in}\,]0,\,a[\times]0,\,T[,\\
\\
\displaystyle
\gamma_m\,u_x(a,t)+\rho\,u(a,t)=0\,\,\mbox{for}\,t\in\,]0,\,T[,\\
\\
\displaystyle
u(x,0)=0\,\,\mbox{in}\,]0,\,a[
\end{array}
\tag {2.6}
$$
where $\rho\ge 0$ is an arbitrary fixed constant.
{\bf\noindent Inverse Problem B.} Assume that $\gamma_1$, $\cdots$,
$\gamma_m$ are {\it known} and that both of $\rho$ and $a$ are
{\it unknown}. Extract $a$ from $u(0,t)$ and $\gamma_1\,u_x(0,t)$
for $0<t<T$.
In this subsection, instead of the $\Psi$ of the previous subsection, the function $\Psi$
below, which is a consequence of Lemma 5.1 in Section 5, plays the same role;
$z$ is given by (1.2) and set $z_j=z/\sqrt{\gamma_j}$, $j=1,\cdots,m$.
\proclaim{\noindent Proposition 2.2.}
There exists a positive number
$C=C(c,\gamma_1,\cdots,\gamma_m)\,(>c^{-2})$ such that: for each $\tau>C$
there exists a unique $U=(B_1, A_2, B_2,\cdots, A_{m-1},B_{m-1}, A_m)$
such that the function $\Psi$ defined by the formula
$$
\Psi(x)=\Psi(x;c,\gamma_1,\cdots,\gamma_m,\tau)=\left\{
\begin{array}{lr}
\displaystyle e^{xz_1}+B_1e^{-xz_1},&\quad \mbox{if $x<b_1$,}\\
\\
\displaystyle A_2e^{xz_2}+B_2e^{-xz_2},&\quad \mbox{if $b_1<x<b_2$,}\\
\\
\vdots\\
\\
\displaystyle A_{m-1}e^{xz_{m-1}}+B_{m-1}e^{-xz_{m-1}}, &\quad\mbox{if $b_{m-2}<x<b_{m-1}$,}\\
\\
\displaystyle A_m e^{xz_m}, &\quad\mbox{if $b_{m-1}<x$}
\end{array}
\right.
$$
satisfies the equation
$$
\displaystyle (\tilde{\gamma}\Psi')'-z^2\Psi=0\,\,\mbox{in}\,\Bbb R
$$
where
$$
\tilde{\gamma}(x)=\left\{
\begin{array}{lr}
\displaystyle\gamma_1,&\quad \mbox{if $-\infty<x<b_1$,}\\
\\
\displaystyle\gamma_2,&\quad \mbox{if $b_1<x<b_2$,}\\
\\
\vdots\\
\\
\displaystyle\gamma_m, &\quad\mbox{if $b_{m-1}<x<\infty$.}
\end{array}
\right.
$$
\em \vskip2mm
Using $\Psi$ in Proposition 2.2, we define the special solution of the
backward heat equation $v_t+(\gamma v_x)_x=0$ in $\Bbb R$ by the
formula
$$
\displaystyle
v(x,t)=e^{-z^2t}\Psi(x;c,\gamma_1,\cdots,\gamma_m,\tau),\,\,
\tau>C(c;\gamma_1,\cdots,\gamma_m)
$$
\noindent Now we can define the indicator function.
{\bf\noindent Definition 2.2.} Given $c>0$ define the {\it
indicator function} $I_{c}(\tau)$ by the formula
$$\displaystyle
I_{c}(\tau)
=\int_0^T\left(-\gamma_1\,v_x(0,t)u(0,t)+\gamma_1\,u_x(0,t)v(0,t)\right)dt,\,\,
\tau>C(c;\gamma_1,\cdots,\gamma_m)
$$
where $u$ satisfies (2.6).
The following gives a solution to the problem mentioned above and
generalizes Theorem 4.1 of \cite{I3}.
\proclaim{\noindent Theorem 2.3.} Assume that we know a positive
number $M$ such that
$$\displaystyle
M\ge 2
\left(\frac{b_1}{\sqrt{\gamma_1}}+\sum_{j=1}^{m-1}\frac{(b_{j+1}-b_j)}{\sqrt{\gamma_{j+1}}}
\right).
$$
Let $c$ be an arbitrary positive
number satisfying $Mc<T$. Assume that there exist a positive constant $C$, a positive
number $\tau_0(>c^{-2})$ and a real number $\mu$ such that, for all
$\tau>\tau_0$
$$
\displaystyle
C\,\tau^{\mu}\,\le\vert\int_0^T e^{-z^2\,t}u_x(0,t)dt\vert.
\tag {2.7}
$$
\noindent
Then the formula
$$\displaystyle
\lim_{\tau\longrightarrow\infty}
\frac{\displaystyle\log\vert I_{c}(\tau)\vert}
{\tau}
=-2c\left(\frac{b_1}{\sqrt{\gamma_1}}+\sum_{j=1}^{m-1}\frac{(b_{j+1}-b_j)}{\sqrt{\gamma_{j+1}}}\right),
\tag {2.8}
$$
is valid.
\em \vskip2mm
\noindent
Needless to say, the quantity
$$\displaystyle
2c\left(\frac{b_1}{\sqrt{\gamma_1}}+\sum_{j=1}^{m-1}\frac{(b_{j+1}-b_j)}{\sqrt{\gamma_{j+1}}}\right)
$$
can be interpreted as the travel time of a virtual signal
with propagation speeds
\newline{$\sqrt{\gamma_1}/c, \sqrt{\gamma_2}/c,\cdots,\sqrt{\gamma_m}/c$}
in the layers $0<x<b_1, b_1<x<b_2,\cdots,b_{m-1}<x<a$, respectively, that
starts at the known boundary
$x=0$ and the initial time $t=0$, reflects at another unknown
boundary $x=a$ and returns to the original boundary $x=0$.
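\noindent
For concreteness, the quantity on the right-hand side of (2.8) is computed by the following elementary Python sketch (NumPy only; the layer data are illustrative and not taken from the text).
\begin{verbatim}
# Two-way travel time 2c*(b_1/sqrt(gamma_1) + sum_j (b_{j+1}-b_j)/sqrt(gamma_{j+1})).
import numpy as np

def travel_time(bs, gammas, c):
    # bs = [b_1, ..., b_m] with b_m = a; gammas = [gamma_1, ..., gamma_m]
    tt = bs[0]/np.sqrt(gammas[0])
    tt += sum((bs[j+1]-bs[j])/np.sqrt(gammas[j+1]) for j in range(len(bs)-1))
    return 2.0*c*tt

print(travel_time([0.5, 1.2, 2.0], [1.0, 4.0, 0.25], c=1.0))   # = 4.9
\end{verbatim}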
{\bf\noindent 2.3. Extracting unknown boundary. Material with
smooth conductivity}
Let $a>0$. Let $M\ge a$.
Let $\gamma\in C^2([0,\,M])$ and satisfy $\gamma(x)>0$ for all $x\in\,[0,\,M]$.
\noindent Let $u=u(x,t)$ be an arbitrary solution of
the problem:
$$\begin{array}{c}
\displaystyle
u_t=(\gamma\, u_{x})_{x}\,\,\mbox{in}\,]0,\,a[\times]0,\,T[,\\
\\
\displaystyle
\gamma(a)\,u_x(a,t)+\rho\,u(a,t)=0\,\,\mbox{for}\,t\in\,]0,\,T[,\\
\\
\displaystyle
u(x,0)=0\,\,\mbox{in}\,]0,\,a[
\end{array}
\tag {2.9}
$$
where $\rho\ge 0$ is an arbitrary fixed constant.
{\bf\noindent Inverse Problem C.}
Assume that both $M$ and $\gamma$ are {\it known} and that both of $\rho$ and $a$ are
{\it unknown}. Extract $a$ from $u(0,t)$ and $\gamma(0)\,u_x(0,t)$
for $0<t<T$.
We start with a fact which can be deduced from a combination of Theorem 1 of p. 48 in \cite{N}
and the Liouville transform.
\proclaim{\noindent Proposition 2.4.}
Given $z$ with $\mbox{Re}\,z\le 0$ there exists a solution
$\Psi=\Psi(\,\cdot\,;z,M)$ of the equation $(\gamma y')'-z^2 y=0, \,0<x<M$ such that,
as $\vert z\vert\longrightarrow\infty$
$$\displaystyle
\Psi(x;z,M)=\{\gamma(x)\}^{-1/4}\,
\exp\,\left(z\,\int_0^x\frac{dx}{\sqrt{\gamma(x)}}\right)
\,\{1+O\left(\frac{1}{\vert z\vert}\right)\},
$$
$$\displaystyle
\Psi'(x;z,M)
=z\,\{\gamma(x)\}^{-3/4}\,
\exp\,\left(z\,\int_0^x\frac{dx}{\sqrt{\gamma(x)}}\right)
\,\{1+O\left(\frac{1}{\vert z\vert}\right)\}
$$
uniformly in $x\in\,[0,\,M]$.
\em \vskip2mm
Using $\Psi$ in Proposition 2.4, we define the special solution of the
backward heat equation $v_t+(\gamma v_x)_x=0$ in $]0,\,M[\times\,]0,\,T[$ by the
formula
$$
\displaystyle
v(x,t)=e^{-z^2t}\Psi(x;z,M).
$$
\noindent Now we can define the indicator function.
{\bf\noindent Definition 2.3.} Define the {\it
indicator function} $I(z)$ by the formula
$$\displaystyle
I(z)
=\int_0^T\left(-\gamma(0)\,v_x(0,t)u(0,t)+\gamma(0)\,u_x(0,t)v(0,t)\right)dt,\,\,
\mbox{Re}\,z\le 0
$$
where $u$ satisfies (2.9).
The following gives two solutions to the problem mentioned above and
generalizes Theorem 4.1 of \cite{I3}.
\proclaim{\noindent Theorem 2.5.} Assume that we know a positive
number $M$ such that $M\ge a$.
(1)
Let $c$ be an arbitrary positive
number satisfying
$$\displaystyle
T>2c\int_0^M\frac{dx}{\sqrt{\gamma(x)}}.
\tag {2.10}
$$
Let $z$ be the number given by (1.2).
Assume that there exist a positive constant $C$, a positive
number $\tau_0(>c^{-2})$ and a real number $\mu$ such that, for all
$\tau>\tau_0$
$$
\displaystyle
C\,\tau^{\mu}\,\le\vert\int_0^T\,e^{-z^2\, t}\,u_x(0,t)dt\vert.
\tag {2.11}
$$
\noindent
Then the formula
$$\displaystyle
\lim_{\tau\longrightarrow\infty}
\frac{\displaystyle\log\vert I(z)\vert}{\tau}
=-2c\int_0^a\frac{dx}{\sqrt{\gamma(x)}},
\tag {2.12}
$$
is valid.
(2)
Assume that there exist a positive constant $C$, a positive
number $\tau_0$ and a real number $\mu$ such that, for all
$\tau>\tau_0$
$$
\displaystyle
C\,\tau^{\mu}\,\le\vert\int_0^T\,e^{-\tau^2\, t}\,u_x(0,t)dt\vert.
\tag {2.13}
$$
Then the formula
$$\displaystyle
\lim_{\tau\longrightarrow\infty}
\frac{\displaystyle\log\vert I(-\tau)\vert}{\tau}
=-2\int_0^a\frac{dx}{\sqrt{\gamma(x)}},
\tag {2.14}
$$
is valid.
\em \vskip2mm
\noindent
The quantity
$$\displaystyle
2c\int_0^a\frac{dx}{\sqrt{\gamma(x)}}
$$
coincides with the travel time of a virtual signal with variable propagation speed
$\displaystyle\sqrt{\gamma(x)}/c$ that
starts at the known boundary
$x=0$ and the initial time $t=0$, reflects at another unknown
boundary $x=a$ and returns to the original boundary $x=0$.
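\noindent
Once the limit in (2.12) (or (2.14)) has been read off from the data, $a$ itself is obtained by inverting the strictly increasing function $a\longmapsto 2c\int_0^a dx/\sqrt{\gamma(x)}$ on $[0,\,M]$, for instance by bisection. A Python sketch (NumPy only; the conductivity below is an arbitrary smooth example, not from the text):
\begin{verbatim}
# Recover a from L* = 2c * \int_0^a dx/sqrt(gamma(x)) by bisection;
# F is strictly increasing in a because gamma > 0.
import numpy as np

gamma = lambda x: 1.0 + 0.5*np.sin(x)       # illustrative C^2 conductivity

def F(a, c, n=20001):
    x = np.linspace(0.0, a, n)
    return 2.0*c*np.trapz(1.0/np.sqrt(gamma(x)), x)

def recover_a(Lstar, c, M):
    lo, hi = 0.0, M
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid, c) < Lstar else (lo, mid)
    return 0.5*(lo + hi)

a_true, c, M = 0.8, 1.0, 2.0
print(recover_a(F(a_true, c), c, M))        # ~ 0.8
\end{verbatim}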
The condition (2.13) is less restrictive than the condition (2.11).
We can easily see that the condition (2.13) is satisfied
if $u_x(0,t)$ satisfies one of the following conditions:
$\bullet$ for a positive constant $C$, $u_x(0,t)\ge C$ a.e. in $]0,\,\delta[$
$\bullet$ $u_x(0,t)\in C^{1}([0,\,\delta])$
and $u_x(0,0)\not=0$
$\bullet$ $u_x(0,t)\in C^{l+1}([0,\,\delta])$ with $l\ge 1$
and $t=0$ is the zero point of $u_x(0,t)$ with order $l$.
\noindent
See the end of subsection 2.1 for the comparison.
As a corollary of Theorem 2.5 we obtain a direct proof of the {\it uniqueness theorem}:
the data $u(0,t)$ and $u_x(0,t)$ for $0<t<T$ uniquely determine $a$,
provided either (2.11) holds for a $c$ satisfying (2.10) or (2.13) holds,
and both $M$ and $\gamma$ are known.
Note that $T$ is an arbitrary fixed positive number.
In the proof we make use of neither
the uniqueness of the lateral Cauchy problem nor the unique continuation theorem for
the heat equation with a variable coefficient.
\section{Proof of Theorem 2.1}
Since $\Psi$ satisfies $\gamma_1\Psi''-z^2\Psi=0$ in $\Bbb R$,
integration by parts gives the expression of the indicator function:
$$\begin{array}{l}
\displaystyle
I_c(\tau)
=-\gamma_1\,\Psi'(a)w(a)+\gamma_2\,w'(a)\Psi(a)
+(\gamma_1-\gamma_2)\int_b^a w'(x)\Psi'(x)dx\\
\\
\displaystyle
-e^{-z^2\,T}\int_0^au(\xi,T)\Psi(\xi)d\xi.
\end{array}
\tag {3.1}
$$
\noindent
Let $y$ be the solution of the boundary value problem
$$\begin{array}{c}
\displaystyle
(\gamma\,y')'-z^2\,y=0\,\,\mbox{in}\,]0,\,a[,\\
\\
\displaystyle
y'(0)=w'(0),\,\,y'(a)=w'(a).
\end{array}
$$
\noindent Define the {\it principal part} of the indicator
function
$$\displaystyle
I_c^{0}(\tau)=
-\gamma_1\Psi'(a)y(a)+\gamma_2 y'(a)\Psi(a)
+(\gamma_1-\gamma_2)\,\int_b^a y'(x)\Psi'(x)dx.
\tag {3.2}
$$
It is easy to see that Theorem 2.1 is a direct consequence of the following two lemmas.
\proclaim{\noindent Lemma 3.1.}
Assume that $w'(a)$ and $w'(0)$ satisfy (2.4) and (2.5).
Then, choosing suitable positive constants $C'_1$ and $C_2'$ and a positive number $\tau_0'$, we have
$$
C_1'\,\tau^{\mu}\le\vert I_c^0(\tau)\vert e^{2\,c\,\tau\,b/\sqrt{\gamma_1}}
\le C_2',\,\,\forall\tau>\tau_0'.
$$
\em \vskip2mm
\proclaim{\noindent Lemma 3.2.}
As $\tau\longrightarrow\infty$
$$\displaystyle
I_c(\tau)=I_c^0(\tau)
+O\left(e^{-\tau\,T}\right).
$$
\em \vskip2mm
First we prove Lemma 3.1.
\noindent
The $y$ has the expression
$$
y(x)=y(x,\tau)=\left\{
\begin{array}{lr}
\displaystyle A_1\,e^{x\,z_1}+B_1\,e^{-x\,z_1},&\quad \mbox{if $0<x<b$,}\\
\\
\displaystyle A_2\,e^{x\,z_2}+B_2\,e^{-x\,z_2},&\quad \mbox{if $b<x<a$}
\end{array}
\right.
$$
and has to satisfy the transmission conditions:
$$\displaystyle
y(b-0)=y(b+0),\,\,\gamma_1\,y'(b-0)=\gamma_2\,y'(b+0).
$$
These and the relation $z_2/z_1=\sqrt{\gamma_1/\gamma_2}$ yield the system of equations:
$$\displaystyle
A_1\,e^{b\,z_1}+B_1\,e^{-b\,z_1}
=A_2\,e^{b\,z_2}+B_2\,e^{-b\,z_2};
\tag {3.3}
$$
$$\displaystyle
A_1\,e^{b\,z_1}-B_1\,e^{-b\,z_1}
=\sqrt{\frac{\gamma_2}{\gamma_1}}\,(A_2\,e^{b\,z_2}-B_2\,e^{-b\,z_2}).
\tag {3.4}
$$
\noindent
Moreover the boundary conditions $y'(0)=w'(0)$ and $y'(a)=w'(a)$ yield
$$\displaystyle
z_1\,(A_1-B_1)=w'(0)
\tag {3.5}
$$
and
$$
z_2\,(A_2\,e^{a\,z_2}-B_2\,e^{-a\,z_2})=w'(a).
\tag {3.6}
$$
\noindent
A combination of (3.3) and (3.4) gives
$$\begin{array}{l}
\displaystyle
A_1\,e^{b\,z_1}
=\frac{1}{2}\left(1+\sqrt{\frac{\gamma_2}{\gamma_1}}\,\right)\,A_2\,e^{b\,z_2}
+\frac{1}{2}
\left(1-\sqrt{\frac{\gamma_2}{\gamma_1}}\,\right)\,B_2\,e^{-b\,z_2},\\
\\
\displaystyle
B_1\,e^{-b\,z_1}
=\frac{1}{2}\left(1-\sqrt{\frac{\gamma_2}{\gamma_1}}\,\right)\,A_2\,e^{b\,z_2}
+\frac{1}{2}
\left(1+\sqrt{\frac{\gamma_2}{\gamma_1}}\,\right)
\,B_2\,e^{-b\,z_2}.
\end{array}
\tag {3.7}
$$
\noindent
Define
$$\displaystyle
T_{kl}=\frac{2\,\sqrt{\gamma_k}}{\sqrt{\gamma_k}+\sqrt{\gamma_l}},\,\,
R_{kl}=\frac{\sqrt{\gamma_k}-\sqrt{\gamma_l}}
{\sqrt{\gamma_k}+\sqrt{\gamma_l}}.
$$
We have
$$
\displaystyle
\frac{1}{2}
\left(1+\sqrt{\frac{\gamma_2}{\gamma_1}}\,\right)=
\frac{1}{T_{12}},\,\,
\frac{1}{2}\left(1-\sqrt{\frac{\gamma_2}{\gamma_1}}\,\right)
=\frac{R_{12}}{T_{12}}.
$$
Then (3.7) becomes equations
$$\displaystyle
A_1\,e^{b\,z_1}=\frac{1}{T_{12}}\,A_2\,e^{b\,z_2}
+\frac{R_{12}}{T_{12}}\,B_2\,e^{-b\,z_2},
\tag {3.8}
$$
$$\displaystyle
B_1\,e^{-b\,z_1}=\frac{R_{12}}{T_{12}}
\,A_2\,e^{b\,z_2}
+\frac{1}{T_{12}}\,B_2\,e^{-b\,z_2}.
\tag {3.9}
$$
\noindent
Substituting (3.8) and (3.9) into (3.5) and (3.6), we obtain
$$\displaystyle
R\left(\begin{array}{c} \displaystyle A_2\,e^{b\,z_2}\\
\\
\displaystyle B_2\,e^{-b\,z_2}\end{array}\right)
=\frac{1}{z}\,\left(\begin{array}{c}\displaystyle \sqrt{\gamma_2}\,w'(a)\\
\\
\displaystyle \sqrt{\gamma_1}\,w'(0)\,e^{b\,z_1}\end{array}\right)
\tag {3.10}
$$
where
$$\begin{array}{c}
\displaystyle
R=\left(\begin{array}{c}
\begin{array}{lr} e^{(a-b)\,z_2} & -e^{-(a-b)\,z_2}\end{array}
\\
\\
\displaystyle
\left(\begin{array}{lr} 1 & -e^{2b\,z_1}\end{array}\right)
\left(\begin{array}{lr} \displaystyle \frac{1}{T_{12}} & \displaystyle\frac{R_{12}}{T_{12}}\\
\\
\displaystyle
\frac{R_{12}}{T_{12}} & \displaystyle\frac{1}{T_{12}}\end{array}\right)
\end{array}
\right)\\
\\
\displaystyle
=\left(\begin{array}{lr} \displaystyle e^{(a-b)\,z_2} & \displaystyle -e^{-(a-b)\,z_2}\\
\\
\displaystyle
\frac{1}{T_{12}}\,(1-R_{12}\,e^{2b\,z_1}) & \displaystyle \frac{1}{T_{12}}\,
(R_{12}-e^{2b\,z_1})\end{array}
\right).
\end{array}
$$
\noindent
A direct computation gives
$$\displaystyle
(\mbox{det}\,R)\,T_{12}\,e^{(a-b)\,z_2}
=1+R_{12}(e^{2(a-b)\,z_2}-e^{2b\,z_1})
-e^{2b\,z_1+2(a-b)\,z_2}.
$$
This yields,
as $\tau\longrightarrow\infty$
$$\displaystyle
(\mbox{det}\,R)\,T_{12}\,e^{(a-b)\,z_2}
=1+O\left(e^{\displaystyle -2c\tau\,\min\,(b/\sqrt{\gamma_1},\,(a-b)/\sqrt{\gamma_2}\,)}\right).
\tag {3.11}
$$
Therefore $R$ is invertible for sufficiently large $\tau$.
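\noindent
The identity for $(\mbox{det}\,R)\,T_{12}\,e^{(a-b)\,z_2}$ can also be checked numerically; for large $\tau$ both sides equal $1$ up to exponentially small terms. A Python check (NumPy only; the parameter values are arbitrary):
\begin{verbatim}
# Check: det(R)*T12*exp((a-b)*z2)
#        = 1 + R12*(exp(2(a-b)z2) - exp(2b z1)) - exp(2b z1 + 2(a-b)z2).
import numpy as np

g1, g2, b, a, c, tau = 1.0, 2.0, 0.4, 1.0, 1.0, 30.0
z = -c*tau*(1.0 + 1j*np.sqrt(1.0 - 1.0/(c**2*tau)))
z1, z2 = z/np.sqrt(g1), z/np.sqrt(g2)
T12 = 2.0*np.sqrt(g1)/(np.sqrt(g1) + np.sqrt(g2))
R12 = (np.sqrt(g1) - np.sqrt(g2))/(np.sqrt(g1) + np.sqrt(g2))

R = np.array([[np.exp((a-b)*z2),               -np.exp(-(a-b)*z2)],
              [(1.0 - R12*np.exp(2*b*z1))/T12, (R12 - np.exp(2*b*z1))/T12]])
lhs = np.linalg.det(R)*T12*np.exp((a-b)*z2)
rhs = 1.0 + R12*(np.exp(2*(a-b)*z2) - np.exp(2*b*z1)) - np.exp(2*b*z1 + 2*(a-b)*z2)
print(abs(lhs - rhs))                         # ~ 0 up to rounding
\end{verbatim}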
It follows from (3.10) that $A_2$, $B_2$ have the form
$$\displaystyle
A_2\,=\frac{e^{-b\,z_2}}{z(\mbox{det}\,R)}
\left(\frac{1}{T_{12}}(R_{12}-e^{2b\,z_1})\,\sqrt{\gamma_2}\,w'(a)
+e^{-(a-b)\,z_2}\,\sqrt{\gamma_1}\,w'(0)\,e^{b\,z_1}\right),
\tag {3.12}
$$
$$\displaystyle
B_2\,=
\frac{e^{b\,z_2}}{z(\mbox{det}\,R)}
\left(\frac{1}{T_{12}}(R_{12}\,e^{2b\,z_1}-1)\,\sqrt{\gamma_2}\,w'(a)
+e^{(a-b)\,z_2}\,\sqrt{\gamma_1}\,w'(0)\,e^{b\,z_1}\right).
\tag {3.13}
$$
\noindent
Using (3.12) and (3.13), we get two crucial formulae:
$$
\displaystyle
(\gamma_1-\gamma_2)\,\int_b^a y'(x)\Psi'(x)dx
=\frac{1}{(\mbox{det}\,R)\,T_{12}}
\left(C\,\sqrt{\gamma_2}\,w'(a)+D\,T_{12}\,\sqrt{\gamma_1}\,w'(0)\right)
$$
where
$$\begin{array}{l}
\displaystyle
C=(\sqrt{\gamma_1}-\sqrt{\gamma_2})(R_{12}-e^{2b\,z_1})
\,(e^{a\,z_1+(a-b)\,z_2}-e^{b\,z_1})\\
\\
\displaystyle
+(\sqrt{\gamma_1}+\sqrt{\gamma_2})
(R_{12}\,e^{2b\,z_1}-1)\,
(e^{a\,z_1-(a-b)\,z_2}-e^{b\,z_1}),
\\
\\
\displaystyle
D=2\sqrt{\gamma_1}\,e^{(a+b)\,z_1}\\
\\
\displaystyle
-(\sqrt{\gamma_1}\,-\sqrt{\gamma_2})\,e^{2b\,z_1-(a-b)\,z_2}
-(\sqrt{\gamma_1}+\sqrt{\gamma_2})\,
e^{2b\,z_1+(a-b)\,z_2};
\end{array}
$$
$$\displaystyle
-\gamma_1\Psi'(a)y(a)+\gamma_2 y'(a)\Psi(a)
=\frac{1}{(\mbox{det\,R})\,T_{12}}
\left(\tilde{C}\sqrt{\gamma_2}\,w'(a)+\tilde{D}\,T_{12}\,\sqrt{\gamma_1}\,w'(0)\right)
$$
where
$$\begin{array}{l}
\displaystyle
\tilde{C}=-(\sqrt{\gamma_1}-\sqrt{\gamma_2})(R_{12}-e^{2b\,z_1})
e^{a\,z_1+(a-b)\,z_2}\\
\\
\displaystyle
-(\sqrt{\gamma_1}+\sqrt{\gamma_2})(R_{12}\,e^{2b\,z_1}-1)e^{a\,z_1-(a-b)\,z_2},\\
\\
\displaystyle
\tilde{D}=-2\sqrt{\gamma_1}\,e^{(a+b)\,z_1}.
\end{array}
$$
\noindent
A combination of those and (3.2) gives the representation formula
$$\begin{array}{l}
\displaystyle
(\mbox{det}\,R)\,T_{12}\,I_c^0(\tau)
=-2\,\sqrt{\gamma_1}\,\sqrt{\gamma_2}\,w'(a)\,e^{b\,z_1}\\
\\
\displaystyle
-
\left((\sqrt{\gamma_1}-\sqrt{\gamma_2})\,e^{2b\,z_1-(a-b)\,z_2}
+(\sqrt{\gamma_1}+\sqrt{\gamma_2})\,e^{2b\,z_1+(a-b)\,z_2}\right)
\,T_{12}\,\sqrt{\gamma_1}w'(0).
\end{array}
\tag {3.14}
$$
\noindent
Note that
$$\begin{array}{l}
\displaystyle
(\sqrt{\gamma_1}-\sqrt{\gamma_2})\,e^{2b\,z_1-(a-b)\,z_2}
+(\sqrt{\gamma_1}+\sqrt{\gamma_2})\,e^{2b\,z_1+(a-b)\,z_2}\\
\\
\displaystyle
=e^{2b\,z_1-(a-b)\,z_2}\,(\sqrt{\gamma_1}-\sqrt{\gamma_2})\left(1+O(e^{-2\,c\tau\,(a-b)/\sqrt{\gamma_2}})\right).
\end{array}
$$
Note also that Remark 1.1 gives $w'(0)=O(1)$ as $\tau\longrightarrow\infty$.
Using these, (3.11), (3.14), (2.4), (2.5) and the assumption $\gamma_1\not=\gamma_2$, we get the assertion of Lemma 3.1.
\noindent
$\Box$
Next we give a proof of Lemma 3.2.
Recalling (3.1) and (3.2), we get
$$\begin{array}{l}
\displaystyle
I_c(\tau)-I_c^0(\tau)\\
\\
\displaystyle
=-\gamma_1\,\Psi'(a)\epsilon(a)
+(\gamma_1-\gamma_2)\,\int_b^a\,\epsilon'(x)\,\Psi'(x)dx
-e^{-z^2\,T}\int_0^a\,u(\xi,T)\,\Psi(\xi)d\xi
\end{array}
\tag {3.15}
$$
where $\epsilon(x)=w(x)-y(x)$. One knows
$$\displaystyle
\int_0^a u(\xi,T)\Psi(\xi)\,d\xi=O(1).
\tag {3.16}
$$
\noindent
The $\epsilon$ satisfies
$$\begin{array}{l}
\displaystyle
(\gamma\,\epsilon')'-z^2\,\epsilon=e^{-z^2\,T}u(x,T)\,\,\mbox{in}\,]0,\,a[,\\
\\
\displaystyle
\epsilon'(0)=\epsilon'(a)=0.
\end{array}
$$
Multiplying both sides of the equation by $\overline\epsilon$ and integrating the result over
the interval $]0,\,a[$, we have
$$\displaystyle
\int_0^a\gamma\vert\epsilon'\vert^2 dx+z^2\int_0^a\vert\epsilon\vert^2 dx
=-e^{-z^2\,T}\int_0^a\,u(\xi,T)\,\overline\epsilon(\xi)\,d\xi.
$$
Since
$$\displaystyle
z^2=\tau+i2\,c^2\,\tau^2\,\sqrt{1-\frac{1}{c^2\,\tau}},
$$
a standard argument yields,
as $\tau\longrightarrow\infty$
$$\displaystyle
\Vert\epsilon\Vert_{H^1(]0,\,a[)}=O(e^{-\tau\,T}).
$$
\noindent
A combination of this and the embedding $H^1(]0,\,a[)\subset C^0([0,\,a])$ gives the estimates
$$\begin{array}{l}
\displaystyle
-\gamma_1\,\Psi'(a)\epsilon(a)=
O(\tau\,\exp\,\left(-\tau\,(T+\frac{c\,a}{\sqrt{\gamma_1}})\right)),\\
\\
\displaystyle
\int_b^a\,\epsilon'(x)\,\Psi'(x)\,dx=
O(\tau\,\exp\,\left(-\tau\,(T+\frac{c\,b}{\sqrt{\gamma_1}})\right)).
\end{array}
\tag {3.17}
$$
Now we obtain from (3.15), (3.16) and (3.17) the assertion of Lemma 3.2.
\noindent
$\Box$
\section{Proof of Theorem 2.3. Part 1. Asymptotic behaviour of $w(a)$}
Define
$$\displaystyle
w(x)=w(x,\tau)=\int_0^T u(x,t)\,e^{-z^2\,t}\,dt,\,0<x<a.
$$
This $w(x)$ satisfies
$$\begin{array}{c}
\displaystyle
(\gamma\,w')'-z^2\,w=e^{-z^2\,T}\,u(x,T)\,\,\mbox{in}\,]0,\,a[,\\
\\
\displaystyle
\gamma_m\,w'(a)+\rho\,w(a)=0.
\end{array}
$$
In this section we study the asymptotic behaviour of $w(a)$ as $\tau\longrightarrow\infty$.
For this purpose it suffices to study the asymptotic behaviour of
the solution of the boundary value problem as $\tau\longrightarrow\infty$:
$$\begin{array}{c}
\displaystyle
(\gamma\,y')'-z^2y=0\,\,\mbox{in}\,]0,\,a[,\\
\\
\displaystyle
y'(0)=w'(0),\\
\\
\displaystyle
\gamma_m\,y'(a)+\rho\,y(a)=0.
\end{array}
$$
\noindent
This is because of
\proclaim{\noindent Lemma 4.1.}
Let $\rho\ge 0$.
The formula
$$\displaystyle
w(a)=y(a)+O(e^{-\tau\,T}),
$$
is valid.
\em \vskip2mm
{\it\noindent Proof.}
Define
$$
\epsilon(x)=w(x)-y(x),\,\,0<x<a.
$$
This function satisfies
$$
\begin{array}{c}
\displaystyle
(\gamma\,\epsilon')'-z^2\,\epsilon=e^{-z^2\,T}u(x,T)\,\,\mbox{in}\,]0,\,a[,\\
\\
\displaystyle
\epsilon'(0)=0,\\
\\
\displaystyle
\gamma_m\,\epsilon'(a)+\rho\,\epsilon(a)=0.
\end{array}
$$
Hereafter a combination of a standard argument and the embedding
$H^1(]0,\,a[)\subset C^0([0,\,a])$ yields the desired estimate.
\noindent
$\Box$
For each $j=1,\cdots,m$ one can write
$$\displaystyle
y(x)=A_je^{xz_j}+B_je^{-xz_j},\,\,b_{j-1}<x<b_j.
$$
From the equation, it follows that
$$\begin{array}{c}
\displaystyle
y(b_j-0)=y(b_j+0),\\
\\
\displaystyle
\gamma_j\,y'(b_j-0)=\gamma_{j+1}\,y'(b_j+0).
\end{array}
$$
Therefore the coefficients $A_1,B_1,\cdots, A_m, B_m$ have to satisfy the system of equations:
$$\displaystyle
z_1(A_1-B_1)=w'(0);
\tag {4.1}
$$
for each $j=1,\cdots,m-1$
$$\begin{array}{c}
\displaystyle
A_je^{b_jz_j}+B_je^{-b_jz_j}
=A_{j+1}e^{b_jz_{j+1}}+B_{j+1}e^{-b_jz_{j+1}},\\
\\
\displaystyle
\gamma_j\,z_j(A_je^{b_jz_j}-B_je^{-b_jz_j})
=\gamma_{j+1}\,z_{j+1}(A_{j+1}e^{b_jz_{j+1}}-B_{j+1}e^{-b_jz_{j+1}});
\end{array}
\tag {4.2}
$$
$$\displaystyle
\gamma_m\,z_m(A_me^{az_m}-B_me^{-az_m})
+\rho(A_me^{az_m}+B_me^{-az_m})=0.
\tag {4.3}
$$
\noindent
Set
$$\begin{array}{lcr}
\displaystyle
X_j=\left(\begin{array}{c} A_j\,e^{b_jz_j}\\
\\
\displaystyle
B_j\,e^{-b_jz_j}\end{array}\right), &
\displaystyle
K_j=
\left(\begin{array}{cc} 1 & 1\\
\\
\displaystyle
\sqrt{\gamma_j} & -\sqrt{\gamma_j}
\end{array}
\right), &
\displaystyle
\alpha_j=\left(\begin{array}{cc}
1 & 0\\
\\
\displaystyle
0 & e^{-2(b_{j-1}-b_j)\,z_j}
\end{array}
\right).
\end{array}
$$
Then one can rewrite the equations (4.1), (4.2) and (4.3) in the
matrix form:
$$\displaystyle
\left(\begin{array}{cc} 1 & -e^{2b_1z_1}\end{array}\right)X_1=
\displaystyle\frac{w'(0)\,\sqrt{\gamma_1}}{z}e^{b_1\,z_1},
\tag {4.4}
$$
$$\displaystyle
e^{(b_j-b_{j+1})\,z_{j+1}}K_{j+1}\alpha_{j+1}X_{j+1}=K_jX_j,
\tag {4.5}
$$
$$\displaystyle
\left(\begin{array}{cc} \displaystyle(\sqrt{\gamma_m}+\frac{\rho}{z}) &
\displaystyle
-(\sqrt{\gamma_m}-\frac{\rho}{z})\end{array}\right)
X_m=0.
\tag {4.6}
$$
\noindent
Set
$$\displaystyle
L(z)=(K_1^{-1}\,K_2\,\alpha_2)\cdots(K_{m-1}^{-1}\,K_m\,\alpha_m).
$$
From (4.5) we have
$$\displaystyle
X_1=e^{\displaystyle\sum_{j=1}^{m-1}(b_j-b_{j+1})\,z_{j+1}}L(z)X_m.
$$
\noindent
Substituting this into (4.4), we obtain
$$\displaystyle
\left(\begin{array}{cc} 1 & -e^{2b_1z_1}\end{array}\right)L(z)X_m=
\displaystyle\frac{w'(0)\,\sqrt{\gamma_1}}{z}e^{b_1\,z_1}
e^{\displaystyle -\sum_{j=1}^{m-1}(b_j-b_{j+1})\,z_{j+1}}.
\tag {4.7}
$$
The equations (4.6) and (4.7) are equivalent to the equation
$$
\displaystyle
\left(\begin{array}{c}
\displaystyle\begin{array}{cc} \displaystyle 1 & -\frac{\displaystyle\sqrt{\gamma_m}-\frac{\rho}{z}}
{\displaystyle\sqrt{\gamma_m}+\frac{\rho}{z}}\end{array}\\
\\
\displaystyle
\left(\begin{array}{cc} 1 & -e^{2b_1\,z_1}\end{array}\right)L(z)
\end{array}
\right)\,X_m
=\displaystyle\frac{w'(0)\,\sqrt{\gamma_1}}{z}e^{b_1\,z_1}
e^{\displaystyle -\sum_{j=1}^{m-1}(b_j-b_{j+1})\,z_{j+1}}
\left(\begin{array}{c} 0\\
\\
\displaystyle 1\end{array}\right).
\tag {4.8}
$$
Solving equation (4.8), we obtain $X_m$. The problem is the asymptotic behaviour
of $L(z)$ as $\tau\longrightarrow\infty$.
Define the {\it transmission coefficient} $T_{kl}$ and {\it reflection coefficient} $R_{kl}$
by the formula
$$\displaystyle
T_{kl}=\frac{2\sqrt{\gamma_k}}{\sqrt{\gamma_k}+\sqrt{\gamma_l}},
\,\,
R_{kl}
=\frac{\sqrt{\gamma_k}-\sqrt{\gamma_l}}
{\sqrt{\gamma_k}+\sqrt{\gamma_l}}.
$$
Set $\delta=\min_{j=1,\cdots,m}\,\{(b_j-b_{j-1})/\sqrt{\gamma_j}\}(>0)$.
\proclaim{\noindent Lemma 4.2.}
We have, as $\tau\longrightarrow\infty$
$$\displaystyle
L(z)=\frac{1}{\displaystyle T_{12}\cdots T_{m-1,\,m}}
\left(\begin{array}{cc}
1 & 0\\
\\
\displaystyle
R_{12} & 0
\end{array}
\right)
+O(e^{-2c\delta\tau}).
\tag {4.9}
$$
\em \vskip2mm
{\it\noindent Proof.}
Using the expression
$$\displaystyle
\alpha_j=\left(\begin{array}{cc}
\displaystyle
1 & 0
\\
\\
\displaystyle
0 & 0
\end{array}
\right)
+O(e^{-2c\tau(b_j-b_{j-1})/\sqrt{\gamma_j}})
$$
and
$$\displaystyle
K_j^{-1}
=
\frac{1}{\displaystyle 2\sqrt{\gamma_j}}
\left(\begin{array}{cc}
\displaystyle
\sqrt{\gamma_j} & 1\\
\\
\displaystyle
\sqrt{\gamma_j} & -1
\end{array}\right),
$$
we have
$$\displaystyle
K_j\,\alpha_j\,K_j^{-1}
=
\frac{1}{\displaystyle 2\sqrt{\gamma_j}}
\left(\begin{array}{c}
\displaystyle
1\\
\\
\displaystyle
\sqrt{\gamma_j}
\end{array}
\right)
\left(\begin{array}{cc}
\displaystyle
\sqrt{\gamma_j} & 1
\end{array}
\right)
+O(e^{-2c\tau(b_j-b_{j-1})/\sqrt{\gamma_j}}).
$$
This gives
$$\begin{array}{l}
\displaystyle
(K_2\alpha_2K_2^{-1})\cdots(K_m\alpha_mK_m^{-1})\\
\\
\displaystyle
=(\frac{1}{2})^{m-1}
\frac{\displaystyle \Pi_{j=2}^{m-1}(\sqrt{\gamma_j}+\sqrt{\gamma_{j+1}})}
{\displaystyle\sqrt{\Pi_{j=2}^m\gamma_j}}
\left(\begin{array}{c} \displaystyle 1\\
\\
\displaystyle
\sqrt{\gamma_2}
\end{array}
\right)
\left(\begin{array}{cc}
\displaystyle
\sqrt{\gamma_m} & 1\end{array}\right)
+O(e^{-2c\delta\tau}).
\end{array}
$$
\noindent
Since
$$\displaystyle
K_1^{-1}\,
\left(\begin{array}{c} \displaystyle 1\\
\\
\displaystyle
\sqrt{\gamma_2}
\end{array}
\right)
\left(\begin{array}{cc}
\displaystyle
\sqrt{\gamma_m} & 1\end{array}\right)\,K_m
=\sqrt{\frac{\gamma_m}{\gamma_1}}\,(\sqrt{\gamma_1}+\sqrt{\gamma_2})\,
\left(\begin{array}{cc}
\displaystyle
1 & 0
\\
\\
\displaystyle
\frac{\sqrt{\gamma_1}-\sqrt{\gamma_2}}
{\sqrt{\gamma_1}+\sqrt{\gamma_2}} & 0
\end{array}
\right),
$$
we obtain
$$\displaystyle
L(z)
=(\frac{1}{2})^{m-1}
\frac{\displaystyle \Pi_{j=1}^{m-1}(\sqrt{\gamma_j}+\sqrt{\gamma_{j+1}})}
{\displaystyle\sqrt{\Pi_{j=1}^{m-1}\gamma_j}}
\left(\begin{array}{cc}
\displaystyle
1 & 0
\\
\\
\displaystyle
\frac{\sqrt{\gamma_1}-\sqrt{\gamma_2}}{\sqrt{\gamma_1}+\sqrt{\gamma_2}} & 0
\end{array}
\right)
+O(e^{-2c\delta\tau}).
$$
This is nothing but (4.9).
\noindent
$\Box$
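\noindent
The limit (4.9) is easy to observe numerically by multiplying out the factors $K_j^{-1}K_{j+1}\alpha_{j+1}$ for a moderate $\tau$. A Python sketch (NumPy only; three illustrative layers):
\begin{verbatim}
# L(z) ~ (T12 ... T_{m-1,m})^{-1} * [[1,0],[R12,0]] as tau -> infinity.
import numpy as np

g  = [1.0, 2.0, 0.5]        # gamma_1, ..., gamma_m (illustrative)
bs = [0.6, 1.3, 2.0]        # b_1, ..., b_m (= a)
c, tau = 1.0, 60.0
z = -c*tau*(1.0 + 1j*np.sqrt(1.0 - 1.0/(c**2*tau)))

K = lambda gj: np.array([[1.0, 1.0], [np.sqrt(gj), -np.sqrt(gj)]], dtype=complex)
L = np.eye(2, dtype=complex)
for j in range(1, len(g)):  # factor K_j^{-1} K_{j+1} alpha_{j+1}
    zj1 = z/np.sqrt(g[j])
    alpha = np.diag([1.0, np.exp(2.0*(bs[j]-bs[j-1])*zj1)])
    L = L @ np.linalg.inv(K(g[j-1])) @ K(g[j]) @ alpha

Tprod = np.prod([2*np.sqrt(g[j])/(np.sqrt(g[j])+np.sqrt(g[j+1]))
                 for j in range(len(g)-1)])
R12 = (np.sqrt(g[0]) - np.sqrt(g[1]))/(np.sqrt(g[0]) + np.sqrt(g[1]))
print(L*Tprod)              # ~ [[1, 0], [R12, 0]]
print(R12)
\end{verbatim}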
\noindent
As a direct consequence of (4.9) we have
$$\begin{array}{l}
\displaystyle
\left(\begin{array}{c}
\displaystyle\begin{array}{cc} \displaystyle 1 & -\frac{\displaystyle\sqrt{\gamma_m}-\frac{\rho}{z}}
{\displaystyle\sqrt{\gamma_m}+\frac{\rho}{z}}\end{array}\\
\\
\displaystyle
\left(\begin{array}{cc} 1 & -e^{2b_1\,z_1}\end{array}\right)L(z)
\end{array}
\right)^{-1}\\
\\
\displaystyle
=
\left(\begin{array}{lr}
\displaystyle 0 & T_{12}\cdots T_{m-1,m}\\
\\
\displaystyle -1 & T_{12}\cdots T_{m-1,m}\\
\\
\end{array}
\right)
+\frac{2\rho}{z\,\sqrt{\gamma_m}}\,
\left(\begin{array}{lr}
\displaystyle 0 & 0\\
\\
\displaystyle
-1 & T_{12}\cdots T_{m-1,m}\end{array}
\right)
+O\left(\frac{1}{\tau^2}\right)
\end{array}
$$
and (4.8) therefore yields
$$
\displaystyle
X_m=
\displaystyle\frac{w'(0)\,\sqrt{\gamma_1}}{z}e^{b_1\,z_1}
e^{\displaystyle -\sum_{j=1}^{m-1}(b_j-b_{j+1})\,z_{j+1}}
T_{12}\cdots T_{m-1,\,m}
\{
\left(\begin{array}{c} \displaystyle 1\\
\\
\displaystyle 1+\frac{2\rho}{z\,\sqrt{\gamma_m}}\end{array}\right)
+O\left(\frac{1}{\tau^2}\right)\}
$$
as $\tau\longrightarrow\infty$.
Define
$$\displaystyle
\varphi_j=\sum_{l=1}^{j-1}b_l\,(z_l-z_{l+1}), \,\,j=2,\cdots,m
$$
and
$$
\varphi(x)=\left\{
\begin{array}{lr}
\displaystyle x\,z_1,&\quad \mbox{if $-\infty<x<b_1$,}\\
\\
\displaystyle x\,z_2+\varphi_2,&\quad \mbox{if $b_1<x<b_2$,}\\
\\
\vdots\\
\\
\displaystyle x\,z_m+\varphi_m, &\quad\mbox{if $b_{m-1}<x<\infty$.}
\end{array}
\right.
$$
The function $\varphi$ has the unique continuous extension to the
whole real line since $\varphi(b_j-0)=\varphi(b_j+0)$ for each
$j=1,\cdots,m-1$. We denote the extension by $\varphi$ again.
\noindent
Since $y(a)=(X_m)_1+(X_m)_2$, $a=b_m$ and
$$\begin{array}{l}
\displaystyle
b_1\,\,z_1-\sum_{j=1}^{m-1}(b_j-b_{j+1})\,z_{j+1}\\
\\
\displaystyle
=b_1\,z_1-(b_1-b_2)\,z_2-(b_2-b_3)\,z_3-\cdots
-(b_{m-1}-b_m)\,z_m\\
\\
\displaystyle
=b_1(z_1-z_2)+b_2\,(z_2-z_3)+\cdots+b_{m-1}\,(z_{m-1}-z_m)+b_m\,z_m
=\varphi(b_m),
\end{array}
$$
we obtain the formula
$$
\displaystyle
y(a)
=\frac{2w'(0)\,\sqrt{\gamma_1}}{z}\,e^{\varphi(a)}\,
T_{12}\cdots T_{m-1,\,m}
\{1+\frac{\rho}{z\,\sqrt{\gamma_m}}+O\left(\frac{1}{\tau^2}\right)\}.
\tag {4.10}
$$
Note that
$$\varphi(a)=-z\,\left(\frac{b_1}{\sqrt{\gamma_1}}+\sum_{j=1}^{m-1}\,\frac{b_{j+1}-b_j}
{\sqrt{\gamma_{j+1}}}\right).
\tag {4.11}
$$
\section{Proof of Theorem 2.3. Part 2. Asymptotic behaviour of $\Psi(a)$}
Integration by parts yields that
the function $\Psi$ is the (weak) solution of the equation
$(\tilde{\gamma}\,y')'-z^2y=0$ in $\Bbb R$ if and only if
$\Psi$ satisfies, for each $j=1,\cdots,m-1$
$$\begin{array}{c}
\displaystyle
\Psi(b_j-0)=\Psi(b_j+0),\\
\\
\displaystyle
\gamma_j\,\Psi'(b_j-0)=\gamma_{j+1}\,\Psi'(b_j+0).
\end{array}
$$
This yields the system of equations for $B_1, A_2,B_2,\cdots, B_{m-1},
A_m$:
$$\displaystyle
e^{(b_j-b_{j+1})\,z_{j+1}}K_{j+1}\alpha_{j+1}Y_{j+1}=K_jY_j,\,\,
j=1,\cdots,m-1
\tag {5.1}
$$
where
$$\begin{array}{lcr}
\displaystyle
Y_1=\left(\begin{array}{c}\displaystyle e^{b_1\,z_1}\\
\\
\displaystyle
B_1 e^{-b_1\,z_1}
\end{array}
\right), &
Y_j=\left(\begin{array}{c}
\displaystyle
A_j e^{b_j\,z_j}\\
\\
\displaystyle
B_j e^{-b_j\,z_j}
\end{array}
\right), &
Y_m=A_m\,e^{b_m\,z_m}\left(\begin{array}{c}\displaystyle
1\\
\\
\displaystyle
0
\end{array}
\right).
\end{array}
$$
From (5.1) one has
$$\begin{array}{l}
\displaystyle
Y_1=e^{\displaystyle\sum_{j=1}^{m-1}(b_j-b_{j+1})\,z_{j+1}}
L(z)Y_m\\
\\
\displaystyle
=A_m
\,e^{b_m\,z_m}\,e^{\displaystyle\sum_{j=1}^{m-1}(b_j-b_{j+1})\,z_{j+1}}L(z)\left(\begin{array}{c}
\displaystyle 1\\
\\
\displaystyle
0
\end{array}
\right).
\end{array}
\tag {5.3}
$$
\noindent
Set $\delta=\min_{j=1,\cdots,m}\,\{(b_j-b_{j-1})/\sqrt{\gamma_j}\}(>0)$.
\noindent
The following indicates the meaning of the coefficients $T_{kl}$
and $R_{kl}$.
\proclaim{\noindent Lemma 5.1.}
For sufficiently large $\tau$ the system of equations (5.1) is uniquely solvable
and, as $\tau\longrightarrow\infty$ the formulae
$$\displaystyle
A_j\,e^{-b_1\,z_1}\,e^{b_j\,z_j}\,
e^{\displaystyle
\sum_{l=1}^{j-1}(b_{l}-b_{l+1})\,z_{l+1}}
=T_{12}\cdots T_{j-1,\,j}+O(e^{-2c\delta\tau})
\tag {5.4}
$$
and
$$
\displaystyle
B_j\,e^{-b_1\,z_1}\,e^{-b_j\,z_j}\,
e^{\displaystyle
\sum_{l=1}^{j-1}(b_{l}-b_{l+1})\,z_{l+1}}
=R_{j,\,j+1}\,T_{12}\cdots T_{j-1,\,j}
+O(e^{-2c\delta\tau}),
\tag {5.5}
$$
are valid.
\em \vskip2mm
{\it\noindent Proof.}
Using (4.9) and (5.3), we find $A_m$ for sufficiently large $\tau$
and obtain (5.4) for $j=m$.
Next, (5.1) for $j=m-1$ yields that $A_{m-1}$ and $B_{m-1}$ are uniquely determined
and that the formulae (5.4) and (5.5) hold for $j=m-1$.
For general $j$ we make use of the recurrence formulae
$$\begin{array}{l}
\displaystyle
A_j=\frac{1}{T_{j,\,j+1}}A_{j+1}\,e^{b_j\,(z_{j+1}-z_j)}
+\frac{R_{j,\,j+1}}{T_{j,\,j+1}}B_{j+1}\,e^{-b_j\,(z_{j+1}+z_j)},\\
\\
\displaystyle
B_j=\frac{R_{j,\,j+1}}{T_{j,\,j+1}}A_{j+1}\,e^{b_j(z_{j+1}+z_j)}
+\frac{1}{T_{j,\,j+1}}B_{j+1}\,e^{-b_j\,(z_{j+1}-z_j)},
\end{array}
$$
which are equivalent to (5.1).
Note that we set $A_1=1$ and $B_m=0$.
\noindent
$\Box$
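\noindent
Numerically, the recurrence is run downward from $(A_m,B_m)=(1,0)$ and the result is rescaled so that $A_1=1$; the formula (5.4) for $j=m$ can then be checked directly. A Python sketch (NumPy only; illustrative parameters):
\begin{verbatim}
# Downward recurrence for (A_j, B_j), normalization A_1 = 1, and a check
# of (5.4) for j = m.
import numpy as np

g  = [1.0, 2.0, 0.5]                 # gamma_1, ..., gamma_m
bs = [0.6, 1.3, 2.0]                 # b_1, ..., b_m (= a)
c, tau = 1.0, 60.0
z  = -c*tau*(1.0 + 1j*np.sqrt(1.0 - 1.0/(c**2*tau)))
zs = [z/np.sqrt(gj) for gj in g]
m  = len(g)

A, B = [0j]*m, [0j]*m
A[-1], B[-1] = 1.0, 0.0
for j in range(m-2, -1, -1):         # interface b_{j+1} sits at bs[j]
    T = 2*np.sqrt(g[j])/(np.sqrt(g[j]) + np.sqrt(g[j+1]))
    R = (np.sqrt(g[j]) - np.sqrt(g[j+1]))/(np.sqrt(g[j]) + np.sqrt(g[j+1]))
    A[j] = (A[j+1]*np.exp(bs[j]*(zs[j+1]-zs[j]))
            + R*B[j+1]*np.exp(-bs[j]*(zs[j+1]+zs[j])))/T
    B[j] = (R*A[j+1]*np.exp(bs[j]*(zs[j+1]+zs[j]))
            + B[j+1]*np.exp(-bs[j]*(zs[j+1]-zs[j])))/T
A, B = [x/A[0] for x in A], [x/A[0] for x in B]

phase = -bs[0]*zs[0] + bs[-1]*zs[-1] + sum((bs[l-1]-bs[l])*zs[l]
                                           for l in range(1, m))
Tprod = np.prod([2*np.sqrt(g[j])/(np.sqrt(g[j])+np.sqrt(g[j+1]))
                 for j in range(m-1)])
print(A[-1]*np.exp(phase), Tprod)    # agree up to an exponentially small error
\end{verbatim}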
\noindent
From Lemma 5.1 one has, as $\tau\longrightarrow\infty$
$$\begin{array}{c}
\displaystyle
A_2\,e^{-\varphi_2}=T_{12}+O(e^{-2c\delta\tau}),\\
\\
\displaystyle
A_3\,e^{-\varphi_3}=T_{12}\,T_{23}+O(e^{-2c\delta\tau}),\\
\\
\displaystyle
\vdots
\\
\\
\displaystyle
A_m\,e^{-\varphi_m}=T_{12}\,T_{23}\cdots T_{m-1,m}+O(e^{-2c\delta\tau});
\\
\\
\displaystyle
B_1\,e^{-2b_1\,z_1}=R_{12}+O(e^{-2c\delta\tau}),\\
\\
\displaystyle
B_2\,e^{-2b_2\,z_2}\,e^{-\varphi_2}
=T_{12}\,R_{23}+O(e^{-2c\delta\tau}),\\
\\
\displaystyle
\vdots\\
\\
\displaystyle
B_{m-1}\,e^{-2b_{m-1}\,z_{m-1}}
\,e^{-\varphi_{m-1}}
=T_{12}\,T_{23}\cdots T_{m-2,\,m-1}R_{m-1,\,m}+O(e^{-2c\delta\tau}).
\end{array}
$$
\noindent
Moreover, it follows that
$$
\frac{B_j\, e^{-x\,z_j}}
{A_j\,e^{x\,z_j}}=O(e^{-2c\tau(b_j-x)/\sqrt{\gamma_j}}),\,\,x<b_j.
$$
This gives, as $\tau\longrightarrow\infty$ the asymptotic formula of $\Psi$:
$$
\Psi(x)\sim\left\{
\begin{array}{lr}
\displaystyle e^{\varphi(x)},&\quad \mbox{if $-\infty<x<b_1$,}\\
\\
\displaystyle T_{12}\,e^{\varphi(x)},&\quad \mbox{if $b_1<x<b_2$,}\\
\\
\vdots\\
\\
\displaystyle T_{12}\cdots T_{m-1,\,m}\,e^{\varphi(x)}, &\quad\mbox{if $b_{m-1}<x<\infty$.}
\end{array}
\right.
$$
\noindent
The formula (5.4) for $j=m$ gives
$$\displaystyle
\Psi(a)\,e^{-\varphi(a)}
=T_{12}\,\cdots T_{m-1,\,m}
+O(e^{-2c\delta\tau}).
\tag {5.6}
$$
\noindent The following estimates are a direct corollary of (5.4)
and (5.5):
$$\begin{array}{l}
\displaystyle
\vert A_j\vert
=O
\left(e^{-c\,b_1\tau/\sqrt{\gamma_1}}\,e^{c\,b_j\tau/\sqrt{\gamma_j}}\,
e^{\displaystyle -c\sum_{l=1}^{j-1}(b_{l+1}-b_l)\,\tau/\sqrt{\gamma_{l+1}}}\right);\\
\\
\displaystyle
\vert B_j\vert
=O\left(e^{-c\,b_1\tau/\sqrt{\gamma_1}}\,e^{-c\,b_j\tau/\sqrt{\gamma_j}}\,
e^{\displaystyle -c\,\sum_{l=1}^{j-1}(b_{l+1}-b_l)\,\tau/\sqrt{\gamma_{l+1}}}\right).
\end{array}
$$
\noindent
Applying these estimates to the expression
$$\displaystyle
\int_{b_{j-1}}^{b_j}u(\xi,T)\Psi(\xi)d\xi
=A_j\int_{b_{j-1}}^{b_j}u(\xi,T)e^{\xi\,z_j}d\xi
+B_j\int_{b_{j-1}}^{b_j}u(\xi,T)e^{-\xi\,z_j}d\xi,
$$
we obtain the estimate
$$\displaystyle
\int_0^au(\xi,T)\Psi(\xi)d\xi
=O(1).
\tag {5.7}
$$
Recalling the definition of $w$, the equation $\Psi'(a)=z_m\Psi(a)$ and using integration by parts, we
have
$$\begin{array}{l}
\displaystyle
I_{c}(\tau)
=\gamma_1\,w'(0)\Psi(0)
-w(0)\,\gamma_1\Psi'(0)\\
\\
\displaystyle
=-(\rho\,+\gamma_m\,z_m)\Psi(a)w(a)
-e^{-z^2T}\int_0^a u(\xi,T)\Psi(\xi)d\xi.
\end{array}
$$
\noindent
Then Lemma 4.1, (4.10), (5.6) and (5.7) yield
the asymptotic formula for the indicator function.
\proclaim{\noindent Proposition 5.2.}
As $\tau\longrightarrow\infty$ the formula
$$\begin{array}{l}
\displaystyle
I_{c}(\tau)\,e^{-2\,\varphi(a)}
=-2\,\sqrt{\gamma_1\,\gamma_m}\,w'(0)\,(T_{12}\cdots T_{m-1,\,m})^2
\{1+\frac{2\rho}{z\,\sqrt{\gamma_m}}+O(\frac{1}{\tau^2})\}
\\
\\
\displaystyle
+O(\tau e^{\displaystyle -\tau\{T-
2c\,(
\frac{b_1}{\sqrt{\gamma_1}}+
\sum_{j=1}^{m-1}\frac{b_{j+1}-b_j}{\sqrt{\gamma_{j+1}}}
)\}}),
\end{array}
$$
is valid.
\em \vskip2mm
\noindent
Theorem 2.3 is an immediate corollary of (4.11), Proposition 5.2,
the expression
$$\displaystyle
w'(0)=\int_0^T e^{-z^2\,t}u_x(0,t)dt,
$$
the assumption (2.7) and the fact $w'(0)=O(1)$ as $\tau\longrightarrow\infty$.
{\bf\noindent Remark 5.1.}
Once $\varphi(a)$ is known (see (4.11)), then one immediately obtains the extraction formula of $\rho$:
$$
\displaystyle
\lim_{\tau\longrightarrow\infty}\left(\frac{I_c(\tau)\,e^{-2\,\varphi(a)}}{2\sqrt{\gamma_1\,\gamma_m}\,w'(0)\,
(T_{12}\cdots T_{m-1,\,m})^2}+1\right)\,z\,\sqrt{\gamma_m}=-2\rho.
$$
\section{Proof of Theorem 2.5}
Define
$$\displaystyle
w(x,\tau)=\int_0^T e^{-z^2\,t}\,u(x,t)dt,\,\,0<x<a.
$$
This $w$ satisfies
$$\begin{array}{c}
\displaystyle
(\gamma\,w')'-z^2\,w=e^{-z^2\,T}u(x,T),\,\,\mbox{in}\,]0,\,a[,\\
\\
\displaystyle
\gamma(a)\,w'(a)+\rho\,w(a)=0.
\end{array}
$$
Integration by parts gives the expression
$$
I(z)
=-w(a,z)(\gamma(a)\,\Psi'(a;z,M)+\rho\,\Psi(a;z,M))
-\int_0^a\,e^{-z^2\,T}u(x,T)\Psi(x;z,M)dx.
\tag {6.1}
$$
Proposition 2.4 gives
$$\displaystyle
\gamma(a)\,\Psi'(a;z,M)+\rho\,\Psi(a;z,M)
=z\,\gamma(a)^{1/4}\,e^{K_a\,z\,\pi}
\{1+O\left(\frac{1}{\vert z\vert}\right)\}.
\tag {6.2}
$$
Therefore it suffices to study the asymptotic behaviour of
$w(a,z)$ as $\vert z\vert\longrightarrow\infty$. For this purpose
we first study the asymptotic behaviour of the unique solution of
the boundary value problem as $\vert z\vert\longrightarrow\infty$:
$$\begin{array}{c}
\displaystyle
(\gamma y')'-z^2y=0\,\,\mbox{in}\,]0,\,a[,\\
\\
\displaystyle
y'(0)=1,\\
\\
\displaystyle
\gamma(a)y'(a)+\rho\,y(a)=0.
\end{array}
$$
We make use of the Liouville transform.
Define
$$\displaystyle
K_a=\frac{1}{\pi}\int_0^a\frac{dx}{\sqrt{\gamma(x)}}
$$
and
$$
\displaystyle
s(x;a)=\frac{1}{K_a}\int_0^x\frac{dx}{\sqrt{\gamma(x)}},\,\,0\le x\le a.
$$
Denote by $x(s;a),\,0\le s\le\pi$ the inverse of the function $s=s(x;a)$.
Set
$$\displaystyle
\tilde{y}(s)=K_a\gamma(0)^{1/4}\gamma(x(s;a))^{1/4}y(x(s;a)).
\tag {6.3}
$$
Then this $\tilde{y}$ satisfies
$$\begin{array}{c}
\displaystyle
\tilde{y}''-(K_a^2z^2+g_a(s))\tilde{y}=0\,\,\mbox{in}\,]0,\,\pi[,\\
\\
\displaystyle
\tilde{y}'(0)-h_a\,\tilde{y}(0)=1,\\
\\
\displaystyle
\tilde{y}'(\pi)+H_a\,\tilde{y}(\pi)=0
\end{array}
$$
where
$$\displaystyle
h_a=\frac{\gamma'(0)}{4K_a\,\gamma(0)^{3/2}},\,\,
H_a=\frac{4\rho-\gamma'(a)}
{4K_a\,\gamma(a)^{3/2}}.
$$
and
$$\displaystyle
f_a(s)=\gamma(x(s;a))^{1/4},\,\,
g_a(s)=\frac{f_a''(s)}{f_a(s)}.
$$
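\noindent
On a grid, the Liouville variables are obtained by cumulative quadrature: $K_a$ by the trapezoidal rule, $s(x;a)$ by a cumulative sum, and the inverse $x(s;a)$ by monotone interpolation. A Python sketch (NumPy only; $\gamma$ is an arbitrary smooth example):
\begin{verbatim}
# K_a, s(x;a) and its inverse x(s;a) on a grid; s(a;a) = pi by construction.
import numpy as np

gamma = lambda x: 1.0 + 0.5*np.sin(x)        # illustrative gamma in C^2
a = 0.8
x = np.linspace(0.0, a, 20001)
f = 1.0/np.sqrt(gamma(x))
Ka = np.trapz(f, x)/np.pi
s = np.concatenate(([0.0],
     np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(x))))/Ka   # s = s(x;a)
x_of_s = lambda sq: np.interp(sq, s, x)                # inverse; s is increasing
f_a = gamma(x_of_s(np.linspace(0.0, np.pi, 1001)))**0.25
print(Ka, s[-1], f_a[0])                     # s[-1] ~ pi, f_a(0) = gamma(0)^{1/4}
\end{verbatim}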
\proclaim{\noindent Lemma 6.1.}
As $\vert z\vert\longrightarrow\infty$ we have
$$\displaystyle
\tilde{y}(\pi)=\frac{2}{K_a\,z}e^{K_a\,z\,\pi}\{1+O\left(\frac{1}{\vert z\vert}\right)\}.
\tag {6.4}
$$
\em \vskip2mm
{\it\noindent Proof.}
It is easy to see that $\tilde{y}$ has the expression
$$\displaystyle
\tilde{y}(s)=\frac{\varphi(s)}{\varphi'(0)-h_a\varphi(0)}
$$
where $\varphi$ is the unique solution of the initial value
problem:
$$\begin{array}{c}
\displaystyle
\varphi''-(K_a^2\,z^2+g_a(s))\varphi=0\,\,\mbox{in}\,]0,\,\pi[,\\
\\
\displaystyle
\varphi(\pi)=1,\\
\\
\displaystyle
\varphi'(\pi)=-H_a.
\end{array}
$$
Here we cite a known important fact:
there exists a fundamental system of solutions $e_1(x,z)$ and $e_2(x,z)$
of equation $y''-(K_a^2\,z^2+g_a(s))y=0$ in $]0,\,\pi[$ such that
as $\vert z\vert\longrightarrow\infty$, $\mbox{Re}\,z\le 0$, uniformly in $s$,
$$\begin{array}{c}
\displaystyle
e_1(s,z)=e^{K_a\,z\,s}
\{1+O\left(\frac{1}{\vert z\vert}\right)\},\\
\\
\displaystyle
e_1'(s,z)=K_a\,z\,e^{K_a\,z\,s}\{1+O\left(\frac{1}{\vert z\vert}\right)\},\\
\\
\displaystyle
e_2(s,z)=e^{-K_a\,z\,s}\{1+O\left(\frac{1}{\vert z\vert}\right)\},\\
\\
\displaystyle
e_2'(s,z)=-K_a\,z\,e^{-K_a\,z\,s}\{1+O\left(\frac{1}{\vert z\vert}\right)\}
\end{array}
$$
See again Theorem 1 of p. 48 in \cite{N}.
\noindent
Thus one can write
$$
\varphi(s)=A_1\,e_1(s,z)+A_2\,e_2(s,z)
$$
where $A_1$ and $A_2$ are constants. The initial conditions on
$\varphi$ yield
$$\begin{array}{c}
\displaystyle
A_1=\frac{1}{W(e_1(\,\cdot\,,z),e_2(\,\cdot\,,z))(\pi)}(e_2'(\pi,z)+e_2(\pi,z)H_a),\\
\\
\displaystyle
A_2=-\frac{1}{
W(e_1(\,\cdot\,,z),e_2(\,\cdot\,,z))(\pi)
}(e_1'(\pi,z)+e_1(\pi,z)H_a).
\end{array}
$$
The asymptotic behaviour of $e_1$ and $e_2$ yields
$$\displaystyle
W(e_1(\,\cdot\,,z),e_2(\,\cdot\,,z))(\pi)
=-2K_a\,z
\{1+O\left(\frac{1}{\vert z\vert}\right)\}
$$
and also
$$\displaystyle
A_1=\frac{1}{2}e^{-K_a\,z\,\pi}
\{1+O\left(\frac{1}{\vert z\vert}\right)\},\,\,
A_2=\frac{1}{2}e^{K_a\,z\,\pi}
\{1+O\left(\frac{1}{\vert z\vert}\right)\}.
$$
From these we obtain
$$\begin{array}{c}
\displaystyle
\varphi'(0)-h_a\,\varphi(0)
=\frac{K_a\,z}{2}e^{-K_a\,z\pi}
\{1+O\left(\frac{1}{\vert z\vert}\right)\},\\
\\
\displaystyle
\varphi(\pi)=1+O\left(\frac{1}{\vert z\vert}\right).
\end{array}
$$
This yields (6.4).
\noindent
$\Box$
\noindent
A combination of (6.3) and (6.4) gives
$$\displaystyle
y(a)=\frac{2\,e^{K_a\,z\,\pi}}{K_a^2\,\gamma(0)^{1/4}\,\gamma(a)^{1/4}z}
\{1+O\left(\frac{1}{\vert z\vert}\right)\}.
\tag {6.5}
$$
Using a standard argument, we have
$$\displaystyle
\frac{w(x,z)}
{w'(0,z)}=y(x)+O(e^{\displaystyle -\mbox{Re}\,z^2\,T}).
\tag {6.6}
$$
A combination of (6.5) and (6.6) yields
$$\displaystyle
w(a,z)=\frac{2w'(0,z)}{K_a^2\,\gamma(0)^{1/4}\,\gamma(a)^{1/4}}
\frac{e^{K_a\,z\,\pi}}{z}
\{1+O\left(\frac{1}{\vert z\vert}\right)\}
+O(e^{\displaystyle -\mbox{Re}\,z^2\,T}w'(0,z)).
\tag {6.7}
$$
Now from (6.1), (6.2) and (6.7) we obtain the crucial formula:
$$\begin{array}{c}
\displaystyle
I(z)
=-\frac{2\,w'(0,z)}
{K_a^2\,\gamma(0)^{1/4}}
e^{2\,K_a\,z\,\pi}\{1+O\left(\frac{1}{\vert z\vert}\right)\}
\\
\\
\displaystyle
+O\left(e^{\displaystyle -\mbox{Re}\,z^2\,T}\vert w'(0,z)\vert\,\vert z\vert\,
e^{\displaystyle K_a\,\mbox{Re}\,z\pi}\right)
+O\left(e^{\displaystyle -\mbox{Re}\,z^2\,T}\right).
\end{array}
\tag {6.8}
$$
Then Theorem 2.5 follows from the formula (6.8)
with the choices $z=-c\tau(1+i\sqrt{1-1/(c^2\tau)}\,)$ (part (1))
and $z=-\tau$ (part (2)), together with the expression
$$
\displaystyle w'(0,z)=\int_0^T e^{-z^2\,t}u_x(0,t)dt.
$$
More precisely, both (2.11) in case (1) and (2.13) in case (2) ensure
$$\displaystyle
C\tau^{\mu}\le\vert w'(0,z)\vert,\,\,\tau\ge\tau_0.
$$
From Remark 1.1 one has $w'(0,z)=O(1)$ as $\tau\longrightarrow\infty$.
These yield the estimate: for suitable positive constants $C_1$, $C_2$ and for all $\tau\gg 1$
$$\displaystyle
C_1\tau^{\mu}\le\vert I(z)\vert e^{\displaystyle -2\,K_a\,\mbox{Re}\,z\pi}\le C_2.
$$
Note that (2.10) is essential in case (1). This completes the proof of Theorem 2.5.
$$\quad$$
\centerline{{\bf Acknowledgement}}
This research was partially supported by Grant-in-Aid for
Scientific Research (C)(No. 18540160) of Japan Society for
the Promotion of Science.
\section{Introduction}
In the late 1960's, pulsars were discovered in the Crab Nebula and the Vela supernova remnant, proving the relationship between neutron star formation and supernovae.
These remained the only such associations for many years, but now, especially as a result of X-ray observations, there are many more.
There have also been discoveries extending the known range of neutron star and supernova properties.
These developments allow a fresh look at the role of neutron stars in supernovae and their remnants.
\section{The Variety of Neutron Stars}
The effect of a neutron star (NS) on its surrounding supernova and the observability of the neutron star depend on the magnetic field strength and the rotation period of the neutron star.
Recent studies of the radio pulsar population find a normal distribution of initial rotation period with $<P_0>\sim 300$ ms and $\sigma_{P_0}\sim 150$ ms and a lognormal distribution of magnetic field with $<\log(B_0/G)>\sim 12.65$ and $\sigma_{\log(B_0)}\sim 0.55$ \cite{fkaspi06}.
When magnetars are included in the neutron star population, the range of inferred magnetic field for the population, including thermally emitting isolated NSs, radio pulsars, and magnetars, is increased.
Popov et al. \cite{popov10} find a lognormal distribution with $<\log(B_0/G)>\sim 13.25$ and $\sigma_{\log(B_0)}\sim 0.6$; about 10\% of NSs are born as magnetars in their model.
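As a rough illustration (and nothing more), drawing samples from the quoted distributions reproduces the $\sim$10\% magnetar fraction if one labels as magnetars the stars with $B>10^{14}$ G; this threshold is an assumption of the sketch below, not a criterion taken from the cited work.
\begin{verbatim}
# Monte Carlo draw from the quoted population models; the B > 1e14 G
# magnetar criterion is an assumption of this sketch.
import numpy as np
rng = np.random.default_rng(0)

n = 1_000_000
P0   = rng.normal(0.300, 0.150, n)   # initial period [s]; negative draws
                                     # would be truncated in a careful model
logB = rng.normal(13.25, 0.60, n)    # log10(B/G), Popov et al. parameters
print(P0.mean(), np.mean(logB > 14.0))   # ~0.30 s, ~0.11 i.e. about 10%
\end{verbatim}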
The highly magnetized neutron stars are not generally radio emitters, although there is some overlap between the two populations.
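As a rough numerical consistency check of the quoted field distribution, one can integrate its Gaussian tail above a nominal magnetar threshold; in the sketch below (Python), the $10^{14}$ G threshold is our assumption for illustration rather than a value taken from \cite{popov10}:
\begin{verbatim}
from math import erf, sqrt

# Lognormal field distribution of Popov et al.:
# <log10(B/G)> = 13.25, sigma = 0.6.
mu, sigma = 13.25, 0.6
# Assumed magnetar threshold: B > 1e14 G (our choice, for illustration).
z = (14.0 - mu) / sigma
tail = 0.5 * (1.0 - erf(z / sqrt(2.0)))   # Gaussian upper-tail probability
print("fraction with B > 1e14 G: %.1f%%" % (100.0 * tail))   # ~10.6%
\end{verbatim}
in line with the $\sim 10\%$ magnetar birth fraction in their model.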
Another approach to the population of NSs is to examine the compact sources at the centers of young ($\le 3000$ yr old) core collapse supernova remnants.
The types of compact objects and examples of each are:
\begin{itemize}
\item
Rotation powered pulsars (or compact sources with wind nebulae) with external interaction: G21.5--0.9, 0540--69, MSH 15-52, Kes 75, G11.2--0.3
\item
Rotation powered pulsars (or compact sources with wind nebulae) without external interaction: Crab Nebula, 3C58, G54.1+0.3
\item
Magnetars: Kes 73, CTB 37B, G327.24--0.13
\item
Compact central objects: Cas A, Puppis A, G350.1--0.3
\item
No compact object detected: 1E 0102.2--7219.
\end{itemize}
Most of these objects are described in \cite{chev05}, where references can be found.
More recent discoveries involve the neutron stars in G21.5--0.9 \cite{gupta05,camilo06}, CTB 37B \cite{HG10b},
G327.24--0.13 \cite{GG07}, and G350.1--0.3 \cite{gaens08}.
The rotation powered pulsars (RPPs) have been divided into 2 groups depending on whether the interaction with the external medium has been detected; deeper observations may result in the observation of such interaction, as in the case of G21.5--0.9 \cite{math05}.
Among the rotation powered pulsars studied in \cite{chev05}, the range of estimated initial periods is $10-100$ ms, with an average of 40 ms.
These estimates are smaller than the estimate of the mean from the radio pulsar population \cite{fkaspi06},
suggesting that only the high $\dot E$ part of the young pulsar distribution is being observed.
Young pulsars with low $\dot E$ and weak external interaction would be difficult to detect.
An important recent development is the study of X-ray periods and $\dot P$'s for 3 compact central objects (CCOs).
The results on the period and magnetic field are 105 ms and $3.1\times 10^{10}$ G for Kes 79 \cite{HG10a}, 112 ms and $<9.8\times 10^{11}$ G for Puppis A \cite{GH09}, and 424 ms and $<3.3\times 10^{11}$ G for G296.5+10.0 \cite{GH07}.
Pulsations have not been detected from the compact X-ray source in Cas A, but a recent interpretation of the X-ray spectrum in terms of a carbon atmosphere indicates a magnetic field $<8\times 10^{10}$ G due to the lack of line features \cite{HH09}.
These results imply that there is a significant population of low magnetic field neutron stars that are not included in the radio pulsar population because of their low radio luminosities.
The numbers are small, so it is not possible to draw firm conclusions about the distribution of pulsar magnetic fields.
As with pulsars with long initial periods, central compact objects in remnants with weak external interaction are probably missing from the current observed population.
The 3 observed CCOs with periods have such low magnetic fields that little evolution in period is expected over the age of the remnant.
The 3 objects have an average $P$ of 214 ms, which is close to the mean of 300 ms estimated for young radio pulsars \cite{fkaspi06}.
In this case, the X-ray luminosity is thermal and may not be related to the spin rate, so that the periods are more representative than for RPPs.
Magnetars are observed to have long periods $\sim 5-12$ s and are presumed to be strongly spun down, so there is little information about their initial periods in their current spin parameters.
In the dynamo theory for the buildup of the high magnetic field, the initial period is $1-2$ ms and the rotational energy is rapidly deposited in the supernova \cite{DT92}.
The prediction of this model is that the supernovae should be of high energy and magnetars could have high space velocities \cite{DT92}.
These predictions have not been confirmed \cite{vink06}, so a ms birth period does not appear to be required for the formation of a magnetar.
For magnetic dipole radiation with a neutron star radius of $10^6$ cm and $\sin^2{\alpha}=1/2$, where $\alpha$ is the angle between the magnetic and the rotation axes, the spindown power of a pulsar is $\dot E\approx 6\times 10^{34}B_{13}^2(P/300~{\rm ms})^{-4}$ ergs s$^{-1}$, where $B_{13}$ is the pulsar magnetic field in units of $10^{13}$ G and $P$ is scaled to the mean found in \cite{fkaspi06}.
For $B=10^{11}$ G and $P=100$ ms, $\dot E=5\times 10^{32}$ ergs s$^{-1}$, which is less than the typical thermal luminosity emitted by CCOs.
Thus, such a neutron star is plausible for the remnant 1E 0102.2 in the SMC, which is found to have an X-ray luminosity that is below that of the Cas A CCO \cite{rutkowski10}.
Kaplan et al. \cite{kaplan04} have set limits on central sources in shell remnants below 1/10 the Cas A CCO luminosity.
In the case of 1E 0102.2, a massive star event is indicated for the young remnant, whereas Type Ia events or a central black hole remain possibilities for the others.
However, the likely low $\dot E$s for pulsars can easily account for faint central sources.
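The arithmetic behind these estimates is elementary; the following sketch (Python) simply evaluates the dipole spindown formula as normalized above:
\begin{verbatim}
# Edot ~ 6e34 * B13^2 * (P/300 ms)^(-4) ergs/s, as in the text.
def spindown_power(B_gauss, P_ms):
    B13 = B_gauss / 1.0e13
    return 6.0e34 * B13**2 * (P_ms / 300.0)**(-4)

# CCO-like parameters: B = 1e11 G, P = 100 ms.
print("%.1e ergs/s" % spindown_power(1.0e11, 100.0))   # ~4.9e32
\end{verbatim}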
\section{Progenitors and Magnetic Fields}
The results described in the previous section show that the range of magnetic fields for young neutron stars is large: from $3\times 10^{10}$ G to $\sim 10^{15}$ G.
The factors that determine the range are not understood, but the mass of the progenitor star may be related to the field strength.
An indication of this came from the finding that the magnetars CXO J164710.2-455216 and SGR 1806-20 are associated with young stellar clusters that imply a progenitor mass $>(40-50)~M_{\odot}$ \cite{muno06,figer05}.
However, the magnetar SGR1900+14 was found in a cluster with lower mass stars, implying a progenitor mass of $17\pm 2~M_{\odot}$ \cite{davies09}.
A magnetar lies within the young remnant Kes 73 \cite{VG97}, which shows evidence for circumstellar interaction that can be interpreted as indicating a Type IIL/b progenitor
\cite{chev05}; this would imply an intermediate mass progenitor.
The technique of using an associated cluster to estimate progenitor mass was recently used to estimate the progenitor mass of G54.1+0.3, yielding a mass of $\sim 17~M_{\odot}$ (B. Koo, in prep.); in this case, the $P$ and $\dot P$ of the pulsar imply a magnetic field of $1\times 10^{13}$ G.
In other cases associated clusters are generally not observed, so that other means must be used to estimate the progenitor mass.
There are a number of ways in which the core collapse supernova Type can be inferred from observations of young supernova remnants \cite{chev05}. The main Types are IIP, where the explosion occurs with most of its H envelope intact, IIL/b in which only a fraction of the H envelope is left at the time of the explosion, and Ib/c in which the H envelope is completely lost.
For single stars, this listing of Types is in order of increasing progenitor mass, but the later Types may occur at lower masses for stars in close binary systems.
The compilation of objects in \cite{chev05} did not show a clear trend of magnetic field strength with Type.
However, 0540--69 was listed as a Type Ib/c event, but recent observations show that it has low velocity hydrogen \cite{sera05}, so it is more likely to be a Type IIP.
With this change, there does appear to be a trend with Type, such that the likely more massive progenitors leave more strongly magnetized pulsars.
Other recent observations have led to new information in this area.
As mentioned in the previous section, the X-ray spectrum of the Cas A central source suggests a low magnetic field, $<8\times 10^{10}$ G.
Observations of the light echo of the Cas A supernova have shown that it was a Type IIb event, like SN 1993J \cite{krause08}; the supernova had been previously inferred to be of this Type
from the properties of the supernova remnant \cite{chev05}.
Nucleosynthesis constraints suggest a progenitor mass of Cas A of $(15-25)~M_{\odot}$ \cite{young06}.
The Puppis A remnant, which also has a neutron star with low magnetic field, was also inferred to have a Type IIL/b supernova based on the circumstellar interaction and the composition of freely expanding ejecta \cite{chev05}.
On the other hand, the Crab Nebula, which has a pulsar with an estimated field of $4\times 10^{12}$ G, is inferred to have a progenitor mass of $(8-10)~M_{\odot}$ from nucleosynthesis arguments.
The progenitor mass is lower than that of the CCOs, but the magnetic field is higher.
Over the entire range of magnetic field ($3\times 10^{10}-10^{15}$ G), there is some tendency
for more massive progenitors to yield a more highly magnetized neutron star, but it is clear
that other parameters, such as rotation, metallicity, or binarity, must also play a role.
\section{Neutron Stars in Supernovae}
In SN 1987A, neutrinos were observed over a time of 10 s, indicating the presence of a neutron star on that timescale \cite{arnett89}.
The youngest neutron star in a remnant is the Cas A compact object, so there is a period from $10-10^{10}$ s over which neutron stars are not clearly observed.
Although direct evidence is lacking, there is the expectation that pulsar power should be present in supernovae and so there has been attention to the ways in which that power might manifest itself.
The notion that pulsar power is responsible for the light curves of core collapse supernova was proposed soon after the discovery of pulsars.
Initial models attributed the explosion energy as well as the light to the pulsar \cite{bo74}, but the sweeping of all the progenitor mass into a shell, as expected in this case, is not compatible with observations.
Subsequent models assumed that there was an initial explosion and that the pulsar power was responsible for the light curve \cite{gaffet77a,gaffet77b}.
However, models with an instantaneous explosion in a massive star at the end of its evolution with allowance for power from radioactivity have been widely successful in reproducing supernova light curves and spectra, so that pulsar models have not been favored.
The discovery of peculiar and luminous supernovae has brought back the idea of pulsar power \cite{maeda07,kasen10,woosley10}.
With magnetar power, the timescale for power input from the pulsar may be comparable to the diffusion time for radiation, so that the pulsar rotational energy can be efficiently turned into radiation for the light curve.
Some assumptions are made in these models.
One is that the spindown power for the pulsar is given by the formula for a rotating magnetic dipole in a vacuum.
A newly formed neutron star would find itself in an especially dense environment.
Fallback of matter from the surrounding supernova can accrete to the neutron star; this effect may be important for pulsar magnetic fields $\sim 10^{12}$ G, but at magnetar fields, the magnetic effects are likely to dominate near the neutron star \cite{chev89}.
The application of the spindown formula still requires that supersonic/super-Alfv\'enic flow develop around the pulsar so that the vacuum solution applies.
In the standard MHD picture for pulsar nebulae, the flow in the wind nebula must decelerate from mildly relativistic velocities near the wind termination shock to the velocity of the outer boundary.
For the case of a slow outer boundary, as expected for a recently formed pulsar, this is likely to require a pulsar wind with relatively small magnetization.
Observed pulsar wind nebulae are not especially radiatively efficient.
Most of the pulsar spindown power goes into the internal energy of the shocked wind bubble and the kinetic energy of the swept up shell \cite{chev05}.
The nebula with the highest radiative efficiency is the Crab Nebula, which radiates $\sim 1/3$ of the pulsar spindown power in synchrotron radiation; the high efficiency is due to the high magnetic field resulting from the high spindown power and youth of the system.
For the very young pulsars considered here, synchrotron radiation losses are expected to be even more efficient.
Inverse Compton losses of energetic particles in the supernova radiation field are also efficient at early times.
Another issue is the degree of asymmetry of the pulsar nebula.
Models of gamma-ray bursts with magnetar power have shown that relativistic jets can be produced for a $10^{15}$ G neutron star with a $(1-2)$ ms initial rotation period \cite{kom07,bucc08}.
An important effect in producing the collimation is the buildup of toroidal magnetic flux in the shocked wind nebula and the hoop stresses of the toroidal flux.
Once the jet breaks out of the star, most of the pulsar power goes into jets and not into the supernova envelope.
The models considered by \cite{kasen10} have initial pulsars with lower magnetic fields, but comparable rotation energies and asymmetries could limit the transfer of power to the supernova envelope.
Given the parameters ejecta mass $M$ and pulsar initial spin $P_0$ and magnetic field $B$, values can be found that approximately reproduce the peak luminosity and timescale of very luminous supernovae \cite{kasen10}.
In particular, the values $M=5~M_{\odot}$, $P_0=2$ ms, and $B=2\times 10^{14}$ G give a reasonable fit to the luminosity of SN 2008es \cite{kasen10,miller09,gezari09}.
However, the luminosity does not provide a strong constraint on the model.
Another source of power, such as interaction with dense mass loss, could presumably give a similar result if the mass loss parameters could be varied.
What is needed are properties of the magnetar model that are characteristic of this mechanism.
With the assumption of evolution with constant braking index $n$, the pulsar power
has the time dependence
$\dot E \propto (1+t/t_p)^{-(n+1)/(n-1)}$,
where $t_p$ is the initial spindown timescale.
Measured braking indices cover a range from 1.4 to 2.9 (see \cite{chev05} for references).
The light curve thus evolves to a power law with time in the declining phase, with the power law index ranging from 2 to 6, if the braking indices from pulsars with ages $10^3-10^4$ yr are applicable to the early times considered here.
As discussed by \cite{gezari09}, the decline of SN 2008es is approximately exponential and drops below the magnetar model at an age $\sim 100$ days \cite{kasen10}.
Another supernova, SN 2010gx, had similar luminosity, light curve shape, and temperature to SN 2008es, and was followed to a fainter magnitude relative to the peak \cite{past10}.
The sharp drop of 5 magnitudes
appears to be incompatible with the slow decline expected with power input from $^{56}$Co decays or spindown of a pulsar, even with $n=1.4$.
The light curve can presumably be reproduced by interaction with dense mass loss near
the supernova, although the required density distribution is ad hoc.
One difference between SN 2010gx and SN 2008es is that SN 2010gx did not show lines of H or He.
However, SN 2010gx is representative of a group of events \cite{past10} and their similar properties suggest that the same mechanism may be operating in all of them.
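The range of late-time decline rates implied by the constant braking index law quoted above is easily tabulated; since $L\propto t^{-k}$ with $k=(n+1)/(n-1)$ at late times, the light curve drops by $2.5\,k$ magnitudes per decade in time. A short sketch (Python):
\begin{verbatim}
# Late-time power-law index k = (n+1)/(n-1) and the corresponding
# decline rate of 2.5*k magnitudes per decade in time.
for n in (1.4, 2.0, 2.5, 2.9):
    k = (n + 1.0) / (n - 1.0)
    print("n = %.1f: k = %.2f, %.1f mag per decade in t" % (n, k, 2.5 * k))
# n = 1.4 gives k = 6.0; n = 2.9 gives k ~ 2.1, spanning the quoted 2-6 range.
\end{verbatim}
Even the steepest of these power laws falls off more slowly than the sharp observed declines discussed above.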
Other distinguishing properties of pulsar power may come from spectroscopic observations.
With pulsar power in a supernova, the power goes into inflating a wind bubble in the supernova interior, bounded by an accelerating shell \cite{chev77}.
The shell acceleration stops when most of the pulsar rotational energy has been deposited so that the shell freely expands with the supernova ejecta.
Using their magnetar model, Kasen and Bildsten \cite{kasen10} modeled the luminosity evolution of SN 2007bi, which was suggested to be a pair instability supernova with power input from radioactivity \cite{galyam09}.
The light curve showed a decline over 100's of days that was consistent with radioactivity, but the late emission expected from pulsar power also gives a reasonable fit.
A distinguishing feature is provided by the late spectroscopy, which shows centrally peaked lines, including Fe lines, as expected for radioactive power input.
In the pulsar power case, there should be relatively little material at low velocity.
Even though pair instability supernovae had not been expected for stripped envelope stars, which is the case for SN 2007bi, radioactivity is the preferred mechanism.
This is also the case for SN 1998bw, which \cite{woosley10} investigated as a possible supernova light curve with pulsar power, but noted that radioactivity provided a more natural explanation for the luminosity evolution.
The late spectra of SN 1998bw show centrally peaked line emission \cite{patat01}, which is another problem for a pulsar interpretation.
Even if pulsar power does not dominate the supernova light, there is the possibility that it plays some role, especially at late times when the main supernova light has faded.
Strong limits have been set in the case of SN 1987A, where the lack of optical evidence for a compact object suggests a power $<8\times 10^{33}$ ergs s$^{-1}$ \cite{graves05}; searches for a compact object at radio and X-ray wavelengths have also come up empty \cite{ng09}.
As the ejecta expand, there is a better chance to observe through the ejecta to the central compact object, but the strengthening interaction with the circumstellar medium makes observations of the center more difficult.
Continuing circumstellar interaction in supernovae generally hinders the observation of a central compact object.
SN 1986J showed signs of a central compact object at radio wavelengths, but recent observations show that circumstellar interaction is a possibility \cite{bieten10}.
Perna et al. \cite{perna08} have recently gathered X-ray observations of supernovae; most are upper limits and those that have been detected have generally been attributed to circumstellar interaction.
Assuming a standard relation between X-ray luminosity $L_x$ and $\dot E$, Perna et al.
find that the limits rule out most pulsars being born with periods in the ms range.
On a timescale of $4-10$ yr after the explosion, the heating and ionization of the supernova material by the energetic radiation from a pulsar nebula can lead to observable effects at optical wavelengths \cite{CF92}.
The characteristic increasing velocity with age expected in this situation has not yet been observed.
Observations of neutron stars have shown evidence for a broad range of initial magnetic fields.
Initial periods are more uncertain, but the evidence points to relatively long ($\ge 100$ ms) periods being typical.
The result is that pulsars do not generally manifest themselves in supernovae, but there should be rare cases of initially energetic pulsars that have an observational effect.
We may have already observed such cases, but the evidence for any particular event is not yet secure.
\begin{theacknowledgments}
I am grateful to the organizers for a pleasant and stimulating meeting.
This research was supported in part by NASA grant NNX07AG78G.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}
We fix throughout a unique factorization domain $D$ with field of fractions $F$, allowing for
the possibility that $D=F$, and write $\chi(F)$ for the characteristic of $F$. We also fix $\ell\in\N$, $m_1,\dots,m_\ell\in\N$, $m=\mathrm{lcm}\{m_1,\dots,m_\ell\}$, and $N_1,\dots,N_\ell\in D$. A prime means a prime positive integer.
In this paper, we give a necessary and sufficient condition for
\begin{equation}
\label{extension}
[F[\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}]:F]=m_1\cdots m_\ell,
\end{equation}
assuming only $\chi(F)\nmid m$. This settles a problem posed by Mordell \cite{M} in 1953, in the case of repeated radical extensions.
The degrees of repeated radical extensions have been studied by several authors, including Hasse~\cite{H},
Besicovitch \cite{B}, Mordell \cite{M}, Siegel \cite{S}, Richards \cite{Ri}, Ursell \cite{U}, Jian-Ping~\cite{J}, Albu~\cite{A},
and Carr and O'Sullivan \cite{CS}.
The question of when $[F[\sqrt[n]{a}]:F]=n$ was solved by Vahlen \cite{V} in 1895 if $F=\Q$,
Capelli \cite{C} in 1897 if $F$ has characteristic~0, and R\'edei \cite[Theorem 428]{R} in 1959 in general.
\medskip
\noindent{\bf Irreducibility Criterion (C).} The polynomial $X^n-a\in F[X]$ is irreducible if and only if $a\notin F^p$
for every prime factor $p$ of~$n$, and if $4|n$ then $a\notin -4F^4$.
\medskip
In particular, if $-a\notin F^2$ and $a\notin F^p$ for every prime factor $p$ of~$n$, then $X^n-a$ is irreducible.
The special case of (C) when $n$ is prime is due to Abel; a very simple proof of this case can be found in \cite[Theorem 427]{R}.
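To illustrate the exceptional clause for $4|n$ computationally (the examples are ours), one may check with SymPy:
\begin{verbatim}
from sympy import Poly, factor, symbols

x = symbols('x')
# a = -4 lies in -4*Q^4 (take c = 1 in a = -4c^4), so X^4 + 4 factors:
print(factor(x**4 + 4))    # (x**2 - 2*x + 2)*(x**2 + 2*x + 2)
# a = 2 avoids both obstructions of (C), so X^4 - 2 is irreducible:
print(Poly(x**4 - 2, x, domain='QQ').is_irreducible)    # True
\end{verbatim}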
\medskip
Provided $F$ contains a primitive $m$th root of unity, Hasse \cite{H} showed that (\ref{extension}) holds if and only~if
\begin{equation}
\label{Hasse}
(\sqrt[m_1]{N_1})^{a_1}\times\cdots\times (\sqrt[m_\ell]{N_\ell})^{a_\ell}\in F, a_i\geq 0,\text{ only when }m_1|a_1,\dots,m_\ell|a_\ell.
\end{equation}
Later, Besicovitch \cite{B} proved (\ref{extension}) assuming: $D=\Z$; each $N_i$ is positive and has a prime factor that divides it only once and does not divide any other $N_j$; each $\sqrt[m_i]{N_i}$ is positive and real (the $m_1\cdots m_\ell$ embeddings of
$\Q[\sqrt[m_1]{N_1},\dots, \sqrt[m_\ell]{N_\ell}]$ into $\C$ then yield (\ref{extension}) for all other $m_i$th roots of the $N_i$). The special case of Besicovitch's result when $m_1=\dots=m_\ell$ and every $N_i$ is prime appears in Richards~\cite{Ri} (for the more elementary case
$m_1=\dots=m_\ell=2$, see \cite{F,Ro}). Assuming that $N_1,\dots,N_\ell$ are pairwise relatively prime, Ursell \cite{U} obtained
a variation of Besicovitch's theorem.
Mordell \cite{M} combined and extended the results of Hasse and Besicovitch, and proved (\ref{extension}) assuming (\ref{Hasse}), and that $F$ contains a primitive $m$th root of unity or that $F$ is a subfield of $\R$ with all $\sqrt[m_1]{N_1},\dots, \sqrt[m_\ell]{N_\ell}$ real. In the latter case, all $N_i$ such that $m_i$ is even must be positive. Siegel~\cite{S} gave a theoretical
description of the value of $[F[\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}]:F]$ under Mordell's condition that $F$ be a subfield of $\R$ with all $\sqrt[m_i]{N_i}$ real. Albu \cite{A} extended the work of Mordell and Siegel to the case when $F$ contains a primitive $m$th root of unity or all $m$th roots of unity in $F[\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}]$ belong to $\{1,-1\}$. Under these weaker assumptions, (\ref{extension}) is still shown in \cite{A} to be a consequence of (\ref{Hasse}). It is worth noting that, except for (\ref{Hasse}), none of the aforementioned conditions
are necessary for (\ref{extension}) to hold. A different approach was taken by Jian-Ping~\cite{J}, using valuation theory,
when $F$ is an algebraic number field; he succeeded in avoiding any assumptions on roots of unity
and proved a more general version of (\ref{extension}), applicable to repeated extensions via Eisenstein polynomials, not just binomials.
Nevertheless, Jian-Ping's hypotheses are also unnecessary for (\ref{extension}) to hold. Indeed, when the ring of integers of
$F$ is a UFD, each $N_i$ is forced to have an irreducible factor that divides it only once and does not divide any other $N_j$. More recently, Carr and O'Sullivan \cite{CS} proved a fairly general result on the linear independence of roots and reproved Mordell's theorem as an application.
Set $J=\{1,\dots,\ell\}$, $\P=\{p\,|\, p\text{ is a prime factor of }m\}$, and for each $i\in J$ and $p\in\P$ let $m_i(p)$ be the $p$-part of $m_i$, so that $m_i(p)=p^{n_i}$, where $n_i\geq 0$, $p^{n_i}|m_i$ and $p^{n_i+1}\nmid m_i$. It is clear that
\begin{equation}
\label{exten}
[F[\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}]:F]=m_1\cdots m_\ell\Leftrightarrow [F[\sqrt[m_1(p)]{N_1},\dots,\sqrt[m_\ell(p)]{N_\ell}]:F]=m_1(p)\cdots m_\ell(p)
\end{equation}
for all $p\in\P$. We are thus reduced to study the case when each $m_i=m_i(p)$ for a fixed prime $p$. We split this case in two subcases
depending on the parity of $p$. For each prime $p$, we set
$$
\S_{p}=\{N_1,N_1^{e_1}N_2,N_1^{e_1}N_2^{e_2}N_3,\dots, N_1^{e_1}\cdots N_{\ell-1}^{e_{\ell-1}}N_\ell, 0\leq e_i<p\}.
$$
In particular, $\S_{2}$ consists of all $N_1^{e_1}\cdots N_{\ell}^{e_{\ell}}$ such that $e_i\in\{0,1\}$ and
$(e_1,\dots,e_\ell)\neq (0,\dots,0)$.
\bigskip
\noindent{\bf Theorem A.} Let $n_1,\dots,n_\ell\in\N$, $p$ an odd prime such that $\chi(F)\neq p$, and suppose $m_i=p^{n_i}$ for all
$i\in J$. Then (\ref{extension}) holds if and only if $\S_{p}\cap D^p=\emptyset$.
\bigskip
The well-known example $[\Q[\sqrt[4]{-1},\sqrt[4]{2}]:\Q]=8$ shows that Theorem A fails if $p=2$. The above criteria impose
general conditions that disallow this example. Close examination of numerous pathological cases led us to the exact conditions required
when $p=2$. We say that
$(N_1,\dots,N_\ell)$ is $2$-defective
if the following two conditions hold: $\S_2\cap D^2=\emptyset$ but $\S_2\cap (-D^2)\neq\emptyset$
(this readily implies that $|\S_2\cap (-D^2)|=1$,
as shown in Lemma 5); if $-d^2=M=N_1^{f_1}\cdots N_\ell^{f_\ell}$ is the only element of $\S_2\cap (-D^2)$, where $d\in D$, $0\leq f_i<2$, and
$M^\sharp=\{i\in J\,|\, f_i=1\}$ is nonempty (since $\S_2\cap D^2=\emptyset$, the exponents $f_i$ are uniquely determined by $M$, whence $M^\sharp$ is well-defined), then $4|m_i$ for all $i\in M^\sharp$, and if $i\in M^\sharp$, then
\begin{equation}
\label{spe}
\pm 2d\prod_{j\neq i} N_j^{e_j}\in D^2\text{ for some choice of }0\leq e_j<2.
\end{equation}
Since $-M=N_1^{f_1}\cdots N_\ell^{f_\ell}\in D^2$, the outcome of (\ref{spe}) is independent of the actual choice of $i\in M^\sharp$.
\bigskip
\noindent{\bf Theorem B.} Let $n_1,\dots,n_\ell\in\N$ and suppose that $\chi(F)\neq 2$ and $m_i=2^{n_i}$ for all $i\in J$. Then (\ref{extension}) holds if and only if $\S_2\cap D^2=\emptyset$ and $(N_1,\dots,N_\ell)$ is not 2-defective.
\bigskip
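To see Theorem B at work on the example above, take $D=\Z$, $N_1=-1$, $N_2=2$, $m_1=m_2=4$. Then $\S_2=\{-1,-2,2\}$, so $\S_2\cap\Z^2=\emptyset$, while $M=-1\in\S_2\cap(-\Z^2)$ with $d=1$ and $M^\sharp=\{1\}$; moreover $4|m_1$ and (\ref{spe}) holds since $2d\cdot N_2=4\in\Z^2$. Thus $(-1,2)$ is 2-defective and (\ref{extension}) fails, in accordance with $[\Q[\sqrt[4]{-1},\sqrt[4]{2}]:\Q]=8<16$. The degree can also be confirmed with SymPy (assuming, as the output bears out, that the displayed sum is a primitive element):
\begin{verbatim}
from sympy import I, Poly, minimal_polynomial, root, sqrt, symbols

x = symbols('x')
a = root(2, 4)           # a fourth root of 2
b = (1 + I) / sqrt(2)    # a primitive 8th root of unity, i.e. (-1)^(1/4)
print(Poly(minimal_polynomial(a, x), x).degree())       # 4
print(Poly(minimal_polynomial(b, x), x).degree())       # 4
print(Poly(minimal_polynomial(a + b, x), x).degree())   # 8, not 4*4 = 16
\end{verbatim}
\bigskip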
Combining (\ref{exten}) with Theorems A and B we immediately obtain a general criterion for (\ref{extension}). This requires additional
notation. For each $p\in\P$, we set $J(p)=\{i\,|\, i\in J\text{ and }p|m_i\}$, and write
$$
J(p)=\{i(p,1),\dots,i(p,\ell(p))\},\quad i(p,1)<\dots< i(p,\ell(p)),
$$
$$
\S(p)=\{N_{i(p,1)},N_{i(p,1)}^{e_1}N_{i(p,2)},N_{i(p,1)}^{e_1}N_{i(p,2)}^{e_2}N_{i(p,3)},\dots,N_{i(p,1)}^{e_1}\cdots
N_{i(p,\ell(p)-1)}^{e_{\ell(p)-1}}N_{i(p,\ell(p))}, 0\leq e_j<p\}.
$$
\bigskip
\noindent{\bf Theorem C.} Suppose that $\chi(F)\nmid m$. Then (\ref{extension}) holds if and only if $\S(p)\cap D^p=\emptyset$
for every $p\in\P$ and, if $2\in\P$, then $(N_{i(2,1)},\dots,N_{i(2,\ell(2))})$ is not 2-defective.
\bigskip
The next example illustrates the use of Theorems B and C, and lies outside of the scope of the aforementioned criteria.
\bigskip
\noindent{\bf Example 1.} Suppose that $-1\notin F^2$ and each of $A,B,C\in D$ has an irreducible factor that divides it only
once and does not divide either of the other two elements. Then
$$
[F[\sqrt[m_1]{AB},\sqrt[m_2]{BC},\sqrt[m_3]{-CA}]:F]=m_1m_2m_3
$$
if and only if at least one of $m_1,m_2,m_3$ is not divisible by 4 or none of $\pm 2A,\pm 2B,\pm 2C\in D^2$.
\bigskip
As we are dealing with a classical and basic problem, we purposely resort to elementary and complete arguments
in order to maximize the potential readership of our solution.
\section{Lemmata}
Given a nonzero $a\in F$ we write $\langle a\rangle$ for the subgroup of $F^\times$ generated by $a$.
\medskip
\noindent{\bf Lemma 1.} Let $p$ be a prime such that $\chi(F)\neq p$ and suppose $b_1,\dots,b_n\in F$ are nonzero. Then
\begin{equation}\label{le1}
F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_n}]^p\cap F=F^p\langle b_1,\dots,b_n\rangle.
\end{equation}
\medskip
\noindent{\sc Proof.} Let $M,N\in F$ be nonzero. We claim that if $\sqrt[p]{M}\in F[\sqrt[p]{N}]$ then
$M\in F^p\langle N\rangle$. This is clear if $N\in F^p$ so we assume $N\notin F^p$.
Set $K=F[\zeta]$, where $\zeta$ is a primitive $p$th root of unity. Then $K/F$ is a Galois extension
with Galois group isomorphic to a subgroup of $(\Z/p\Z)^\times$. In particular, $[K:F]$ divides $(p-1)$.
It follows that $K^p\cap F=F^p$. Indeed, suppose $a\in F$ and $\a\in K$ satisfies $\a^p=a$. Since $[F[\a]:F]$
divides $[K:F]$, it also divides $p-1$. As $p\nmid (p-1)$, $X^p-a\in F[X]$ is reducible, whence $a\in F^p$ by (C).
By assumption, $N\notin F^p$. Thus $N\notin K^p$ as indicated above, so $X^p-N\in K[X]$ is irreducible by (C).
Thus $\{1,\sqrt[p]{N},\dots,\sqrt[p]{N^{p-1}}\}$ is a $K$-basis of $K[\sqrt[p]{N}]$. By assumption, $\sqrt[p]{M}\in K[\sqrt[p]{N}]$, so
\begin{equation}
\label{ko}
\sqrt[p]{M}=a_0+a_1\sqrt[p]{N}+\cdots+a_{p-1}\sqrt[p]{N^{p-1}},\quad a_i\in K.
\end{equation}
Note that $K[\sqrt[p]{N}]/K$ is a Galois extension with cyclic Galois group $\langle\sigma\rangle$, where
$\sigma(\sqrt[p]{N})=\zeta \sqrt[p]{N}$. Since $\sqrt[p]{M}$ is a root of $X^p-M$, we must have
$\sigma(\sqrt[p]{M})=\zeta^i\sqrt[p]{M}$ for some $0\leq i<p$. Applying $\sigma$ to (\ref{ko}), we obtain
$$
\zeta^i\sqrt[p]{M}=a_0+a_1\zeta\sqrt[p]{N}+\cdots+a_{p-1}\zeta^{p-1}\sqrt[p]{N^{p-1}}.
$$
On the other hand, multiplying (\ref{ko}) by $\zeta^i$ yields
$$
\zeta^i\sqrt[p]{M}=a_0\zeta^i +a_1\zeta^i\sqrt[p]{N}+\cdots+a_{p-1}\zeta^i\sqrt[p]{N^{p-1}}.
$$
From the $K$-linear independence of $1,\sqrt[p]{N},\dots,\sqrt[p]{N^{p-1}}$ we infer
that $a_j=0$ for all $j\neq i$. Thus
$$
\sqrt[p]{M}=a \sqrt[p]{N^i},\quad a\in K,
$$
whence $M=a^p N^i$. Thus $MN^{-i}\in K^p\cap F=F^p$, so $M\in F^p\langle N\rangle$.
\medskip
By above, $F[\sqrt[p]{b_1}]^p\cap F=F^p\langle b_1\rangle$. Suppose $n>1$ and $F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_{n-1}}]^p\cap F=F^p\langle b_1,\dots,b_{n-1}\rangle$. Then
$$
\begin{aligned}
F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_{n-1}},\sqrt[p]{b_n}]^p\cap F &=
(F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_{n-1}}][\sqrt[p]{b_n}])^p\cap F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_{n-1}}]\cap F\\
&=
F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_{n-1}}]^p\langle b_{n}\rangle\cap F.
\end{aligned}
$$
Let $\a\in F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_{n-1}}]^p\langle b_n\rangle\cap F$. Then $\a\in F$ and
$\a b_{n}^{i}\in F[\sqrt[p]{b_1},\dots,\sqrt[p]{b_{n-1}}]^p\cap F$ for some $i\in\Z$. Thus $\a b_{n}^{i}\in F^p\langle b_1,\dots,b_{n-1}\rangle$
and therefore $\a\in F^p\langle b_1,\dots,b_{n-1},b_{n}\rangle$.\qed
\bigskip
\noindent{\bf Lemma 2.} Suppose $-1\notin F^2$ and $\pm a\notin F^2$. Then for any $n\in\N$, we have $\sqrt{-1}\notin F[\sqrt[2^n]{a}]$.
\bigskip
\noindent{\sc Proof.} We show by induction that $\sqrt{-1}\notin F[\sqrt[2^n]{a}]$ and $\pm \sqrt[2^n]{a}\notin F[\sqrt[2^n]{a}]^2$.
The fact that $\sqrt{-1}\notin F[\sqrt{a}]$ follows from Lemma 1. Suppose, if possible, that $\pm \sqrt{a}=z^2$, where $z\in F[\sqrt{a}]$. Then $z=x+y\sqrt{a}$,
where $x,y\in F$, so that $\pm\sqrt{a}=x^2+ay^2+2xy\sqrt{a}$. It follows that $x^2+ay^2=0$. As $-a\notin F^2$, we infer $x=y=0$,
a contradiction.
Assume we have shown that $\sqrt{-1}\notin F[\sqrt[2^n]{a}]$ and $\pm \sqrt[2^n]{a}\notin F[\sqrt[2^n]{a}]^2$ for some $n\in\N$.
Since $\pm a\notin F^2$, (C) implies $[F[\sqrt[2^n]{a}]:F]=2^n$ and $[F[\sqrt[2^{n+1}]{a}]:F]=2^{n+1}$. But $\sqrt{-1}\notin F[\sqrt[2^n]{a}]$,
so $F[\sqrt[2^{n+1}]{a}]=F[\sqrt[2^{n}]{a},\sqrt{-1}]$. Thus $\sqrt[2^{n+1}]{a}=\a+\b\sqrt{-1}$ for unique $\a,\b\in F[\sqrt[2^{n}]{a}]$. Squaring, we get $\sqrt[2^{n}]{a}=\a^2-\b^2+2\a\b\sqrt{-1}$, which implies $\a\b=0$ and $\a^2-\b^2=\sqrt[2^n]{a}$, a contradiction.\qed
\bigskip
\noindent{\bf Lemma 3.} Suppose $\chi(F)\neq 2$, let $n\in\N$, and set $K=F[\zeta]$, where $\zeta$ is a primitive $2^n$th root of unity. If $n\leq 2$ or $-1\in F^2$, then $G=\mathrm{Gal}(K/F)$ is cyclic.
\bigskip
\noindent{\sc Proof.} We have an embedding $\Psi:G\to (\Z/2^n\Z)^\times$, $\sigma\to [s]$, where
$\sigma(\zeta)=\zeta^s$. This settles the case $n\leq 2$. Assume henceforth that $n\geq 3$.
We have $(\Z/2^n\Z)^\times=\langle a,b\rangle$, where $a=[5]$, $b=[-1]$ and $\langle a\rangle\cap\langle b\rangle$
is trivial \cite[Chapter VI]{Vi}. By hypothesis, $-1=\alpha^2$, where $\alpha\in F\cap\langle\zeta\rangle$. Suppose, if possible,
that $b\in\Psi(G)$, say $b=\Psi(\sigma)$. Then $\sigma(\alpha)=\alpha^{-1}=-\alpha$, since $\alpha$ is a power of $\zeta$,
and $\sigma(\a)=\a$, since $\a\in F$. This contradiction shows that $b\notin \Psi(G)$. Now any subgroup $S$ of $\langle a,b\rangle$
that does not contain $b$ must be cyclic (if $S$ is not trivial, it is generated by $a^i$ or $a^i b$, where $i$ is the smallest
positive integer such that an element of this type is in $S$). Thus $G$ is cyclic.\qed
\bigskip
\noindent{\bf Lemma 4.} Let $n\in\N$, $p$ an odd prime such that $\chi(F)\neq p$, and set $K=F[\zeta]$, where $\zeta$ is a primitive $p^n$th root of unity. Then $\mathrm{Gal}(K/F)$ is cyclic.
\bigskip
\noindent{\sc Proof.} $\mathrm{Gal}(K/F)$ is isomorphic to a subgroup of $(\Z/p^n\Z)^\times$,
which is a cyclic group.\qed
\bigskip
\noindent{\bf Lemma 5.} Suppose $\S_2\cap D^2=\emptyset$. Then $|\S_2\cap (-D^2)|\leq 1$, with $|\S_2\cap (-D^2)|=0$ if $-1\in F^2$.
\bigskip
\noindent{\sc Proof.} Suppose $M\neq N$ are in $\S_2\cap (-D^2)$. Then $MN\in D^2$ and $MN=e^2P$, where $P\in\S_2$ and $e\in D$. Thus $P\in D^2$, against $\S_2\cap D^2=\emptyset$. If $-1\in F^2$ then $-D^2=D^2$, so $\S_2\cap (-D^2)=\emptyset$.\qed
\bigskip
\noindent{\bf Lemma 6.} Suppose $\chi(F)\neq 2$, let $n\in\N$ and set $K=F[\zeta]$, where $\zeta$ is a primitive $2^n$th root of unity. Assume $\S_2\cap D^2=\emptyset$. Then
$|\S_2\cap K^2|\in \{0,1,3\}$. Moreover, if $|\S_2\cap K^2|=3$ then one of the elements of
$\S_2\cap K^2$ is in $\S_2\cap (-D^2)$, and we have $-1\notin F^2$, $n\geq 3$.
\bigskip
\noindent{\sc Proof.} Suppose $M\neq N$ are in $\S_2\cap K^2$. Then $MN\in K^2$ and $MN=e^2P$, where $P\in\S_2$ and $e\in D$, so $P\in K^2$.
Lemma 1 implies that $F[\sqrt{M}],F[\sqrt{N}],F[\sqrt{P}]$ are distinct intermediate
subfields of $K/F$ of degree 2. In particular, $\mathrm{Gal}(K/F)$ is not cyclic. Now $\mathrm{Gal}(K/F)$ is isomorphic to a subgroup of $(\Z/2^n\Z)^\times$, so $n\geq 3$ and $(\Z/2^n\Z)^\times\cong (\Z/2^{n-2}\Z)\times (\Z/2\Z)$. Any subgroup of
$(\Z/2^{n-2}\Z)\times (\Z/2\Z)$ has at most 3 subgroups of index 2, so the Galois correspondence implies that any intermediate
subfield of $K/F$ of degree 2 must be equal to one of $F[\sqrt{M}],F[\sqrt{N}],F[\sqrt{P}]$. Lemma 1 readily implies that no
element from $\S_2$ different from $M,N,P$ is in $K^2$. By Lemma 3, $-1\notin F^2$, so
$F[\sqrt{-1}]$ must be equal to one of $F[\sqrt{M}],F[\sqrt{N}],F[\sqrt{P}]$, and Lemma 1 implies that one of $M,N,P$ is in $-D^2$.\qed
\bigskip
\noindent{\bf Lemma 7.} Let $n_1,\dots,n_\ell\in\N$ and $p$ a prime such that $\chi(F)\neq p$ and $m_i=p^{n_i}$ for all $i\in J$.
Let $K=F[\zeta]$, where $\zeta$ is a primitive $m$th root of unity, $m=\mathrm{lcm}\{m_1,\dots,m_\ell\}$.
Suppose that $\S_p\cap K^p=\emptyset$. Then $[K[\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}]:K]=m_1\cdots m_\ell$.
\bigskip
\noindent{\sc Proof.} By assumption $N_1\notin K^p$. Moreover, if $4|m_1$ then $-1\in K^2$ and therefore $-N_1\notin K^2$.
It follows from (C) that $[K[\sqrt[m_1]{N_1}]:K]=m_1$.
Suppose $[K[\sqrt[m_1]{N_1},\dots,\sqrt[m_i]{N_i}]:K]=m_1\cdots m_i$ for some $1\leq i<\ell$.
Assume, if possible,
that $\sqrt[p]{N_{i+1}}\in K[\sqrt[m_1]{N_1},\dots,\sqrt[m_i]{N_i}]$. Then $K[\sqrt[p]{N_{i+1}}]$
is an intermediate subfield of degree $p$ in the Galois extension $K[\sqrt[m_1]{N_1},\dots,\sqrt[m_i]{N_i}]/K$, with Galois group
$G=\langle \sigma_1,\dots,\sigma_i\rangle$, where
$$
\sigma_k(\sqrt[m_k]{N_k})=\zeta^{m/m_k}\, \sqrt[m_k]{N_k},\; \sigma_k(\sqrt[m_j]{N_j})=\sqrt[m_j]{N_j},\; j\neq k.
$$
Any subgroup of $G$ of index $p$ contains $G^p$, so by the Galois correspondence $K[\sqrt[p]{N_{i+1}}]$ is contained in the fixed field of $G^p$, namely $K[\sqrt[p]{N_1},\dots,\sqrt[p]{N_i}]$. Lemma 1 implies that $N_1^{e_1}\cdots N_i^{e_{i}}N_{i+1}\in K^p$ for some $0\leq e_i<p$,
against $\S_p\cap K^p=\emptyset$. Thus $\sqrt[p]{N_{i+1}}\notin K[\sqrt[m_1]{N_1},\dots,\sqrt[m_i]{N_i}]$.
Assume, if possible, that $4|m_{i+1}$ and $\sqrt{-N_{i+1}}\in K[\sqrt[m_1]{N_1},\dots,\sqrt[m_i]{N_i}]$. Then the above argument yields
$N_1^{e_1}\cdots N_i^{e_i}N_{i+1}\in -K^2$ for some $0\leq e_j<2$. But $-K^2=K^2$, so $\S_p\cap K^p=\emptyset$ is violated.
This shows $\sqrt{-N_{i+1}}\notin K[\sqrt[m_1]{N_1},\dots,\sqrt[m_i]{N_i}]$ when $4|m_{i+1}$.
We deduce from (C) that $[K[\sqrt[m_1]{N_1},\dots,\sqrt[m_{i+1}]{N_{i+1}}]:K]=m_1\cdots m_{i+1}$.\qed
\bigskip
\noindent{\bf Lemma 8.} Let $n_1,\dots,n_\ell\in\N$ and $p$ a prime such that $\chi(F)\neq p$ and $m_i=p^{n_i}$ for all $i\in J$.
Let $K=F[\zeta]$, where $\zeta$ is a primitive $m$th root of unity, $m=\mathrm{lcm}\{m_1,\dots,m_\ell\}$.
Suppose that $\S_p\cap D^p=\emptyset$ and $|\S_p\cap K^p|=1$,
say $M=N_1^{f_1}\cdots N_\ell^{f_\ell}$, where $0\leq f_i<p$, and $M^\sharp=\{i\in J\,|\, f_i=1\}$ is nonempty. For $i\in M^\sharp$,
set $V_i=\{\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}\}\setminus \{\sqrt[m_i]{N_i}\}$ and let $m[i]$ be the product of all $m_j$ with
$j\neq i$.
Then
(a) $[K[V_i]:K]=m[i]$ and $\sqrt[p]{N_i}\notin F[V_i]$ for all $i\in M^\sharp$.
(b) If $p$ is odd or $m_i=2$ for at least one $i\in M^\sharp$, then (\ref{extension}) holds.
(c) If $4|m$ and $\S_2\cap (-D^2)=\emptyset$, then $\sqrt{-N_i}\notin F[V_i]$ for all $i\in M^\sharp$, so (\ref{extension}) holds.
(d) If $4|m_i$ for all $i\in M^\sharp$ and $M\in \S_2\cap (-D^2)$, say $M=-d^2$ with $d\in D$, then (\ref{extension}) holds if and only if given any $i\in M^\sharp$,
(\ref{spe}) fails.
\bigskip
\noindent{\sc Proof.} Let $i\in M^\sharp$. By Lemma 7, we have $[K[V_i]:K]=m[i]$ and hence $[F[V_i]:F]=m[i]$. Suppose, if possible, that
$\sqrt[p]{N_i}\in F[V_i]$ and set $Y_i=\{\zeta\}\cup V_i$. Then
$F[\sqrt[p]{N_i}]$
is an intermediate subfield of degree $p$ in the Galois extension $F[Y_i]/F$,
with Galois group $G=H\rtimes U$, where $H=\langle \sigma_j\,|\, j\neq i\rangle$ is the Galois group of
$F[Y_i]/F[\zeta]$
and each $\sigma_k$ is as in the proof of Lemma 7, and $U$ is the Galois group of
$F[Y_i]/F[V_i]$. The subgroup $S$ of $G$ corresponding to
$F[\sqrt[p]{N_i}]\subseteq F[V_i]$ in the Galois correspondence has index $p$ and
contains~$U$. Therefore $S\supseteq H^p\rtimes U$, so $F[\sqrt[p]{N_i}]$ is contained in the fixed
field of $H^p\rtimes U$, namely $F[W_i]$, where $W_i=\{\sqrt[p]{N_1},\dots,\sqrt[p]{N_\ell}\}\setminus \{\sqrt[p]{N_i}\}$.
It follows from Lemma 1 that
$N_1^{e_1}\cdots N_{\ell}^{e_\ell}\in F^p$, where all $0\leq e_j<p$ and $e_i=1$. By the rational root theorem, $F^p\cap D=D^p$,
so $\S_p\cap D^p=\emptyset$ is violated.
If $m_i=2$ for at least one $i\in M^\sharp$, then (\ref{extension}) has been established. Likewise, if $p$ is odd, then (\ref{extension}) follows from (C). Suppose next that $4|m$ and $\S_2\cap (-D^2)=\emptyset$.
We claim that $\sqrt{-N_i}\notin F[V_i]$. If not, arguing as above, we see that $-N_1^{e_1}\cdots N_{\ell}^{e_\ell}\in F^2\subseteq K^2$, where all $0\leq e_j<2$ and $e_i=1$. On the other hand, $M\in K^2$ and $-1\in K^2$,
so $-M\in K^2$ and therefore the product of all $N_j^{e_j+f_j}$, with $j\neq i$, must be in $K^2$. The uniqueness of $M$ in $\S_2\cap K^2$ forces $e_j=f_j$ for all
$j\neq i$. Thus $-M\in F^2$ and hence $M\in\S_2\cap(-D^2)$, a contradiction. Thus (\ref{extension}) follows from (C) in this case as well.
Suppose finally that $4|m_i$ for all $i\in M^\sharp$ and $M\in \S_2\cap (-D^2)$, say $M=-d^2$ with $d\in D$. Fix any $i\in M^\sharp$ and set $L_i=F[V_i]$.
It remains to decide when $N_i\in -4L_i^4$. Since $4|m_j$ for all $j\in M^\sharp$, the product of all $N_j^{f_j}$ with $j\neq i$
and $j\in M^\sharp$, belongs to $L_i^4$. Thus
$$N_i\in -4L_i^4\Leftrightarrow M\in -4L_i^4 \Leftrightarrow d^2\in 4L_i^4 \Leftrightarrow \pm 2d\in L_i^2,$$
and, by Lemma 1, this happens if and only if (\ref{spe}) holds.\qed
\section{Proofs of Theorems A and B}
\noindent{\sc Proof of Theorem A.} It is clear that (\ref{extension}) implies $\S_p\cap D^p=\emptyset$. Suppose
$\S_p\cap D^p=\emptyset$ and let $K=F[\zeta]$, where $\zeta$ is a primitive $m$th root of unity.
By Lemmas 7 and 8, it suffices to show that $|\S_p\cap K^p|\leq 1$. Suppose not and let $M\neq N$ be in $\S_p\cap K^p$.
As $\sqrt[p]{M},\sqrt[p]{N}$ have degree~$p$ over~$F$ and lie in $K$, we see that $p|[K:F]$. By Lemma 4, $\mathrm{Gal}(K/F)$ has a unique subgroup of index~$p$, so by the Galois correspondence, $K/F$ has a unique intermediate field of degree~$p$. We deduce
$F[\sqrt[p]{M}]=F[\sqrt[p]{N}]$ and Lemma 1 implies $MN^i\in F^p$ for some $i\in\Z$. Since $M\neq N$, this is disallowed by $\S_p\cap D^p=\emptyset$.\qed
\bigskip
\noindent{\sc Proof of Theorem B.} It is clear that $\S_2\cap D^2=\emptyset$ follows from (\ref{extension}).
Suppose $\S_2\cap D^2=\emptyset$. We will show that (\ref{extension}) holds if and only if $(N_1,\dots,N_\ell)$ is not
2-defective.
By Lemmas 6, 7 and 8, we may restrict to the case when $|\S_2\cap K^2|=3$, in which case by Lemmas 5 and 6 there is a single
element $M\in \S_2\cap (-D^2)$, and we necessarily have $-1\notin F^2$ and $8|m$.
Now $-d^2=M=N_1^{f_1}\cdots N_\ell^{f_\ell}$, where $0\leq f_i<2$ and $M^\sharp=\{i\in J\,|\, f_i=1\}$ is nonempty.
Fix any $i\in M^\sharp$ and let $\S^i_2$ stand for the analogue of $\S_2$ corresponding to $\{N_1,\dots,N_\ell\}\setminus\{N_i\}$.
By the uniqueness of $M$ in $\S_2\cap (-D^2)$, we see that $\S_2^i\cap (-D^2)=\emptyset$. It follows from Lemma 6 that $|\S_2^i\cap K^2|\leq 1$.
Suppose first that $|\S_2^i\cap K^2|=0$. Set $V_i=\{\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}\}\setminus \{\sqrt[m_i]{N_i}\}$,
and let $m[i]$ be the product of all $m_k$ such that $k\neq i$. Then $[K[V_i]:K]=m[i]$ by Lemma 7. Thus $F[V_i]$ is linearly disjoint
from $K$ over $F$. It follows that $\sqrt{-1}\notin F[V_i]$. For if $\sqrt{-1}\in F[V_i]$, then from $-1\notin F^2$ we deduce
that $1,\sqrt{-1}$ are $F$-linearly independent elements from $F[V_i]$, and hence $K$-linearly independent elements from $K[V_i]$,
which cannot be as $4|m$. Since $M\in -D^2$, we have $F[\sqrt{-1}]=F[\sqrt{M}]$. Thus $\sqrt{M}\notin F[V_i]$
and therefore $\sqrt{N_i}\notin F[V_i]$. If there is some $i\in M^\sharp$ such that $m_i=2$, this shows that (\ref{extension}) holds.
If, on the other hand, $4|m_i$ for all $i\in M^\sharp$, then (\ref{extension}) holds if and only if (\ref{spe}) fails, as in the proof of Lemma 8.
Suppose next that $|\S_2^i\cap K^2|=1$ and let $N\in \S_2^i\cap K^2$. Note that $N\notin -D^2$. We have $N=N_1^{g_1}\cdots N_\ell^{g_\ell}$, where $g_i=0$, $0\leq g_j<2$ and $N^\sharp=\{j\in J\,|\, g_j=1\}$ is nonempty. Fix any $j\in N^\sharp$ and let
$\S^{i,j}_2$ stand for the analogue of $\S_2$ corresponding to $\{N_1,\dots,N_\ell\}\setminus\{N_i,N_j\}$. It is then clear that
$\S_2^{i,j}\cap K^2=\emptyset$. Set $V_{i,j}=\{\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}\}\setminus \{\sqrt[m_i]{N_i},\sqrt[m_j]{N_j}\}$,
and let $m[i]$ (resp. $m[i,j]$) be the product of all $m_k$ such that $k\neq i$ (resp. $k\neq i,j$).
Then $[K[V_{i,j}]:K]=m[i,j]$ by Lemma~7. As above,
we deduce that $\sqrt{-1}\notin F[V_{i,j}]$. Since $4|m$ and $\S^i_2\cap (-D^2)=\emptyset$, Lemma 8 ensures that $[F[V_i]:F]=m[i]$ as well as
$\sqrt{\pm N_j}\notin F[V_{i,j}]$. We deduce from Lemma 2 that $\sqrt{-1}\notin F[V_i]$. The rest of the argument follows as in the above case.\qed
\bigskip
\section{Primitive elements}
Isaacs \cite{I} considered the problem of when $F[\a,\b]=F[\a+\b]$ for algebraic separable elements $\a,\b$ of degrees $m,n$ over $F$.
He proved that if $[F[\a,\b]:F]=mn$ (he actually assumed $\gcd(m,n)=1$ but used only the stated condition) but $F[\a,\b]\neq F[\a+\b]$
then $F$ has prime characteristic $p$ and the following conditions hold: $p|mn$ or $p<\min\{m,n\}$; if $m,n$ are prime powers, then $p|mn$; $p$ divides the order of the Galois group of a normal closure of $F[\a,\b]$.
The condition $p<\min\{m,n\}$ was later improved to $p<\min\{m,n\}/2$ by Divi\v{s} \cite{D}.
Using Isaacs' result we readily see that $F[\sqrt[m_1]{N_1},\dots,\sqrt[m_\ell]{N_\ell}]=F[b_1\sqrt[m_1]{N_1}+\cdots+b_\ell\sqrt[m_\ell]{N_\ell}]$ for any nonzero
$b_1,\dots,b_\ell\in F$ in Theorems A, B and C, provided the following conditions hold: $\chi(F)\neq p$ and $\S_p\cap D^p=\emptyset$ in
Theorem A; $\chi(F)\neq 2$, $\S_2\cap D^2=\emptyset$, and $(N_1,\dots,N_\ell)$ is not 2-defective in
Theorem B; $\chi(F)\nmid m\varphi(m)$ (Euler's function), $\S(p)\cap D^p=\emptyset$ for all $p\in\P$, and
$(N_{i(2,1)},\dots,N_{i(2,\ell(2))})$ is not 2-defective in
Theorem C.
It {\em is} possible that $F[\a,\b]/F$ be a finite Galois extension, $[F[\a,\b]:F]=[F[\a]:F][F[\b]:F]$ and still $F[\a,\b]\neq F[\a+\b]$.
A family of examples can be found in \cite[Example 2.3]{CS}.
\subsection{Simple and Bounded-Degree}\label{a:degree}
If $G$ is not initially simple, we first report and discard the self-loops.
For each group of pairwise parallel edges $\{e_1,\ldots,e_q\}$
we report directed parallelisms $e_i\to e_1$ for each $i=2,\ldots,q$
and remove all edges $e_2,\ldots,e_q$ from $G$.
Suppose now that $G$ is simple.
We first compute the embedding of $G$ in linear time \cite{Hopcroft:74}.
We build a graph $H$ obtained from $G$
by replacing each vertex $v$ of $G$ of degree at least $4$
with an undirected cycle $C_v$ of length $\deg(v)$ in a standard way
(but consistently with the embedding of $G$).
After this step, all the vertices of $H$ have degree no more than $3$.
Moreover, $H$ is planar and has a linear number of edges and vertices.
If we now build a data structure for maintaining $H$ under contractions,
we may first contract all the edges constituting the cycles $C_v$,
but without reporting the self-loops and parallelisms involving these
edges.
Note that after these contractions the graph $H$ in fact becomes equal to $G$.
Clearly, the size of $H$ is linear in the size of $G$.
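A minimal sketch of this expansion step (Python), assuming the embedding is given as a rotation system, i.e., a dictionary mapping each vertex to the cyclic list of its neighbors, with comparable (say, integer) vertex labels; the bookkeeping tying edges of $H$ back to edges of $G$ is omitted:
\begin{verbatim}
def expand_to_degree_3(rotation):
    # rotation: dict vertex -> list of neighbors in embedding order,
    # for a simple planar graph. Each vertex of degree >= 4 becomes a
    # cycle of "ports" (v, 0), ..., (v, d-1); other vertices keep (v, 0).
    idx = {v: {u: i for i, u in enumerate(nbrs)}
           for v, nbrs in rotation.items()}
    def port(v, i):
        return (v, i) if len(rotation[v]) >= 4 else (v, 0)
    edges = []
    for v, nbrs in rotation.items():
        d = len(nbrs)
        if d >= 4:   # the cycle C_v, consistent with the embedding
            edges += [((v, i), (v, (i + 1) % d)) for i in range(d)]
        for u in nbrs:
            if v < u:   # emit each edge of G exactly once
                edges.append((port(v, idx[v][u]), port(u, idx[u][v])))
    return edges
\end{verbatim}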
\subsection{Supporting Edge Weights}\label{sec:edgeweights}
In this section we show how to modify the data structure of
Section~\ref{sec:loopdetection} so that, given a weight function
$\ell:E_0\to\mathbb{R}$, for each
reported directed parallelism $\alpha(Y_{3-i})\to\alpha(Y_i)$ (see Section~\ref{sec:interface})
we have $\ell(\alpha(Y_{i}))\leq\ell(\alpha(Y_{3-i}))$.
We maintain an array $\delta$ defined as follows.
Let $Y$ be a group of parallel edges represented
by a tree $T\in\ensuremath{\mathcal{T}}$.
Then, $\delta[\alpha(Y)]$ is equal to an edge $e\in T$
such that $\ell(e)$ is minimum.
Initially, for each $e\in E_0$, we have $\delta[e]=e$.
To maintain the invariant posed on $\delta$ throughout
any sequence of contractions, we do the following.
Suppose the data structure of Section~\ref{sec:loopdetection} reports
a parallelism $\alpha(Y_1)\to\alpha(Y_2)$.
Then, if $\ell(\delta[\alpha(Y_1)])\leq \ell(\delta[\alpha(Y_2)])$,
the weight supporting layer reports $\delta[\alpha(Y_2)]\to\delta[\alpha(Y_1)]$
instead
and sets $\delta[\alpha(Y_2)]=\delta[\alpha(Y_1)]$.
On the other hand, when $\ell(\delta[\alpha(Y_1)])>\ell(\delta[\alpha(Y_2)])$,
we only report $\delta[\alpha(Y_1)]\to\delta[\alpha(Y_2)]$ instead.
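In code, this layer is a thin wrapper around the reporting callback; a sketch (Python), with \texttt{ell} the weight function, \texttt{delta} the array above, and \texttt{report} the client-facing callback:
\begin{verbatim}
def report_weighted(a1, a2, delta, ell, report):
    # The underlying structure reported alpha(Y1) -> alpha(Y2); re-report
    # so that the surviving stored edge is always one of minimum weight.
    e1, e2 = delta[a1], delta[a2]
    if ell[e1] <= ell[e2]:
        report(e2, e1)      # retire the heavier stored edge
        delta[a2] = e1      # the merged group now stores the lighter edge
    else:
        report(e1, e2)
\end{verbatim}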
\section{The Data Structure Interface}\label{sec:interface}
In this section we specify the set of operations that our data
structure supports so that it fits our applications.
It proves beneficial
to look at the graph undergoing contractions
from two perspectives.
\begin{enumerate}
\item The \emph{adjacency viewpoint} allows us to track the neighbor sets
of the individual vertices, as if $G$ was simple at all times.
\item The \emph{edge status viewpoint} allows us to track, for all the original
edges $E_0$, whether they became self-loops or parallel edges, and also track how
$E_0$ is partitioned into classes of pairwise-parallel edges.
\end{enumerate}
Let $G_0=(V_0,E_0)$ be a planar graph used to initialize the data structure.
Recall that any contraction alters both the set of vertices and the set of edges of the graph.
Throughout, we let $G=(V,E)$ denote the \emph{current} version of the graph, unless otherwise stated.
Each edge $e\in E(G)$ can be either a self-loop, an edge parallel to some
other edge $e'\neq e$ (we call such an edge \emph{parallel}), or an edge
that is not parallel to any other edge of $G$ (we call it \emph{simple} in this case).
An edge $e\in E(G)$ that is simple might either get contracted
or might change into a parallel edge as a result of contracting other edges.
Similarly, a parallel edge might either get contracted or might
change into a self-loop.
Note that, during contractions, neither can a parallel edge ever become simple, nor can a self-loop become parallel.
Observe that parallelism is an equivalence relation
on the edges of $G$.
Once two edges $e_1,e_2$ connecting vertices $u,v\in V$ become parallel, they stay parallel until some edge $e_3$ (possibly equal to $e_1$ or $e_2$)
parallel to both of them gets contracted.
However, groups of parallel edges might merge (Figure~\ref{fig:bundles})
and this might also be a valuable piece of information.
\ifshort
\begin{wrapfigure}[7]{r}{0.3\textwidth}
\vspace{-0.7em}
\fi
\iffull
\begin{wrapfigure}[9]{r}{0.3\textwidth}
\fi
\includegraphics[width=0.27\textwidth]{bundles}
\caption{Contracting the blue dotted edge will merge two groups of parallel edges.\label{fig:bundles}}
\end{wrapfigure}
To succinctly describe how the groups of parallel edges change, we report
parallelism in a directed manner, as follows.
Each group $Y\subseteq E$ of parallel edges in $G$ is assumed to have its
\emph{representative} edge $\alpha(Y)$.
For $e \in Y$ we define $\alpha(e) = \alpha(Y)$.
When two groups of parallel edges $Y_1,Y_2\subseteq E$ merge as a result
of a contraction, the data structure chooses $\alpha(Y_i)$ for some $i\in\{1,2\}$
to be the new representative of the group $Y_1\cup Y_2$ and reports
an ordered pair $\alpha(Y_{3-i})\to \alpha(Y_i)$ to the user.
We call each such pair a \emph{directed parallelism}.
After such an event, $\alpha(Y_{3-i})$ will not be reported as a part
of a directed parallelism anymore.
The choice of $i$
can also be made according to some fixed strategy, e.g., if the edges are assigned weights $\ell(\cdot)$ then
we may choose $\alpha(Y_i)$ so that $\ell(\alpha(Y_i))\leq\ell(\alpha(Y_{3-i}))$.
This is convenient in what Klein and Mozes \cite{Klein:book} call \emph{strict optimization problems},
such as MST,
where we can discard one of any two parallel edges based only on these edges.
Note that at any point of time the set of directed parallelisms reported so far
can be seen as a forest of rooted trees $\ensuremath{\mathcal{T}}$, such that each tree $T$ of $\ensuremath{\mathcal{T}}$ represents
a group $Y$ of parallel edges of $G$. The root of $T$ is equal to $\alpha(Y)$.
When some edge is contracted, all edges parallel to it are reported as self-loops.
Clearly, each edge $e$ is reported as a self-loop at most once.
Moreover, it is reported as a part of a directed parallelism $e\to e'$, $e'\neq e$, at most
once.
We are now ready to define the complete interface of our data structure; a short usage sketch follows the list.
\newcommand{\ensuremath{\textup{\texttt{init}}}}{\ensuremath{\textup{\texttt{init}}}}
\newcommand{\ensuremath{\textup{\texttt{contract}}}}{\ensuremath{\textup{\texttt{contract}}}}
\newcommand{\ensuremath{\textup{\texttt{edge}}}}{\ensuremath{\textup{\texttt{edge}}}}
\newcommand{\ensuremath{\textup{\texttt{vertices}}}}{\ensuremath{\textup{\texttt{vertices}}}}
\newcommand{\ensuremath{\textup{\texttt{deg}}}}{\ensuremath{\textup{\texttt{deg}}}}
\newcommand{\ensuremath{\textup{\texttt{neighbors}}}}{\ensuremath{\textup{\texttt{neighbors}}}}
\newcommand{\ensuremath{\textbf{nil}}}{\ensuremath{\textbf{nil}}}
\begin{itemize}
\item $\ensuremath{\textup{\texttt{init}}}(G_0=(V_0,E_0),\ell)$: initialize the data structure. $\ell$ is an optional weight function.
\item $(s,P,L):=\ensuremath{\textup{\texttt{contract}}}(e)$, for $e\in E$:
contract the edge $e$.
Let $e=uv$.
The call $\ensuremath{\textup{\texttt{contract}}}(e)$ returns a vertex $s$ resulting from
merging $u$ and $v$, and two lists $P$, $L$ of new directed parallelisms
and self-loops, respectively, reported as a result of contraction of $e$.
\item $\ensuremath{\textup{\texttt{vertices}}}(e)$, for $e\in E$: return $u,v\in V$ such that $e=uv$.
\item $\ensuremath{\textup{\texttt{neighbors}}}(u)$, for $u\in V$: return an iterator to the list $\{(v,\alpha(uv)):v\in N(u)\}$.
\item $\ensuremath{\textup{\texttt{deg}}}(u)$, for $u\in V$: find the number of neighbors of $u$ in $G$.
\item $\ensuremath{\textup{\texttt{edge}}}(u,v)$, for $u,v\in V$: if $uv\in E$, then return $\alpha(uv)$.
Otherwise, return $\ensuremath{\textbf{nil}}$.
\end{itemize}
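To fix ideas, here is a hypothetical usage sketch (Python-style; \texttt{D} denotes an instance of the data structure and the two callbacks are supplied by the client algorithm):
\begin{verbatim}
def contract_and_report(D, e, on_parallel, on_self_loop):
    # Contract e = uv and stream out the resulting structural changes.
    s, P, L = D.contract(e)          # s: the merged vertex
    for (retired, survivor) in P:    # directed parallelism retired -> survivor
        on_parallel(retired, survivor)
    for f in L:                      # each edge is reported as a self-loop
        on_self_loop(f)              # at most once over the whole run
    return s
\end{verbatim}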
The following theorem summarizes the performance of
our data structure.
\begin{theorem}\label{thm:main}
Let $G=(V,E)$ be a planar graph with $|V|=n$ and $|E|=m$.
There exists a data structure supporting
$\ensuremath{\textup{\texttt{edge}}}$, $\ensuremath{\textup{\texttt{vertices}}}$, $\ensuremath{\textup{\texttt{neighbors}}}$ and $\ensuremath{\textup{\texttt{deg}}}$
in $O(1)$ worst-case time, and whose initialization and any sequence of $\ensuremath{\textup{\texttt{contract}}}$ operations take $O(n+m)$ expected time, or $O(n+m)$ worst-case time, if no $\ensuremath{\textup{\texttt{edge}}}$ operations
are performed.
The data structure supports iterating through the neighbor list of a vertex
with $O(1)$ overhead per element.
\end{theorem}
\section{Introduction}
An edge contraction is one of the fundamental graph operations.
Given an undirected graph and an edge~$e$, contracting the edge $e$ consists in removing it from the graph and merging its endpoints.
The notion of a contraction has been used to describe a number of prominent graph algorithms, including Edmonds' algorithm for computing maximum matchings~\cite{Edmonds65paths} or Karger's minimum cut algorithm~\cite{Karger:1993}.
Edge contractions are of particular interest in planar graphs, as a number of planar graph properties are easiest described using contractions.
For example, it is well-known that a graph is planar precisely when it cannot be transformed into $K_5$ or $K_{3,3}$ by contracting edges or removing vertices or edges.
Moreover, contracting an edge preserves planarity.
While a contraction operation is conceptually very simple, its efficient implementation is challenging.
By using standard data structures (e.g. balanced binary trees), one can maintain adjacency lists of a graph in polylogarithmic amortized time.
However, in many planar graph algorithms this becomes a bottleneck.
As an example,
consider the problem of computing a $5$-coloring of a planar graph. There exists a very simple algorithm based on contractions~\cite{Tarjan80}, but efficient implementations use some more involved planar graph properties~\cite{Frederickson84,Tarjan80,Robertson:1996}. For example, the algorithm by Matula, Shiloach and Tarjan~\cite{Tarjan80} uses the fact that every planar graph has either a vertex of degree at most 4 or a vertex of degree 5 adjacent to at least four vertices each having degree at most 11.
Similarly, although there exists a very simple algorithm for computing a MST of a planar graph based on edge contractions, various different methods have been used to implement it efficiently~\cite{Frederickson84,Mares,Matsui}.
\ifshort
\vspace{-2mm}
\fi
\subparagraph{Our Results.}
We show a data structure that can efficiently maintain a planar graph subject to edge contractions in $O(n)$ total time,
assuming the standard word-RAM model with word size $\Omega(\log{n})$.
It can report groups of parallel edges and self-loops that emerge.
It also supports constant-time adjacency queries and maintains the neighbor lists and degrees explicitly.
The data structure can be used as a black-box to implement planar graph algorithms that use contractions.
In particular, it can be used to give clean and conceptually simple implementations of the algorithms for computing $5$-coloring or MST
that do not manipulate the embedding.
More importantly, by using our data structure we give improved algorithms for a few problems in planar graphs. In particular, we obtain optimal algorithms for decremental $2$-edge-connectivity, finding unique perfect matching, and computing maximal $3$-edge-connected subgraphs.
We also obtain improved algorithms for decremental $2$-vertex and $3$-edge connectivity,
where the bottleneck in the state-of-the-art algorithms \cite{Giammarresi:96} is detecting parallel edges under contractions.
For detailed theorem statements, see Sections~\ref{sec:interface} and~\ref{sec:overview}.
\ifshort
\vspace{-2mm}
\fi
\subparagraph{Related work.} The problem of detecting
self-loops and parallel edges under contractions is implicitly addressed by Giammarresi and Italiano~\cite{Giammarresi:96} in their work on decremental (edge-, vertex-) connectivity in planar graphs.
Their data structure uses $O(n\log^2{n})$ total time.
In their book, Klein and Mozes~\cite{Klein:book} show that there exists a data structure
maintaining a planar graph under edge contractions and deletions and
answering adjacency queries in $O(1)$ worst-case time.
The update time is $O(\log{n})$.
This result is based on the work of Brodal and Fagerberg \cite{Brodal:1999}, who showed
how to maintain a bounded outdegree orientation of a dynamic planar graph
so that edge insertions and deletions are supported in $O(\log{n})$ amortized time.
Gustedt \cite{Gustedt} showed an optimal solution to the union-find
problem in the case when, at any time, the maintained sets form
disjoint, connected subgraphs of a given planar graph $G$.
In other words, in this problem the allowed unions correspond
to the edges of a planar graph and the execution of a union operation
can be seen as a contraction of the respective edge.
\ifshort
\vspace{-2mm}
\fi
\subparagraph{Our Techniques.}
It is relatively easy to give a simple \emph{vertex merging data structure} for general graphs
that processes any sequence of contractions in $O(m\log^2{n})$ total time and supports the same queries as our data structure in $O(\log{n})$ time.
To this end, one can store the lists $N(v)$ of neighbors of individual vertices as balanced binary trees.
Upon a contraction of an edge $uv$, or a more general operation of merging
two (not necessarily adjacent)
vertices $u,v$,
$N(u)$ and $N(v)$ are merged by inserting the
smaller set into the larger one (detecting loops and parallel edges along the way, at
no additional cost).
If we used hash tables instead of balanced BSTs, we could achieve $O(\log{n})$ expected
amortized update time and $O(1)$ query time.
In fact, such an approach was used
in~\cite{Giammarresi:96}.
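For concreteness, here is a minimal Python sketch of such a structure in its hash-table variant (the class and method names are ours): each adjacency list is a dictionary mapping a neighbor to an edge identifier, and a merge moves the smaller dictionary into the larger one.
\begin{verbatim}
# Sketch of the simple vertex merging structure (hash-table variant).
# Assumes the initial graph is simple; phi bookkeeping is omitted.
class NaiveMerger:
    def __init__(self, n):
        self.adj = [dict() for _ in range(n)]   # adj[v]: neighbor -> edge id

    def add_edge(self, u, v, eid):
        self.adj[u][v] = eid
        self.adj[v][u] = eid

    def merge(self, u, v):
        """Merge u and v (u != v); return (kept, self_loops, parallels)."""
        if len(self.adj[u]) > len(self.adj[v]):
            u, v = v, u                          # smaller-into-larger
        loops, par = [], []
        for x, eid in self.adj[u].items():
            del self.adj[x][u]
            if x == v:
                loops.append(eid)                # an edge uv becomes a loop
            elif x in self.adj[v]:
                par.append((eid, self.adj[v][x]))  # parallel edge detected
            else:
                self.adj[v][x] = eid
                self.adj[x][v] = eid
        self.adj[u].clear()
        return v, loops, par
\end{verbatim}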
To obtain the speed-up we take advantage of planarity.
Our general idea is to partition the graph into small pieces
and use the above simple-minded vertex merging data structures
to solve our problem separately for each of the pieces and
for the subgraph induced by the vertices contained in multiple pieces
(the so-called boundary vertices).
Due to the nature of edge contractions,
we need to specify how
the partition evolves when our graph changes.
The data structure builds an $r$-division (see Section~\ref{sec:preliminaries}) $\ensuremath{\mathcal{R}}=P_1,P_2,\ldots$ of $G_0$ for $r=\log^4{n}$.
The set $\bnd{\ensuremath{\mathcal{R}}}$ of boundary vertices (i.e., those shared among at least
two pieces) has size $O(n / \log^2 n)$.
Let $(V_0,E_0)$ denote the original graph, and $(V,E)$ denote the current graph (after performing some number of contractions).
Then we can denote by $\ensuremath{\phi}:V_0\to V$ a function
such that the initial vertex $v_0\in V_0$ is contracted into $\ensuremath{\phi}(v_0)$.
We use vertex merging data structures to detect parallel edges and self-loops in the ``top-level'' subgraph
$G[\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})]$, which contains only edges between boundary vertices, and separately for the ``bottom-level'' subgraphs $G[\ensuremath{\phi}(V(P_i))]\setminus G[\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})]$.
At any time, each edge of $G$ is contained in exactly one
of the defined subgraphs, and
thus, the distribution of responsibility for handling individual edges
is based solely on the initial $r$-division.
However, such an assignment of responsibilities gives rise to additional
difficulties.
First, a contraction of an edge in a lower-level subgraph might
cause some edges to ``flow'' from this subgraph to the top-level subgraph (i.e., we may get new edges connecting boundary vertices).
As such an operation turns out to be costly in our implementation,
we need to prove that the number of such events is only
$O(n/\log^2{n})$.
Another difficulty lies in the need of keeping the individual
data structures synchronized: when an edge of the top-level subgraph
is contracted, pairs of vertices in multiple lower-level subgraphs might
need to be merged.
We cannot afford iterating through all the lower-level subgraphs
after each contraction in $G[\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})]$.
This problem is solved by maintaining a system of pointers between representations of the
same vertex of $V$ in different data structures
and another clever application of the smaller-to-larger merge strategy.
Such a two-level data structure would yield a data structure with
$O(n\log\log{n})$ total update time.
To obtain a linear time data structure, we further partition the pieces $P_i$ and
add another layer of maintained subgraphs on $O(\log^4 \log^4 n) = O(\log^4 \log n)$ vertices.
These subgraphs are so small that we can precompute in $O(n)$ time the self-loops and parallel edges
for every possible graph on $t=O(\log^4{\log{n}})$ vertices and every possible sequence of edge contractions.
We note that this overall idea of recursively reducing a problem with an $r$-division to a size at which microencoding
can be used has been previously exploited in~\cite{Gustedt}~and~\cite{Lacki:2015} (Gustedt~\cite{Gustedt} did not use $r$-divisions, but his concept of a \emph{patching} could be replaced with an $r$-division).
Our data structure can be also seen as a solution to a more general version of the planar union-find
problem studied by Gustedt~\cite{Gustedt}.
However, maintaining the status of each edge~$e$ of the initial graph $G$ (i.e., whether
$e$ has become a self-loop or a parallel edge) subject to edge contractions
turns out to be a serious technical challenge.
For example, in~\cite{Gustedt}, the requirements posed on the bottom-level
union-find data structures are in a sense relaxed and it is not necessary for those
to be synchronized with the top-level union-find data structure.
\ifshort
\vspace{-3mm}
\fi
\subparagraph{Organization of the Paper.}
The remaining part of this paper is organized as follows.
In Section~\ref{sec:preliminaries}, we introduce the needed notation and definitions,
whereas in Section~\ref{sec:interface} we define the operations that our data structure
supports.
Then, in Section~\ref{sec:overview} we present a series of applications of our data structure.
In Section~\ref{sec:loopdetection}, we provide a detailed implementation of our data structure.
\ifshort
Due to space constraints, many of the proofs, along with
the pseudocode for example algorithms using our data structure,
can be found in the full version of this paper.
\fi
\section{Maintaining a Planar Graph Under Contractions}\label{sec:loopdetection}
In this section we prove Theorem~\ref{thm:main}.
\ifshort
We defer the discussion on supporting arbitrary weights $\ell(\cdot)$ to the full version.
\fi
\iffull
We defer the discussion on supporting arbitrary weights $\ell(\cdot)$ to Appendix~\ref{sec:edgeweights}.
\fi
Hence, in the following, we assume all edges have equal weights.
\ifshort
\vspace{-3mm}
\fi
\subsection{A Vertex Merging Data Structure}\label{sec:vids}
We first consider a more general problem,
which we call the \emph{bordered vertex merging} problem.
The data structure presented below will constitute a basic
building block of the multi-level data structure.
Let us now describe the data structure for the bordered vertex merging problem in detail.
Suppose we have a dynamic \emph{simple} planar graph $G=(V,E)$ and a \emph{border set} $B\subseteq V$.
Assume $G$ is initially equal to $G_0=(V_0,E_0)$ and no edge of $E_0$ connects two vertices of $B$.
The data structure handles the following update operations.
\begin{itemize}
\item Merge (or in other words, an identification) of two vertices $u,v\in V$ ($u\neq v$), such that the graph is still planar.
If $\{u,v\}\not\subseteq B$, then $u$ and $v$ have to be connected by an edge
and in such a case the merge is equivalent to a contraction of $uv$.
\item Insertion of an edge $e=uv$ (it is not required that $uv\notin E$), preserving planarity.
\end{itemize}
After each update operation the data structure reports the parallel edges and self-loops that emerge. Once reported, each set of parallel edges is merged into one representative edge.
Moreover, the data structure reports and removes any edges that have both endpoints in $B$.
Thus, the following invariants are satisfied before the first and after each modification:
\begin{enumerate}
\item $G$ is planar and simple.
\item No edge of $E$ has both its endpoints in $B$.
\end{enumerate}
Clearly, merging vertices alters the set $V$ by replacing
two vertices $u,v$ with a single vertex.
Thus, at each step, each vertex of $G$ corresponds to a set of vertices of the initial graph $G_0$.
We explicitly maintain a mapping $\ensuremath{\phi}:V_0\to V$ such that for $a\in V_0$, $\ensuremath{\phi}(a)$
is a vertex of the current vertex set $V$ ``containing'' $a$.
The reverse mapping $\ensuremath{\phi}^{-1}:V\to 2^{V_0}$ is also stored explicitly.
We now define how the merge of $u$ and $v$ influences the set $B$.
When $\{u,v\}\subseteq B$, the resulting vertex is also
in $B$.
When $u\in B, v\notin B$ (or $v\in B,u\notin B$, resp.),
the resulting vertex is included in $B$ in place of $u$ ($v$, resp.).
Finally, for $u,v\notin B$,
the resulting vertex does not belong to $B$ either.
Let $\tilde{E}$ be the set of inserted edges.
At any time, the edges of $E$ constitute a subset of $E_0\cup\tilde{E}$ in the following sense:
for each $e=xy\in E$ there exists an edge $e'=uv\in E_0\cup\tilde{E}$ such that $\ensuremath{\mathrm{id}}(e)=\ensuremath{\mathrm{id}}(e')$,
and vertices $u$ and $v$ have been merged into $x$ and $y$, respectively.
Note that some modifications might break the second invariant:
both an edge insertion and a merge might introduce an edge $e$ with both endpoints
in $B$.
We call such an edge a \emph{border edge}.
Each border edge $e$ that is not a self-loop
is reported and deleted from (or not inserted to) $G$.
Apart from reporting and removing new edges of $B\times B$ appearing in $E$, we also report
the newly created parallel edges that might arise
after the modification and remove them.
The reporting of parallel edges is done in the form of directed parallelisms,
as described in Section~\ref{sec:interface}.
Again, it is easy to see that each edge of $E_0\cup \tilde{E}$ is reported
as the first coordinate of a directed parallelism at most once.
Note that an edge $e$ may be first reported parallel (in a directed parallelism
of the form $e'\to e$, where $e'\neq e$) and then reported border.
\ifshort
\vspace{-3mm}
\fi
\subparagraph{The Graph Representation.}
The data structure for the bordered vertex merging problem internally maintains $G$ using the data structure of the following lemma for planar graphs.
\begin{lemma}[\cite{Brodal:1999}]\label{lem:adjq}
There exists a deterministic, linear-space data structure, initialized in $O(n)$
time, and maintaining a
dynamic, simple planar graph $H$ with $n$ vertices, so that:
\begin{itemize}
\item adjacency queries in $H$ can be performed in $O(1)$ worst-case time,
\item edge insertions and deletions can be performed in $O(\log{n})$ amortized time.
\end{itemize}
\end{lemma}
\begin{fact}\label{fac:adj}
The data structure of Lemma~\ref{lem:adjq} can be easily extended so that:
\begin{itemize}
\item Doubly-linked lists $N(v)$ of neighbors, for $v\in V$, are maintained within the same bounds.
\item For each edge $xy$ of $H$, some auxiliary data associated with $e$ can
be accessed and updated in $O(1)$ worst-case time.
\end{itemize}
\end{fact}
In addition to the data structure of Lemma~\ref{lem:adjq} representing $G$,
for each unordered pair $x,y$ of vertices adjacent in $G$,
we maintain an edge $\alpha(x,y)=e$,
where $e$ is the unique edge in $E$ connecting $x$ and $y$.
Recall that in fact $\alpha(x,y)$ corresponds to some of the original
edges of $E_0$ or one of the inserted edges $\tilde{E}$.
By Fact~\ref{fac:adj}, we can access $\alpha(x,y)$ in constant time.
The mapping $\ensuremath{\phi}$ is stored in an array, whereas the sets $\ensuremath{\phi}^{-1}(\cdot)$ are stored in doubly-linked lists.
Suppose we merge two vertices $u,v\in V$.
Instead of creating a new vertex $w$, we merge one of these
vertices into the other.
Suppose we merge $u$ into $v$.
In terms of the operations supported by the data structure of Lemma~\ref{lem:adjq},
we need to remove each edge $ux$
and insert an edge $vx$, unless $v$ has been adjacent to $x$ before.
To update our representation, we only need to perform the following steps:
\begin{itemize}
\item For each $v_0\in \ensuremath{\phi}^{-1}(u)$, set $\ensuremath{\phi}(v_0)=v$ and add $v_0$ to $\ensuremath{\phi}^{-1}(v)$.
\item Compute the list $N_u = \{(x,\alpha(u,x)) : x\in N(u)\}$.
Remove all edges adjacent to $u$ from~$G$.
For each $(x,\alpha(u,x))\in N_u$, $x\neq v$, check whether $x\in N(v)$
(this can be done in $O(1)$ time, by Lemma~\ref{lem:adjq}).
If so, report the parallelism $\alpha(u,x)\to\alpha(v,x)$.
Otherwise, if $vx$ is not a border edge, insert an edge $vx$ to $G$
and set $\alpha(v,x)=\alpha(u,x)$.
If, on the other hand, $v\in B$ and $x\in B$ (i.e., $vx$ is a border edge), report
$\alpha(u,x)$ as a border edge.
\end{itemize}
Observe that our order of updates issued to $G$ guarantees that $G$ remains planar at all times.
The decision whether we merge $u$ into $v$ or $v$ into $u$ heavily affects
both the correctness and efficiency of the data structure.
First, if one of $u,v$ (say $v$) is contained in~$B$, whereas the other (say $u$) is not,
we merge $u$ into $v$.
If, however, we have $\{u,v\}\subseteq B$ or $\{u,v\}\subseteq V\setminus B$, we
pick a vertex (say $u$) with a smaller set $\ensuremath{\phi}^{-1}(u)$ and merge $u$ into $v$.
To handle an insertion of a new edge $e=xy$, we first check whether
$xy$ is a border edge.
If so, we discard $e$ and report it.
Otherwise, check whether $x$ and $y$ are adjacent in $G$.
If so, report the parallelism $e\to\alpha(x,y)$.
If not, add an edge $xy$ to $G$ and set $\alpha(x,y)=e$.
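The following Python sketch (ours, for illustration only) mirrors the merge and insertion logic described above; plain dictionaries stand in for the structure of Lemma~\ref{lem:adjq} and for $\alpha$, reports are collected in a log, and the maintenance of $\ensuremath{\phi}$ is reduced to the lists $\ensuremath{\phi}^{-1}(\cdot)$, so the stated time bounds do not carry over.
\begin{verbatim}
# Sketch of the bordered vertex merging operations.  Dictionaries
# replace the structure of Lemma 2; the order of border/parallelism
# checks follows the description in the text.
class BorderedMerger:
    def __init__(self, n, border):
        self.alpha = [dict() for _ in range(n)]  # alpha[x][y] = edge id
        self.B = set(border)                     # border vertices
        self.inv = [[v] for v in range(n)]       # the lists phi^{-1}(.)
        self.reported = []                       # (kind, payload) log

    def insert(self, eid, x, y):
        if x in self.B and y in self.B:
            self.reported.append(('border', eid))   # discard border edge
        elif y in self.alpha[x]:
            self.reported.append(('parallel', (eid, self.alpha[x][y])))
        else:
            self.alpha[x][y] = self.alpha[y][x] = eid

    def merge(self, u, v):
        """Merge u and v; return the kept vertex."""
        if (u in self.B) != (v in self.B):
            if u in self.B:
                u, v = v, u              # non-border merges into border
        elif len(self.inv[u]) > len(self.inv[v]):
            u, v = v, u                  # otherwise smaller-into-larger
        self.inv[v] += self.inv[u]; self.inv[u] = []
        nu = list(self.alpha[u].items()); self.alpha[u].clear()
        for x, eid in nu:
            del self.alpha[x][u]
            if x == v:
                self.reported.append(('loop', eid))
            elif x in self.alpha[v]:
                self.reported.append(('parallel', (eid, self.alpha[v][x])))
            elif v in self.B and x in self.B:
                self.reported.append(('border', eid))
            else:
                self.alpha[v][x] = self.alpha[x][v] = eid
        return v
\end{verbatim}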
\begin{lemma}\label{lem:brute_repr}
Let $G$ be a graph initially equal to a simple planar graph $G_0=(V_0,E_0)$ such that $n=|V_0|$.
There is a data structure for the bordered vertex merging problem that processes any sequence of modifications of $G_0$, along with reporting parallelisms and border edges, in $O((n+f)\log^2{n}+m)$ total time, where $m$
is the total number of edge insertions and $f$ is the total
number of insertions of edges connecting non-adjacent vertices.
\end{lemma}
\begin{proof}
Clearly, by Lemma~\ref{lem:adjq}, building the initial representation takes $O(n\log{n})$ time,
as we insert $O(n)$ edges to $G$.
The reporting of parallel edges and border edges
takes $O(n+m)$ time,
since each (initial or inserted)
edge is reported as a border edge or occurs as the first coordinate
of a reported directed parallelism at most once.
Also note that, by Lemma~\ref{lem:adjq}, an insertion of a parallel edge costs $O(1)$ time,
for a total of $O(m)$ time over all insertions, as $G$ is not updated
in that case.
Recall that, by Fact~\ref{fac:adj}, accessing and updating
values $\alpha(x,y)$ for $xy\in E(G)$ takes $O(1)$ time.
The total cost of maintaining the representation of $G$ is $O(g\log{n})$, where
$g$ is the total number of edge updates to the data structure of Lemma~\ref{lem:adjq}.
We prove that $g=O((n+f)\log{n})$.
To this end, we look at the merge of $u$ into $v$ from a different perspective:
instead of removing an edge $e=ux$ and inserting an edge $vx$,
imagine that we simply change an endpoint $u$ of $e$
to $v$, but the edge itself does not lose its identity.
Then, new edges in $G$ are only created either during
the initialization or by inserting an edge connecting
the vertices that have not been previously adjacent in $G$.
Hence, there are $O(n+f)$ creations of new edges.
Consider some edge $e=xy$ of $G$ immediately after its creation.
Denote by $q(e)$ the pair
\iffull
\linebreak
\fi
$(|\ensuremath{\phi}^{-1}(x)|,|\ensuremath{\phi}^{-1}(y)|)$.
The value of $q(e)$ always changes when some endpoint of $e$ is updated.
Suppose a merge of $u$ into $v$ ($u\neq v$) causes the change of some
endpoint $u$ of $e$ to $v$.
Either we have $u\notin B$ and $v\in B$,
or $|\ensuremath{\phi}^{-1}(v)|\geq |\ensuremath{\phi}^{-1}(u)|$ before the merge.
The former situation can arise at most once per endpoint of $e$, since we always merge
a non-border vertex into a border vertex
whenever such a case arises.
In the latter case, on the other hand, one coordinate of $q(e)$
grows at least by a factor of $2$, and clearly this can happen at most $O(\log{n})$ times,
as the size of any $\ensuremath{\phi}^{-1}(x)$ is never more than $n$.
Since there are $O(n+f)$ ``created'' edges, and each such edge undergoes
$O(\log{n})$ endpoint updates, indeed we have $g=O((n+f)\log{n})$.
A very similar argument can be used to show that the total time needed to maintain the mapping
$\ensuremath{\phi}$ along with the reverse mapping $\ensuremath{\phi}^{-1}$ is $O(n\log{n})$.
\end{proof}
\ifshort
\vspace{-4mm}
\fi
\subparagraph{A Micro Data Structure.}
In order to obtain an optimal data structure, we need the following specialized version
of the bordered vertex merging data structure that handles very small
graphs in linear total time.
Suppose we disallow inserting new edges into $G$.
Additionally, assume we are allowed to perform some preprocessing in time $O(n)$.
Then, due to the monotone nature of the allowed operations on~$G$,
when the size of $G_0$ is very small compared to $n$,
we can maintain $G$ faster than by using the data structure
of Lemma~\ref{lem:brute_repr}.
\begin{lemma}\label{lem:micro}
After preprocessing in $O(n)$ time, we can repeatedly solve
the bordered vertex merging problem without edge insertions
for planar simple graphs $G_0$ with $t=O(\log^4{\log^4{n}})$ vertices
in $O(t)$ time.
\end{lemma}
\iffull
\begin{proof}
Let $f(n)=c\log^4{\log^4{n}}$ for some $c>0$.
We use the preprocessing time to simulate every possible sequence
of modifications on every possible graph $G_0=(V_0,E_0)$ with no more
than $f(n)$
vertices and each possible $B\subseteq V_0$.
The simulation allows us to precompute for each step the list of self-loops and directed
parallelisms to be reported.
We identify the vertices $V_0$ with the set $\{1,\ldots,t\}$
and assume that edges of $E_0$ are assigned identifiers
from the set $1,\ldots,|E_0|$ such that $e=uv\in E_0$ is assigned
an identifier equal to the position of the pair $(u,v)$
in the sorted list $\{(u,v):u<v \land uv\in E_0\}$.
Any possible graph $G_0$ can be encoded with $O(f(n)^2)$ bits,
representing the adjacency matrix of $G_0$.
For a given $G_0$ with $t$ vertices, each possible $B\subseteq V_0$ can
be easily encoded with additional $O(t)=O(f(n))$ bits.
On a graph $G$ initially equal to $G_0$, at most $t$ merges can be performed.
Clearly, a single operation on $G$ can be encoded as a pair
of affected vertices, i.e., $O(\log{t})$ bits.
Each possible sequence $S$ of modifications of $G$ (not necessarily maximal) can be thus encoded
with additional $O(t\log{t})=O(f(n)^2)$ bits.
We conclude that each triple $(G_0,B,S)$ can be encoded with $O(\ensuremath{\mathrm{poly}}(f(n)))$ bits
and thus there are no more than $O(2^{\ensuremath{\mathrm{poly}}(f(n))})$ such triples.
For each triple $\psi=(G_0,B,S)$, we do the following:
\begin{itemize}
\item We compute its bit encoding $z(\psi)$.
\item We use the data structure $D$ of Lemma~\ref{lem:brute_repr}
to simulate the sequence of updates $S$ on a graph $G$ initially equal to
$G_0$ and a border set $B$.
\item Afterwards, a record $Q[z(\psi)]$ is filled with the following information:
\begin{itemize}
\item mappings $\ensuremath{\phi}$ and $\ensuremath{\phi}^{-1}$ computed by $D$,
\item the lists of border edges and directed parallelisms that were reported
after the last modification of the sequence $S$,
\item the bit encodings $z(\psi')$ of all the triples $\psi'=(G_0,B,S')$,
such that $S'$ extends $S$ by a single modification.
\end{itemize}
\end{itemize}
For each triple $\psi=(G_0,B,S)$, all the needed information can be clearly
computed in time polynomial in $f(n)$.
Hence, in total we need $O(2^{\ensuremath{\mathrm{poly}}(f(n))})$ time to compute all the necessary
information.
As $O(\ensuremath{\mathrm{poly}}(f(n)))=o(\log{n})$,
any bit encoding $z(\psi)$ is an integer of order $O(n)$
and fits in $O(1)$ machine words.
Now, to handle any sequence of modifications on a graph $G_0$ with at most
$f(n)$ vertices and a border set $B\subseteq V_0$, we first
compute in linear time the bit encoding $z(\psi^{*})$ of $\psi^{*}=(G_0,B,S)$, where initially
$S=\emptyset$.
Each modification $Y$ is executed as follows: we use the information in $Q[z(\psi^*)]$
to find the bit encoding $z(\psi')$ of the configuration $\psi'=(G_0,B,S\cup \{Y\})$
and we move from the configuration $\psi^*$ to $\psi'$.
Next, we read from $Q[z(\psi^{'})]$ which edges should be reported as parallel edges
or border edges.
As we only move between the configurations by updating the bit encoding of the current configuration
and possibly report edges, the whole sequence of updates takes time linear in the size of $G_0$.
Clearly, the record $Q[z(\psi^{*})]$ can be used to
access the mappings $\ensuremath{\phi}$ and $\ensuremath{\phi}^{-1}$ in constant time.
\end{proof}
\fi
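As a self-contained illustration of the tabulation idea behind Lemma~\ref{lem:micro} (simplified: it encodes graphs and single merges only, whereas the lemma also encodes border sets and whole operation sequences), the following Python fragment precomputes, for every graph on $t$ vertices and every merge, the resulting self-loops and parallel edges.
\begin{verbatim}
# Conceptual sketch of the micro-structure precomputation: tabulate,
# for every graph on t vertices (encoded as an edge bitmask) and every
# merge of u into v, the self-loops and parallel pairs it produces.
from itertools import combinations

def precompute_merges(t):
    pairs = list(combinations(range(t), 2))   # possible edge slots
    Q = {}
    for mask in range(1 << len(pairs)):       # adjacency encoded in bits
        edges = {p for i, p in enumerate(pairs) if mask >> i & 1}
        for (u, v) in combinations(range(t), 2):
            loops = [e for e in edges if set(e) == {u, v}]
            kept, par = set(), []
            for (a, b) in edges - set(loops):
                a = v if a == u else a        # replace u by v everywhere
                b = v if b == u else b
                e = (min(a, b), max(a, b))
                if e in kept:
                    par.append(e)             # merged into a parallel pair
                else:
                    kept.add(e)
            Q[(mask, u, v)] = (loops, par)
    return Q

# Q = precompute_merges(4)  # 2^6 graphs x 6 merges, each answered in O(1)
\end{verbatim}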
\subsection{A Multi-Level Data Structure}
Recall that our goal is to maintain $G$ under contractions.
Below we describe in detail how to take advantage of graph partitioning
and bordered vertex merging data structures to obtain a linear
time solution.
To simplify the further presentation, we assume that the initial version $G_0=(V_0,E_0)$
of $G$ is simple and of constant degree.
\ifshort
The standard reduction assuring that is described in the full version.
\fi
\iffull
The standard reduction assuring that is described in Appendix~\ref{a:degree}.
\fi
We build an $r$-division $\ensuremath{\mathcal{R}}=\{P_1,P_2,\ldots\}$ of $G$ with $r=\log^4{n}$, where $n = |V_0|$ (see Lemma~\ref{lem:rdiv}).
Then, for each piece $P_i\in\ensuremath{\mathcal{R}}$, we build an $r$-division $\ensuremath{\mathcal{R}}_i=\{P_{i,1},P_{i,2},\ldots\}$
of $P_i$ with $r=\log^4{\log^4{n}}$.
By Lemma~\ref{lem:rdiv}, building all the necessary pieces takes $O(n)$ time
in total.
Since $G_0$ is of constant degree, any vertex $v\in V_0$ is contained in $O(1)$ pieces of $\ensuremath{\mathcal{R}}$.
Analogously, for any $v\in P_i$, $v$ is contained in $O(1)$ pieces of $\ensuremath{\mathcal{R}}_i$.
\newcommand{\ensuremath{\mathcal{G}}}{\ensuremath{\mathcal{G}}}
\newcommand{\ensuremath{\mathcal{L}}}{\ensuremath{\mathcal{L}}}
\newcommand{\ensuremath{\mathcal{M}}}{\ensuremath{\mathcal{M}}}
\newcommand{\ensuremath{\mathcal{F}}}{\ensuremath{\mathcal{F}}}
\newcommand{\ensuremath{\Pi}}{\ensuremath{\Pi}}
\newcommand{\ensuremath{\mathcal{D}}}{\ensuremath{\mathcal{D}}}
\newcommand{\ensuremath{\pi}}{\ensuremath{\pi}}
As $G$ undergoes contractions, let $\ensuremath{\phi}:V_0\to V$ be a mapping such that
for each $v\in V_0$, $v$~``has been merged'' into $\ensuremath{\phi}(v)$.
As we later describe, a vertex resulting from contracting an edge $uv$ will be called either $u$ or $v$,
which guarantees that $V \subseteq V_0$ at all times.
Of course, initially $\ensuremath{\phi}(v)=v$ for each $v\in V=V_0$.
\newcommand{\simp}[1]{\ensuremath{\overline{#1}}}
\newcommand{\ensuremath{par}}{\ensuremath{par}}
Let $\simp{G}=(V,\simp{E})$ denote the maximal simple subgraph of $G$, i.e.,
the graph $G$ with self-loops discarded and each group $Y$ of parallel
edges replaced with a single edge $\alpha(Y)$.
The key component of our data structure is a 3-level set
of
(possibly micro-) bordered vertex merging data structures
$\ensuremath{\Pi}=\{\ensuremath{\pi}\}\cup \{\ensuremath{\pi}_i:P_i\in\ensuremath{\mathcal{R}}\}\cup \{\ensuremath{\pi}_{i,j}:P_i\in\ensuremath{\mathcal{R}}, P_{i,j}\in\ensuremath{\mathcal{R}}_i\}$.
The data structures $\ensuremath{\Pi}$ form a tree such that $\ensuremath{\pi}$ is the root,
$\{\ensuremath{\pi}_i:P_i\in\ensuremath{\mathcal{R}}\}$ are the children of $\ensuremath{\pi}$ and
$\{\ensuremath{\pi}_{i,j}:P_{i,j}\in\ensuremath{\mathcal{R}}_i\}$ are the children
of $\ensuremath{\pi}_i$.
For $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$, let $\ensuremath{par}(\ensuremath{\mathcal{D}})$ be the parent of $\ensuremath{\mathcal{D}}$
and let $A(\ensuremath{\mathcal{D}})$ be the set of ancestors of $\ensuremath{\mathcal{D}}$.
We call the value $h(\ensuremath{\mathcal{D}})=|A(\ensuremath{\mathcal{D}})|$ a \emph{level} of $\ensuremath{\mathcal{D}}$.
The data structures of levels $0$ and $1$ are stored as data structures
of Lemma~\ref{lem:brute_repr}, whereas the data structures of level
$2$ are stored as micro structures of Lemma~\ref{lem:micro}.
Each data structure $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$ has a defined set $V_\ensuremath{\mathcal{D}}\subseteq V_0$ of \emph{interesting vertices},
defined as follows:
$V_\ensuremath{\pi}=\bnd{\ensuremath{\mathcal{R}}}$, $V_{\ensuremath{\pi}_i}=\bnd{P_i}\cup \bnd{\ensuremath{\mathcal{R}}_i}$ and $V_{\ensuremath{\pi}_{i,j}}=V(P_{i,j})$.
The data structure $\ensuremath{\mathcal{D}}$ maintains a certain subgraph $G_\ensuremath{\mathcal{D}}$ of $\simp{G}$ defined inductively as follows (recall that we define $G_1 \setminus G_2$ to be a graph containing all vertices of $G_1$ and edges of $G_1$ that do not belong to $G_2$)
$$G_\ensuremath{\mathcal{D}}=\simp{G}[\ensuremath{\phi}(V_\ensuremath{\mathcal{D}})]\setminus\Biggl(\bigcup_{\ensuremath{\mathcal{D}}'\in A(\ensuremath{\mathcal{D}})}G_{\ensuremath{\mathcal{D}}'}\Biggr).$$
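In code, this definition can be read as follows (an illustrative Python fragment; edges are unordered pairs of current vertices, \texttt{phi} maps initial vertices to current ones, and \texttt{ancestor\_edges} lists the edge sets owned by the structures in $A(\ensuremath{\mathcal{D}})$):
\begin{verbatim}
# Illustrative computation of E(G_D): edges of simp(G) induced by
# phi(V_D), minus the edges already owned by the ancestors of D.
def edges_of_G_D(simple_edges, phi, V_D, ancestor_edges):
    image = {phi[v] for v in V_D}             # phi(V_D)
    induced = {e for e in simple_edges
               if e[0] in image and e[1] in image}
    for E_anc in ancestor_edges:              # subtract G_{D'}, D' in A(D)
        induced -= E_anc
    return induced
\end{verbatim}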
\begin{fact}\label{fac:minor}
For any $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$, $G_\ensuremath{\mathcal{D}}$ is a minor of $G_0$.
\end{fact}
\begin{fact}\label{fac:edge-exists}
For any $uv=e\in \simp{E}$, there exists $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$ such that $e\in E(G_\ensuremath{\mathcal{D}})$.
\end{fact}
\iffull
\begin{proof}
Let $u_0,v_0$ be the initial endpoints of $e$. Initially $e\in P_{i,j}$ for some $i,j$. Observe that, since
$\{\ensuremath{\phi}(u_0),\ensuremath{\phi}(v_0)\}\subseteq V(G_{\ensuremath{\pi}_{i,j}})$, $e$ is contained in
$G_{\ensuremath{\pi}_{i,j}}$ or some of its ancestors.
\end{proof}
\fi
Each $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$ is initialized with the graph $G_\ensuremath{\mathcal{D}}$,
according to the initial mapping $\ensuremath{\phi}(v)=v$ for any $v\in V_0$.
We define the set of \emph{ancestor vertices} $AV_\ensuremath{\mathcal{D}}=V_\ensuremath{\mathcal{D}}\cap\left(\bigcup_{\ensuremath{\mathcal{D}}'\in A(\ensuremath{\mathcal{D}})} V_{\ensuremath{\mathcal{D}}'}\right)$.
Now we discuss what it means for the bordered vertex merging data structure $\ensuremath{\mathcal{D}}$ to
maintain the graph $G_\ensuremath{\mathcal{D}}$.
Note that the vertex set used to initialize $\ensuremath{\mathcal{D}}$ is $V_\ensuremath{\mathcal{D}}$.
We write $\ensuremath{\phi}_\ensuremath{\mathcal{D}}, \ensuremath{\phi}_\ensuremath{\mathcal{D}}^{-1}$ to denote
the mappings $\ensuremath{\phi}, \ensuremath{\phi}^{-1}$ maintained by $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$, respectively.
Throughout a sequence of contractions, we maintain the following invariants for any $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$:
\begin{itemize}
\item There is a 1-1 mapping between the sets $\ensuremath{\phi}(V_\ensuremath{\mathcal{D}})$ and $\ensuremath{\phi}_\ensuremath{\mathcal{D}}(V_\ensuremath{\mathcal{D}})$
such that for the corresponding vertices $x\in \ensuremath{\phi}(V_\ensuremath{\mathcal{D}})$ and $y\in \ensuremath{\phi}_\ensuremath{\mathcal{D}}(V_\ensuremath{\mathcal{D}})$
we have $\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(y)=\ensuremath{\phi}^{-1}(x)\cap V_\ensuremath{\mathcal{D}}$.
We also say that $x$ is \emph{represented} in $\ensuremath{\mathcal{D}}$ in this case.
\item There is an edge $xy\in E(G_\ensuremath{\mathcal{D}})$ if and only if there is an edge $x'y'$ in the graph
maintained by $\ensuremath{\mathcal{D}}$, where $x',y'\in \ensuremath{\phi}_\ensuremath{\mathcal{D}}(V_\ensuremath{\mathcal{D}})$ are the corresponding
vertices of $x$ and $y$, respectively.
\item The border set $B_\ensuremath{\mathcal{D}}$ of $\ensuremath{\mathcal{D}}$ is always equal to $\ensuremath{\phi}_\ensuremath{\mathcal{D}}(AV_\ensuremath{\mathcal{D}})$.
\end{itemize}
Thus, the graph maintained by $\ensuremath{\mathcal{D}}$ is isomorphic to $G_\ensuremath{\mathcal{D}}$ but can technically
use a different vertex set.
Observe that in $G_\ensuremath{\mathcal{D}}$ there are no edges between the vertices of $\ensuremath{\phi}(AV_\ensuremath{\mathcal{D}})$, and
the following fact describes how this is reflected in $\ensuremath{\mathcal{D}}$.
\begin{fact}\label{fac:bempty}
In the graph stored in $\ensuremath{\mathcal{D}}$, no two vertices of $B_\ensuremath{\mathcal{D}}$ are adjacent.
\end{fact}
Note that as the sets $V_\ensuremath{\mathcal{D}}$ and $V_{\ensuremath{\mathcal{D}}'}$ might overlap for $\ensuremath{\mathcal{D}}\neq\ensuremath{\mathcal{D}}'$,
the vertices of $V$ can be represented in multiple data structures.
\begin{lemma}\label{lem:uniq_vert}
Suppose for $v\in V$ we have $v\in V(G_{\ensuremath{\mathcal{D}}_1})$ and $v\in V(G_{\ensuremath{\mathcal{D}}_2})$. Then, $v\in V(G_\ensuremath{\mathcal{D}})$,
where $\ensuremath{\mathcal{D}}$ is the lowest common ancestor of $\ensuremath{\mathcal{D}}_1$ and $\ensuremath{\mathcal{D}}_2$.
\end{lemma}
\iffull
\begin{proof}
We first prove that for $i\neq j$, $\ensuremath{\phi}(V(P_i))\cap \ensuremath{\phi}(V(P_j))\subseteq\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})$.
Assume the contrary.
Thus, there exists $w\in \ensuremath{\phi}(V(P_i))\cap \ensuremath{\phi}(V(P_j))$ such that $w\notin \ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})$.
But for $x\in V$, $G_0[\ensuremath{\phi}^{-1}(x)]$ is a connected subgraph of $G_0$
and thus $G_0[\ensuremath{\phi}^{-1}(w)]$ is connected and contains both some vertex of $P_i$
and some vertex of $P_j$.
But each path from $V(P_i)$ to $V(P_j)$ in $G_0$ has to go through a vertex of $\bnd{\ensuremath{\mathcal{R}}}$,
by the definition of an $r$-division.
Hence $\bnd{\ensuremath{\mathcal{R}}}\cap\ensuremath{\phi}^{-1}(w)\neq\emptyset$ and $w\in\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})$, a contradiction.
Analogously one can prove that for any $i$ and $j\neq k$, $\ensuremath{\phi}(V(P_{i,j}))\cap \ensuremath{\phi}(V(P_{i,k}))\subseteq\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}_i})$.
Suppose that $v\in V(G_{\ensuremath{\mathcal{D}}_1})\cap V(G_{\ensuremath{\mathcal{D}}_2})$. If for some $i\neq j$ we have
$\ensuremath{\mathcal{D}}_1\in\{\ensuremath{\pi}_i\}\cup\bigcup_k\{\ensuremath{\pi}_{i,k}\}$ and $\ensuremath{\mathcal{D}}_2\in\{\ensuremath{\pi}_j\}\cup\bigcup_k\{\ensuremath{\pi}_{j,k}\}$
then $v\in\ensuremath{\phi}(P_i)\cap\ensuremath{\phi}(P_j)$ and we conclude $v\in\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})$ and hence $v\in V(G_\ensuremath{\pi})$.
Analogously we prove that if $\ensuremath{\mathcal{D}}_1=\ensuremath{\pi}_{i,j}$ and $\ensuremath{\mathcal{D}}_2=\ensuremath{\pi}_{i,k}$ for some $j\neq k$, then
$v\in V(G_{\ensuremath{\pi}_i})$.
\end{proof}
\fi
By Lemma~\ref{lem:uniq_vert}, each vertex $v\in V$ is represented in a unique
data structure of minimal level, a lowest common ancestor of all data structures
where $v$ is represented.
We denote such a data structure by $\ensuremath{\mathcal{D}}(v)$.
Observe that for any $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$ the vertices $\{v:\ensuremath{\mathcal{D}}(v)=\ensuremath{\mathcal{D}}\}$ are represented
in $\ensuremath{\mathcal{D}}$ by $\ensuremath{\phi}_{\ensuremath{\mathcal{D}}}(V_\ensuremath{\mathcal{D}})\setminus \ensuremath{\phi}_{\ensuremath{\mathcal{D}}}(AV_\ensuremath{\mathcal{D}})$.
We now describe the way we index the vertices of $V$. This is required, as upon a contraction,
our data structure returns an identifier of a new vertex.
We also reuse the names of the initial vertices $V_0$, as the
bordered vertex merging data structures do.
Namely, a vertex $v\in V$ is labeled with $\ensuremath{\phi}_{\ensuremath{\mathcal{D}}(v)}(v')\in V_0$, where $v'$ represents
$v$ in $\ensuremath{\mathcal{D}}(v)$.
Note that, as the bordered vertex merging data structures always merge one vertex
involved into the other, for any $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$ we have
$\ensuremath{\phi}_\ensuremath{\mathcal{D}}(V_\ensuremath{\mathcal{D}})\setminus\ensuremath{\phi}_\ensuremath{\mathcal{D}}(AV_\ensuremath{\mathcal{D}})\subseteq V_\ensuremath{\mathcal{D}}\setminus AV_\ensuremath{\mathcal{D}}$.
Hence the label sets used by distinct sets $\{v:\ensuremath{\mathcal{D}}(v)=\ensuremath{\mathcal{D}}\}$ are distinct,
since the sets of the form $V_\ensuremath{\mathcal{D}}\setminus AV_\ensuremath{\mathcal{D}}$ are pairwise disjoint.
Such a labeling scheme makes it easy to find the data structure
$\ensuremath{\mathcal{D}}(v)$ by looking only at the label.
For brevity, in the following we sometimes do not distinguish between
the set $V$ and the set of labels $\bigcup_{\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}}\left(\ensuremath{\phi}_\ensuremath{\mathcal{D}}(V_\ensuremath{\mathcal{D}})\setminus\ensuremath{\phi}_\ensuremath{\mathcal{D}}(AV_\ensuremath{\mathcal{D}})\right)$.
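Since the sets $V_\ensuremath{\mathcal{D}}\setminus AV_\ensuremath{\mathcal{D}}$ are pairwise disjoint, resolving a label to its home structure amounts to a single array lookup, as in the following minimal Python sketch (names ours):
\begin{verbatim}
# Sketch of the label lookup: each label in V_D \ AV_D can only ever
# name a vertex whose home structure is D, so one pass fills owner[].
def build_owner(structures, n):
    """structures: iterable of (D, V_D, AV_D) over the tree Pi."""
    owner = [None] * n
    for D, V_D, AV_D in structures:
        for v in set(V_D) - set(AV_D):
            owner[v] = D                      # disjointness: written once
    return owner                              # owner[label] = D(v)
\end{verbatim}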
\begin{lemma}\label{lem:gedge}
Let $uv=e\in\simp{E}$ and $h(\ensuremath{\mathcal{D}}(u))\geq h(\ensuremath{\mathcal{D}}(v))$.
Then $e\in E(G_{\ensuremath{\mathcal{D}}(u)})$ and either $\ensuremath{\mathcal{D}}(u)=\ensuremath{\mathcal{D}}(v)$ or $\ensuremath{\mathcal{D}}(u)$ is a descendant of $\ensuremath{\mathcal{D}}(v)$.
\end{lemma}
\iffull
\begin{proof}
If $\{u,v\}\subseteq\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})$, then clearly $h(\ensuremath{\mathcal{D}}(u))=h(\ensuremath{\mathcal{D}}(v))=0$, $e\in E(G_\ensuremath{\pi})$
and the lemma holds.
Moreover, no $G_\ensuremath{\mathcal{D}}$ such that $\ensuremath{\mathcal{D}}$ is a descendant of $\ensuremath{\pi}$ can contain the edge $uv$.
Otherwise, $u$ does not belong to $\ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})$.
Consequently, by Fact~\ref{fac:edge-exists}~and~Lemma~\ref{lem:uniq_vert}, there exists exactly one $i$ such that $\ensuremath{\phi}(u)$
is a vertex of some graph $G_\ensuremath{\mathcal{D}}$, and $e$ is an edge of some graph $G_{\ensuremath{\mathcal{D}}'}$,
where $\ensuremath{\mathcal{D}},\ensuremath{\mathcal{D}}'$ are data structures in the subtree of $\ensuremath{\Pi}$ rooted at $\ensuremath{\pi}_i$.
If $v\notin \ensuremath{\phi}(\bnd{\ensuremath{\mathcal{R}}})$, $v$ cannot be a vertex of any $G_{\ensuremath{\mathcal{D}}''}$, where $\ensuremath{\mathcal{D}}''$
is in the subtree of $\ensuremath{\pi}_j$, $j\neq i$.
Again, if $\{u,v\}\subseteq\ensuremath{\phi}(\bnd{P_i}\cup\bnd{\ensuremath{\mathcal{R}}_i})$, then
$uv\in E(G_{\ensuremath{\pi}_i})$, $\ensuremath{\mathcal{D}}(u)=\ensuremath{\pi}_i$ and no descendant of $G_{\ensuremath{\pi}_i}$ can contain $uv$.
If not, we analogously get that there might exist at most one $G_{\ensuremath{\pi}_{i,j}}$
containing the edge $uv$ and the vertex $u$.
If $v\notin \ensuremath{\phi}(\bnd{P_i}\cup\bnd{\ensuremath{\mathcal{R}}_i})$, then only $G_{\ensuremath{\pi}_{i,j}}$ can
contain the vertex $v$.
In all cases, $\ensuremath{\mathcal{D}}(u)=\ensuremath{\mathcal{D}}(v)$ or $\ensuremath{\mathcal{D}}(u)$ is a descendant of $\ensuremath{\mathcal{D}}(v)$.
\end{proof}
\fi
\begin{lemma}\label{lem:bind}
Let $uv$ be an edge of some $G_\ensuremath{\mathcal{D}}$, $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$.
If $\{u,v\}\subseteq V(G_{\ensuremath{\mathcal{D}}'})$, where $\ensuremath{\mathcal{D}}'\neq\ensuremath{\mathcal{D}}$, then
$\ensuremath{\mathcal{D}}'$ is a descendant of $\ensuremath{\mathcal{D}}$ and
both $u$ and $v$ are represented as border vertices of $\ensuremath{\mathcal{D}}'$.
\end{lemma}
\iffull
\begin{proof}
Suppose wlog. that $h(\ensuremath{\mathcal{D}}(u))\geq h(\ensuremath{\mathcal{D}}(v))$.
By Lemma~\ref{lem:gedge}, we have $\ensuremath{\mathcal{D}}=\ensuremath{\mathcal{D}}(u)$.
Let $\{u,v\}\subseteq V(G_{\ensuremath{\mathcal{D}}'})$.
If $\ensuremath{\mathcal{D}}'$ is a descendant of $\ensuremath{\mathcal{D}}$,
then $\{u,v\}\subseteq \ensuremath{\phi}(AV_{\ensuremath{\mathcal{D}}'})$ and by the invariants
maintained by our data structure, $u$ and $v$ are represented
by the vertices of $B_{\ensuremath{\mathcal{D}}'}$.
Suppose $\ensuremath{\mathcal{D}}'\neq \ensuremath{\mathcal{D}}$ and $\ensuremath{\mathcal{D}}'$ is not a descendant of $\ensuremath{\mathcal{D}}$.
Then, by Lemma~\ref{lem:uniq_vert}, the lowest common ancestor of $\ensuremath{\mathcal{D}}'$ and $\ensuremath{\mathcal{D}}$ contains
the vertex $u$ and is an ancestor of $\ensuremath{\mathcal{D}}=\ensuremath{\mathcal{D}}(u)$, a contradiction.
\end{proof}
\fi
\begin{lemma}\label{lem:children}
Let $v\in \ensuremath{\phi}(V_\ensuremath{\mathcal{D}})$, where $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$. Then, $v$ is represented in $O(|\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)|)$
data structures $\ensuremath{\mathcal{D}}'$ such that $\ensuremath{par}(\ensuremath{\mathcal{D}}')=\ensuremath{\mathcal{D}}$.
\end{lemma}
\iffull
\begin{proof}
Let $\ensuremath{\mathcal{D}}'$ be a child of $\ensuremath{\mathcal{D}}$. If $v$ is represented in $\ensuremath{\mathcal{D}}'$, then
$v\in \ensuremath{\phi}(V_\ensuremath{\mathcal{D}})\cap\ensuremath{\phi}(V_{\ensuremath{\mathcal{D}}'})$.
It follows that as $G_0[\ensuremath{\phi}^{-1}(v)]$ is a connected subgraph of $G_0$,
it contains a path between some vertex $x\in V_\ensuremath{\mathcal{D}}$ and some vertex $y\in V_{\ensuremath{\mathcal{D}}'}$.
Assume $x\notin V_{\ensuremath{\mathcal{D}}'}$.
If $\ensuremath{\mathcal{D}}'=\ensuremath{\pi}_i$, then in fact we have $x\in V(P_j)$ for $j\neq i$, and any path
from $x$ to $y$ has to go through a vertex $z\in \bnd{P_i}$; as $\bnd{P_i}\subseteq V_\ensuremath{\mathcal{D}}\cap V_{\ensuremath{\mathcal{D}}'}$,
there exists a vertex from $V_\ensuremath{\mathcal{D}}\cap V_{\ensuremath{\mathcal{D}}'}$ in $\ensuremath{\phi}^{-1}(v)$.
Similarly, if $\ensuremath{\mathcal{D}}'=\ensuremath{\pi}_{i,j}$, there exists a vertex of $\bnd{P_{i,j}}$
in $\ensuremath{\phi}^{-1}(v)$ and we again obtain $\ensuremath{\phi}^{-1}(v)\cap V_\ensuremath{\mathcal{D}}\cap V_{\ensuremath{\mathcal{D}}'}\neq\emptyset$.
Recall that we maintain an invariant $\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)=\ensuremath{\phi}^{-1}(v)\cap V_\ensuremath{\mathcal{D}}$.
Hence, $\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)\cap V_{\ensuremath{\mathcal{D}}'}\neq\emptyset$.
However, for each $w\in \ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)$, there are only $O(1)$ child data structures $\ensuremath{\mathcal{D}}'$
such that $w\in V_\ensuremath{\mathcal{D}}'$, by the constant degree assumption.
It follows that there can be at most $O(|\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)|)$ data structures $\ensuremath{\mathcal{D}}'$
such that $\ensuremath{par}(\ensuremath{\mathcal{D}}')=\ensuremath{\mathcal{D}}$ and $\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)\cap V_{\ensuremath{\mathcal{D}}'}\neq\emptyset$,
which in turn means that there are at most $O(|\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)|)$ data structures $\ensuremath{\mathcal{D}}'$
such that $v$ is also represented in $\ensuremath{\mathcal{D}}'$.
\end{proof}
\fi
We also use the following auxiliary components for each $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$:
\newcommand{\ensuremath{\beta}}{\ensuremath{\beta}}
\newcommand{\ensuremath{\gamma}}{\ensuremath{\gamma}}
\begin{itemize}
\item For each $x\in \ensuremath{\phi}_{\ensuremath{\mathcal{D}}}(AV_\ensuremath{\mathcal{D}})$ we maintain a pointer
$\ensuremath{\beta}_\ensuremath{\mathcal{D}}(x)$ to a vertex $y\in \ensuremath{\phi}_{\ensuremath{par}(\ensuremath{\mathcal{D}})}(AV_\ensuremath{\mathcal{D}})$,
such that $x$ and $y$ represent
the same vertex of the maintained graph $G$.
\item A dictionary (we use a balanced BST) $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}$ mapping a pair $(\ensuremath{\mathcal{D}}',x)$, where $\ensuremath{\mathcal{D}}'$ is a child of $\ensuremath{\mathcal{D}}$
and $x\in \ensuremath{\phi}_\ensuremath{\mathcal{D}}(V_\ensuremath{\mathcal{D}})$, to a vertex $y\in \ensuremath{\phi}_{\ensuremath{\mathcal{D}}'}(AV_\ensuremath{\mathcal{D}})$ iff $x$ and $y$
represent the same vertex of $V$.
\end{itemize}
Another component of our data structure
is the forest $\ensuremath{\mathcal{T}}$ of reported parallelisms:
for each reported parallelism $e\to \alpha(e)$, we make $e$ a child of $\alpha(e)$
in $\ensuremath{\mathcal{T}}$.
Note that the forest $\ensuremath{\mathcal{T}}$ allows us to go through all the edges parallel to $\alpha(e)$
in time linear in their number.
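A minimal Python sketch of the forest $\ensuremath{\mathcal{T}}$ follows (ours; edge identifiers are arbitrary hashable values):
\begin{verbatim}
# Sketch of the parallelism forest T: each reported parallelism
# e -> alpha(e) makes e a child of alpha(e).
class ParallelismForest:
    def __init__(self):
        self.parent, self.children = {}, {}

    def report(self, e, rep):                 # parallelism e -> rep
        self.parent[e] = rep
        self.children.setdefault(rep, []).append(e)

    def rep(self, e):
        while e in self.parent:               # walk up to alpha(e)
            e = self.parent[e]
        return e

    def group(self, rep):
        """All edges merged into rep, in time linear in their number."""
        out, stack = [], [rep]
        while stack:
            x = stack.pop()
            out.append(x)
            stack.extend(self.children.get(x, []))
        return out
\end{verbatim}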
\begin{lemma}\label{lem:access-mapping}
For $v_0\in V_0$, we can compute $\ensuremath{\phi}(v_0)$ and find $\ensuremath{\mathcal{D}}(\ensuremath{\phi}(v_0))$ in $O(1)$ time.
\end{lemma}
\iffull
\begin{proof}
Let $P_{i,j}$ be any piece such that $v_0\in V(P_{i,j})$.
First, we can compute the representation $x=\ensuremath{\phi}_{\ensuremath{\pi}_{i,j}}(v_0)$
of $\ensuremath{\phi}(v_0)$ in $\ensuremath{\pi}_{i,j}$
in $O(1)$ time, as the data structure $\ensuremath{\pi}_{i,j}$ stores
the mapping $\ensuremath{\phi}_{\ensuremath{\pi}_{i,j}}$ explicitly.
Set $\ensuremath{\mathcal{D}}=\ensuremath{\pi}_{i,j}$.
Next, if $x\in\ensuremath{\phi}_{\ensuremath{\mathcal{D}}}(AV_\ensuremath{\mathcal{D}})$ (or, technically speaking,
if $x\in AV_\ensuremath{\mathcal{D}}$), we follow the pointer $\ensuremath{\beta}_\ensuremath{\mathcal{D}}(x)$ to
the data structure of lower level and repeat if needed,
until we reach the data structure $\ensuremath{\mathcal{D}}(\ensuremath{\phi}(v_0))$.
As the tree of data structures has $3$ levels, we follow
$O(1)$ pointers.
\end{proof}
\fi
\begin{lemma}\label{lem:transl}
Let $v\in V(G_\ensuremath{\mathcal{D}})$. For any $\ensuremath{\mathcal{D}}'$, such that $\ensuremath{par}(\ensuremath{\mathcal{D}}')=\ensuremath{\mathcal{D}}$,
we can compute the vertex $v'$
representing $v$ in $G_{\ensuremath{\mathcal{D}}'}$ (or detect that such $v'$ does not exist)
in $O(\log|V_\ensuremath{\mathcal{D}}|)$ time.
\end{lemma}
\iffull
\begin{proof}
By Lemma~\ref{lem:children}, the number of entries in $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}$ is
$O\left(\sum_{v\in V(G_\ensuremath{\mathcal{D}})}|\ensuremath{\phi}^{-1}_\ensuremath{\mathcal{D}}(v)|\right)=O(|V_\ensuremath{\mathcal{D}}|)$.
The cost of any operation on a balanced binary search tree is logarithmic in the size
of the tree.
\end{proof}
\fi
We now describe how to implement the call $(s,P,L):=\ensuremath{\textup{\texttt{contract}}}(e)$, where $uv=e\in E$, $u,v\in V$.
Suppose the
initial endpoints of $e$ were $u_0,v_0\in V_0$.
First, we iterate through the tree $T_e\in \ensuremath{\mathcal{T}}$ containing $e$
to find $\alpha(e)$.
By Lemma~\ref{lem:access-mapping}, we can find the vertices $u,v$ along with
the respective data structures $\ensuremath{\mathcal{D}}(u),\ensuremath{\mathcal{D}}(v)$, based on $u_0,v_0$
in $O(1)$ time.
Assume wlog. that $h(\ensuremath{\mathcal{D}}(u))\geq h(\ensuremath{\mathcal{D}}(v))$.
By Lemma~\ref{lem:gedge}, $\alpha(e)$ is an edge of $G_{\ensuremath{\mathcal{D}}(u)}$.
Although we are asked to contract $e$, we conceptually contract $\alpha(e)$, by
issuing a merge of $u$ and $v$ to $\ensuremath{\mathcal{D}}(u)$.
To reflect that we were actually asked to contract $e$, we include
all the edges of $T_e\setminus\{e\}$ in $L$ as self-loops.
The merge might make $\ensuremath{\mathcal{D}}(u)$ report some parallelisms $e_1\to e_2$.
In such a case we report $e_1\to e_2$ to the user (by including it in $P$) and update the forest $\ensuremath{\mathcal{T}}$.
We now have to reflect the contraction of $e$ in all the
required data structures $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$, so that our invariants
are satisfied.
Assume wlog. that $u$ is merged into $v$ in $\ensuremath{\mathcal{D}}$.
If before the contraction, both $u$ and $v$ were the vertices
of some $G_{\ensuremath{\mathcal{D}}'}$, $\ensuremath{\mathcal{D}}'\neq\ensuremath{\mathcal{D}}$, then
by Lemma~\ref{lem:bind}, $\ensuremath{\mathcal{D}}'$ is a descendant of $\ensuremath{\mathcal{D}}$.
By a similar argument as in the proof of Lemma~\ref{lem:brute_repr},
we can afford to iterate through $\ensuremath{\phi}_{\ensuremath{\mathcal{D}}}^{-1}(u)$ without increasing the asymptotic
cost of the $u$-into-$v$ merge performed by $\ensuremath{\mathcal{D}}$,
as long as we spend $O(\log|V_\ensuremath{\mathcal{D}}|)$ time per element of $\ensuremath{\phi}_{\ensuremath{\mathcal{D}}}^{-1}(u)$.
By Lemma~\ref{lem:children},
there are $O(|\ensuremath{\phi}_{\ensuremath{\mathcal{D}}}^{-1}(u)|)$ data structures
$\ensuremath{\mathcal{D}}_1,\ensuremath{\mathcal{D}}_2,\ldots$ that are the children of $\ensuremath{\mathcal{D}}$
and contain the representation of $u$.
For each such $\ensuremath{\mathcal{D}}_i$, we first use the dictionary $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}$
to find the vertex $x$ representing $u$ in $\ensuremath{\mathcal{D}}_i$,
and update $\ensuremath{\beta}_{\ensuremath{\mathcal{D}}_i}(x)$ to $v$.
Then, using Lemma~\ref{lem:transl}, we check whether $v\in V(G_{\ensuremath{\mathcal{D}}_i})$ in $O(\log|V_\ensuremath{\mathcal{D}}|)$ time.
If not, we set $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}(\ensuremath{\mathcal{D}}_i,v)$ to $x$.
Otherwise, we merge $u$ and $v$ in $\ensuremath{\mathcal{D}}_i$ and handle this
merge -- in terms of updating the auxiliary components $\ensuremath{\beta}$ and $\ensuremath{\gamma}$
-- analogously as for~$\ensuremath{\mathcal{D}}$.
This is legal, as $u,v\in\ensuremath{\phi}_{\ensuremath{\mathcal{D}}_i}(AV_{\ensuremath{\mathcal{D}}_i})$ and
thus $u$ and $v$ are border vertices in $\ensuremath{\mathcal{D}}_i$, by Fact~\ref{fac:bempty}.
The merge may cause $\ensuremath{\mathcal{D}}_i$ to report some parallelisms.
We handle them as described above in the case of the data structure~$\ensuremath{\mathcal{D}}$.
Note however
that merging border vertices
cannot cause reporting of new border edges (i.e., those with both
endpoints in $B_{\ensuremath{\mathcal{D}}_i}$).
The merge of $u$ and $v$ in $\ensuremath{\mathcal{D}}$ might also create some new edges $e'=xy$ between
the vertices $\ensuremath{\phi}_\ensuremath{\mathcal{D}}(AV_\ensuremath{\mathcal{D}})$ in $G_\ensuremath{\mathcal{D}}$.
Note that in this case $\ensuremath{\mathcal{D}}$ reports $xy$ as a border edge
and also we know that $h(\ensuremath{\mathcal{D}}(x))<h(\ensuremath{\mathcal{D}})$ and $h(\ensuremath{\mathcal{D}}(y))<h(\ensuremath{\mathcal{D}})$.
Hence, $e'$ should end up in some of the ancestors of $\ensuremath{\mathcal{D}}$.
We insert $e'$ to $\ensuremath{par}(\ensuremath{\mathcal{D}})$.
$\ensuremath{par}(\ensuremath{\mathcal{D}})$ might also report $xy$ as a border edge and in
that case $e'$ is inserted to the grandparent of $\ensuremath{\mathcal{D}}$.
It is also possible that $e'$ will be reported as a parallel edge
in some of the ancestors of $\ensuremath{\mathcal{D}}$: in such a case an appropriate
directed parallelism is added to $P$.
Note that all the performed merges and edge insertions
are only used to make the graphs represented by the data structures
satisfy their definitions.
Fact~\ref{fac:minor} implies that the represented graphs
remain planar at all times.
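A toy Python rendering of the downward propagation follows (building on the \texttt{BorderedMerger} sketch from Section~\ref{sec:vids}; the dictionary \texttt{gamma} plays the role of $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}$, the $\ensuremath{\beta}$ pointers and the upward insertion of border edges are omitted):
\begin{verbatim}
# Toy sketch of the cascade: a merge performed in D is replayed, as a
# merge of border vertices, in every child that represents both
# endpoints; otherwise the child's copy is merely relabeled.
def cascaded_merge(D, children, gamma, u, v):
    """gamma[(i, x)]: the copy of vertex x inside children[i]."""
    kept = D.merge(u, v)                  # D picks the merge direction
    gone = u if kept == v else v          # the identifier that vanished
    for i, child in enumerate(children):
        x = gamma.pop((i, gone), None)    # child's copy of 'gone'
        if x is None:
            continue                      # child does not represent it
        y = gamma.get((i, kept))
        if y is None:
            gamma[(i, kept)] = x          # relabel: x now stands for kept
        else:                             # replay as a border merge and
            gamma[(i, kept)] = child.merge(x, y)  # record the survivor
    return kept
\end{verbatim}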
We now describe how the other operations are implemented.
To compute $u,v\in V$ such that $\{u,v\}=\ensuremath{\textup{\texttt{vertices}}}(e)$, where $e\in E$,
we first use Lemma~\ref{lem:access-mapping} to compute
$u=\ensuremath{\phi}(u_0)$ and $v=\ensuremath{\phi}(v_0)$, where $u_0,v_0$ are the initial endpoints
of $e$.
Clearly, this takes $O(1)$ time.
To maintain the values $\ensuremath{\textup{\texttt{deg}}}(v)$ of each $v\in V$, we simply set
$\ensuremath{\textup{\texttt{deg}}}(s):=\ensuremath{\textup{\texttt{deg}}}(u)+\ensuremath{\textup{\texttt{deg}}}(v)-1$ after a call $(s,P,L):=\ensuremath{\textup{\texttt{contract}}}(e)$.
Additionally, for each directed parallelism $e_1\to e_2$ we decrease
$\ensuremath{\textup{\texttt{deg}}}(x)$ and $\ensuremath{\textup{\texttt{deg}}}(y)$ by one, where $\{x,y\}=\ensuremath{\textup{\texttt{vertices}}}(e_1)$.
\newcommand{\ensuremath{\mathcal{E}}}{\ensuremath{\mathcal{E}}}
For each $u\in V$ we maintain a doubly-linked list $\ensuremath{\mathcal{E}}(u)=\{\alpha(uv):uv\in \simp{E}\}$.
Additionally, for each $e\in \simp{E}$ we store the pointers to the two occurrences
of $e$ in the lists $\ensuremath{\mathcal{E}}(\cdot)$.
Again, after a call $(s,P,L):=\ensuremath{\textup{\texttt{contract}}}(e)$, where $e=uv$, we set $\ensuremath{\mathcal{E}}(s)$ to be a concatenation
of the lists $\ensuremath{\mathcal{E}}(u)$ and $\ensuremath{\mathcal{E}}(v)$.
Finally, we remove all the occurrences of edges $\{\alpha(e)\}\cup \{e_1:(e_1\to e_2)\in P\}$
from the lists $\ensuremath{\mathcal{E}}(\cdot)$.
Now, the implementation of the iterator $\ensuremath{\textup{\texttt{neighbors}}}(u)$ is easy,
as the endpoints not equal to $u$ of the edges in $\ensuremath{\mathcal{E}}(u)$ form exactly the set $N(u)$.
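The lists $\ensuremath{\mathcal{E}}(\cdot)$ need $O(1)$ concatenation and $O(1)$ deletion given a pointer to an occurrence; a hand-rolled doubly-linked list suffices, sketched below in Python (ours):
\begin{verbatim}
# Sketch of an E(u) list: O(1) push, O(1) remove given the node, and
# O(1) concatenation (used when contracting an edge).
class Node:
    __slots__ = ('val', 'prev', 'next')
    def __init__(self, val):
        self.val, self.prev, self.next = val, None, None

class EdgeList:
    def __init__(self):
        self.head = self.tail = None

    def push(self, val):
        n = Node(val)
        if self.tail:
            self.tail.next, n.prev = n, self.tail
        else:
            self.head = n
        self.tail = n
        return n                          # pointer stored with the edge

    def remove(self, n):
        if n.prev: n.prev.next = n.next
        else:      self.head = n.next
        if n.next: n.next.prev = n.prev
        else:      self.tail = n.prev

    def concat(self, other):              # E(s) := E(u) ++ E(v)
        if not other.head:
            return
        if self.tail:
            self.tail.next, other.head.prev = other.head, self.tail
        else:
            self.head = other.head
        self.tail = other.tail
        other.head = other.tail = None
\end{verbatim}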
\begin{lemma}\label{lem:easy-ops}
The operations $\ensuremath{\textup{\texttt{vertices}}}$, $\ensuremath{\textup{\texttt{deg}}}$ and $\ensuremath{\textup{\texttt{neighbors}}}$ run in $O(1)$ worst-case time.
\end{lemma}
To support the operation $\ensuremath{\textup{\texttt{edge}}}(u,v)$ in $O(1)$ time, we first turn all the
dictionaries $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}$ into hash tables with $O(1)$
expected update time and $O(1)$ worst-case query time \cite{Dietzfelbinger:1994}.
Our data structure thus ceases to be deterministic, but
we obtain a more efficient version of Lemma~\ref{lem:transl}
that allows us to compute the representation of a vertex
in a child data structure $\ensuremath{\mathcal{D}}'$ in $O(1)$ time.
By Lemma~\ref{lem:gedge}, the edge $uv$ can only be contained in
$\ensuremath{\mathcal{D}}(u)$ or $\ensuremath{\mathcal{D}}(v)$, whichever has the greater level.
Wlog. suppose $h(\ensuremath{\mathcal{D}}(u))\geq h(\ensuremath{\mathcal{D}}(v))$.
Again, by Lemma~\ref{lem:gedge}, $\ensuremath{\mathcal{D}}(u)$ is a descendant
of $\ensuremath{\mathcal{D}}(v)$.
Thus, we can find $v$ in $\ensuremath{\mathcal{D}}(u)$ by applying
Lemma~\ref{lem:transl} at most twice.
\begin{lemma}\label{lem:edge-op}
If the dictionaries $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}$ are implemented as hash tables,
the operation $\ensuremath{\textup{\texttt{edge}}}$ runs in $O(1)$ worst-case time.
\end{lemma}
\iffull
The following lemma summarizes the total time spent
on updating all the vertex merging data structures $\ensuremath{\Pi}$ and is proved
in Section~\ref{sec:time_proof}.
\fi
\begin{lemma}\label{lem:time}
The cost of all operations on the data structures $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$ is $O(n)$.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
To initialize our data structure, we initialize all the data structures $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$
and the auxiliary components. This takes $O(n)$ time.
The time needed to perform any sequence of operations $\ensuremath{\textup{\texttt{contract}}}$ is proportional
to the total time used by the data structures $\ensuremath{\Pi}$, as the cost of maintaining
the auxiliary components can be charged to the operations performed by
the individual structures of $\ensuremath{\Pi}$.
By Lemma~\ref{lem:time}, this time is $O(n)$.
If the dictionaries $\ensuremath{\gamma}_\ensuremath{\mathcal{D}}$ are implemented as hash tables, this
bound is valid only in expectation.
By combining the above with Lemmas~\ref{lem:easy-ops}~and~\ref{lem:edge-op}, the theorem follows.
\end{proof}
\iffull
\subsection{Running Time Analysis}\label{sec:time_proof}
To bound the operating time of our data structure, we need to analyze,
for any $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$ and any sequence
of edge contractions, the number of changes to $E(G_\ensuremath{\mathcal{D}})$
that result in a costly operation of inserting an edge connecting
non-adjacent vertices into the underlying bordered vertex merging
data structure $\ensuremath{\mathcal{D}}$.
\begin{lemma}\label{lem:total_ins}
Let $\ensuremath{\mathcal{D}}\in\ensuremath{\Pi}$.
After the initialization of $\ensuremath{\mathcal{D}}$, only
the edges initially contained in the graphs represented by the descendants of $\ensuremath{\mathcal{D}}$
might be inserted into $\ensuremath{\mathcal{D}}$, each at most once.
\end{lemma}
\begin{proof}
Note that whenever we report a border edge in $\ensuremath{\mathcal{D}}$, we insert it to $\ensuremath{par}(\ensuremath{\mathcal{D}})$.
\end{proof}
Consider some sequence $S$ of $k$ edge contractions on $G$.
Let $G_i=(V_i,E_i)$ (for $i=0,1,\ldots,k$) be the graph $G$ after $i$
contractions.
Denote by $u_i,v_i\in V_{i-1}$ the vertices involved in the $i$-th
contraction, and by $s_i\in V_i$ the vertex of $G_i$ obtained as a result
of the $i$-th contraction.
We have $V_i=V_{i-1}\setminus\{u_i,v_i\}\cup\{s_i\}$.
Moreover, let $\ensuremath{\phi}_i:V_0\to V_i$ be the mapping $\ensuremath{\phi}$ after $i$ contractions of $S$.
Denote by $\simp{G_i}$ the graph $\simp{G}$ after $i$ contractions.
Let $W\subseteq V_0$. For $i>0$, we define the set $\Delta_i^W\subseteq E(\simp{G_i})$ of ``new'' edges
appearing in the induced subgraph $\simp{G}[\ensuremath{\phi}(W)]$ as a result
of the $i$-th contraction, in the following sense.
An edge $s_iy_i\in E(\simp{G_i})$ is included in $\Delta_i^W$
iff $\{s_i,y_i\}\subseteq \ensuremath{\phi}_i(W)$ and:
\begin{itemize}
\item either $u_i\notin \ensuremath{\phi}_{i-1}(W)$ or $u_iy_i\notin E(\simp{G_{i-1}})$,
\item either $v_i\notin \ensuremath{\phi}_{i-1}(W)$ or $v_iy_i\notin E(\simp{G_{i-1}})$.
\end{itemize}
Note that this definition implies $y_i\in \ensuremath{\phi}_{i-1}(W)$ and $|\{u_i,v_i\}\cap \ensuremath{\phi}_{i-1}(W)|=1$.
Define
$\Psi_W=|\Delta_1^W|+|\Delta_2^W|+\ldots+|\Delta_k^W|.$
\begin{corollary}\label{cor:psi_meaning}
We have $\Psi_W=\sum_{i=1}^k d^W_i$, where $d^W_i$
is the number of edges that should be added to $\simp{G_{i-1}}[\ensuremath{\phi}_{i-1}(W)]$,
after possibly performing a contraction of an edge $u_iv_i$ in $\simp{G_{i-1}}$
(if $\{u_i,v_i\}\cap\ensuremath{\phi}_{i-1}(W)\neq\emptyset$), in order to obtain $\simp{G_i}[\ensuremath{\phi}_i(W)]$.
\end{corollary}
\begin{lemma}\label{lem:subg_bound}
For any $W\subseteq V_0$, $\Psi_W=O(|W|)$.
\end{lemma}
\begin{proof}
Fix some plane embedding of $G_0$. We define semi-strict versions $G_0^W,G_1^W,\ldots,G_k^W$
of the graphs $G_0[\ensuremath{\phi}_0(W)],G_1[\ensuremath{\phi}_1(W)],\ldots,G_k[\ensuremath{\phi}_k(W)]$, respectively, so that:
\begin{itemize}
\item $G_0^W=G_0[\ensuremath{\phi}(W)]$. Recall that $G_0$ is simple, and thus its subgraph $G_0[\ensuremath{\phi}(W)]$
is also simple and in particular semi-strict.
\item If $\{u_i,v_i\}\cap \ensuremath{\phi}_{i-1}(W)=\emptyset$, then $G_i^W=G_{i-1}^W$.
\item If $\{u_i,v_i\}\subseteq \ensuremath{\phi}_{i-1}(W)$, then we obtain $G_i^W$ from $G_{i-1}^W$
by first contracting $u_iv_i$.
For any triangular face $f=u_iv_ix_i$ of $G_{i-1}^W$ (there can be between $0$ and $2$ such faces),
the contraction introduces a face $f'=s_ix_i$ of length $2$.
We remove one of these edges $s_ix_i$ from $G_i^W$ so that the face $f'$ is merged with
any neighboring face
and $G_i^W$ is semi-strict.
\item If $|\{u_i,v_i\}\cap \ensuremath{\phi}_{i-1}(W)|=1$, suppose wlog. that $u_i\in \ensuremath{\phi}_{i-1}(W)$
(the case $v_i\in\ensuremath{\phi}_{i-1}(W)$ is symmetrical).
Pick a maximal pairwise non-parallel subset $F_i$ of the edges $v_ib_i$ of $G_{i-1}$
such that $b_i\in\ensuremath{\phi}_{i-1}(W)$ and $u_ib_i\notin E(G_{i-1}^W)$.
Let $G_i^W$ be obtained from the following subgraph of $G_{i-1}$:
$$X_i=(\ensuremath{\phi}_{i-1}(W)\cup\{v_i\},E(G_{i-1}^W)\cup\{u_iv_i\}\cup F_i)$$
by contraction of $u_iv_i$ (which merges vertices $u_i,v_i$ into $s_i$).
Observe that, by definition of $X_i$ the contraction of $u_iv_i$ in $X_i$ does not
introduce parallel edges and as a result $G_i^W$ is semi-strict.
\end{itemize}
The graphs $G_i^W$ are defined in such a way that for any $x,y\in \ensuremath{\phi}_i(W)$,
$xy\in E(\simp{G_i})$ if and only if $xy\in E(G_i^W)$.
As a result, we have $\Delta^W_i\subseteq E(G^W_i)\setminus E(G^W_{i-1})$.
It is thus sufficient to prove
$$\Psi_W\leq |E(G^W_1)\setminus E(G^W_{0})|+|E(G^W_2)\setminus E(G^W_{1})|+\ldots+|E(G^W_k)\setminus E(G^W_{k-1})|=O(|W|).$$
As each $G^W_{i}$ is semi-strict, $|E(G^W_i)|\leq 3|V(G^W_i)|=3|\ensuremath{\phi}_i(W)|\leq 3|W|$.
Moreover, as any contraction in a semi-strict graph decreases the number
of edges by at most $3$, $|E(G^W_{i-1})\setminus E(G^W_i)|\leq 3$.
In fact, by the definition of $G^W_i$, we have
$E(G^W_{i-1})\not\subseteq E(G^W_i)$ if and only if $\{u_i,v_i\}\subseteq\ensuremath{\phi}_{i-1}(W)$,
i.e., when $|V(G^W_i)|<|V(G^W_{i-1})|$.
This may happen for at most $|W|$ values of $i$, as $|V(G^W_0)|=|W|$.
Denote the set of these values $i$ as $I$.
We have
\begin{align*}
\Psi_W &\leq\sum_{i=1}^k|E(G^W_i)\setminus E(G^W_{i-1})|\\
&=\sum_{i=1}^k|E(G^W_i)\setminus (E(G^W_i)\cap E(G^W_{i-1}))|
=\sum_{i=1}^k|E(G^W_i)|-|E(G^W_i)\cap E(G^W_{i-1})|\\
&\leq \sum_{i=1}^k|E(G^W_i)|-\sum_{i\in\{1,\ldots,k\}\setminus I}|E(G^W_{i-1})|-\sum_{i\in I}(|E(G^W_{i-1})|-3)\\
&=|E(G^W_k)|-|E(G^W_0)|+3|I|
\leq 6|W|=O(|W|).\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:time}]
Recall that by Lemma~\ref{lem:brute_repr}, the cost of any sequence of operations
on $\ensuremath{\mathcal{D}}\in \{\ensuremath{\pi}\}\cup\{\ensuremath{\pi}_i:P_i\in\ensuremath{\mathcal{R}}\}$ is
$O((|V_\ensuremath{\mathcal{D}}|+f_\ensuremath{\mathcal{D}})\log^2{|V_\ensuremath{\mathcal{D}}|}+m_\ensuremath{\mathcal{D}})$, where
$m_\ensuremath{\mathcal{D}}$ is the total number of times an edge is inserted into $\ensuremath{\mathcal{D}}$
and $f_\ensuremath{\mathcal{D}}$ is the number of insertions connecting non-adjacent vertices.
By Lemma~\ref{lem:total_ins}, $m_\ensuremath{\pi}=O(|E_0|)$ and $m_{\ensuremath{\pi}_i}=O(|E(P_i)|)$.
By Corollary~\ref{cor:psi_meaning} and Lemma~\ref{lem:subg_bound},
$f_\ensuremath{\mathcal{D}}=\Psi_{V_\ensuremath{\mathcal{D}}}=O(|V_\ensuremath{\mathcal{D}}|)$.
We have $|V_\ensuremath{\pi}|=O(n/\log^2{n})$ and thus the cost of operating $\ensuremath{\pi}$ is $O(n)$.
Similarly, we have $|V_{\ensuremath{\pi}_i}|=O(\log^4{n}/\log^2{\log^4{n}})$
and the total cost of operating $O(n/\log^4{n})$ data structures $\ensuremath{\pi}_i$
is $O(n/\log^2{\log^4{n}}+\sum_i|E(P_i)|)=O(n)$.
By Lemma~\ref{lem:micro}, after $O(n)$ preprocessing, the total cost
of operating each $\ensuremath{\pi}_{i,j}$ is $O(|V(P_{i,j})|)$ and thus, summed
over all $i,j$, we again obtain $O(n)$ time.
\end{proof}
\fi
\section{Omitted Proofs}\label{a:omitted}
\uniquematching*
\input{proof_unique_matching}
\begin{restatable}{theorem}{vertds}
\label{thm:2vert_ds}
Let $G=(V,E)$ be a planar graph and let $n=|V|$.
There exists a deterministic data structure that maintains $G$ subject to edge deletions and can answer $2$-vertex connectivity queries in $O(1)$ time.
Its total update time is $O(n \log n)$.
\end{restatable}
\begin{proof}
The only bottleneck of the data structure of \cite{Giammarresi:96} is the following
subproblem (otherwise the total cost of the data structure is $O(n\log{n})$).
Suppose we delete an edge $e$ separating the faces $f_l,f_r$, $f_l\neq f_r$.
Denote by $C(f)$ the cycle bounding the face $f$.
We want to find the vertices of $C(f_l)\cap C(f_r)$ in the order they appear
on these cycles (the order is the same for both faces up to reversal).
In the data structure of \cite{Giammarresi:96}, the cycles $C(f_l)$ are
represented as doubly linked lists and thus they can be maintained in amortized
constant time under edge deletions (which correspond to face merges).
The set $C(f_l)\cap C(f_r)$ is computed by iterating through the shorter
bounding cycle (say $C(f_l)$) and checking for each $v\in C(f_l)$ whether
$v$ is adjacent to $f_r$.
This, in turn, is accomplished by storing for each vertex $v\in V$ the set
of neighboring faces in a balanced binary search tree.
Consequently $C(f_l)\cap C(f_r)$ is computed in $O(|C(f_l)|\log{n})$ time.
As we always iterate through the smaller of the cycles which are subsequently
joined, this gives us $O(n\log^2{n})$ total time for any sequence of edge deletions.
We now show how the step of computing $C(f_l)\cap C(f_r)$ in order can be sped
up to $O(|C(f_l)|)$.
This will make the whole data structure handle any sequence of updates in $O(n\log{n})$
total time.
To proceed, we need the notion of a \emph{face-vertex} graph of $G$, denoted by $\fv{G}$.
This is a plane embedded graph, which is constructed as follows.
First, embed a single vertex inside every face of $G$, thus obtaining a set of vertices $F$.
The vertex set of $\fv{G}$ is $V(G) \cup F$.
We call each element of $V(G)$ a \emph{v-vertex} and each element of $F$ an \emph{f-vertex}.
Now, consider each face of $G$ one by one.
For a face $f$ let $v_1, \ldots, v_k$ be the sequence of vertices on the boundary of $f$ (note that we may have $v_i = v_j$ for $i \neq j$).
Then for each $1 \leq i \leq k$, $\fv{G}$ has a single edge connecting vertex $v_i$ with the vertex embedded inside the face $f$.
No other edges are added to $\fv{G}$.
In particular, every edge of $\fv{G}$ connects an f-vertex and a v-vertex, so $\fv{G}$ is bipartite.
Also, we may have multiple edges between two vertices of $\fv{G}$, if the boundary of some face of $G$ goes through a vertex multiple times.
We build the data structure $D(H)$ of Theorem~\ref{thm:main} for
the graph $H=\fv{G}\cup\dual{G}$.
Clearly, this graph is planar.
The deletions of edge of $G$ are reflected in $H$ by contractions of edges
connecting the faces.
Note that for each $v\in C(f_l)\cap C(f_r)$ there exist edges $vf_l$
and $vf_r$ in $H$.
Thus, after merging $f_l$ and $f_r$ into a face $f$, we will have at least two edges
$vf$ in $H$ that have not been previously parallel.
Hence, some parallelism $e\to e'$, where $e=vf$, will be reported.
As a result, we obtain the set $C(f_l)\cap C(f_r)$ from the set of
parallel edges reported by $D(H)$ after the last contraction.
Note that $D(H)$ might also report some parallel edges connecting
two face-vertices of $H$; such edges are ignored.
The total time for obtaining all the sets $C(f_l)\cap C(f_r)$ is linear,
by Theorem~\ref{thm:main}.
However, recall that the data structure of \cite{Giammarresi:96} requires
the elements $C(f_l)\cap C(f_r)$ in order of their occurrences on the cycle bounding $f_l$
and unfortunately $D(H)$ does not give us this order.
That is why we also need to traverse $C(f_l)$ to obtain the order of $C(f_l)\cap C(f_r)$.
Hence, $O(|C(f_l)|)$ additional time is needed.
\end{proof}
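For illustration, the construction of $\fv{G}$ can be prototyped directly from the boundary walks of the faces. The following Python sketch is ours and is not part of the data structure; it only records the bipartite edge list of $\fv{G}$.
\begin{verbatim}
# Sketch (ours): build the face-vertex graph FV(G) from the faces of an
# embedding, each given as its boundary walk (a list of vertices, with
# repetitions if the boundary visits a vertex more than once). FV(G) is
# bipartite and its size is linear in |E(G)|.
def face_vertex_graph(faces):
    edges = []
    for i, boundary in enumerate(faces):
        for u in boundary:               # one edge per occurrence of u on f_i
            edges.append((('f', i), ('v', u)))
    f_vertices = [('f', i) for i in range(len(faces))]
    return f_vertices, edges
\end{verbatim}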
\begin{restatable}{theorem}{tedgeds}
\label{thm:3edge_ds}
Let $G=(V,E)$ be a planar graph and let $n=|V|$.
There exists a deterministic data structure that maintains $G$ subject to edge deletions and can answer $3$-edge connectivity queries in $O(1)$ time.
Its total update time is $O(n \log n)$.
\end{restatable}
\begin{proof}
The data structure of \cite{Giammarresi:96} maintains explicitly
the so-called \emph{cactus tree} which succinctly
describes the structure of $2$-edge-cuts in $G$.
The vertices of the cactus tree $T$ are the $3$-edge-components
of $G$ and the edge set of $T$ consists of edge-disjoint
simple cycles.
The core problem of the update procedure is deleting
an edge contained in a $3$-edge-connected component $C$.
When such an edge $e$ of $G$ is deleted, a vertex of the cactus tree
representing $C$ is possibly split
and some cycles of $T$ are updated;
certain pairs of cycles of the cactus tree
are merged, whereas some cycles get extended by a single edge.
Although not stated explicitly, the total cost of maintaining
the cactus tree $T$ in \cite{Giammarresi:96}, once we know
which pairs of cycles should be merged and which cycles should
be extended after a deletion,
is $O(n\log{n})$, as the total number of updates to $T$
is in fact linear for any sequence of edge deletions
(in each such case, the number of vertices of $T$ grows by at least $1$).
The most computationally demanding part of the procedure updating $T$
in \cite{Giammarresi:96} is deciding which pairs of cycles of $T$
should be merged and which cycles should be extended:
it might take as much as $\Theta(n\log^2{n})$ time for the entire sequence
of edge updates to $G$.
This problem is reduced in \cite{Giammarresi:96} to the following.
Let the deleted edge $e$ separate two faces $f_l$ and $f_r$ of $G$ ($f_l\neq f_r$).
We need to find the set of faces $f$ of $G$ such that
$f\notin\{f_l,f_r\}$ and $f$ neighbors both $f_l$ and $f_r$
before the edge deletion, or, in other words, $\{f_lf,f_rf\}\subseteq E(\dual{G})$.
Note that after contracting $\dual{e}$ in $\dual{G}$ (which identifies the faces
$f_l$ and $f_r$), all such pairs
of edges constitute pairs of parallel edges of $\dual{G}$ that have
not been previously parallel.
Recall that if $\dual{G}$ was maintained using
the data structure of Theorem~\ref{thm:main},
a directed parallelism $e_1\to e_2$ or $e_2\to e_1$, where $e_1=f_lf \land e_2=f_rf$
would have been reported immediately after contracting $\dual{e}$.
Consequently, we can solve this subproblem in $O(n)$ total time
by maintaining the graph $\dual{G}$ under contractions using
the data structure of Theorem~\ref{thm:main}.
As all other subproblems of the update procedure of \cite{Giammarresi:96} are
solved in $O(n\log{n})$ total time, the theorem follows.
\end{proof}
\begin{lemma}\label{lem:maximal-edge-connected}
Let $k\geq 2$.
Suppose there exists a data structure $D_k$ maintaining a planar
graph $H$ under edge contractions and reporting edges of $H$ participating
in some cycle of length $i$, $i\leq k$, in an online manner.
Denote by $O(f_k(m))$ the total time needed by $D_k$ to execute any sequence of contractions on a graph on $m$ edges.
Then, there exists an algorithm computing the maximal
$(k+1)$-edge-connected subgraphs of a planar graph
$G$ in $O(f_k(m))$ time.
\end{lemma}
\input{proof_maximal}
\maxedgecon*
\begin{proof}
Recall that, by Theorem~\ref{thm:main}, we can maintain
any planar graph $H$ under edge contractions in linear total time,
so that the edges participating in $2$-cycles, i.e., parallel edges,
are reported in an online fashion.
To finish the proof, we apply Lemma~\ref{lem:maximal-edge-connected}.
\end{proof}
\section{Applications}\label{sec:overview}
\subparagraph{Decremental Edge- and Vertex-Connectivity.}
In the \emph{decremental $k$-edge ($k$-vertex) connectivity} problem, the goal is to design a data
structure that supports queries about the existence of $k$ edge-disjoint (vertex-disjoint)
paths between a pair of given vertices, subject to edge deletions.
We obtain improved algorithms for decremental 2-edge-, 2-vertex- and 3-edge-connectivity
in dynamic planar graphs.
For decremental 2-edge-connectivity we obtain an optimal data structure with both
updates and queries supported in amortized $O(1)$ time.
In the case of 2-vertex- and 3-edge-connectivity, we achieve the amortized update time
of $O(\log{n})$, whereas the query time is constant.
For all these problems, we improve upon the 20-year-old update bounds by Giammarresi and Italiano
\cite{Giammarresi:96} by a factor of $O(\log{n})$.
\begin{theorem}\label{thm:2edge_ds}
Let $G=(V,E)$ be a planar graph and let $n=|V|$.
There exists a deterministic data structure that maintains $G$ subject to edge deletions and can answer $2$-edge connectivity queries in $O(1)$ time.
Its total update time is $O(n)$.
\end{theorem}
\begin{proof}
Denote by $G_0$ the initial graph.
Suppose wlog. that $G_0$ is connected.
Let $B(G)$ be the set of all bridges of $G$.
Note that two vertices $u,v$ are in the same
2-edge-connected component of $G$ iff they
are in the same connected component of the graph $(V,E\setminus B(G))$.
Observe that if $e$ is a bridge, then deleting $e$ from $G$ does not
influence the 2-edge-connected components of $G$.
Hence, when a bridge $e$ is deleted, we may ignore this deletion.
We denote by $G'$ the graph obtained from $G_0$ by the same sequence of deletions
as $G$, but ignoring the bridge deletions.
This way, $G'$ is connected at all times and the 2-edge-connected components
of $G'$ and $G$ are the same.
It is also easy to see that $E(G)\setminus B(G)=E(G')\setminus B(G')$
and $B(G)=B(G')\cap E(G)$.
Moreover, the set $E(G')$ shrinks in time whereas $B(G')$ only grows.
First we show how the set $B(G')$ is maintained.
Recall that $e\in E(G')$ is a bridge of $G'$~iff
$\dual{e}$ is a self-loop of $\dual{G'}$.
We build the data structure of Theorem~\ref{thm:main}
for $\dual{G'}$, which initially equals $\dual{G_0}$.
As deleting a non-bridge edge $e$ of $G'$ translates to a contraction
of a non-loop edge $\dual{e}$ in $\dual{G'}$, we can maintain
$B(G')$ in $O(n)$ total time by detecting self-loops in $\dual{G'}$.
Denote by $H$ the graph $(V,E(G')\setminus B(G'))$.
To support 2-edge connectivity queries,
we maintain the graph $H$ with the decremental connectivity data structure
of Łącki and Sankowski \cite{Lacki:2015}. This data structure maintains
a planar graph subject to edge deletions in linear total time and
supports connectivity queries in $O(1)$ time.
When an edge $e$ is deleted from $G$, we first check whether it
is a bridge and if so, we do nothing.
If $e$ is not a bridge, the set $E(G')$ shrinks and thus we remove the edge $e$ from $H$.
The deletion of $e$ might cause the set $B(G')$ to grow.
Any new edge of $B(G')$ is also removed from $H$ afterwards.
To conclude, note that each 2-edge connectivity query on $G$ translates
to a single connectivity query in $H$.
All the maintained data structures have $O(n)$ total update~time.
\end{proof}
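For intuition, the dual-side bookkeeping used in this proof can be prototyped with a union-find structure over the faces. The sketch below is ours (all names included); note that it only tests bridge status at deletion time and does not by itself report newly created self-loops online, which is precisely what the data structure of Theorem~\ref{thm:main} provides.
\begin{verbatim}
# Sketch (ours): deleting a non-bridge edge of G' contracts its dual edge,
# i.e., merges its two faces; an edge is a bridge exactly when its two
# faces already coincide (its dual has become a self-loop).
class DualFaces:
    def __init__(self, faces_of_edge):        # edge id -> (left face, right face)
        self.face = dict(faces_of_edge)
        self.parent = {f: f for fs in self.face.values() for f in fs}

    def find(self, f):                        # union-find with path halving
        while self.parent[f] != f:
            self.parent[f] = self.parent[self.parent[f]]
            f = self.parent[f]
        return f

    def delete_edge(self, eid):
        """Return True iff eid is currently a bridge (deletion then ignored)."""
        fl, fr = map(self.find, self.face.pop(eid))
        if fl == fr:
            return True
        self.parent[fl] = fr                  # contract the dual edge
        return False
\end{verbatim}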
As an almost immediate consequence of Theorem~\ref{thm:2edge_ds} we improve upon \cite{Gabow:2001}
and obtain an optimal
algorithm for the \emph{unique perfect matching} problem when restricted to planar graphs.
\iffull
The details can be found in Appendix~\ref{a:omitted}.
\fi
\ifshort
\vspace{-2mm}
\fi
\iffull
\begin{restatable}{cor}{uniquematching}
Given a planar graph $G=(V,E)$ with $n=|V|$, in $O(n)$ time we can find a unique perfect matching
of $G$ or detect that the number of perfect matchings in $G$ is not $1$.
\end{restatable}
\fi
\ifshort
\begin{corollary}
Given a planar graph $G=(V,E)$ with $n=|V|$, in $O(n)$ time we can find a unique perfect matching
of $G$ or detect that the number of perfect matchings in $G$ is not $1$.
\end{corollary}
\fi
To obtain improved bounds for $2$-vertex connectivity and $3$-edge connectivity we use the data structure of Theorem~\ref{thm:main} to remove bottlenecks in the existing algorithms by Giammarresi and Italiano~\cite{Giammarresi:96}.
\iffull
The details are deferred to Appendix~\ref{a:omitted}.
\fi
\begin{theorem}\label{thm:2vert3edge}
Let $G=(V,E)$ be a planar graph and let $n=|V|$.
There exists a deterministic data structure that maintains $G$ subject to edge deletions and can answer
$2$-vertex connectivity and $3$-edge connectivity queries in $O(1)$ time.
Its total update time is $O(n \log n)$.
\end{theorem}
\ifshort
\vspace{-2mm}
\fi
\subparagraph{Maximal 3-Edge-Connected Subgraphs.}
A $k$-edge-connected component of a graph $G$ is a maximal (w.r.t. inclusion) subset $S$ of vertices, such that each pair of vertices in $S$ is $k$-edge-connected.
However, if $k \geq 3$, in the subgraph of $G$ induced by $S$, some pairs of vertices may not be $k$-edge-connected (see~\cite{Chechik:2017} for an example).
Thus, for $k \geq 3$, maximal $k$-edge-connected subgraphs can be different from $k$-edge-connected components.
Very recently, Chechik et al.~\cite{Chechik:2017} showed how to compute maximal $k$-edge-connected subgraphs
in $O((m+n\log n)\sqrt{n}\,)$ time
for any constant $k$, or $O(m\sqrt{n}\,)$ time for $k=3$.
Using the results of~\cite{Giammarresi:96} one can compute maximal $3$-edge-connected subgraphs of a planar multigraph in $O(m+n \log n)$ time. Our new approach allows us to improve this to an optimal $O(m+n)$ time bound.
\iffull
\begin{restatable}{lem}{maxedgecon}
The maximal $3$-edge-connected subgraphs of a planar graph can be
computed in linear time.
\end{restatable}
\fi
\ifshort
\begin{lemma}
The maximal $3$-edge-connected subgraphs of a planar graph can be
computed in linear time.
\end{lemma}
\fi
\ifshort
\vspace{-4mm}
\fi
\iffull
\newpage
\fi
\subparagraph{Simple Linear-Time Algorithms.} Finally, we present two examples showing
that Theorem~\ref{thm:main} might be a useful
black-box in designing linear time algorithms for planar graphs.
\ifshort
The details and the relevant pseudocode can be found in the full version of this paper.
\fi
\iffull
\begin{wrapfigure}[10]{r}{0.4\textwidth}
\vspace{0.5em}
\includegraphics[width=0.25\textwidth]{degree5}
\caption{The degree $\le 5$ vertex
and its two independent neighbors may be colored using the remaining two colors.
\label{fig:5col}
}
\end{wrapfigure}
\fi
\ifshort
\begin{wrapfigure}[9]{r}{0.4\textwidth}
\includegraphics[width=0.25\textwidth]{degree5}
\caption{The degree $\le 5$ vertex
and its two independent neighbors may be colored using the remaining two colors.
\label{fig:5col}
}
\end{wrapfigure}
\fi
\ifshort
\vspace{-2mm}
\fi
\begin{example}\label{ex:5col}
Every planar graph $G$ can be $5$-colored in expected linear time.
\end{example}
\iffull
\fi
\input{proof5col}
\ifshort
\vspace{-2mm}
\fi
\begin{example}
An MST of a planar graph $G$ can be computed in linear time.
\end{example}
\iffull
\input{proofMST}
\fi
\section{Preliminaries}\label{sec:preliminaries}
Throughout the paper we use the term \emph{graph} to denote an undirected \emph{multigraph}, that is we allow the graphs to have parallel edges and self-loops.
Formally, each edge $e$ of such a graph is a pair $(\{u,w\},\ensuremath{\mathrm{id}}(e))$ consisting of a pair of vertices and a unique identifier used to distinguish between the parallel edges.
For simplicity, we omit the identifier and use just $uw$ to denote one of the edges connecting vertices $u$ and $w$.
If the graph contains no parallel edges and no self-loops, we call it \emph{simple}.
For any graph $G$, we denote by $V(G)$ and $E(G)$ the sets of vertices and edges
of $G$, respectively.
A graph $G'$ is called a subgraph of $G$ if $V(G')\subseteq V(G)$ and $E(G')\subseteq E(G)$.
We define $G_1\cup G_2=(V(G_1)\cup V(G_2),E(G_1)\cup E(G_2))$ and
$G_1\setminus G_2=(V(G_1),E(G_1)\setminus E(G_2))$.
For $S\subseteq V(G)$, we denote by $G[S]$ the \emph{induced subgraph} $(S,\{uv: uv\in E(G), \{u,v\}\subseteq S\})$.
For a vertex $v\in V$, we define $N(v)=\{u:uv\in E, u\neq v\}$ to be the \emph{neighbor set} of $v$.
A \emph{cycle} of a graph $G$ is a nonempty set $C \subseteq E(G)$, such that for some ordering of edges $C = \{u_1w_1, \ldots, u_kw_k\}$, we have $w_i = u_{i+1}$ for $1 \leq i < k$ and $w_k = u_1$, and the vertices $u_1,\ldots,u_k$ are distinct.
The \emph{length} of a cycle $C$ is simply $|C|$.
Note that this definition allows cycles of length $1$ (self-loop) or $2$ (a pair of parallel edges),
but does not allow non-simple cycles of length $3$ or more.
A \emph{cut} is a minimal (w.r.t. inclusion) set $C \subseteq E(G)$, such that $G \setminus C$ has more connected components than $G$.
Let $G=(V,E)$ be a graph and $xy=e \in E$.
We use $\remove{G}{e}$ to denote the graph obtained from $G$ by removing $e$
and $\contr{G}{e}$ to denote the graph obtained by contracting an edge $e$
(in the case of a contraction $e$ may not be a self-loop, i.e., $x\neq y$).
We will often look at contraction from the following perspective:
as a result of contracting $e$, all edge endpoints equal to $x$ or $y$ are replaced
with some new vertex $z$.
In some cases it is convenient to assume $z\in \{x,y\}$.
This yields a 1-to-1 correspondence between the edges of $\remove{G}{e}$ and the edges of $\contr{G}{e}$.
Formally, we assume that the contraction preserves the edge identifiers,
i.e., $e_1\in E(\remove{G}{e})$ and $e_2\in E(\contr{G}{e})$ are corresponding
if and only if $\ensuremath{\mathrm{id}}(e_1)=\ensuremath{\mathrm{id}}(e_2)$.
Note that contracting an edge may introduce parallel edges and self-loops.
Namely, for each edge that is parallel to $e$ in $G$, there is a self-loop in $\contr{G}{e}$. And for each cycle of length $3$ that contains $e$ in $G$, there is a pair of parallel edges in $G/e$.
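For concreteness, the contract-and-report interface implemented by our data structure can be specified by the following naive reference (a sketch of ours, not the actual structure): it rescans all edges after each contraction and is therefore quadratic in the worst case, whereas Theorem~\ref{thm:main} achieves $O(n)$ total time for planar graphs.
\begin{verbatim}
# Sketch (ours): maintain a multigraph under contractions; after each
# contraction report the edge ids that became self-loops or that became
# parallel to some other edge for the first time.
class ContractedGraph:
    def __init__(self, edges):                    # edges: list of (u, v)
        self.edges = dict(enumerate(edges))       # id -> (u, v)
        self.parent = {x: x for e in edges for x in e}
        self.loops, self.parallel = set(), set()  # ids already reported

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def contract(self, eid):
        u, v = map(self.find, self.edges.pop(eid))
        assert u != v, "only non-loop edges may be contracted"
        self.parent[u] = v
        new_loops, new_parallel, rep = [], [], {}
        for i, (a, b) in self.edges.items():
            ra, rb = self.find(a), self.find(b)
            if ra == rb:                          # edge i is now a self-loop
                if i not in self.loops:
                    self.loops.add(i)
                    new_loops.append(i)
                continue
            key = frozenset((ra, rb))
            if key in rep:                        # i is parallel to rep[key]
                for j in (rep[key], i):
                    if j not in self.parallel:
                        self.parallel.add(j)
                        new_parallel.append(j)
            else:
                rep[key] = i
        return new_loops, new_parallel
\end{verbatim}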
\ifshort
\vspace{-2mm}
\fi
\subparagraph{Planar graphs.}
An embedding of a planar graph is a mapping of its vertices to distinct points
and edges to non-crossing curves in the plane.
We say that a planar graph $G$ is \emph{plane},
if some embedding of $G$ is assumed.
A face of a connected plane graph $G$ is a maximal open connected set of points
not in the image of any vertex or edge in the embedding of~$G$.
\ifshort
\vspace{-2mm}
\fi
\iffull
\subparagraph{Semi-strictness.}
\begin{wrapfigure}[6]{r}{0.37\textwidth}
\vspace{-2em}
\includegraphics[width=0.34\textwidth]{quasi-simple}
\caption{A semi-strict graph with $6$ vertices and $5$ faces.\label{fig:quasi}}
\end{wrapfigure}
We say that a connected plane graph $G$ is \emph{semi-strict} \cite{Klein:book} if each
of its faces has a boundary of at least $3$ edges (see Figure~\ref{fig:quasi}).
We can obtain a \emph{maximal semi-strict subgraph} of a plane embedded multigraph $H$ as follows:
for each set $P$ of parallel edges $xy$ of $H$ such that they form a contiguous
fragment of the edge rings of both $x$ and $y$, remove from $H$ all edges of $P$ except one.
\begin{fact}\label{f:factor3}
A semi-strict plane graph $G$ with $n$ vertices has at most $3n-6$ edges.
\end{fact}
\begin{proof}
We note that each face of $G$ has at least $3$ edges and apply Euler's formula.
\end{proof}
\fi
\subparagraph{Duality.}
Let $G$ be a plane graph. We denote by $\dual{G}$ the dual graph of $G$.
Each edge of $G$ naturally corresponds to an edge of $\dual{G}$.
We denote by $\dual{e}$ the edge of $\dual{G}$ that corresponds to $e \in E(G)$.
More generally, if $E_1 \subseteq E(G)$ is a set of edges of $G$, we set $\dual{E_1}=\{ \dual{e} | e \in E_1\}$.
We exploit the following relations between $G$ and $\dual{G}$.
Deleting an edge $e$ of $G$ corresponds to contracting the edge $\dual{e}$ in $\dual{G}$, that is $\dual{(\remove{G}{e})} = \contr{\dual{G}}{\dual{e}}$.
Moreover, $C\subseteq E$ is a cut in $G$ iff $\dual{C}$ is a cycle in $\dual{G}$.
In particular, a bridge $e$ in $G$ corresponds to a self-loop in $\dual{G}$ and a two-edge cut in $G$ corresponds to a pair of parallel edges in $\dual{G}$.
\ifshort
\vspace{-2mm}
\fi
\subparagraph{Planar graph partitions.}
Let $G$ be a simple planar graph.
Let a \emph{piece} be a subgraph of $G$ with no isolated vertices.
For a piece $P$, we denote by $\bnd{P}$ the set of vertices $v\in V(P)$
such that $v$ is adjacent to some edge of $G$ that is not contained in $P$.
$\bnd{P}$ is also called the set of \emph{boundary vertices} of $P$.
An $r$-division $\ensuremath{\mathcal{R}}$ of $G$ is a partition of $G$ into $O(n/r)$
edge-disjoint pieces such that each piece $P\in\ensuremath{\mathcal{R}}$ has $O(r)$ vertices
and $O(\sqrt{r})$ boundary vertices.
For an $r$-division $\ensuremath{\mathcal{R}}$, we also denote by $\bnd{\ensuremath{\mathcal{R}}}$ the set $\bigcup_{P_i\in \ensuremath{\mathcal{R}}}\bnd{P_i}$.
Clearly, $|\bnd{\ensuremath{\mathcal{R}}}|=O(n/\sqrt{r})$.
\ifshort
\vspace{-2mm}
\fi
\begin{lemma}[\cite{Goodrich:95, Klein:13, Walderveen:2013}]\label{lem:rdiv}
An $r$-division of a planar graph $G$ can be computed in linear time.
\end{lemma}
\ifshort
\vspace{-2mm}
\fi
\section{Pseudocode of Linear Time Algorithms for 5-coloring and MST}\label{a:pseudocode}
\vspace{-4mm}
\SetProcNameSty{texttt}
\begin{function*}[h]
\DontPrintSemicolon
$(s,P,L)\leftarrow\ensuremath{\textup{\texttt{contract}}}(e)$\;
\For{$w\in \{s\}\cup \bigcup\{\ensuremath{\textup{\texttt{vertices}}}(e_1):(e_1\to e_2)\in P\}$} {
\If{$w\notin Q$ {\bf and} $\ensuremath{\textup{\texttt{deg}}}(w)\leq 5$}{
$Q\leftarrow Q\cup \{w\}$
}
}
\Return{$s$}
\caption{contract-and-update($e$)}
\end{function*}
\vspace{-4mm}
\begin{algorithm*}[H]
\SetAlgoFuncName{Algorithm}
\SetAlgoRefName{}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{A simple connected planar graph $G=(V,E)$ and a function $\ell:E\to\mathbb{R}$.}
\Output{A minimum spanning tree of $G$.}
$\ensuremath{\textup{\texttt{init}}}(G)$. Use $\ell$ to report directed parallelism, so that each time $e'\to e$ is reported,
we have $\ell(e')\geq \ell(e)$.\;
$Q\leftarrow \{v\in V: \ensuremath{\textup{\texttt{deg}}}(v)\leq 5\}$\;
$T\leftarrow \emptyset$\;
\While{$Q\neq\emptyset$}{
$u\leftarrow\textup{any element of } Q$\;
$Q\leftarrow Q\setminus\{u\}$\;
\If{$\ensuremath{\textup{\texttt{deg}}}(u)\geq 1$}{
$e\leftarrow \textup{an edge such that }(v,e)\in \ensuremath{\textup{\texttt{neighbors}}}(u)\textup{ and }\ell(e)\textup{ is minimal}$\;
$T\leftarrow T\cup\{e\}$\;
$\texttt{contract-and-update}(e)$\;
}
}
\Return{$T$}
\caption{MST of a planar graph}
\end{algorithm*}
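For testing purposes, the following plain-Python rendering of the same contraction strategy may be useful (our sketch, with our own names). Keeping only the lightest edge between two super-vertices mirrors resolving every reported parallelism $e'\to e$ in favor of the smaller weight $\ell(e)$; without the data structure of Theorem~\ref{thm:main} this reference runs in $O(n^2)$ in the worst case.
\begin{verbatim}
# Sketch (ours): MST by repeatedly contracting a lightest incident edge
# (correct by the cut rule). Assumes G is simple and connected; vertices
# are 0..n-1. Returns the total MST weight only.
def mst_by_contraction(n, weighted_edges):
    adj = [dict() for _ in range(n)]           # adj[u][v] = lightest weight
    for u, v, w in weighted_edges:
        if v not in adj[u] or w < adj[u][v]:
            adj[u][v] = adj[v][u] = w
    total, live = 0, set(range(n))
    while len(live) > 1:
        u = min(live, key=lambda x: len(adj[x]))   # stand-in for the deg<=5 queue
        v = min(adj[u], key=adj[u].get)            # lightest edge at u: cut rule
        total += adj[u][v]
        del adj[u][v], adj[v][u]
        for x, w in adj[u].items():                # contract u into v
            del adj[x][u]
            if x not in adj[v] or w < adj[v][x]:   # keep the lighter parallel edge
                adj[v][x] = adj[x][v] = w
        adj[u].clear()
        live.remove(u)
    return total
\end{verbatim}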
\begin{procedure*}[H]
\DontPrintSemicolon
\If{$Q=\emptyset$}{
\Return
}
$u\leftarrow\textup{any element of } Q$\;
$Q\leftarrow Q\setminus\{u\}$\;
$Z\leftarrow \ensuremath{\textup{\texttt{neighbors}}}(u)$\;
\If{$\ensuremath{\textup{\texttt{deg}}}(u)\geq 1$ {\bf and} $\ensuremath{\textup{\texttt{deg}}}(u)\leq 4$}{
$v\leftarrow\textup{any vertex of } Z$\;
$s\leftarrow\texttt{contract-and-update}(\ensuremath{\textup{\texttt{edge}}}(u,v))$\;
$\texttt{color()}$\;
$C[v]\leftarrow C[s]$\;
}
\ElseIf{$\ensuremath{\textup{\texttt{deg}}}(u)=5$}{
$x,y\leftarrow\textup{any two vertices of } Z\textup{ such that }\ensuremath{\textup{\texttt{edge}}}(x,y)=\ensuremath{\textbf{nil}}$\;
$s'\leftarrow\texttt{contract-and-update}(\ensuremath{\textup{\texttt{edge}}}(u,x))$\;
$s\leftarrow\texttt{contract-and-update}(\ensuremath{\textup{\texttt{edge}}}(s',y))$\;
$\texttt{color()}$\;
$C[x]\leftarrow C[s]$\;
$C[y]\leftarrow C[s]$\;
}
$C[u]\leftarrow\textup{ any color of }\left(\{1,2,3,4,5\}\setminus \{C[w]:w\in Z\}\right)$\;
\caption{color()}
\end{procedure*}
\begin{algorithm*}[H]
\SetAlgoFuncName{Algorithm}
\SetAlgoRefName{}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{A simple connected planar graph $G=(V,E)$.}
\Output{A 5-coloring $C$ of $G$.}
$\ensuremath{\textup{\texttt{init}}}(G)$\;
$Q\leftarrow \{v\in V: \ensuremath{\textup{\texttt{deg}}}(v)\leq 5\}$\;
$C\leftarrow \textup{ an array indexed with vertices with values from }\{1,2,3,4,5\}$\;
$\texttt{color()}$\;
\Return{$C$}
\caption{5-coloring of a planar graph}
\end{algorithm*}
\newcommand{\newsection}[1]{\setcounter{equation}{0}\setcounter{theorem}{0}
\section{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\title{On a model of evolution of subspecies}
\author{{
Rahul Roy and Hideki Tanemura}
\footnote{E-Mail: {\tt [email protected]} and {\tt [email protected]}}\\
{\it Indian Statistical Institute, New Delhi and Keio University, Yokohama}}
\date{}
\begin{document}
\maketitle
\begin{abstract}
Ben-Ari and Schinazi (2016) introduced a stochastic model to study `virus-like evolving population with high mutation rate'. This model is a birth and death model with an individual at birth being either a mutant with a random fitness parameter in $[0,1]$ or having one of the existing fitness parameters with uniform probability; whereas a death event removes the entire population of the least fit site. We change this to incorporate the notion of `survival of the fittest', by requiring that a non-mutant individual, at birth, has a fitness according to a preferential attachment mechanism, i.e., it has a fitness $f$ with a probability proportional to the size of the population of fitness $f$. Also death just removes one individual at the least fit site. This preferential attachment rule leads to a power law behaviour in the asymptotics, unlike the exponential behaviour obtained by Ben-Ari and Schinazi (2016).
\end{abstract}
\vspace{0.1in}
\noindent
{\bf Key words:} Markov chain, Random
walk, preferential attachment model.
\vspace{0.1in}
\noindent
{\bf AMS 2000 Subject Classification:} 60J10, 60F15, 92D15.
\section{Introduction}
We study a model of the evolution and survival of species subjected to birth, mutation and death. This model was introduced by Guiol, Machado and Schinazi (2010) and is similar to a model studied by Liggett and Schinazi (2009). It has been of recent interest because of its relation to the discrete evolution model of Bak and Sneppen (1993).
In the model studied by Guiol, Machado and Schinazi (2010), at each discrete time point, with probability $p$ or $1-p$ respectively, there is either a birth of an individual of the species or a death (in case there exists at least one surviving species). An individual at birth is accompanied by a fitness parameter $f$, which is chosen uniformly in $[0,1]$, while the death is always of the individual with the least fitness parameter. They exhibited a phase transition in this model, i.e., for $p > 1/2$, the size of the population, $L_n$, at time $n$ whose fitness is smaller than $f_c := (1-p)/p$ is a null recurrent Markov chain, while asymptotically, the proportion of the population with fitness level lying in $(a,b)\subseteq (f_c, 1)$ equals $p(b-a)$ almost surely.
In a subsequent paper Ben-Ari and Schinazi (2016) modified the above model to study a `virus-like evolving population with high mutation rate'. Here, as earlier, at each discrete time point, with probability $p$ or $1-p$ respectively, there is either a birth of an individual of the species or a death (in case there exists at least one surviving species) of the individual with the least fitness parameter.
The caveat here is that at death, the entire population of the least fit individuals is removed; while, at birth, the individual,
\begin{itemize}
\item[(i)] with probability $r$, is a mutant and has a fitness parameter $f$ uniformly at random in $[0,1]$, or
\item[(ii)] with probability $1-r$, has a fitness parameter chosen uniformly at random among the existing fitness parameters, thereby increasing the population at that fitness level by $1$.
\end{itemize}
For this model too, the authors exhibited a phase transition. In particular, assuming $pr > (1-p)$, for $f_c := (1-p)/pr$ the number of fitness levels lying in $(0, f_c)$ at time $n$ where individuals exist is a null recurrent Markov chain, while the fitness levels lying to the right of $f_c$ are asymptotically uniformly distributed in $(f_c, 1)$.
Here we propose a variant of the Ben-Ari, Schinazi model, a variant which we believe is closer to the Darwinian theory of the survival of the fittest.
To incorporate the Darwinian theory, we differ from the above model when a birth occurs which is not a mutant. Instead of the individual at birth having a fitness one of the existing fitness levels chosen uniformly at random, the newly born individual has a fitness $f$ which is chosen proportional to the size of the population of fitness $f$.
More particularly, suppose that at time $n$ there is a birth, which is not a mutant, and that there are $n_i$ individuals with fitness $f_i$ for $i = 1, \ldots , k$ and no other individuals elsewhere. The newly born individual has a fitness $f_j$ with a probability proportional to ${n_j}$ for $j = 1, \ldots, k$. Thus, at birth, an individual without mutation follows a preferential attachment rule akin to the Barab\'{a}si and Albert (1999) model.
Before we end this section we note that Schreiber (2001) and subsequently Bena\"{i}m,
Schreiber and Tarr\`{e}s (2004) study the question of random genetic drift and natural selection via urn models coupled with mean-field behaviour. Unlike our study, there is no spatial aspect of fitness in their model.
A formal set-up of this model is given in the next section, while in the last section we present some mean-field dynamics of the model.
\section{The model and statement of results}
We first present our model and state the results.
At time $0$ there is one individual at site $0$. At time $n$, there is either a birth or a death of an individual from the existing population with probability $p$ or $1-p$ respectively, where $p\in (0,1)$, and independent of any other random mechanism considered earlier.
\begin{enumerate}
\item[(P1)] In case of a birth, there are two possibilities.
\begin{itemize}
\item[(i)] with probability $r\in (0,1)$, a mutant is born and has a fitness parameter $f$ uniformly at random in $[0,1]$, or
\item[(ii)] with
probability $1-r$ the individual born has a fitness $f$ with a probability proportional to the number of individuals with fitness $f$ among the entire population present at that time. Here we have a caveat that, if there is no individual present at the time of birth, then the fitness of the individual is sampled uniformly in $[0,1]$.
\end{itemize}
\item[(P2)] In case of a death, an individual from the population at the site closest to $0$ is eliminated.
\end{enumerate}
Here and henceforth, a site represents a fitness level.
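The dynamics (P1)--(P2) above are straightforward to simulate directly; the following sketch (ours, with illustrative names) keeps the population as a sorted list of (fitness, count) pairs.
\begin{verbatim}
# Sketch (ours): direct simulation of (P1)-(P2).
import bisect, random

def simulate(n_steps, p, r, seed=0):
    rng = random.Random(seed)
    sites = []                               # sorted list of [fitness, count]
    pop = 0                                  # total number of individuals
    for _ in range(n_steps):
        if rng.random() < p:                 # birth
            if pop == 0 or rng.random() < r:     # (P1)(i): mutant
                bisect.insort(sites, [rng.random(), 1])
            else:                                # (P1)(ii): preferential
                k = rng.randrange(pop)           # a uniform individual ...
                for s in sites:                  # ... determines the site
                    k -= s[1]
                    if k < 0:
                        s[1] += 1
                        break
            pop += 1
        elif pop > 0:                        # (P2): death at the least fit site
            sites[0][1] -= 1
            pop -= 1
            if sites[0][1] == 0:
                sites.pop(0)
    return sites
\end{verbatim}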
Let $X_n = \{(k_i, x_i) : k_i \geq 1, x_i \in [0,1], i = 1, \ldots , l\}$, where the total population at time $n$ is divided in exactly $l$ sites $x_1, \ldots , x_l$, with the size of the population at site $x_i$ being exactly $k_i$. In case there is no individual present at time $n$ we take $X_n = \emptyset$.
The process $X_n$ is Markovian on the state space
\begin{align}
\mathbb{X}:=\{\emptyset\} \cup \left\{ \{ (k_x,x)\}_{x\in \Lambda} : k_x\in \mathbb{N},\ \Lambda\subset [0,1],\ \sharp \Lambda <\infty \right\},
\end{align}
where $\mathbb{N} = \{1,2,\dots\}$.
For a given $f\in (0,1)$, let $L_n^f$ denote the size of the population at time $n$ at sites in $[0,f]$,
$$
L_n^f := \sum \{ k_s : s\in [0,f] \text{ and } (k_s ,s) \in X_n \},
$$
$R_n^f$ denote the size of the population at time $n$ at sites in $(f,1]$,
$$
R_n^f := \sum \{ k_s : s\in (f,1] \text{ and } (k_s ,s) \in X_n \},
$$
and $N_n$ denote the size of the population at time $n$,
$$
N_n:= L_n^f+R_n^f.
$$
For a fixed $f \in (0,1)$, the pair $(L_n^f, R_n^f)$ is a Markov chain on $\mathbb{Z}_+\times\mathbb{Z}_+$, ($\mathbb{Z}_+=\mathbb{N} \cup \{0\}$) with transition probabilities given by
\vskip 3mm
(1-1) If $(L_n^f, R_n^f)=(0,0)$
\begin{equation}\label{TP11}
(L_{n+1}^f, R_{n+1}^f)=
\begin{cases}
(1,0) \quad &\mbox{w. p. $fp$}
\\
(0,1) &\mbox{w. p. $(1-f)p$}
\\
(0,0) &\mbox{w. p. $1-p$}
\end{cases}
\end{equation}
(1-2) If $(L_n^f, R_n^f)\in \{0\}\times \mathbb{N}$
\begin{equation}\label{TP2}
(L_{n+1}^f, R_{n+1}^f)=
\begin{cases}
(1,R_n^f) \quad &\mbox{w. p. $fpr$}
\\
(0,R_n^f+1) &\mbox{w. p. $(1-f)pr+p(1-r)$}
\\
(0,R_n^f-1) &\mbox{w. p. $1-p$}
\end{cases}
\end{equation}
(1-3) If $(L_n^f, R_n^f)\in \mathbb{N}\times \{0\}$
\begin{equation}\label{TP3}
(L_{n+1}^f, R_{n+1}^f)=
\begin{cases}
(L_n^f+1,0) \quad &\mbox{w. p. $fpr+p(1-r)$}
\\
(L_n^f,1) &\mbox{w. p. $(1-f)pr$}
\\
(L_n^f-1,0) &\mbox{w. p. $1-p$}
\end{cases}
\end{equation}
(1-4) If $(L_n^f, R_n^f)\in \mathbb{N}\times \mathbb{N}$
\begin{equation}\label{TP4}
(L_{n+1}^f, R_{n+1}^f)=
\begin{cases}
(L_n^f+1,R_n^f) \quad &\mbox{w. p. $\displaystyle{fpr+p(1-r)\frac{L_n^f}{N_n}}$}
\\
(L_n^f,R_n^f+1) &\mbox{w. p. $\displaystyle{(1-f)pr+p(1-r)\frac{R_n^f}{N_n}}$}
\\
(L_n^f-1,R_n^f) &\mbox{w. p. $1-p$}.
\end{cases}
\end{equation}
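The four cases above can be transcribed literally into a one-step sampler (a sketch of ours), which is convenient for exploring the phase transition at $f_c$ numerically.
\begin{verbatim}
# Sketch (ours): one step of the projected chain (L_n^f, R_n^f),
# implementing (1-1)-(1-4) by partitioning [0,1] for a single uniform u.
import random

def step(L, R, f, p, r, rng):
    u = rng.random()
    if L == 0 and R == 0:                                          # (1-1)
        return (1, 0) if u < f*p else ((0, 1) if u < p else (0, 0))
    if L == 0:                                                     # (1-2)
        return (1, R) if u < f*p*r else ((0, R+1) if u < p else (0, R-1))
    if R == 0:                                                     # (1-3)
        a = f*p*r + p*(1-r)
        return (L+1, 0) if u < a else ((L, 1) if u < p else (L-1, 0))
    a = f*p*r + p*(1-r)*L/(L+R)                                    # (1-4)
    return (L+1, R) if u < a else ((L, R+1) if u < p else (L-1, R))
\end{verbatim}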
The model exhibits a phase transition at a critical position $f_c$ defined as
\begin{equation}\label{fc}
f_c:=\frac{1-p}{pr}
\end{equation}
as given in the following theorem:
\begin{thm}\label{Th1}
\begin{enumerate}
\item[\rm (1)] In case $p\le 1-p$, the population dies out infinitely often a.s., in the sense that
\begin{align}
P(N_n = 0 \text{ for infinitely many } n) = 1
\end{align}
\item[\rm \rm (2)] In case $1-p < rp$, the size of the population goes to infinity as $n\to\infty$, and most of the population is distributed at sites in the interval $[f_c,1]$, in the sense that
\begin{align}
\label{Thm_12}
P(\lim_{n \to \infty} \frac{R_n^{f_c}}{N_n} = 1) = 1 \text{ and }
P(\liminf_{n \to \infty} \frac{R_n^{f_c} - R_n^f}{N_n} > 0) &= 1 \text{ for any } f > f_c.
\end{align}
\item[\rm (3)] In case $rp \le 1-p < p$, the size of the population goes to infinity as $n\to\infty$, and most of the population is concentrated at sites near $1$, in the sense that
\begin{align}
\label{Thm11}
P(\lim_{n\to \infty} N_n = \infty) = 1 \text{ and, for any }\varepsilon > 0, \; P(\lim_{n \to \infty} \frac{R_n^{1-\varepsilon}}{N_n} = 1) = 1.
\end{align}
\end{enumerate}
\end{thm}
Let $F_n(f)$ denote the empirical distribution of sites at time $n$, i.e.
$$
F_n(f) := \frac{\sharp \{s \in [0,f]: (k,s) \in X_n \text{ for some } k \geq 1\}}{\sharp\{s \in[0,1]: (k,s) \in X_n \text{ for some } k \geq 1\}},
$$
Then we have
\begin{cor}
\label{GC}
If $1-p < rp$ (i.e., $f_c < 1$), then
\begin{equation}
\label{GL1}
F_n(f) \to \frac{\max \{f-f_c, 0\}}{1-f_c}
\quad\mbox{ uniformly a.s.}
\end{equation}
\end{cor}
Let $S_n:= \sharp\{s \in[0,1]: (k,s) \in X_n \text{ for some } k \geq 1\}$ be the total number of sites at time $n$ among which the total population is distributed.
For a given $n,k,f$ let
$U_n^k(f):= \sharp\{s \in [f,1]: (k,s) \in X_n\}$
denote the number of sites in $[f,1]$ at time $n$ which have a population of size exactly $k$; clearly $S_n=\sum_{k}U_n^k(0)$.
Taking $U_n^k(f+) = \lim_{s \downarrow f} U_n^k(s)$, for $A \subseteq \mathbb{N}\times[0,1]$, define the empirical distribution of size and fitness on $\mathbb{N} \times [0,1]$ as
\begin{equation}
H_n(A):=
\begin{cases}
\frac{\sum_{(k,f)\in A}\left(U_n^k(f)-U_n^k(f+)\right)}{S_n},
&S_n>0,
\\
\delta_{(0,0)}(A),
&S_n=0.
\end{cases}
\end{equation}
\begin{thm}\label{Th2}
For $pr > 1-p$, as $n \to \infty$, $H_n$ converges weakly to a product measure on $\mathbb{N}\times [0,1]$ whose density is given by
\begin{align}
&p_k\frac{\mathbf{1}_{[f_c, 1]}(x)}{1-f_c}dx,
\quad (k,x)\in\mathbb{N}\times [0,1]
\nonumber
\\
&\text{ with }
p_k=\frac{(2p-1)r}{(1-r)(1-p)} B\left(1+\frac{(2p-1)r}{(1-r)(1-p)}, k\right) \text{ for } k\in\mathbb{N},
\end{align}
where $B(a,b)$ is the Beta function with parameter $a,b >0$.
\end{thm}
\begin{remark}
\label{powerlaw}
Since $B(s,k) = O(k^{-s})$ as $k\to\infty$,
the probability density $p_k$, $k\in\mathbb{N}$, has a finite $m$-th moment if and only if $r > 1- \frac{2p-1}{2p-1 +(1-p)m}$.
\end{remark}
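As a quick numeric sanity check (ours, not part of the proof), one can verify that the density of Theorem \ref{Th2} is normalized and exhibits the power-law tail $p_k\sim s\,\Gamma(1+s)\,k^{-(1+s)}$, where $s=\frac{(2p-1)r}{(1-r)(1-p)}$.
\begin{verbatim}
# Sketch (ours): evaluate p_k = s*B(1+s,k) via log-Gamma (to avoid
# overflow) and check normalization and the k^{-(1+s)} tail.
from math import exp, lgamma

def p_k(k, p, r):
    s = (2*p - 1)*r/((1 - r)*(1 - p))
    return s*exp(lgamma(1 + s) + lgamma(k) - lgamma(1 + s + k))

p, r = 0.8, 0.5                                     # here s = 3, so p_k ~ 18 k^{-4}
print(sum(p_k(k, p, r) for k in range(1, 10**5)))   # ~ 1.0
print(p_k(1000, p, r) * 1000**4)                    # ~ s*Gamma(1+s) = 18
\end{verbatim}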
For the model studied by Ben-Ari and Schinazi (2016), in case of a death, the entire population at the site of lowest fitness is removed unlike our condition (P2). Thus in their model, if $\widetilde{S}_n$ denotes the number of sites at time $n$ among which the total population is distributed, then
$\widetilde{S}_n$ is a Markov chain with spatially homogeneous transition probabilities given by
\begin{equation}
\label{r-bas}
\widetilde{S}_{n+1}=\begin{cases}
\widetilde{S}_n+1 \quad &\mbox{with probability $pr$,}
\\
\widetilde{S}_n &\mbox{with probability $p(1-r)$,}
\\
\widetilde{S}_n-1 &\mbox{with probability $1-p$,}
\end{cases}
\end{equation}
with reflecting boundary condition at $0$.
For a given $f\in (0,1)$, letting $\widetilde{S}^{f-}_n$ denote the number of sites at time $n$ in $[0,f]$,
and $\widetilde{S}^{f+}_n$ the number of sites in $(f,1]$,
the pair $(\widetilde{S}^{f-}_n, \widetilde{S}^{f+}_n)$ is a spatially homogeneous Markov chain on $\mathbb{Z}_+\times\mathbb{Z}_+$, where $\mathbb{Z}_+=\{0,1,2,\dots \}$:
(BAS-1) If $(\widetilde{S}^{f-}_n, \widetilde{S}^{f+}_n)=(0,0)$
\begin{equation}\label{BAS1}
(\widetilde{S}^{f-}_{n+1}, \widetilde{S}^{f+}_{n+1})=
\begin{cases}
(1,0) \quad &\mbox{w. p. $fp$}
\\
(0,1) &\mbox{w. p. $(1-f)p$}
\\
(0,0) &\mbox{w. p. $1-p$}
\end{cases}
\end{equation}
(BAS-2) If $(\widetilde{S}^{f-}_n, \widetilde{S}^{f+}_n)\in \{0\}\times \mathbb{N}$
\begin{equation}\label{BAS2}
(\widetilde{S}^{f-}_{n+1}, \widetilde{S}^{f+}_{n+1})=
\begin{cases}
(1,\widetilde{S}^{f+}_n) \quad &\mbox{w. p. $fpr$}
\\
(0,\widetilde{S}^{f+}_n+1) &\mbox{w. p. $(1-f)pr$}
\\
(0,\widetilde{S}^{f+}_n) &\mbox{w. p. $p(1-r)$}
\\
(0,\widetilde{S}^{f+}_n-1) &\mbox{w. p. $1-p$}
\end{cases}
\end{equation}
(BAS-3) If $(\widetilde{S}^{f-}_n, \widetilde{S}^{f+}_n)\in \mathbb{N}\times \{0\}$
\begin{equation}\label{BAS3}
(\widetilde{S}^{f-}_{n+1}, \widetilde{S}^{f+}_{n+1})=
\begin{cases}
(\widetilde{S}^{f-}_n+1,0) \quad &\mbox{w. p. $fpr$}
\\
(\widetilde{S}^{f-}_n,1) &\mbox{w. p. $(1-f)pr$}
\\
(\widetilde{S}^{f-}_n,0) \quad &\mbox{w. p. $p(1-r)$}
\\
(\widetilde{S}^{f-}_n-1,0) &\mbox{w. p. $1-p$}
\end{cases}
\end{equation}
(BAS-4) If $(\widetilde{S}^{f-}_n, \widetilde{S}^{f+}_n)\in \mathbb{N}\times\mathbb{N}$
\begin{equation}\label{BAS4}
(\widetilde{S}^{f-}_{n+1}, \widetilde{S}^{f+}_{n+1})=
\begin{cases}
(\widetilde{S}^{f-}_n+1,\widetilde{S}^{f+}_n) \quad &\mbox{w. p. $fpr$}
\\
(\widetilde{S}^{f-}_n,\widetilde{S}^{f+}_n) &\mbox{w. p. $p(1-r)$}
\\
(\widetilde{S}^{f-}_n, \widetilde{S}^{f+}_n+1) &\mbox{w. p. $(1-f)pr$}
\\
(\widetilde{S}^{f-}_n-1,\widetilde{S}^{f+}_n) &\mbox{w. p. $1-p$}
\end{cases}
\end{equation}
Also at birth, if the individual is not a mutant then the individual born has a fitness chosen uniformly at random among the fitnesses of the existing individuals at that time, unlike the preferential condition (P1)(ii) of our model. As such, the transition probabilities for this model are spatially homogeneous, while for our model, as is exemplified by (\ref{TP4}), the transition probabilities are not spatially homogeneous. Thus the equivalent result they have for Theorem \ref{Th2} has $p_k$ arising from a $\mathrm{Geom}\left(\frac{pr-(1-p)}{p-(1-p)}\right)$ distribution.
The power law phenomenon present in the study of preferential attachment graphs (see van der Hofstad (2017) Chapter 8) manifests itself in our model (as noted in Remark \ref{powerlaw}) through the Beta function in Theorem \ref{Th2}.
\section{Proof of Theorem \ref{Th1}}
As noted in Guiol, Machado and Schinazi (2010), for $p \leq 1-p$, i.e. when the death rate is at least the birth rate, the process $\{N_n : n\geq 0\}$ is equivalent to a random walk on the non-negative integers $\mathbb{Z}_+$ with non-positive drift and a holding at $0$ with probability $(1-p)$. Thus $N_n$ returns to $0$ infinitely often with probability $1$.
For $p > 1-p$, $\{N_n : n\geq 0\}$ is equivalent to a random walk on the non-negative integers $\mathbb{Z}_+$ with positive drift and thus $N_n \to \infty$ as $n \to \infty$ with probability $1$.
We now turn to the case $1-p < p$.
\begin{lem}\label{lemma4}
{\rm (1)}\quad Let $f_c=\frac{1-p}{rp}<1$.
\\
{\rm (i)}
For $f< f_c$ and for any $\eta\in (0,1)$ we have
\begin{equation}\label{key1}
P\left(\text{there exists }T>0 \mbox{ such that } \rho_n^f\equiv \frac{L_n^f}{N_n} \le \eta \text{ for all }n\ge T \right)=1,
\end{equation}
and
\begin{align}\label{key2}
P(L_n^f=0 \text{ infinitely often}) =1.
\end{align}
\noindent {\rm (ii)} Let $f> f_c$.
Then
\begin{align}\label{key3}
P(L_n^f=0 \text{ infinitely often}) = 0.
\end{align}
\noindent {\rm (2)}\quad Let $1\le f_c=\frac{1-p}{rp}<\frac{1}{r}$.
\\
{\rm (i)}
For $f< 1$ and for any $\eta\in (0,1)$ we have (\ref{key1})
and (\ref{key2}).
\noindent {\rm (ii)} Let $f=1$.
Then we have $(\ref{key3})$.
\end{lem}
\noindent {\it Proof.}
We prove two cases (1) and (2) together.
The idea of the proof is that, since for $f < f_c\wedge 1$ the quantity $R_n^f$ will be much larger than $L_n^f$, we stochastically bound the non-spatially homogeneous Markov chain with a boundary condition by a spatially homogeneous Markov chain with a boundary condition, and study the modified Markov chain. As such,
for $\varepsilon\in [0,1]$, we introduce a Markov chain $(L^{f}_n(\varepsilon), R^{f}_n(\varepsilon))$ with stationary transition probabilities given by
\vskip 3mm
(Ep-1) If $(L^{f}_n(\varepsilon), R^{f}_n(\varepsilon))=(0,0)$
\begin{equation}\label{TP1}
(L^{f}_{n+1}(\varepsilon), R^{f}_{n+1}(\varepsilon))=
\begin{cases}
(1,0) \quad &\mbox{w. p. $fp$}
\\
(0,1) &\mbox{w. p. $(1-f)p$}
\\
(0,0) &\mbox{w. p. $1-p$.}
\end{cases}
\end{equation}
(Ep-2) If $(L^{f}_n(\varepsilon), R^{f}_n(\varepsilon))\in \{0\}\times \mathbb{N}$
\begin{equation}\label{TP21}
(L^{f}_{n+1}(\varepsilon), R^{f}_{n+1}(\varepsilon))=
\begin{cases}
(1,R^{f}_n(\varepsilon)) \quad &\mbox{w. p. $fpr$}
\\
(0,R^{f}_n(\varepsilon)+1) &\mbox{w. p. $(1-f)pr+p(1-r)$}
\\
(0,R^{f}_n(\varepsilon)-1) &\mbox{w. p. $1-p$.}
\end{cases}
\end{equation}
(Ep-3) If $(L^{f}_n(\varepsilon), R^{f}_n(\varepsilon))\in \mathbb{N}\times \{0\}$
\begin{equation}\label{TP31}
(L^{f}_{n+1}(\varepsilon), R^{f}_{n+1}(\varepsilon))=
\begin{cases}
(L^{f}_n(\varepsilon)+1,0) \quad &\mbox{w. p. $fpr+p(1-r)$}
\\
(L^{f}_n(\varepsilon),1) &\mbox{w. p. $(1-f)pr$}
\\
(L^{f}_n(\varepsilon)-1,0) &\mbox{w. p. $1-p$.}
\end{cases}
\end{equation}
(Ep-4) If $(L^{f}_n(\varepsilon), R^{f}_n(\varepsilon))\in \mathbb{N}\times \mathbb{N}$
\begin{equation}\label{TP41}
(L^{f}_{n+1}(\varepsilon), R^{f}_{n+1}(\varepsilon))=
\begin{cases}
(L^{f}_n(\varepsilon)+1,R^{f}_n(\varepsilon)) \quad &\mbox{w. p. $fpr+p(1-r)\varepsilon$}
\\
(L^{f}_n(\varepsilon),R^{f}_n(\varepsilon)+1) &\mbox{w. p. $(1-f)pr+p(1-r)(1-\varepsilon)$}
\\
(L^{f}_n(\varepsilon)-1,R^{f}_n(\varepsilon)) &\mbox{w. p. $1-p$.}
\end{cases}
\end{equation}
For $\varepsilon\in [0,1]$, we couple the processes $\{(L^{f}_n(\varepsilon), R^{f}_n(\varepsilon)): n \geq 1\}$ such that
\begin{align}\label{couple}
&L^{f}_n (\varepsilon) \le L^{f}_n (\varepsilon'),
\qquad R^{f}_n (\varepsilon) \ge R^{f}_n (\varepsilon')
\quad \mbox{ for }\varepsilon \le \varepsilon' \text{ and all } n \geq 1.
\end{align}
Taking $L^{f}_n$, $R^{f}_n$ and $N_n$ as in Section 2 and $L^{f}_n (\cdot)$ and $R^{f}_n (\cdot)$ as above, we have, for
$\rho_n^f:=\frac{L^{f}_n}{N_n}$,
\begin{align}
&N_n(\varepsilon):=L^{f}_n(\varepsilon)+R^{f}_n(\varepsilon)=N_n \label{sumN}\\
& L^{f}_{n+1}=L^{f}_{n+1}\left(\rho_n^f\right),
\quad R^{f}_{n+1}=R^{f}_{n+1}\left(\rho_n^f\right),
\label{3:25}
\\
&L^{f}_n(0) \le L^{f}_{n} \le L^{f}_n(1), \quad
R^{f}_n(1) \le R^{f}_{n} \le R^{f}_n(0).
\label{3:27}
\end{align}
By the law of large numbers we have
\begin{align*}
\lim_{n\to\infty}\frac{L_n^f(\varepsilon)}{n}= \left[fpr+p(1-r)\varepsilon-1+p\right]_+,
\text{ and }
\lim_{n\to\infty}\frac{N_n}{n}= 2p-1, \text{ almost surely},
\end{align*}
and so, for $
\rho_n^f(\varepsilon):=\frac{L^{f}_n(\varepsilon)}{N_n}
$, we have
\begin{align}
\lim_{n\to\infty}
\rho_n^f(\varepsilon)
&=\left[\frac{fpr+p(1-r)\varepsilon -1+p}{2p-1}
\right]_+
\nonumber
\\ \label{LLN}
&=\left[\frac{fpr-1+p}{2p-1}+ \frac{p(1-r)\varepsilon}{2p-1}\right]_+ .
\end{align}
We introduce the linear function defined by
$$
h(x)=\frac{fpr-1+p}{2p-1}+\frac{p(1-r)}{2p-1}x.
$$
Note that $\frac{p(1-r)}{2p-1}>0$.
By a simple calculation we see that, for $f< f_c\wedge 1$,
$$
h(0)= \frac{fpr-1+p}{2p-1}<0 \quad\text{ and }\quad h(1)= 1- \frac{(1-f)pr}{2p-1} < 1.
$$
Then we may choose $\delta>0$ such that
\begin{equation}\label{h<x}
h_{\delta}(x) := h(x+\delta)< x, \quad x\in[0,1].
\end{equation}
Put
\begin{align*}
\Lambda(\varepsilon, \delta)
&=\bigg\{ \omega :
\text{there exists } N=N(\omega)\in\mathbb{N} \mbox{ such that for all $n\ge N$, }
\rho_n^f(\varepsilon) < h_{\delta}(\varepsilon)
\bigg\}.
\end{align*}
From (\ref{LLN}), we have that
\begin{align}\label{P=1}
P(\Lambda(\varepsilon, \delta))=1,
\quad \mbox{for all $\varepsilon, \delta\in (0,1]$.}
\end{align}
Also,
taking $\varepsilon_c >0$ such that $h_{\delta}(\varepsilon_c)=\eta$, i.e.,
$$
\varepsilon_c=\varepsilon_c(\delta,\eta):= \frac{\eta-\frac{fpr-1+p}{2p-1}}{\frac{p(1-r)}{2p-1}(1+\delta)}
=\frac{(2p-1)\eta-(fpr-1+p)}{p(1-r)(1+\delta)},
$$
we see that for $\varepsilon \le \varepsilon_c$ we have
$
\max\big\{h_{\delta}(\varepsilon), \eta \big\}=\eta
$.
Now consider the recursion formula
\begin{align}\label{R_F}
x_{n+1}= h_{\delta}(x_n).
\end{align}
By ($\ref{h<x}$), for $f<f_c<1$,
\begin{equation}
\label{decreasing}
x_n \text{ is decreasing and }\lim_{n\to\infty}x_n= \frac{fpr-1+p}{pr-1+p}<0.
\end{equation}
We put
$$
h_{\delta}(k,x):=h_{\delta}(2^{-k} ([2^k x]+1)) \text{ for }k\in\mathbb{N},
$$
where $[a]$ denotes the largest integer less than $a\in\mathbb{R}$.
From (\ref{decreasing}) we see that, for sufficiently large $k$, there exists $n_c\in \mathbb{N}$ such that
\begin{align}\label{n_c}
h_{\delta}^n(k,1):= h_{\delta}(k,h_{\delta}^{n-1}(k,1)) \le \eta \mbox{ for all } n \ge n_{c}.
\end{align}
Note that from (\ref{couple}) and (\ref{3:27}) we have that
\begin{align}\label{inc}
\rho_n^f(\varepsilon) \ge \rho_n^f(\varepsilon')
\mbox{ for } \varepsilon > \varepsilon' \text{, and } \rho_n^f \leq \rho_n^f(1),
\end{align}
thus, for any $\omega \in \bigcap_{m\in\mathbb{N}}\Lambda(m2^{-k}, \delta)$
there exists $N_1(\omega)$ such that,
for all $n\ge N_1(\omega)$,
$$
\rho_n^f[\omega] \le\rho_n^f(1)[\omega] \le h_{\delta}(k,1),
$$
and there exists $N_2(\omega)\ge N_1(\omega)$ such that
for all $n\ge N_2(\omega)$
$$
\rho_n^f[\omega] \le
\rho_n^f(h_{\delta}(k,1))[\omega] \le h_{\delta}(k,h_{\delta}(k,1))=h_{\delta}^2(k,1).
$$
Repeating this procedure we have for any $\ell\in\mathbb{N}$ there exists $N_\ell(\omega)$ such that
for all $n\ge N_\ell(\omega)$
\begin{align}
\rho_n^f[\omega] \le h_{\delta}^\ell (k,1).
\end{align}
From (\ref{n_c}), we now have
$$
\rho_n^f[\omega] \le \eta \quad \mbox{ for all $n\ge N_{n_c}(\omega)$}.
$$
Since $P(\bigcap_{m\in\mathbb{N}}\Lambda(m2^{-k}, \delta))=1$ from (\ref{P=1}), we have
\begin{align}
\limsup_{n\to\infty}\rho_{n}^f \le \eta,
\quad \mbox{a.s.}
\end{align}
Thus we obtain (\ref{key1}).
If
\begin{align}
\label{fcon}
f< f_c -\frac{1-r}{r}\varepsilon,
\end{align}
then $L_n^f(\varepsilon)$ is recurrent. Also, for $f < f_c\wedge 1$, the condition (\ref{fcon}) holds for sufficiently small $\varepsilon$, hence
from (\ref{key1}) we see that $L_n^f$ hits the origin infinitely often.
This proves (i) of the Lemma \ref{lemma4}.
Let $f_c<1$. Observing that, for $\widetilde{S}_n^{f-}$ as in (\ref{r-bas}) and $L^{f}_n(\cdot)$ as above,
$$
\widetilde{S}_n^{f-} \le L^{f}_n(0),
$$
we see from (\ref{BAS1})-(\ref{BAS4}) that when $f>f_c$, for only finitely many $n$ we have $\widetilde{S}_n^{f-}=0$.
Thus, from (\ref{3:27}) we have (ii) of (1).
Let $1 \le f_c < \frac{1}{r}$.
Since $1-p < p$,
the random walk comparison noted at the beginning of this section shows that $N_n \to \infty$ almost surely as $n \to \infty$.
This gives (ii) of (2).
\hbox{\rule[-2pt]{3pt}{6pt}}
\vspace{.5cm}
We give the proof of Theorem \ref{Th1}.
Part (1) is obtained by the random walk comparison.
Part (3) is derived from part (2) of Lemma \ref{lemma4}.
The first statement of (2) is derived from (ii) of (1) and (2) in Lemma \ref{lemma4}.
Finally, considering the birth rate $rp$ of mutants, the expected number of mutants born by time $n$ with fitness in $(a,b)$, where $f_c < a < b \leq 1$, is asymptotically $rp(b-a)n$. Thus we have, by an application of the strong law of large numbers,
$$
\liminf_{n \to \infty} \frac{R_n^{b} - R_n^a}{N_n} \geq \frac{pr(b-a)}{2p-1} \text{ almost surely}.
$$
(Note this also follows from part (b) of the main Theorem of Guiol, Machado and Schinazi (2010).)
This completes the proof of the second statement of part (2) of Theorem \ref{Th1}.
Finally, since the sites are each independently and uniformly distributed on $[0,1]$, Corollary \ref{GC} follows from Lemma \ref{lemma4}.
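The phase transition can also be observed numerically (our check, assuming the one-step sampler \texttt{step} sketched in Section 2 is in scope): for $f>f_c$ the proportion $L_n^f/N_n$ settles near the heuristic fixed point of the drift.
\begin{verbatim}
# Sketch (ours): numeric check of Theorem 1(2) via the one-step sampler
# `step' given earlier; for f > f_c the fraction L/(L+R) stabilizes.
import random

p, r = 0.8, 0.8                       # f_c = (1-p)/(pr) = 0.3125
rng = random.Random(2)
for f in (0.4, 0.6, 0.9):
    L = R = 0
    for _ in range(500000):
        L, R = step(L, R, f, p, r, rng)
    print(f, round(L/(L + R), 3))     # ~ (fpr-1+p)/(pr-1+p), heuristically
\end{verbatim}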
\section{Proof of Theorem \ref{Th2}}
We will prove Theorem \ref{Th2} with the help of two lemmas.
Let $A_k(t_1,n)$, $k, t_1,n\in\mathbb{N}$, be the event that a mutant born at time $t_1$ gets $k-1$ attachments until time $n$, and let $q_k(t_1, n):=P(A_k(t_1,n))$. We have
\begin{lem}\label{Lemma6} Let $p=1$, i.e., there are no deaths. For each $k,t_1 \in \mathbb{N}$
\begin{align}\label{429}
E\left[
\left\{
\frac{1}{n}\sum_{t_1=1}^n(\mathbf{1}_{A_k(t_1,n)}-q_k(t_1,n) )
\right\}^2
\right]\to 0 \text{ as } n\to\infty.
\end{align}
\end{lem}
\noindent {\it Proof.}\quad
The left hand side of (\ref{429}) is
\begin{align*}
&\frac{1}{n^2}\sum_{t_1=1}^n \sum_{s_1=1}^n \left[
P(A_k(s_1,n)\cap A_k(t_1,n)) - P(A_k(s_1,n))P(A_k(t_1,n))
\right]
\\
&=\frac{1}{n^2}\sum_{t_1=1}^n \sum_{s_1=1}^n P(A_k(s_1,n))\left[
P(A_k(t_1,n)\big{|}A_k(s_1,n)) - P(A_k(t_1,n))
\right].
\end{align*}
Thus it is enough to show the following for the proof of the lemma:
for any $x_1, y_1\in (0,1)$ with $x_1< y_1$
\begin{align}\label{430}
&P(A_k(y_1 n,n)\big{|}A_k(x_1 n,n))- P(A_k(y_1 n,n))\to 0, \quad n\to\infty.
\end{align}
Let $\{t_\ell\}_{\ell=1}^k$ be an increasing sequence of $\mathbb{N}$ with $t_k\le n$.
We denote by $A_k[\{t_\ell \}_{\ell=1}^k;n]$ the event that a mutant comes at time $t_1$ which gets its $(\ell-1)$th attachment at time $t_\ell$, $\ell= 2,3,\dots,k$, and no other attachment till time $n$.
Then
\begin{align}\label{sum_Ak}
A_k(t_1,n)=\sum_{\substack{t_2, t_3,\dots,t_k\in\mathbb{N} \\ t_1<t_2<\cdots < t_k<n}} A_k[\{t_\ell \}_{\ell=1}^k;n].
\end{align}
Let $\{s_\ell\}_{\ell=1}^k$ and $\{t_\ell\}_{\ell=1}^k$ be increasing sequences of $\mathbb{N}$ with $s_k,t_k\le n$.
Suppose that $s_1=t_1$, then
\begin{align}\label{E1}
P(A_k[\{t_\ell \}_{\ell=1}^k;n]\big{|}A_k[\{s_\ell \}_{\ell=1}^k;n])= \mathbf{1}(s_\ell = t_\ell, \ell =2,3,\dots,k).
\end{align}
Also, for $s_1\not =t_1$,
if $\{s_\ell; \ell=2,\dots, k\}\cap \{t_\ell; \ell=2,\dots, k\}\not=\emptyset$, then
\begin{align}\label{E2}
P(A_k[\{t_\ell \}_{\ell=1}^k;n]\big{|}A_k[\{s_\ell \}_{\ell=1}^k;n])=0;
\end{align}
and if $\{s_\ell; \ell=1,2,\dots, k\}\cap \{t_\ell; \ell=1,2,\dots, k\}=\emptyset$, then
\begin{align*}
\nonumber
&P(A_k[\{t_\ell \}_{\ell=1}^k;n]\Big{|}A_k[\{s_\ell \}_{\ell=1}^k;n])
\\
\nonumber
&=P(A_k[\{t_\ell \}_{\ell=1}^k;n]\Big{|}\mbox{the mutant which came at time }t_1\\
\nonumber
&
\mbox{$\qquad\qquad\qquad\qquad\qquad$ does not get any attachment at times $\{s_\ell\}_{\ell=1}^k$ })
\\
&=P(A_k[\{t_\ell \}_{\ell=1}^k;n])\prod_{m : s_m >t_1}\left(1-\frac{\ell[s_m](1-r)}{s_m} \right)^{-1},
\end{align*}
where $\ell[s_m]= \max\{ \ell : t_\ell < s_m\}$ is the population size at time $s_m$ of the fitness location occupied by
the mutant which came at time $t_1$.
Hence, we have,
\begin{align}\nonumber
&P(A_k[\{t_\ell \}_{\ell=1}^k;n]\big{|}A_k[\{s_\ell \}_{\ell=1}^k;n])- P(A_k[\{t_\ell \}_{\ell=1}^k;n])
\\ \nonumber
&= P(A_k[\{t_\ell \}_{\ell=1}^k;n]\big{|}A_k[\{s_\ell \}_{\ell=1}^k;n])
\left[
1-\prod_{m : s_m >t_1}\left(1-\frac{\ell[s_m](1-r)}{s_m} \right)\right]
\\ \label{E3}
&\le \frac{k^2}{t_1} P(A_k[\{t_\ell \}_{\ell=1}^k;n]\big{|}A_k[\{s_\ell \}_{\ell=1}^k;n]).
\end{align}
Combining (\ref{E1}), (\ref{E2}) and (\ref{E3}) with (\ref{sum_Ak}),
we obtain (\ref{430}).
This completes the proof.
\hbox{\rule[-2pt]{3pt}{6pt}}
\vspace{.5cm}
Next we have
\begin{lem}\label{Lemma5} Let $p=1$. For each $k\in\mathbb{N}$
\begin{align}
&\lim_{n\to\infty}\frac{1}{n}\sum_{t_1=1}^n q_{k}(t_1,n)
=\frac{r}{1-r} B\left(\frac{2-r}{1-r}, k\right).
\end{align}
\end{lem}
\noindent{\it Proof.}\quad
Let $A_k(t_1,n)$ and $q_k(t_1,n)$, $k, t_1,n\in\mathbb{N}$, be as above.
For $k=1$, we have
\begin{align*}
q_1(t_1, n)&= r \prod_{j=t_1+1}^n \left( 1-\frac{1-r}{j}\right),
\end{align*}
since the number of individuals at time $j-1$ is $j$ and the probability that the mutant who arrived at time $t_1$ gets an attachment at time $j$ is $\frac{1-r}{j}$.
For $k=2$
\begin{align*}
q_2(t_1, n)&= r \sum_{t_2=t_1+1}^n \left\{\prod_{j=t_1+1}^{t_2-1} \left( 1-\frac{1-r}{j}\right)\right\}
\frac{1-r}{t_2}\left\{\prod_{j=t_2+1}^n \left( 1-\frac{2(1-r)}{j}\right)\right\},
\end{align*}
where $t_2$ is the time of the first attachment.
Similarly for each $k\in\mathbb{N}$
\begin{align*}
q_k(t_1, n)&= r \sum_{t_1 < t_2<\cdots <t_k\le n}
\prod_{\ell=1}^k \prod_{j=t_\ell +1}^{t_{\ell+1}}\left( 1-\frac{\ell(1-r)}{j}\right)
\prod_{\ell=1}^{k-1}\frac{\ell(1-r)}{t_{\ell+1}-\ell(1-r)},
\end{align*}
where $t_{k+1}:=n$ and we used the equation
$$
\frac{\ell(1-r)}{t_{\ell +1}}\frac{1}{1-\frac{\ell(1-r)}{t_{\ell +1}}}=\frac{\ell(1-r)}{t_{\ell+1}-\ell(1-r)}.
$$
By using Stirling's formula we see that
$$
\prod_{j=t_{\ell}+1}^{t_{\ell+1}} \left( 1-\frac{\ell(1-r)}{j}\right) \sim \left( \frac{t_{\ell}}{t_{\ell+1}}\right)^{\ell(1-r)},
\quad t_{\ell}, t_{\ell+1}\to\infty.
$$
Now letting $n\to\infty$ and taking $t_\ell=n x_\ell$ (with $x_{k+1}:=1$) we have
\begin{align*}
&\frac{1}{n}\sum_{t_1=1}^n q_{k}(t_1,n)
\sim r \int_{0<x_1<\cdots <x_{k}<1}dx_1\cdots dx_k \prod_{\ell=1}^k \left(\frac{x_\ell}{x_{\ell+1}}\right)^{\ell(1-r)}\prod_{\ell=1}^{k-1}\frac{\ell(1-r)}{x_{\ell+1}}
\\
&=r (1-r)^{k-1}(k-1)!\int_{0<x_1<\cdots <x_{k}<1}dx_1\cdots dx_k \; x_1^{1-r} \prod_{\ell=2}^k x_{\ell}^{-r}
\\
&=r (1-r)^{k-1}\int_0^1 dx_1 x_1^{1-r} \prod_{\ell=2}^k \int_{x_1}^1 dx_\ell \; x_\ell^{-r}
\\
&= r \int_0^1 dx_1 x_1^{1-r} (1-x_1^{1-r})^{k-1}
\\
&=\frac{r}{1-r}\int_0^1 dy \; y^{\frac{1}{1-r}}(1-y)^{k-1}= \frac{r}{1-r} B\left(\frac{2-r}{1-r}, k\right).
\end{align*}
This completes the proof.
\hbox{\rule[-2pt]{3pt}{6pt}}
\vskip 3mm
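Lemma \ref{Lemma5} is easy to confirm by simulation (our sketch): for the pure-birth dynamics the empirical fraction of sites of size $k$ approaches $\frac{1}{1-r}B(\frac{2-r}{1-r},k)$, in agreement with (\ref{con1}) below.
\begin{verbatim}
# Sketch (ours): Monte Carlo for the p=1 dynamics; compare the empirical
# fraction of sites of size k with (1/(1-r)) * B((2-r)/(1-r), k).
import random
from math import exp, lgamma

def site_sizes(n, r, seed=1):
    rng = random.Random(seed)
    sizes = [1]                  # sizes[j] = population of site j
    owner = [0]                  # owner[i] = site of individual i
    for _ in range(n):
        if rng.random() < r:
            sizes.append(1)                       # a mutant founds a new site
            owner.append(len(sizes) - 1)
        else:                                     # preferential attachment:
            j = owner[rng.randrange(len(owner))]  # a uniform individual's site
            sizes[j] += 1
            owner.append(j)
    return sizes

r = 0.5
sizes = site_sizes(200000, r)
a = (2 - r)/(1 - r)                               # here a = 3
for k in (1, 2, 3):
    emp = sum(1 for s in sizes if s == k)/len(sizes)
    thy = exp(lgamma(a) + lgamma(k) - lgamma(a + k))/(1 - r)
    print(k, round(emp, 3), round(thy, 3))        # e.g. 0.667, 0.167, 0.067
\end{verbatim}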
We give the proof of Theorem \ref{Th2}.
When $p=1$, from Lemmas \ref{Lemma6} and \ref{Lemma5} we have
\begin{align*}
\frac{1}{n}\sum_{t_1=1}^n \mathbf{1}_{A_k(t_1,n)}
\to \frac{r}{1-r} B\left(\frac{2-r}{1-r}, k\right) \text{ as } n\to\infty,
\quad \text{in probability}.
\end{align*}
Noting that
$$
\lim_{n\to\infty}\frac{S_n}{n}=r, \quad \text{a.s.}
$$
we have
\begin{align}\label{con1}
&\lim_{n\to\infty}\frac{\sum_{f\in (0,1)}\left(U_n^k(f)-U_n^k(f+)\right)}{S_n}=\frac{1}{1-r} B\left(\frac{2-r}{1-r}, k\right)=p_k \quad \text{in probability}.
\end{align}
Next we consider the case where $p\in (0,1)$.
We introduce another Markov process $\hat{X}_n$, $n\in \mathbb{N}\cup \{0\}$, which is a pure birth process, as follows:
\begin{enumerate}
\item At time $0$ there exists one individual at a site uniformly distributed on $(f_c,1)$.
\item With probability $p(1-rf_c)$ there is a new birth. There are two possibilities --
\begin{itemize}
\item with probability $\displaystyle{\hat{r}:=\frac{pr(1-f_c)}{p(1-rf_c)}}$ a mutant is born with a fitness uniformly distributed in $[f_c,1]$,
\item with probability $\displaystyle{1-\hat{r}:=\frac{p(1-r)}{p(1-rf_c)}}$ a non-mutant individual is born. It has a fitness $f$ with a probability proportional to the number of individuals of fitness $f$, and we increase the corresponding population of fitness $f$ individuals by $1$.
\end{itemize}
\item With probability $1-p(1-rf_c)$ nothing happens, i.e. neither a birth nor a death occurs.
\end{enumerate}
For the Markov process $\hat{X}_n$, $n\in\mathbb{N}\cup \{0\}$,
we define $\hat{q}_k$, $\hat{S}_n$ and $\hat{U}_n$ in the same manner as $q_k$, $S_n$ and $U_n$ for $X_n$, $n\in\mathbb{N}\cup \{0\}$. Then by the same argument as above we see that
\begin{align*}
\frac{1}{n}\sum_{t_1=1}^n \hat{q}_{k}(t_1,n)&\sim p(1-rf_c) \frac{\hat{r}}{1-\hat{r}} B\left(\frac{2-\hat{r}}{1-\hat{r}}, k\right)
\end{align*}
and
$$
\lim_{n\to\infty}\frac{\hat{S}_n}{n}=pr(1-f_c).
$$
Hence
\begin{align*}
&\lim_{n\to\infty}\frac{\sum_{f\in (0,1)}\hat{U}_n^k(f)-\hat{U}_n^k(f+)}{\hat{S}_n}
= \frac{1}{1-\hat{r}} B\left(\frac{2-\hat{r}}{1-\hat{r}}, k\right)
= p_k
\end{align*}
From Lemma \ref{lemma4}, we know that deletions of individuals in $(f_c,1)$ occur finitely often and $\frac{R_n^f}{L_n^f+R_n^f}\to 1$ almost surely as $n\to\infty$. Thus we have
\begin{align*}
&\lim_{n\to\infty}\frac{\sum_{f\in (0,1)}U_n^k(f)-U_n^k(f+)}{S_n}=\lim_{n\to\infty}\frac{\sum_{f\in (0,1)}\hat{U}_n^k(f)-\hat{U}_n^k(f+)}{\hat{S}_n} \quad \text{a.s.}
\end{align*}
and so (\ref{con1}) holds for $p\in (0,1]$.
Noting that the sites are uniformly distributed on $[0,1]$ independently, and preferential attachment does not depend on the position of sites, we obtain Theorem \ref{Th2} from (\ref{con1}).
\hbox{\rule[-2pt]{3pt}{6pt}}
\vspace{.6cm}
\section{Number of individuals of a fixed fitness}
Fix $f\in [0,1]$ and let $N_n^f$ denote the number of individuals with fitness $f$ at time $n$. When $rp > 1-p$, i.e. $f_c < 1$, from Lemma \ref{lemma4} we know that $P(L_n^f=0 \text{ infinitely often}) =1$ for $f \in (f_c,1)$. Thus, if a mutant with fitness $f \in (f_c,1)$ is born at some large time $\ell$, then the chances of the mutant dying are small, and so a natural question is `for some $n > \ell$, how many individuals did this mutant attract by time $n$?', i.e., what is the value of $N_n^f$?
\begin{prop}
Fix $f\in (f_c,1)$. Then, for $\ell < n$, as $ \ell, n\to\infty$,
\begin{align*}
&E[N_n^f | \text{a mutant with fitness $f$ is born at time $\ell$}]\\
&\sim \frac{\Gamma((2p-1)\ell+1)\Gamma((2p-1)n+1+p(1-r))}{\Gamma((2p-1)\ell+1+p(1-r))\Gamma((2p-1)n+1)}
\\
&\sim\left( \frac{n}{\ell}\right)^{p(1-r)}.
\end{align*}
\end{prop}
\noindent {\it Proof}. Since we are interested in the region $f > f_c$, and since for the calculation of the expectation we just need to factor out the death rate $(1-p)$, we modify the Markov process
$\hat{X}_n$ introduced in the proof of Lemma \ref{Lemma5} by removing the times when `nothing happens', i.e. the steps at which the process does not move. This is done as follows: let $\hat{N}_n$ be the number of individuals of the process $\hat{X}_n$ at time $n$; we define a new Markov process $\check{X}_n$, for $n \geq 0$, by
$$
\hat{X}_n = \check{X}_{\hat{N}_n-1}.
$$
Since $\hat{N}_0 = 1$, we see that $\check{N}_\ell = \ell+1$, where $\check{N}_\ell$ is the number of individuals of the process $\check{X}$ at time $\ell$.
Letting $\check{N}_m^f$ denote the number of individuals of the $\check{X}$ process of fitness $f$ at time $m$, we have
\begin{align*}
&E[\check{N}_m^f|\check{N}_{m-1}^f]
\\
&=\{1-p(1-r)\}\check{N}_{m-1}^f +p(1-r)\left\{(\check{N}_{m-1}^f+1)\frac{\check{N}_{m-1}^f}{m}+\check{N}_{m-1}^f \left(1-\frac{\check{N}_{m-1}^f}{m} \right)\right\}
\\
&=\left(1+\frac{p(1-r)}{m}\right)\check{N}_{m-1}^f.
\end{align*}
If $\check{N}_0^f=\check{N}_0=1$ then we have
\begin{align}
&E[\check{N}_m^f | \check{N}_0^f=1]=\prod_{k=1}^{m}\left(\frac{k+p(1-r)}{k}\right)=\frac{\Gamma(m+1+p(1-r))}{\Gamma(1+p(1-r))\Gamma(m+1)},
\end{align}
while, if $\check{N}_\ell^f=1$ then we have
\begin{align}
&E[\check{N}_m^f | \check{N}_\ell^f=1]= \prod_{k=\ell+1}^{m}\left(\frac{k+p(1-r)}{k}\right)
=\frac{\Gamma(\ell+1)\Gamma(m+1+p(1-r))}{\Gamma(\ell+1+p(1-r))\Gamma(m+1)}.
\end{align}
Since $\frac{\hat{N}_n}{n}\to pr(1-f_c)+p(1-r)=2p-1$,
if $\hat{N}_0^f=1$ then we have
\begin{align}
&E[\hat{N}_n^f | \hat{N}_0^f=1]\sim \prod_{k=1}^{(2p-1)n}\left(\frac{k+p(1-r)}{k}\right)=\frac{\Gamma((2p-1)n+1+p(1-r))}{\Gamma(1+p(1-r))\Gamma((2p-1)n+1)}.
\end{align}
Also, $\frac{\hat{N}_\ell}{\ell}\to 2p-1$, so
for $\hat{N}_\ell^f=1$, we have
\begin{align*}
E[\hat{N}_n^f | \hat{N}_\ell^f=1]&= \prod_{k=(2p-1)\ell+1}^{(2p-1)n}\left(\frac{k+p(1-r)}{k}\right)
\\
&=\frac{\Gamma((2p-1)\ell+1)\Gamma((2p-1)n+1+p(1-r))}{\Gamma((2p-1)\ell+1+p(1-r))\Gamma((2p-1)n+1)}.
\end{align*}
From Lemma \ref{lemma4} we have $E[N_n^f |N_\ell^f=1]\sim E[\hat{N}_n^f|\hat{N}_\ell^f=1]$,
and that completes the proof of the proposition.
\hbox{\rule[-2pt]{3pt}{6pt}}
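The Gamma-function asymptotics used above are easy to verify numerically. The following sketch (ours, for illustration only) evaluates the exact ratio via log-Gamma functions and compares it with the power law $\left(n/\ell\right)^{p(1-r)}$:
\begin{verbatim}
from math import lgamma, exp

def gamma_ratio(a, b, c):
    """Gamma(a+1) Gamma(b+1+c) / (Gamma(a+1+c) Gamma(b+1)), via log-Gamma."""
    return exp(lgamma(a + 1) + lgamma(b + 1 + c)
               - lgamma(a + 1 + c) - lgamma(b + 1))

p, r = 0.75, 0.5
c = p * (1 - r)                        # the exponent p(1 - r) = 0.375
l, n = 1_000, 100_000
a, b = (2*p - 1) * l, (2*p - 1) * n    # the substituted arguments (2p-1)l, (2p-1)n
print(gamma_ratio(a, b, c), (n / l) ** c)   # both ~ 100**0.375 ~ 5.62
\end{verbatim}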
\section{Heuristics for the case $f_c > 1$}
We now present some mean field heuristics about the location of the leftmost site
$x_t$ at time $t$ in the case when $pr < 1-p < p$, i.e. $f_c > 1$. These heuristics should be seen in connection with (\ref{Thm11}) of Theorem \ref{Th1}.
Let $y_t=1-x_t$. The number of individuals to enter the interval $(x_t,1]$ is approximately
$$
pry_t dt + p(1-r)dt,
$$
where the first term counts the births which are mutants and the second term counts the births which are not mutants. Meanwhile, the number of individuals deleted in the interval $(x_t,1]$ is approximately
$$
-\frac{dy_t}{y_t} \{ p-(1-p) \}t,
$$
this being the absolute value of the deletions since $\frac{dy_t}{dt}<0$.
Thus we consider the following differential equation:
\begin{align*}
pry_t dt + p(1-r)dt +\frac{dy_t}{y_t}\{ 2p-1\}t=(2p-1)dt,
\end{align*}
from which we have
\begin{align*}
\frac{dt}{t} &= \frac{-(2p-1)}{pry_t+p(1-r)-(2p-1)}\frac{dy_t}{y_t}
\\
&= \frac{-(2p-1)}{pry_t+pr(f_c-1)}\frac{dy_t}{y_t}
\\
&= -\frac{2p-1}{pr}\left\{ \frac{1}{y_t+ (f_c-1)} \right\}\frac{dy_t}{y_t}
\\
&=-\frac{2p-1}{pr(f_c-1)}\left\{ \frac{f_c-1}{y_t+ (f_c-1)} \right\}\frac{dy_t}{y_t}
\\
&=-\frac{2p-1}{pr(f_c-1)}\left\{ 1 - \frac{y_t}{y_t+ (f_c-1)} \right\}\frac{dy_t}{y_t}
\\
&=-\frac{2p-1}{pr(f_c-1)}\left\{ \frac{1}{y_t} - \frac{1}{y_t+ (f_c-1)} \right\}dy_t.
\end{align*}
Hence, for an appropriate constant $c$, we have
$$
c+ \log t = \frac{2p-1}{pr(f_c-1)}\left[\log (y_t+(f_c-1)) -\log y_t\right]= \frac{2p-1}{pr(f_c-1)}\log\left(1+ \frac{f_c-1}{y_t}\right),
$$
and so
\begin{align*}
&1+ \frac{f_c-1}{y_t}=\exp\left\{(c+\log t)\frac{pr(f_c-1)}{2p-1}\right\}=Ct^{\gamma},
\end{align*}
where $\displaystyle{ \gamma = \frac{pr(f_c-1)}{2p-1} = \frac{1-p-pr}{2p-1}}$
and $C=e^{c\gamma}$. Thus
$$
y_t = \frac{f_c-1}{Ct^\gamma -1} \sim C' t^{-\gamma}, \quad t\to\infty.
$$
Moreover, the number of sites is approximately
$$
rpty_t\sim C'rp\, t^{1-\gamma}.
$$
\begin{remark}
For $f_c >1$ we have $\gamma=\gamma(p,r) > 0$, and $\gamma(p,r)$ is a decreasing function of $p$. Also
\begin{itemize}
\item [(i)] when $p=1-p$, i.e., $p=\frac{1}{2}$, then $\gamma = \infty$; this corresponds to the case when the process dies out repeatedly,
\item[(ii)] when $pr=1-p$, i.e., $f_c=1$, then $\gamma = 0$; this corresponds to the case when the number of sites surviving is of order $o(t)$.
\item[(iii)] when $p=\frac{2}{3+r}\in \left(\frac{1}{2},\frac{1}{1+r} \right) $, then $\gamma =1$; this corresponds to the case when there are only a bounded number of sites surviving.
\end{itemize}
\end{remark}
From the above, we see that there are three critical values
$$
p_c^{(0)}:=\frac{1}{2} < p_c^{(1)}:= \frac{2}{3+r} < p_c^{(2)}:=\frac{1}{r+1}<1
$$
and four phases:
\begin{enumerate}
\item For $p\in (p_c^{(2)}, 1)$, $\gamma \in (-r,0)$ and individuals exist in the interval $(f_c,1]$.
\item For $p\in (p_c^{(1)}, p_c^{(2)})$, $\gamma \in (0,1)$, the number of sites increases with order $t^{1-\gamma}$,
and the average number of individuals per site is of order $t^{\gamma}$.
\item For $p\in (p_c^{(0)}, p_c^{(1)}]$, $\gamma \in (1,\infty)$, that is, $1-\gamma$ is negative, and the number of sites is finite, with the average number of individuals being of order $t$.
\item For $p\in (0,p_c^{(0)}]$ the process dies out infinitely often.
\end{enumerate}
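To make the phase picture concrete, the following sketch (ours; \texttt{gamma\_exp} is a hypothetical helper, not code from the paper) evaluates the critical values and the exponent $\gamma(p,r)=\frac{1-p-pr}{2p-1}$ for a sample value of $r$:
\begin{verbatim}
def gamma_exp(p, r):
    """gamma(p, r) = (1 - p - p*r) / (2*p - 1), valid for p > 1/2."""
    return (1 - p - p * r) / (2 * p - 1)

r = 0.5
p0, p1, p2 = 0.5, 2 / (3 + r), 1 / (1 + r)
print("critical values:", p0, round(p1, 4), round(p2, 4))
for p in (0.55, 0.6, 0.7):                  # one sample p per phase 3, 2, 1
    print(p, round(gamma_exp(p, r), 4))
\end{verbatim}
For $r=1/2$ one finds $p_c^{(1)}\approx 0.5714$ and $p_c^{(2)}\approx 0.6667$, and the printed values of $\gamma$ ($1.75$, $0.5$, $-0.125$) fall in the ranges listed for phases 3, 2 and 1 above.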
\section{Simulation}
We conclude the paper with some simulations. The R code is given in the appendix. Here we have taken $p=3/4$, $r=1/2$, so that $f_c = 2/3$. The simulation has been conducted with $n = 100,000$.
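The appendix code is in R; as an illustration we also give a Python sketch of the dynamics as we read them. In particular, the rule that a death removes the individual of lowest fitness is our assumption and should be checked against the appendix; the run below is also shorter than the paper's $n = 100{,}000$ for speed.
\begin{verbatim}
import random

def run(n, p, r, rng):
    """Birth-and-death sketch: with prob. p a birth occurs (mutant with
    prob. r at a uniform fitness, preferential attachment otherwise);
    with prob. 1-p the individual of lowest fitness is removed
    (our assumption for the death rule)."""
    f0 = rng.random()
    pop = {f0: 1}                  # fitness -> population of that site
    individuals = [f0]             # one entry per living individual
    for _ in range(n):
        if not individuals:        # restart after extinction
            f = rng.random()
            pop, individuals = {f: 1}, [f]
            continue
        if rng.random() < p:
            if rng.random() < r:
                f = rng.random()
                pop[f] = 1
            else:
                f = individuals[rng.randrange(len(individuals))]
                pop[f] += 1
            individuals.append(f)
        else:
            f = min(pop)           # least-fit site loses one individual
            pop[f] -= 1
            individuals.remove(f)
            if pop[f] == 0:
                del pop[f]
    return pop

rng = random.Random(7)
pop = run(50_000, 0.75, 0.5, rng)  # paper's p, r; shorter run than n = 100,000
print(len(pop), "surviving sites; lowest fitness =", round(min(pop), 3))
# the lowest surviving fitness hovers near f_c = 2/3
\end{verbatim}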
Figure \ref{popsize-fitness} presents the size of the population in $\log_2$ scale at each surviving site. The points above the red line indicate the sites where the population size is $2^6$ or more, while the points above the green line indicate the sites where the population size is $2^8$ or more.
\begin{figure}[htp] \centering{
\includegraphics[scale=0.39]{popsize-fitness.pdf}
}
\caption{Population (in $\log_2$ scale) at various fitness levels.}
\label{popsize-fitness}
\end{figure}
In Figure \ref{kdist} the $x$-axis gives the population size, while the $y$-axis presents the proportion of sites with the given population size. The blue line is the theoretical value as obtained from Theorem \ref{Th2} and the vertical bars are the observed values.
\begin{figure}[htp] \centering{
\includegraphics[scale=0.35]{kdist.pdf}
}
\caption{Theoretical and observed proportion of sites with respect to population size.}
\label{kdist}
\end{figure}
\section{Acknowledgements}
The authors are grateful to Professor Deepayan Sarkar who wrote the R code and performed the simulation.
Rahul Roy acknowledges the grant MTR/2017/000141 from DST which supported this research and also the hospitality of Keio University where much of the work was done.
Hideki Tanemura's research is supported in part by Grant-in-Aid for Scientific Research (S), No.16H06338; Grant-in-Aid for Scientific Research (B), No.19H01793 from Japan Society for the Promotion of Science.
\section{Introduction}
\label{introduction}
In this section we define the space of real, unimodular lattices, give our result, and introduce notation. The novelty of this paper lies in exhibiting $\mathbb{Z}^2$'s extremal behavior with respect to the Euclidean distance function; that $\mathbb{Z}^2$ is a critical point with respect to this function in the space of lattices is known in the community, and we choose to include it here for completeness.
\noindent{}{Euclidean} lattices are ubiquitous in many fields of math, for example, group theory \cite{groupslattices}, cryptography \cite{LatBase, crypto}, representation theory \cite{reptheorylat}, and the study of Lie groups and Lie algebras \cite{LatLie}. A \emph{lattice} $\Gamma$ in $\mathbb{R}^n$ is a discrete, additive subgroup of finite covolume. Every lattice $\Gamma$ can be expressed as the set of integer linear combinations of basis vectors $\{v_1, v_2, \ldots, v_n\}$ in $\mathbb{R}^n$. Symbolically we can express $\Gamma$ as $$\Gamma =\left \{ \sum_{i=1}^n m_i v_i: m_i \in \mathbb{Z}\right \}.$$ The parallelepiped spanned by the $v_i$ is a \textit{fundamental domain} for $\Gamma$ acting on $\mathbb{R}^n$.
Using the column vectors $v_i$ we form a matrix $g$; we use this to express $\Gamma$ as $\Gamma = g\mathbb{Z}^n$. Restricting to the case where $g \in SL \left(n, \mathbb{R} \right)$ is the same as considering lattices of unit covolume. The \emph{space of unimodular lattices}, denoted $L \left(\mathbb{R}^n \right)$, is then given by the quotient $SL \left(n, \mathbb{R} \right)/SL \left(n, \mathbb{Z} \right)$, where the coset $gSL \left(n ,\mathbb{Z} \right)$ is identified with the lattice $g \mathbb{Z}^n$. This assignment is well-defined, since for any $h \in SL \left(n, \mathbb{Z} \right)$, $h\mathbb{Z}^n = \mathbb{Z}^n$.
Every flat two-dimensional torus can be seen as the quotient of $\mathbb{R}^2$ by a lattice $\Gamma = g\mathbb{Z}^2$, where we can think of the resulting torus $\mathbb{R}^2/\Gamma$ as a parallelogram spanned by (any choice of) basis vectors of $\Gamma$ with sides identified by Euclidean translations. Any basis for $\Gamma$ gives a matrix in $GL\left(2, \mathbb{R}\right)$ whose columns are the basis vectors. When we normalize lattices to have covolume $1$, we can then restrict our set of matrices to $SL\left(2, \mathbb{R}\right)$. If we consider tori up to rotation, we have a further quotient by the group $SO(2,\mathbb{R})$, so our space of 2-dimensional tori (up to rotation and scaling) is $SO\left(2,\mathbb{R}\right) \backslash SL(2, \mathbb{R}) / SL(2, \mathbb{Z})$.
Figure ~\ref{fig:modular:surface} gives an illustration of $L(\mathbb{R}^2)$ as a fundamental domain for the action of $SL(2, \mathbb{Z})$ on the upper half-plane $\mathbb H^2 = SO(2, \mathbb{R})\backslash SL(2,\mathbb{R})$.
\begin{figure}\caption{An illustration of $L(\mathbb{R}^2)$, unimodular lattices up to rotation, via an image of the fundamental domain for $SL(2, \mathbb{Z})$ acting on $\mathbb{H} \simeq SO\left(2,\mathbb{R}\right) \backslash SL(2, \mathbb{R})$. }\label{fig:modular:surface}
\begin{tikzpicture}[scale=1]
\draw(-5,0)--(5,0);
\draw(180:1) arc (180:120:1);
\draw(60:1) arc (60:0:1);
\filldraw[fill=blue!20!white] (1/2, 5)--(60:1) arc (60:120:1)--(-1/2, 5);
\draw[dashed] (0,1)--(0,5) node[below] at (0, 1){\textcolor{green}{\tiny{$\mathbb{Z}^2$}}};
\draw[green!80] (0,1) circle (.1 cm);
\draw[xshift=1cm](180:1) arc (180:120:1);
\draw[orange!80] (.5, .866025) circle (.1cm) node[below] at (.5, .866025){\tiny{$\Lambda$}};
\draw[xshift=1cm](60:1) arc (60:0:1);
\draw[xshift=1cm] (1/2, 5)--(60:1) arc (60:120:1)--(-1/2, 5);
\draw[xshift=2cm](180:1) arc (180:120:1);
\draw[xshift=2cm](60:1) arc (60:0:1);
\draw[xshift=2cm] (1/2, 5)--(60:1) arc (60:120:1)--(-1/2, 5);
\draw[xshift=3cm](180:1) arc (180:120:1);
\draw[xshift=3cm](60:1) arc (60:0:1);
\draw[xshift=3cm] (1/2, 5)--(60:1) arc (60:120:1)--(-1/2, 5);
\draw[xshift=-1cm](180:1) arc (180:120:1);
\draw[xshift=-1cm](60:1) arc (60:0:1);
\draw[xshift=-1cm] (1/2, 5)--(60:1) arc (60:120:1)--(-1/2, 5);
\draw[xshift=-2cm](180:1) arc (180:120:1);
\draw[xshift=-2cm](60:1) arc (60:0:1);
\draw[xshift=-2cm] (1/2, 5)--(60:1) arc (60:120:1)--(-1/2, 5);
\draw[xshift=-3cm](180:1) arc (180:120:1);
\draw[xshift=-3cm](60:1) arc (60:0:1);
\draw[xshift=-3cm] (1/2, 5)--(60:1) arc (60:120:1)--(-1/2, 5);
\foreach \x in { -3, -2, -1, 0, 1, 2, 3, } \path(\x,0)node[below]{\tiny $\x$};
\end{tikzpicture}
\end{figure}
\subsection{Distances from deep holes}
We now fix our question: Let $p =\left(\frac{1}{2}, \frac{1}{2}\right)$ denote the center of the standard square fundamental domain $[0, 1]^2$ of $\mathbb{Z}^2$; in the terminology of \cite{extremal}, this center is called a \emph{deep hole} in the lattice. We note that deep holes are also referred to as \textit{circumcenters} when the lattices have rotational symmetry. Let $A_r\left(\mathbb{Z}^2, p\right)= \{mv+nw \in \mathbb{Z}^2 : |mv+nw-p| = r\}$ be the set of integer lattice points of distance $r$ from $p$. Let $\Delta = \left(v'\ w'\right)\mathbb{Z}^2 = \mathbb{Z} v' + \mathbb{Z} w'$ represent a small perturbation of $\mathbb{Z}^2$ in the space of unimodular lattices. Then, $\det\left( v'\ w'\right)= 1$, and $|v-v'|$ and $|w-w'|$ are small. Next, we define $C_r\left(\mathbb{Z}^2, p\right)= \{mv' + nw' : |mv+nw - p| = r\}$ to be the set of perturbations of lattice points which were originally at distance $r$ from $p$ in $\mathbb{Z}^2$. Note that it is equivalent to express $C_r$ in terms of $A_r$: $C_r = \{mv' + nw': mv + nw \in A_r\}$. We want to compare the distances of the lattice points $C_r$ in the perturbed lattice $\Delta$ from the deep hole $p$ to the distances of the points $A_r$ in the original lattice. Symbolically, we want to compute the difference of the following sums to explore the behavior of lattices nearby $\mathbb{Z}^2$:
$$\sum_{\delta \in C_r} \|p-\delta\| \text{ and } \sum_{z \in A_r} \|p-z\|.$$
\subsection{Result}
\begin{theorem}\label{theorem:theorem}
\text{ } \\
If $\Delta$ is sufficiently close to $\mathbb{Z}^2$ with respect to the Euclidean metric, then for a fixed deep hole $p$
\begin{equation}\label{eq_dist2}
\sum_{\delta \in C_r} \| p - \delta\| - \sum_{z \in A_r} \| p - z\| \geq r \, |A_r| \, d\left(\Delta, \mathbb{Z}^2\right)^2.
\end{equation}
If $\phi:\mathbb{R}_+ \rightarrow \mathbb{R}$ is any monotonically increasing, convex function, then
\begin{equation*}\label{eq_dist3}
\sum_{\delta \in C_r} \phi\left(\| p - \delta\|\right)- \sum_{\lambda \in A_r} \phi\left(\| p - \lambda\|\right)\geq r \, \phi'\left(r\right)\, |A_r| \, d\left(\Delta, \mathbb{Z}^2\right)^2.
\end{equation*}
\end{theorem}
\vspace{.5cm}
The distance function $d\left(\Delta, \mathbb{Z}^2\right)$ is given by $\sqrt{ |v-v'|^2 + |w-w'|^2}$. Taking $d\left(\Delta, \mathbb{Z}^2\right)= \|\left(v, w\right)-\left(v', w'\right)\|$, where $\| \cdot \|$ is any other norm on $\mathbb{R}^4$, would yield an equivalent result, up to constants. Our proof of this result relies on explicit computations of derivatives.
\subsubsection{Organization}
In this section, we gave a brief introduction to lattices and the framework of examining their distances from a fixed, non-lattice point, and gave our main result. In Section \ref{symmetries}, we prove Lemma \ref{Lemma} about the rotational symmetry of lattice points in $\mathbb{Z}^2$. In Section \ref{section3}, we prove our result, and in Section \ref{sect4}, we give further directions for research which naturally arise from this result and those of \cite{extremal}.
\subsubsection{Notation}
We give a brief summary of the notation used throughout this work, some of which was given in the introduction. First, we remark that though we are always working with column vectors, for ease of notation we write them as row vectors.
\begin{itemize}
\item $SL \left(n, \mathbb{R}\right)$ is the group of $n \times n$ real matrices with determinant $1$; $SL \left(n,\mathbb{Z}\right)$ has integer entries.
\item $L\left(\mathbb{R}^n\right)= SL\left(n, \mathbb{R}\right)/SL\left(n, \mathbb{Z}\right)$ is the space of unimodular lattices in $\mathbb{R}^n$. Here, we consider lattices up to rotation, so $L\left(\mathbb{R}^n\right)$ describes $SO\left(n, \mathbb{R}\right)\backslash SL\left(n, \mathbb{R}\right)/SL\left(n, \mathbb{Z}\right)$.
\item $\Gamma$ will always refer to an arbitrary unimodular lattice. In $L \left(\mathbb{R}^2\right)$ we can write $\Gamma = \mathbb{Z} a + \mathbb{Z} b$, where $\left(a, b\right)$ is a basis for $\mathbb{R}^2$ with $\det\left(a, b\right)= 1$.
\item We call the standard basis vectors $v =\left(1, 0\right)$ and $w =\left(0, 1\right)$, then we write $\mathbb{Z}^2 = \{ kv + lw: k, l \in \mathbb{Z}\}$.
\item $\Delta$ is a unimodular lattice given by a small perturbation of $\Gamma$'s basis vectors; $$\Delta = \{kv' + l w': k,l \in \mathbb{Z}\}$$ where $\|w -w'\|$ and $\|v-v'\|$ are small. We use $\Delta$ to denote the perturbation of both $\Gamma$ and $\mathbb{Z}^2$, but the context should make it clear what lattice is being perturbed.
\item $\Lambda$ is the unit covolume \emph{hexagonal} lattice. We use the basis $$\Lambda = \left\{k \frac{\sqrt{2}}{3^{1/4}} \,\left(1, \, 0\right)+ l \frac{1}{3^{1/4} \sqrt{2}} \,\left(1,\, \sqrt{3}\right): k,l \in \mathbb{Z}\right\}.$$
The \textit{density} of a lattice refers to the reciprocal of the covolume, meaning that $$\text{density of } \Gamma = \frac{1}{\text{vol} \left(\mathbb{R}^2/\Gamma\right)}$$.
\item Given a lattice $\Gamma =\left(ma + n b\right)\in L \left(\mathbb{R}^2\right)$ and a point $q \in \mathbb{R}^2$, we define $$ A_r\left(\Gamma, q\right)= \{ ma+nb: m, n \in \mathbb{Z}, |ma+nb-q| = r\} $$ to be the set of lattice points exactly distance $r$ from $q$. We will denote it just as $A_r$ when the lattice is understood, and $A_r\left(\Gamma \right)$ to specify the lattice explicitly.
\item For $\Delta =\left(\mathbb{Z} a' + \mathbb{Z} b'\right)$ a fixed small perturbation of $\Gamma$, we define $C_r$ to be the set of perturbations of lattice points in $\Gamma$ which are distance $r$ from $p$; that is $$C_r = \{ ma'+nb': m, n \in \mathbb{Z}, |ma+nb-p| = r\}.$$ The choice of fundamental domain and therefore $p$ is not reflected in this notation; the calculation is independent of this choice.
\end{itemize}
\section{Symmetries} \label{symmetries}
Following \cite{extremal}, we first show that points in lattice $\mathbb{Z}^2$ at a fixed distance $r$ from deep hole $p =\left(\frac{1}{2}, \frac{1}{2}\right)$ occur naturally in quadruples. In the following lemma, let $\mathbb{Z}^2 \in \mathbb{R}^2$ have basis
$$ v = \left(1,0\right)\text{ and } w = \left(0,1\right).$$
The deep hole of the unit square is $p = \frac{v+w}{2} =\left(\frac{1}{2}, \frac{1}{2}\right)$, and we consider $R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$ the rotation matrix by $\frac{\pi}{2}$. This lemma says that for $p$ and \textit{any} lattice point $q$, there is a quadruple of lattice points given by rotations of the vector connecting $p$ to $q$ by $\frac{\pi}{2}$; see Figure \ref{fig:vector:rotate} below.
\begin{lemma} \label{Lemma} For any $q \in \mathbb{R}^2$ such that $p+q \in \mathbb{Z}^2$, the points
$p+q, \, p + Rq, \, p + R^2q, \, p + R^3q$ all lie in $\mathbb{Z}^2$. \end{lemma}
\begin{proof} We explicitly calculate the quantities $p + Rq, \, p + R^2q, \, p + R^3q$, and argue that they are contained in $\mathbb{Z}^2$. If $p+q \in \mathbb{Z}^2$, then $ p+q = kv + lw \text{ for some } k,l\in \mathbb{Z}$ and therefore we can write $$q = \left( k-\frac{1}{2}\right) v + \left( l-\frac{1}{2}\right) w. $$ Using that
$Rv = w$ and $Rw = -v$, we have that
\begin{align*}
p + Rq &= p + R\left(\left(k-\frac{1}{2}\right)v + \left(l-\frac{1}{2}\right)w\right)\\
& = p + \left(k-\frac{1}{2}\right)Rv + \left(l-\frac{1}{2}\right)Rw\\
& = \frac{v+w}{2} + \left(k-\frac{1}{2}\right)w + \left(l-\frac{1}{2}\right)\left(-v \right)\\
&= \left(\frac{1}{2} - \left(l - \frac{1}{2}\right)\right)v + \left(\frac{1}{2} + \left(k-\frac{1}{2}\right)\right)w\\
& = \left(1-l\right)v + kw \in \mathbb{Z}^2 \\
\end{align*}
Similarly,
$ \, p + R^2 q = \left(1-k\right)v + \left(1-l\right)w \in \mathbb{Z}^2$, and
$p + R^3 q = lv + \left(1-k\right)w \in \mathbb{Z}^2.$
\end{proof}
\begin{figure}\caption{Rotation of horizontal basis vector by $R$. }\label{fig:vector:rotate}
\begin{tikzpicture}
\draw[help lines, color=gray!30, dashed](-0.3, -0.3) grid (3.3, 3.3);
\draw[ ->] (1,1)--(2,1) node[below] at (2, 1){q};
\draw[green!80, thick, ->] (1,1)--(1,2) node[left] at (1,2){Rq};
\draw[thick, ->] (1.8,1.3) arc (0:89:.5);
\node[below] at (1,1){p};
\end{tikzpicture}
\end{figure}
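Before proving the theorem, the lemma can also be checked numerically; the following Python sketch (ours, purely illustrative) rotates $q = z - p$ by $R$ for random lattice points $z$ and verifies that each $p + R^i q$ has integer coordinates:
\begin{verbatim}
import random

R = ((0, -1), (1, 0))      # rotation by pi/2: Rv = w, Rw = -v

def rot(q):
    return (R[0][0]*q[0] + R[0][1]*q[1], R[1][0]*q[0] + R[1][1]*q[1])

p = (0.5, 0.5)
rng = random.Random(0)
for _ in range(1000):
    z = (rng.randrange(-50, 51), rng.randrange(-50, 51))   # a point of Z^2
    q = (z[0] - p[0], z[1] - p[1])
    for _ in range(3):
        q = rot(q)
        x, y = p[0] + q[0], p[1] + q[1]
        assert x == int(x) and y == int(y)   # p + R^i q lands in Z^2
print("all rotated images lie in Z^2")
\end{verbatim}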
\section{Proof of Theorem} \label{section3}
We prove our main theorem by explicit computation of derivatives. To prove Theorem~\ref{theorem:theorem}, we need to show that for any lattice $\Delta$ sufficiently close to $\mathbb{Z}^2$,
\begin{equation}\label{eq_dist2b}
\sum_{\delta \in C_r} \| p - \delta\| - \sum_{z \in A_r} \| p - z\| \gtrsim r \, |A_r| \, d\left(\Delta, \mathbb{Z}^2\right)^2.
\end{equation}
We note that ``sufficiently close'' is with respect to the Euclidean metric on the space of lattices defined in Section \ref{introduction}. The cases of linear and squared distance are addressed separately; taken together, they yield the result.
\subsection{Squared Distance} We now prove Theorem~\ref{theorem:theorem}. First, note that the question we are considering is rotationally invariant. Any unimodular lattice in $\mathbb{R}^2$ can be rotated so that one of its basis vectors is horizontal; we fix this as a convention. Since $\Gamma$ has covolume $1$, we can express a basis for it as
$$ v_1\left(x,y\right)= y^{\frac{-1}{2}}\left(1,0\right) \hspace{1cm} \text{ and } \hspace{1cm} w_1\left(x,y\right)= y^{\frac{-1}{2}}\left(x,y\right).$$
To check their understanding, a reader could verify that $\mathbb{Z}^2$ corresponds to the parameter choice $x=0, y=1$, so that $\mathbb{Z}^2$ is generated by $v = \left(1,0\right), \, w = \left(0,1\right)$. For $\mathbb{Z}^2$ it is standard to consider the fundamental domain $[0, 1]^2$ with deep hole $p = \left(\frac{1}{2}, \frac{1}{2}\right)$. An arbitrary lattice point $z \in \mathbb{Z}^2$ is given by the expression $z = kv + lw$ for $k,l \in \mathbb{Z}$. It naturally has three distinct associated points given by rotations by $\frac{\pi}{2}$ around $p$. These associated points have the following expression:
\begin{align*}
&z' = p + Rq = p + \begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix}\left(z-p\right),\\
&z'' = p + \begin{bmatrix} -1 & 0 \\ 0 & -1\end{bmatrix}\left(z-p\right), \\
&z''' = p + \begin{bmatrix} 0 & 1 \\ -1 & 0\end{bmatrix}\left(z-p\right).\\
\end{align*}
Our naming convention follows \cite{extremal}. Now, take $z \in \mathbb{Z}^2$ and its associated quadruple $\{z, z', z'', z'''\}$. We will investigate this quadruple under perturbation. Let $\Delta$ be a perturbation of $\mathbb{Z}^2$ and $\left(\delta, \delta', \delta'', \delta'''\right)$ be the perturbation in $\Delta$ of our quadruple $\left(z, z', z'', z'''\right)$ in $\mathbb{Z}^2$. The previous lemma implies that the tuples $\left(\delta, \delta', \delta'', \delta'''\right)$ of perturbed lattice points are of the form:
\begin{align*}
&\delta = kv_1 + lw_1, \\
& \delta' = \left(1-l\right)v_1 + kw_1, \\
&\delta'' = \left(1-k\right)v_1 + \left(1-l\right)w_1 \\
& \delta''' = lv_1 + \left(1-k\right)w_1.\\
\end{align*}
\subsubsection{Defining $f$}
We will show that, in total, the squared distance of a perturbed quadruple to our fixed $p$ strictly increases. That is, if $\Delta$ has parameters $x$ and $y$ as discussed above, we want to understand the behavior of the function $f(x,y)$ given by
$$ f\left(x,y\right)= \| \delta - p\|^2 + \| \delta' - p\|^2 + \| \delta'' - p\|^2 + \| \delta''' - p\|^2. $$
There are many ways to express and simplify $f$. We like the form
\begin{align*}
f\left(x,y\right)&= \left(\left(\frac{-1}{2} + \frac{k}{\sqrt{y}} + \frac{lx}{\sqrt{y}}\right)^2 + \left(\frac{1}{2} + \left(-1+k\right)\sqrt{y}\right)^2 + \left(\frac{-1}{2} + k\sqrt{y}\right)^2 \right)4y + \left(-2 + 2k + 2\left(-1 + l\right)x + \sqrt{y}\right)^2 \\ & \hspace{.5cm} + \left(\frac{1}{2} + \left(-1 + l\right)\sqrt{y}\right)^2 + \left(\frac{-1}{2} + l\sqrt{y}\right)^2 + \left(-2l + 2\left(-1 + k\right)x + \sqrt{y}\right)^2 + \left(-2 + 2l-2kx + \sqrt{y}\right)^2. \\
\end{align*}
\subsubsection{Partial derivatives of $f$}
We show that $\mathbb{Z}^2$ is a critical point in $L\left(\mathbb{R}^2\right)$ by directly computing partial derivatives of $f$.
\begin{align*}
\partial_x f &=
\frac{\left(k-1\right)\left(-2 l+2 \left(k-1\right)x+\sqrt{y}\right)}{y}
+\frac{\left(l-1\right)\left(-2+2 k+2\left(l-1\right)
x +\sqrt{y}\right)}{y}\\ &-\frac{k \left(-2+2 l-2 k x+\sqrt{y}\right)}{y} +
\frac{2 l \left(-\frac{1}{2}+\frac{k}{\sqrt{y}}+\frac{l x}{\sqrt{y}}\right)}{\sqrt{y}} \\
\end{align*}
\begin{align*}
\partial_y f &=
2 \left(-\frac{k}{2 y^{3/2}}-\frac{l x}{2 y^{3/2}}\right)\left(-\frac{1}{2}+\frac{k}{\sqrt{y}}+\frac{l x}{\sqrt{y}}\right)-\frac{\left(-2
l+2\left(-1+k\right)x+\sqrt{y}\right)^2}{4 y^2} -\frac{\left(-2+2 l-2 k x+\sqrt{y}\right)^2}{4 y^2} \\
&-\frac{\left(-2+2 k+2\left(-1+l\right)x+\sqrt{y}\right)^2}{4 y^2}
+\frac{-2
l+2\left(-1+k\right)x+\sqrt{y}}{4 y^{3/2}} + \frac{-2+2 l-2 k x+\sqrt{y}}{4 y^{3/2}}+\frac{-2+2 k+2\left(-1+l\right)x+\sqrt{y}}{4 y^{3/2}}
\\
&+ \frac{\left(-1+k\right)\left(\frac{1}{2}+\left(-1+k\right)
\sqrt{y}\right)}{\sqrt{y}} + \frac{k \left(-\frac{1}{2}+k \sqrt{y}\right)}{\sqrt{y}} + \frac{\left(-1+l\right)\left(\frac{1}{2}+\left(-1+l\right)\sqrt{y}\right)}{\sqrt{y}}+\frac{l
\left(-\frac{1}{2}+l \sqrt{y}\right)}{\sqrt{y}}. \\
\end{align*}
We evaluate both partials at $p$, or $x=0, y=1$, to show that they are identically $0$, independent of the values of $k$ and $l$.
\begin{align*}
\partial_x\left(0,1\right) &=
\left(-1 + k\right)\left(1 - 2 l\right)+\left(-1 + 2 k\right)\left(-1 + l\right)+ 2\left(-\left(\frac{1}{2}\right)+ k\right)l - k \left(-1 + 2 l\right)\\
& = \left(-1 + k + 2l - 2kl\right)+ \left(1-2k -l +2kl\right)+ \left(k-l\right)\\
& = 0 \\
\end{align*}
\begin{align*}
\partial_y f\left(0,1\right) &=
\left(-1+k\right)\left(-\frac{1}{2}+k\right)
+\frac{1}{4}\left(-1+2 k\right)-\frac{1}{4}\left(-1+2 k\right)^2+\frac{1}{4}\left(1-2 l\right)
-\frac{1}{4}\left(1-2 l\right)^2 \\
&+\left(-1+l\right)\left(-\frac{1}{2}+l\right)+
-\frac{l}{2}-l^2 +\frac{1}{4}\left(-1+2 l\right)-\frac{1}{4}\left(-1+2 l\right)^2\\
& = \left(\frac{1}{2}-\frac{3 k}{2}+k^2\right)+ \left(-\frac{1}{2}+\frac{3 k}{2}-k^2\right)+ \left(\frac{l}{2}-l^2\right)+ \left(\frac{1}{2}-\frac{3 l}{2}+l^2\right)+ \left(-\frac{1}{2}+l\right)\\
& = 0 \\
\end{align*}
\noindent{}{We} have now shown $\mathbb{Z}^2$ is a critical point with respect to the squared distance metric $f$ on the space of lattices.
\subsubsection{The Hessian of $f$}
To understand the nature of this critical point, we compute the Hessian for $f$ at $x=0, y=1$. The Hessian has the generic form $$H\left(k,l\right)= \begin{bmatrix} \partial_{xx} & \partial_{xy} \\ \partial_{yx} & \partial_{yy} \end{bmatrix}. $$
In our case, we have $$ H\left(k,l\right)= D^2f|_{x=0, y=1} = \begin{bmatrix} h_1\left(k,l\right)& -1 \\ -1 & h_3\left(k,l\right)\end{bmatrix}$$
where $h_1\left(k,l\right)= 4 \left(1-k+k^2-l+l^2\right)$ and $h_3\left(k,l\right)= 3-4 k+4 k^2-4 l+4 l^2$.
\noindent{}{It} is important to note here a key difference between $\mathbb{Z}^2$ and $\Lambda$, the hexagonal lattice of \cite{extremal}. In $\Lambda$'s Hessian, $h_3 = h_1$; here, $h_3 = h_1 - 1$. The characteristic polynomial of $H(k,l)$ is $$P\left(\lambda\right)= -1+\left(3-4 k+4 k^2-4 l+4 l^2-\lambda\right)\left(4\left(1-k+k^2-l+l^2\right)-\lambda\right)$$ with roots
\begin{align*}
\lambda &= \frac{1}{2} \left(7\pm \sqrt{5}-8 k+8 k^2-8 l+8 l^2\right)\\
& = \frac{1}{2}\left( 7 \pm \sqrt{5} + 2h_1\left(k,l\right)-8\right)\\
& = \frac{1}{2}\left(-1 \pm \sqrt{5} + 2h_1\left(k,l\right)\right).\\
\end{align*}
Let $\lambda_{min}\left(k,l\right)$ denote the smaller of the two eigenvalues, $\lambda_{min} = h_1\left(k,l\right)+ \frac{-1 - \sqrt{5}}{2}$; we want to minimize this with respect to $k$ and $l$. For all values of $\left(k,l\right)\in \mathbb{Z}^2$, $h_1 \geq 4$ with equality achieved at $\left(k,l\right)\in \{\left(0,0\right), \left(0,1\right), \left(1,0\right), \left(1,1\right)\}$; so, our minimum is $\lambda_{min} = 4 - \frac{1+\sqrt{5}}{2} = \frac{7-\sqrt{5}}{2}$.
We note that the same conditions on $k,l$ hold for the larger root, so $\lambda_{max}$ is $h_1\left(k,l\right)+ \frac{\sqrt{5} -1}{2}$.
Then, our growth is bounded away from $0$.
Note that the smallest nonzero radius realized by lattice points in the closure of the fundamental domain is $r = \frac{1}{\sqrt{2}}$.
Since $\lambda_{min} = \frac{7-\sqrt{5}}{2} \approx 2.382$, we have that $\lambda_{min} > \frac{1}{2} = r^2$. \\
Lastly, we consider the asymptotic behavior of $\lambda_{min}$. We note that both $h_1$ and $h_3$ are quadratic forms in $k$ and $l$ which grow without bound, while the off-diagonal terms are fixed at $-1$; thus $\lambda_{min}\left(k,l\right)\to\infty$ as $|\left(k,l\right)|\to\infty$, and $\lambda_{min} \geq \frac{7-\sqrt{5}}{2}$ holds for all integer pairs.
We now have an explicit lower bound for growth under perturbation: $\frac{7-\sqrt{5}}{2} - \sqrt{\frac{1}{2}} \simeq 1.6749\dots$. \\
This implies the result in the case of squared distances:
\begin{align*}
& \sum_{\delta \in C_r} \| p - \delta\|^2 - \sum_{z \in A_r} \| p - z\|^2 \gtrsim r^2 \, |A_r| \, d\left(\Delta, \mathbb{Z}^2\right)^2. \\
\end{align*}
\qed
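The eigenvalue bound above is easy to confirm numerically. The sketch below (ours) scans a window of integer pairs $(k,l)$ and verifies that the smallest eigenvalue of $H(k,l)$ equals $\frac{7-\sqrt{5}}{2}\approx 2.382$, attained at the four pairs listed above:
\begin{verbatim}
import numpy as np

def hessian(k, l):
    h1 = 4.0 * (1 - k + k**2 - l + l**2)
    return np.array([[h1, -1.0], [-1.0, h1 - 1.0]])   # h3 = h1 - 1

best = min((np.linalg.eigvalsh(hessian(k, l))[0], (k, l))
           for k in range(-5, 6) for l in range(-5, 6))
print(best)                        # smallest eigenvalue and a minimizing pair
print((7 - np.sqrt(5)) / 2)        # ~ 2.3820
\end{verbatim}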
\subsection{Distance} The argument for linear distance follows the one preceding for squared distances, but here things look a little more complicated, with the square root being taken over each summed term of $f\left(x,y\right)$. Beginning with $$ f\left(x,y\right)= \| \gamma - p\| + \| \gamma' - p\| + \| \gamma'' - p\| + \| \gamma''' - p\|, $$
we substitute the points $\lambda$, $\lambda'$, $\lambda''$, $\lambda'''$ in $\mathbb{Z}^2$ for $\gamma$, $\gamma'$, $\gamma''$, $\gamma'''$, together with the value of $p$, and re-express $f$ as the following:
\begin{align*}
f\left(x,y\right)&= \| \left(\frac{2k + 2lx - \sqrt{y}}{2\sqrt{y}}, \frac{2l\sqrt{y} -1}{2}\right)\| \\
&+ \|\left(\frac{2\left(1-l\right)+ 2kx - \sqrt{y}}{2 \sqrt{y}}, \frac{2k\sqrt{y} - 1}{2} \right)\|\\
& + \|\left(\frac{2\left(1-k\right)+ 2x\left(1-l\right)- \sqrt{y}}{2 \sqrt{y}}, \frac{2\left(1-l\right)\sqrt{y} - 1}{2} \right)\| \\
& + \|\left(\frac{2l + 2\left(1-k\right)x - \sqrt{y}}{2 \sqrt{y}}, \frac{2\left(1-k\right)\sqrt{y} - 1}{2} \right)\|. \\
\end{align*}
The next lines are the result of expanding the norm on each term.
\begin{align*}
f\left(x,y\right) &=
\left(\sqrt{\left(-\frac{1}{2} +\frac{k}{\sqrt{y}}+\frac{l x}{\sqrt{y}}\right)^2+\left(-\frac{1}{2}+l \sqrt{y}\right)^2}\right)\\
&+ \left(\sqrt{\left(-\frac{1}{2}+k \sqrt{y}\right)^2+\frac{\left(-2+2 l-2 k x+\sqrt{y}\right)^2}{4 y}}\right) \\
&+ \left(\sqrt{\left(\frac{1}{2}+\left(-1+l\right)\sqrt{y}\right)^2+\frac{\left(-2+2 k+2\left(-1+l\right)x+\sqrt{y}\right)^2}{4 y}}\right)\\
&+ \left(\sqrt{\left(\frac{1}{2}+\left(-1+k\right)\sqrt{y}\right)^2+\frac{\left(-2 l+2\left(-1+k\right)x+\sqrt{y}\right)^2}{4 y}}\right).
\end{align*}
\subsubsection{Partial Derivatives}
\noindent{The} partial $\partial_x f$ comes out to be
\begin{align*}
\partial_x f &=
\frac{\left(-1+k\right)\left(-2 l+2\left(-1+k\right)x+\sqrt{y}\right)}{2 \sqrt{\left(\frac{1}{2}+\left(-1+k\right)\sqrt{y}\right)^2 + \frac{\left(-2 l+2\left(-1+k\right)x+\sqrt{y}\right)^2}{4
y}} y} - \frac{k \left(-2+2 l-2 k x+\sqrt{y}\right)}{2 \sqrt{\left(-\frac{1}{2}+k \sqrt{y}\right)^2 + \frac{\left(-2+2 l-2 k x+\sqrt{y}\right)^2}{4 y}}
y}\\
&+ \frac{\left(-1+l\right)\left(-2+2 k+2\left(-1+l\right)x+\sqrt{y}\right)}{2 \sqrt{\left(\frac{1}{2}+\left(-1+l\right)\sqrt{y}\right)^2+\frac{\left(-2+2 k+2\left(-1+l\right)x+\sqrt{y}\right)^2}{4
y}} y} +\frac{l \left(-\frac{1}{2}+\frac{k}{\sqrt{y}}+\frac{l x}{\sqrt{y}}\right)}{\sqrt{\left(-\frac{1}{2}+\frac{k}{\sqrt{y}}+\frac{l x}{\sqrt{y}}\right)^2+\left(-\frac{1}{2}+l
\sqrt{y}\right)^2} \sqrt{y}}.
\end{align*}
Evaluated at $\left(0,1\right)$, we see that
\begin{align*}
\partial_x f\left(0,1\right)&=
\frac{\left(-1+k\right)\left(-2 l+1\right)}{2 \sqrt{\left(\frac{1}{2}+\left(-1+k\right)\right)^2 + \frac{\left(-2 l+1\right)^2}{4}}} - \frac{k \left(-2+2 l+1\right)}{2 \sqrt{\left(-\frac{1}{2}+k \right)^2 + \frac{\left(-2+2 l+1\right)^2}{4}}}\\
& + \frac{\left(-1+l\right)\left(-2+2k+1\right)}{2 \sqrt{\left(\frac{1}{2}+\left(-1+l\right)\right)^2+\frac{\left(-2+2k+1\right)^2}{4}}} +\frac{l\left(-\frac{1}{2}+k\right)}{\sqrt{\left(-\frac{1}{2}+k\right)^2+\left(-\frac{1}{2}+l\right)^2}} \\
& = 0. \\
\end{align*}
Next, we compute the partial of $f$ with respect to $y$, which gives us
\begin{align*}
\partial_y f\left(0,1\right)& =
\frac{-\frac{\left(-2 l+2\left(-1+k\right)x+\sqrt{y}\right)^2}{4 y^2}+\frac{-2 l+2\left(-1+k\right)x+\sqrt{y}}{4 y^{3/2}}+\frac{\left(-1+k\right)\left(\frac{1}{2}+\left(-1+k\right)
\sqrt{y}\right)}{\sqrt{y}}}{2 \sqrt{\left(\frac{1}{2}+\left(-1+k\right)\sqrt{y}\right)^2+\frac{\left(-2 l+2\left(-1+k\right)x+\sqrt{y}\right)^2}{4 y}}} \\
&+ \frac{-\frac{\left(-2+2
l-2 k x+\sqrt{y}\right)^2}{4 y^2}+\frac{-2+2 l-2 k x+\sqrt{y}}{4 y^{3/2}}+\frac{k \left(-\frac{1}{2}+k \sqrt{y}\right)}{\sqrt{y}}}{2 \sqrt{\left(-\frac{1}{2}+k
\sqrt{y}\right)^2+\frac{\left(-2+2 l-2 k x+\sqrt{y}\right)^2}{4 y}}} + \frac{2 \left(-\frac{k}{2 y^{3/2}}-\frac{l x}{2 y^{3/2}}\right)\left(-\frac{1}{2}+\frac{k}{\sqrt{y}}+\frac{l
x}{\sqrt{y}}\right)+\frac{l \left(-\frac{1}{2}+l \sqrt{y}\right)}{\sqrt{y}}}{2 \sqrt{\left(-\frac{1}{2}+\frac{k}{\sqrt{y}}+\frac{l x}{\sqrt{y}}\right)^2+\left(-\frac{1}{2}+l
\sqrt{y}\right)^2}} \\
& + \frac{-\frac{\left(-2+2 k+2\left(-1+l\right)x+\sqrt{y}\right)^2}{4 y^2}+\frac{-2+2 k+2
\left(-1+l\right)x+\sqrt{y}}{4 y^{3/2}}+\frac{\left(-1+l\right)\left(\frac{1}{2}+\left(-1+l\right)\sqrt{y}\right)}{\sqrt{y}}}{2 \sqrt{\left(\frac{1}{2}+\left(-1+l\right)\sqrt{y}\right)^2+\frac{\left(-2+2
k+2\left(-1+l\right)x+\sqrt{y}\right)^2}{4 y}}}.
\end{align*}
Evaluating at $\left(0,1\right)$ we again get $0$:
\begin{align*}
\partial_y f\left(0,1\right)&=
\frac{-\frac{\left(-2 l+1\right)^2}{4}+\frac{-2 l+1}{4}+\left(-1+k\right)\left(\frac{1}{2}+\left(-1+k\right)\right)}{2 \sqrt{\left(\frac{1}{2}+\left(-1+k\right)\right)^2+\frac{\left(-2 l+1\right)^2}{4}}}
+ \frac{-\frac{\left(-2+2
l+1\right)^2}{4}+\frac{-2+2 l+1}{4}+\frac{k \left(-\frac{1}{2}+k \right)}{1}}{2 \sqrt{\left(-\frac{1}{2}+k
\right)^2+\frac{\left(-2+2 l+1\right)^2}{4}}} \\
& + \frac{-\frac{\left(-2+2 k+1\right)^2}{4}+\frac{-2+2 k+1}{4 }+\left(-1+l\right)\left(\frac{1}{2}+\left(-1+l\right)\right)}{2 \sqrt{\left(\frac{1}{2}+\left(-1+l\right)\right)^2+\frac{\left(-2+2
k+1\right)^2}{4 }}} + \frac{2 \left(-\frac{k}{2}\right)\left(-\frac{1}{2}+k\right)+\frac{l \left(-\frac{1}{2}+l\right)}{1}}{2 \sqrt{\left(-\frac{1}{2}+\frac{k}{1}\right)^2+\left(-\frac{1}{2}+l\right)^2}} \\
& = 0. \hspace{4cm}
\end{align*}
\noindent{}{Therefore} $\mathbb{Z}^2$ is a critical point for $f$. For our next step, we give the mixed partial $\partial_{xy} f|_{x=0, y=1}$, which evaluates to: $$\frac{-1+k^2\left(10-24 l\right)+6 l-14 l^2+8 l^3+8 k^3\left(-1+2 l\right)-2 k \left(1-12 l^2+8 l^3\right)}{2 \sqrt{2} \left(1-2 k+2 k^2-2 l+2 l^2\right)^{3/2}}.$$
\subsubsection{The Hessian}
To establish $\mathbb{Z}^2$ as a local minima or maxima, we form the Hessian.
$$ H\left(k,l\right)= D^2f|_{x=0, y=1} = \begin{bmatrix} h_1\left(k,l\right)& h_2\left(k,l\right)\\ h_2\left(k,l\right)& h_3\left(k,l\right)\end{bmatrix},$$
where
\begin{align*}
h_1\left(k,l\right)&= \frac{\sqrt{2}\left(1 - 3 k + 7 k^2 - 8 k^3 + 4 k^4 - 3 l + 7 l^2 - 8 l^3 +
4 l^4\right)}{\left(1 - 2 k + 2 k^2 - 2 l + 2 l^2\right)^{\frac{3}{2}}} \\
h_2\left(k,l\right)&= \frac{-1 + k^2\left(10 - 24 l\right)+ 6 l - 14 l^2 + 8 l^3 + 8 k^3\left(-1 + 2 l\right)-
2 k\left(1 - 12 l^2 + 8 l^3\right)}{2 \sqrt{2}\left(1 - 2 k + 2 k^2 - 2 l +
2 l^2\right)^{\frac{3}{2}}} \\
\text{ and \,\ } h_3\left(k,l\right)&= \frac{5 - 16 k^3 + 8 k^4 - 18 l + 26 l^2 - 16 l^3 + 8 l^4 -
6 k\left(3 - 8 l + 8 l^2\right)+
k^2\left(26 - 48 l + 48 l^2\right)}{2 \sqrt{2}\left(1 - 2 k + 2 k^2 - 2 l +
2 l^2\right)^{\frac{3}{2}}}. \\
\end{align*}
\noindent{}{The} determinant of $H\left(k,l\right)$ is
\begin{align*}
\label{det}
& \frac{19 - 192 k^5 + 64 k^6 - 82 l + 194 l^2 - 304 l^3 + 320 l^4 -
192 l^5 + 64 l^6 + 64 k^4\left(5 - 3 l + 3 l^2\right)}{8\left(1 - 2 k +
2 k^2 - 2 l + 2 l^2\right)^2}\\
& - \frac{16 k^3\left(21 - 26 l + 24 l^2\right)+
2 k^2\left(121 - 264 l + 336 l^2 - 192 l^3 + 96 l^4\right)-
2 k\left(49 - 144 l + 216 l^2 - 176 l^3 + 96 l^4\right))}{8\left(1 - 2 k +
2 k^2 - 2 l + 2 l^2\right)^2}. \\
\end{align*}
For all values $k, \, l \in \mathbb{Z}$, $\min_{\left(k,l\right)}\det\left(H\left(k,l\right)\right) = \frac{19}{8}$, and this minimum is achieved at the pairs $\left(0,0\right), \left(0,1\right), \left(1,0\right)$ and $\left(1,1\right)$. We note that $h_1>0$ for all values of $k,l$; its minimum is achieved at one of the pairs minimizing the determinant, for instance $k=1, l=1$.
Since $det\left(H\left(k,l\right)\right)>0$ and $h_1>0$ for all pairs $\left(k,l\right)$, we conclude that our critical point $\mathbb{Z}^2 \in L\left(\mathbb{R}^2\right)$ is a local minimum! To establish a strictly positive lower bound on the growth of distances from lattice points to $p$ as their distance from $\mathbb{Z}^2$ increases, we give the following computation. \\
The characteristic equation $char\left(H\left(k,l\right)\right)$ is $\left(h_1 - z\right)\left(h_3 - z\right)- h_2^2 = 0$, which expanded has a frankly hilarious form taking 10 printed lines, and so we leave it in short form:
$$z= \frac{1}{2}\left(h_1 + h_3 \pm \sqrt{h_1^2 + 4 h_2^2 - 2h_1h_3 + h_3^2}\right).$$
With respect to $k$ and $l$, we claim these roots are always strictly positive. To see this, we first minimize over real values $\left( k,l\right) \in \mathbb{R}^2$ to identify candidates for $\min_{\left(k,l\right) \in \mathbb{Z}^2} \left( \frac{1}{2}\left(h_1 + h_3 \pm \sqrt{h_1^2 + 4 h_2^2 - 2h_1h_3 + h_3^2}\right) \right)$. In Table \ref{table:roots} below, we give approximate values for the roots, again emphasizing that these are the \textit{real} roots of $char\left(H\left(k,l\right)\right)$.
\vspace{.5cm}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Root & Decimal Approx.\\
\hline
$\frac{1}{2}\left(h_1 + h_3 + \sqrt{h_1^2 + 4 h_2^2 - 2h_1h_3 + h_3^2}\right)$ & $z=1.6231$ at $k=1.1530, l=0.8641$ \\
\hline
$\frac{1}{2}\left(h_1 + h_3 - \sqrt{h_1^2 + 4 h_2^2 - 2h_1h_3 + h_3^2}\right)$ & $z=0.618034$ at $k=0.584444, l=0.797255$ \\
\hline
\end{tabular}
\label{table:roots}
\end{center}
\vspace{.5cm}
The smaller of the two eigenvalues is $z= \frac{1}{2} \left(h_1 + h_3 - \sqrt{h_1^2 + 4 h_2^2 - 2h_1h_3 + h_3^2}\right)$. We call the real-valued minimizing pair $\tilde{z} =\left(0.584444, 0.797255\right)$. To find the minimizing integer pair, we identified candidate tuples by testing all possible pairs with entries given by the floor and ceiling of the coordinates of $\tilde{z}$: $\left(0,0 \right), \left( 0,1\right), \left( 1,0\right), \left( 1,1\right)$. Both $\left(0,0\right)$ and $ \left(1,1\right)$ minimize $z$ over the integers. Then, for $\left( k,l\right) \in \mathbb{Z}^2$ we have that the smallest eigenvalue is $z = \frac{9-\sqrt{5}}{4\sqrt{2}}$, roughly $1.1957$.
Thus, we have a positive bound for the smallest growth in total distance of lattice points from $p$ under small perturbation. This eigenvalue is undefined at $\left(0.5, 0.5\right)$, but that is not an integer pair and so does not affect our computation.
\qed
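The integer minimization can be checked directly; the following sketch (ours) implements $h_1, h_2, h_3$ exactly as printed above and scans a window of integer pairs, recovering the minimum $\frac{9-\sqrt{5}}{4\sqrt{2}}\approx 1.1957$ at $(0,0)$ and $(1,1)$:
\begin{verbatim}
import numpy as np

def H(k, l):
    d = (1 - 2*k + 2*k**2 - 2*l + 2*l**2) ** 1.5
    h1 = np.sqrt(2) * (1 - 3*k + 7*k**2 - 8*k**3 + 4*k**4
                       - 3*l + 7*l**2 - 8*l**3 + 4*l**4) / d
    h2 = (-1 + k**2*(10 - 24*l) + 6*l - 14*l**2 + 8*l**3
          + 8*k**3*(-1 + 2*l) - 2*k*(1 - 12*l**2 + 8*l**3)) / (2*np.sqrt(2)*d)
    h3 = (5 - 16*k**3 + 8*k**4 - 18*l + 26*l**2 - 16*l**3 + 8*l**4
          - 6*k*(3 - 8*l + 8*l**2) + k**2*(26 - 48*l + 48*l**2)) / (2*np.sqrt(2)*d)
    return np.array([[h1, h2], [h2, h3]])

best = min((np.linalg.eigvalsh(H(k, l))[0], (k, l))
           for k in range(-5, 6) for l in range(-5, 6))
print(best)                                  # minimum and a minimizing pair
print((9 - np.sqrt(5)) / (4 * np.sqrt(2)))   # ~ 1.1957
\end{verbatim}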
\subsubsection{Convex Functions}
\begin{proof}
It remains to study the case
\begin{equation}
\sum_{\delta \in C_r} \phi \left(\| p - \delta\|\right)- \sum_{\lambda \in A_r} \phi \left(\| p - \lambda\| \right).
\end{equation}
where $\phi$ is a convex function.
For $\lambda \in A_r$ and its corresponding point under perturbation, $\delta \in C_r$, consider the quantity
\begin{equation}
\| p - \delta\| = \|p - \lambda \| + \varepsilon_{\delta}, \qquad \varepsilon_{\delta} \in \mathbb{R}.
\end{equation}
Since $\| p - \lambda\| = r$, this gives $ \| p - \delta\| = r + \varepsilon_{\delta}$. Summing, we see
\begin{equation}
\sum_{\delta \in C_r}\left(\| p - \delta\|\right)- \sum_{\lambda \in A_r}\left(\| p - \lambda\|\right)= \sum_{\delta \in C_r}\left(\varepsilon_{\delta}\right).
\end{equation}
Then
\begin{equation}
\sum_{\delta \in C_r}\left(\|p-\delta\|\right)= \sum_{\lambda \in A_r}\left(\|p-\lambda\|\right)+ \sum \varepsilon_{\delta}
= \sum_{\delta \in C_r}\left(r + \varepsilon_{\delta}\right)
\end{equation}
A Taylor expansion of $\sum_{\delta \in C_r} \phi\left(r+ \varepsilon_{\delta}\right)$ around $\varepsilon_{\delta}= 0$ shows that
\begin{align*}
\sum_{\delta \in C_r} \phi \left(\| p - \delta\|\right)- \sum_{\lambda \in A_r} \phi \left(\| p - \lambda\|\right)& = \sum_{\delta \in C_r}\left[\phi\left(r + \varepsilon_{\delta}\right)- \phi\left(r\right)\right]\\
& = \phi'\left(r\right)\sum_{\delta \in C_r} {\varepsilon_{\delta}} + \frac{\phi''\left(r \right)}{2} \sum_{\delta \in C_r}{\varepsilon_{\delta}^2} + o\left(d\left(\Delta, \mathbb{Z}^2 \right)^2\right),
\end{align*}
where the error term is allowed to depend on $r$ and $A_r$. Our first result gives us a bound for $r \, |A_r| \, d\left(\Delta, \mathbb{Z}^2 \right)^2$, so we have
\begin{equation}
\sum_{\delta \in C_r} {\varepsilon_{\delta}} \gtrsim r \, |A_r| \, d\left(\Delta, \mathbb{Z}^2 \right)^2.
\end{equation}
Thus, we have shown that for a convex function $\phi$, \begin{equation}
\sum_{\delta \in C_r} \phi \left(\| p - \delta\|\right)- \sum_{\lambda \in A_r} \phi \left(\| p - \lambda\|\right)\gtrsim r \, \phi'\left(r\right)\, |A_r| \, d\left(\Delta, \mathbb{Z}^2 \right)^2.
\end{equation}
\end{proof}
\newpage
\section{Further research}
\label{sect4}
\subsection{Higher dimensions}
This result gives promising indications for generalization. In particular, we conjecture that the $3$-dimensional analog of the result will hold for some lattices which are optimal for sphere packing. Our main obstacle in this endeavor is the growth of the dimension of the space of lattices. We construct $L\left(\mathbb{R}^n\right)$ as $SL\left(n, \mathbb{R}\right)/ SL\left(n, \mathbb{Z}\right)$, a quotient space with dimension $n^2-1$. The space $L\left(\mathbb{R}^2\right)$ considered above has dimension $3$, but if we consider lattices only up to rotation, then the quotient space is $2$ dimensional. The dimension of $L\left(\mathbb{R}^3\right)$ is $8$; even if we quotient by rotations again, the resulting space is $5$-dimensional.
\subsection{Connections to Sphere Packings}
Given a lattice $\Gamma$, we can associate a \textit{sphere packing}
$\mathcal{B}$ by putting spheres of the same radius around each lattice point so that the resulting spheres are mutually tangent. Informally, optimal sphere packings in Euclidean spaces are arrangements of (disjoint)
spheres of the same size which cover as much of the space as possible. More precisely, let $B_r\left(x \right)$ denote a Euclidean ball of radius $r$ around the point $x$.
We define $B_r\left(x, \Lambda \right): = B_r\left(x\right) \cap \mathcal{B}$. Then the ratio of the volumes $$ \frac{B_r\left(x, \Lambda \right)}{B_r\left(x \right)}$$ is called the \textit{density} of the packing. An \textit{optimal} packing maximizes $$r_{\mathcal{B}} = \lim_{r \rightarrow \infty} \frac{B_r\left(x, \Lambda \right)}{B_r\left(x \right)}.$$
The sphere packings associated to the square and hexagonal lattices in $\mathbb{R}^2$ are critical points for this notion of density, and, as we showed above, these lattices are critical points in the space of lattices for our problem of studying distances to deep holes. It is natural to ask whether those lattices in $L \left(\mathbb{R}^n \right)$ which are associated to optimal sphere packings are also extremal in our sense. Generally, we conjecture that any unimodular lattice which is also an optimal sphere packing in $\mathbb{R}^n$ will exhibit this extremal property.
\subsection{Other point sets} There are other naturally occurring families of point sets in Euclidean spaces, arising from various geometric and dynamical constructions. Examples include sets of \emph{holonomy vectors of saddle connections on translation surfaces} \cite{holonomy}, and \emph{cut-and-project quasicrystals} \cite{quasi}. In both examples there are versions of the question we have considered above about \emph{deep holes} in these point sets; however, as we saw with the growth in dimension of $L \left(\mathbb{R}^n \right)$, understanding optimal configurations is a challenging question due to the higher-dimensional nature of the associated spaces of configurations. An additional consideration would be the lack of an obvious additive structure. Intuition may be gained from first examining examples like the sets of saddle connections associated to Veech surfaces \cite{holonomy} and well-known tilings \cite{tilings}, like the Penrose tiling \cite{penrose}.
\newpage
\printbibliography
\end{document}
\section{Introduction} \label{sec:intro}
There is growing evidence recently for supercritical (or super-Eddington) accretion objects
(hereafter, super-Eddington accretors) in the Universe. Super-Eddington
accretors are very powerful engines and so play essential roles in
various astrophysical phenomena (e.g., producing high-energy emission
and/or launching relativistic baryon jets). They can also give large
impacts on their environments through intense radiation and massive
outflow, thereby giving rise to interesting activities (e.g., creating
huge ionized nebulae). It is thus worthwhile to study the detailed
processes associated with super-Eddington accretors from various
viewpoints.
One of the most promising candidates for the super-Eddington accretors
is ULXs, compact Ultraluminous X-ray sources, which were successively
discovered in nearby active galaxies
\citep{1989ApJ...347..127F,2011ApJS..192...10L,2011MNRAS.416.1844W}.
The ULXs are off-nuclear point sources producing
very large X-ray luminosity, $L_{\rm x} > 10^{39}$ erg s$^{-1}$, far
exceeding the Eddington limit ($L_{\rm Edd}$) of a stellar mass black
hole.
There are two major
scenarios so far proposed and discussed to explain their nature: (1)
sub-Eddington accretors harboring an intermediate mass black hole (IMBH)
with mass exceeding $100 M_\odot$ \citep[e.g.][]{2000ApJ...535..632M,
2004ApJ...614L.117M}, and (2) super-Eddington accretors harboring a
stellar mass black hole with super-Eddington rates with ${\dot M} \gg L_{\rm Edd}/c^2$
\citep[e.g.][]{2001ApJ...549L..77W,2001ApJ...552L.109K,2007MNRAS.377.1187P}. Quite
recently, one very
convincing piece of evidence in favor of the latter scenario has been
reported; that is the discovery of pulses in one of the ULXs M82 X-2
\citep{2014Natur.514..202B}. This discovery has established that at least
some ULXs are super-Eddington accretors
\citep[ULX Pulsars, see][for the discovery of other
cases]{2016ApJ...831L..14F,2017Sci...355..817I,2017MNRAS.466L..48I}.
The ULXs are not the only candidates for super-Eddington accretors,
however; there are actually plenty of other objects known to date that
are suspected to host supercritical accretion flow. One good example is
ULSs, Ultraluminous supersoft sources, which have similarly high X-ray
luminosities but which exhibit much softer X-ray spectra with typical
photon energy of $\sim 0.1$ keV
\citep[e.g.,][]{2003ApJ...592..884D,2004ApJ...617L..49K}. These features
can be understood, if one observes
super-Eddington accretors from nearly edge-on direction
\citep{2016MNRAS.456.1859U,2016ApJ...818L...4G,2017PASJ..tmp..143O}.
Other candidates include microquasars, TDE (tidally disrupted events),
narrow-line Seyfert 1 galaxies
\citep{1999ApJ...522..839W,2000PASJ...52..499M}, and so
on. Super-Eddington accretors are unique in the sense that
their energy release rate does not depend on their internal properties
at all but on the external conditions;
i.e., the mass supply rate to the vicinity of the compact object.
In parallel with accumulation of observational evidences supporting the
ubiquitous existence of super-Eddington accretors, semi-analytic and
simulation studies have been conducted rather extensively in these days.
The possibility of supercritical accretion onto the compact star was
first discussed in the pioneering paper by \cite{1973A&A....24..337S}
(hereafter SS73).
\cite{1988ApJ...332..646A} found an equilibrium solution
of the supercritical disk and constructed the so-called slim disk model,
in which advection of radiation entropy plays a crucial role
\citep[see][for a simplified self-similar solution of the slim
disk]{1999PASJ...51..725W}.
The general relativistic version of the slim disk was first constructed
by \cite{1998MNRAS.297..739B}, who claimed that the thermalization
timescale could be longer than the accretion timescale so that radiation
and matter temperatures may deviate.
The supercritical accretion disk has also been discussed in the context
of magnetized and/or non-magnetized neutron star. In the case of
accretion onto a magnetized neutron star, accretion through the
disk is quenched by the strong magnetic pressure. Gas then falls onto
the neutron star surface along the magnetic field lines, thereby forming
accretion columns \citep{1976MNRAS.175..395B,1988SvAL...14..390L}.
The emission from the accretion columns can reach
$10^{40}\ \mathrm{erg s^{-1}}$
\citep{2015MNRAS.454.2539M},
which is consistent with recent observations of the ULX pulsars.
The pioneering simulation work was performed by \cite{2005ApJ...628..368O} using
radiation hydrodynamic (RHD) simulations.
They succeeded for the first time in producing steady-state supercritical accretion flow
and revealed various unique features, such as anisotropic radiation field, wide-angle outflow,
large-scale circulation of gas within the flow, and so on.
The most up-to-date simulations are performed with full GR treatment including magnetic fields for a BH
\citep{2014MNRAS.441.3177M,2014MNRAS.439..503S,2015MNRAS.454.2372S,2016MNRAS.456.3929S,2016ApJ...826..23,2017MNRAS.466..705S}
and for NS \citep{2017ApJ...845L...9T}, and found formation of strong
outflows \citep{2015PASJ...67...60T,2015MNRAS.453.3213S}.
\cite{2016ApJ...826..23} demonstrated that a hot accretion flow is
formed close to the compact object and can be responsible for hard
X-ray emission.
We here wish to address one key question: why is supercritical accretion feasible?
Another related question is: are there no practical limits on mass accretion rates and luminosities,
provided that a sufficient amount of mass is supplied externally?
Through the numerous simulation studies conducted recently we now have a consensus
that it is really feasible to put as much material into a BH as one likes.
We should be careful, however, since the simulations only give results,
while it is our task to specify mechanisms underlying them.
Popular argument made in this context is as follows:
supercritical accretion is feasible, since radiation goes out in the perpendicular direction to the disk plane,
thus giving little effects on the matter that accretes along the disk plane.
This explanation is not complete, however, since
it misses the consideration of the force balance on the equatorial plane,
although the radiation force should also have an enormous impact on the material there.
What is needed is to give a clear explanation why matter can accrete towards the region full of radiation energy.
It is interesting to note in this respect that
\cite{2007ApJ...670.1283O} discussed this problem, by using their RHD simulation data.
They have found two key ingredients which make it possible to excite supercritical flow:
anisotropic radiation field created by large $\tau$ accretion flow from the equatorial plane
and photon trapping effects; photons created deep inside the thick accretion flow are trapped within the
flow and finally swallowed by a BH before escaping from the surface of the flow.
The outgoing radiative flux is thus largely attenuated (or sometimes flux becomes inward)
so that supercritical accretion is feasible onto BHs.
How about the cases of NS accretions?
We should point that photon trapping cannot be so effective on a long timescale there,
since photons should eventually be emitted from the solid surface of a NS.
As a result, radiation force should always be outward, thereby decelerating accreting gas.
Supercritical accretion is relatively easy if the NS is strongly magnetized
and if accretion occurs through a narrow accretion column (i.e., ULX pulsars). This is because
excess radiation energy can then almost freely escape from the side wall
of the accretion column
\citep{1976MNRAS.175..395B,2016PASJ...68...83K,2017ApJ...845L...9T}.
In this paper, we make a careful analysis of the GR simulation data
to find an answer to the question of why super-Eddington accretion
onto a non-magnetized NS is feasible. The paper is organized as follows:
we will describe the methods of calculations in section 2 and then present results in section 3.
The final section is devoted to a discussion of observational implications and other related issues.
\section{Basic Equations and Numerical Method}
We numerically solve general relativistic Radiation Magnetohydrodynamic (GR-RMHD) equations,
in which the radiation equation is based
on a moment formalism with an M-1 closure
\citep{1984JQSRT..31..149L, 2013MNRAS.429.3533S, 2013PASJ...65...72K}.
In the following, Greek suffixes indicate space-time components, and
Latin suffixes indicate space components.
We take the light speed $c$ as unity unless otherwise stated. Then length and time are
normalized by gravitational radius $r_\mathrm{g}=GM/c^2$ and its light
crossing time $t_\mathrm{g}=r_\mathrm{g}/c$, where $G$ is the
gravitational constant and $M$ is a mass of a central object.
We take $M=1.4M_\odot$ and $M=10 M_\odot$ for NS and BH, respectively.
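For reference, the normalization constants take the following values (a small Python sketch, ours, in cgs units; the Eddington luminosity shown assumes the electron-scattering opacity $0.4\ \mathrm{cm^2\,g^{-1}}$):
\begin{verbatim}
import math

G = 6.674e-8            # gravitational constant [cgs]
c = 2.998e10            # speed of light [cm s^-1]
Msun = 1.989e33         # solar mass [g]
kappa_es = 0.4          # electron-scattering opacity [cm^2 g^-1]

def units(M_solar):
    M = M_solar * Msun
    rg = G * M / c**2                          # gravitational radius [cm]
    tg = rg / c                                # light-crossing time [s]
    LEdd = 4 * math.pi * G * M * c / kappa_es  # Eddington luminosity [erg s^-1]
    return rg, tg, LEdd

for M in (1.4, 10.0):
    rg, tg, LEdd = units(M)
    print(f"M = {M} Msun: r_g = {rg:.2e} cm, "
          f"t_g = {tg:.2e} s, L_Edd = {LEdd:.2e} erg/s")
\end{verbatim}
For the NS this gives $r_\mathrm{g}\simeq 2\times 10^5$ cm (so the $10$ km inner boundary used below lies at a few $r_\mathrm{g}$) and $L_\mathrm{Edd}\simeq 1.8\times 10^{38}$ erg s$^{-1}$.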
Basic equations consist of mass conservation,
\begin{equation}
(\rho u^\nu)_{;\nu} = 0,\label{eq:masscons}
\end{equation}
the energy momentum conservation for magnetofluids,
\begin{equation}
T^\nu_{\mu;\nu}= G_\mu, \label{eq:mhdeq}
\end{equation}
the energy momentum tensor for radiation field,
\begin{equation}
R^\nu_{\mu;\nu} = -G_\mu,
\label{eq:radeq}
\end{equation}
and induction equation,
\begin{equation}
\partial_t (\sqrt{-g}B^i)=-\partial_j[\sqrt{-g}(B^i v^j - B^j v^i)],
\label{eq:induction}
\end{equation}
where $\rho$ is the proper mass density, $u^\nu$ is the four velocity,
$v^i=u^i/u^0$ is the laboratory frame three velocity, $B^i$ is the
laboratory frame magnetic three field, and $g=\det({g_{\mu\nu}})$ is the
determinant of metric, $g_{\mu\nu}$.
The energy momentum tensor for magnetofluid and radiation are given by
\begin{eqnarray}
T^{\mu \nu}&=&
\left(\rho + e + p_\mathrm{gas} + 2 p_\mathrm{mag}\right)u^\mu u^\nu
\nonumber \\
&&+\left(p_\mathrm{gas} +p_\mathrm{mag}\right)g^{\mu\nu}
-b^\mu b^\nu,\\
R^{\mu\nu}&=& p_{\mathrm{rad}}
\left(4u^\mu_\mathrm{rad} u^\nu_\mathrm{rad} + g^{\mu\nu}
\right),
\end{eqnarray}
where $p_\mathrm{gas}, e, p_\mathrm{mag}, p_\mathrm{rad}$
and $u_\mathrm{rad}^\mu$ are the gas pressure,
gas internal energy, magnetic pressure,
radiation pressure,
and radiation frame's four velocity.
The gas internal energy is related to the gas pressure by
$p_\mathrm{gas}=(\Gamma-1)e$ with $\Gamma=5/3$ being the specific heat ratio.
The magnetic four vector $b^\mu$ is related to its three
vector through $b^\mu = B^\nu h_\nu^\mu/u^0$, where
$h_\nu^\mu=\delta_\nu^\mu + u^\mu u_\nu$ is the projection tensor and
$\delta^\mu_\nu$ is the Kronecker delta. The magnetic pressure is
represented by $p_\mathrm{mag}=b_\mu b^\mu/2$.
The gas and radiation field interact each other through a radiation
four force $G^\mu$, which is represented by
\begin{eqnarray}
G^\mu&=& -\rho\kappa_\mathrm{abs}(R^\mu_\alpha u^\alpha + 4 \pi
\mathrm{B}u^\mu)\nonumber \\
&-& \rho\kappa_\mathrm{sca}
(R^\mu_\alpha u^\alpha + R^\alpha_\beta u_\alpha u^\beta u^\mu)
+G^\mu_\mathrm{comp},
\end{eqnarray}
where $\kappa_\mathrm{abs}=6.4\times 10^{22}\rho T_\mathrm{gas}^{-3.5}\
\mathrm{cm^2\ g^{-1}}$ (with $\rho$ in $\mathrm{g\ cm^{-3}}$ and $T_\mathrm{gas}$ in K)
and $\kappa_\mathrm{sca}=0.4\ \mathrm{cm^2\ g^{-1}}$ are the free-free absorption and
Thomson-scattering opacities.
The gas temperature is calculated by
$T_\mathrm{gas}=\mu m_p p_\mathrm{gas}/(\rho k_\mathrm{B})$,
where $m_p$ is the proton mass,
$k_\mathrm{B}$ is the Boltzmann constant,
and $\mu=0.5$ is the mean molecular weight.
The black body intensity is given by
$\mathrm{B}=a_\mathrm{rad} T^4_\mathrm{gas}/(4\pi)$ with
$a_\mathrm{rad}$ being the radiation constant.
We included the thermal Comptonization as follows:
\begin{eqnarray}
G^\mu_\mathrm{comp}
&=& -\rho\kappa_\mathrm{sca} \hat{E}_\mathrm{rad}
\frac{4k_\mathrm{B}(T_\mathrm{e}-T_\mathrm{rad})}{m_\mathrm{e}}
\nonumber \\
&\times& \left[
1 + 3.683\left(\frac{k_\mathrm{B} T_\mathrm{e}}{m_\mathrm{e}}\right)
+ 4 \left(\frac{k_\mathrm{B} T_\mathrm{e}}{m_\mathrm{e}}\right)^2
\right]\nonumber \\
&\times& \left[
1 + \left(\frac{k_\mathrm{B} T_\mathrm{e}}{m_\mathrm{e}}\right)
\right]^{-1}u^\mu,
\end{eqnarray}
where $T_\mathrm{e}$ is the electron temperature, $\hat{E}_\mathrm{rad}$
is the comoving frame radiation energy density,
$T_\mathrm{rad}=(\hat{E}_\mathrm{rad}/a_\mathrm{rad})^{1/4}$ is the
radiation temperature, and $m_\mathrm{e}$ is
the electron rest mass \citep{2015MNRAS.447...49S}.
We take $T_\mathrm{e}= T_\mathrm{gas}$ for simplicity.
We solve these equations in polar coordinates $(t,r,\theta,\phi)$ with the
Kerr-Schild metric,
assuming axisymmetry with respect to the rotation axis
($\theta=0$ and $\pi$).
The computational domain consists of
$r=[r_\mathrm{in},245r_\mathrm{g}]$, $\theta=[0,\pi]$.
Here we set the inner radius $r_\mathrm{in}$
to be $10\ \mathrm{km}$ for the NS
and $0.98 r_\mathrm{H}$ for the BH, where
$r_{\mathrm{H}}=M+(M^2+a^2)^{1/2}$ is a horizon radius
with $a$ being the spin parameter.
We take $a=0$ in this paper.
Numerical grid points are
$(N_r, N_\theta, N_\phi)=(264, 264, 1)$.
The radial grid size increases exponentially
with radius, and the polar grid is given by $\theta=\pi x_2 +
(1-h)\sin(2\pi x_2)/2$, where $h=0.5$ and $x_2$ spans uniformly from $0$
to $1$. We adopt an outgoing boundary condition at the outer radius, and reflective
boundary conditions at $\theta=0$ and $\pi$. At the inner boundary
$r=r_\mathrm{in}$, a mirror symmetric boundary condition is employed for
the case of the NS,
while an outgoing boundary condition is used for
the case of the BH.
That is, matter as well as energy is not swallowed
by the NS.
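As a rough guide to the resulting resolution (a simple consequence of the grid function above), the polar cell size follows from
\begin{equation}
\frac{d\theta}{dx_2}=\pi\left[1+(1-h)\cos(2\pi x_2)\right],
\end{equation}
which for $h=0.5$ equals $1.5\pi$ at the poles ($x_2=0,1$) and $0.5\pi$ at the equator ($x_2=0.5$); i.e., the polar grid is about three times finer around the equatorial plane, where the disk resides.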
We start simulations from an equilibrium torus given by
\cite{1976ApJ...207..962F}, but the gas pressure in this solution is
replaced by a gas + radiation pressure by assuming a local
thermodynamic equilibrium. The inner edge of initial torus is situated at
$r=20r_\mathrm{g}$, while its pressure maximum is situated at
$33r_\mathrm{g}$.
Weak poloidal magnetic fields are initially embedded in the torus.
The $\phi$-component of the vector potential, $A_\phi$, is proportional to $\rho$,
and the ratio of the maximum of $p_\mathrm{gas}+p_\mathrm{rad}$ to the
maximum of $p_\mathrm{mag}$ is set to be 100.
Outside the torus, the gas is not magnetized and
the density and the pressure are given by
$\rho=10^{-4}\rho_0 (r/r_\mathrm{g})^{-1.5}$
and $p_\mathrm{gas}=10^{-6} \rho_0 (r/r_\mathrm{g})^{-2.5}$,
where $\rho_0$ is the maximum mass density inside the torus.
We also set $p_\mathrm{rad}=10^{-10} \rho_0$
and $u_\mathrm{rad}^\mu=(1,0,0,0)$ outside the torus.
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{fig1.eps}
\caption{(a): Global structure of accretion disks and outflows for the neutron
star (left) and black hole (right) cases. Color shows mass density,
vectors show streamlines, and white curves show the photosphere.
(b): Enlarged view of streamlines around the neutron star (left) and black hole
(right). Red (blue) lines indicate that the radial velocity is positive
(negative).}
\label{fig1}
\end{center}
\end{figure*}
In this paper, we take $\rho_0 = 0.1\ \mathrm{g\ cm^{-3}}$ for the
NS, while a relatively small maximum
mass density is employed for the BH
($\rho_0=1.4\times 10^{-2}\ \mathrm{g\ cm^{-3}}$).
By this adjustment,
we can compare the NS and BH models
under almost equal conditions,
since the mass of the NS
is about one order of magnitude smaller than
that of the BH.
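This choice can be understood from a rough scaling argument, assuming the two runs are compared in terms of the Eddington-normalized accretion rate: in code units the accretion rate scales as $\dot M \propto \rho_0 r_\mathrm{g}^3/t_\mathrm{g} \propto \rho_0 M^2$, while the Eddington rate scales as $\dot M_\mathrm{Edd} \propto L_\mathrm{Edd}/c^2 \propto M$, so that matching the normalized rate requires
\begin{equation}
\frac{\rho_0({\rm NS})}{\rho_0({\rm BH})} \simeq \frac{M_\mathrm{BH}}{M_\mathrm{NS}} = \frac{10M_\odot}{1.4M_\odot} \simeq 7.1,
\end{equation}
which is indeed the ratio $0.1/(1.4\times 10^{-2})\simeq 7$ adopted above.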
In the present work, we ignore the rotation of the central object ($a=0$).
We also consider an unmagnetized NS. Thus we can directly
study the effects of the physical boundary at the surface of the central object by
comparing results between the BH and NS.
\section{Results}
\subsection{Overview of the two cases}\label{f1}
In the following, we show time averaged data between
$t=3,000t_\mathrm{g}$ and $5,000t_\mathrm{g}$, during which mass accretion
continuously occurs onto the central star.
We first give in Figure 1 global supercritical accretion flow patterns, comparing
the two cases of NS accretion (left) and BH accretion (right).
The color contours in figure 1-(a) represent the gas density distribution
with the same color scales (but note that the density normalization
$\rho_0$ is a factor of $\sim$ 7 greater in the left panel),
and arrows show fluid stream lines.
White lines indicate the photosphere, measured from the outer boundary at
$r=245r_\mathrm{g}$ along fixed $\theta$.
The size of the NS (=10 km) corresponds to 4.8 $r_{\rm g}$ for a mass of
1.4 $M_\odot$.
Figure 1-(b) shows stream lines around the NS (left) and BH (right). Red and blue
lines indicate that the radial velocity is in positive and negative
direction, respectively.
The flow patterns displayed in these figures are distinct in many respects.
First of all, the flow lines are roughly conical (i.e., the line
directions are more or less radial) in the innermost region (at $r
\lesssim 15 r_{\rm g}$) in the BH case (see the right panel),
while they are chaotic, especially in the innermost region in the NS
case (the left panel).
Second, the high density region (indicated by the red color) is thinly collimated near the BH
and thus has a conical structure in the BH accretion,
while it is rather broadened and covers a large surface area of the NS.
Third, we see more significant outflow motion in the NS case.
In particular, the strong outflow is ejected even below the photosphere
(indicated by the thick white line).
The outflow has a large opening angle of $\simeq 60^\circ$
and its four velocity in the orthonormal frame is $0.2$ around
$r=60r_\mathrm{g}$ and $\theta = 60^\circ$, while it is only $0.005$ in the
BH case. The mass flux is an order of magnitude larger for the NS than for the BH.
As a consequence, some of the inwardly flowing material in the NS
accretion flow
does not reach the NS surface but is reflected and turns
outward.
No such reflection motion is significant in the BH
accretion flow (see figure 1-(b)).
These differences should be understood in terms of the different
mechanisms by which radiation is absorbed.
Figure 2 shows radial profiles of mass inflow rate $\dot
M_\mathrm{in}$ (red), outflow rate $\dot M_\mathrm{out}$ (blue),
and net inflow rate $\dot M_\mathrm{net} = \dot M_\mathrm{in}-\dot
M_\mathrm{out}$ (black), for neutron star (solid) and black hole
(dashed).
For the NS, the mass inflow rate is about $\dot
M_\mathrm{in}\simeq 300 L_\mathrm{Edd}/c^2$ around $10r_\mathrm{g}$.
It steeply decreases with decreasing radius near the NS surface at $r=4.8r_\mathrm{g}$,
since we employ a reflecting boundary condition there.
The mass outflow rate shows a similar trend to the inflow rate, but
is slightly smaller. This indicates that
substantial mass is blown away from the disk.
We note that the mass supply (inflow) rate around $r=20r_\mathrm{g}$ is about
$10^{3}L_\mathrm{Edd}/c^2$ in both cases, since we start from similar initial
tori. Even so, the mass outflow rate is much higher for the NS than
for the BH, indicating that the NS can drive more massive
outflows than the BH. We also note that the net inflow rate is
approximately constant inside $r\lesssim 15r_\mathrm{g}$ in the BH case.
Thus, the inflow-outflow equilibrium is realized inside this radius.
For the NS case, the net inflow rate is not constant but
slightly increases with increasing radius, even though the computational time is
the same ($t=3,000-5,000t_\mathrm{g}$) in both simulations.
This would be due to the mass accumulation on the NS as shown above
(see also figure 1).
To summarize, a fraction of a few tens of percent of the input
mass can accrete onto the BH, whereas only ten percent or less of the
input mass can accrete onto the NS. The remaining mass is lost as
outflow.
\subsection{Various energy density distributions}\label{f2-3}
Next, we consider energy composition in the accretion disks with
different central objects. The kinetic, gas, magnetic and radiation
energy densities are expressed as
\begin{eqnarray}
E_\mathrm{kin} &=& \rho (\gamma - 1)\gamma,\\
E_\mathrm{gas} &=& (e + p_\mathrm{gas})\gamma^2 - p_\mathrm{gas},\\
E_\mathrm{mag} &=& b^2 \gamma^2 - (n_\alpha b^\alpha)^2, \\
E_\mathrm{rad} &=& n_\alpha n_\beta R^{\alpha \beta},
\end{eqnarray}
where $n_\alpha = (-\alpha,0)$ is the normal observer's four velocity,
$\alpha=(-g^{00})^{-1/2}$ is the lapse function, and
$\gamma = - n_\alpha u^\alpha$ is the Lorentz factor.
The energy density is normalized by $\rho_0$.
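As a simple consistency check, in the non-relativistic limit ($\gamma \simeq 1+v^2/2$) the kinetic term reduces to the familiar Newtonian expression,
\begin{equation}
E_\mathrm{kin}=\rho(\gamma-1)\gamma \simeq \frac{1}{2}\rho v^2,
\end{equation}
i.e., the rest-mass energy is excluded from $E_\mathrm{kin}$ by construction.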
The left three panels of figure 3 show spatial distributions of $E_\mathrm{kin}$, $E_\mathrm{mag}$ and
$E_\mathrm{rad}$.
Again, the conical flow structure around the BH is clearly shown in the lower panels of figure 3
except for the magnetic energy distribution that shows a more spherically symmetric shape
(see the second panel from the left).
By contrast, the NS accretion case displayed in the upper panels shows a somewhat distinct pattern.
The upper third panel from the left, for example, shows that
the region of large $E_{\rm rad}$ extends more widely around the NS than around the BH.
This indicates intense radiation emitted from the NS surface and from the innermost flow region.
The kinetic energy distribution displayed in the upper left panel shows a similar structure,
implying that outflows are launched widely from the surface of the accretion flow.
Such enhanced energy regions around the central object are not found in the lower panels,
since excess energy can be absorbed by the BH.
The right panel of figure 3 shows the comoving frame radiation energy density
distribution
normalized by $L_\mathrm{Edd}/(4\pi r^2 c)$, where we recover the light
speed $c$ for the sake of clarity.
We found that this quantity largely exceeds unity, typically $\sim 10^3$
or even greater, in the entire inflow region.
This is true in both of NS and BH cases, though the photon accumulation region
is much wider in the former.
This fact indicates that there exists a region full of radiation energy
and that the radiation energy density is so high that it would be able to blow away
a large amount of gas by counteracting the gravitational force.
Nevertheless, we find that the inflow region stably persists around the compact objects.
This is because the inflow exists deeply inside the photosphere (see Figure 1) so that
the radiation flux can be much attenuated to become $F_{\rm rad} \ll E_{\rm rad}c$.
As a result, the gas is never prevented from accretion
\citep{2007ApJ...670.1283O}.
This issue will be discussed later again.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{fig2.eps}
\caption{Radial profile of mass inflow rate (dashed), outflow rate
(dotted), and net inflow rate (solid) for neutron star (red) and black
hole (black).}
\label{fig2}
\end{center}
\end{figure}
Figure 4 shows the density weighted, angle-averaged energy densities in various forms along $r$.
We take an average of a physical quantity, $f$, over the entire solid angle ($\Omega$) according to
\begin{equation}
\langle f \rangle = \frac{\int d \Omega\ f \rho \sqrt{-g}}{\int d \Omega\ \rho \sqrt{-g}},
\end{equation}
where $g=\det\ g_{\mu\nu}$.
Comparing these panels, we understand that the kinetic energy $E_{\rm kin}$
dominates over all other energy forms inside the accretion disks in both cases.
While the radiation energy density $E_{\rm rad}$, the second largest one,
increases with decreasing radius in both cases, there exists an interesting distinction between the two:
the ratio of $E_{\rm rad}/E_{\rm kin}$ increases with a decreasing radius near the central object
in the NS accretion, while the opposite is the case in the BH accretion.
In the proximity of the NS, especially, the radiation energy density is comparable to
the kinetic energy density (see also fig. 3).
(Note that the kinetic energy is due mostly to the rotation, not to the
accretion.)
These facts indicate that the radiation pressure force makes a significant contribution in force balance near the NS
(this point will be discussed in the next subsection).
Around the BH, in contrast, the ratio of $E_{\rm rad}/E_{\rm kin}$ stays nearly constant
on the order of $\sim 10$ \% but rather decreases in the innermost part.
This is the direct consequence of photons being swallowed by the BH.
We should note, however, that the difference between $E_\mathrm{kin}$ and $E_\mathrm{rad}$
may depend on the mass accretion rate.
The magnetic energy is unimportant in both cases;
the ratio of $E_{\rm mag}/E_{\rm kin}$ is always around a few \%.
Likewise, the gas energy $E_{\rm gas}$ is everywhere negligible because the gas temperature
is low enough.
An interesting distinction between the BH and NS cases is found regarding the magnetic energy distribution;
that is, it is nearly isotropic in the BH accretion while it is concentrated
on the polar and equatorial regions in the NS accretion (see Fig. 3).
In our simulations, we start from the poloidal magnetic field.
The magnetic flux is swept according to the gas accretion and it is accumulated near the central object.
Since we assume ideal MHD and axisymmetry, the magnetic field is dissipated by a small
numerical resistivity and most of the flux remains around the pole.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig3.eps}
\caption{The first three panels from the left show the kinetic, magnetic, and
radiation energy densities. The right panel shows the comoving frame
radiation energy density normalized by $L_\mathrm{Edd}/(4\pi r^2 c)$. Top and bottom panels
correspond to the neutron star and black hole cases, respectively. }
\label{fig3}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig4.eps}
\caption{Density weighted kinetic (black), radiation (black), magnetic
(blue) and gas (orange) energy densities. The top panel shows the result for
the neutron star, and the bottom panel for the black hole. }
\label{fig4}
\end{center}
\end{figure}
\subsection{Force balance on the equatorial plane}\label{f4}
Next we show the radial profile of forces acting on the fluid elements.
We consider a steady state equation of motion
\begin{eqnarray}
f^\mathrm{adv}_r+
f^\mathrm{grav}_r + f^\mathrm{cent}_r + f^\mathrm{rad}_r
+f^\mathrm{gas}_r + f^\mathrm{mag}_r + f^\mathrm{cor}_r=0,\label{eq:fdcomp}
\end{eqnarray}
Here $f^\mathrm{adv}_r, f^\mathrm{grav}_r, f^\mathrm{cent}_r, f^\mathrm{rad}_r,
f^\mathrm{gas}_r, f^{\mathrm{mag}}_r, f^\mathrm{cor}_r$ are
defined according to \cite{2015arXiv150906644M} as,
\begin{eqnarray}
f^\mathrm{adv}_r &=&-u^j \partial_j u_r,\label{eq:fadv}\\
f^\mathrm{grav}_r &=& \frac{T^t_t}{w}\Gamma^t_{rt},\label{eq:fgrav}\\
f^\mathrm{cent}_r &=& \frac{T^\phi_\phi}{w}\Gamma^\phi_{r\phi},\label{eq:fcent}\\
f^\mathrm{rad}_r &=& \frac{G_r}{w},\label{eq:rad}\\
f^\mathrm{gas}_r &=& -\frac{\partial_r p_\mathrm{gas}}{w},\label{eq:fgas}\\
f^\mathrm{mag}_r &=&-\frac{-\partial_r (b^2/2) + \partial_i (b^i b_r)}{w},\label{eq:fmag}\\
f^\mathrm{cor}_r &=& f^\mathrm{metric}_r - f^{\mathrm{grav}}_r -
f^{\mathrm{cent}}_r + f^\mathrm{ent}_r,\label{eq:fcor}
\end{eqnarray}
where
\begin{eqnarray}
f^\mathrm{metric}_r &= & \frac{1}{w}T^\kappa_\lambda \Gamma^\lambda_{r\kappa}
-\frac{T^i_r - \rho u^i u_r}{w}\frac{\partial_i
\sqrt{-g}}{\sqrt{-g}},\\
f^\mathrm{ent}_r &=& -\frac{u_r}{w}\partial_i\left[(w-\rho)u^i\right],
\end{eqnarray}
where $w = \rho + e + p_\mathrm{gas} + 2 p_\mathrm{mag}$ denotes the
relativistic enthalpy.
Here equations (\ref{eq:fadv})--(\ref{eq:fmag}) correspond to
the advection term, the gravity
force, the centrifugal force, the radiation force, the gas pressure gradient force,
and the Lorentz force, respectively. $f^\mathrm{cor}_r$ is the relativistic correction term.
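As a simple check of this decomposition, consider the Newtonian limit at large radii, where the metric approaches the Schwarzschild form: there $T^t_t \simeq -\rho$, $w\simeq \rho$, and $\Gamma^t_{rt}\simeq M/r^2$, so that
\begin{equation}
f^\mathrm{grav}_r \simeq -\frac{M}{r^2},
\end{equation}
which recovers the Newtonian gravitational acceleration in our units.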
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig5.eps}
\caption{Density weighted radial force normalized by the gravity
force.}
\label{fig5}
\end{center}
\end{figure}
Figure 5 shows the various density weighted, angle-averaged radial forces along $r$,
normalized by the gravity force.
Here $f^\mathrm{tot}_r$ is the total force without gravity force, so that
steady accretion would be realized where $f^\mathrm{tot}_r/|f^\mathrm{grav}_r| = 1$.
Let us first examine the NS case displayed in the upper panel.
We immediately notice that the centrifugal force balances almost completely with the gravity force
at large radii far from the central object. Hence, the rotation profile is nearly Keplerian and
radiation force is negligible there.
With a decrease in radius, however, the outward radiation force grows, since the NS surface
cannot swallow the radiation and the radiation energy accumulates there.
The radiation energy density profile, hence, has a negative gradient along $r$,
which gives rise to an outward radiation pressure force.
The centrifugal force decreases with decreasing radius, so that
the radiation force and centrifugal force become comparable close to the NS surface.
This occurs because the gravitational attraction of the NS is effectively weakened by the outward
radiation-pressure force. As a result, the disk rotation becomes highly sub-Keplerian,
although the flow is still in a quasi-steady state.
The important fact is that the radiation force never exceeds the gravitational force,
which makes supercritical accretion flow feasible.
It is then of great importance to pay attention to the behavior of the centrifugal force.
We find a clear tendency that it declines inward very close to the NS.
This is caused by the accumulation of low angular momentum gas above the NS surface
and never happens in the BH case, since matter is immediately swallowed there.
But the gradient of the radiation energy density is not large enough to totally compensate
the gravitational attraction force towards the NS.
Finally, the advection term is very small, compared with the gravity force, but it does not vanish.
That is, the matter is slowly accreting onto the NS surface with accretion velocity
being much less than the free-fall velocity.
We may call this slowly accreting zone (at $r < 10 r_{\rm g}$) the settling region.
As a result, the supercritical accretion is feasible for the NS.
Next, let us examine the force balance in the BH accretion case in comparison with the NS case.
A big distinction is found in the behavior of the radiation force,
which is negative in the BH case, while it was positive in the NS case.
This is because not only the gas but also the radiation energy is swallowed by the BH.
The negative radiation flux pushes the gas toward the BH.
This explains why supercritical accretion onto a BH is feasible
\citep[see][but for the discussion based on the pseudo-Newtonian dynamics]{2007ApJ...670.1283O}.
Another distinction is that there is no force balance near the BH,
in the sense that the total force no longer balances the gravity force there.
This means that mass continuously falls onto the BH with finite velocity.
In particular, the accretion motion is supersonic and close to the speed of
light in the BH vicinity.
We note that the centrifugal force exceeds the gravity force inside
$r<10r_\mathrm{g}$ for the BH, but the total force balance holds if we
include the relativistic correction term $f_r^\mathrm{cor}$, i.e., a
quasi-steady state is actually realized. There is a subtlety in how we
decompose each force term in equation (\ref{eq:fdcomp}).
The centrifugal force $f^\mathrm{cent}_r$ approaches the non-relativistic
expression far from the black hole, but it does not balance the gravity
force everywhere; it deviates from the gravity force close to the
central object. The relativistic correction term $f_r^\mathrm{cor}$ is
important in this region.
For example, the innermost stable circular orbit is never obtained without
$f_r^\mathrm{cor}$.
The gravity force almost balances with the centrifugal and
correction forces in this region, but the advection and radiation forces
are also non-negligible, and thus the total force balances with the gravity force.
\section{Discussion}
In the present paper we have carefully examined the gas dynamics of supercritical flow
around a NS, in comparison with that around a BH, through GR-RMHD simulations.
Supercritical accretion is feasible in both the NS and BH cases but for distinct reasons.
While it is photon trapping that works in the BH case, the removal of mass and energy
in the form of intense outflow is the key to realizing supercritical accretion onto a NS.
The flow dynamics is also distinct:
a subsonic, settling flow occurs around the NS surface, whereas matter nearly free falls onto the BH.
In the following, we discuss some issues more or less related to supercritical NS accretion.
\subsection{Outflow from inside the spherization radius}
SS73 are widely known for proposing the standard disk model,
but in the same paper they also gave a pioneering discussion of the gas dynamics of
supercritical accretion flow onto a BH.
In their section IV, SS73 introduced the notion of the spherization radius,
inside which gas flows towards the central BH in a spherically symmetric fashion.
They also pointed out that outflow emerges from inside this radius.
They evaluated the spherization radius to be on the order of
$r_{\rm sph} \sim 10 ({\dot M}c^2/L_{\rm Edd}) r_{\rm g}$,
corresponding to the trapping radius, inside which photon trapping is significant (see also Begelman 1982).
In the present case we estimate $r_{\rm sph} \sim 10^3 r_{\rm g}$
(for ${\dot M}c^2/L_{\rm Edd}\gtrsim 300$; see figure 2),
which lies far outside the region displayed in figure 1.
The right panel of figure 1 clearly shows
that the inflow and outflow streamlines are separated all the way down to the BH event horizon.
In other words, there are no streamlines which turn from inward to outward.
By contrast, the left panel of figure 1 shows streamlines somewhat similar to those illustrated in Fig. 8 of SS73;
that is, some streamlines change their direction from inward to outward.
Moreover, we see that this change of direction occurs even in the very vicinity of the NS surface.
In fact, the inflow and outflow rates nearly coincide in the innermost region
(inside $\sim 10 r_{\rm g}$; see figure 2), so that the net accretion
rate is kept around the critical rate.
This is exactly the situation postulated by SS73.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig6.eps}
\caption{Same as figure 1, but color shows the Bernoulli parameter. }
\label{fig6}
\end{center}
\end{figure}
\subsection{Bernoulli parameter}\label{f5}
To visualize the relative importance of the outflow in the NS accretion,
we calculate the local Bernoulli parameter according to Sadowski \& Narayan (2015):
\begin{equation}
{\rm B_e} \equiv -\frac{T_t^r + R_t ^ r + \rho u^r}{\rho u^r},
\end{equation}
where $T_t^r$ and $R_t^r$ are the $t-r$ components of the MHD and
radiation energy-momentum tensors
(representing the energy flux of MHD and radiation processes), respectively,
and $\rho u^r$ stands for the rest-mass energy flux.
The results are shown in figure 6 for the NS and BH cases in the left and right panels, respectively.
The locations of the photospheres are also indicated by the thick white lines there.
It is obvious that the blue regions, in which ${\rm Be} < 0$, are wider in the BH case.
In particular, we find that the Bernoulli parameter is negative mostly below the photosphere close to the BH,
while it is positive in the NS case (except near the equatorial plane).
\subsection{Radiation cushion}
The next question we wish to address is whether there exists a settling region covering the NS surface.
The accretion column created on the magnetized NS surface is composed of the upper free-fall region
and the lower settling region (e.g. Basko \& Sunyaev 1976, Kawashima et al. 2016).
In the latter, the accretion velocity is much reduced by the decelerating force exerted by the radiation cushion.
The direct consequence of the existence of the settling region is that
the matter density is $\rho \propto r^{-3}$, radiation pressure is $P_{\rm rad} \propto r^{-4}$,
and radiation temperature is $T_{\rm rad} \propto r^{-1}$.
These relations are derived from the hydrostatic balance in the radiation-pressure dominated atmosphere,
which reads
\begin{equation}
\frac{GM\rho}{r^2} = - \frac{dP_{\rm rad}}{dr}.
\label{eq:fbalance}
\end{equation}
Here, we assume that the accretion motion is very slow (the accretion velocity is much less than the free-fall velocity).
Let us further assume that entropy production during the accretion is negligible.
Then, the adiabatic relation holds between $P_{\rm rad}$ and matter density $\rho$; that is,
$P_{\rm rad}\propto \rho^{4/3}$. We then find
$dP_{\rm rad}/P_{\rm rad}^{3/4} \propto -dr/r^2$, which yields $P_{\rm rad} \propto r^{-4}$
and $\rho \propto P_{\rm rad}^{3/4}\propto r^{-3}$.
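The consistency of these scalings can be verified directly: with $P_{\rm rad}\propto r^{-4}$ one has $-dP_{\rm rad}/dr = 4P_{\rm rad}/r$, so the hydrostatic balance (\ref{eq:fbalance}) requires
\begin{equation}
P_{\rm rad}=\frac{GM\rho}{4r},
\end{equation}
which indeed holds at all radii for $\rho\propto r^{-3}$.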
To see if such dependences appear in the simulation data of the NS case,
we plot the matter density and $(T_\mathrm{rad})^3$ as functions of radius in figure 7.
We find that the radiation entropy crudely obeys the expected relationship,
$T^{3}_{\rm rad} \propto r^{-3}$, in the innermost region,
$r < 10 r_{\rm g}$,
although the density profile is steeper than $r^{-3}$.
These results indicate that an almost adiabatic settling region is
formed close to the NS.
The mass density and radiation entropy on the NS surface
increase with time due to the accumulation. Nevertheless, their radial
profiles do not change.
This indicates that the force balance given in equation (\ref{eq:fbalance})
holds during the simulation interval. Thus, we can expect that supercritical
accretion onto the NS continues, accompanied by the formation of a settling
region, until the gas in the disk is exhausted and the mass accretion rate decreases.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig7.eps}
\caption{Density weighted $\rho$ and $T_\mathrm{rad}^3$ profiles for
different time intervals. }
\label{fig7}
\end{center}
\end{figure}
\subsection{Validity of our numerical model}
We compute opacities assuming fully ionized hydrogen gas.
The free-free opacity would, however, be much larger if we assumed the solar
metallicity. We expect the results would not be affected much by the
metallicity, since
local thermodynamic equilibrium ($T_\mathrm{gas}=T_\mathrm{rad}$) is
attained mainly through Comptonization, whose cooling timescale is much
shorter in supercritical accretion disks. The scattering opacity
decreases by about 15\% for the solar abundance.
The reduction of opacity might reduce the outflow power, but the outflow
velocity is determined by the balance between the radiation force
($\propto \kappa_\mathrm{sca} F_\mathrm{rad}$) and
its drag force \cite[$\propto \kappa_\mathrm{sca} E_\mathrm{rad}$,
see][]{2015PASJ...67...60T}, so the resulting terminal
velocity would not be affected by the opacity. Also,
\cite{2005ApJ...628..368O} shows that the luminosity depends only weakly on,
or is almost independent of, the metallicity. Thus, our conclusions would
hold even if we adopted the solar metallicity.
Another concern in our numerical model is the boundary condition
on the neutron star. We simply applied a mirror boundary condition, where
the gas never
flows across the boundary. This boundary condition may be plausible for
mimicking the neutron star's solid surface,
while other boundary conditions have been adopted in past studies,
e.g., a free boundary condition \citep{2012MNRAS.421...63R} or an
accretion-energy-injection boundary condition \citep{2007PASJ...59.1033O}.
Also, the boundary condition adopted in our simulation does not take into
account the interaction between the gas and the neutron star.
Magnetic activity in this boundary layer can transport angular momentum
\citep{2002MNRAS.330..895A}, and the dissipation of the rotational energy of the
disk would increase the radiation energy close to the neutron star.
Although recent high resolution MHD simulations show that the stresses
in the boundary layer oscillate around zero
\citep{2012ApJ...751...48P,2017arXiv170901197B},
it is still under debate which boundary condition is appropriate to describe
the neutron star surface.
A comprehensive study with different boundary condition models is needed
to identify the most plausible one. We leave this problem as
important future work.
\section{Conclusions}
We performed 2-dimensional axisymmetric GR-RMHD simulations of
supercritical accretion onto a non-rotating unmagnetized neutron star,
and compared the results with those for a non-rotating black hole.
Our findings can be summarized as follows:
\begin{itemize}
\item In contrast to the black hole case, a significant fraction of the
      mass is blown away by the radiation pressure driven outflow, and
      thus the net mass inflow rate is reduced in the neutron star
      case. Also, the anisotropic radiation arising from the anisotropic
      density distribution helps photons escape from the disk.
\item Inside the accretion disks, the radiation flux is largely
      attenuated so that the radiation force balances with the sum of the
      centrifugal and gravity forces. Due to the large optical depth in
      the supercritical disks, the radiation energy density much exceeds
      that expected from the Eddington luminosity, $E_\mathrm{rad}\simeq
      \tau F_\mathrm{rad}/c > 100 L_\mathrm{Edd}/(4\pi r^2 c)$.
\item We found that gas and radiation accumulate on the neutron
      star surface, and a settling region, where the accretion motion is
      significantly decelerated by the radiation cushion, is formed. The
      radiation cushion is approximately adiabatic, i.e., the
      radiation energy density roughly follows $\hat E\propto r^{-4}$ and the
      gas and radiation temperatures obey $T\propto r^{-1}$. Such a
      radiation cushion never appears around the black hole, where
      matter is directly swallowed.
      These mass density and radiation energy density profiles
      follow the radiation-pressure-supported hydrostatic balance.
\end{itemize}
These facts make supercritical accretion feasible for the neutron star.
\acknowledgments
Numerical computations were carried out on Cray XC30 at the Center for
Computational Astrophysics of National Astronomical Observatory of
Japan, and on K computer at AICS.
This work is supported in part by JSPS Grant-in-Aid for
Young Scientists (17K14260 H.R.T.) and for Scientific Research (C)
(17K05383 S. M., 15K05036 K.O.).
This research was also supported by MEXT as 'Priority Issue on Post-K
computer' (Elucidation of the Fundamental Laws and Evolution of the
Universe) and JICFuS.
\section{Introduction}\label{sec:i}
The electromagnetic field of a fast charge moving through topologically nontrivial matter is of interest in many physics fields. A solution to this problem was found when the topological charge density is nearly stationary and homogeneous and the electromagnetic response is dominated by low frequencies \cite{Tuchin:2014iua,Li:2016tel}, which is believed to be relevant to the phenomenology of relativistic heavy-ion collisions as long as the considered time intervals are shorter than the sphaleron transition time $\tau_c$ \cite{Arnold:1997gh}. It was shown in \cite{Tuchin:2014iua,Li:2016tel} that the finite topological charge can have a profound effect on the electromagnetic field. The main goal of this paper is to compute the electromagnetic field in the opposite limit, viz.\ at time intervals much longer than $\tau_c$, by treating the topological charge density as a spatially uniform \cite{Zhitnitsky:2014ria,Kharzeev:2007tn} stochastic process. This approach is developed in my recent paper \cite{Tuchin:2019gkg} where it is applied to investigate the late-time behavior of the chiral instability.
The presentation is structured as follows. In \sec{sec:a} the Maxwell-Chern-Simons (MCS) equations are expanded in the helicity basis which is particularly convenient for discussing the topological effects. In particular, the effect of the time-dependence of the topological charge can be computed by solving an ordinary differential equation \eq{a18}. The kinematics of the relativistic heavy-ion collisions allows for a few very accurate approximations of MCS equations, which is reviewed in \sec{sec:k}. The electromagnetic field at early times $t\ll \tau_c$ is discussed in \sec{sec:b} were the results of \cite{Tuchin:2014iua} are reproduced in the helicity basis. The solution to MCS equations at later times is derived in \sec{sec:b2} by ensemble averaging equation \eq{a18}. The main result is displayed in \eq{D1} and \fig{fig:2}. Summary is presented in \sec{sec:s}.
\section{Electromagnetic field in the helicity basis }\label{sec:a}
Electrodynamics in topological matter is described using the Maxwell-Chern-Simons theory, which adds to the Maxwell Lagrangian a term that couples $F\tilde F$ directly to the topological charge density \cite{Wilczek:1987mv,Carroll:1989vb,Sikivie:1984yz}. The field equations for a point charge $q$ moving in the positive $z$ direction with constant velocity $v$ read
\begin{subequations}\label{A1}
\bal
&\b \nabla \times \b B = \partial_t \b D +\sigma_\chi \b B+ \b j \,,\label{a1}\\
&\b \nabla\cdot \b D= \rho\,,\label{a2}\\
&\b \nabla \times \b E =-\partial_t \b B\,,\label{a3}\\
&\b \nabla\cdot \b B=0\,,\label{a4}
\gal
\end{subequations}
where the external current is $ j^\mu = (\rho,\b j)= q(1,v\unit z) \delta(z-vt)\delta (\b b)$, $z$ and $\b b$ are the longitudinal and transverse components of the position vector. The displacement is given by
\ball{a5}
\b D(\b r, t) = \int_{0}^\infty\varepsilon (t')\b E(\b r, t-t')dt'\,.
\gal
The spectral representation of permittivity is assumed to be $\varepsilon_\omega = 1+i\sigma/\omega$, where $\sigma$ is the electrical conductivity (taken to be a constant). The chiral conductivity $\sigma_\chi(t)$ is proportional to the time-derivative of the topological charge density of matter. It is modeled as a constant $\sigma_\chi(0)$ at $t\ll \tau_c$, whereas at $t\gg \tau_c$ it is treated as a stochastic process with vanishing expectation value $\aver{\sigma_\chi(t)}=0$ and dispersion $\Sigma_\chi=\sqrt{\aver{\sigma_\chi^2}}=\sigma_\chi(0)$.
Due to the presence of the anomalous current $\sigma_\chi \b B$ in \eq{a1}, it is natural to seek a solution of Eqs.~\eq{A1} as a superposition of the helicity states, which are the eigenstates of the curl operator in the Cartesian coordinates,
\begin{subequations}\label{A6}
\bal
\b B(\b r,t)&= \sum_\lambda\int \frac{d^3k}{(2\pi)^3}e^{i\b k\cdot \b r}\b\epsilon_{\lambda \b k}\Phi_{\lambda\b k}(t)\,,\label{a6}\\
\b E(\b r,t)&= \sum_\lambda\int \frac{d^3k}{(2\pi)^3}e^{i\b k\cdot \b r} \b\epsilon_{\lambda \b k}\Psi_{\lambda\b k}(t)+ \int \frac{d^3k}{(2\pi)^3}e^{i\b k\cdot \b r}\unit k \Psi'_{\lambda\b k}(t)\,, \label{a7}
\gal
\end{subequations}
where $\lambda=\pm 1$ is helicity and $\b\epsilon_{\lambda \b k}$ are the circular polarization vectors satisfying the orthonormality conditions $\b\epsilon_{\lambda \b k}\cdot\b\epsilon_{\mu \b k}^*=\delta_{\lambda\mu}$, $\b\epsilon_{\lambda \b k}\cdot \b k=0$ and the identity
\ball{a8}
i\unit k \times\b \epsilon_{\lambda \b k }= \lambda\b \epsilon_{\lambda \b k }\,.
\gal
We also expand the external current in this basis
\ball{a10}
\b j (\b r,t)= \sum_\lambda\int \frac{d^3k}{(2\pi)^3}e^{i\b k\cdot \b r} \b\epsilon_{\lambda \b k}J_{\lambda\b k}(t)+ \int \frac{d^3k}{(2\pi)^3}e^{i\b k\cdot \b r}\unit k J'_{\lambda\b k}(t)\,,
\gal
where
\ball{a11}
J_{\lambda\b k}= qv \b \epsilon_{\lambda \b k}^*\cdot \unit z e^{-ik_zvt}\,,\qquad J'_{\lambda\b k}= qv \frac{k_z}{k} e^{-ik_zvt}\,,
\gal
which can be verified using the equations derived in Appendix.
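Explicitly, the spatial Fourier transform of the current is $\b j_{\b k}(t)=qv\,\unit z\, e^{-ik_z vt}$, and $\unit z$ decomposes in the helicity basis as
\ball{a11a}
\unit z = \sum_\lambda \b\epsilon_{\lambda \b k}\left(\b\epsilon^*_{\lambda \b k}\cdot \unit z\right)+\unit k\left(\unit k\cdot\unit z\right)
=\sum_\lambda \b\epsilon_{\lambda \b k}\left(\b\epsilon^*_{\lambda \b k}\cdot \unit z\right)+\unit k\,\frac{k_z}{k}\,,
\gal
from which the coefficients in \eq{a11} follow at once.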
Plugging \eq{a7} and \eq{a5} into \eq{a2} yields the equation:
\ball{a12}
\int_0^\infty dt' \varepsilon(t') ik \Psi'_{\lambda k}(t-t')= q e^{-ik_z vt}\,.
\gal
Fourier-transforming \eq{a12} with respect to time one obtains
\ball{a13}
ik\Psi'_{\lambda \b k}(t)= \int_{-\infty}^\infty d\omega e^{-i\omega t}\frac{q}{\varepsilon_\omega}\delta(\omega-k_z v)= \frac{q}{\varepsilon_{k_zv} }e^{-ik_zvt}\,.
\gal
Substituting \eq{a13} into \eq{a7} gives the noise-free longitudinal component of the electric field.
Turning to the transverse components of the field, we write \eq{a1} and \eq{a3} in momentum space
\begin{subequations}
\bal
& i\b k\times \b \epsilon_{\lambda \b k} \Phi_{\lambda \b k}= \b\epsilon_{\lambda \b k}\dot \Psi_{\lambda \b k}+\unit k \dot \Psi'_{\lambda \b k} + \sigma \b \epsilon_{\lambda \b k}\Psi_{\lambda \b k}+\sigma \unit k \Psi'_{\lambda \b k}+\sigma_\chi \b \epsilon_{\lambda \b k}\Phi_{\lambda \b k}+ \b \epsilon_{\lambda \b k} J_{\lambda \b k}+ \unit k J'_{\lambda \b k}\,,\label{a14}\\
&i\b k \times \b \epsilon_{\lambda \b k} \Psi_{\lambda \b k} = -\b \epsilon_{\lambda \b k} \dot \Phi_{\lambda \b k} \label{a15}\,.
\gal
\end{subequations}
Using \eq{a8} in \eq{a15} yields
\ball{a17}
k \Psi_{\lambda \b k}= -\lambda\dot \Phi_{\lambda \b k}\,,
\gal
while \eq{a14} reduces to the following two equations
\begin{subequations}
\bal
& \lambda \ddot \Phi_{\lambda \b k}+\lambda\sigma \dot \Phi_{\lambda \b k} +(\lambda k^2-\sigma_\chi k)\Phi _{\lambda \b k}=k J_{\lambda \b k}\,,\label{a18}\\
& \dot \Psi'_{\lambda \b k}+\sigma \Psi'_{\lambda \b k} = -J'_{\lambda \b k}\,.\label{a19}
\gal
\end{subequations}
Eq.~\eq{a19} is the momentum space representation of the continuity equation $\b \nabla\cdot \b j= - \dot \rho = -\b\nabla\cdot \dot {\b D}$, while \eq{a18} determines the magnetic field. Solution to \eq{a18} depends on the functional form of $\sigma_\chi(t)$.
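It is instructive to divide \eq{a18} by $\lambda$ (using $\lambda^2=1$):
\ball{a18a}
\ddot \Phi_{\lambda \b k}+\sigma \dot \Phi_{\lambda \b k}+k\left(k-\lambda \sigma_\chi\right)\Phi_{\lambda \b k}=\lambda k J_{\lambda \b k}\,,
\gal
which shows that for a constant $\sigma_\chi$ the homogeneous modes with $\lambda\sigma_\chi>k$ have a negative effective squared frequency and grow in time; these are the unstable helicity states behind the chiral instability studied in \cite{Tuchin:2019gkg}.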
\section{Anomaly-free solution and the heavy-ion collision kinematics}\label{sec:k}
Analytical solutions of \eq{a18} are derived in \sec{sec:b} and \sec{sec:b2} in the form of the three-dimensional Fourier integrals for constant and stochastic chiral conductivity respectively. These integrals cannot be taken exactly, but can be approximated in appropriate kinematics. This paper specifically focuses on the kinematics of the heavy-ion collisions.
For maximum clarity, it is instructive to discuss the heavy-ion collision kinematics using the anomaly-free solution, i.e.\ the case $\sigma_\chi=0$. The solution in this case can be easily obtained from \eq{a18} and is well-known:
B_\phi(\b r,t)&= \int \frac{d^3k}{(2\pi)^3}e^{ik_z(z- vt)}e^{i\b k_\bot\cdot \b b}\frac{(-i\b k\cdot \unit b) qv }{k_z^2/\gamma^2+k_\bot^2-i\sigma k_z v }\,,
\gal
where $\gamma= (1-v^2)^{-1/2}$. Integration over $k_z$ picks up one of the two poles depending on the sign of the variable $\zeta = vt-z$:
\ball{k2}
k_{z0}^\pm = \frac{i\sigma v\gamma^2}{2}\left( 1\pm \sqrt{1+Y}\right)\,,
\gal
where a shorthand notation is used:
\ball{k3}
Y=\frac{4k_\bot^2}{v^2\sigma^2\gamma^2}\,.
\gal
In the limit $Y\gg 1$ the poles \eq{k2} become $ k_{z0}^\pm=\pm i \gamma k_\bot$ and integration in \eq{k1} yields
\ball{k10}
B_\phi(\b r,t)&= \frac{qv\gamma}{4\pi}\frac{b}{\left(b^2+\gamma^2\zeta ^2\right)^{3/2}}\,,
\gal
which is the magnetic component of the Coulomb field of a moving charge in free space. In the opposite limit $Y\ll 1$, $k_{z0}^-= -i k_\bot^2/v\sigma$ and $k_{z0}^+=i\sigma v\gamma^2$. The corresponding magnetic field is
\ball{k12}
B_\phi(\b r,t)&= \frac{q\sigma b}{8\pi\zeta^2} \exp\left\{-\frac{b^2 \sigma}{4\zeta}\right\}\theta(\zeta)\,.
\gal
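As a quick numerical illustration, at fixed $b$ the field \eq{k12} peaks at
\ball{k13}
\frac{\partial}{\partial \zeta}\ln B_\phi = -\frac{2}{\zeta}+\frac{b^2\sigma}{4\zeta^2}=0\quad\Rightarrow\quad \zeta_\mathrm{peak}=\frac{b^2\sigma}{8}\,,
\gal
which for the parameters used in \fig{fig:2} ($b=7.4$~fm, $\sigma^{-1}=34$~fm) gives $\zeta_\mathrm{peak}\approx 0.2$~fm.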
The contribution at $\zeta<0$ is proportional to $\exp(-\sigma v\gamma^2|\zeta|)$. It can be neglected because in relativistic heavy-ion collisions, the valence charges move with \emph{ultrarelativistic velocities} \texttt{(i)} $\gamma \gg 1$.
The second condition stems from the fact that the \emph{plasma is a poor electrical conductor} in the sense that $b\sigma\ll 1$. Indeed, plasma conductivity at the critical temperature is $\sigma = (34\,\text{fm})^{-1}$ \cite{Aarts:2007wj,Ding:2010ga,Amato:2013oja} and $b=1- 10$~fm. Since the fields depend on the transverse direction through the phase factor $e^{i\b b\cdot \b k_\bot}$, the typical transverse momentum is $k_\bot \sim 1/b$ implying the second condition \texttt{(ii)} $k_\bot \gg \sigma$. It is seen from \eq{k2} that these two conditions imply that $k_z\gg k_\bot$, hence $k\approx k_z$.
The numerical calculation shown in \fig{fig:2} is done in this approximation.
One can develop a simple, but remarkably accurate, analytical approximation using the fact that the time-dependence of the field is given by $e^{-ik_z\zeta}$. This implies that at late times $\zeta\gg 1/\sigma\gamma$ the longitudinal component of the field spectrum is bounded by \texttt{(iii)} $k_z\ll \sigma\gamma$. In this case $k_\bot \ll k_z\ll \gamma\sigma$ which corresponds to $b\sigma \gamma\gg 1$ or $Y\ll 1$. This condition holds well for $\gamma \sim 100$ and above. Moreover, $1/\sigma\gamma$ is smaller than the Compton wavelength of the color charges making up the plasma immediately upon the collision, thus effectively allowing us to regard $1/\sigma\gamma$ as zero (i.e.\ the collision instant). The solution in this case is given by \eq{k12}. This is the phenomenologically relevant case.
At not very high energies and/or small enough $b$ the opposite limit $b\sigma\gamma \ll 1$ or $Y\gg 1$ occurs. In this case there is an interval of momenta $\gamma^2\sigma \ll k_z\ll k_\bot^2/\sigma$ corresponding to the times $\sigma b^2\ll \zeta\ll 1/\gamma^2\sigma$ when the effect of the conductivity is negligible and one recovers \eq{k10}.
In summary, the kinematics of a typical relativistic heavy-ion collision with $\gamma>100$ is such that the field components satisfy $\sigma \ll k_\bot \ll k_z\ll \sigma\gamma$. The same conclusion holds for anomalous plasma provided that $\sigma_\chi$ and $\Sigma_\chi$ are not much larger than $\sigma$. For numerical calculation in \sec{sec:b2} only $\omega \approx k\approx k_z$ approximation is used.
\section{Electromagnetic field at early times} \label{sec:b}
At $t\ll \tau_c$, the time-evolution of the chiral conductivity can be neglected. The time-dependence of the magnetic field follows from \eq{a11} and is given by $\Phi_{\lambda \b k}\sim e^{-ik_z vt}$. Thus, \eq{a18} and \eq{a17} yield
\begin{subequations}
\bal
\Phi_{\lambda \b k}&= \frac{qv \unit z \cdot\b \epsilon_{\lambda\b k}^*\lambda ke^{-ik_z vt}}{k^2-\lambda \sigma_\chi k -(k_zv)^2-i\sigma k_z v}\,,\label{a22} \\
\Psi_{\lambda \b k}&=\frac{iqk_zv^2 \unit z \cdot\b \epsilon_{\lambda\b k}^* e^{-ik_z vt}}{k^2-\lambda \sigma_\chi k -(k_zv)^2-i\sigma k_z v}\,.\label{a23}
\gal
\end{subequations}
Substituting these equations along with \eq{a13} into \eq{a6} and \eq{a7} one derives \cite{Tuchin:2014iua}:
\bal
\b B(\b r, t)= &\int \frac{d^3k}{(2\pi)^3}\frac{qvke^{ik_z(z- vt)}e^{i\b k_\bot\cdot \b b}}{\left[ k^2-(k_zv)^2-i\sigma k_zv\right]^2-(\sigma_\chi k)^2}\nonumber\\
&\times \left\{ \left[ k^2-(k_zv)^2-i\sigma k_z v\right]\sum_\lambda\lambda\b\epsilon_{\lambda \b k}(\unit z\cdot \b\epsilon^*_{\lambda\b k}) + \sigma_\chi k\sum_\lambda\b\epsilon_{\lambda \b k}(\unit z\cdot \b\epsilon^*_{\lambda\b k})
\right\}\label{a24A}\,.
\gal
The explicit expressions for the polarization sums can be found in Appendix.
In particular, the azimuthal component reads, upon using \eq{b16} and \eq{b18} and integrating over the angle between $\b k$ and $\b b$:
\bal
B_\phi(\b r, t)
= &\int \frac{dk_\bot k_\bot}{(2\pi)^2}\int_{-\infty}^\infty dk_z\frac{qvk_\bot e^{ik_z(z- vt)}J_1(k_\bot b) }{\left[ k^2-(k_zv)^2-i\sigma k_zv\right]^2-(\sigma_\chi k)^2} \left[ k^2-(k_zv)^2-i\sigma k_z v\right] \,.
\label{a24B}
\gal
Thus far the calculation has been completely general. Now, employing the approximation $k\approx k_z$ the $k_\bot$-integral can be done by writing in \eq{a24B}
\bal
&\frac{k^2-(k_zv)^2-i\sigma k_z v }{\left[ k^2-(k_zv)^2-i\sigma k_zv\right]^2-(\sigma_\chi k_z)^2}=
\frac{1}{2}\sum_{\lambda=\pm 1} \frac{1}{k_\bot^2+k_z^2/\gamma^2-i\sigma k_zv +\lambda\sigma_\chi k_z}\,.
\gal
The result is
\ball{a24C}
B_\phi(\b r, t)= &\frac{qv}{8\pi^2}\int_{-\infty}^{+\infty} dk_z e^{ik_z(z-vt)}\nonumber\\
&
\times \sum_{\lambda=\pm 1}
\sqrt{k_z^2/\gamma^2-i\sigma k_zv +\lambda\sigma_\chi k_z}\,K_1\!\!\left(b\sqrt{k_z^2/\gamma^2-i\sigma k_zv +\lambda\sigma_\chi k_z}\right)\,.
\gal
This equation is plotted in \fig{fig:2} for $t<\tau_c$. The other components of the magnetic and electric field can be computed in a similar way \cite{Tuchin:2014iua,Li:2016tel}.
Employing a stronger approximation $\sigma \ll k_\bot \ll k_z\ll \sigma\gamma$ and integrating in \eq{a24B} first over $k_z$ and then over $k_\bot$ obtains a simple formula \cite{Tuchin:2014iua,Li:2016tel}
\bal
B_\phi(\b r, t)
\approx &\int \frac{dk_\bot k_\bot}{(2\pi)^2}\int_{-\infty}^{+\infty} dk_z\frac{qvk_\bot e^{ik_z(z- vt)}J_1(k_\bot b) }{\left[ k_\bot^2-i\sigma k_zv\right]^2-(\sigma_\chi k)^2} \left[ k_\bot^2-i\sigma k_z v\right]
\nonumber\\
=& \frac{qb}{8\pi \zeta^2}\exp\left( -\frac{b^2\sigma}{4\zeta}\right)\left[ \sigma \cos\left( \frac{b^2\sigma_\chi}{4\zeta}\right)+\sigma_\chi \sin \left( \frac{b^2\sigma_\chi}{4\zeta}\right)\right]\,.\label{a24D}
\gal
Eq.~\eq{a24D} is a very good approximation of \eq{a24C} in the relativistic heavy-ion collision kinematics.
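As a consistency check, setting $\sigma_\chi=0$ in \eq{a24D} recovers the anomaly-free field \eq{k12}:
\ball{a24E}
B_\phi(\b r,t)\big|_{\sigma_\chi=0}=\frac{q\sigma b}{8\pi \zeta^2}\exp\left(-\frac{b^2\sigma}{4\zeta}\right)\,.
\gal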
\section{Electromagnetic field at later times}\label{sec:b2}
To study the late time $t\gg \tau_c$ behavior of the electromagnetic field, one can regard the chiral conductivity $\sigma_\chi(t)$ as a random process and hence \eq{a18} becomes a stochastic equation describing time-evolution of the field amplitude with momentum $\b k$ and polarization $\lambda$.
Introducing an auxiliary variable $x= \Phi_{\lambda \b k}e^{\sigma t/2}$ one can cast \eq{a18} in the form
\ball{a26}
\ddot x(t)+\omega^2[1+\alpha \xi (t)]x(t)= \lambda k J_{\lambda \b k}(t)e^{\sigma t/2}\,,
\gal
where
\ball{a27}
\omega^2= k^2-\frac{\sigma^2}{4}\,,\qquad
\alpha = -\frac{\lambda k}{\omega^2}\Sigma_\chi\,,\qquad \xi(t) = \frac{\sigma_\chi}{\Sigma_\chi}\,.
\gal
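Indeed, with $x=\Phi_{\lambda\b k}e^{\sigma t/2}$ one has $\ddot x=\left(\ddot\Phi_{\lambda \b k}+\sigma\dot\Phi_{\lambda \b k}+\tfrac{\sigma^2}{4}\Phi_{\lambda \b k}\right)e^{\sigma t/2}$, so that \eq{a18}, divided by $\lambda$, takes the form \eq{a26} with
\ball{a27a}
\omega^2\left[1+\alpha\xi(t)\right]=k^2-\frac{\sigma^2}{4}-\lambda k\,\sigma_\chi(t)\,,
\gal
i.e., the friction term is traded for a random shift of the oscillator frequency.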
Eq.~\eq{a26} describes the one-dimensional harmonic oscillator with random frequency. It does not have an analytical solution. However, one can deduce from it a set of ordinary differential equations for the expectation value of the amplitude moments \cite{VanKampen:1975}. In particular, assuming $\alpha\ll 1$, the average value of $x$ satisfies the equation
\ball{a29}
\partial_t^2\aver{x(t)}+\frac{1}{2}c_2\alpha^2\omega \partial_t\aver{x(t)}+\left(1-\frac{1}{2}\alpha^2c_1\right)\omega^2\aver{x(t)}= \lambda k J_{\lambda \b k}(t)e^{\sigma t/2}(1+\alpha^2 c_0)\,.
\gal
where
\begin{subequations}
\bal
&c_0(\omega)=\int_0^\infty \mathcal{K}(\tau) \sin( \omega \tau)[1-\cos( \omega \tau)]d(\omega \tau)\,,\label{a32}\\
&c_1(\omega)=\int_0^\infty \mathcal{K}(\tau)\sin(2 \omega \tau)d( \omega \tau)\,,\label{a33}\\
&c_2(\omega)=\int_0^\infty \mathcal{K}(\tau)[1-\cos(2 \omega \tau)]d( \omega \tau)\,.\label{a34}
\gal
\end{subequations}
with the autocorrelation function $\mathcal{K}(\tau)= \aver{\xi(t)\xi(t-\tau)}$. Eq.~\eq{a29} can be converted into the equations for the average of the amplitude $\Phi_{\lambda \b k}$:
\ball{a40}
\partial_t^2 \aver{\Phi_{\lambda \b k}}+\left(\sigma+\frac{\alpha^2}{2}c_2\omega\right) \partial_t\aver{ \Phi_{\lambda \b k}}+\left(\omega^2+\frac{\sigma^2}{4}-\frac{\alpha^2}{2}c_1\omega^2+\frac{\alpha^2}{4}c_2\sigma\omega\right)\aver{\Phi_{\lambda \b k}}\nonumber\\
= \lambda k J_{\lambda \b k}(t)(1+\alpha^2 c_0)\,.
\gal
The terms proportional to $\alpha^2$ represent contributions of the fluctuating chiral conductivity. Solution to \eq{a40} is
\ball{a41}
\aver{\Phi_{\lambda \b k}(t)}= \frac{qv \unit z \cdot\b \epsilon_{\lambda\b k}^*\lambda k(1+\alpha^2 c_0)e^{-ik_z vt}}{k^2 -(k_zv)^2-i\sigma k_z v+\alpha^2 Q(\omega) }\,.
\gal
where a shorthand notation is used
\ball{c2}
Q(\omega) = \frac{1}{4}\left( c_2\sigma\omega-2ic_2k_zv\omega -2c_1\omega^2\right) \,.
\gal
Substituting \eq{a41} into \eq{a6} yields the magnetic field:
\ball{c1}
\aver{\b B(\b r,t)}&= \sum_\lambda\int \frac{d^3k}{(2\pi)^3}e^{ik_z(z- vt)}e^{i\b k_\bot\cdot \b b}\frac{qv \b\epsilon_{\lambda \b k}(\unit z \cdot\b \epsilon_{\lambda\b k}^*)\lambda k(1+\alpha^2 c_0)}{k^2 -(k_zv)^2-i\sigma k_z v+\alpha^2 Q(\omega) }\,.
\gal
The $z$-component of the magnetic field vanishes due to \eq{b11}. Its $b$ component vanishes when \eq{b19} is substituted into \eq{c1} and integrated over the azimuthal angle $\psi$. Using \eq{b18} the azimuthal component of the magnetic field is
\ball{c4}
\aver{B_\phi(\b r,t)}&= \int \frac{d^3k}{(2\pi)^3}e^{ik_z(z- vt)}e^{i\b k_\bot\cdot \b b}\frac{(-i\b k\cdot \unit b) qv (1+\alpha^2 c_0)}{k^2 -(k_zv)^2-i\sigma k_z v+\alpha^2 Q(\omega) }\,.
\gal
The electric field is obtained using \eq{a7}, \eq{a13} and \eq{a41}:
\ball{c5}
\aver{\b E(\b r,t)}=&\int \frac{d^3k}{(2\pi)^3}e^{i\b k_\bot \cdot \b b-ik_z(vt-z)}
\left\{
- \sum_\lambda\frac{\lambda}{k}
\frac{(-ik_zv) qv \b\epsilon_{\lambda \b k}(\unit z \cdot\b \epsilon_{\lambda\b k}^*)\lambda k(1+\alpha^2 c_0)}{k^2 -(k_zv)^2-i\sigma k_z v+\alpha^2 Q(\omega) }
-\frac{i\b k}{k^2\varepsilon_{k_zv}}
\right\}
\gal
In particular, employing \eq{b8} and \eq{b17} the non-vanishing components are
\bal
\aver{ E_z(\b r,t)}&=q\int \frac{d^3k}{(2\pi)^3}e^{i\b k_\bot \cdot \b b-ik_z(vt-z)}\frac{ik_z}{k^2}
\frac{ k^2\left(v^2-\varepsilon_{k_zv}^{-1}\right)-\alpha^2 Q(\omega)\varepsilon_{k_zv}^{-1}+ \alpha^2 c_0k_\bot^2v^2}{k^2 -(k_zv)^2-i\sigma k_z v+\alpha^2Q(\omega)}\label{c7}\,,
\\
\aver{ E_b(\b r,t)}&=-q\int \frac{d^3k}{(2\pi)^3}e^{i\b k_\bot \cdot \b b-ik_z(vt-z)}\frac{i\b k\cdot \unit b}{k^2}
\frac{\left( k^2+\alpha^2 Q(\omega)\right) \varepsilon_{k_zv}^{-1}+ \alpha^2 c_0k_z^2v^2}{k^2 -(k_zv)^2-i\sigma k_z v+\alpha^2Q(\omega)}\,.\label{c8}
\gal
Thus, the direction of the \emph{average} electric and magnetic fields is the same as in the anomaly-free case.
In the heavy-ion collision kinematics discussed in \sec{sec:k} $\omega\approx k_z$ which implies $\alpha \approx -\lambda\Sigma_\chi/k_z$ and $\alpha^2 Q(\omega)\approx -\Sigma_\chi^2\left[i c_2(k_z)+c_1(k_z)\right]/2$. This allows taking the transverse momentum integrals in \eq{c4},\eq{c7} and \eq{c8}
\begin{subequations}\label{D1}
\bal
\aver{B_\phi(\b r,t)}=& \frac{qv}{(2\pi)^2}\int_{-\infty}^{+\infty} dk_z(1+\alpha^2c_0)sK_1(bs) e^{-ik_z(vt-z)}\,,\label{d1}\\
\aver{E_z(\b r,t)}=& \frac{qi}{(2\pi)^2}\int_{-\infty}^{+\infty} dk_z k_z\left( v^2-\frac{1}{\varepsilon_{vk_z}}-\frac{\alpha^2}{4k_z^2\varepsilon_{vk_z}}Q(vk_z)\right) K_0(bs) e^{-ik_z(vt-z)}\,,\label{d2}\\
\aver{E_b(\b r,t)}=& \frac{q}{(2\pi)^2}\int_{-\infty}^\infty dk_z\left(1+\frac{\alpha^2}{4k_z^2} Q(vk_z)\right)\frac{1}{\varepsilon_{vk_z}}sK_1(bs) e^{-ik_z(vt-z)}\,,\label{d3}
\gal
where
\ball{d5}
s^2= \frac{k_z^2}{\gamma^2}-ik_zv\sigma + \alpha^2 Q(vk_z)\,.
\gal
\end{subequations}
Note that all three terms are kept in \eq{d5}, i.e.\ no assumption is made about the relationship between $k_z$ and $\sigma \gamma$ (i.e.\ condition \texttt{(iii)} is not imposed as no further analytical integration can be done anyway). As a result, \eq{d1} reduces to \eq{k10} when $\sigma= \Sigma_\chi=0$. Similarly \eq{d2},\eq{d3} also reduce to their corresponding classical free space expressions in this limit.
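This limit can be checked explicitly. For $\sigma=\Sigma_\chi=0$ one has $\alpha=0$ and $s=|k_z|/\gamma$, and the tabulated integral $\int_0^\infty x K_1(ax)\cos(\zeta x)\,dx=\pi a/[2(a^2+\zeta^2)^{3/2}]$ gives
\ball{d8}
\aver{B_\phi}=\frac{qv}{2\pi^2}\int_0^\infty dk_z\,\frac{k_z}{\gamma}K_1\!\left(\frac{bk_z}{\gamma}\right)\cos(k_z\zeta)
=\frac{qv\gamma}{4\pi}\frac{b}{\left(b^2+\gamma^2\zeta^2\right)^{3/2}}\,,
\gal
in agreement with \eq{k10}.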
To estimate the numerical effect of the topological fluctuations on the electromagnetic field, consider the Ornstein-Uhlenbeck random process with the auto-correlation function
$\mathcal{K}(\tau)=\exp(-\tau/\tau_c)$. The corresponding coefficients \eq{a32}--\eq{a34} read
\ball{d7}
&c_0(\omega)= \frac{3(\tau_c\omega)^4}{[1+(\tau_c\omega)^2][1+4(\tau_c\omega)^2]}\,,\qquad c_1(\omega)=\frac{2(\omega\tau_c)^2}{1+4(\omega\tau_c)^2}\,,\qquad
c_2(\omega)= \frac{4(\tau_c\omega)^3}{1+4(\tau_c\omega)^2}\,.
\gal
The magnetic field given by \eq{a24C} and \eq{d1} is plotted in \fig{fig:2} for different values of $\Sigma_\chi$ and $\tau_c=2$~fm. It must be stressed that neither of these quantities is presently constrained by experiment; their values in \fig{fig:2} are chosen for presentation clarity.
The field discontinuity at $t=\tau_c$ is a mere reflection of the fact that neither solution can be trusted at $t\approx \tau_c$.
The main feature is that the magnetic field oscillates when $\Sigma_\chi$ is sufficiently large compared to $\sigma$.
\begin{figure}[ht]
\begin{tabular}{cc}
\includegraphics[height=4.5cm]{B4a.pdf} &
\includegraphics[height=4.5cm]{B4b.pdf}\\
\includegraphics[height=4.5cm]{B3a.pdf} &
\includegraphics[height=4.5cm]{B3b.pdf}
\end{tabular}
\caption{Azimuthal component of magnetic field (left panels) and its absolute value (right panels) at $\tau_c=2$~fm (vertical dotted line). Solid lines: $\aver{\sigma^2_\chi(t)}^{1/2}=\Sigma_\chi\neq 0$ as indicated on each panel, dashed lines: $\aver{\sigma^2_\chi(t)}=0$. The minima of the right panels are the zeros of the field at which it reverses its direction, as shown in the inset in the lower right panel. Other parameters: $\gamma=100$, $\sigma^{-1}=34$~fm, $b=7.4$~fm, $z=0$. }
\label{fig:2}
\end{figure}
\section{Summary}\label{sec:s}
In this paper the electromagnetic field of a fast electric charge in a chiral medium is computed in two cases: in \sec{sec:b} when the chiral conductivity is constant and in \sec{sec:b2} when it is random. In the former case the previous result derived in \cite{Tuchin:2014iua} is reproduced, while in the latter a new result given by \eq{c1}, \eq{c7} and \eq{c8} is derived. In the relativistic heavy-ion collision kinematics the field expressions reduce to \eq{D1}. \fig{fig:2} is a graphic representation of the magnetic field produced in a typical heavy-ion collision by a single valence quark. One observes that the field oscillations at early times, first observed in \cite{Tuchin:2014iua}, may persist at later times at large chiral conductivity. If the chiral conductivity is indeed that large, observation of the chiral magnetic \cite{Kharzeev:2007jp,Fukushima:2008xe} and associated effects \cite{Kharzeev:2015znc} in relativistic heavy-ion collisions becomes especially challenging.
Throughout the paper the topological charge density has been assumed to be spatially homogeneous \cite{Zhitnitsky:2014ria,Kharzeev:2007tn}. In practice this might not be a good approximation if more than one CP-odd domain is produced in a single heavy-ion collision. The impact of spatial and temporal variations as well as the quantum interference effects deserves a dedicated analysis. Furthermore, throughout the paper it is assumed that the electrical conductivity is constant. This is clearly not the case in realistic heavy-ion collisions. One should be mindful of these limitations when considering the phenomenological applications of the results of this work.
\acknowledgments
This work was supported in part by the U.S. Department of Energy under Grant No.\ DE-FG02-87ER40371.
\section{Introduction} \label{Sec:Introduction}
In many wireless network scenarios, the channel is shared among
multiple systems. The coexisting systems create mutual interference,
which poses great challenges for communication systems design.
Conventionally, interference is either treated as noise in the weak
interference case \cite{Jnl:Interferce_as_noise:Tse} or canceled at
the receiver in the strong interference case
\cite{Jnl:Interferce_cancel:Carleial,
Bok:Fundamentals_wireless:Tse}. In the past decade, various schemes
are proposed to utilize multiple signaling dimensions for
interference avoidance and mitigation. In particular, in the recent
breakthrough work \cite{Jnl:IA:Cadambe_Jafar}, the authors show that
the paradigm of interference alignment (IA) can be exploited to
confine mutual interference to some lower dimensional subspace, so
that desired signals can be transmitted on interference-free
subspace. It is shown that this IA scheme, if feasible, is optimal
in the degree-of-freedom (DoF) sense. The results of
\cite{Jnl:IA:Cadambe_Jafar} have triggered a number of extensions
\cite{Cnf:IA_alternating_minimization:Peters_Heath,
Misc:Distributed_IA:Gomadam_Cadambe_Jafar} and related works
\cite{Misc:Real_IA_SISO:Motahari_Khandani,
Misc:Real_IA_MIMO:Ghasemi_Khandani}. These IA-based schemes, albeit
theoretically promising, have various limitations. First, IA-based
schemes require ideal conditions to be feasible, such as perfect
channel state information (CSI) and a very large number of signaling
dimensions. For example, the conventional IA scheme
\cite{Jnl:IA:Cadambe_Jafar} requires time or frequency extensions to
have feasible solutions. For $K$-pairs quasi-static MIMO
interference channels where time / frequency extensions are not
viable, the IA scheme \cite{Jnl:IA:Cadambe_Jafar} is only feasible
for $K \leq 3$ (cf. \cite{Cnf:IA_feasibility:Yetis_Jafar}). Second,
while IA-based schemes have promising DoF performance -- which is an
asymptotic performance measure for very high signal-to-noise ratio
(SNR) -- they are not optimal at the medium SNRs that correspond to
practical applications. When designing practical communication
systems for the interference channel, a number of technical issues
must be considered. Specifically, in practice only imperfect CSI is
available and there are limited signaling dimensions. Moreover, it
is important to ensure satisfactory performance among all the
systems in the network.
In this paper, we consider the problem of robust transceiver design
for the $K$-pair quasi-static MIMO interference channel with
fairness considerations. Specifically, 1) we apply robust design
principles to provide resilience against CSI uncertainties; and 2)
we formulate the transceiver design as a precoder-decorrelator
optimization problem to maximize the worst-case
signal-to-interference-plus-noise ratio (SINR) among all users in
the interference channel. In the literature, precoder-decorrelator
optimization for the worst-case SINR has been proposed for broadcast and
point-to-point systems \cite{Jnl:RobustQosBroadcastMiso:Davidson,
Jnl:RobustQosP2PMimo:Palomar, Cnf:RobustQosBroadcastMimo:Boche,
Jnl:RobustQosP2PMimo:Miquel}. Specifically, in
\cite{Jnl:RobustQosBroadcastMiso:Davidson,
Jnl:RobustQosP2PMimo:Miquel} the authors consider precoding design
for the worst-case SINR in the MISO broadcast channel, where it is shown
that the precoder optimization problem is always convex. In
\cite{Cnf:RobustQosBroadcastMimo:Boche} the authors consider
precoder-decorrelator design for the worst-case SINR MIMO broadcast
channel using an iterative algorithm based on solving convex
subproblems. On the other hand, in
\cite{Jnl:RobustQosP2PMimo:Palomar} the authors consider a
space-time coding scheme for the point-to-point channel with
imperfect channel knowledge. However, these existing works cannot be
extended to robust transceiver design for the MIMO interference
channel, which presents the following key technical challenges.
\textbf{The Precoder-Decorrelator Optimization Problem is NP-Hard:}
The precoder-decorrelator optimization problem for the interference
channel involves solving a separable homogeneous quadratically
constrained quadratic program (QCQP), which is NP-hard in general
\cite{Bok:Palomar, Bok:Palomar2}. One approach to facilitate solving
this class of problems is to apply semidefinite relaxation (SDR) by
relaxing rank constraints; this method was applied in precoding
design for the MISO broadcast channel
\cite{Jnl:Rank_constrained_separable_SDP:Yongwei_Palomar,
Jnl:Downlink_beamforming:Ottersten} and for the MISO multicast channel
\cite{Jnl:MBS-SDMA_using_SDR_with_perfect_CSI:Sidiropoulos_Luo,
Jnl:MBS_using_SDR_with_perfect_CSI:Sidiropoulos_Davidson_Luo}.
Although the resultant semidefinite program (SDP) can be solved
efficiently, its solution does not always have the desired rank
profile.
\textbf{Convergence of Alternative Optimization Algorithm:} Our
proposed solution is based on alternating optimization (AO). The
method of AO was proposed in \cite{Misc:Chan-Byoung_Chae,
Jnl:AO:Chan-Byoung_Chae} for precoder and decorrelator optimization
for multi-user MIMO broadcast channels. However, coupled with the
rank constrained SDP issues as well as the absence of
uplink-downlink duality (as in the case of broadcast channels)
\cite{Jnl:Beamforming_duality:Liu, Cnf:Beamforming_duality:Boche},
establishing the convergence proof of the AO algorithm in the
interference channel is non-trivial \cite{Jnl:AO_math_paper} and
traditional convergence proofs \cite{Misc:Chan-Byoung_Chae,
Jnl:AO:Chan-Byoung_Chae} cannot be applied to our situation.
\emph{Notation}: In the sequel, we adopt the following notations.
$\mathbb{R}^{M \times N}$, $\mathbb{C}^{M \times N}$ and
$\mathbb{Z}^{M \times N}$ denote the set of real, complex and
integer $M \times N$ matrices, respectively; $\mathbb{R}_+$ denotes
the set of positive real numbers; upper and lower case letters
denote matrices and vectors, respectively; $\mathbb{H}^N$ denotes
the set of $N \times N$ Hermitian matrices; $\textbf{X} \succeq 0$
denotes that $\textbf{X}$ is a positive semi-definite matrix; $(
\cdot )^T$ and $( \cdot )^\dag$ denote transpose and Hermitian
transpose, respectively; $\textrm{rank} ( \cdot )$ and $\textrm{Tr}
( \cdot )$ denote matrix rank and trace, respectively;
$[\textbf{X}]_{(a,b)}$ denotes the $(a,b)^{\textrm{th}}$ element of
$\textbf{X}$; $|| \cdot ||$ denotes the Frobenius norm; $\mathcal{I}
( \cdot )$ denotes the indicator function; $\mathcal{K}$ denotes the
index set $\{ 1, \ldots, K \}$ and $\mathcal{L}_k$ denotes the index
set $\{ 1, \ldots, L_k \}$; $\textbf{0}_N$ denotes an $N \times 1$
vector of zeros and $\textbf{I}_{N}$ denotes an $N \times N$
identity matrix; $\mathbb{E}[ \cdot ]$ denotes expectation; and
$\mathcal{CN} ( \boldsymbol\mu, \boldsymbol \Phi )$ denotes complex
Gaussian distribution with mean $\boldsymbol \mu$ and covariance
matrix $\boldsymbol \Phi$.
\begin{figure}[t]
\centering
\includegraphics[width = 3.5in]{./SystemModel}
\caption{System model. There are $K$ source-destination pairs where
each source node is equipped with $M$ antennas and each destination
node is equipped with $N$ antennas. The $k^{\textrm{th}}$
transmitter sends $L_k$ independent data streams to the desired
receiver.}\label{Fig:SystemModel}
\end{figure}
\section{System Model and Review of Prior Works} \label{Sec:System_model_prior_works}
\subsection{System Model} \label{Sec:System_Model}
We consider a MIMO interference channel consisting of $K$
source-destination pairs where each source node is equipped with $M$
antennas and each destination node is equipped with $N$ antennas as
shown in Fig.~\ref{Fig:SystemModel}. For ease of exposition, we
focus on the $k^\textrm{th}$ user referring to source node $S_k$ and
destination node $D_k$; nevertheless, the same model applies to all
other source-destination pairs. Specifically, $S_k$ transmits $L_k$
data streams $\textbf{s}^{(k)} = [ s_1^{(k)} \ldots s^{(k)}_{L_k}
]^T$ to $D_k$, which performs linear detection. The received signal
of $D_k$ is interfered by the transmitted signals of all other
users. To mitigate the impact of mutual interference, prior to
transmission $S_k$ precodes the data streams $\textbf{s}^{(k)}$
using the precoder matrix $\textbf{V}^{(k)} = [ \textbf{v}_1^{(k)}
\ldots \textbf{v}^{(k)}_{L_k} ] \in \mathbb{C}^{M \times L_k}$ and
$D_k$ decorrelates the received signal using the decorrelator matrix
$\textbf{U}^{(k)} = [ \textbf{u}_1^{(k)} \ldots
\textbf{u}^{(k)}_{L_k} ] \in \mathbb{C}^{N \times L_k}$. It follows
that the transmitted signal of $S_k$ is given by
\begin{equation}
\textbf{x}^{(k)} = \textbf{V}^{(k)} \textbf{s}^{(k)} =
\textstyle\sum_{l=1}^{L_k} \textbf{v}_l^{(k)}
s_l^{(k)},\label{Eqn:Tx_signal}
\end{equation}
the received signal of $D_k$ is given by
\begin{IEEEeqnarray*}{Rl}
\textbf{y}^{(k)}&= \textstyle\sum_{j=1}^K \textbf{H}^{(k,j)}
\textbf{x}^{(j)} + \textbf{n}^{(k)}\\
&= \textbf{H}^{(k,k)} \textbf{x}^{(k)} +
\underbrace{\textstyle\sum_{\substack{j=1\\j \neq k}}^K
\textbf{H}^{(k,j)} \textbf{x}^{(j)}}_{\textrm{interference}} +
\textbf{n}^{(k)},\IEEEyesnumber\label{Eqn:Rx_signal}
\end{IEEEeqnarray*}
and the decorrelator output of $D_k$ is given by
\begin{IEEEeqnarray*}{Rl}
\widetilde{\textbf{s}}^{(k)} &= ( \textbf{U}^{(k)} )^\dag
\textbf{y}^{(k)}\\
&= \underbrace{( \textbf{U}^{(k)} )^\dag \textbf{H}^{(k,k)}
\textbf{V}^{(k)} \textbf{s}^{(k)}}_{\textrm{desired signals}}
\IEEEyesnumber\label{Eqn:Decorrelator_output}\\
&+ \underbrace{\textstyle\sum_{\substack{j=1\\j \neq k}}^K (
\textbf{U}^{(k)} )^\dag \textbf{H}^{(k,j)} \textbf{V}^{(j)}
\textbf{s}^{(j)}}_{\textrm{leakage interference}} + (
\textbf{U}^{(k)} )^\dag \textbf{n}^{(k)},
\end{IEEEeqnarray*}
where $\textbf{H}^{(k,j)} \in \mathbb{C}^{N \times M}$ is the fading
channel from $S_j$ to $D_k$ and $\textbf{n}^{(k)} \sim \mathcal{CN}
( \textbf{0}_N, N_0 \textbf{I}_N )$ is the AWGN. As per
(\ref{Eqn:Tx_signal})--(\ref{Eqn:Decorrelator_output}), the estimate
of data stream $s_l^{(k)}$ is given by
\begin{IEEEeqnarray*}{Rl}
\widetilde{s}_l^{(k)} &= \underbrace{( \textbf{u}_l^{(k)} )^\dag
\textbf{H}^{(k,k)} \textbf{v}_l^{(k)} s_l^{(k)}}_{\textrm{desired
signal}} + \underbrace{\textstyle\sum_{\substack{m=1\\m \neq
l}}^{L_k} ( \textbf{u}_l^{(k)} )^\dag \textbf{H}^{(k,k)}
\textbf{v}_m^{(k)} s_m^{(k)}}_{\textrm{inter-stream
interference}}\\
&+ \underbrace{\textstyle\sum_{\substack{j=1\\j \neq k}}^K
\textstyle\sum_{m=1}^{L_j} ( \textbf{u}_l^{(k)} )^\dag
\textbf{H}^{(k,j)} \textbf{v}_m^{(j)} s_m^{(j)}}_{\textrm{leakage
interference}} + ( \textbf{u}_l^{(k)} )^\dag
\textbf{n}^{(k)},\;\;\;\;\IEEEyesnumber\label{Eqn:Signal_estimate_perfect_CSI}
\end{IEEEeqnarray*}
where the severity of the inter-stream and leakage interference
terms depends on the transceiver processing and the CSI assumptions.
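For concreteness, the signal model
(\ref{Eqn:Tx_signal})--(\ref{Eqn:Decorrelator_output}) can be
simulated by the following Python sketch (an illustration only: the
dimensions, the noise power, and the random precoders and
decorrelators are hypothetical placeholders rather than optimized
designs).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, M, N, L, N0 = 3, 4, 4, 2, 0.1   # hypothetical sizes / noise power

def crandn(*shape):                # i.i.d. CN(0,1) entries
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = [[crandn(N, M) for j in range(K)]
     for k in range(K)]            # H[k][j]: channel from S_j to D_k
V = [crandn(M, L) for j in range(K)]  # random (unoptimized) precoders
U = [crandn(N, L) for k in range(K)]  # random (unoptimized) decorrelators

s = [crandn(L) for j in range(K)]     # unit-power data streams
x = [V[j] @ s[j] for j in range(K)]   # transmitted signals x^(k)
y = [sum(H[k][j] @ x[j] for j in range(K))
     + np.sqrt(N0) * crandn(N)
     for k in range(K)]               # received signals y^(k)
s_est = [U[k].conj().T @ y[k]
         for k in range(K)]           # decorrelator outputs
\end{verbatim}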
Considering practical systems, we make the following assumptions
towards designing effective precoders and decorrelators.
\begin{assumption}[Transmit power constraint]\label{Assumption:Pwr}
We assume the data streams are independent and have unit power, i.e.
$\mathbb{E} [ ( \textbf{s}^{(k)} )^\dag \textbf{s}^{(k)} ] =
\textbf{I}_{L_k}$. Furthermore, we assume the maximum transmit power
of the $k^{\textrm{th}}$ source node is $P_k$ so the precoders shall
satisfy the power constraint $\mathbb{E} [ ( \textbf{x}^{(k)} )^\dag
\textbf{x}^{(k)} ] = \sum_{l=1}^{L_k} ( \textbf{v}_l^{(k)} )^\dag
\textbf{v}_l^{(k)} \leq P_k$.~ \hfill\IEEEQEDclosed
\end{assumption}
\begin{assumption}[Fading model]\label{Assumption:Fading}
We assume quasi-static fading so the fading channels
$\textbf{H}^{(k,j)}$ remain unchanged during a fading block. In
addition, we assume $\textrm{rank} ( \textbf{H}^{(k,j)} ) = \min (
M, N )$.~ \hfill\IEEEQEDclosed
\end{assumption}
\begin{assumption}[CSI model]\label{Assumption:CSI} We assume perfect
CSI is available at the receivers (i.e. perfect CSIR), and only
imperfect CSI is available at the transmitters (i.e. imperfect CSIT)
for designing the precoders and decorrelators. Specifically, we
model channel estimates at the transmitters as
\begin{equation}
\widehat{\textbf{H}}^{(k,j)} = \textbf{H}^{(k,j)} -
\mathbf{\Delta}^{(k,j)}, \forall j,k \in
\mathcal{K},\label{Eqn:Channel_estimates}
\end{equation}
where $\mathbf{\Delta}^{(k,j)}$ is the CSI error
\cite{Jnl:Worst-case_robust_MIMO:Jiaheng_Palomar,
Jnl:RobustQosBroadcastMiso:Davidson, Jnl:RobustQosP2PMimo:Palomar}.
Specifically, we assume $|| \mathbf{\Delta}^{(k,j)} ||^2 \leq
\varepsilon$, which implies that the actual channel
$\textbf{H}^{(k,j)}$ belongs to a spherical uncertainty region
centered at $\widehat{\textbf{H}}^{(k,j)}$ with radius
$\varepsilon$. For notational convenience, we denote $\mathcal{H} =
\{ \textbf{H}^{(k,j)} \}_{j,k=1}^K = \{
\widehat{\textbf{H}}^{(k,j)}\!\!+\!\mathbf{\Delta}^{(k,j)}
\}_{j,k=1}^K$ and $\mathcal{\widehat{H}} = \{
\widehat{\textbf{H}}^{(k,j)} \}_{j,k=1}^K$.~ \hfill\IEEEQEDclosed
\end{assumption}
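As a simple illustration of the spherical uncertainty model, the
following sketch draws a CSI error uniformly inside the
Frobenius-norm ball (the uniform distribution is an arbitrary choice
made here for illustration; the model itself only bounds the error
norm).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, M, eps = 4, 4, 0.1              # hypothetical sizes / error bound
H = (rng.standard_normal((N, M))
     + 1j * rng.standard_normal((N, M))) / np.sqrt(2)  # actual channel

# draw Delta uniformly in the ball ||Delta||^2 <= eps
# (a complex N x M matrix has 2NM real dimensions)
D = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
D *= np.sqrt(eps) * rng.uniform() ** (1 / (2 * N * M)) / np.linalg.norm(D)
H_hat = H - D                      # transmitter-side estimate

assert np.linalg.norm(H - H_hat) ** 2 <= eps + 1e-12
\end{verbatim}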
\begin{remark}[Interpretation of the CSI error model]
The imperfect CSIT model (\ref{Eqn:Channel_estimates}) encapsulates
the following scenarios.
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\item \emph{Quantized CSI in FDD Systems
\cite[Section~II\nobreakdash-B]{Jnl:RobustQosBroadcastMiso:Davidson}:}
For FDD systems, the transmitters are provided with quantized CSI
via feedback. Using uniform quantizers, the quantization cells in
the interior of the quantization region can be approximated by
spherical regions of radius equal to the quantization step size. As
a result, the imperfect CSIT model corresponds to quantized CSI
obtained using a uniform vector quantizer with quantization step
size $\sqrt{\varepsilon}$.
\item \emph{Estimated CSI in TDD Systems \cite[Section~IV\nobreakdash-A]{Jnl:RobustQosP2PMimo:Palomar}:}
For TDD systems, the transmitters can estimate the channels from the
sounding signals received in the reverse link. The imperfectness of
the CSIT in this case comes from the estimation noise as well as
delay. Using MMSE channel prediction, the CSI estimate
$\widehat{\textbf{H}}^{(k,j)}$ is unbiased, whereas the CSI error
$\mathbf{\Delta}^{(k,j)}$ is Gaussian distributed and independent
of the CSI estimate $\widehat{\textbf{H}}^{(k,j)}$. As a result,
$\mathbf{\Delta}^{(k,j)}$ is a jointly Gaussian matrix and $||
\mathbf{\Delta}^{(k,j)} ||^2 \leq \varepsilon$ corresponds to
``equal probability contour'' on the probability space of
$\mathbf{\Delta}^{(k,j)}$. In other words, the probability of the
event $|| \mathbf{\Delta}^{(k,j)} ||^2 \leq \varepsilon$ depends on
$\varepsilon$ only. Accordingly, we could find an $\varepsilon$ such
that $\textrm{Pr}[ || \mathbf{\Delta}^{(k,j)} ||^2 \leq \varepsilon
] = 0.99$ (for example).~ \hfill\IEEEQEDclosed
\end{list}
\end{remark}
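For the TDD interpretation above, $\varepsilon$ can be calibrated in
closed form when the prediction error has i.i.d. $\mathcal{CN}( 0,
\sigma^2 )$ entries, since $|| \mathbf{\Delta}^{(k,j)} ||^2$ is then a
scaled chi-square variable with $2NM$ degrees of freedom; the numbers
in the sketch below are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

N, M = 4, 4          # antenna dimensions
sigma2 = 0.01        # hypothetical per-entry error variance
coverage = 0.99      # desired Pr[ ||Delta||^2 <= eps ]

# ||Delta||^2 = (sigma2 / 2) * chi2(2NM) for i.i.d. CN(0, sigma2) entries
eps = 0.5 * sigma2 * chi2.ppf(coverage, df=2 * N * M)
print(f"eps = {eps:.4f}")
\end{verbatim}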
By Assumptions~\ref{Assumption:Pwr}~to~\ref{Assumption:CSI}, the
data stream estimate $\widetilde{s}_l^{(k)}$ in
(\ref{Eqn:Signal_estimate_perfect_CSI}) can be \emph{equivalently}
expressed as
\begin{IEEEeqnarray*}{Rl}
\widetilde{s}_l^{(k)} &= ( \textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,k)}\!\!+\!\mathbf{\Delta}^{(k,k)} )
\textbf{v}_l^{(k)} s_l^{(k)}\\
+&\textstyle\sum_{\substack{m=1\\m \neq l}}^{L_k} (
\textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,k)}\!\!\!+\!\mathbf{\Delta}^{(k,k)} )
\textbf{v}_m^{(k)} s_m^{(k)}\IEEEyesnumber\label{Eqn:Signal_estimate_imperfect_CSI}\\
+&\textstyle\sum_{\substack{j=1\\j \neq k}}^K\!\sum_{m=1}^{L_j} (
\textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,j)}\!\!\!+\!\mathbf{\Delta}^{(k,j)} )
\textbf{v}_m^{(j)} s_m^{(j)}\!+\!( \textbf{u}_l^{(k)} )^\dag
\textbf{n}^{(k)}.
\end{IEEEeqnarray*}
The actual SINR of $\widetilde{s}_l^{(k)}$ at the $k^{\textrm{th}}$
receiver is given by (\ref{Eqn:Sinr_perfect}),
\begin{figure*}[!t]
\normalsize
\begin{IEEEeqnarray*}{l}
\gamma_l^{(k)} ( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} ) \textstyle= \frac{||
( \textbf{u}_l^{(k)} )^\dag ( \widehat{\textbf{H}}^{(k,k)}\!+
\mathbf{\Delta}^{(k,k)} ) \textbf{v}_l^{(k)}
||^2}{\sum_{\substack{m=1\\m \neq l}}^{L_k} || ( \textbf{u}_l^{(k)}
)^\dag ( \widehat{\textbf{H}}^{(k,k)}\!+ \mathbf{\Delta}^{(k,k)} )
\textbf{v}_m^{(k)} ||^2 + \sum_{\substack{j=1\\j \neq k}}^K
\sum_{m=1}^{L_j} || ( \textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,j)}\!+ \mathbf{\Delta}^{(k,j)} )
\textbf{v}_m^{(j)} ||^2 + N_0 || \textbf{u}_l^{(k)}
||^2}\IEEEyesnumber\label{Eqn:Sinr_perfect}
\end{IEEEeqnarray*}
\hrulefill
\end{figure*}
whereby the instantaneous mutual information between data stream
$s_l^{(k)}$ and estimate $\widetilde{s}_l^{(k)}$ can be expressed as
\begin{IEEEeqnarray*}{l}
C_l^{(k)} ( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K, \textbf{u}_l^{(k)} )\IEEEyesnumber\label{Eqn:Cap_perfect}\\
= \log_2( 1 + \gamma_l^{(k)} ( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} ) ).
\end{IEEEeqnarray*}
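Reusing the list-of-matrices conventions of the earlier sketch, the
actual SINR and the corresponding mutual information of a given data
stream can be evaluated as follows.
\begin{verbatim}
import numpy as np

def sinr(H, V, U, k, l, N0):
    """Actual SINR of stream l of user k; H[k][j], V[j], U[k] as before."""
    u = U[k][:, l]
    sig = abs(u.conj() @ H[k][k] @ V[k][:, l]) ** 2
    intf = sum(abs(u.conj() @ H[k][j] @ V[j][:, m]) ** 2
               for j in range(len(V)) for m in range(V[j].shape[1])
               if (j, m) != (k, l))
    return sig / (intf + N0 * np.linalg.norm(u) ** 2)

def rate(H, V, U, k, l, N0):
    """Instantaneous mutual information of the stream (b/s/Hz)."""
    return np.log2(1.0 + sinr(H, V, U, k, l, N0))
\end{verbatim}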
\subsection{Review of Prominent Transceiver Designs for MIMO
Interference Channels}\label{Sec:Prior_works} In the following, we
review the motivations and issues of prominent transceiver designs
for MIMO interference channels in the literature.
\subsubsection{Interference Alignment in Quasi-Static MIMO Signal
Space} \label{Sec:Scheme_IA_MIMO} In \cite{Jnl:IA:Cadambe_Jafar,
Cnf:IA_feasibility:Yetis_Jafar} the authors exploited IA in
quasi-static MIMO signal space for precoder-decorrelator design.
Specifically, assuming perfect CSI, we could obtain precoders and
decorrelators that confine the interference on each destination node
to a lower dimension subspace, such that interference can be more
effectively removed. Note that IA is only feasible with a
sufficiently large number of signaling dimensions. For the $K$-pair quasi-static
$N \times M$ MIMO interference channel, IA could achieve a DoF of $K
\frac{\min ( M, N )}{2}$ for $K \leq 3$ but might not be feasible
for $K > 3$. Moreover, IA is not optimal in general at medium SNR.
For example, consider the data stream estimate
$\widetilde{s}_l^{(k)}$ in
(\ref{Eqn:Signal_estimate_imperfect_CSI}); suppose IA is feasible
then
\begin{equation*}
( \textbf{u}_l^{(k)} )^\dag \widehat{\textbf{H}}^{(k,j)}
\textbf{v}_m^{(j)} = 0, j \neq k \textrm{ or } l \neq m,
\end{equation*}
and the actual SINR of the $l^{\textrm{th}}$ data stream at
$k^{\textrm{th}}$ receiver is given by
\begin{IEEEeqnarray*}{l}
\gamma_l^{(k)} ( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} )\\
\textstyle= \frac{|| ( \textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,k)}\!+ \mathbf{\Delta}^{(k,k)} )
\textbf{v}_l^{(k)} ||^2}{\left(\substack{\sum_{\substack{m=1\\m \neq
l}}^{L_k} || ( \textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,k)}
\textbf{v}_m^{(k)} ||^2\\+ \sum_{\substack{j=1\\j \neq k}}^K
\sum_{m=1}^{L_j} || ( \textbf{u}_l^{(k)} )^\dag
\mathbf{\Delta}^{(k,j)} \textbf{v}_m^{(j)} ||^2 + N_0 ||
\textbf{u}_l^{(k)} ||^2}\right)}.\IEEEyesnumber\label{Eqn:Sinr_IA}
\end{IEEEeqnarray*}
As per (\ref{Eqn:Sinr_IA}), the presence of CSI error
$\mathbf{\Delta}^{(k,j)}$ creates persistent residual interference.
Even when the residual interference is negligible, i.e.
\begin{IEEEeqnarray*}{l}
\gamma_l^{(k)} ( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} ) \textstyle\approx
\frac{|| ( \textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,k)}\!\!+\!\mathbf{\Delta}^{(k,k)} )
\textbf{v}_l^{(k)} ||^2}{N_0 || \textbf{u}_l^{(k)} ||^2},
\end{IEEEeqnarray*}
the conventional IA scheme \cite{Jnl:IA:Cadambe_Jafar,
Cnf:IA_feasibility:Yetis_Jafar} makes no attempt to optimize SINR
performance.
\subsubsection{Interference Alignment in Real Fading
Channels} \label{Sec:Scheme_IA_real} In
\cite{Misc:Real_IA_SISO:Motahari_Khandani,
Misc:Real_IA_MIMO:Ghasemi_Khandani} the authors consider IA along
the real line by creating fictitious signaling dimensions.
Specifically, assuming perfect CSI, we could design the leakage
interference terms at each destination node to have the same scaling
factor (or pseudo direction), such that interference can be
effectively removed. For example, consider the received signal in
(\ref{Eqn:Rx_signal}); for the purpose of illustration let $M=N=1$
and $H^{(k,j)} \in \mathbb{R}$ so
\begin{IEEEeqnarray*}{Rl}
y^{(k)} &\textstyle= H^{(k,k)} x^{(k)} + \sum_{\substack{j=1\\j \neq
k}}^K H^{(k,j)} x^{(j)} + n^{(k)}\\
&\textstyle\stackrel{(a)}{=} \sum_{l=1}^{L_k} (
\hat{H}^{(k,k)}\!+\!\Delta^{(k,k)} ) v_l^{(k)} s_l^{(k)}\\
&\textstyle+\sum_{\substack{j=1\\j \neq k}}^K \sum_{l=1}^{L_j}
\underbrace{( \hat{H}^{(k,j)}\!+\!\Delta^{(k,j)} ) v_l^{(j)}
s_l^{(j)}}_{\textrm{leakage interference}} +
n^{(k)},\label{Eqn:Rx_signal_real}
\end{IEEEeqnarray*}
where (a) follows from (\ref{Eqn:Tx_signal}) and
(\ref{Eqn:Channel_estimates}). To facilitate IA along the real line,
the data streams shall belong to the set of integers (i.e.
$s_l^{(k)} \in \mathbb{Z}$) and we shall choose the precoders such
that $\hat{H}^{(k,j)} v_l^{(j)} = \hat{H}^{(k,m)} v_l^{(m)}$ for $j
\neq m$. It is shown in \cite{Misc:Real_IA_SISO:Motahari_Khandani,
Misc:Real_IA_MIMO:Ghasemi_Khandani} that, if ideally CSI error is
negligible (i.e. $\Delta^{(k,j)} \approx 0$), this scheme could
theoretically achieve a DoF of $K \frac{MN}{M+N}$. However, this
scheme would require infinite SNR and cannot be implemented in
practice.
\subsubsection{Iterative Algorithms to Minimize Leakage
Interference / Maximize SINR} \label{Sec:Scheme_IA_greedy} In
\cite{Misc:Distributed_IA:Gomadam_Cadambe_Jafar,
Cnf:IA_alternating_minimization:Peters_Heath} the authors exploit
uplink-downlink duality and propose iterative algorithms for
precoder-decorrelator design. Specifically, the algorithms in
\cite[Algorithm~1]{Misc:Distributed_IA:Gomadam_Cadambe_Jafar},
\cite{Cnf:IA_alternating_minimization:Peters_Heath} are established
with the objective of sequentially minimizing the aggregate leakage
interference induced by each data stream, whereas the algorithm in
\cite[Algorithm~2]{Misc:Distributed_IA:Gomadam_Cadambe_Jafar} is
established with the objective to sequentially maximize the SINR of
each data stream. Note that the aforementioned algorithms neglect
the presence of CSI error, which could have significant performance
impacts. Moreover, these algorithms neglect individual user
performance and fairness. This is undesirable because for practical
systems it is important to ensure all users have satisfactory
performance.
\section{Problem Formulation: Robust Transceiver Design with
Fairness Considerations} \label{Sec:Problem_formulation} In this
section, we formulate a transceiver design for the $K$-pair
quasi-static MIMO interference channel that is robust against CSI
uncertainties and with the objective of enforcing fairness among all
users' data streams. Specifically, to provide the best resilience
against CSI error, we adopt a worst-case design approach. On the
other hand, the fairness aspect is motivated by the practical system
consideration to ensure all users in the network can have
satisfactory performance. As such, we formulate the
precoder-decorrelator design with imperfect CSIT as an optimization
problem to maximize the \emph{worst-case} SINR among all users' data
streams, subject to the maximum transmit power per source node.
\subsection{Optimization Problem}
The robust and fair transceiver optimization problem for the
$K$-pair $N \times M$ MIMO interference channel consists of the
following components.
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\item \textbf{Optimization Variables}: The optimization variables
include the set of precoders $\{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K$ and the set of decorrelators $\{ \{
\textbf{u}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K$. These variables are
adaptive with respect to imperfect CSIT $\widehat{\mathcal{H}} = \{
\widehat{\textbf{H}}^{(k,j)} \}_{j,k=1}^K$.
\item \textbf{Optimization Objective}: The optimization objective is to
maximize, with imperfect CSIT, the minimum worst-case SINR
among\footnote{Note that (\ref{Eqn:Worst-case_SINR_Tx}) is the
worst-case SINR perceived by the transmitter based on imperfect CSIT
$\widehat{\mathcal{H}} = \{ \widehat{\textbf{H}}^{(k,j)}
\}_{j,k=1}^K$. We choose the worst-case SINR perceived by the
transmitter in order to incorporate robustness against CSI error
$\mathbf{\Delta}^{(k,j)}$.} all users' data streams (perceived by
the transmitter) given by (cf. (\ref{Eqn:Sinr_perfect}) and
Assumption~\ref{Assumption:CSI})
\end{list}
\begin{equation}
\min_{\substack{\forall k \in \mathcal{K}\\\forall l \in
\mathcal{L}_k}} \min_{|| \mathbf{\Delta}^{(k,j)} ||^2 \leq
\varepsilon} \gamma_l^{(k)} ( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)}
).\label{Eqn:Worst-case_SINR_Tx}
\end{equation}
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\item \textbf{Optimization Constraints}: The optimization constraints
are the maximum transmit power for each source node $P_1, \ldots,
P_K$, which give the precoder power constraints $\sum_{l=1}^{L_k} (
\textbf{v}_l^{(k)} )^\dag \textbf{v}_l^{(k)} \leq P_k, \forall k \in
\mathcal{K}$ (cf. Assumption~\ref{Assumption:Pwr}).
\end{list}
Accordingly, the optimization problem can be formally written as
Problem~1.\\
\indent\emph{Problem~1 (Robust Max-Min Fair Precoder-Decorrelator
Design, Problem~$\mathcal{P}$):}
\begin{IEEEeqnarray*}{cl}
\IEEEeqnarraymulticol{2}{l}{ \{ \{ \{ ( \textbf{v}_m^{(j)} )^\star
\}_{m=1}^{L_j} \}_{j=1}^K, \{ \{ ( \textbf{u}_m^{(j)} )^\star
\}_{m=1}^{L_j} \}_{j=1}^K \} = \mathcal{P}( P_1, \ldots, P_K )}\\
\arg\!\!\!\!\!\!\max_{\substack{\textbf{v}_m^{(j)} \in
\mathbb{C}^{M\!\times\!1}\\ \textbf{u}_m^{(j)} \in
\mathbb{C}^{N\!\times\!1}}} \!\min_{\substack{\forall k \in
\mathcal{K}\\\forall l \in \mathcal{L}_k}}
\!\min_{||\!\mathbf{\Delta}^{(k,j)}\!||^2 \leq
\varepsilon}&\gamma_l^{(k)}\!( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K,
\textbf{u}_l^{(k)}\!)\IEEEyessubnumber\label{Eqn:Problem_P0_cost}\\
\textrm{s. t.}&\textstyle\sum_{l=1}^{L_k} ( \textbf{v}_l^{(k)}
)^\dag \textbf{v}_l^{(k)}\!\!\leq\!\!P_k, \forall
k\!\in\!\mathcal{K}.\IEEEyessubnumber\label{Eqn:Problem_P0_TxPwr_constraint}
\end{IEEEeqnarray*}
In (\ref{Eqn:Problem_P0_cost}), the worst-case SINR with imperfect
CSIT is given by the following proposition.
\begin{proposition}[Worst-Case SINR with Imperfect CSIT]
\label{Proposition:Worst-case_SINR} Given CSI estimates
$\widehat{\mathcal{H}} = \{ \widehat{\textbf{H}}^{(k,j)}
\}_{j,k=1}^K$ at the transmitter with error $||
\mathbf{\Delta}^{(k,j)} ||^2 \leq \varepsilon$, the worst-case SINR
of data stream estimate $\widetilde{s}_l^{(k)}$ perceived by the
transmitter can be expressed as (\ref{Eqn:Worst-case_SINR}).
\begin{figure*}[!t]
\normalsize
\begin{equation}
\begin{array}{l}
\displaystyle \min_{|| \mathbf{\Delta}^{(k,j)} ||^2 \leq
\varepsilon} \textstyle \gamma_l^{(k)} ( \mathcal{H}, \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)}
)\\
= \frac{|| ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ||^2 - \varepsilon
|| \textbf{u}_l^{(k)} ||^2 || \textbf{v}_l^{(k)} ||^2}{\sum_{j=1}^K
\sum_{m=1}^{L_j} || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ||^2 + \varepsilon
|| \textbf{u}_l^{(k)} ||^2 \sum_{j=1}^K \sum_{m=1}^{L_j} ||
\textbf{v}_m^{(j)} ||^2 - || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ||^2 - \varepsilon
|| \textbf{u}_l^{(k)} ||^2 ||
\textbf{v}_l^{(k)} ||^2 + N_0 || \textbf{u}_l^{(k)} ||^2}\\
\triangleq \widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)}
).\label{Eqn:Worst-case_SINR}
\end{array}
\end{equation}
\hrulefill
\end{figure*}
\end{proposition}
\begin{proof}
Please refer to Appendix~\ref{Proof:Worst-case_SINR}.
\end{proof}
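The closed form (\ref{Eqn:Worst-case_SINR}) is straightforward to
evaluate; the sketch below follows the same conventions as before,
with Hh holding the channel estimates $\widehat{\textbf{H}}^{(k,j)}$.
\begin{verbatim}
import numpy as np

def worst_case_sinr(Hh, V, U, k, l, eps, N0):
    """Worst-case SINR of stream l of user k at the transmitter."""
    u, v = U[k][:, l], V[k][:, l]
    u2 = np.linalg.norm(u) ** 2
    sig = abs(u.conj() @ Hh[k][k] @ v) ** 2    # nominal signal power
    err = eps * u2 * np.linalg.norm(v) ** 2    # worst-case error term
    tot = sum(abs(u.conj() @ Hh[k][j] @ V[j][:, m]) ** 2
              + eps * u2 * np.linalg.norm(V[j][:, m]) ** 2
              for j in range(len(V)) for m in range(V[j].shape[1]))
    return (sig - err) / (tot - sig - err + N0 * u2)
\end{verbatim}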
Using Proposition~\ref{Proposition:Worst-case_SINR} and letting
$\widetilde{P} = \min( P_1, \ldots, P_K )$ and $\rho_k = P_k /
\widetilde{P}$, we can recast Problem~$\mathcal{P}$ as
\begin{IEEEeqnarray*}{cl}
\IEEEeqnarraymulticol{2}{l}{ \{ \gamma^\star, \{ \{ (
\textbf{v}_m^{(j)} )^\star \}_{m=1}^{L_j} \}_{j=1}^K, \{ \{ (
\textbf{u}_m^{(j)} )^\star \}_{m=1}^{L_j}
\}_{j=1}^K \} = \mathcal{P}( \widetilde{P} )}\;\;\;\;\;\;\;\;\\
\min_{\substack{\textbf{v}_m^{(\!j\!)}\!\in \mathbb{C}^{\!M\!\times\!1}\\
\textbf{u}_m^{(\!j\!)}\!\in \mathbb{C}^{\!N\!\times\!1}\\\gamma \in
\mathbb{R}_+}}\!\!&\;\;-\gamma\IEEEyessubnumber\label{Eqn:Problem_P_cost}\\
\textrm{s. t.}&\widetilde{\gamma}_l^{(k)}\!(
\widehat{\mathcal{H}},\!\{ \{ \textbf{v}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K,\!\textbf{u}_l^{(k)}\!)\!\ge\!\gamma,\!\forall
l\!\in\!\mathcal{L}_k,\!\forall
k\!\in\!\mathcal{K},\IEEEyessubnumber\label{Eqn:Problem_P_QoS_constraint}\\
&\textstyle\sum_{l=1}^{L_k} ( \textbf{v}_l^{(k)} )^\dag
\textbf{v}_l^{(k)} \leq \rho_k \widetilde{P},\forall k \in
\mathcal{K}.\IEEEyessubnumber\label{Eqn:Problem_P_TxPwr_constraint}
\end{IEEEeqnarray*}
\subsection{Properties of the Optimization Problem}
Note that it is not trivial to solve Problem~$\mathcal{P}$ since it
is non-convex and NP-hard in general as we elaborate below. In
Section~\ref{Sec:Iterative_soln}, we shall propose a low complexity
iterative algorithm for solving Problem~$\mathcal{P}$.
\subsubsection{Problem~$\mathcal{P}$ is a non-convex problem}
\label{Sec:Non-convex} The minimum SINR constraints in
(\ref{Eqn:Problem_P_QoS_constraint}) can be rearranged as
\begin{IEEEeqnarray*}{l}
\textstyle( 1 + \gamma ) || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ||^2 + ( \gamma - 1
) \varepsilon || \textbf{u}_l^{(k)} ||^2 || \textbf{v}_l^{(k)}
||^2\\
\textstyle- \gamma N_0 || \textbf{u}_l^{(k)} ||^2 - \gamma
\sum_{j=1}^K \sum_{m=1}^{L_j} || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ||^2\\
\textstyle- \gamma \varepsilon || \textbf{u}_l^{(k)} ||^2
\sum_{j=1}^K \sum_{m=1}^{L_j} || \textbf{v}_m^{(j)} ||^2 \geq
0,\label{Eqn:Problem_P_Qos_rearrange}
\end{IEEEeqnarray*}
which are non-convex inequalities, since they involve indefinite
combinations of squared norms. Therefore, Problem~$\mathcal{P}$ is a
non-convex problem.
\begin{figure}[t]
\centering
\includegraphics[width = 3.5in]{./RelationshipAmongProblems}
\caption{Interrelationship among the optimization
problems.}\label{Fig:RelationshipAmongProblems}
\end{figure}
\subsubsection{Problem~$\mathcal{P}$ is NP-hard in general}
\label{Sec:NP-hard} To illustrate that Problem~$\mathcal{P}$ is
NP-hard in general, we consider the \emph{inverse problem} of
jointly minimizing the transmit powers of all source nodes subject
to a minimum SINR constraint for all users' data
streams\footnote{Please refer to
\cite{Jnl:Linear_precoding_conic:Eldar,
Jnl:Rank_constrained_separable_SDP:Yongwei_Palomar,
Jnl:Downlink_beamforming:Ottersten,
Jnl:MBS-SDMA_using_SDR_with_perfect_CSI:Sidiropoulos_Luo} and
references therein for discussions on the inverse relationship
between max-min fair and minimum power precoder design problems for
MISO \emph{broadcast} and \emph{multicast} channels.}. In
Section~\ref{Sec:Iterative_soln}, we shall propose an algorithm for
solving Problem~$\mathcal{P}$ \emph{facilitated} by solving the
inverse problem\footnote{The inverse problem will be utilized in
Section~\ref{Sec:Optimize_precoders}.} that consists of the
following components.
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\item \textbf{Optimization Variables}: The optimization variables
include the set of precoders $\{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K$ and the set of decorrelators $\{ \{
\textbf{u}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K$.
\item \textbf{Optimization Objective}: The optimization
objective is to minimize the required transmit power of all source
nodes, by means of minimizing the precoder powers $\displaystyle
\textstyle\sum_{l=1}^{L_k} ( \textbf{v}_l^{(k)} )^\dag
\textbf{v}_l^{(k)}$, $\forall k \in \mathcal{K}$.
\item \textbf{Optimization Constraints}: The optimization constraint
is for all users' data streams to meet the prescribed minimum SINR
$\gamma$, i.e. $\widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}},
\{ \{ \textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K,
\textbf{u}_l^{(k)} ) \ge \gamma$.
\end{list}
Accordingly, the inverse problem can be formally written as
Problem~2.\\
\indent\emph{Problem~2 (Power Minimization Precoder-Decorrelator
Design, Problem~$\mathcal{Q}$):}
\begin{IEEEeqnarray*}{cl}
\IEEEeqnarraymulticol{2}{l}{ \{ \beta^\star, \{ \{ (
\textbf{v}_m^{(j)} )^\star \}_{m=1}^{L_j} \}_{j=1}^K, \{ \{ (
\textbf{u}_m^{(j)} )^\star \}_{m=1}^{L_j}
\}_{j=1}^K \} = \mathcal{Q}( \gamma )}\;\;\;\;\;\;\;\;\\
\min_{\substack{\textbf{v}_m^{(\!j\!)}\!\in \mathbb{C}^{\!M\!\times\!1}\\
\textbf{u}_m^{(\!j\!)}\!\in \mathbb{C}^{\!N\!\times\!1}\\\beta \in
\mathbb{R}_+}}\!\!&\;\;\beta\IEEEyessubnumber\label{Eqn:Problem_Q_cost}\\
\textrm{s. t.}&\textstyle\sum_{l=1}^{L_k} ( \textbf{v}_l^{(k)}
)^\dag \textbf{v}_l^{(k)} \leq \rho_k \beta,\;\;\forall k \in
\mathcal{K},\IEEEyessubnumber\label{Eqn:Problem_Q_TxPwr_constraint}\\
&\widetilde{\gamma}_l^{(k)}\!( \widehat{\mathcal{H}},\!\{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K,\!\textbf{u}_l^{(k)}\!)\!\ge\!\gamma,\!\forall
l\!\in\!\mathcal{L}_k,\!\forall
k\!\in\!\mathcal{K}.\IEEEyessubnumber\label{Eqn:Problem_Q_QoS_constraint}
\end{IEEEeqnarray*}
Consider an instance of Problem~$\mathcal{Q}$ with minimum SINR
constraint $\widetilde{\gamma}$, i.e.
\begin{equation}
\{ \widetilde{\beta}, \{ \{ \widetilde{\textbf{v}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \{ \{ \widetilde{\textbf{u}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K \} = \mathcal{Q}( \widetilde{\gamma}
),\label{Eqn:Q_soln}
\end{equation}
and the required transmit power of the $k^{\textrm{th}}$ source node
is $\rho_k \widetilde{\beta}$. It can be shown that
\begin{equation}
\{ \widetilde{\gamma}, \{ \{ \widetilde{\textbf{v}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \{ \{ \widetilde{\textbf{u}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K \} = \mathcal{P}( \widetilde{\beta}
)\label{Eqn:P_Q_soln}
\end{equation}
so we can solve Problem~$\mathcal{Q}$ to obtain a corresponding
solution for Problem~$\mathcal{P}$, and vice-versa. Since
Problem~$\mathcal{Q}$ is NP-hard in general, Problem~$\mathcal{P}$
is also NP-hard. Specifically, we define the \emph{special case} of
Problem~$\mathcal{Q}$ with \emph{fixed} decorrelators as\\
\indent\emph{Problem~3 (Power Minimization Precoder Design
with Fixed Decorrelators, Problem~$\mathcal{Q}_\textrm{v}$):}
\begin{IEEEeqnarray*}{cl}
\IEEEeqnarraymulticol{2}{l}{ \{ \xi^\star, \{ \{ (
\textbf{v}_m^{(j)} )^\star \}_{m=1}^{L_j} \}_{j=1}^K \} =
\mathcal{Q}_\textrm{v}( \gamma, \{ \{
\textbf{u}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K )}\;\;\;\;\;\;\;\;\;\;\;\;\\
\min_{\substack{\textbf{v}_m^{(\!j\!)}\!\in \mathbb{C}^{\!M\!\times\!1}\\
\xi \in
\mathbb{R}_+}}\!\!&\;\;\xi\IEEEyessubnumber\label{Eqn:Problem_Q_V_cost}\\
\textrm{s. t.}&\textstyle\sum_{l=1}^{L_k} ( \textbf{v}_l^{(k)}
)^\dag \textbf{v}_l^{(k)} \leq \rho_k \xi, \forall k \in
\mathcal{K},\IEEEyessubnumber\label{Eqn:Problem_Q_V_TxPwr_constraint}\\
&\widetilde{\gamma}_l^{(k)}\!( \widehat{\mathcal{H}},\!\{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K,\!\textbf{u}_l^{(k)}\!)\!\ge\!\gamma,\!\forall
l\!\in\!\mathcal{L}_k,\!\forall
k\!\in\!\mathcal{K}.\IEEEyessubnumber\label{Eqn:Problem_Q_V_QoS_constraint}
\end{IEEEeqnarray*}
Note that Problem~$\mathcal{Q}_\textrm{v}$ belongs to the class of
separable homogeneous QCQP, which is NP-hard in general
\cite{Bok:Palomar,Bok:Palomar2}. This implies that
Problem~$\mathcal{Q}$, which contains
Problem~$\mathcal{Q}_\textrm{v}$ as special case, is also NP-hard in
general\footnote{Problem~$\mathcal{Q}_\textrm{v}$ will be utilized
in Section~\ref{Sec:Optimize_precoders}.}.
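To make the inverse relationship (\ref{Eqn:Q_soln})--(\ref{Eqn:P_Q_soln})
concrete, the sketch below shows how a solver for
Problem~$\mathcal{Q}$ could in principle be turned into a solver for
Problem~$\mathcal{P}$ by bisection over the target SINR. This is only
an illustration of the inverse relationship; the algorithm proposed in
Section~\ref{Sec:Iterative_soln} proceeds differently.
\begin{verbatim}
def solve_P_via_Q(solve_Q, P_budget, gamma_hi, tol=1e-4):
    """Bisection on the target SINR gamma, exploiting the monotonicity
    of the minimum required power beta in gamma; solve_Q(gamma) is
    assumed to return (beta, solution) for Problem Q."""
    lo, hi, sol = 0.0, gamma_hi, None
    while hi - lo > tol:
        gamma = 0.5 * (lo + hi)
        beta, cand = solve_Q(gamma)
        if beta <= P_budget:     # gamma achievable within the budget
            lo, sol = gamma, cand
        else:
            hi = gamma
    return lo, sol
\end{verbatim}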
\section{Low Complexity Iterative Solution} \label{Sec:Iterative_soln}
In this section, we propose a low complexity iterative algorithm for
solving the robust and fair transceiver optimization problem
$\mathcal{P}$. In particular, the proposed algorithm is facilitated
by solving the inverse Problem~$\mathcal{Q}$, whereby we exploit the
structure of Problem~$\mathcal{Q}$ to apply effective optimization
techniques.
\subsection{Overview of Algorithm} \label{Sec:Overview_algorithm}
The proposed algorithm for solving Problem~$\mathcal{P}$ is
facilitated by solving Problem~$\mathcal{Q}$ as illustrated in
Fig.~\ref{Fig:RelationshipAmongProblems}, which is also detailed in
Algorithm~\ref{Algorithm:Top-level}. Specifically, we iteratively
refine the decorrelators and precoders to monotonically improve the
minimum SINR. Each iteration consists of two stages:
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\item (Steps 1-3 of Algorithm~\ref{Algorithm:Top-level}) First,
given the \emph{status quo} minimum SINR $\widetilde{\gamma}$
achieved with the $k^{\textrm{th}}$ source node transmitting at
power $P_k$, we solve Problem~$\mathcal{Q}$ to optimize the
precoders and decorrelators for minimizing the transmit powers, i.e.
\begin{equation}
\{ \widetilde{\beta}, \{ \{ \widetilde{\textbf{v}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \{ \{ \widetilde{\textbf{u}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K \} = \mathcal{Q}( \widetilde{\gamma}
),\label{Eqn:Target_SINR}
\end{equation}
such that the minimum SINR $\widetilde{\gamma}$ is achieved with the
$k^{\textrm{th}}$ source node transmitting at a \emph{reduced} power
of $\sum_{l=1}^{L_k} ( \widetilde{\textbf{v}}_l^{(k)} )^\dag
\widetilde{\textbf{v}}_l^{(k)} = \rho_k \widetilde{\beta} \leq P_k$.
\item (Steps 4-5 of Algorithm~\ref{Algorithm:Top-level}) Second,
we improve the minimum SINR by up-scaling the transmit precoding
power\footnote{We show in (\ref{Eqn:Convergence2}) that up-scaling
the precoding powers improves the minimum SINR.} of the
$k^{\textrm{th}}$ user to the power constraint $P_k$, i.e.
$\widetilde{\textbf{v}}_m^{(j)} = \sqrt{P_j / (\rho_j
\widetilde{\beta})} \widetilde{\textbf{v}}_m^{(j)}$.
\end{list}
We repeat the iteration until the minimum SINR converges to a
maximum. However, it is not trivial to solve the iteration step as
per (\ref{Eqn:Target_SINR}) since Problem~$\mathcal{Q}$ is NP-hard
in general as shown in Section~\ref{Sec:NP-hard}. As such, we shall
solve Problem~$\mathcal{Q}$ based on alternating optimization
between the decorrelators and the precoders, i.e. we present the
algorithm for optimizing the decorrelators with \emph{fixed}
precoders in Section~\ref{Sec:Optimize_decorrelators}, and introduce
the algorithm for optimizing the precoders with \emph{fixed}
decorrelators in Section~\ref{Sec:Optimize_precoders}. The detailed
top-level steps of the optimization algorithm are summarized below
(Algorithm~\ref{Algorithm:Top-level}) and illustrated in
Fig.~\ref{Fig:RelationshipAmongAlg}. The convergence proof for
Algorithm~\ref{Algorithm:Top-level} is provided in
Appendix~\ref{Proof:Convergence}.
\begin{algorithm}[Top-Level Algorithm]
\label{Algorithm:Top-level}~\\
\textbf{Inputs}: maximum transmit power for each
source node $P_1, \ldots, P_K$\\
\textbf{Outputs}: precoders $\{ \{ ( \textbf{v}_m^{(j)} )^\star
\}_{m=1}^{L_j} \}_{j=1}^K$ and decorrelators $\{ \{ (
\textbf{u}_m^{(j)} )^\star \}_{m=1}^{L_j} \}_{j=1}^K$
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\item \textbf{Step 0}: Initialize decorrelators $\{ \{
\widetilde{\textbf{u}}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K$ and
precoders $\{ \{ \widetilde{\textbf{v}}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K$, where the transmit power for the $j^{\textrm{th}}$
source node is $\sum_{m=1}^{L_j} ( \widetilde{\textbf{v}}_m^{(j)}
)^\dag \widetilde{\textbf{v}}_m^{(j)} = P_j$.
\end{list}
\textbf{Repeat}
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\item \textbf{Step 1}: Optimize the decorrelators with fixed
precoders (cf. Section~\ref{Sec:Optimize_decorrelators})
\begin{equation*}
\{ \{ \widetilde{\textbf{u}}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K =
\mathcal{Q}_\textrm{u}( \{ \{ \widetilde{\textbf{v}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K ).
\end{equation*}
Update the candidate decorrelators $( \textbf{u}_m^{(j)} )^\star$ =
$\widetilde{\textbf{u}}_m^{(j)}$.
\item \textbf{Step 2}: Evaluate the minimum SINR
\begin{equation*}
\begin{array}{l}
\displaystyle\min_{\substack{\forall k \in \mathcal{K}\\\forall l
\in \mathcal{L}_k}} \widetilde{\gamma}_l^{(k)} (
\widehat{\mathcal{H}}, \{ \{ \widetilde{\textbf{v}}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \widetilde{\textbf{u}}_l^{(k)} ) =
\widehat{\gamma}.
\end{array}
\end{equation*}
Update the target SINR $\widetilde{\gamma} = \widehat{\gamma}$.
\item \textbf{Step 3}: Optimize the precoders with fixed
decorrelators (cf. Section~\ref{Sec:Optimize_precoders})
\begin{center}
$\{ \xi, \{ \{ \widetilde{\textbf{v}}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K \} = \mathcal{Q}_\textrm{v}( \widetilde{\gamma}, \{ \{
\widetilde{\textbf{u}}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K )$.
\end{center}
\item \textbf{Step 4}: Evaluate the required transmit power of each
source node $\rho_j \widetilde{\beta} = \sum_{m=1}^{L_j} (
\widetilde{\textbf{v}}_m^{(j)} )^\dag
\widetilde{\textbf{v}}_m^{(j)}$.
\item \textbf{Step 5}: Evaluate the minimum SINR with up-scaled
precoders
\begin{equation*}
\min_{\substack{\forall k \in \mathcal{K}\\\forall l \in
\mathcal{L}_k}} \widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}},
\{ \{ \sqrt{P_j / ( \rho_j \widetilde{\beta} )}
\widetilde{\textbf{v}}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K,
\widetilde{\textbf{u}}_l^{(k)} ) = \widehat{\gamma}.
\end{equation*}
Update the target SINR $\widetilde{\gamma} = \widehat{\gamma}$ and
candidate precoders $( \textbf{v}_m^{(j)} )^\star$ = $\sqrt{P_j / (
\rho_j \widetilde{\beta} )} \widetilde{\textbf{v}}_m^{(j)}$.
\end{list}
\textbf{Until} the minimum SINR $\widetilde{\gamma}$
converges.\\
\textbf{Return} precoders $\{ \{ ( \textbf{v}_m^{(j)} )^\star
\}_{m=1}^{L_j} \}_{j=1}^K$ and decorrelators $\{ \{ (
\textbf{u}_m^{(j)} )^\star \}_{m=1}^{L_j} \}_{j=1}^K$.
\end{algorithm}
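In pseudo-Python, the top-level loop reads as follows. This is a
skeleton only: init_precoders, solve_Qu, solve_Qv and min_sinr are
hypothetical placeholders for the initialization of Step~0, the
subproblem solvers of Sections~\ref{Sec:Optimize_decorrelators} and
\ref{Sec:Optimize_precoders}, and the evaluation of
(\ref{Eqn:Worst-case_SINR}), respectively.
\begin{verbatim}
def algorithm_1(P, rho, init_precoders, solve_Qu, solve_Qv,
                min_sinr, tol=1e-3):
    """Skeleton of Algorithm 1; each pass monotonically improves the
    worst-case SINR (see the Appendix for the convergence proof)."""
    V = init_precoders(P)              # Step 0: full-power start
    gamma_prev = float("-inf")
    while True:
        U = solve_Qu(V)                # Step 1: optimal decorrelators
        gamma = min_sinr(V, U)         # Step 2: current worst-case SINR
        V, beta = solve_Qv(gamma, U)   # Steps 3-4: min-power precoders
        # Step 5: up-scale each user's precoders to its power budget
        V = [v * (P[j] / (rho[j] * beta)) ** 0.5
             for j, v in enumerate(V)]
        gamma = min_sinr(V, U)
        if gamma - gamma_prev < tol:   # stop once the SINR converges
            return V, U, gamma
        gamma_prev = gamma
\end{verbatim}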
\begin{figure}[t]
\centering
\includegraphics[width = 3.5in]{./RelationshipAmongAlg}
\caption{Illustration of overall
algorithm.}\label{Fig:RelationshipAmongAlg}
\end{figure}
\subsection{Decorrelator Optimization with Fixed Precoders}
\label{Sec:Optimize_decorrelators} We define the decorrelator
optimization problem with fixed precoders to maximize the minimum
SINR among all users' data streams as\\
\indent\emph{Problem~4 (Maximum SINR Decorrelator Design with Fixed
Precoders, Problem~$\mathcal{Q}_\textrm{u}$):}
\begin{IEEEeqnarray*}{cl}
\IEEEeqnarraymulticol{2}{l}{ \{ \{ ( \textbf{u}_m^{(j)} )^\star
\}_{m=1}^{L_j} \}_{j=1}^K = \mathcal{Q}_\textrm{u}( \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K )}\\
\arg\!\!\!\!\!\!\max_{\textbf{u}_m^{(j)} \in
\mathbb{C}^{N\!\times\!1}} \min_{\substack{\forall k \in
\mathcal{K}\\\forall l \in \mathcal{L}_k}}
\widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)}
).\IEEEyesnumber\label{Eqn:Problem_Q_U_QoS_constraint}
\end{IEEEeqnarray*}
As per (\ref{Eqn:Problem_Q_U_QoS_constraint}), the worst-case SINR
of data stream estimate $\widetilde{s}_l^{(k)}$ depends only on the
decorrelator $\textbf{u}_l^{(k)}$. Therefore, we can independently
optimize each decorrelator, i.e.
\begin{IEEEeqnarray*}{cl}
\IEEEeqnarraymulticol{2}{l}{ ( \textbf{u}_l^{(k)} )^\star =
\mathcal{Q}_\textrm{u}^{(k,l)}( \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K )}\\
\arg\!\!\!\!\!\!\max_{\textbf{u}_l^{(k)} \in
\mathbb{C}^{N\!\times\!1}}&\;\;\widetilde{\gamma}_l^{(k)} (
\widehat{\mathcal{H}}, \{ \{ \textbf{v}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K, \textbf{u}_l^{(k)}
),\IEEEyesnumber\label{Eqn:Problem_Q_U_cost}
\end{IEEEeqnarray*}
and the optimal decorrelator is given by
Theorem~\ref{Theorem:Optimal_decorrelator}.
\begin{theorem}[Optimal Decorrelator with Fixed Precoders]
\label{Theorem:Optimal_decorrelator} Given precoders $\{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K$, the optimal
decorrelator for data stream estimate $\widetilde{s}_l^{(k)}$ is
given by $( \textbf{u}_l^{(k)} )^\star = \frac{( \textbf{F}_l^{(k)}
)^{-\frac{1}{2}} ( \textbf{w}_l^{(k)} )^\star}{|| (
\textbf{F}_l^{(k)} )^{-\frac{1}{2}} ( \textbf{w}_l^{(k)} )^\star
||}$, where
\begin{IEEEeqnarray*}{l}
\textbf{F}_l^{(k)} \textstyle= \sum_{j=1}^K \sum_{m=1}^{L_j}
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ( \textbf{v}_m^{(j)}
)^\dag ( \widehat{\textbf{H}}^{(k,j)} )^\dag\\
\textstyle+ \varepsilon \sum_{j=1}^K \sum_{m=1}^{L_j} ||
\textbf{v}_m^{(j)} ||^2 \textbf{I}_N - \widehat{\textbf{H}}^{(k,k)}
\textbf{v}_l^{(k)} ( \textbf{v}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,k)} )^\dag\\
- \varepsilon || \textbf{v}_l^{(k)} ||^2 \textbf{I}_N + N_0
\textbf{I}_N,
\end{IEEEeqnarray*}
$( \textbf{w}_l^{(k)} )^\star$ is the principal eigenvector of $(
\textbf{F}_l^{(k)} )^{-\frac{1}{2}} \textbf{E}_l^{(k)} (
\textbf{F}_l^{(k)} )^{-\frac{1}{2}}$, and $\textbf{E}_l^{(k)} =
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ( \textbf{v}_l^{(k)}
)^\dag ( \widehat{\textbf{H}}^{(k,k)} )^\dag - \varepsilon ||
\textbf{v}_l^{(k)} ||^2 \textbf{I}_N$.
\end{theorem}
\begin{IEEEproof}
Please refer to Appendix~\ref{Proof:Optimal_decorrelator}.
\end{IEEEproof}
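A direct implementation of
Theorem~\ref{Theorem:Optimal_decorrelator} is sketched below,
assuming $\textbf{F}_l^{(k)}$ is positive definite (which holds for
sufficiently small $\varepsilon$); the conventions are those of the
earlier snippets.
\begin{verbatim}
import numpy as np

def optimal_decorrelator(Hh, V, k, l, eps, N0):
    """Theorem 1: whiten by F^(-1/2), take the principal eigenvector
    of the whitened signal matrix, and un-whiten."""
    Nr = Hh[k][k].shape[0]
    v = V[k][:, l]
    S = Hh[k][k] @ np.outer(v, v.conj()) @ Hh[k][k].conj().T
    e = eps * np.linalg.norm(v) ** 2
    E = S - e * np.eye(Nr)
    F = sum(Hh[k][j] @ V[j] @ V[j].conj().T @ Hh[k][j].conj().T
            + eps * np.linalg.norm(V[j]) ** 2 * np.eye(Nr)
            for j in range(len(V)))
    F = F - S - e * np.eye(Nr) + N0 * np.eye(Nr)
    lam, Q = np.linalg.eigh(F)                  # F Hermitian, assumed PD
    Fm = Q @ np.diag(lam ** -0.5) @ Q.conj().T  # F^(-1/2)
    _, W = np.linalg.eigh(Fm @ E @ Fm)
    u = Fm @ W[:, -1]            # principal eigenvector, un-whitened
    return u / np.linalg.norm(u)
\end{verbatim}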
\subsection{Precoder Optimization with Fixed Decorrelators}
\label{Sec:Optimize_precoders} In Section~\ref{Sec:NP-hard}, we
defined the precoder optimization problem with fixed decorrelators,
Problem~$\mathcal{Q}_\textrm{v}$ (cf.
(\ref{Eqn:Problem_Q_V_cost})--(\ref{Eqn:Problem_Q_V_QoS_constraint})).
Since Problem~$\mathcal{Q}_\textrm{v}$ belongs to the class of
separable homogeneous QCQP, it is NP-hard in general. In the
literature, some authors consider instances of this class of
problems for the MISO \emph{broadcast} channel that are always
solvable (cf. \cite{Jnl:Linear_precoding_conic:Eldar,
Jnl:Downlink_beamforming:Ottersten} and references therein), whereas
some authors consider problems for the MISO \emph{multicast} channel
that are always NP-hard (cf.
\cite{Jnl:MBS-SDMA_using_SDR_with_perfect_CSI:Sidiropoulos_Luo} and
references therein). For the \emph{interference} channel model
considered herein, we provide an algorithm for obtaining the optimal
solution for Problem~$\mathcal{Q}_\textrm{v}$.
One effective approach for solving separable homogeneous QCQPs is to
apply semidefinite relaxation (SDR) techniques. Let
$\textbf{V}_l^{(k)} = \textbf{v}_l^{(k)} ( \textbf{v}_l^{(k)}
)^\dag$. From (\ref{Eqn:Worst-case_SINR}), the worst-case SINR of
data stream estimate $s_l^{(k)}$ can be expressed as
(\ref{Eqn:Worst-case_SINR_V}).
\begin{figure*}[!t]
\normalsize
\begin{IEEEeqnarray*}{Rl}
\widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} )
&\textstyle= \frac{\textrm{Tr} ( ( \widehat{\textbf{H}}^{(k,k)}
)^\dag \textbf{u}_l^{(k)} ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{V}_l^{(k)} ) - \varepsilon ||
\textbf{u}_l^{(k)} ||^2 \textrm{Tr} ( \textbf{V}_l^{(k)} )}{\left(
\substack{\sum_{j=1}^K \sum_{m=1}^{L_j} \textrm{Tr} ( (
\widehat{\textbf{H}}^{(k,j)} )^\dag \textbf{u}_l^{(k)} (
\textbf{u}_l^{(k)} )^\dag \widehat{\textbf{H}}^{(k,j)}
\textbf{V}_m^{(j)} ) + \varepsilon || \textbf{u}_l^{(k)} ||^2
\sum_{j=1}^K \sum_{m=1}^{L_j} \textrm{Tr} ( \textbf{V}_m^{(j)} )\\-
\textrm{Tr} ( ( \widehat{\textbf{H}}^{(k,k)} )^\dag
\textbf{u}_l^{(k)} ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{V}_l^{(k)} ) - \varepsilon ||
\textbf{u}_l^{(k)} ||^2 \textrm{Tr} ( \textbf{V}_l^{(k)} ) + N_0 ||
\textbf{u}_l^{(k)} ||^2}
\right)}\;\;\;\;\;\;\IEEEyesnumber\label{Eqn:Worst-case_SINR_V}\\
&\textstyle\triangleq \widetilde{\Gamma}_l^{(k)} (
\widehat{\mathcal{H}}, \{ \{ \textbf{V}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K, \textbf{u}_l^{(k)} ).
\end{IEEEeqnarray*}
\hrulefill
\end{figure*}
It follows that we can \emph{equivalently} express the precoder
optimization problem with fixed decorrelators as
\begin{IEEEeqnarray*}{cl}
\IEEEeqnarraymulticol{2}{l}{ \{ \Xi^\star, \{ \{ (
\textbf{V}_m^{(j)} )^\star \}_{m=1}^{L_j} \}_{j=1}^K \} =
\widetilde{\mathcal{Q}}_\textrm{v}( \gamma, \{ \{
\textbf{u}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K )}\;\;\;\;\;\;\;\;\;\;\;\;\\
\min_{\substack{\textbf{V}_m^{(\!j\!)}\!\in
\mathbb{C}^{\!M\!\times\!M}\\\Xi \in
\mathbb{R}_+}}\!\!&\;\;\Xi\IEEEyessubnumber\label{Eqn:Problem_R_V_cost}\\
\textrm{s. t.}&\textstyle\sum_{l=1}^{L_k} \textrm{Tr} (
\textbf{V}_l^{(k)} ) \leq \rho_k \Xi, \forall k \in
\mathcal{K},\IEEEyessubnumber\label{Eqn:Problem_R_V_TxPwr_constraint}\\
&\widetilde{\Gamma}_l^{(k)}\!( \widehat{\mathcal{H}},\!\{ \{
\textbf{V}_m^{(j)} \}_{m=1}^{L_j}
\}_{j=1}^K,\!\textbf{u}_l^{(k)}\!)\!\ge\!\gamma,\!\forall
l\!\in\!\mathcal{L}_k,\!\forall
k\!\in\!\mathcal{K},\IEEEyessubnumber\label{Eqn:Problem_R_V_QoS_constraint}\\
&\textbf{V}_l^{(k)} \succeq 0, \forall k \in \mathcal{K}, \forall l
\in
\mathcal{L}_k,\IEEEyessubnumber\label{Eqn:Problem_R_V_SD_constraint}\\
&\textrm{rank} ( \textbf{V}_l^{(k)} ) = 1, \forall k \in
\mathcal{K}, \forall l \in
\mathcal{L}_k,\IEEEyessubnumber\label{Eqn:Problem_R_V_rank_constraint}
\end{IEEEeqnarray*}
where (\ref{Eqn:Problem_R_V_SD_constraint}) and
(\ref{Eqn:Problem_R_V_rank_constraint}) follow from the definition
of $\textbf{V}_l^{(k)}$, (\ref{Eqn:Problem_R_V_TxPwr_constraint})
are power constraints, and (\ref{Eqn:Problem_R_V_QoS_constraint})
are SINR constraints. Note that we could obtain the optimal precoder
$( \textbf{v}_m^{(j)} )^\star$ from the eigenvector of $(
\textbf{V}_m^{(j)} )^\star$ corresponding to the only non-zero
eigenvalue.
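For instance, a rank-one $( \textbf{V}_m^{(j)} )^\star$ can be
factorized as in the following sketch.
\begin{verbatim}
import numpy as np

def extract_precoder(Vmat):
    """Recover v from a (numerically) rank-one PSD matrix V = v v^dag."""
    lam, Q = np.linalg.eigh(Vmat)       # eigenvalues in ascending order
    return np.sqrt(max(lam[-1], 0.0)) * Q[:, -1]
\end{verbatim}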
Comparing between Problem~$\widetilde{\mathcal{Q}}_\textrm{v}$ and
Problem~$\mathcal{Q}_\textrm{v}$, the SINR constraints of
Problem~$\widetilde{\mathcal{Q}}_\textrm{v}$
(\ref{Eqn:Problem_R_V_QoS_constraint}) are convex inequalities, i.e.
\begin{IEEEeqnarray*}{l}
( 1\!+\!\gamma ) \textrm{Tr} ( ( \widehat{\textbf{H}}^{(k,k)} )^\dag
\textbf{u}_l^{(k)} ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{V}_l^{(k)} ) - \gamma N_0 ||
\textbf{u}_l^{(k)} ||^2\\
\textstyle+ ( \gamma\!-\!1 )\varepsilon || \textbf{u}_l^{(k)} ||^2
\textrm{Tr} ( \textbf{V}_l^{(k)} ) - \gamma \varepsilon ||
\textbf{u}_l^{(k)} ||^2 \sum_{j=1}^K \sum_{m=1}^{L_j} \textrm{Tr} ( \textbf{V}_m^{(j)} )\\
\textstyle- \gamma \sum_{j=1}^K \sum_{m=1}^{L_j} \textrm{Tr} ( (
\widehat{\textbf{H}}^{(k,j)} )^\dag \textbf{u}_l^{(k)} (
\textbf{u}_l^{(k)} )^\dag \widehat{\textbf{H}}^{(k,j)}
\textbf{V}_m^{(j)} ) \ge 0,
\end{IEEEeqnarray*}
but Problem~$\widetilde{\mathcal{Q}}_\textrm{v}$ is still a
non-convex problem due to the rank constraints
(\ref{Eqn:Problem_R_V_rank_constraint}). By means of SDR, we
\emph{neglect} the rank constraints and
Problem~$\widetilde{\mathcal{Q}}_\textrm{v}$ degenerates into an SDP
that can be solved efficiently \cite{Bok:Convex_optimization:Boyd}.
In general, the resultant solution $\{ \Xi^\star, \{ \{ (
\textbf{V}_m^{(j)} )^\star \}_{m=1}^{L_j} \}_{j=1}^K \}$ could have
arbitrary rank. If $\textrm{rank} ( ( \textbf{V}_m^{(j)} )^\star ) =
1$, $\forall m \in \mathcal{L}_j$ and $\forall j \in \mathcal{K}$,
then constraints (\ref{Eqn:Problem_R_V_rank_constraint}) are
intrinsically satisfied and $\{ \{ ( \textbf{V}_m^{(j)} )^\star
\}_{m=1}^{L_j} \}_{j=1}^K$ are optimal. The following theorem
summarizes the optimality of the SDR solution in
(\ref{Eqn:Problem_R_V_cost})--(\ref{Eqn:Problem_R_V_rank_constraint}).
\begin{theorem}[Optimality of the SDR Solution]\label{Theorem:Optimiality_conditions}
The SDR solution of Problem~$\widetilde{\mathcal{Q}}_{\textrm{v}}$
will always give rank-one solutions (i.e. $\textrm{rank} ( (
\textbf{V}_m^{(j)} )^\star ) = 1$) and hence the SDR solution is
optimal for Problem~$\widetilde{\mathcal{Q}}_{\textrm{v}}$.
\end{theorem}
\begin{IEEEproof}
Please refer to Appendix~\ref{Proof:Optimiality_conditions}.
\end{IEEEproof}
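To illustrate, a minimal CVXPY sketch of the relaxed problem (rank
constraints dropped, as justified by
Theorem~\ref{Theorem:Optimiality_conditions}) could read as follows;
the conventions are those of the earlier snippets, the solver and its
tolerances are left at their defaults, and the precoders are
recovered from the returned matrices as in the factorization sketch
above.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_Qv_sdr(Hh, U, gamma, rho, eps, N0):
    """SDR of the lifted precoder problem: minimize Xi subject to the
    power, worst-case SINR, and PSD constraints."""
    K, M = len(U), Hh[0][0].shape[1]
    L = [U[k].shape[1] for k in range(K)]
    V = [[cp.Variable((M, M), hermitian=True) for _ in range(L[j])]
         for j in range(K)]
    Xi = cp.Variable(nonneg=True)
    cons = [Vlk >> 0 for Vj in V for Vlk in Vj]     # PSD constraints
    for k in range(K):                              # per-node power
        cons.append(sum(cp.real(cp.trace(V[k][l]))
                        for l in range(L[k])) <= rho[k] * Xi)
    for k in range(K):                              # worst-case SINR
        for l in range(L[k]):
            u = U[k][:, l]
            u2 = np.linalg.norm(u) ** 2
            def A(j):   # constant matrix (Hh^dag u)(Hh^dag u)^dag
                return Hh[k][j].conj().T @ np.outer(u, u.conj()) @ Hh[k][j]
            sig = cp.real(cp.trace(A(k) @ V[k][l]))
            err = eps * u2 * cp.real(cp.trace(V[k][l]))
            tot = sum(cp.real(cp.trace(A(j) @ V[j][m]))
                      + eps * u2 * cp.real(cp.trace(V[j][m]))
                      for j in range(K) for m in range(L[j]))
            cons.append(sig - err >= gamma * (tot - sig - err + N0 * u2))
    cp.Problem(cp.Minimize(Xi), cons).solve()
    return Xi.value, [[V[j][m].value for m in range(L[j])]
                      for j in range(K)]
\end{verbatim}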
\section{Simulation Results and Discussions} \label{Sec:Simulation_results}
In this section, we evaluate the proposed robust transceiver design
via numerical simulations. In particular, we compare the performance
of the proposed scheme against four baseline schemes:
\begin{list}{\labelitemi}{\leftmargin=0em}
\item \textbf{Baseline 1}: the conventional IA scheme
\cite{Jnl:IA:Cadambe_Jafar};
\item \textbf{Baseline 2}: the SINR maximization scheme
\cite[Algorithm~2]{Misc:Distributed_IA:Gomadam_Cadambe_Jafar};
\item \textbf{Baseline 3}: a naive max-min SINR scheme adopted from
\cite{Cnf:RobustQosBroadcastMimo:Boche};
\item \textbf{Baseline 4}: a naive max-min SINR scheme adopted from
\cite{Jnl:Linear_precoding_conic:Eldar}.
\end{list}
As discussed in Section~\ref{Sec:Prior_works}, baselines 1 and 2 are
theoretically promising schemes for the interference channel but
neglect important practical issues such as CSI uncertainty and
fairness among users. On the other hand, baselines 3 and 4 are
adopted from existing max-min SINR schemes that are originally
designed for the broadcast channel (i.e. there is \emph{only a
single transmitter} and multiple receivers). Without loss of
generality, we assume independent and identically distributed (iid)
Rayleigh fading channels, i.e. $[\textbf{H}^{(k,j)}]_{(a,b)} \sim
\mathcal{CN} ( 0, 1 )$, $\forall j,k \in \mathcal{K}$, $\forall a
\in [1,N]$, and $\forall b \in [1,M]$. For the purpose of
illustration, we consider the scenario where all users have the same
power constraint $P_1 = \ldots = P_K = P$. In
Fig.~\ref{Fig:K3_M4_L2_WorstStream} to Fig.~\ref{Fig:Imperfect} we
present simulation results for the average data rates\footnote{The
average data rate is defined as the average goodput (i.e. the
bits/s/Hz successfully delivered to the receiver). Specifically, the
goodput of data stream $s^{(k)}_{l}$ is given by $r_l^{(k)}
\mathcal{I} ( r_l^{(k)} \leq C_l^{(k)} )$, where $r_l^{(k)} =
\log_2( 1 + \gamma_l^{(k)}( \widehat{\mathcal{H}}, \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} )
)$ is the scheduled data rate based on the SINR perceived with
respect to imperfect CSIT $\widehat{\mathcal{H}} = \{
\widehat{\textbf{H}}^{(k,j)} \}_{j,k=1}^K$, and $C_l^{(k)} = \log_2(
1 + \gamma_l^{(k)}( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} ) )$ is the actual
instantaneous mutual information.} versus SNR\footnote{The SNR is
defined as $\frac{P}{N_0}$, where $N_0$ is the AWGN variance.} with
different numbers of users and levels of CSI uncertainty.
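The goodput metric of the footnote can be computed per stream as in
the following sketch, with the perceived SINR evaluated from the
estimates $\widehat{\mathcal{H}}$ and the actual SINR from
$\mathcal{H}$ (e.g. with the sinr routine of the earlier snippet).
\begin{verbatim}
import numpy as np

def stream_goodput(gamma_perceived, gamma_actual):
    """Goodput of one stream: the rate scheduled from the SINR
    perceived under imperfect CSIT is delivered only when it does
    not exceed the actual instantaneous mutual information."""
    r = np.log2(1.0 + gamma_perceived)   # scheduled rate
    C = np.log2(1.0 + gamma_actual)      # actual mutual information
    return float(r) if r <= C else 0.0
\end{verbatim}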
\subsection{Fairness Performance}
In Fig.~\ref{Fig:K3_M4_L2_WorstStream} and
Fig.~\ref{Fig:K3_M4_L2_Sum}, we compare the average data rates of
the proposed and baseline schemes. For the purpose of illustration,
we consider the three-user $4 \times 4$ MIMO interference channel,
where each user transmits $L=2$ data streams and the precoders are
designed with imperfect CSIT with $\varepsilon = \{ 0.1, 0.15 \}$,
whereas the receivers have perfect CSIR. It can be observed that the
proposed scheme achieves much higher average worst-case data rate
per user than all the baseline schemes, and thus provides better
minimum performance. For example, at CSI error $\varepsilon = 0.15$,
the proposed scheme has a 5~dB SNR gain over the SINR maximization
algorithm (baseline 2) in providing a worst-case data rate of 6
b/s/Hz, whereas the conventional IA scheme (baseline 1) cannot
provide a worst-case data rate of 6 b/s/Hz at all. The superior
performance of the proposed scheme is attributable to both the SDR
approach and a suitably chosen utility function (optimizing the
worst-case performance). Specifically, the chosen utility function 1)
provides resilience against CSI uncertainties and 2) achieves fair
performance among users. In addition, the SDR approach contributes to
obtaining a good solution of the optimization problem.
\subsection{Total Sum Data Rate Performance}
In Fig.~\ref{Fig:K3_M4_L2_Sum}, we compare the average total sum
data rates of the proposed and baseline schemes for $K=3$, $N=M=4$,
$L=2$, and CSI error $\varepsilon = \{ 0.1, 0.15 \}$. It can be
observed that the proposed scheme not only achieves better
worst-case data rate but also achieves higher total sum data rate
than all the baseline schemes. In particular, due to the presence of
CSI error, the total sum rate of the conventional IA scheme
(baseline 1) no longer scales linearly with the SNR. Comparing
Fig.~\ref{Fig:K3_M4_L2_Sum} with
Fig.~\ref{Fig:K3_M4_L2_WorstStream}, it can be observed that the
proposed scheme achieves the performance gain on fairness without
sacrificing the total sum data rate.
\begin{figure}[t]
\centering
\includegraphics[width = 3.5in]{./WorstUserGoodputB}
\caption{Average data rate of the worst-case user versus SNR. $K=3$,
$N=M=4$, $L=2$ and CSI error $\varepsilon = \{0.1 ,
0.15\}$.}\label{Fig:K3_M4_L2_WorstStream}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 3.5in]{./SumGoodputD}
\caption{Average total sum data rate versus SNR. $K=3$, $N=M=4$,
$L=2$ and CSI error $\varepsilon = \{0.1 ,
0.15\}$.}\label{Fig:K3_M4_L2_Sum}
\end{figure}
\subsection{Robustness to CSI Errors}
In Fig.~\ref{Fig:Imperfect}, we show the average worst-case data
rates of the proposed and baseline schemes for different levels of
CSI uncertainty. It can be observed that the proposed scheme always
achieves higher average worst-case data rate than the baseline
schemes. For example, the SINR maximization algorithm (baseline 2)
is designed assuming perfect CSI; its performance degrades rapidly
for CSI error $\varepsilon > 0.02$ and it can be observed that the
achieved data rate could decrease with increasing SNR. On the other
hand, the proposed scheme achieves a graceful degradation with
respect to CSI errors.
\begin{figure}[t]
\centering
\includegraphics[width = 3.5in]{./Imperfect}
\caption{Average worst-case data rate versus CSI errors. $K=3$,
$N=M=4$, $L=2$ and SNR 18dB and 23dB.}\label{Fig:Imperfect}
\end{figure}
\section{Conclusions} \label{Sec:Conclusions}
In this paper, we proposed a robust transceiver design for the
$K$-pair quasi-static MIMO interference channel with fairness
considerations. Specifically, we formulated the
precoder-decorrelator design as an optimization problem to maximize
the worst-case SINR among all users. We devised a low complexity
iterative algorithm based on AO and SDR techniques. Numerical
results verify the advantages of incorporating important practical
issues, such as CSI uncertainty and fairness, into the transceiver
design for the interference channel.
\appendices
\begin{figure*}[!t]
\normalsize
\begin{IEEEeqnarray*}{l}
\gamma_l^{(k)} ( \mathcal{H}, \{ \{ \textbf{v}_m^{(j)}
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} )\\
\textstyle= \frac{|| ( \textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,k)}\!+ \mathbf{\Delta}^{(k,k)} )
\textbf{v}_l^{(k)} ||^2}{\sum_{\substack{m=1\\m \neq l}}^{L_k} || (
\textbf{u}_l^{(k)} )^\dag ( \widehat{\textbf{H}}^{(k,k)}\!+
\mathbf{\Delta}^{(k,k)} ) \textbf{v}_m^{(k)} ||^2 +
\sum_{\substack{j=1\\j \neq k}}^K \sum_{m=1}^{L_j} || (
\textbf{u}_l^{(k)} )^\dag ( \widehat{\textbf{H}}^{(k,j)}\!+
\mathbf{\Delta}^{(k,j)} ) \textbf{v}_m^{(j)} ||^2 + N_0 ||
\textbf{u}_l^{(k)} ||^2}\IEEEyessubnumber\label{Eqn:Sinr_perfect0}\\
\textstyle \ge \frac{|| ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ||^2 - || (
\textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,k)} \textbf{v}_l^{(k)}
||^2}{\sum_{j=1}^K \sum_{m=1}^{L_j} ( || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ||^2 + || (
\textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,j)} \textbf{v}_m^{(j)}
||^2 ) - (|| ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ||^2 + || (
\textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,k)} \textbf{v}_l^{(k)}
||^2 ) + N_0 ||
\textbf{u}_l^{(k)} ||^2}\IEEEyessubnumber\label{Eqn:Sinr_perfect1}\\
\textstyle \ge \frac{|| ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ||^2 - \varepsilon
|| \textbf{u}_l^{(k)} ||^2 || \textbf{v}_l^{(k)} ||^2}{\sum_{j=1}^K
\sum_{m=1}^{L_j} || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ||^2 + \varepsilon
|| \textbf{u}_l^{(k)} ||^2 \sum_{j=1}^K \sum_{m=1}^{L_j} ||
\textbf{v}_m^{(j)} ||^2 - || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)} ||^2 - \varepsilon
|| \textbf{u}_l^{(k)} ||^2 || \textbf{v}_l^{(k)} ||^2 + N_0 ||
\textbf{u}_l^{(k)} ||^2}\IEEEyessubnumber\label{Eqn:Sinr_perfect2}\\
\triangleq \widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} ).
\end{IEEEeqnarray*}
\hrulefill
\end{figure*}
\section{Proof: Worst-Case SINR with Imperfect
CSIT}\label{Proof:Worst-case_SINR} Given CSI estimates
$\widehat{\mathcal{H}} = \{ \widehat{\textbf{H}}^{(k,j)}
\}_{j,k=1}^K$ at the transmitter, the worst-case SINR for each data
stream estimate can be expressed as follows. Consider
$\widetilde{s}_l^{(k)}$ whose SINR $\gamma_l^{(k)} ( \mathcal{H}, \{
\{ \textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)}
)$ is given by (\ref{Eqn:Sinr_perfect0}). First, by the triangle
inequality
\begin{equation*}
\begin{array}{l}
|| ( \textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,j)}\!\!+\!\mathbf{\Delta}^{(k,j)} )
\textbf{v}_m^{(j)} ||^2\\
\;\;\;\;\;\;\;\;\;\;\ge || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ||^2 - || (
\textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,j)} \textbf{v}_m^{(j)}
||^2,\\
|| ( \textbf{u}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,j)}\!\!+\!\mathbf{\Delta}^{(k,j)} )
\textbf{v}_m^{(j)} ||^2\\
\;\;\;\;\;\;\;\;\;\;\le || ( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ||^2 + || (
\textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,j)} \textbf{v}_m^{(j)}
||^2,
\end{array}
\end{equation*}
and so the SINR is lower bounded as in (\ref{Eqn:Sinr_perfect1}).
Second, with CSI error $|| \mathbf{\Delta}^{(k,j)} ||^2 \leq
\varepsilon$,
\begin{equation*}
\begin{array}{l}
|| ( \textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,j)}
\textbf{v}_m^{(j)} ||^2\\
= \textrm{Tr} ( ( \textbf{u}_l^{(k)} )^\dag
\mathbf{\Delta}^{(k,j)} \textbf{v}_m^{(j)} ( \textbf{v}_m^{(j)}
)^\dag ( \mathbf{\Delta}^{(k,j)} )^\dag
\textbf{u}_l^{(k)} )\\
\stackrel{(a)}{\leq} \textrm{Tr} ( \textbf{u}_l^{(k)} (
\textbf{u}_l^{(k)} )^\dag ) \textrm{Tr} ( \mathbf{\Delta}^{(k,j)}
\textbf{v}_m^{(j)} (
\textbf{v}_m^{(j)} )^\dag ( \mathbf{\Delta}^{(k,j)} )^\dag )\\
\stackrel{(b)}{\leq} \textrm{Tr} ( \textbf{u}_l^{(k)} (
\textbf{u}_l^{(k)} )^\dag ) \underbrace{\textrm{Tr} ( (
\mathbf{\Delta}^{(k,j)} )^\dag \mathbf{\Delta}^{(k,j)} )}_{= ||
\mathbf{\Delta}^{(k,j)} ||^2} \textrm{Tr} ( \textbf{v}_m^{(j)} (
\textbf{v}_m^{(j)} )^\dag )\\
= \varepsilon || \textbf{u}_l^{(k)} ||^2 || \textbf{v}_m^{(j)} ||^2,
\end{array}
\end{equation*}
where (a) and (b) follow from the properties that $\textrm{Tr} (
\textbf{A} \textbf{B} ) = \textrm{Tr} ( \textbf{B} \textbf{A} )$ for
$\textbf{A} \in \mathbb{C}^{M \times N}$ and $\textbf{B} \in
\mathbb{C}^{N \times M}$ and $\textrm{Tr} ( \textbf{C} \textbf{D} )
\leq \textrm{Tr} ( \textbf{C} ) \textrm{Tr} ( \textbf{D} )$ for
positive semi-definite $\textbf{C}, \textbf{D} \in \mathbb{C}^{N
\times N}$. Thus, the worst-case SINR perceived by the transmitter
can be expressed as (\ref{Eqn:Sinr_perfect2}).
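The chain of bounds above is easy to sanity-check numerically. The
following Python sketch (our own illustration; the dimensions and
the error level are hypothetical) verifies the key estimate
$|| ( \textbf{u}_l^{(k)} )^\dag \mathbf{\Delta}^{(k,j)}
\textbf{v}_m^{(j)} ||^2 \leq \varepsilon || \textbf{u}_l^{(k)} ||^2
|| \textbf{v}_m^{(j)} ||^2$ on random instances with $||
\mathbf{\Delta}^{(k,j)} ||^2 = \varepsilon$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M, eps = 4, 4, 0.15  # hypothetical antenna numbers and error bound

for _ in range(1000):
    u = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # decorrelator
    v = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # precoder
    D = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
    D *= np.sqrt(eps) / np.linalg.norm(D, 'fro')  # Tr(D^H D) = eps
    lhs = np.abs(u.conj() @ D @ v) ** 2
    rhs = eps * np.linalg.norm(u) ** 2 * np.linalg.norm(v) ** 2
    assert lhs <= rhs + 1e-12
\end{verbatim}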
\section{Proof: Optimal Decorrelator with Fixed Precoders}
\label{Proof:Optimal_decorrelator} From (\ref{Eqn:Worst-case_SINR}),
the worst-case SINR of data stream estimate $\widetilde{s}_l^{(k)}$
can be expressed as
\begin{equation}
\textstyle\widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\textbf{v}_m^{(j)} \}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)} ) =
\frac{( \textbf{u}_l^{(k)} )^\dag \textbf{E}_l^{(k)}
\textbf{u}_l^{(k)}}{( \textbf{u}_l^{(k)} )^\dag \textbf{F}_l^{(k)}
\textbf{u}_l^{(k)}},\label{Eqn:Sinr_quotient}
\end{equation}
where
\begin{IEEEeqnarray*}{l}
\textbf{F}_l^{(k)} \textstyle= \sum_{j=1}^K \sum_{m=1}^{L_j}
\widehat{\textbf{H}}^{(k,j)} \textbf{v}_m^{(j)} ( \textbf{v}_m^{(j)}
)^\dag ( \widehat{\textbf{H}}^{(k,j)} )^\dag\\
\textstyle+ \varepsilon \sum_{j=1}^K \sum_{m=1}^{L_j} ||
\textbf{v}_m^{(j)} ||^2 \textbf{I}_N - \widehat{\textbf{H}}^{(k,k)}
\textbf{v}_l^{(k)} ( \textbf{v}_l^{(k)} )^\dag (
\widehat{\textbf{H}}^{(k,k)} )^\dag\\
- \varepsilon || \textbf{v}_l^{(k)} ||^2 \textbf{I}_N + N_0
\textbf{I}_N,
\end{IEEEeqnarray*}
which is a Hermitian and positive definite matrix, and
\begin{equation*}
\begin{array}{l}
\textbf{E}_l^{(k)} = \widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)}
( \textbf{v}_l^{(k)} )^\dag ( \widehat{\textbf{H}}^{(k,k)} )^\dag -
\varepsilon || \textbf{v}_l^{(k)} ||^2 \textbf{I}_N,
\end{array}\label{Eqn:Sinr_num}
\end{equation*}
which is a non-negative definite\footnote{If $\textbf{E}_l^{(k)}$ is
negative definite, then the CSI error $\varepsilon$ is too high.
Without loss of generality, we assume $\varepsilon$ is sufficiently
small.} Hermitian matrix. Without loss of generality, let
$\textbf{u}_l^{(k)} = c ( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}
\textbf{w}_l^{(k)}$ for arbitrary scaling factor $c \in \mathbb{C}$.
We can equivalently express (\ref{Eqn:Sinr_quotient}) as
\begin{IEEEeqnarray*}{Rl}
\textstyle\frac{( \textbf{u}_l^{(k)} )^\dag \textbf{E}_l^{(k)}
\textbf{u}_l^{(k)}}{( \textbf{u}_l^{(k)} )^\dag \textbf{F}_l^{(k)}
\textbf{u}_l^{(k)}} &\textstyle= \frac{( \textbf{w}_l^{(k)} )^\dag (
\textbf{F}_l^{(k)} )^{-\frac{1}{2}} \textbf{E}_l^{(k)} (
\textbf{F}_l^{(k)} )^{-\frac{1}{2}} \textbf{w}_l^{(k)}}{(
\textbf{w}_l^{(k)} )^\dag
\textbf{w}_l^{(k)}}\IEEEyesnumber\label{Eqn:Sinr_max}\\
&\textstyle= \frac{( \textbf{w}_l^{(k)} )^\dag \textbf{Q}
\mathbf{\Lambda} \textbf{Q}^\dag \textbf{w}_l^{(k)}}{(
\textbf{w}_l^{(k)} )^\dag \textbf{w}_l^{(k)}},
\end{IEEEeqnarray*}
where $\textbf{Q} \mathbf{\Lambda} \textbf{Q}^\dag$ denotes the
eigen-decomposition of $( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}
\textbf{E}_l^{(k)} ( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}$. It can be
shown that\footnote{Please refer to
\cite[Appendix~E]{Bok:Adaptive_filter_theory:Haykin}.}
(\ref{Eqn:Sinr_max}) is maximized with $( \textbf{w}_l^{(k)}
)^\star$ being the principal eigenvector\footnote{As per
\cite[Theorem~7.6.3]{Bok:Matrix_analysis:Horn} the principal
eigenvalue of $( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}
\textbf{E}_l^{(k)} ( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}$ is always
positive.} of $( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}
\textbf{E}_l^{(k)} ( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}$. In turn,
the optimal unit norm decorrelator is given by $( \textbf{u}_l^{(k)}
)^\star = \frac{( \textbf{F}_l^{(k)} )^{-\frac{1}{2}} (
\textbf{w}_l^{(k)} )^\star}{|| ( \textbf{F}_l^{(k)} )^{-\frac{1}{2}}
( \textbf{w}_l^{(k)} )^\star ||}$.
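For concreteness, the decorrelator derived above can be computed in
a few lines. The following Python sketch (the function name and the
random test data are ours, for illustration only) follows the
derivation step by step:
\begin{verbatim}
import numpy as np

def optimal_decorrelator(E, F):
    # F^{-1/2} via the eigendecomposition of the positive definite F
    lam, Q = np.linalg.eigh(F)
    F_inv_sqrt = Q @ np.diag(1.0 / np.sqrt(lam)) @ Q.conj().T
    # principal eigenvector w* of F^{-1/2} E F^{-1/2}
    _, W = np.linalg.eigh(F_inv_sqrt @ E @ F_inv_sqrt)
    w_star = W[:, -1]             # eigh sorts eigenvalues in ascending order
    u = F_inv_sqrt @ w_star
    return u / np.linalg.norm(u)  # unit-norm decorrelator

# toy usage: F positive definite, E a rank-one "signal" matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
F = B @ B.conj().T + np.eye(4)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
u_opt = optimal_decorrelator(np.outer(h, h.conj()), F)
\end{verbatim}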
\section{Proof: Optimality of the SDR Solution for
Problem~$\widetilde{\mathcal{Q}}_{\textrm{v}}$}
\label{Proof:Optimiality_conditions} By using SDR, we solve the
following SDP problem with complex-valued parameters:
\begin{IEEEeqnarray*}{cl}\label{Eqn:HH_primal}
\min_{\textbf{V}_m^{(j)}, \Xi}&\;\; \Xi\IEEEyessubnumber\label{Eqn:R_V_cost}\\
\textrm{s. t.} & \textstyle\sum\nolimits_{m=1}^{L_j} \textrm{Tr} (
\textbf{V}_m^{(j)} ) \leq \rho_j \Xi,\forall j \in \mathcal{K}\\
&\textstyle\sum_{j=1}^K\!\sum_{m=1}^{L_j}\!\text{Tr}(\mathbf{A}_{(l,m)}^{(k,j)}\mathbf{V}_{m}^{(j)})\!\geq\!
b_{l}^{(k)}\!, \forall l\!\in\!\mathcal{L}_k, \forall k\!
\in\!\mathcal{K},\;\;\;\;\;\;\;\IEEEyessubnumber\label{Eqn:R_V_QoS}\\
&\Xi\geq 0,\IEEEyessubnumber\\
&\textbf{V}_m^{(j)} \succeq 0, \forall m \in \mathcal{L}_j, \forall
j \in \mathcal{K},\IEEEyessubnumber\label{Eqn:R_V_SD}
\end{IEEEeqnarray*}
where $\mathbf{A}_{(l,m)}^{(k,j)}\in\mathbb{H}^{M}$ is given by
\begin{equation*}
\mathbf{A}_{(l,m)}^{(k,j)}\!\!=\!\!\left\{\!\!\!\!
\begin{array}{ll}
( \widehat{\textbf{H}}^{(k,k)}\!)^\dag \textbf{u}_l^{(k)}\!(
\textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,k)}\!\!-\!\varepsilon || \textbf{u}_l^{(k)}
||^2\mathbf{I}\!\!\!\!\!\!& \begin{array}{l}\text{if }
j\!=\!k\\\text{and }
m\!=\!l\end{array}\\
-\!\gamma(( \widehat{\textbf{H}}^{(k,j)}\!)^\dag
\textbf{u}_l^{(k)}\!( \textbf{u}_l^{(k)} )^\dag
\widehat{\textbf{H}}^{(k,j)}\!\!\!+\!\varepsilon ||
\textbf{u}_l^{(k)} ||^2\mathbf{I} )\!\!\!\!\!\!& \text{ otherwise }
\end{array}\right.\;\;\;\;\;\;
\end{equation*}
\addtocounter{equation}{2}
\begin{figure*}[!t]
\normalsize
\begin{IEEEeqnarray*}{ll}
\mathbf{Z}_{m}^{(j)}&\textstyle=x^{(j)}\mathbf{I}-\sum_{k=1}^K\sum_{l=1}^{L_k}y_{l}^{(k)}
\rho_j\mathbf{A}_{(l,m)}^{(k,j)}\IEEEyesnumber\label{Eqn:wxy}\\
&=\underbrace{x^{(j)}\mathbf{I}+\textstyle\sum_{k=1}^K\textstyle\sum_{l=1}^{L_k}y_{l}^{(k)}\rho_j\gamma\left((
\widehat{\textbf{H}}^{(k,j)} )^\dag \textbf{u}_l^{(k)} (
\textbf{u}_l^{(k)} )^\dag \widehat{\textbf{H}}^{(k,j)} + \varepsilon
|| \textbf{u}_l^{(k)} ||^2\mathbf{I} \right)\mathcal{I}\{(k,l)\neq
(j,m)\}+y_{m}^{(j)}\rho_j\varepsilon ||
\textbf{u}_m^{(j)}||^2\mathbf{I}}_{\text{rank } M} \\
&-\underbrace{y_{m}^{(j)}\rho_j( \widehat{\textbf{H}}^{(j,j)} )^\dag
\textbf{u}_m^{(j)} ( \textbf{u}_m^{(j)} )^\dag
\widehat{\textbf{H}}^{(j,j)}}_{\text{rank } 1}.
\end{IEEEeqnarray*}
\hrulefill
\end{figure*}
\addtocounter{equation}{-3}and $b_{l}^{(k)}=\gamma N_0 ||
\textbf{u}_l^{(k)} ||^2>0$. The corresponding dual problem is given
by the following SDP:
\begin{IEEEeqnarray*}{cl}\label{Eqn:HH_dual}
\max_{\substack{y_{l}^{(k)}\\x^{(j)}}}&\;
\textstyle\sum\nolimits_{k=1}^K\textstyle\sum\nolimits_{l=1}^{L_k}y_{l}^{(k)}b_{l}^{(k)}\IEEEyessubnumber\label{Eqn:ggg}\\
\textrm{s. t.}
&\;\underbrace{x^{(j)}\mathbf{I}\!-\!\!\textstyle\sum_{k=1}^K\!\sum_{l=1}^{L_k}y_{l}^{(k)}
\!\!\rho_j\mathbf{A}_{(l,m)}^{(k,j)}}_{=\mathbf{Z}_{m}^{(j)}}\!\succeq\!0,
\forall m\!\!\in\!\mathcal{L}_j, \forall j\!\!\in\!\mathcal{K},\;\;\;\;\;\IEEEyessubnumber\label{Eqn:xxx}\\
&1-\textstyle\sum_{j=1}^Kx^{(j)} \geq 0,\IEEEyessubnumber\\
&y_{l}^{(k)}\geq 0, \forall k \in \mathcal{K}, \forall l \in
\mathcal{L}_k,\IEEEyessubnumber\\
&x^{(j)}\geq 0, \forall j \in
\mathcal{K},\IEEEyessubnumber\label{Eqn:nnn}
\end{IEEEeqnarray*}
Note that $(\mathbf{V}_{m}^{(j)})^\star \neq\mathbf{0},\forall j \in
\mathcal{K}, \forall m \in \mathcal{L}_j$, and from the
complementary conditions for the primal and dual SDP:
\begin{equation}
\label{Eqn:HH_comple}
\text{Tr}(\mathbf{Z}_{m}^{(j)}(\mathbf{V}_{m}^{(j)})^\star)=0,\forall
j \in \mathcal{K}, \forall m \in \mathcal{L}_j
\end{equation}
\addtocounter{equation}{1} we can infer that
$\mathbf{Z}_{m}^{(j)}\not\succ\mathbf{0}$. Suppose that one of the
optimal values $\{\{(y_{l}^{(k)})^\star \}_{l=1}^{L_k}\}_{k=1}^K$
for the dual problem is zero, say $(y_{1}^{(1)})^\star=0$; then
\begin{equation*}
\mathbf{Z}_{1}^{(1)}=x^{(1)}\mathbf{I}+\sum_{k=1}^K
\sum_{l=1}^{L_k}y_{l}^{(k)} (-
\rho_1\mathbf{A}_{(l,1)}^{(k,1)})\mathcal{I}\{(k,l)\neq
(1,1)\}\succ\mathbf{0}.
\end{equation*}
This contradicts the fact that $\mathbf{Z}_{1}^{(1)}\not\succ\mathbf{0}$,
and hence $(y_{l}^{(k)})^\star>0, \forall k \in \mathcal{K}, \forall
l \in \mathcal{L}_k$. From (\ref{Eqn:xxx}) and (\ref{Eqn:wxy}),
$\text{rank}(\mathbf{Z}_{m}^{(j)}) \ge M-1$. On the other hand, from
(\ref{Eqn:HH_comple}), since $\textbf{Z}_m^{(j)}\nsucc \textbf{0}$
so $\text{rank}(\mathbf{Z}_{m}^{(j)}) < M$. It follows that
$\text{rank}(\mathbf{Z}_{m}^{(j)})=M-1$. Moreover, due to
(\ref{Eqn:HH_comple}) the optimal solution
$\{\{(\mathbf{V}_{m}^{(j)})^\star\}_{m=1}^{L_j}\}_{j=1}^K$ of the primal
problem (\ref{Eqn:HH_primal}) must be of rank one. In other words,
there is zero duality gap between the primal non-convex problem
$\widetilde{\mathcal{Q}}_\textrm{v}$ and the dual problem
(\ref{Eqn:HH_dual}) obtained by relaxing the rank constraint.
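The rank bookkeeping above can be illustrated numerically:
$\mathbf{Z}_{m}^{(j)}$ is a full-rank positive definite part minus a
rank-one term, so its rank is at least $M-1$, and the complementary
condition forces it to be exactly $M-1$. In the Python sketch below
(all data random and hypothetical), the rank-one term is tuned so
that the difference becomes exactly singular:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M = 4
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
P = B @ B.conj().T + np.eye(M)  # stands in for the full-rank part
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)
s = np.real(h.conj() @ np.linalg.solve(P, h))
h /= np.sqrt(s)                 # now P - h h^H is exactly singular
Z = P - np.outer(h, h.conj())   # full-rank part minus rank-one term
print(np.linalg.eigvalsh(Z))    # smallest eigenvalue ~ 0, rest > 0
\end{verbatim}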
\section{Proof: Convergence of
Algorithm~\ref{Algorithm:Top-level}} \label{Proof:Convergence} At
the $n^{\textrm{th}}$ iteration of
Algorithm~\ref{Algorithm:Top-level}, we denote the precoders as $\{
\{ \widetilde{\textbf{v}}_m^{(j)}[n] \}_{m=1}^{L_j} \}_{j=1}^K$, the
decorrelators as $\{ \{ \widetilde{\textbf{u}}_m^{(j)}[n]
\}_{m=1}^{L_j} \}_{j=1}^K$, the minimum SINR as
$\widetilde{\gamma}[n]$, and the transmit power scaling factor as
$\widetilde{\beta}[n]$.
Upon initialization, we \emph{define} the minimum SINR as
$\widetilde{\gamma}[0] = 0$ and start with arbitrary precoders $\{
\{ \widetilde{\textbf{v}}_m^{(j)}[0] \}_{m=1}^{L_j} \}_{j=1}^K$,
where the transmit power of the $j^{\textrm{th}}$ source node is
$\sum_{m=1}^{L_j} ( \widetilde{\textbf{v}}_m^{(j)}[0] )^\dag
\widetilde{\textbf{v}}_m^{(j)}[0] = P_j$, and the transmit power
scaling factor is $\widetilde{\beta}[0] = \min( P_1, \ldots, P_K )$.
In the following, we show that each iteration of
Algorithm~\ref{Algorithm:Top-level} does not decrease the minimum
SINR, i.e. $\widetilde{\gamma}[n] \ge \widetilde{\gamma}[n-1]$, so
Algorithm~\ref{Algorithm:Top-level} must converge.
In Step 1, given the precoders $\{ \{
\widetilde{\textbf{v}}_m^{(j)}[n-1] \}_{m=1}^{L_j} \}_{j=1}^K$, the
decorrelators $\{ \{ \widetilde{\textbf{u}}_m^{(j)}[n]
\}_{m=1}^{L_j} \}_{j=1}^K$ are optimized to increase the minimum
SINR, i.e.
\begin{IEEEeqnarray*}{Rl}
\widehat{\gamma} &= \displaystyle \min_{\substack{l\in
\mathcal{L}_k\\k\in \mathcal{K}}} \textstyle
\widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\widetilde{\textbf{v}}_m^{(j)}[n\!-\!1]
\}_{m=1}^{L_j} \}_{j=1}^K, \widetilde{\textbf{u}}_l^{(k)}[n] )\\
&\ge \displaystyle \min_{\substack{l\in \mathcal{L}_k\\k\in
\mathcal{K}}} \textstyle \widetilde{\gamma}_l^{(k)} (
\widehat{\mathcal{H}}, \{ \{ \widetilde{\textbf{v}}_m^{(j)}[n\!-\!1]
\}_{m=1}^{L_j} \}_{j=1}^K,
\widetilde{\textbf{u}}_l^{(k)}[n\!-\!1] )\\
&=\widetilde{\gamma}[n\!-\!1].\IEEEyesnumber\label{Eqn:Convergence1}
\end{IEEEeqnarray*}
In Step 3, given the decorrelators $\{ \{
\widetilde{\textbf{u}}_m^{(j)}[n] \}_{m=1}^{L_j} \}_{j=1}^K$ and the
minimum SINR constraint $\widehat{\gamma}$, the precoders $\{ \{
\widetilde{\textbf{v}}_m^{(j)}[n] \}_{m=1}^{L_j} \}_{j=1}^K$ are
optimized to jointly reduce the transmit powers of all nodes while
the minimum SINR is kept unchanged,
\begin{IEEEeqnarray*}{Rl}
\widehat{\gamma} &= \displaystyle \min_{\substack{l\in
\mathcal{L}_k\\k\in \mathcal{K}}} \textstyle
\widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\widetilde{\textbf{v}}_m^{(j)}[n] \}_{m=1}^{L_j}
\}_{j=1}^K, \widetilde{\textbf{u}}_l^{(k)}[n] )\\
&= \displaystyle \min_{\substack{l\in \mathcal{L}_k\\k\in
\mathcal{K}}} \textstyle \widetilde{\gamma}_l^{(k)} (
\widehat{\mathcal{H}}, \{ \{ \widetilde{\textbf{v}}_m^{(j)}[n\!-\!1]
\}_{m=1}^{L_j} \}_{j=1}^K, \widetilde{\textbf{u}}_l^{(k)}[n] )
\end{IEEEeqnarray*}
whereas the transmit powers of all source nodes are reduced
\begin{IEEEeqnarray*}{Rl}
\rho_j \widetilde{\beta}[n] &= \textstyle\sum_{m=1}^{L_j} (
\widetilde{\textbf{v}}_m^{(j)}[n] )^\dag
\widetilde{\textbf{v}}_m^{(j)}[n]\\
&\leq \textstyle\sum_{m=1}^{L_j} (
\widetilde{\textbf{v}}_m^{(j)}[n-1] )^\dag
\widetilde{\textbf{v}}_m^{(j)}[n-1]\\
&= P_j.
\end{IEEEeqnarray*}
In Step 5, the precoders are up-scaled to meet the power constraints
with equality, i.e., $\textbf{v}_m^{(j)}[n] \leftarrow \sqrt{P_j/(\rho_j
\widetilde{\beta}[n])}\, \textbf{v}_m^{(j)}[n]$, where by definition
$P_1/\rho_1 = \ldots = P_K/\rho_K$. As such, the minimum SINR is
increased according to
\begin{IEEEeqnarray*}{l}
\widehat{\gamma} = \displaystyle \min_{\substack{l\in
\mathcal{L}_k\\k\in \mathcal{K}}} \textstyle
\widetilde{\gamma}_l^{(k)} ( \widehat{\mathcal{H}}, \{ \{
\textbf{v}_m^{(j)}[n] \}_{m=1}^{L_j} \}_{j=1}^K,
\textbf{u}_l^{(k)}[n] )\\
=\!\displaystyle \min_{\substack{l\in \mathcal{L}_k\\k\in
\mathcal{K}}}\!\textstyle \frac{|| ( \textbf{u}_l^{(k)}[n] )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)}[n] ||^2 -
\varepsilon || \textbf{u}_l^{(k)}[n] ||^2 || \textbf{v}_l^{(k)}[n]
||^2}{\left(\substack{\sum_{j=1}^K\!\sum_{m=1}^{L_j} || (
\textbf{u}_l^{(k)}[n] )^\dag \widehat{\textbf{H}}^{(k,j)}
\textbf{v}_m^{(j)}[n] ||^2\\\!+ \varepsilon || \textbf{u}_l^{(k)}[n]
||^2 \sum_{j=1}^K \sum_{m=1}^{L_j} || \textbf{v}_m^{(j)}[n] ||^2\!-
\varepsilon || \textbf{u}_l^{(k)}[n] ||^2 || \textbf{v}_l^{(k)}[n]
||^2\\- || ( \textbf{u}_l^{(k)}[n] )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)}[n] ||^2 + N_0 ||
\textbf{u}_l^{(k)}[n]
||^2}\right)}\\
< \displaystyle \min_{\substack{l\in \mathcal{L}_k\\k\in
\mathcal{K}}} \textstyle \widetilde{\gamma}_l^{(k)} (
\widehat{\mathcal{H}}, \{ \{ \sqrt{P_K/(\rho_K\widetilde{\beta}[n])}
\textbf{v}_m^{(j)}[n]
\}_{m=1}^{L_j} \}_{j=1}^K, \textbf{u}_l^{(k)}[n] )\\
=\!\displaystyle \min_{\substack{l\in \mathcal{L}_k\\k\in
\mathcal{K}}}\!\textstyle \frac{|| ( \textbf{u}_l^{(k)}[n] )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)}[n] ||^2 -
\varepsilon || \textbf{u}_l^{(k)}[n] ||^2 || \textbf{v}_l^{(k)}[n]
||^2}{\left(\substack{\sum_{j=1}^K\!\sum_{m=1}^{L_j} || (
\textbf{u}_l^{(k)}[n] )^\dag \widehat{\textbf{H}}^{(k,j)}
\textbf{v}_m^{(j)}[n] ||^2\\\!+ \varepsilon || \textbf{u}_l^{(k)}[n]
||^2 \sum_{j=1}^K \sum_{m=1}^{L_j} || \textbf{v}_m^{(j)}[n] ||^2\!-
\varepsilon || \textbf{u}_l^{(k)}[n] ||^2 || \textbf{v}_l^{(k)}[n]
||^2\\- || ( \textbf{u}_l^{(k)}[n] )^\dag
\widehat{\textbf{H}}^{(k,k)} \textbf{v}_l^{(k)}[n] ||^2 +
\frac{N_0}{(P_K/\rho_K)(1/\widetilde{\beta}[n])} ||
\textbf{u}_l^{(k)}[n] ||^2}\right)}\\
= \widetilde{\gamma}[n].\IEEEyesnumber\label{Eqn:Convergence2}
\end{IEEEeqnarray*}
It follows from (\ref{Eqn:Convergence1}) and
(\ref{Eqn:Convergence2}) that the minimum SINR is non-decreasing
from one iteration to the next, i.e. $\widetilde{\gamma}[n] \ge
\widehat{\gamma} \ge \widetilde{\gamma}[n\!-\!1]$, and
Algorithm~\ref{Algorithm:Top-level} must converge.
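The monotonicity in Step 5 comes down to the observation that every
term in the surrogate SINR except the noise term scales with the
common power factor. A toy numerical illustration (all magnitudes
hypothetical):
\begin{verbatim}
def surrogate_sinr(sig, interf, eps_term, N0, beta=1.0):
    # all terms except the noise N0 scale with the power factor beta
    return beta * sig / (beta * (interf + eps_term) + N0)

sig, interf, eps_term, N0 = 5.0, 2.0, 0.5, 1.0
for beta in (1.0, 1.5, 2.0, 4.0):  # up-scaling increases the SINR
    print(beta, surrogate_sinr(sig, interf, eps_term, N0, beta))
\end{verbatim}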
\bibliographystyle{IEEEtran}
\section{Introduction}
For a knot $K\subset S^{3}$, the authors defined in
\cite{KM-sutures} a Floer homology group $\KHI(K)$, by a slight
variant of a construction that appeared first in \cite{Floer-Durham-paper}.
In brief, one takes the knot complement $S^{3} \setminus N^{\circ}(K)$
and forms from it a closed $3$-manifold $Z(K)$ by attaching to
$\partial N(K)$ the manifold $F\times S^{1}$, where $F$ is a genus-1
surface with one boundary component. The attaching is done in such a
way that $\{\text{point}\}\times S^{1}$ is glued to the meridian of
$K$ and $\partial F \times \{\text{point}\}$ is glued to the
longitude. The vector space $\KHI(K)$ is then defined by applying
Floer's instanton homology to the closed 3-manifold $Z(K)$. We will
recall the details in section~\ref{sec:background}. If $\Sigma$ is a
Seifert surface for $K$, then there is a corresponding closed surface
$\bar\Sigma$ in $Z(K)$, formed as the union of $\Sigma$ and one copy
of $F$. The homology class $\bar\sigma=[\bar\Sigma]$ in $H_{2}(Z(K))$
determines an
endomorphism $\mu(\bar\sigma)$ on the instanton homology of $Z(K)$,
and hence also an endomorphism of $\KHI(K)$. As was shown in
\cite{KM-sutures}, and as we recall below, the generalized
eigenspaces of $\mu(\bar\sigma)$ give a direct sum decomposition,
\begin{equation}\label{eq:eigenspace-decomposition}
\KHI(K) = \bigoplus_{j=-g}^{g} \KHI(K,j).
\end{equation}
Here $g$ is the genus of the Seifert surface. In this paper, we will
define a canonical $\Z/2$ grading on $\KHI(K)$, and hence on each
$\KHI(K,j)$, so that we may write
\[
\KHI(K,j) = \KHI_{0}(K,j) \oplus \KHI_{1}(K,j).
\]
This allows us to define the Euler characteristic $\chi(\KHI(K,j))$ as
the difference of the ranks of the even and odd parts. The main result
of this paper is the following theorem.
\begin{theorem}\label{thm:main}
For any knot in $S^{3}$,
the Euler characteristics $\chi(\KHI(K,j))$ of the summands
$\KHI(K,j)$ are minus the coefficients of
the symmetrized Alexander polynomial $\Delta_{K}(t)$, with Conway's
normalization. That is,
\[
\Delta_{K}(t) = - \sum_{j} \chi(\KHI(K,j)) t^{j}.
\]
\end{theorem}
The Floer homology group $\KHI(K)$ is supposed to be an ``instanton''
counterpart to the Heegaard knot homology of Ozsv\'ath-Szab\'o and
Rasmussen \cite{Ozsvath-Szabo-knotfloer,Rasmussen-thesis}. It is known
that the Euler characteristic of Heegaard knot homology gives the
Alexander polynomial; so the above
theorem can be taken as further evidence that the two theories are
indeed closely related.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{Oriented-Skein}
\end{center}
\caption{\label{fig:Oriented-Skein}
Knots $K_{+}$, $K_{-}$ and $K_{0}$ differing at a single crossing.}
\end{figure}
The proof of the theorem rests on Conway's skein relation for the
Alexander polynomial. To exploit the skein relation in this way, we
first extend the definition of $\KHI(K)$ to links. Then, given three
oriented knots or links $K_{+}$, $K_{-}$ and $K_{0}$ related by the
skein moves (see Figure~\ref{fig:Oriented-Skein}), we establish a long
exact sequence relating the instanton knot (or link) homologies of
$K_{+}$, $K_{-}$ and $K_{0}$. More precisely,
if for example $K_{+}$ and $K_{-}$ are knots and $K_{0}$ is a
$2$-component link, then we will show that there is along exact
sequence
\[
\cdots \to \KHI(K_{+}) \to \KHI(K_{-}) \to \KHI(K_{0}) \to \cdots .
\]
(This situation is a little different when $K_{+}$ and $K_{-}$ are
$2$-component links and $K_{0}$ is a knot: see Theorem~\ref{thm:skein}.)
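For later reference, we recall Conway's skein relation for the
normalized Alexander polynomial:
\[
\Delta_{K_{+}}(t) - \Delta_{K_{-}}(t) =
(t^{1/2}-t^{-1/2})\,\Delta_{K_{0}}(t).
\]
The exact sequences just described play the role of this relation at
the level of Euler characteristics.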
Skein exact sequences of
this sort for $\KHI(K)$ are not new. The definition of $\KHI(K)$
appears almost verbatim in Floer's paper
\cite{Floer-Durham-paper}, along with outline
proofs of just such a skein sequence. See in particular part ($2'$) of
Theorem 5 in \cite{Floer-Durham-paper}, which corresponds to
Theorem~\ref{thm:skein} in this paper.
The material of Floer's paper
\cite{Floer-Durham-paper} is also presented in \cite{Braam-Donaldson}. The
proof of the skein exact sequence which we shall describe is essentially
Floer's argument, as amplified in \cite{Braam-Donaldson},
though we shall present it in the context of sutured
manifolds. The new ingredient however is the decomposition
\eqref{eq:eigenspace-decomposition} of the instanton Floer homology,
without which one cannot arrive at the Alexander
polynomial.
\medskip The structure of the remainder of this paper is as
follows. In section~\ref{sec:background}, we recall the construction
of instanton knot homology, as well as instanton homology for sutured
manifolds, following \cite{KM-sutures}. We take the opportunity here
to extend and slightly generalize our earlier results concerning these
constructions. Section~\ref{sec:skein} presents the proof of the main
theorem. Some applications are discussed in section~\ref{sec:applications}.
The relationship between $\Delta_{K}(t)$ and the instanton homology of
$K$ was conjectured in \cite{KM-sutures}, and the result provides the
missing ingredient to show that $\KHI$ detects fibered knots.
Theorem~\ref{thm:main} also provides a lower bound for the rank of the
instanton homology group:
\begin{corollary}\label{cor:alexander-vs-rank}
If the Alexander polynomial of $K$ is $\sum_{-d}^{d}a_{j} t^{j}$,
then
the rank of $\KHI(K)$ is not less than $\sum_{-d}^{d}|a_{j}|$.
\qed
\end{corollary}
The corollary can be used to draw conclusions about the existence of
certain representations of the knot group in $\SU(2)$.
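For example, for the trefoil $K$ we have
\[
\Delta_{K}(t) = t^{-1} - 1 + t,
\]
so the corollary gives $\dim \KHI(K) \ge 3$; and
Theorem~\ref{thm:main} shows, in addition, that each of the three
summands $\KHI(K,j)$ for $j=-1,0,1$ is non-zero.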
\subparagraph{Acknowledgment.}
As this paper was being completed, the authors learned that
essentially the same result has been obtained simultaneously by Yuhan
Lim \cite{Lim}. The authors are grateful to the referee for pointing
out the errors in an earlier version of this paper, particularly
concerning the mod $2$ gradings.
\section{Background}
\label{sec:background}
\subsection{Instanton Floer homology}
Let $Y$ be a closed, connected, oriented $3$-manifold, and let $w\to Y$ be a
hermitian line bundle with the property that the pairing of $c_{1}(w)$
with some class in $H_{2}(Y)$ is odd. If $E\to Y$ is a $U(2)$ bundle
with $\Lambda^{2}E \cong w$, we write $\bonf(Y)_{w}$ for the
space of $\PU(2)$ connections in the adjoint bundle $\ad(E)$, modulo
the action of the gauge group consisting of automorphisms of $E$ with
determinant $1$. The instanton Floer homology group $I_{*}(Y)_{w}$ is
the Floer homology arising from the Chern-Simons functional on
$\bonf(Y)_{w}$. It has a relative grading by $\Z/8$.
Our notation for this Floer group follows
\cite{KM-sutures}; an exposition of its construction is in
\cite{Donaldson-book}. We will always use complex coefficients, so
$I_{*}(Y)_{w}$ is a complex vector space.
If $\sigma$ is a $2$-dimensional integral homology class in $Y$, then
there is a corresponding operator $\mu(\sigma)$ on $I_{*}(Y)_{w}$ of
degree $-2$. If $y\in Y$ is a point representing the generator of
$H_{0}(Y)$, then there is also a degree-$4$ operator $\mu(y)$.
The operators $\mu(\sigma)$, for $\sigma\in H_{2}(Y)$, commute with
each other and with $\mu(y)$.
As shown in \cite{KM-sutures} based on the calculations
of \cite{Munoz}, the simultaneous eigenvalues of the commuting
pair of operators
$(\mu(y),\mu(\sigma))$ all have the form
\begin{equation}\label{eq:eigenvalue-pairs}
(2, 2k) \qquad\text{or}\qquad (-2, 2k\sqrt{-1}),
\end{equation}
for even integers $2k$ in the range
\[
| 2k | \le |\sigma|.
\]
Here $|\sigma|$ denotes the Thurston norm of $\sigma$, the minimum
value of $-\chi(\Sigma)$ over all aspherical embedded surfaces
$\Sigma$ with $[\Sigma]=\sigma$.
\subsection{Instanton homology for sutured manifolds}
We recall the definition of the instanton Floer homology for a
balanced sutured manifold, as introduced in \cite{KM-sutures} with
motivation from the Heegaard counterpart defined in \cite{Juhasz-1}.
The reader is referred to \cite{KM-sutures} and \cite{Juhasz-1} for
background and details.
Let
$(M,\gamma)$ be a balanced sutured manifold. Its oriented boundary is a union,
\[
\partial M = R_{+}(\gamma) \cup A(\gamma) \cup
(- R_{-}(\gamma))
\]
where $A(\gamma)$ is a union of annuli, neighborhoods of the sutures
$s(\gamma)$. To define the instanton homology group $\SHI(M,\gamma)$
we proceed as follows. Let $([-1,1]\times T,\delta)$ be a product sutured
manifold, with $T$ a connected, oriented surface with boundary. The
annuli $A(\delta)$ are the annuli $[-1,1]\times \partial T$, and we
suppose these are in one-to-one correspondence with the annuli
$A(\gamma)$. We attach this product piece to $(M,\gamma)$ along the
annuli to obtain a manifold
\begin{equation}\label{eq:barM}
\bar{M} = M \cup \bigl( [-1,1]\times T \bigr).
\end{equation}
We write
\begin{equation}\label{eq:boundary-barM}
\partial \bar{M} = \bar{R}_{+} \cup (-\bar{R}_{-}).
\end{equation}
We can regard $\bar{M}$ as a sutured manifold (not balanced, because it
has no sutures). The surface $\bar{R}_{+}$ and $\bar{R}_{-}$ are both
connected and are diffeomorphic. We choose an orientation-preserving
diffeomorphism
\[
h : \bar{R}_{+} \to \bar{R}_{-}
\]
and then define $Z=Z(M,\gamma)$ as the quotient space
\[
Z = \bar{M}/\sim,
\]
where $\sim$ is the identification defined by $h$. The two surfaces
$\bar{R}_{\pm}$ give a single closed surface
\[
\bar{R}\subset Z.
\]
We need to impose a side condition on the choice of $T$ and $h$ in
order to proceed. We require that there is a closed curve $c$ in $T$
such that $\{1\}\times c$ and $\{-1\}\times c$ become non-separating
curves in $\bar{R}_{+}$ and $\bar{R}_{-}$ respectively; and we require
further that $h$ is chosen so as to carry $\{1\}\times c$ to
$\{-1\}\times c$ by the identity map on $c$.
\begin{definition}
We say that $(Z,\bar{R})$ is an admissible closure of $(M,\gamma)$
if it arises in this way, from some choice of $T$ and $h$,
satisfying the above conditions. \CloseDef
\end{definition}
\begin{remark}
In \cite[Definition~4.2]{KM-sutures}, there was an additional
requirement that $\bar{R}_{\pm}$ should have genus $2$ or more. This
was needed only in the context there of Seiberg-Witten Floer
homology, as explained in section~7.6 of
\cite{KM-sutures}. Furthermore, the notion of closure in
\cite{KM-sutures} did not require that $h$ carry $\{1\}\times c$
to $\{-1\}\times c$, hence the qualification ``admissible'' in the
present paper.
\end{remark}
In an admissible closure, the curve $c$ gives rise to a torus $S^{1}\times c$
in $Z$ which meets $\bar{R}$ transversely in a circle. Pick a point
$x$ on $c$. The
closed curve $S^{1}\times \{x\}$ lies on the torus $S^{1}\times c$ and
meets $\bar{R}$ in a
single point. We write
\[
w \to Z
\]
for a hermitian line bundle on $Z$ whose first Chern class is dual to
$S^{1}\times\{x\}$. Since $c_{1}(w)$ has odd evaluation on the closed
surface $\bar{R}$, the instanton homology group $I_{*}(Z)_{w}$ is
well-defined. As in \cite{KM-sutures}, we write
\[
I_{*}(Z|\bar{R})_{w} \subset I_{*}(Z)_{w}
\]
for the simultaneous generalized eigenspace of the pair of operators
\[(\mu(y),\mu(\bar{R}))\] belonging to the eigenvalues $(2,2g-2)$, where
$g$ is the genus of $\bar{R}$. (See \eqref{eq:eigenvalue-pairs}.)
\begin{definition}
For a balanced sutured manifold $(M,\gamma)$,
the instanton Floer homology group $\SHI(M,\gamma)$ is defined to
be $I_{*}(Z|\bar{R})_{w}$, where $(Z,\bar{R})$ is any admissible
closure of $(M,\gamma)$. \CloseDef
\end{definition}
It was shown in \cite{KM-sutures} that $\SHI(M,\gamma)$ is
well-defined, in the sense that any two choices of $T$ or $h$ will
lead to isomorphic versions of $\SHI(M,\gamma)$.
\subsection{Relaxing the rules on $T$}
\label{subsec:disconnected-T}
As stated, the definition of $\SHI(M,\gamma)$ requires that we form a
closure $(Z,\bar{R})$ using a \emph{connected} auxiliary surface $T$.
We can relax this condition on $T$, with a little care, and the extra
freedom gained will be convenient in later arguments.
So let $T$ be a possibly disconnected, oriented surface with boundary.
The number of boundary components of $T$ needs to be equal to the
number of sutures in $(M,\gamma)$. We then need to choose an
orientation-reversing
diffeomorphism between $\partial T$ and $\partial R_{+}(\gamma)$, so
as to be able to form a manifold $\bar{M}$ as in \eqref{eq:barM},
gluing $[-1,1]\times \partial T$ to the annuli $A(\gamma)$. We
continue to write $\bar{R}_{+}$, $\bar{R}_{-}$ for the ``top'' and
``bottom'' parts of the boundary of $\partial \bar{M}$, as at
\eqref{eq:boundary-barM}. Neither of these need be connected, although
they have the same Euler number. We shall impose the following
conditions.
\begin{enumerate}
\item On each connected component $T_{i}$ of $T$, there is an
oriented
simple closed curve $c_{i}$ such that the corresponding curves
$\{1\}\times c_{i}$ and $\{-1\}\times c_{i}$ are both
non-separating on $\bar{R}_{+}$ and $\bar{R}_{-}$ respectively.
\item \label{item:T-condition-2} There exists a diffeomorphism $h :
\bar{R}_{+}\to\bar{R}_{-}$ which carries $\{1\}\times c_{i}$ to
$\{-1\}\times c_{i}$ for all $i$, as oriented curves.
\item There is a $1$-cycle $c'$ on $\bar{R}_{+}$ which intersects
each curve $\{1\}\times c_{i}$ once.
\end{enumerate}
We then choose any $h$ satisfying \ref{item:T-condition-2} and use $h$
to identify the top and bottom, so forming a closed pair $(Z,\bar{R})$ as
before. The surface $\bar{R}$ may have more than one component (but no
more than the number of components of $T$). No component of $\bar{R}$
is a sphere, because each component contains a non-separating curve.
We may regard $T$ as a codimension-zero submanifold of $\bar{R}$ via
the inclusion of $\{1\}\times T$ in $\bar{R}_{+}$.
For each component $\bar{R}_{k}$ of $\bar{R}$, we now choose one
corresponding component $T_{i_{k}}$ of $T\cap\bar{R}_{k}$. We take
$w\to Z$
to be the complex line bundle with $c_{1}(w)$ dual to the sum of the
circles $S^{1}\times \{x_{k}\}\subset S^{1}\times c_{i_{k}}$. Thus
$c_{1}(w)$ evaluates to $1$ on each component
$\bar{R}_{k}\subset\bar{R}$. We may then consider the instanton Floer
homology group $I_{*}(Z|\bar{R})_{w}$.
\begin{lemma}\label{lem:relaxed-independence}
Subject to the conditions we have imposed, the Floer homology
group $I_{*}(Z|\bar{R})_{w}$ is independent of the choices made.
In particular, $I_{*}(Z|\bar{R})_{w}$ is isomorphic to
$\SHI(M,\gamma)$.
\end{lemma}
\begin{proof}
By a sequence of applications of the excision property of Floer
homology \cite{Floer-Durham-paper, KM-sutures}, we shall establish that
$I_{*}(Z|\bar{R})_{w}$ is isomorphic to $I_{*}(Z'|\bar{R}')_{w'}$,
where the latter arises from the same construction but with a
\emph{connected} surface $T'$. Thus $I_{*}(Z'|\bar{R}')_{w'}$ is
isomorphic to $\SHI(M,\gamma)$ by definition: its independence
of the choices made is proved in \cite{KM-sutures}.
We will show how to reduce the number of components
of $T$ by one. Following the argument of \cite[section
7.4]{KM-sutures}, we have an isomorphism
\begin{equation}\label{eq:u-to-w}
I_{*}(Z|\bar{R})_{w} \cong I_{*}(Z|\bar{R})_{u},
\end{equation}
where $u\to Z$ is the complex line bundle whose first Chern class
is dual to the cycle $c'\subset Z$.
Fix two distinct components $T_{i}$ and $T_{j}$ of $T$, with their
associated curves $c_{i}$ and $c_{j}$. We shall suppose in the first
instance that at least one of $c_{i}$ or $c_{j}$ is non-separating
in the corresponding component $T_{i}$ or $T_{j}$.
Since $c_{1}(u)$ is odd on
the $2$-tori $S^{1}\times c_{i}$ and $S^{1}\times c_{j}$, we can
apply Floer's excision theorem (see also
\cite[Theorem~7.7]{KM-sutures}): we cut $Z$ open along these two
$2$-tori and glue back to obtain a new pair $(Z' | \bar{R}')$,
carrying a line bundle $u'$, and we have
\[
I_{*}(Z|\bar{R})_{u} \cong I_{*}(Z'|\bar{R}')_{u'}.
\]
Reversing the construction that led to the isomorphism
\eqref{eq:u-to-w}, we next have
\[
I_{*}(Z'|\bar{R}')_{u'} \cong I_{*}(Z'|\bar{R}')_{w'},
\]
where the line bundle
$w'$ is dual to a collection of circles $S^{1}\times\{x'_{k'}\}$,
one for each component of $\bar{R}'$.
The pair $(Z',\bar{R}')$ is obtained from the sutured
manifold $(M,\gamma)$ by the same construction that led to
$(Z,\bar{R})$, but with a surface $T'$ having one fewer component: the
components $T_{i}$ and $T_{j}$ have been joined into one component
by cutting open along the circles $c_{i}$ and $c_{j}$ and
reglueing.
If both $c_{i}$ and $c_{j}$ are separating in $T_{i}$ and $T_{j}$
respectively, then the above argument fails, because $T'$ will
have the same number of components as $T$. In this case, we can
alter $T_{i}$ and $c_{i}$ to make a new $T'_{i}$ and $c'_{i}$,
with $c'_{i}$ non-separating in $T'_{i}$. For example, we may
replace $Z$ by the disjoint union $Z \amalg Z_{*}$, where $Z_{*}$
is a product $S^{1}\times T_{*}$, with $T_{*}$ of genus $2$. In
the same manner as above, we can cut $Z$ along $S^{1}\times c_{i}$
and cut $Z_{*}$ along $S^{1}\times c_{*}$, and then reglue,
interchanging the boundary components. The effect of this is to
replace $T_{i}$ by a surface $T'_{i}$ of genus one larger.
We can take $c'_{i}$ to be a non-separating curve on $T_{*}
\setminus c_{*}$.
\end{proof}
\subsection{Instanton homology for knots and links}
\label{subsec:inst-homology-link}
Consider a link $K$ in a closed oriented $3$-manifold $Y$. Following
Juh\'asz
\cite{Juhasz-1}, we can associate to $(Y,K)$ a sutured manifold
$(M,\gamma)$ by taking $M$ to be the link complement and taking the
sutures $s(\gamma)$ to consist of two oppositely-oriented meridional
curves on each of the tori in $\partial M$. As in \cite{KM-sutures},
where the case of knots was discussed, we take Juh\'asz'
prescription as a definition for the instanton knot (or link) homology
of the pair $(Y,K)$:
\begin{definition}[\textit{cf.} \cite{Juhasz-1}]
We define the instanton homology of the link $K\subset Y$ to be
the instanton Floer homology of the sutured manifold $(M,\gamma)$
obtained from the link complement as above. Thus,
\[
\KHI(Y,K) = \SHI(M,\gamma).
\]
\CloseDef
\end{definition}
Although we are free to choose any admissible closure $Z$ in
constructing $\SHI(M,\gamma)$, we can exploit the fact that we are
dealing with a link complement to narrow our choices. Let $r$ be the
number of components of the link $K$. Orient $K$ and choose a
longitudinal oriented curve $l_{i}\subset \partial M$ on the
peripheral torus of each component $K_{i}\subset K$. Let $F_{r}$ be a
genus-1 surface with $r$ boundary components, $\delta_{1},\dots,\delta_{r}$. Form a closed manifold
$Z$ by attaching $F_{r}\times S^{1}$ to $M$ along their
boundaries:
\begin{equation}\label{eq:special-closure}
Z = (Y\setminus N^{o}(K))
\cup (F_{r}\times S^{1}).
\end{equation}
The attaching is done so that the curve $p_{i}\times
S^{1}$ for $p_{i}\in \delta_{i}$ is attached to the meridian of
$K_{i}$ and $\delta_{i}\times \{q\}$ is attached to the chosen
longitude $l_{i}$. We can view $Z$ as a closure of $(M,\gamma)$ in
which the auxiliary surface $T$ consists of $r$ annuli,
\[
T = T_{1}\cup \dots \cup T_{r}.
\]
The two
sutures of the product sutured manifold
$[-1,1]\times T_{i}$ are attached to meridional sutures on the
components of $\partial M$ corresponding to $K_{i}$ and $K_{i-1}$ in
some cyclic ordering of the components.
Viewed this way, the corresponding surface
$\bar{R}\subset Z$ is the torus
\[
\bar{R} = \nu \times S^{1}
\]
where $\nu\subset F_{r}$ is a closed curve representing a generator
of the homology of the closed genus-1 surface obtained by adding disks
to $F_{r}$.
Because $\bar{R}$ is a torus,
the group
$I_{*}(Z|\bar{R})_{w}$ can be more simply described as the
generalized eigenspace of $\mu(y)$ belonging to the eigenvalue $2$,
for which we temporarily introduce the notation
$I_{*}(Z)_{w,+2}$. Thus we can write
\[
\KHI(Y,K) = I_{*}(Z)_{w,+2}.
\]
An important special case for us is when $K \subset Y$
is null-homologous in $Y$ with its given orientation. In this case,
we may choose a Seifert surface $\Sigma$, which we regard as a
properly embedded oriented surface in $M$ with oriented boundary a
union of longitudinal curves, one for each component of $K$. When a
Seifert surface is given, we have a \emph{uniquely preferred} closure $Z$,
obtained as above but using the longitudes provided by $\partial
\Sigma$. Let us fix a Seifert surface
$\Sigma$ and write $\sigma$ for its homology class in
$H_{2}(M,\partial M)$. The preferred closure of the sutured link
complement is entirely determined by $\sigma$.
\subsection{The decomposition into generalized eigenspaces}
We continue to suppose that $\Sigma$ is a Seifert surface for the
null-homologous oriented link $K\subset Y$. We write $(M,\gamma)$ for
the sutured link complement and $Z$ for the preferred closure.
The homology class $\sigma = [\Sigma]$ in $H_{2}(M,\partial M)$
extends to a class $\bar\sigma = [\bar\Sigma]$ in $H_{2}(Z)$: the
surface $\bar\Sigma$ is formed from the Seifert surface $\Sigma$ and
$F_{r}$,
\[
\bar\Sigma = \Sigma\cup F_{r}.
\]
The homology class $\bar\sigma$ determines an endomorphism
\[
\mu(\bar\sigma) : I_{*}(Z)_{w,+2} \to I_{*}(Z)_{w,+2}.
\]
This endomorphism is traceless, a consequence of the relative $\Z/8$
grading: there is an endomorphism $\epsilon$ of $I_{*}(Z)_{w}$ given
by multiplication by $(\sqrt{-1})^{s}$ on the part of relative grading
$s$, and this $\epsilon$ commutes with $\mu(y)$ and anti-commutes
with $\mu(\bar\sigma)$. We write this traceless
endomorphism as
\begin{equation}\label{eq:mu-o}
\mu^{o}(\sigma) \in \sl( \KHI(Y,K)).
\end{equation}
Our notation hides the fact that the construction depends (a priori)
on the existence of the preferred closure $Z$, so that $\KHI(Y,K)$ can
be canonically identified with $I_{*}(Z)_{w,+2}$.
It now follows from \cite[Proposition~7.5]{KM-sutures} that the eigenvalues of
$\mu^{o}(\sigma)$ are even integers $2j$ in the range
$-2\bar{g}+2 \le 2j \le 2\bar{g}-2$, where $\bar{g}=g(\Sigma)+r $ is
the genus of $\bar{\Sigma}$. Thus:
\begin{definition}
For a null-homologous oriented link $K\subset Y$ with a chosen
Seifert surface $\Sigma$, we write
\[
\KHI(Y,K,[\Sigma], j)
\subset \KHI(Y,K)
\]
for the generalized eigenspace of $\mu^{o}([\Sigma])$ belonging to
the eigenvalue $2j$, so that
\[
\KHI(Y,K) =
\bigoplus_{j=-g(\Sigma)+1-r}^{g(\Sigma)-1+r}
\KHI(Y,K,[\Sigma], j),
\]
where $r$ is the number of components of $K$.
If $Y$ is a homology sphere, we may omit
$[\Sigma]$ from the notation; and if $Y$ is $S^{3}$ then we simply
write $\KHI(K,j)$. \CloseDef
\end{definition}
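As a simple illustration, take $K=U$ the unknot in $S^{3}$ with
$\Sigma$ a disk. Then $g(\Sigma)=0$ and $r=1$, so the only possible
eigenvalue of $\mu^{o}([\Sigma])$ is $0$, and
\[
\KHI(U) = \KHI(U,0) \cong \C,
\]
consistent with the rank-$1$ computation recalled at the end of
section~\ref{subsec:mod-2-grading} below.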
\begin{remark}
The authors believe that, for a general sutured manifold
$(M,\gamma)$, one can define a unique linear map
\[
\mu^{o} : H_{2}(M,\partial M) \to \sl ( \SHI(M,\gamma))
\]
characterized by the property that for any admissible closure
$(Z,\bar{R})$ and any $\bar{\sigma}$ in $H_{2}(Z)$
extending $\sigma \in H_{2}(M,\partial M)$ we have
\[ \mu^{o}(\sigma) = \text{traceless part of $\mu(\bar\sigma)$},
\]
under a suitable
identification of $I_{*}(Z| \bar{R})_{w}$ with $\SHI(M,\gamma)$. The
authors will return to this question in a future paper. For now, we
are exploiting the existence of a preferred closure $Z$ so as to
side-step the issue of whether $\mu^{o}$ would be well-defined,
independent of the choices made.
\end{remark}
\subsection{The mod 2 grading}
\label{subsec:mod-2-grading}
If $Y$ is a closed $3$-manifold, then the instanton homology group
$I_{*}(Y)_{w}$ has a canonical decomposition into parts of even and
odd grading mod $2$. For the purposes of this paper, we normalize our
conventions so that the two generators of $I_{*}(T^{3})_{w}=\C^{2}$
are in \emph{odd} degree. As in \cite[section~25.4 ]{KM-book}, the
canonical mod $2$ grading is then essentially determined by the
property that, for a cobordism $W$ from a manifold $Y_{-}$ to $Y_{+}$,
the induced map on Floer homology has even or odd grading according to
the parity of the integer
\begin{equation}\label{eq:iota-W} \iota(W) = \frac{1}{2}
\Bigl( \chi(W) + \sigma(W) +
b_1(Y_+) - b_0(Y_+) - b_1(Y_-) + b_0(Y_-)\Bigr).
\end{equation}
(In the case of connected manifolds $Y_{+}$ and $Y_{-}$,
this formula reduces to the one that appears in \cite{KM-book} for the monopole
case. There is more than one way to extend the formula to the case of
disconnected manifolds, and we have simply chosen one.)
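As a check on conventions, for a product cobordism $W =
[-1,1]\times Y$ with $Y$ connected we have $\chi(W)=\sigma(W)=0$ and
the Betti-number terms cancel, so
\[
\iota(W) = 0,
\]
and the identity map has even degree, as it must.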
By declaring that the generators for
$T^{3}$ are in odd degree, we ensure that the canonical mod $2$
gradings behave as expected for disjoint unions of the $3$-manifolds.
Thus, if $Y_{1}$ and $Y_{2}$ are the connected components of a
$3$-manifold $Y$ and $\alpha_{1}\otimes \alpha_{2}$ is a class on $Y$
obtained from $\alpha_{i}$ on $Y_{i}$, then $\gr(\alpha_{1}\otimes
\alpha_{2})$ is $\gr(\alpha_{1}) + \gr(\alpha_{2})$ in $\Z/2$ as
expected.
Since the Floer homology $\SHI(M,\gamma)$ of a sutured manifold
$(M,\gamma)$ is defined in terms of $I_{*}(Z)_{w}$ for an admissible
closure $Z$, it is tempting to try to define a canonical mod $2$
grading on $\SHI(M,\gamma)$ by carrying over the canonical mod $2$
grading from $Z$. This does not work, however, because the result will
depend on the choice of closure. This is illustrated by the fact that
the mapping torus of a Dehn twist on $T^{2}$ may have Floer homology
in \emph{even} degree in the canonical mod $2$ grading (depending on
the sign of the Dehn twist), despite the fact that both $T^{3}$ and
this mapping torus can be viewed as closures of the same sutured
manifold.
We conclude from this that, without auxiliary choices, there is no
\emph{canonical} mod $2$ grading on $\SHI(M,\gamma)$ in general: only
a relative grading. Nevertheless, in the special case of an oriented
null-homologous knot or link $K$ in a closed $3$-manifold $Y$, we
\emph{can} fix a convention that gives an absolute mod $2$ grading,
once a Seifert surface $\Sigma$ for $K$ is given. We simply take the
preferred closure $Z$ described above in
section~\ref{subsec:inst-homology-link}, using $\partial\Sigma$ again
to define the longitudes, so that $\KHI(Y,K)$ is identified with
$I_{*}(Z)_{w,+2}$, and we use the canonical mod $2$ grading from the
latter.
With this convention, the unknot $U$ has $\KHI(U)$ of rank $1$, with
a single generator in odd grading mod $2$.
\section{The skein sequence}
\label{sec:skein}
\subsection{The long exact sequence}
Let $Y$ be any closed, oriented $3$-manifold, and let $K_{+}$, $K_{-}$
and $K_{0}$ be any three oriented knots or links in $Y$ which are related by
the standard skein moves: that is, all three links coincide outside a
ball $B$ in
$Y$, while inside the ball they are as shown in
Figure~\ref{fig:Oriented-Skein}. There are two cases which occur
here: the two strands of $K_{+}$ in $B$ may belong to the same
component of the link, or to different components. In the first case
$K_{0}$ has one more component than $K_{+}$ or $K_{-}$, while in the
second case it has one fewer.
\begin{theorem}[\textit{cf.} Theorem 5 of \cite{Floer-Durham-paper}]
\label{thm:skein}
Let $K_{+}$, $K_{-}$ and $K_{0}$ be oriented links in $Y$ as
above. Then, in the case that $K_{0}$ has one more component than
$K_{+}$ and $K_{-}$, there is a long exact sequence relating the
instanton homology groups of the three links,
\begin{equation}\label{eq:skein-first}
\cdots\to \KHI(Y,K_{+}) \to \KHI(Y,K_{-}) \to \KHI(Y,K_{0}) \to
\cdots.
\end{equation}
In the case that $K_{0}$ has fewer components than $K_{+}$ and
$K_{-}$, there is a long exact sequence
\begin{equation}\label{eq:skein-second}
\cdots\to \KHI(Y,K_{+}) \to \KHI(Y,K_{-}) \to
\KHI(Y,K_{0})\otimes V^{\otimes 2} \to
\cdots
\end{equation}
where $V$ is a 2-dimensional vector space arising as
the instanton Floer homology of the sutured manifold
$(M,\gamma)$, with $M$ the solid torus $S^{1}\times D^{2}$
carrying four parallel sutures $S^{1}\times \{p_{i}\}$ for four
points $p_{i}$ on $\partial D^{2}$ carrying alternating
orientations.
\end{theorem}
\begin{proof}
Let $\lambda$ be a standard circle in the complement of $K_{+}$
which encircles the two strands of $K_{+}$ with total linking
number zero, as shown in Figure~\ref{fig:K-plus-with-lambda}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{K-plus-with-lambda}
\end{center}
\caption{\label{fig:K-plus-with-lambda}
The knot $K_{+}$, with a standard circle $\lambda$ around a
crossing, with linking number zero.}
\end{figure}
Let
$Y_{-}$ and $Y_{0}$ be the $3$-manifolds obtained from $Y$ by
$-1$-surgery and $0$-surgery on $\lambda$ respectively. Since
$\lambda$ is disjoint from $K_{+}$, a copy of $K_{+}$ lies in
each, and we have new pairs $(Y_{-},K_{+})$ and $(Y_{0},K_{+})$.
The pair $(Y_{-},K_{+})$ can be identified with $(Y,K_{-})$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{Skein-Tubes}
\end{center}
\caption{\label{fig:Skein-Tubes}
Sutured manifolds obtained from the knot complement, related by a
surgery exact triangle.}
\end{figure}
Let
$(M_{+},\gamma_{+})$, $(M_{-},\gamma_{-})$ and
$(M_{0},\gamma_{0})$ be the sutured manifolds associated to the
links $(Y,K_{+})$, $(Y,K_{-})$ and $(Y_{0},K_{0})$ respectively:
that is, $M_{+}$, $M_{-}$ and $M_{0}$ are the link complements of
$K_{+}\subset Y$, $K_{-}\subset Y$ and $K_{0}\subset Y_{0}$
respectively, and there are two sutures on each boundary
component. (See Figure~\ref{fig:Skein-Tubes}.)
The sutured manifolds $(M_{-},\gamma_{-})$ and
$(M_{0}, \gamma_{0})$ are obtained from $(M_{+},\gamma_{+})$ by
$-1$-surgery and $0$-surgery respectively on the circle
$\lambda\subset M_{+}$. If $(Z,\bar{R})$ is any admissible closure
of $(M_{+},\gamma_{+})$ then surgery on $\lambda\subset Z$ yields
admissible closures for the other two sutured manifolds. From
Floer's surgery exact triangle \cite{Braam-Donaldson}, it follows
that there is a long exact sequence
\begin{equation}\label{eq:SHI-long-exact}
\cdots\to \SHI(M_{+},\gamma_{+}) \to
\SHI(M_{-},\gamma_{-}) \to
\SHI(M_{0},\gamma_{0}) \to
\cdots
\end{equation}
in which the maps are induced by surgery cobordisms between
admissible closures of the sutured manifolds.
By definition, we have
\[
\begin{aligned}
\SHI(M_{+},\gamma_{+}) &= \KHI(Y,K_{+}) \\
\SHI(M_{-},\gamma_{-}) &= \KHI(Y,K_{-}) .
\end{aligned}
\]
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{Decompose-M0}
\end{center}
\caption{\label{fig:Decompose-M0}
Decomposing $M_{0}$ along a product annulus to obtain a link
complement in $S^{3}$.}
\end{figure}%
However, the situation for $(M_{0}, \gamma_{0})$ is a little
different. The manifold $M_{0}$ is obtained by zero-surgery on the
circle $\lambda$ in $M_{+}$, as indicated in
Figure~\ref{fig:Skein-Tubes}. This sutured manifold contains a
product annulus $S$, consisting of the union of the
twice-punctured disk shown in Figure~\ref{fig:Decompose-M0} and
a disk $D^{2}$ in the surgery solid-torus $S^{1}\times D^{2}$. As
shown in the figure, sutured-manifold decomposition along the
annulus $S$ gives a sutured manifold $(M'_{0},\gamma'_{0})$ in
which $M'_{0}$ is the link complement of $K_{0}\subset Y$:
\[
(M_{0},\gamma_{0}) \decomp{S} (M'_{0}, \gamma'_{0}).
\]
By Proposition~6.7 of \cite{KM-sutures} (as adapted to the
instanton homology setting in section~7.5 of that paper), we
therefore have an isomorphism
\[
\SHI (M_{0},\gamma_{0})\cong \SHI (M'_{0},
\gamma'_{0}).
\]
We now have to separate cases according to the number of
components of $K_{+}$ and $K_{0}$. If the two strands of $K_{+}$
at the crossing belong to the same component, then every component
of $\partial M'_{0}$ contains exactly two, oppositely-oriented
sutures, and we therefore have
\[
\SHI (M'_{0},
\gamma'_{0}) = \KHI(Y, K_{0}).
\]
In this case, the sequence \eqref{eq:SHI-long-exact} becomes the
sequence in the first case of the theorem.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{Remove-Sutures}
\end{center}
\caption{\label{fig:Remove-Sutures}
Removing some extra sutures using a decomposition along a product
annulus. The solid torus in the last step has four
sutures.}
\end{figure}
If the two strands of $K_{+}$ belong to different components, then
the corresponding boundary components of $M_{+}$ each carry two
sutures. These two boundary components become one boundary
component in $M'_{0}$, and the decomposition along $S$ introduces
two new sutures; so the resulting boundary component in $M'_{0}$
carries six meridional sutures, with alternating orientations.
Thus $(M'_{0}, \gamma'_{0})$ fails to be the sutured manifold
associated to the link $K_{0}\subset Y$, on account of having four
additional sutures. As shown in Figure~\ref{fig:Remove-Sutures}
however, the number of sutures on a torus boundary component can
always be reduced by $2$ (as long as there are at least four to
start with) by using a decomposition along a separating annulus.
This decomposition results in a manifold with one additional
connected component, which is a solid torus with four longitudinal
sutures. This operation needs to be performed twice to reduce the
number of sutures in $M'_{0}$ by four, so we obtain two copies of
this solid torus. Denoting by $V$ the Floer homology of this
four-sutured solid-torus, we therefore have
\[
\SHI (M'_{0},
\gamma'_{0}) = \KHI(Y, K_{0})\otimes V\otimes V
\]
in this case. Thus the sequence \eqref{eq:SHI-long-exact} becomes
the second long exact sequence in the theorem.
At this point, all that remains is to show that $V$ is
$2$-dimensional, as asserted in the theorem. We will do this
indirectly, by identifying $V\otimes V$ as a $4$-dimensional
vector space. Let $(M_{4},\gamma_{4})$ be the sutured solid-torus
with $4$ longitudinal sutures, as described above, so that
$\SHI(M_{4},\gamma_{4})=V$. Let $(M,\gamma)$ be two disjoint
copies of $(M_{4},\gamma_{4})$,
so that
\[
\SHI(M,\gamma) = V\otimes V.
\]
We can describe an admissible closure of $(M,\gamma)$ (with a
disconnected $T$ as in section~\ref{subsec:disconnected-T}) by
taking $T$ to be four annuli: we attach $[-1,1]\times T$ to
$(M,\gamma)$ to form $\bar{M}$ so that $\bar{M}$ is $\Sigma\times
S^{1}$ with $\Sigma$ a four-punctured sphere. Thus
$\partial\bar{M}$ consists of four tori, two of which belong to
$\bar{R}_{+}$ and two to $\bar{R}_{-}$. The closure $(Z,\bar{R})$
is obtained by gluing the tori in pairs; and this can be done so
that $Z$ has the form $\Sigma_{2}\times S^{1}$, where $\Sigma_{2}$
is now a closed surface of genus $2$. The surface $\bar{R}$ in
$\Sigma_{2}\times S^{1}$ has the form $\eta\times S^{1}$, where
$\eta$ is a union of two disjoint closed curves in independent
homology classes. The line bundle $w$ has $c_{1}(w)$ dual to
$\eta'$, where $\eta'$ is a curve on $\Sigma_{2}$ dual to one
component of $\eta$.
Thus we can identify $V\otimes V$ with the generalized eigenspace
of $\mu(y)$
belonging to the eigenvalue $+2$ in the Floer homology
$I_{*}(\Sigma_{2}\times S^{1})_{w}$,
\begin{equation}
\label{eq:VVisSigma2}
V\otimes V = I_{*}(\Sigma_{2} \times S^{1})_{w,+2},
\end{equation}
where $w$ is dual to a curve
lying on $\Sigma_{2}$. Our next task is therefore to identify this
Floer homology group. This was done (in slightly different
language) by Braam and Donaldson
\cite[Proposition~1.15]{Braam-Donaldson}. The
main point is to identify the relevant representation variety in
$\bonf(Y)_{w}$, for which we quote:
\begin{lemma}[{\cite{Braam-Donaldson}}]
\label{lem:Sigma2-calc}
For $Y=\Sigma_{2}\times S^{1}$ and $w$ as above,
the critical-point set of the Chern-Simons functional in
$\bonf(Y)_{w}$ consists of two disjoint $2$-tori. Furthermore,
the Chern-Simons functional is of Morse-Bott type along its
critical locus. \qed
\end{lemma}
To continue the calculation, following
\cite{Braam-Donaldson}, it now follows from the
lemma that $I_{*}(\Sigma_{2}\times S^{1})_{w}$ has dimension at most
$8$ and that the even and odd parts of this Floer group, with
respect to the relative mod 2 grading, have equal dimension:
each at most $4$. On the other hand, the group
$I_{*}(\Sigma_{2}\times S^{1}| \Sigma_{2})_{w}$ is non-zero.
So the generalized eigenspaces belonging to the
eigenvalue-pairs $((-1)^{r}2, i^{r}2)$, for $r=0,1,2,3$, are
all non-zero. Indeed, each of these generalized eigenspaces is
$1$-dimensional, by Proposition~7.9 of \cite{KM-sutures}.
These four 1-dimensional generalized eigenspaces all belong
to the same relative mod-2 grading. It follows that
$I_{*}(\Sigma_{2}\times S^{1})_{w}$ is 8-dimensional, and can
be identified as a vector space with the homology of the
critical-point set. The generalized eigenspace belonging to
$+2$ for the operator $\mu(y)$ is therefore $4$-dimensional;
and this is $V\otimes V$. This completes the argument.
\end{proof}
\subsection{Tracking the mod 2 grading}
Because we wish to examine the Euler characteristics, we need to know
how the canonical mod 2 grading behaves under the maps in
Theorem~\ref{thm:skein}. This is the content of the next lemma.
\begin{lemma}\label{lem:mod-2-sequence}
In the situation of Theorem~\ref{thm:skein}, suppose that the link
$K_{+}$ is null-homologous (so that $K_{-}$ and $K_{0}$ are
null-homologous also). Let $\Sigma_{+}$ be a Seifert surface for
$K_{+}$, and let $\Sigma_{-}$ and $\Sigma_{0}$ be Seifert surfaces
for the other two links, obtained from $\Sigma_{+}$ by a
modification in the neighborhood of the crossing. Equip the
instanton knot homology groups of these links with their canonical
mod $2$ gradings, as determined by the preferred closures arising
from these Seifert surfaces.
Then in the first case of the two cases of the
theorem, the map from $\KHI(Y,K_{-})$ to $\KHI(Y,K_{0})$ in the
sequence \eqref{eq:skein-first} has odd degree, while the other
two maps have even degree, with respect to the canonical mod 2
grading.
In the second case, if we grade the 4-dimensional vector space
$V\otimes V$ by identifying it with $I_{*}(\Sigma_{2}\times
S^{1})_{w,+2}$ as in \eqref{eq:VVisSigma2}, then the map from
$\KHI(Y,K_{0})\otimes V^{\otimes 2}$ to $\KHI(Y,K_{+})$
in \eqref{eq:skein-second}
has odd degree, while the other
two maps have even degree.
\end{lemma}
\begin{proof}
We begin with the first case. Let $Z_{+}$ be the preferred
closure of the sutured knot complement $(M_{+},\gamma_{+})$
obtained from the knot $K_{+}$, as defined by
\eqref{eq:special-closure}. In the notation of the proof of
Theorem~\ref{thm:skein}, the curve $\lambda$ lies in $Z_{+}$. Let
us write $Z_{-}$ and $Z_{0}$ for the manifolds obtained from
$Z_{+}$ by $-1$-surgery and $0$-surgery on $\lambda$ respectively.
It is a straightforward observation that $Z_{-}$ and $Z_{0}$ are
respectively the preferred closures of the sutured complements of
the links $K_{-}$ and $K_{0}$. The surgery cobordism $W$ from
$Z_{+}$ to $Z_{-}$ gives rise to the map from $\KHI(Y,K_{+})$ to
$\KHI(Y,K_{-})$. This $W$ has the same homology as the cylinder
$[-1,1]\times Z_{+}$ blown up at a single point. The quantity
$\iota(W)$ in \eqref{eq:iota-W} is therefore even, and it follows
that the map
\[
\KHI(Y,K_{+}) \to \KHI(Y,K_{-})
\]
has even degree. The surgery cobordism $W_{0}$ induces a map
\begin{equation}\label{eq:second-cobordism}
I_{*}(Z_{-})_{w} \to I_{*}(Z_{0})_{w}
\end{equation}
which has odd degree, by another application of \eqref{eq:iota-W}.
This concludes the proof of the first case.
In the second case of the theorem,
we still have a long exact
sequence
\[
\to I_{*}(Z_{+})_{w} \to I_{*}(Z_{-})_{w} \to
I_{*}(Z_{0})_{w} \to
\]
in which the map $I_{*}(Z_{-})_{w} \to I_{*}(Z_{0})_{w}$ is
odd and the other two are even.
However, it is no longer true that the manifold $Z_{0}$ is
the preferred closure of the sutured manifold obtained from
$K_{0}$. The manifold $Z_{0}$ can be described as being obtained
from the complement of $K_{0}$ by attaching $G_{r}\times S^{1}$,
where $G_{r}$ is a surface of genus $2$ with $r$ boundary
components. Here $r$ is the number of components of $K_{0}$, and
the attaching is done as before, so that the curves
$\partial G_{r}\times
\{q\}$ are attached to the longitudes and the curves
$\{p_{i}\}\times S^{1}$
are attached to the meridians. The \emph{preferred} closure, on
the other hand, is defined using a surface $F_{r}$ of genus
$1$, not genus $2$. We write $Z'_{0}$ for the preferred closure, and our
remaining task is to compare the instanton Floer homologies of
$Z_{0}$ and $Z'_{0}$, with their canonical $\Z/2$ gradings.
An application of Floer's excision theorem provides an
isomorphism
\[
I_{*}(Z_{0})_{w,+2} \to I_{*}(Z'_{0})_{w,+2} \otimes
I_{*}(\Sigma_{2}\times S^{1})_{w,+2}
\]
where (as before) the class $w$ in the last term is dual to a
non-separating curve in the genus-2 surface $\Sigma_{2}$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{F-and-G-new}
\end{center}
\caption{\label{fig:F-and-G}
The surfaces $G_{r}$ and $F_{r} \amalg \Sigma_{2}$, used in constructing $Z_{0}$
and $Z'_{0}$ respectively.}
\end{figure}
(See
Figure~\ref{fig:F-and-G} which depicts the excision cobordism
from $G_{r}\times S^{1}$ to $(F_{r}\amalg \Sigma_{2})\times
S^{1}$, with the $S^{1}$ factor suppressed.) The
isomorphism is realized by an explicit cobordism $W$, with
$\iota(W)$ odd, which accounts for the difference between the
first and second cases and concludes the proof.
\end{proof}
\subsection{Tracking the eigenspace decomposition}
The next lemma is similar in spirit to Lemma~\ref{lem:mod-2-sequence},
but deals with eigenspace decomposition rather than the mod $2$
grading.
\begin{lemma}\label{lem:eigenspace-sequence}
In the situation of Theorem~\ref{thm:skein}, suppose again that
the links
$K_{+}$, $K_{-}$ and $K_{0}$ are
null-homologous. Let $\Sigma_{+}$ be a Seifert surface for
$K_{+}$, and let $\Sigma_{-}$ and $\Sigma_{0}$ be Seifert surfaces
for the other two links, obtained from $\Sigma_{+}$ by a
modification in the neighborhood of the crossing.
Then in the first case of the two cases of the theorem, the
maps in the long exact
sequence \eqref{eq:skein-first} intertwine the three operators
$\mu^{o}([\Sigma_{+}])$, $\mu^{o}([\Sigma_{-}])$ and
$\mu^{o}([\Sigma_{0}])$. In particular then, we have a long exact
sequence
\begin{equation*}
\to \KHI(Y,K_{+},[\Sigma_{+}],j) \to
\KHI(Y,K_{-},[\Sigma_{-}],j) \to
\KHI(Y,K_{0},[\Sigma_{0}],j) \to
\end{equation*}
for every $j$.
In the second case of Theorem~\ref{thm:skein}, the maps in the
long exact sequence \eqref{eq:skein-second} intertwine the
operators $\mu^{o}([\Sigma_{+}])$ and $\mu^{o}([\Sigma_{-}])$ on
the first two terms with the operator
\[
\mu^{o}([\Sigma_{0}]) \otimes 1 +
1 \otimes \mu([\Sigma_{2}])
\]
acting on
\[
\KHI(Y,K_{0})\otimes I_{*}(\Sigma_{2}\times
S^{1})_{w,+2}\cong \KHI(Y,K_{0})\otimes V^{\otimes 2}.
\]
\end{lemma}
\begin{proof}
The operator $\mu^{o}([\Sigma])$ on the knot homology groups is
defined in terms of the action of $\mu([\bar\Sigma])$ for a
corresponding closed surface $\bar\Sigma$ in the preferred closure
of the link complement. The maps in the long exact sequences arise
from cobordisms between the preferred closures. The lemma follows
from the fact that the corresponding closed surfaces are
homologous in these cobordisms.
\end{proof}
\subsection{Proof of the main theorem}
For a null-homologous link $K \subset Y$ with a chosen Seifert surface
$\Sigma$, let us write
\[
\begin{aligned}
\chi (Y,K,[\Sigma])&= \sum_{j}
\chi(\KHI(Y,K,[\Sigma],j))t^{j} \\
&= \sum_{j} \bigl ( \dim\KHI_{0}(Y,K,[\Sigma],j) -
\dim\KHI_{1}(Y,K,[\Sigma],j)\bigr) t^{j} \\
&= \str( t^{\mu^{o}([\Sigma])/2}),
\end{aligned}
\]
where $\str$ denotes the alternating trace.
If $K_{+}$, $K_{-}$ and $K_{0}$ are three skein-related links with
corresponding Seifert surfaces $\Sigma_{+}$, $\Sigma_{-}$ and
$\Sigma_{0}$, then Theorem~\ref{thm:skein}, Lemma~\ref{lem:mod-2-sequence} and
Lemma~\ref{lem:eigenspace-sequence} together tell us that we have the
relation
\[
\chi (Y,K_{+},[\Sigma_{+}]) - \chi (Y,K_{-},[\Sigma_{-}]) + \chi
(Y,K_{0},[\Sigma_{0}]) = 0
\]
in the first case of Theorem~\ref{thm:skein}, and
\[
\chi (Y,K_{+},[\Sigma_{+}]) - \chi (Y,K_{-},[\Sigma_{-}]) - \chi
(Y,K_{0},[\Sigma_{0}]) r(t) = 0
\]
in the second case. Here $r(t)$ is the contribution from the term
$I_{*}(\Sigma_{2}\times S^{1})_{w,+2}$, so that
\[
r(t) = \str (t^{\mu([\Sigma_{2}])/2}).
\]
From the proof of Lemma~\ref{lem:Sigma2-calc} we can read off the
eigenvalues of $\mu([\Sigma_{2}])/2$: they are $1$, $0$ and $-1$, and the
$\pm 1$ eigenspaces are each $1$-dimensional. Thus
\[
r(t) = \pm (t - 2 + t^{-1}).
\]
To determine the sign of $r(t)$, we need to know the canonical $\Z/2$
grading of (say) the $0$-eigenspace of $\mu([\Sigma_{2}])$ in
$I_{*}(\Sigma_{2}\times S^{1})_{w,+2}$. The trivial $3$-dimensional
cobordism from $T^{2}$ to $T^{2}$ can be decomposed as $N^{+}\cup
N^{-}$, where $N^{+}$ is a cobordism from $T^{2}$ to $\Sigma_{2}$ and
$N^{-}$ is a cobordism the other way. The $4$-dimensional cobordisms
$W^{\pm}= N^{\pm}\times S^{1}$ induce isomorphisms on the
$0$-eigenspace of $\mu([T^{2}])=\mu([\Sigma_{2}])$; and
$\iota(W^{\pm})$ is odd. Since the generator for $T^{3}$ is in odd
degree, we conclude that the $0$-eigenspace of $\mu([\Sigma_{2}])$ is
in even degree, and that
\[
\begin{aligned}
r(t) &= - (t - 2 + t^{-1}) \\
&= - q(t)^{2}
\end{aligned}
\]
where
\[
q(t) = (t^{1/2}-t^{-1/2}).
\]
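(The last equality is the elementary expansion $q(t)^{2} =
(t^{1/2}-t^{-1/2})^{2} = t - 2 + t^{-1}$.)
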
We can roll the two cases of Theorem~\ref{thm:skein} into one by
defining the ``normalized'' Euler characteristic as
\begin{equation}\label{eq:renormalized}
\tilde\chi(Y,K,[\Sigma]) =
q(t)^{1-r}\chi(Y,K,[\Sigma])
\end{equation}
where $r$ is the number of components of the link $K$. With this
notation we have:
\begin{proposition}
For null-homologous skein-related links $K_{+}$, $K_{-}$ and
$K_{0}$ with corresponding Seifert surface $\Sigma_{+}$,
$\Sigma_{-}$ and $\Sigma_{0}$, the normalized Euler
characteristics \eqref{eq:renormalized} satisfy
\[
\tilde \chi (Y,K_{+},[\Sigma_{+}]) - \tilde \chi
(Y,K_{-},[\Sigma_{-}])= (t^{1/2}-t^{-1/2})\,\tilde \chi
(Y,K_{0},[\Sigma_{0}]).
\]
\qed
\end{proposition}
In the case of classical knots and links, we may write this simply as
\[
\tilde \chi (K_{+}) - \tilde\chi
(K_{-})= (t^{1/2}-t^{-1/2})\,\tilde\chi
(K_{0}).
\]
This is exactly the skein relation of the (single-variable)
normalized Alexander polynomial
$\Delta$. The latter is
normalized so that $\Delta=1$ for the unknot, whereas our $\tilde\chi$ is
$-1$ for the unknot because the generator of its knot homology is in
odd degree. We therefore have:
\begin{theorem}
For any link $K$ in $S^{3}$, we have
\[
\tilde\chi(K) = - \Delta_{K}(t),
\]
where $\tilde\chi(K)$ is the normalized Euler characteristic
\eqref{eq:renormalized} and $\Delta_{K}$ is the Alexander polynomial
of the link with Conway's normalization.\qed
\end{theorem}
In the case that $K$ is a knot, we have $\tilde\chi(K)=\chi(K)$, which
is the case given in Theorem~\ref{thm:main} in the introduction. \qed
\begin{remark}
The equality $r(t)= - q(t)^{2}$ can be interpreted as arising from
the isomorphism
\[
I_{*}(\Sigma_{2} \times S^{1})_{w,+2} \cong V\otimes V,
\]
with the additional observation that the isomorphism between
these two is odd with respect to the preferred $\Z/2$ gradings.
\end{remark}
\section{Applications}
\label{sec:applications}
\subsection{Fibered knots}
In \cite{KM-sutures}, the authors adapted the argument of Ni \cite{Ni-A}
to establish a criterion for a knot $K$ in $S^3$ to be a fibered knot: in
particular, Corollary~7.19 of \cite{KM-sutures} states that $K$
is fibered if the following three conditions hold:
\begin{enumerate}
\item the Alexander polynomial $\Delta_{K}(t)$ is monic, in the
sense that its leading coefficient is $\pm 1$;
\item the leading coefficient occurs in degree $g$, where
$g$ is the genus of the knot; and
\item the dimension of $\KHI(K,g)$ is $1$.
\end{enumerate}
It follows from our Theorem~\ref{thm:main} that the last of these
three conditions implies the other two. So we have:
\begin{proposition}\label{prop:fibered-knot}
If $K$ is a knot in $S^{3}$ of genus $g$, then $K$ is fibered if
and only if the dimension of $\KHI(K,g)$ is $1$. \qed
\end{proposition}
\subsection{Counting representations}
We describe some applications to representation varieties associated
to
classical knots $K\subset S^{3}$. The
instanton knot homology $\KHI(K)$ is defined in terms of the preferred
closure $Z=Z(K)$ described at \eqref{eq:special-closure}, and
therefore involves the flat connections
\[
\Rep(Z)_{w} \subset \bonf(Z)_{w}
\]
in the space of connections
$\bonf(Z)_{w}$: the quotient by the determinant-1 gauge group of the
space of all $\PU(2)$ connections in $\PP(E_{w})$, where $E_{w}\to Z$
is a $U(2)$ bundle with $\det(E_{w})=w$. If the space of
these flat connections in $\bonf(Z)_{w}$ is non-degenerate in the
Morse-Bott sense when regarded as the set of critical points of the
Chern-Simons functional, then we have
\[
\dim I_{*}(Z)_{w} \le \dim H_{*}(\Rep(Z)_{w}).
\]
The generalized eigenspace $I_{*}(Z)_{w,+2}\subset I_{*}(Z)_{w}$ has
half the dimension of the total, so
\[
\dim \KHI(K) \le \frac{1}{2} \dim H_{*}(\Rep(Z)_{w}).
\]
As explained in \cite{KM-sutures}, the representation variety
$\Rep(Z)_{w}$ is closely related to the space
\[
\Rep(K,\bi) = \{ \, \rho: \pi_{1}(S^{3}
\setminus K) \to \SU(2) \mid \rho(m) =
\bi \,\},
\]
where $m$ is a chosen meridian and
\[
\bi =
\begin{pmatrix}
i & 0 \\ 0 & -i
\end{pmatrix}.
\]
More particularly, there is a two-to-one covering map
\begin{equation}\label{eq:covering}
\Rep(Z)_{w} \to \Rep(K,\bi).
\end{equation}
The circle subgroup $\SU(2)^{\bi}\subset \SU(2)$ which stabilizes $\bi$ acts on
$\Rep(K,\bi)$ by conjugation. There is a unique reducible element in
$\Rep(K,\bi)$ which is fixed by the circle action; the remaining
elements are irreducible and have stabilizer $\pm 1$. The most
non-degenerate situation that can arise, therefore, is that
$\Rep(K,\bi)$ consists of a point (the reducible) together with
finitely many circles, each of which is Morse-Bott. In such a case,
the covering \eqref{eq:covering} is trivial. As in
\cite{KM-knot-singular}, the corresponding non-degeneracy condition at
a flat connection $\rho$ can be interpreted as the condition that the
map
\[
H^{1}(S^{3}\setminus K; \g_{\rho}) \to H^{1}(m ;
\g_{\rho}) = \R
\]
is an isomorphism. Here $\g_{\rho}$ is the local system on the knot
complement with fiber $\su(2)$, associated to the representation
$\rho$. We therefore have:
\begin{corollary}
Suppose that the representation variety $\Rep(K,\bi)$ associated
to the complement of a classical knot $K\subset S^{3}$ consists of
the reducible representation and $n(K)$ conjugacy classes of
irreducibles, each of which is non-degenerate in the above sense.
Then
\[
\dim \KHI(K) \le 1 + 2n(K).
\]
\end{corollary}
\begin{proof}
Under the given hypotheses, the representation variety
$\Rep(K,\bi)$ is a union of a single point and $n(K)$ circles. Its
total Betti number is therefore $1 + 2 n(K)$. The representation
variety $\Rep(Z)_{w}$ is a trivial double cover \eqref{eq:covering},
so the total Betti number of $\Rep(Z)_{w}$ is twice as large, $2 +
4n(K)$. The stated bound now follows from the inequality $\dim
\KHI(K) \le \frac{1}{2} \dim H_{*}(\Rep(Z)_{w})$ above.
\end{proof}
Combining this with Corollary~\ref{cor:alexander-vs-rank}, we obtain:
\begin{corollary}
Under the hypotheses of the previous corollary, we have
\[
\sum_{j=-d}^{d} |a_{j}| \le 1 + 2n(K)
\]
where the $a_{j}$ are the coefficients of the Alexander
polynomial. \qed
\end{corollary}
Among all the irreducible elements of $\Rep(K,\bi)$, we can
distinguish the subset consisting of those $\rho$ whose image is
binary dihedral: contained, that is, in the normalizer of a circle
subgroup whose infinitesimal generator $J$ satisfies
$\mathrm{Ad}(\bi)(J)=-J$.
If $n'(K)$ denotes the number of such
irreducible binary dihedral representations, then one has
\[
| \det(K) | = 1 + 2n'(K)
\]
(see \cite{Klassen}). On the other hand, the determinant
$\det(K)$ can also be computed as the value of the Alexander
polynomial at $-1$: the alternating sum of the coefficients. Thus we
have:
\begin{corollary}
Suppose that the Alexander polynomial of $K$ fails to be
alternating, in the sense that
\[
\left| \sum_{j=-d}^{d} (-1)^{j} a_{j} \right|
< \sum_{j=-d}^{d} | a_{j}|.
\]
Then either $\Rep(K,\bi)$ contains some representations that
are not binary dihedral, or some of the binary-dihedral
representations are degenerate as points of this
representation variety. \qed
\end{corollary}
This last corollary is nicely illustrated by the torus knot
$T(4,3)$. This knot is the first non-alternating knot in Rolfsen's
tables \cite{Rolfsen}, where it appears as $8_{19}$. The Alexander
polynomial of $8_{19}$ is not alternating in the sense of the
corollary; and as the corollary suggests, the representation variety
$\Rep(8_{19}; \bi)$ contains representations that are not binary
dihedral. Indeed, there are representations whose image is the binary
octahedral group in $\SU(2)$.
\bibliographystyle{abbrv}
\chapter[Community and role detection in directed networks]{Community detection and role identification in directed networks: understanding the Twitter network of the care.data debate}
\author[B. Amor, S. Vuik, R. Callahan, A. Darzi, S. N. Yaliraki \& M. Barahona]{Benjamin R. C. Amor$^{\ddagger,\dagger}$, Sabine I. Vuik$^\ast$, Ryan Callahan$^\ast$, Ara Darzi$^\ast$, \\ Sophia N. Yaliraki$^\dagger$, and Mauricio Barahona$^\ddagger$}
\address{$^\ddagger$Department of Mathematics,
$^\dagger$Department of Chemistry, \\
and $^\ast$Institute of Global Health Innovation, \\ Imperial College London, London SW7 2AZ, U.K.}
\begin{abstract}
With the rise of social media as an important channel for the debate
and discussion of public affairs, online social networks such as
Twitter have become important platforms for public information and
engagement by policy makers. To communicate effectively through
Twitter, policy makers need to understand how influence and interest
propagate within its network of users. In this chapter we use
graph-theoretic methods to analyse the Twitter debate surrounding
NHS England's controversial care.data scheme. Directionality is a
crucial feature of the Twitter social graph - information flows from
the followed to the followers - but is often ignored in social
network analyses; our methods are based on the behaviour of dynamic
processes on the network and can be applied naturally to directed
networks. We uncover robust communities of users and show that
these communities reflect how information flows through the Twitter
network. We are also able to classify users by their differing
roles in directing the flow of information through the network. Our
methods and results will be useful to policy makers who would like
to use Twitter effectively as a communication medium.
\end{abstract}
\body
\section{Introduction}\label{sec:intro}
The care.data programme\index{care.data programme} is a scheme
proposed by NHS England for collating patient-level data from all GP
surgeries in England into a centralised national Health and Social
Care Information Centre (HSCIC) database\cite{nhsengland15}. This
scheme would complement existing hospital records to create a linked
primary- and secondary-care database, which could be used for
improving healthcare provisioning and for medical research. The
potential benefits of such a database are
well-recognised\cite{raghupathi2014big, darzi14}; however, poor
communication\cite{vallance14} prior to the roll-out of the scheme in
early-2014, alongside concerns around privacy, data security, and the
possibility of the sale of data\cite{nature14}, led to the eventual
postponement of the scheme\cite{triggle14}. In the months leading up
to the initial roll-out, these issues had become a major topic amongst
Twitter users interested in healthcare as well as data privacy issues.
Twitter is a popular social network that allows users to post and read
short messages of up to 140 characters. With 300 million
active monthly users, it has become an influential digital medium for
debates, mobilising support or opposition, and directing people
towards other online material\cite{honey2009beyond}. Twitter thus
provides a means for policy makers to engage with the general public
and to use it as an effective communication platform, alongside more
traditional methods of public engagement. In order to use Twitter
effectively, it is important to understand how information and
influence spreads within its network of users\cite{wu2011says,
lerman2010information}. The flow of information through Twitter
depends on the pattern of connections between
users~\cite{romero2011differences}, i.e., what Twitter calls the
`social graph'. Tweets from a particular user appear on the
`timeline' of that user's `followers,' and these followers are then
able to respond or `retweet' the message, propagating the information
on to their own followers. Within Twitter the directionality of links
is therefore critically important; anybody is free to follow and
retweet the President of the United States, but, for most users, to be
retweeted by the President would be a significant event! It is clear
that this asymmetry\index{asymmetry} is a crucial ingredient defining
how information propagates through the network.
Extracting information of the detailed directed\index{directed}
structure of the Twitter social graph is therefore a key step towards
understanding the evolution of a debate on a particular issue,
particularly for policy makers who would like to reach the widest
possible audience and effectively influence the debate. Concepts from
graph theory and network analysis can be applied to address such
questions. In particular, community detection\index{community
detection} is the graph-theoretical problem of identifying
meaningful subgroups within a
network~\cite{fortunato2012community}. Within Twitter, this might
correspond to groups of users who share similar interests, or who are
engaging with each other on a particular topic. Although previous
studies have used community detection methods to analyse Twitter
networks~\cite{conover2011political, weng2013virality}, these have
generally ignored the directionality of the edges. Indeed, most of
the widely-used community detection methods are defined for undirected
networks and are not easily adapted to the directed
case\cite{malliaros2013clustering}.
In contrast, we use here two methods, Markov Stability\index{Markov
stability}\cite{delvenne2010stability,delvenne2010stability2,
delvenne2013stability,lambiotte2014random,lambiotte2008laplacian}
and Role-Based Similarity (RBS)~\cite{beguerisse2013finding,
cooper2010role}, which are based on the behaviour of dynamical
processes on the network and can thus be seamlessly applied to
directed networks. Since they are flow-based, these methods naturally
explore how information and influence propagate across the network of
Twitter users, i.e., the communities and roles found by our analysis
reflect the process of information spreading on the network. Markov
Stability is a community detection method which identifies groups of
nodes in the graph in which the flow of a diffusion process becomes
trapped over a particular time scale\cite{lambiotte2014random}.
Role-based similarity finds groups of nodes based on the similarity of
the in- and out-flow patterns, i.e., how flows enter and leave each
node based on paths of all lengths. RBS thus provides a deeper insight
into the flow roles of individual users within the network than
traditional classifications into leaders and followers, or hubs and
authorities\cite{beguerisse2014interest}. We have previously used
these methods to analyse a network of influential Twitter users during
the 2011 London riots\cite{beguerisse2014interest}.
In this chapter, we apply and extend these methods to analyse a set of
tweets relating to the care.data programme, demonstrating how the
information derived from graph-theoretical analyses of Twitter data
can provide insight to policy makers on how to effectively engage with
a Twitter audience. For a discussion of the implications of our
research for policy makers see Ref. \citen{vuik2015understanding}; here
we present in greater detail the technical background to the analysis,
as well as additional, extended results. We begin in
Sections~\ref{sec:MS}~and~\ref{sec:RBS} by explaining the mathematics
of the Markov Stability and Role-Based Similarity methods. In section
\ref{sec:data:twitter} we describe how we construct different
directed\index{directed} networks of Twitter users from the set of
tweets, based on declared interest (follower relationships) and active
participation (retweets). We apply our methods to these networks in
section \ref{sec:results:twitter}, revealing the different communities
involved in the care.data debate and the different roles played by
users within the debate.
\section{The Markov Stability community detection methodology}
\label{sec:MS}
A frequent goal in network analysis is to partition the graph into
meaningful subgroups, or \textit{communities}, leading to a mesoscopic
description of the network that can be extremely useful for making
sense of large and complex data sets. The communities so obtained can
also help reveal how global structure and function emerges from local
connections. The literature contains a large number of methods for
community detection (see Ref.~\citenum{fortunato2012community} for a
review). The variety of community detection methods reflects the fact
that there cannot be a universal definition of what constitutes a
`good' partition of the network. However, most methods follow
heuristics based on structural and combinatorial features of the
network: typically a subset of nodes is thought of as a good community
if the connections between the nodes within the subset are denser than
the connections with nodes outside of the
subset~\cite{fortunato2012community}. Such heuristics are applied
through optimisations of a variety of quality functions. A quality
function based on this idea underlies the popular modularity
method\cite{newman2006modularity}\index{structural quality function}.
In addition to the well-known limitations of many of these methods
(such as the `resolution limit'~\cite{fortunato2007resolution}, the
intrinsic presence of a particular scale, or the bias towards
overpartitioning into clique-like
communities~\cite{schaub2012markov,schaub2012encoding}), structural
quality functions are not easily adapted to directed
networks\cite{leicht2008community,kim2010finding}. On the other
hand, the Markov Stability community detection method is based on the
behaviour of dynamical processes on the network and, as such, it
applies naturally to both undirected and directed
networks\cite{delvenne2013stability,lambiotte2014random}.
Furthermore, since Markov Stability is based on the flow of a Markov
process on the graph, and not on structural features such as edge
density, it can detect non-clique-like
communities\cite{schaub2012markov}. Other methods have been proposed
to detect communities based on diffusion processes, including
Infomap\cite{rosvall2007information} and
Walktrap\cite{pons2006computing}, yet these methods do not fully
exploit the transient information contained in the dynamics, which
corresponds to the analysis of paths of all lengths. It is this
dynamical zooming that allows Markov Stability to extract information
about the graph at all scales and to assess the plausibility of
different coarse-grained descriptions of the graph over different time
scales. For a full description of the method see
Refs. \citenum{delvenne2010stability,delvenne2013stability,schaub2012markov,lambiotte2014random}.
Here we focus on the specifics of the application to directed
networks; we start by outlining the necessary mathematical formalism
for random walks on directed networks, and then introduce the Markov
Stability quality function and discuss some practical issues
related to its optimisation.
\subsection{Random walks on directed networks and Markov Stability}
\subsubsection{Preliminaries}
A directed graph with $N$ nodes can be encoded by an $N \times N$
adjacency matrix $A$, where $A_{ij} = 1$ if there is a directed edge
from node $i$ to node $j$, and $A_{ij} = 0$ otherwise. Nodes in
directed\index{directed} graphs have an out-degree (given by the sum
of \textit{rows} of the adjacency matrix, $d_{\text{in}} =
A\mathbf{1}$) and an in-degree (given by the sum of \textit{columns},
$d_{\text{out}} = A^T\mathbf{1}$).
The evolution of the probability distribution of a simple
discrete-time random-walk\index{random walk} on a directed network
defined by the (non-symmetric) adjacency matrix $A \neq A^T$ is given
by
\begin{equation}\label{eq:rw1}
\mathbf{p}_{t+1} = \mathbf{p}_tD_{\text{out}}^{-1}A = \mathbf{p}_tM_{\text{dir}},
\end{equation}
where $\mathbf{p}_t$ is a $1 \times N$ vector, $D_{\text{out}} =
\text{diag}(d_{\text{out}})$, and $M_{\text{dir}} =
D_{\text{out}}^{-1}A$ is the Markov transition matrix\index{transition
matrix}. If the graph is strongly connected (i.e., if any node can
be reached from any other node) and aperiodic, then the random walk is
ergodic with stationary distribution $\pi$, the dominant left
eigenvector of $M_{\text{dir}}$, i.e., $\pi = \pi M_{\text{dir}}$. The
entries of $\pi$ are the \textit{PageRank} of the nodes in the graph,
a well known variant of the eigenvector centrality which is used by
the Google search algorithm.
In general, real-world networks will not be strongly connected and so
the dynamics are not guaranteed to be ergodic. A common approach for
ensuring the dynamics are ergodic is to use the `Google trick' of
random teleportation\index{teleportation}: if the random-walk is at a
node with at least one out-link, then with probability $\alpha$ it
will follow one of its outlinks, and with probability $1 - \alpha$ it
will `teleport' to a random node in the graph with uniform
probability. If it is at a node with no out-links, then it will
teleport with probability 1. The transition matrix for such a
random-walk is
\begin{equation}
M_{\text{dir}}(\alpha) = \alpha \, M_{\text{dir}} + \left[(1 - \alpha) \,I + \alpha \, \text{diag}(a)\right]\frac{\mathbf{11}^T}{N}
\end{equation}
where $a$ is a dangling-node indicator vector ($a_i = 1$ if $i$ has no
out-links and $a_i = 0$ otherwise).
The customary value used for $\alpha$ is 0.85, which we adopt below.
The equivalent continuous-time random-walk is governed by
\begin{equation}
\dot{\mathbf{p}} = -\mathbf{p} \, \left(I - M_{\text{dir}}(\alpha)\right),
\end{equation}
and the transition matrix for the continuous time random-walk is then
\begin{equation}
\label{eq:cont-time-transition}
P(t) = \exp\left(-t\left(I - M_{\text{dir}}(\alpha)\right)\right).
\end{equation}
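For concreteness, the construction above can be implemented in a few
lines of Python. The following sketch (using numpy and scipy; the
function and variable names are ours, introduced purely for
illustration) builds $M_{\text{dir}}(\alpha)$, evaluates $P(t)$, and
computes the stationary distribution $\pi$ by power iteration:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def teleporting_transition_matrix(A, alpha=0.85):
    """Transition matrix M_dir(alpha) for the random walk with
    teleportation; rows of A with no out-links ("dangling" nodes)
    teleport with probability 1, as described in the text."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    d_out = A.sum(axis=1)                   # out-degrees (row sums)
    a = (d_out == 0).astype(float)          # dangling-node indicator
    M = np.divide(A, d_out[:, None],
                  out=np.zeros_like(A), where=d_out[:, None] > 0)
    # Row i teleports uniformly with probability (1-alpha) + alpha*a_i.
    teleport = ((1.0 - alpha) + alpha * a) / N
    return alpha * M + teleport[:, None] * np.ones((N, N))

def continuous_time_transition(A, t, alpha=0.85):
    """P(t) = exp(-t (I - M_dir(alpha)))."""
    M = teleporting_transition_matrix(A, alpha)
    return expm(-t * (np.eye(M.shape[0]) - M))

def stationary_distribution(M, n_iter=1000):
    """Stationary distribution pi = pi M (the PageRank vector),
    computed by power iteration."""
    pi = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(n_iter):
        pi = pi @ M
    return pi / pi.sum()
\end{verbatim}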
\subsubsection{Directed Markov Stability: definitions and
optimisation}
The Markov Stability community detection method is based on the
analysis of a dynamical process - such as the random-walk described
above - on the network. The underlying idea is that the behaviour of
dynamical processes on a network can reveal meaningful information
about the structure of the graph. Intuitively, `good' communities are
regions of the network in which the dynamical process is coherent over
a particular time scale. In the case of random walks (akin to
diffusion processes), a good community is defined as a subgraph on
which the diffusion is well mixed and trapped over a given time scale.
By allowing the random-walk to evolve for progressively longer times,
the method acts as a `zooming lens', uncovering structure (if present)
at all scales. This dynamical zooming allows the method to extract a
multi-resolution description without prescribing a scale for the
partitions. In addition, the method can find not only the standard
clique-like communities, but also non-clique communities, which are of
interest in geographic, engineering and social systems.
Operationally, the method works by optimising a time-dependent quality
function as follows. A particular partition of the network is
represented by the $N \times c$ community indicator matrix $H$. Each
row of $H$ corresponds to a node and each column a community: if node
$i$ is in community $j$ then $H_{ij} = 1$ and the rest of row $i$ is
zeros. We then define the \textit{clustered autocovariance matrix} as
\begin{equation}\label{eq:autocovariance}
R(t, H) = H^{T}\left[\Pi P(t) -\pi^T\pi\right]H := H^{T} Q H,
\end{equation}
where $\Pi = \text{diag}(\pi)$ and $P(t)$ is the random-walk
transition matrix over time $t$ (e.g., for the discrete-time simple
random walk this is $M_{\text{dir}}^t$). Note that in the undirected
case, $Q=\Pi P(t) - \pi^T\pi$ is the actual autocovariance matrix of
the diffusion process defined by $P(t)$, whereas for
directed\index{directed} networks the matrix $Q$ is not symmetric and
so it is not an autocovariance in the strict sense. The entries of
the $R$ matrix have an intuitive interpretation in terms of the
random-walk: $R(t,H)_{ij}$ is the probability of starting in community
$i$ at stationarity and being in community $j$ at time $t$, discounting the
probability of two independent random-walkers being in $i$ and $j$ at
stationarity. The diagonal entries $R(t,H)_{ii}$ can therefore be
seen as a measure of the extent to which community $i$ traps the flow
of the process over time $t$. The overall `quality' of the partition,
in terms of trapping the flow of the diffusion process, is the sum of
these diagonal entries, and we define the \textit{Markov Stability of
a partition} as
\begin{equation}\label{eq:stability}
r(t, H) = \text{trace } R(t,H) = \text{trace} \, H^{T} Q H.
\end{equation}
Markov Stability can be used to evaluate the quality of a particular
partition found by whatever means or, alternatively, we can use it as
an objective function to be maximised over the space of all possible
partitions at each value of the Markov time, $t$. This latter approach
is followed in the examples below to find good communities with high
Markov Stability.
For Markov time $t$, we maximise Markov Stability~\eqref{eq:stability}
over the space of all possible network partitions $H$. This
optimisation is NP-complete\cite{brandes2008modularity}, and so we use
the heuristic greedy Louvain algorithm\cite{blondel2008fast}, which
has been shown to provide an efficient optimisation of this function
both in benchmarks and in real-life examples. Note that
although the Louvain algorithm\index{Louvain algorithm} is formulated
for symmetric matrices, and the matrix $Q$ is not symmetric, we can
optimise the directed Markov Stability objective
function~\eqref{eq:stability} by exploiting the fact that
$\text{trace}(H^TQH) = \frac{1}{2}\text{trace}(H^T(Q + Q^T)H)$ and
optimising this symmetrised function.
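Building on the Python sketch of the previous subsection (and again as
an illustration rather than the reference implementation of the cited
works), the directed Markov Stability of a given partition can be
evaluated as follows:
\begin{verbatim}
import numpy as np

def markov_stability(A, H, t, alpha=0.85):
    """Directed Markov Stability r(t, H) of the partition encoded
    by the N x c indicator matrix H (H[i, j] = 1 iff node i is in
    community j)."""
    M = teleporting_transition_matrix(A, alpha)
    pi = stationary_distribution(M)
    P = continuous_time_transition(A, t, alpha)
    Q = np.diag(pi) @ P - np.outer(pi, pi)  # Q = Pi P(t) - pi^T pi
    Q_sym = 0.5 * (Q + Q.T)                 # leaves the trace unchanged
    return np.trace(H.T @ Q_sym @ H)
\end{verbatim}
A Louvain-style optimiser can then be run on the symmetrised matrix
$Q + Q^{T}$ exactly as for an undirected network.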
The greedy Louvain algorithm is deterministic, but the outcome of the
optimisation is dependent on the random initialisation seed. We
therefore run the algorithm 100 times with different random seeds and
choose the partition with the highest Markov Stability. We also record
the variability in the ensemble of optimised solutions by computing
the average normalised variation of information (VI), a measure of the
distance between two
partitions~\cite{meila2007comparing}\index{variation of information},
between all pairs in the ensemble of 100 optimised partitions. A low
VI signifies that there is little difference between the obtained
partitions, and we use this as an indication that the community
structure of the network at this scale is robust.
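The normalised variation of information can be computed from the joint
distribution of community labels across the two partitions; a minimal
sketch (assuming each partition is encoded as a vector of non-negative
integer labels, and normalising by $\log N$) is:
\begin{verbatim}
import numpy as np

def variation_of_information(labels1, labels2):
    """Normalised variation of information between two partitions,
    each given as a length-N vector of non-negative integer labels."""
    labels1, labels2 = np.asarray(labels1), np.asarray(labels2)
    N = len(labels1)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # Joint distribution of community labels over the two partitions.
    P = np.zeros((labels1.max() + 1, labels2.max() + 1))
    for a, b in zip(labels1, labels2):
        P[a, b] += 1.0 / N
    # VI = 2 H(X,Y) - H(X) - H(Y), normalised here by log N.
    vi = 2 * entropy(P.ravel()) - entropy(P.sum(1)) - entropy(P.sum(0))
    return vi / np.log(N)
\end{verbatim}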
By optimising the Markov Stability $r(t, H)$ across a range of times
$t$ (usually spanning several orders of magnitude), we obtain a
sequence of progressively coarser partitions. We do not expect to
find relevant structure at all scales. Meaningful communities are
chosen according to a double measure of robustness: they should be
optimal, according to their Markov Stability, over long expanses of
time, making them robust across time scales; they should have low
values of their VI, making them robust solutions to the optimisation
problem.
\section{Finding flow roles in directed networks using Role-Based Similarity}
\label{sec:RBS}
In the above discussion, Markov Stability was introduced as a method
for identifying groups of nodes based on the flow of information
retained within them over time. We now introduce another graph-theoretical
method that uses flow for a different purpose; namely, to identify
instead groups of individuals who, although not necessarily close
within the Twitter network, have similar patterns of incoming and
outgoing flows at all scales. Such groups can be identified as
\textit{flow roles} in the network
(e.g., source-like or sink-like in the simplest cases), and can
be found through a node similarity measure called \textit{role-based similarity}
(RBS)\index{role-based similarity}\cite{cooper2010role, cooper2010complex}.
Once this RBS node similarity is obtained, we transform it into a
role-based similarity \textit{graph}
through the use of the relaxed minimum spanning tree (RMST) algorithm\index{relaxed minimum spanning tree}.
The analysis of this RBS similarity graph reveals the existence of groups
of nodes with similar roles in the network. These two methods
are outlined below.
\subsection{Role-based similarity}\label{sec:rbs_method}
Each node in the network is assigned a `profile vector' that encodes
the pattern of in-flows and out-flows passing through that node,
computed from the numbers of incoming and outgoing paths of all
lengths from that node\index{flow profile vector}. The cosine
similarity between the profile vectors of all nodes is then computed
to obtain the RBS similarity matrix. Two nodes are similar if they
have similar in- and out-patterns of network flow through them for all
path lengths.\cite{cooper2010role, cooper2010complex,
beguerisse2014interest}
Formally, consider a graph with $N$ nodes and adjacency matrix $A \neq
A^T$. The profile vector for a node is a $1 \times 2K_{\text{max}}$
vector: the first $K_{\text{max}}$ entries describe the number of
paths of length 1 to $K_{\text{max}} < N - 1$ which \textit{begin} at
that node, and the second $K_{\text{max}}$ entries give the number of
paths which \textit{end} at that node (scaled by a tunable constant).
These vectors can be computed straightforwardly by observing that the
entries of successive powers of the adjacency matrix give the number
of paths of increasing lengths between any two nodes
(i.e. $(A^k)_{ij}$ is the number of paths of length $k$ between nodes
$i$ and $j$). The profile vectors are then the row vectors of the $N
\times 2K_{\text{max}}$ matrix given by
\begin{equation}\label{eq:X_rbs}
X(\alpha) = \overbrace{\Bigg[ \dots \left( \frac{\alpha}{\lambda_1} A^T \right )^k\mathbf{1}\dots \Bigg\vert}^{\text{incoming}} \overbrace{\dots \left( \frac{\alpha}{\lambda_1} A \right )^k\mathbf{1}\dots \Bigg]}^{\text{outgoing}},
\end{equation}
where $\alpha \in (0, 1)$ and $\lambda_1$ is the largest eigenvalue of
$A$. The choice of $\alpha$ changes the rate of convergence of the
terms $((\alpha/\lambda_1)A^T)^k$, and hence controls the relative
influence of the large-scale structure of the graph.
For small $\alpha$,
the RBS similarity is based mostly on short paths, i.e., local
neighbourhoods. For instance, in the limit $\alpha \rightarrow 0$
only $d_{\text{in}}$ and $d_{\text{out}}$ are taken into account.
Conversely, using larger values of $\alpha$
leads to profile vectors which include more global information from
the graph.
The RBS similarity of two nodes $i$ and $j$ is then given by the
cosine distance between their profile vectors
\begin{equation}\label{eq:Y_rbs}
Y_{ij} = \frac{\mathbf{x}_i\mathbf{x}_j^T}{\|\mathbf{x}_i\|\|\mathbf{x}_j\|},
\end{equation}
where $\mathbf{x}_i$ and $\mathbf{x}_j$ are the $i$th and $j$th rows
of $X$.
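
In code, the profile matrix $X(\alpha)$ and the similarity matrix $Y$
can be obtained by iterated matrix--vector products, avoiding explicit
matrix powers. The following is an illustrative Python sketch (the
truncation $K_{\text{max}}$ and the parameter $\alpha$ are user-chosen):
\begin{verbatim}
import numpy as np

def rbs_similarity(A, alpha=0.9, K_max=50):
    """Role-based similarity matrix Y built from the profile
    vectors X(alpha), truncated at path length K_max."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    lam1 = np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius
    B_in, B_out = (alpha / lam1) * A.T, (alpha / lam1) * A
    v_in = v_out = np.ones(N)
    cols_in, cols_out = [], []
    for _ in range(K_max):   # iterated products, no explicit powers
        v_in, v_out = B_in @ v_in, B_out @ v_out
        cols_in.append(v_in)
        cols_out.append(v_out)
    X = np.column_stack(cols_in + cols_out)  # N x 2*K_max profiles
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.where(norms > 0, norms, 1.0)
    return Xn @ Xn.T                         # cosine similarities
\end{verbatim}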
\subsection{Relaxed minimum spanning tree}\label{sec:rmst_method}
The similarity matrix $Y$ defined by~\eqref{eq:Y_rbs} can be thought of as a
complete, weighted graph on the nodes, with edges between every pair of nodes
weighted by the cosine similarity of their respective profile vectors.
Note however that the matrix $Y$ also represents the similarity between
transient (forward and backward) time courses of the linear dynamics
on the network. Given the intrinsic continuity of this dynamic representation,
we obtain a sparser projection through the use of
the relaxed minimum spanning tree (RMST) algorithm\index{relaxed minimum spanning tree},
a method to obtain a graph-theoretical projection
that captures the underlying continuous geometry of the vectors
being considered---here, the points are the
profile vectors, which lie in a $2K_{\text{max}}$-dimensional space.
\cite{beguerisse2014interest,vangelov2014unravelling,beguerisse2013finding}
The algorithm proceeds as follows: the minimum spanning tree (MST) of the
complete graph $Y$ is calculated. For each pair of points $i$ and $j$ the
edge $Y_{ij}$ is then added to the graph if it is not too much larger
than than the \textit{largest edge weight in the MST path between $i$
and $j$}. Formally the edges in the RMST are given by
\begin{equation}
\text{RMST}_{ij} = \begin{cases}
1 \mbox{ if } Y_{ij} < \text{mlink}_{ij} + \gamma(d_i^k + d_j^k), \\
0 \mbox{ otherwise,}
\end{cases}
\end{equation}
where $\text{mlink}_{ij}$ is the largest edge weight in the MST path
between nodes $i$ and $j$, $d_i^k$ is the distance from node $i$ to
its $k$th nearest neighbour and $\gamma$ is a positive parameter (here
we have used $k=1$ and $\gamma=0.5$). The term $\gamma d_i^k$ is a
measure of the local density around every point.
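
An illustrative implementation of the RMST criterion is given below. We
assume here, for concreteness, that the input matrix $D$ contains
distances (for instance $D = 1 - Y$; this conversion is our choice, not
prescribed by the definition above), use scipy's minimum-spanning-tree
routine, and compute $\text{mlink}_{ij}$ by a depth-first walk of the
tree from every node:
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def rmst(D, k=1, gamma=0.5):
    """Relaxed minimum spanning tree of the distance matrix D,
    returned as a boolean adjacency matrix."""
    N = D.shape[0]
    # Caveat: csgraph treats exact zeros as missing edges, so add
    # a small epsilon to D beforehand if zero distances can occur.
    mst = minimum_spanning_tree(D).toarray()
    mst = np.maximum(mst, mst.T)             # symmetrise the tree
    nbrs = [np.nonzero(mst[i])[0] for i in range(N)]

    # mlink[i, j]: largest edge weight on the MST path from i to j.
    mlink = np.zeros((N, N))
    for root in range(N):
        stack = [(root, -1, 0.0)]
        while stack:
            node, parent, max_edge = stack.pop()
            mlink[root, node] = max_edge
            for nb in nbrs[node]:
                if nb != parent:
                    stack.append((nb, node,
                                  max(max_edge, mst[node, nb])))

    # d_k[i]: distance from node i to its k-th nearest neighbour.
    d_k = np.sort(D + np.diag(np.full(N, np.inf)), axis=1)[:, k - 1]
    relaxed = D < mlink + gamma * (d_k[:, None] + d_k[None, :])
    np.fill_diagonal(relaxed, False)
    return relaxed
\end{verbatim}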
\section{Twitter data of the care.data debate: follower and retweet networks}\label{sec:data:twitter}
The networks analysed here are obtained from a set of tweets relating
to the care.data debate. All tweets sent between 1 December 2013 and
27 March 2014 containing the text ``care.data'', ``caredata'' or
``care data'' were obtained from the provider Gnip~\footnote{www.gnip.com}.
There were 36,745 tweets from 10,031 accounts. The data included the tweeters
screen name, the tweet text and the date and time the tweet was sent.
Lists of followers of each user in the data set were obtained using
the Twitter API (this was carried out in April 2015).
We then constructed two directed networks
(Fig. \ref{fig:network_construction}): (a) the usual network of followers
(`who follows whom') amongst the users who appeared in the
data set\index{follower graph}; and (b) the weighted network of retweets (`who has retweeted
whom and how much')\index{retweet graph}. We study the largest connected components of
these two networks: the follower network has a single connected
component with $N=10,031$ users (nodes) and $E=472,428$ edges, corresponding
to declared following;
the largest connected component of the retweet network
has $N = 7,303$ nodes and $E=14,542$ edges, corresponding to actual
retweet activity during this period.
The follower network (a) is analysed in Sections~\ref{sec:interest_comms}--\ref{sec:roles},
whereas the retweet network (b) is studied in Section~\ref{sec:conversation_comms}.
Using directed Markov Stability, we identify communities in both
networks. The communities of users obtained in the network of
followers are called \textit{interest communities}, whereas the communities found in the
retweet network are referred to as \textit{conversation communities}.
To provide a visual representation of the common interests within
interest communities, and the topics of discussion within conversation
communities, we have used the profile text (self-descriptions) of the
users and the text of their tweets, usually in the form of word clouds.
It is important to remark that the text of the tweets and self-descriptions
is only used \textit{a posteriori} to illustrate our findings.
The follower network is also used to identify roles in the network using
the RBS-RMST algorithm, as described in Section~\ref{sec:RBS}.
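
As an illustration of the construction summarised in
Fig.~\ref{fig:network_construction}, the two directed graphs can be
assembled with networkx as in the following sketch (the field names of
the tweet records are hypothetical placeholders, not the actual schema
of the data provider):
\begin{verbatim}
import networkx as nx

def build_networks(tweets, follows):
    """Construct the directed follower and retweet graphs.

    tweets:  iterable of records with hypothetical fields 'user' and
             'retweeted_user' (None when the tweet is not a retweet).
    follows: dict mapping each user to the set of accounts they follow.
    """
    users = {t['user'] for t in tweets}

    # Follower graph: edge u -> v if u follows v (both in the data
    # set); information then flows from v to u, against the edge.
    G_follow = nx.DiGraph()
    G_follow.add_nodes_from(users)
    for u in users:
        for v in follows.get(u, ()):
            if v in users:
                G_follow.add_edge(u, v)

    # Retweet graph: weighted edge u -> v whenever u retweets v.
    G_retweet = nx.DiGraph()
    for t in tweets:
        u, v = t['user'], t.get('retweeted_user')
        if v is not None:
            if G_retweet.has_edge(u, v):
                G_retweet[u][v]['weight'] += 1
            else:
                G_retweet.add_edge(u, v, weight=1)

    return G_follow, G_retweet
\end{verbatim}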
\begin{figure*}[t]
\centering \includegraphics[width=\textwidth]{Figure_1.pdf}
\caption[Construction of networks]{Interpretation of the nodes and
edges in the two directed networks studied in this chapter.}
\label{fig:network_construction}
\end{figure*}
\section{Results}\label{sec:results:twitter}
\subsection{Identification of interest communities in the follower
network}\label{sec:interest_comms}
By applying the flow-based community detection method Markov Stability
to the directed graph\index{directed} of follower relations\index{follower graph} we identify
\textit{interest communities}: groups of users between whom
information, interest and influence are propagated. As seen in our previous
studies of Twitter networks, the directionality of the
edges is important for capturing this information flow; communities in
undirected networks are diffuse and blurred compared to those in the
equivalent directed network~\cite{beguerisse2014interest}.
Our computations of the directed Markov Stability across times show
a long plateau between Markov times $4.3$ and $6.1$, accompanied by a low
variation of information, indicating that the 13-way partition found
during this period is robust. Below, we concentrate on this partition although
other levels of resolution can provide different information.
\begin{figure*}[t]
\centering \includegraphics[width=\textwidth]{Figure_2.pdf}
\caption[Interest communities]{Interest communities identified by
Markov Stability in the follower network. The word clouds show
the most commonly appearing words in the personal profiles of the
users in the different communities.}
\label{fig:interest_comms}
\end{figure*}
The 13-way partition is composed of four large
communities (comprising 99.16\% of the users) and nine minor
communities, which were not considered further.
As shown in Figure~\ref{fig:interest_comms}, our \textit{a posteriori} analysis of the
most frequently appearing words in the users' personal
profiles (self-descriptions)\index{personal profiles} reveals that the three major interest
communities correspond to: healthcare professionals, politicians and
political activists, and self-confessed `data geeks' and media types.
The most common words in the self-descriptions of the
healthcare community were `health', `nhs', and `care';
the politics community featured words such as `labour',
`politics', and `people'; and the media/data community users used words such
as `data', `geek', and `science'.
The care.data programme is a
healthcare scheme, but the issues surrounding its implementation
concerned the proper use of personal data and related security and
privacy issues.
The fourth largest community presented a mixed set of words including
`healthcare'/`health'/`medical', but also `data', `technology' or `business'.
Interestingly, a closer analysis of the users of this community
revealed that this group was mainly US-based,
and only collaterally participating in the debate due to interest both in data issues
and the relevance of NHS reforms to healthcare reforms in the US.
Our analysis thus confirms that the nature of the debate
is reflected in the different interests of those Twitter users who actively
engaged with the debate.
\subsection{Audience of the interest communities}\label{sec:audience}
Although Twitter is an open platform, in which anybody is able to
create a free account and participate, the analysis of personal
profiles suggests that users who engaged in the care.data debate had
pre-existing personal interest in the issues being discussed
(healthcare, privacy and data security, politics etc.). To understand
the global reach of the debate outside the network analysed, we
collected the follower list\index{follower list} of each user in our
network, i.e., all the Twitter users who could have seen a tweet or
retweet related to care.data. The number of \textit{unique} followers
was 9.6 million - nearly as many as could be reached by a prime-time
Saturday night television advert - demonstrating the clear potential
of Twitter as a medium for policy communications (although it is
likely that some of these users are `fake' accounts).
Our analysis reveals relatively little overlap between the outside
followers of the different communities: 70\% of followers of the
politics group, 76\% of followers of the media/data group, 54\% of followers
of the healthcare group, and 64.4\% of the US group followed only
people in that particular interest community
(Fig. \ref{fig:followers_overlap}). To ensure that a wide and diverse
audience is reached, it is therefore important for policy makers to
understand and engage with the different communities in the debate.
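
The audience sizes and overlaps reported above reduce to simple set
operations; a minimal sketch (assuming a mapping from each user to
their set of followers, together with the community memberships found
above) is:
\begin{verbatim}
def audience_overlap(communities, followers_of):
    """Size of each community's outside audience, and the fraction
    of that audience which follows only this community.

    communities:  dict mapping community name -> set of member users.
    followers_of: dict mapping user -> set of that user's followers.
    """
    audience = {c: set().union(*(followers_of.get(u, set())
                                 for u in members))
                for c, members in communities.items()}
    exclusive = {}
    for c, aud in audience.items():
        others = set().union(*(a for k, a in audience.items()
                               if k != c))
        exclusive[c] = len(aud - others) / len(aud) if aud else 0.0
    return {c: len(a) for c, a in audience.items()}, exclusive
\end{verbatim}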
\begin{figure*}[t]
\centering \includegraphics[width=0.6\textwidth]{Figure_3.pdf}
\caption[Interest communities]{Total unique followers of users in
each of the four main interest communities}
\label{fig:followers_overlap}
\end{figure*}
Table~\ref{tbl:interest_top_followed} shows the users within each
community with the largest number of followers. Users in the media/data
community with large numbers of followers include the satirist Armando
Iannucci (@Aiannucci); the physician and
popular science writer Ben Goldacre (@bengoldacre); and the blogger and digital rights activist Cory Doctorow
(@doctorow). Users in the healthcare community
with a large reach include the British Medical Journal (@bmj\_latest),
the English NHS (@NHSChoices), and the Department
of Health (@DHGovuk). The three users
with the most followers in the politics community were slightly
unusual: a user posting mainly photos of art (@Asamsakti), the
controversial conspiracy theorist David Icke (@davidicke), and a
support group for amputees (@walkon\_crafters). However, using an
online tool\footnote{www.twitteraudit.com} we found that 81\% of
followers of @Asamsakti and 85\% of the followers of @walkon\_crafters
are estimated to be `fake' user accounts. Less surprising were the
official accounts for the political party the National Health Action
party (@NHAparty), the Labour
Press Team (@labourpress), and the
anti-capitalist protest group Occupy London
(@OccupyLondon).
\begin{table}[t]
\centering
\tbl{Top users by number of followers in the three main interest communities.}
{
\resizebox{\textwidth}{!}{\begin{tabular}{@{}lclclc@{}}
\toprule
\multicolumn{2}{c}{Media/Data} & \multicolumn{2}{c}{Politics} & \multicolumn{2}{c}{Healthcare}\\
\colrule
User & No. Followers & User & No. Followers & User & No. Followers\\
Aiannucci & 422829 & \textit{Asamsakti}* (81\%) & 596380 & \textit{Dr\_Sean\_001}* (82\%) & 226264 \\
bengoldacre & 378681 & davidicke & 131739 & bmj\_latest & 161007 \\
thetimes & 360178 & \textit{walkon\_crafters}* (85\%) & 117813 & NHSChoices & 159852 \\
doctorow & 359954 & HouseofCommons & 68802 & DHgovuk & 139876\\
digiphile & 236273 & NHAparty & 64416 & mencap\_charity & 84889\\
WiredUK & 224780 & labourpress & 58264 & TheStrokeAssoc & 67491\\
cyberdefensemag & 189766 & OccupyLondon & 56773 & NHSEngland & 65673\\
pzmyers & 163682 & IndyVoices & 52191 & TheEIU & 60561\\
tom\_watson & 161073 & politicshome & 50554 & TheBMA & 47059\\
arusbridger & 153233 & sahil\_anas & 46096 & GdnHealthcare & 44587\\
\botrule
\end{tabular}
}
}
\begin{tabnote}
\mbox{* Users in \textit{italics} have $>80\%$ estimated fake followers (percentage in parenthesis)}
\end{tabnote}
\label{tbl:interest_top_followed}
\end{table}
\subsection{Sentiment analysis of tweets}\label{sec:sentiment}
To determine the sentiment of the discussion and identify some of the
topics of discussion, we manually analysed a sample of 250 tweets from
the dataset (Table \ref{tbl:sentiment})\index{sentiment analysis}.
Very few of the tweets were classified as positive (3-5\%), the rest
being neutral or negative. This is characteristic of how Twitter is
used---spikes in tweet activity around a particular event tend to be
of a negative nature\cite{thelwall2011sentiment}. Interestingly,
however, the proportion of tweets from users in the healthcare
community which were classified as negative was lower than in the
politics and media/data communities.
There were also differences in the content of the negative tweets
between the three interest communities. We divided concerns into three
distinct classes:
\begin{enumerate}
\item \textbf{Implementation}. Concerns regarding information
provision, the opt-out process, and communication with the public.
\item \textbf{Scheme concept}. Concerns about privacy, sharing of
personal data, and the use/sale of the data.
\item \textbf{Execution}. Concerns around security, effectiveness of
pseudonymisation, and cyber attacks.
\end{enumerate}
While all three communities were predominantly negative about the
care.data scheme, each focused on different arguments. The politics
community mainly discussed the scheme concept of sharing personal
data, as well as the security concerns that are associated with
it. The healthcare and media/data communities on the other hand were
primarily concerned about the implementation of the care.data project,
concentrating on the contested opt-out arrangement and perceived lack
of communication to the public.
\begin{table}[t]
\centering
\tbl{Sentiment and content analysis of a random sample of 250 tweets.}
{
\resizebox{\textwidth}{!}{
\begin{tabular}{llccc}
\toprule
& & Healthcare & Politics & Media/Data \\
\colrule
\multirow{3}{*}{Tweet sentiment} & Positive & 5\% & 4\% & 3\%\\
& Negative & 58\% & 75\% & 62\%\\
& Neutral & 37\% & 21\% & 35\%\\
\colrule
\multirow{3}{*}{Major concerns} & Implementation$^1$ & 65\% & 28\% & 54\%\\
& Scheme concept$^2$ & 28\% & 43\% & 35\%\\
& Execution$^3$ & 7\% & 29\% & 11\%\\
\botrule
\end{tabular}
}
}
\begin{tabnote}
\mbox{$^1$ \text{information provision, the opt-out process,
communication to the public}}
\end{tabnote}
\begin{tabnote}
\mbox{$^2$\text{privacy, sharing of personal data, use/selling of
the dataset}}
\end{tabnote}
\begin{tabnote}
\mbox{$^3$\text{security concerns, re-identification, cyber
attacks}}
\end{tabnote}
\label{tbl:sentiment}
\end{table}
\subsection{Bridgeness between communities}
The communities identified in the follower network are regions where a
dynamical process is likely to become trapped, so information flows
less readily between these communities than within them. This
suggests that relatively few links could act as a `bridge' between
communities and could be effective at propagating the flow from one to
another. An example of such a connection would link one user who is
following influential individuals in one community and another who is
being followed by many people in another community
(Fig. \ref{fig:bridgeness}). To identify the `bridges' from community
$\mathcal{C}_1$ to community $\mathcal{C}_2$, we calculate the
shortest paths between all pairs of nodes $(i,j)$, where $i \in
\mathcal{C}_2$ and $j \in \mathcal{C}_1$. Note that the flow of
information is in the \textit{opposite} direction to that of the
edges: if there is an edge from node $i$ to node $j$, then content
produced by user $j$ is consumed by user $i$. The bridgeness
(centrality)\index{bridgeness} of an edge is then defined as the
proportion of shortest paths which pass through that edge - this is
equivalent to the classic betweenness centrality measure, but now only
shortest paths between specific subgroups of the nodes are considered.
Such information could be useful for policy makers who find they have
more success in engaging users in community $\mathcal{C}_1$ than in
$\mathcal{C}_2$ - since they will be able to target those users in
$\mathcal{C}_1$ who are most able to propagate that information on to
$\mathcal{C}_2$.
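
Since information flows against the direction of the edges, the
bridgeness of the boundary from $\mathcal{C}_1$ to $\mathcal{C}_2$ can
be computed as a subset edge-betweenness with sources in
$\mathcal{C}_2$ and targets in $\mathcal{C}_1$. In networkx this is
available directly; the sketch below (our own naming and per-pair
normalisation) restricts the result to the inter-community edges:
\begin{verbatim}
import networkx as nx

def bridgeness(G_follow, C1, C2):
    """Bridgeness of inter-community edges for information flowing
    from community C1 to community C2 (C1, C2 are sets of nodes).
    Shortest paths run from sources i in C2 to targets j in C1,
    opposite to the direction of the information flow."""
    eb = nx.edge_betweenness_centrality_subset(
        G_follow, sources=list(C2), targets=list(C1),
        normalized=False)
    n_pairs = len(C1) * len(C2)
    # Keep the edges crossing from C2 to C1, as a per-pair proportion.
    return {(u, v): b / n_pairs for (u, v), b in eb.items()
            if u in C2 and v in C1}
\end{verbatim}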
\begin{figure*}[t]
\centering \includegraphics[width=\textwidth]{Figure_4.pdf}
\caption[Bridgeness]{Bridgeness. a) To identify the users important
for information flow between two communities, we compute the
shortest paths for all pairs of nodes $(i,j)$ where $j \in
\mathcal{C}_1, i \in \mathcal{C}_2$ and identify the
between-community edges which feature in these shortest paths most
often. Shortest paths are likely to go through UserA (who is
being followed by many users in $\mathcal{C}_2$) and UserB (who is
following many people in $\mathcal{C}_1$). b) Links with highest
bridgeness centrality between interest communities - note that the
flow of information is in the opposite direction to that of the
edges.}
\label{fig:bridgeness}
\end{figure*}
As an illustration of the type of information that can be extracted,
we have considered the bridging links with the highest bridgeness
centrality between the three largest communities
(Fig.~\ref{fig:bridgeness}). (A more nuanced view can be obtained by
considering a longer list of bridges and their profiles, see Table~\ref{tbl:bridge}.)
The highest bridgeness centrality for flow from the politics community to the
healthcare community is the link from Roy Lilley (@RoyLilley) to the
National Health Action party (@NHAparty). Roy Lilley is followed by 44.4\% of users
in the healthcare community, and the NHA party is following 41.0\% of
users in the politics community. The highest bridgeness centrality
for flow in the opposite direction (from the healthcare community to
politics) is the link from the NHA party to NHS healthcare
professional Helen Bevan (@helenbevan). The NHA
party is being followed by 53.2\% of the politics community and Helen
Bevan is following 19.1\% of the healthcare community. The partial
asymmetry here is interesting: within the politics community, the NHA
party has a large number of followers (53.2\%) and a large number of
users it follows (41.0\%), meaning it is able to act as both a
broadcaster of information \textit{to} this community and a receiver
of information \textit{from} it. In contrast, Roy Lilley is followed
by a large proportion of people in the healthcare community (44.4\%)
but follows relatively few (3.4\%); he is therefore more likely to act
as a broadcaster of information to the community. Helen Bevan follows
a larger proportion of the healthcare community (19.1\%), and is
therefore exposed to a larger amount of the content generated by its
users.
\begin{table}[t]
\centering
\tbl{The top 5 bridging edges in the boundaries across interest communities ranked according to their \textit{bridgeness ratio} (BR). The bridgeness ratio
of an edge is the number of shortest paths from $\mathcal{C}_1$ to $\mathcal{C}_2$ which pass along that edge divided by the expected number of paths to pass along any edge
at that boundary. A high BR means that a disproportionally large number of shortest paths pass through this edge. Due to the asymmetry of the information flow from followed to follower,
the relevant edges are different depending on the direction in which the boundary is crossed.
}
{
\resizebox{\textwidth}{!}{
\begin{tabular}{lclclc}
\toprule
Politics $\rightarrow$ Media/Data & BR & Politics $\rightarrow$ Healthcare & BR & Media/Data $\rightarrow$ Healthcare & BR\\
\colrule \\
@NHAparty $\rightarrow$ @figshare & 59.9 & @NHAparty $\rightarrow$ @helenbevan& 277.8 & @bengoldacre $\rightarrow$ @JuliaHCox & 62.9\\
@NHAparty $\rightarrow$ @PaulLomax & 52.5 & @NHAparty $\rightarrow$ @Richard\_GP& 200.6 & @bengoldacre $\rightarrow$ @WelshGasDoc & 48.8\\
@NHAparty $\rightarrow$ @PaulbernalUK & 52.2 & @butNHS $\rightarrow$ @helenbevan& 91.3 & @bengoldacre $\rightarrow$ @PharmaceuticBen& 44.0 \\
@NHAparty $\rightarrow$ @rahoulb & 43.1 & @NHAparty $\rightarrow$ @BWMedical & 82.3 & @bengoldacre $\rightarrow$ @Azeem\_Majeed & 40.8\\
@haloefekti $\rightarrow$ @cyberdefensemag & 41.6 & @NHAparty $\rightarrow$ @H20MCR & 79.8 & @bengoldacre $\rightarrow$ @bmj\_latest & 37.1 \\
\\
\hline \hline \\
Media/Data $\rightarrow$ Politics & BR &Healthcare $\rightarrow$ Politics & BR & Healthcare $\rightarrow$ Media/Data & BR\\
\colrule\\
@Aiannucci $\rightarrow$ @NHAparty & 208.9 & @RoyLilley $\rightarrow$ @NHAparty& 203.8 & @mencap\_charity $\rightarrow$ @OpenRightsGroup& 35.7 \\
@tom\_watson $\rightarrow$ @roberthenryjohn & 51.8 & @ManchesterCCGs $\rightarrow$ @KayFSheldon& 108.5 & @bmj\_latest $\rightarrow$ @psychemedia& 32.2\\
@bengoldacre $\rightarrow$ @grahamemorris & 50.8 & @bmj\_latest $\rightarrow$ @NHAparty & 91.8 & @bmj\_latest $\rightarrow$ @figshare& 30.5 \\
@laurakalbag $\rightarrow$ @NHAparty & 46.1 & @stevenowottny $\rightarrow$ @KayFSheldon & 49.1 & @JuliaHCox $\rightarrow$ @bainesy1969 & 30.3\\
@bengoldacre $\rightarrow$ @carolinejmolloy & 45.9 & @clarercgp $\rightarrow$ @NHAparty & 48.3 & @Jarmann $\rightarrow$ @bainesy1969 & 27.3 \\
\botrule
\end{tabular}
}
}
\label{tbl:bridge}
\end{table}
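For reference, one plausible reading of the bridgeness ratio defined
in the caption of Table~\ref{tbl:bridge} is sketched below: the
observed path count of an edge is divided by the mean count over all
edges at the same boundary. The dictionary \texttt{counts} is assumed
to come from the earlier sketch, and \texttt{boundary\_edges} is the
set of between-community edges at the boundary in question; both
names are illustrative.
\begin{verbatim}
# Illustrative sketch only; assumes at least one shortest path
# crosses the boundary, so that the mean count is nonzero.
def bridgeness_ratio(counts, boundary_edges):
    mean = sum(counts[e] for e in boundary_edges) / len(boundary_edges)
    return {e: counts[e] / mean for e in boundary_edges}
\end{verbatim}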
A similar asymmetric pattern is observed for information flow between
the healthcare and media/data communities, and between the media/data and politics
communities. The highest bridgeness centrality for healthcare to media/data
is via the link from Ben Goldacre (@bengoldacre)
to Julia Cox (@JuliaHCox), whereas the highest
bridgeness centrality for flow in the opposite direction is via the
link between the Mencap charity (@mencap\_charity) and
the Open Rights Group (@OpenRightsGroup).
Flow from politics to media/data is via the link between Armando
Iannucci and the NHA party, whereas flow from
media/data to politics is via the link between the NHA party and the
software company figshare (@figshare).
The asymmetry\index{asymmetry} observed in the bridgeness centralities
reinforces the notion that directionality is crucial for understanding
patterns of information flow through the network. It also suggests
that, depending on whom they follow and are followed by,
individuals might play different \textit{roles} in propagating the
flow of information through the network. We explore this idea in more
detail in the following section.
\subsection{Identifying roles in the follower
network}\label{sec:roles}
\begin{figure*}[t]
\centering \includegraphics[width=\textwidth]{Figure_5.pdf}
\caption[Role communities]{Role communities in the role-based
    similarity graph. a) Role-based similarity graph obtained using
    the RBS-RMST algorithm; there are 6 robust communities
    corresponding to different user roles. b) The original follower
    network coarse-grained into role communities; the arrows are
    proportional in size to the number of users in one role community
    who follow users in the other role community. c) Average
    in-degree and out-degree of users in the 6 role communities. d)
    Kernel density estimates for the distributions of the proportion of
    a user's friends lying outside their own interest community. e)
    Cumulative distribution of retweets for the different role
    communities.}
\label{fig:rbs}
\end{figure*}
To identify the different roles played by users in propagating the
flow of information via the Twitter social graph, we constructed the
RBS-RMST similarity graph for the follower network. We then used
Markov Stability on this similarity graph to identify groups of nodes
with similar in-flow and out-flow patterns. We find a robust
partition of the similarity graph into 6 groups, which correspond to 6
distinct roles for the Twitter users according to their flow patterns
(Fig. \ref{fig:rbs}a). The meaning of the 6 roles identified can be
understood by considering the aggregated in- and out-flows in the
social graph for each of the roles; by computing the in- and
out-degree for each role; and by obtaining the proportion of their
friends who lie in a different interest community. All of these
characterisations are presented in Fig.~\ref{fig:rbs} b-d.
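As an aside, the per-role summaries in Fig.~\ref{fig:rbs}b-d can be
reproduced from the raw data with a few lines of code. The sketch
below assumes a \texttt{networkx} directed graph \texttt{G} with
edges pointing from follower to followed, and dictionaries
\texttt{role} and \texttt{interest} assigning each user to a role
community and an interest community; all names are illustrative
placeholders rather than the pipeline actually used.
\begin{verbatim}
# Illustrative sketch only: mean in-/out-degree per role and the
# proportion of a user's friends outside their interest community.
from collections import defaultdict
from statistics import mean

def role_summaries(G, role, interest):
    indeg, outdeg, outside = (defaultdict(list) for _ in range(3))
    for u in G:
        r = role[u]
        indeg[r].append(G.in_degree(u))    # followers of u
        outdeg[r].append(G.out_degree(u))  # friends: users u follows
        friends = list(G.successors(u))
        if friends:
            outside[r].append(
                sum(interest[v] != interest[u] for v in friends)
                / len(friends))
    return {r: (mean(indeg[r]), mean(outdeg[r]),
                mean(outside[r]) if outside[r] else 0.0)
            for r in indeg}
\end{verbatim}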
The combined information from all these measures allows us to describe
the identified roles as:
\begin{enumerate}
\item \textit{Leaders}: users with higher in-degree (number of
followers) than out-degree. Users in this group tend to follow few
people, mainly in the mediator group.
\item \textit{Mediators}: users with roughly the same in-degree
  and out-degree, who are both following and being followed by users in
  all other groups.
\item \textit{Listeners}: users with few followers, who are
  following a small number of people, primarily from the `Leader'
  group.
\item \textit{Diversified listeners}: users with few followers, but
who are following a larger and more diverse group of users than the
`Listener' category.
\item \textit{Peripheral followers}: users who are following a very
small number of other users and are being followed by no-one.
\item \textit{Peripheral followed}: users who are being followed by a
small number of users but are following no-one.
\end{enumerate}
The users with the largest number of followers in the `Leader' role
are the physician and science writer Ben Goldacre;
former Chair of the Council of the Royal College of General
Practitioners Clare Gerada (@clarercgp); and the
account of the Department of Health. In the `Mediator' role, the NHA party, the Joseph Rowntree Foundation
(@jrf\_uk), and Care Quality
Commission board member Kay Sheldon (@KayFSheldon)
have the largest number of followers.
We calculated the proportion of each user's friends (users they are
following) who are in a different interest community from themselves
(as calculated in Section \ref{sec:interest_comms}) for each of the
different roles (Fig. \ref{fig:rbs}d). The diversified listeners have
the greatest proportion of friends outside their own interest
community, which confirms that these users are following a broad range
of other accounts involved in the care.data debate. The mediators and
leaders also tend to follow a significant number of people outside
their own interest community. The listeners and peripheral followers
follow predominantly others within the same interest community,
suggesting that their involvement or interest was focused on one
particular aspect of the debate.
To understand how the different roles identified in the follower
network translate into actual participation in the care.data debate, we
calculated the distributions of retweets for each of the role
communities (Fig.~\ref{fig:rbs}e). There is a clear separation
between the `Leader' category, which garners the most retweets, and
the follower categories `Listener' and `Diversified Listener', which
are rarely retweeted, with the `Mediator' category lying in-between
but closer to the `Leader' group. These results suggest that
identifying users who have `Leader' and `Mediator' roles in follower
networks can predict those users who are likely to have greatest
influence in the debate. We now explore the structure of the retweet
network obtained from the collected tweet corpus.
\subsection{Conversation communities in the retweet
network}\label{sec:conversation_comms}
\begin{figure*}[t]
\centering \includegraphics[width=\textwidth]{Figure_6.pdf}
\caption[Conversation communities]{The conversation communities
identified in the retweet network. The word clouds show the most
commonly appearing words in the tweets sent by users within the
community.}
\label{fig:conversation_comms}
\end{figure*}
The Twitter social graph (i.e., the follower network studied above)
encodes the \textit{possibility} of information flow through
Twitter---tweets from a user you are following will
appear on your timeline and you have the opportunity to retweet them
or send a related tweet. Of course, most people cannot and do not
engage actively with all information they are exposed to. Since we
have the set of all tweets concerning care.data, we are able to
explore the actual flow of information on this specific topic. To
allow us to understand the issues being discussed, and the groups of
people who are \textit{actively} engaging with each other through
Twitter, we have therefore analysed the network of retweets\index{retweet graph} (`who retweets
whom and how much') using our community detection framework to find
conversation communities\index{conversation communities}.
We then interpret the results through an \textit{a posteriori} summary
of the text of the tweets in the obtained groups.
Applying Markov Stability, we identify a robust partition of the retweet network
into 8 \textit{conversation communities} (Fig. \ref{fig:conversation_comms}).
Table \ref{tbl:convo_interest} shows how participants within each
conversation community are split between the three largest interest communities
(healthcare, media/data, politics). The conversation communities contain an uneven split of
users from the interest communities: except for conversations 5 and 8, all
conversations are dominated by users from a particular interest
community. This result confirms that in the care.data debate there is a greater
flow of information between users with similar interests, and this implies that
interest communities (identified from the network of follower relations) provide a good
indication of how information is likely to flow through the Twitter
network.
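The per-row test reported in Table~\ref{tbl:convo_interest} can be
sketched as follows. The expected counts are taken here to be
proportional to the sizes of the three interest communities, which is
one natural choice; the observed row and the community sizes are
illustrative inputs, not the exact procedure used for the table.
\begin{verbatim}
# Illustrative sketch only: chi-square test for one table row.
from scipy.stats import chisquare

def row_test(observed, community_sizes):
    total = sum(community_sizes)
    expected = [sum(observed) * s / total for s in community_sizes]
    stat, p = chisquare(observed, f_exp=expected)
    signs = ['+' if o > e else '-' for o, e in zip(observed, expected)]
    return stat, p, signs

# e.g. row_test([201, 113, 808], sizes) for Conversation 1, where
# `sizes` holds the interest-community sizes (placeholder name).
\end{verbatim}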
\begin{table}[t]
\centering
\tbl{Mix of users in the 8 conversation communities according to the 3 main interest communities. The $+$ and $-$ signs indicate whether the observed number of users is above or below expectation. All conversation communities
(except Conversation 4) are significant ($p < 0.001$, marked $^{(***)}$) according to a chi-square statistic calculated for each row independently.}
{\begin{tabular}{lllll}
\toprule
& Politics & Media/Data & Healthcare & \\
\colrule
Conversation 1 & 201$(-)$ & 113$(-)$ & 808$(+)$ & \textit{`Healthcare'-dominated}$^{(***)}$\\
Conversation 2 & 427$(-)$& 778$(+)$ & 334$(-)$ & \textit{`Media/Data'-dominated}$^{(***)}$\\
Conversation 3 & 834$(+)$ & 532$(-)$ & 290$(-)$ & \textit{`Politics'-dominated}$^{(***)}$ \\
Conversation 4 & 0$(-)$ & 2$(+)$ & 0$(-)$ & \\
Conversation 5 & 65$(+)$ & 54$(+)$ & 1$(-)$ & \textit{`Politics' \& `Media/Data'}$^{(***)}$\\
Conversation 6 & 29$(-)$ & 261$(+)$ & 16$(-)$ & \textit{`Media/Data'-dominated}$^{(***)}$\\
Conversation 7 & 66$(-)$& 15$(-)$ & 161$(+)$ & \textit{`Healthcare'-dominated}$^{(***)}$\\
Conversation 8 & 754$(+)$& 632$(+)$ & 311$(-)$ & \textit{`Politics' \& `Media/Data'}$^{(***)}$\\
\botrule
\end{tabular}}
\label{tbl:convo_interest}
\end{table}
To identify the topics\index{discussion topics} being discussed
within the different conversations, we extracted the text of the
tweets and retweets sent by users within each group and produced word
clouds with the most frequent words used in those conversations
(Fig.~\ref{fig:conversation_comms}). Conversation 1 centred primarily
around healthcare professionals discussing the impact of the scheme on
patients, containing words such as `patient', `public', and `people'.
The media and data tweeters in conversation 2 were more opinionated,
using words like `mess', `wrong', and `sorry'. In conversation 3,
political activists discussed privacy issues such as the `opt' out
arrangement, the selling (`sold') of `records' to `insurance'
companies, and the involvement of the controversial digital services
company Atos. Conversation 6 was dominated by data geeks, who
discussed `medical records' and `privacy' issues. Finally,
conversation 8 brought together users from both the healthcare and
data communities in a more general discussion.
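The word clouds in Fig.~\ref{fig:conversation_comms} summarise simple
term frequencies. A minimal sketch of this kind of computation is
given below, with an ad-hoc stopword list; it is an illustration
only, and the tokenisation and stopwords are assumptions rather than
the exact preprocessing used here.
\begin{verbatim}
# Illustrative sketch only: most frequent words per conversation.
import re
from collections import Counter

STOPWORDS = {'the', 'and', 'for', 'that', 'this', 'with', 'you', 'rt'}

def top_words(tweets, k=30):
    counts = Counter()
    for text in tweets:
        for w in re.findall(r"[a-z']+", text.lower()):
            if len(w) > 2 and w not in STOPWORDS:
                counts[w] += 1
    return counts.most_common(k)
\end{verbatim}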
\section{Conclusion}
By applying the multiscale flow-based community detection method
Markov Stability to follower networks of Twitter users, we have
identified separate participating groups in the debate concerning the
healthcare programme care.data. We have shown that users within
these groups share similar interests, and that the audiences of Twitter users outside the
network (i.e. those who did not participate in discussion of care.data,
but follow someone who did) are distinct for the different
communities. By analysing the retweet network, we have identified
specific topics being discussed in different conversation communities.
Furthermore, by comparing the communities found in the follower and
retweet networks, we have shown that the actual flow of information
(in the form of retweets) is heavily influenced by the network of
follower relations. Using role-based similarity, we have classified
the users in the care-data debate according to the role they play in
propagating information across the network. The information uncovered
by these methods could be of great value to policy makers, who, in
order to target the largest possible audience, need to understand the
different communities and the different roles played by the
individuals within them.
\bibliographystyle{ws-rv-van}
\section{Introduction}
\label{sec:introduction}
The main tool used in our work on Hamilton cycles in line
graphs~\cite{KV-hamilton} is a result called `Skeletal
Lemma'~\cite[Lemma 17]{KV-hamilton}. It deals with quasigraphs in
3-hypergraphs (see below for definitions) and is related to Tutte's
and Nash-Williams' characterisation of graphs with two disjoint
spanning trees.
In our recent paper~\cite{KV-ess-9}, we need to use the lemma in a
slightly stronger form that unfortunately does not follow from the
formulation given in~\cite{KV-hamilton}. Instead of pointing out the necessary
modifications to the long and complicated proof, we decided to use
this opportunity to rewrite the proof completely, trying to formulate
it in as conceptually simple a way as we can. That is the purpose of
the present paper, which is a companion paper to~\cite{KV-ess-9}. In
addition, the present paper aims to give the full proof in detail,
even in parts where the argument in \cite{KV-hamilton} is somewhat
sketchy.
The structure of the paper is as follows. In Section~\ref{sec:quasi},
we review the basic notions related to quasigraphs, the structures
forming a central concept of our proof. In
Section~\ref{sec:anti-conn}, we develop the basic properties of the
notion of connectivity and especially anticonnectivity of a quasigraph
on a set of vertices. This allows us to define, for any quasigraph, a
sequence of successively more and more refined partitions of the
vertex set that serves as a measure of `quality' of the
quasigraph. This is done in
Section~\ref{sec:sequence}. Section~\ref{sec:skeletal} gives the proof
of the main result, a stronger version of the Skeletal Lemma
(Theorem~\ref{t:enhancing}). Finally, in Section~\ref{sec:bad}, we
infer the result we need for the above mentioned application
in~\cite{KV-ess-9} (Theorem~\ref{t:no-bad}).
\section{Quasigraphs}
\label{sec:quasi}
A \emph{$3$-hypergraph} is a hypergraph whose hyperedges have size $2$
or $3$. Throughout this paper, let $H$ be a 3-hypergraph. A
\emph{quasigraph} in $H$ is a mapping $\pi$ that assigns to each
hyperedge $e$ of $H$ either a subset of $e$ of size 2, or the empty
set. The hyperedges $e$ with $\pi(e)\neq \emptyset$ are said to be
\emph{used} by $\pi$. (See Figure~\ref{fig:quasi} for an
illustration.) The number of hyperedges used by $\pi$ is denoted by
$\sizee\pi$.
\begin{figure}
\begin{center}
\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig3}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig4}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}
\end{center}
\caption{(a) A 3-hypergraph $H$. Hyperedges of size $3$ are
depicted as three lines meeting at a point without a vertex
mark. (b) A quasigraph $\pi$ in $H$. For each hyperedge $e$ used
by $\pi$, the pair $\pi(e)$ is shown using one or two bold
lines, depending on the size of $e$.}
\label{fig:quasi}
\end{figure}
Given a quasigraph $\pi$ in $H$, we let $\pi^*$ denote the graph on
$V(H)$, obtained by considering the pairs $\pi(e')$ as edges whenever
$\pi(e')\neq\emptyset$ ($e'\in E(H)$). If $\pi^*$ is a forest, then
$\pi$ is \emph{acyclic}. If $\pi^*$ is the union of a cycle and a set
of isolated vertices, then $\pi$ is a \emph{quasicycle}. A
3-hypergraph $H$ is \emph{acyclic} if there exists no quasicycle in
$H$.
If $e$ is a hyperedge of $H$, then we define $\pi-e$ as the quasigraph
which satisfies $(\pi-e)(e) = \emptyset$, and coincides with $\pi$ on
all hyperedges other than $e$. If $e$ is a hyperedge not used by
$\pi$, and if $u,v\in e$, then $\pi+(uv)_e$ is the quasigraph that
coincides with $\pi$ except that its value on $e$ is $uv$ rather than
$\emptyset$.
The \emph{complement} $\overline\pi$ of $\pi$ is the subhypergraph of $H$
(on the same vertex set) consisting of the hyperedges not used by
$\pi$.
Let $\PP$ be a partition of $V(H)$. We say that $\PP$ is
\emph{nontrivial} if $\PP\neq\Setx{V(H)}$. If $X\subseteq V(H)$, then
the partition $\PP[X]$ of $X$ \emph{induced} by $\PP$ has all nonempty
intersections $P\cap X$, where $P\in\PP$, as its classes.
If $e\in E(H)$, then $e/\PP$ is defined as the set of all classes of
$\PP$ intersected by $e$. If there is more than one such class, then
$e$ is said to be \emph{$\PP$-crossing}. The hypergraph $H/\PP$ has
vertex set $\PP$ and its hyperedges are all the sets of the form
$e/\PP$, where $e$ is a $\PP$-crossing hyperedge of $H$. Thus, $H/\PP$
is a 3-hypergraph. A quasigraph $\pi/\PP$ in this hypergraph is
defined by setting, for every $\PP$-crossing hyperedge $e$ of $H$,
\begin{equation*}
(\pi/\PP)(e/\PP) =
\begin{cases}
\pi(e)/\PP & \text{if $\pi(e)$ is $\PP$-crossing,}\\
\emptyset & \text{otherwise.}
\end{cases}
\end{equation*}
We extend the above notation and write, e.g., $uv/\PP$ for the set of
classes of $\PP$ intersecting $\Setx{u,v}$, where $u,v\in V(H)$.
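Although the development below is purely combinatorial, the basic
objects are easy to represent explicitly, which may help the reader's
intuition. The following Python sketch (an illustration only, not
part of the formal argument) encodes a hyperedge as a
\texttt{frozenset} of vertices and a quasigraph as a mapping from
hyperedges to pairs (or to the empty set), and computes $H/\PP$ and
$\pi/\PP$; for simplicity, parallel hyperedges of $H/\PP$ arising
from distinct hyperedges of $H$ are not distinguished here.
\begin{verbatim}
# Illustration only.  H: a set of frozensets of size 2 or 3;
# pi: dict mapping each hyperedge to a frozenset of size 2
# (the pair pi(e)) or to frozenset() when e is unused by pi.
def quotient(H, pi, P):
    """Return H/P and pi/P for a partition P (iterable of frozensets)."""
    cls = {v: B for B in P for v in B}       # vertex -> its class
    HP, piP = set(), {}
    for e in H:
        eP = frozenset(cls[v] for v in e)    # e/P: classes met by e
        if len(eP) < 2:                      # e does not cross P
            continue
        HP.add(eP)
        peP = frozenset(cls[v] for v in pi[e])
        piP[eP] = peP if len(peP) == 2 else frozenset()
    return HP, piP
\end{verbatim}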
We often consider quasigraphs $\gamma$ in the complement of the
quasigraph $\pi/\PP$ (typically, such a $\gamma$ is a
quasicycle). Note that $\gamma$ assigns a value to each hyperedge
$e/\PP$ such that $\pi(e)$ does not cross $\PP$ (including
$\pi(e)=\emptyset$). The situation is illustrated in
Figure~\ref{fig:complement}.
\begin{figure}
\begin{center}
\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig5}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig6}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}
\caption{(a) A quasigraph $\pi$ in a $3$-hypergraph $H$ and a
partition $\PP$ of $V(H)$. The classes of the partition are
shown in grey. (b) A quasicycle $\gamma$ in the complement of
$\pi/\PP$ in the hypergraph $H/\PP$. Note that the vertex set of
this hypergraph is $\PP$.}
\label{fig:complement}
\end{center}
\end{figure}
\section{(Anti)connectivity}
\label{sec:anti-conn}
In this section, we define and explore the notions of components and
anticomponents of a quasigraph (on a set of vertices) that are
completely essential for our arguments.
Recall that $H$ denotes a 3-hypergraph. Let $\pi$ be a quasigraph in
$H$ and $X\subseteq V(H)$. We say that $\pi$ is \emph{connected on
$X$} if the induced subgraph of $\pi^*$ on $X$ is connected. The
\emph{components} of $\pi$ on $X$ are defined as the vertex sets of
the connected components of the induced subgraph of $\pi^*$ on $X$.
We say that $\pi$ is \emph{anticonnected on $X$} if for each
nontrivial partition $\RR$ of $X$, there is a hyperedge $f$ of $H$
intersecting at least two classes of $\RR$, and such that $\pi(f)$ is
a subset of one of the classes of $\RR$ (possibly
$\pi(f)=\emptyset$). If we need to refer to the hypergraph $H$, we say
that $\pi$ is \emph{anticonnected on $X$ in $H$}.
The above notions are illustrated in Figure~\ref{fig:anti}.
\begin{figure}
\begin{center}
\fig7
\end{center}
\caption{A quasigraph $\pi$ in $H$ and a set $X \subseteq V(H)$
(shown grey). The quasigraph $\pi$ is anticonnected on $X$ and has
four components on $X$.}
\label{fig:anti}
\end{figure}
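Since anticonnectivity quantifies over all nontrivial partitions of
$X$, it can be checked directly (if very inefficiently) by
enumeration. The following sketch continues the illustrative encoding
introduced above and is, again, not part of the formal development.
\begin{verbatim}
def set_partitions(xs):
    """All partitions of the list xs, as lists of lists."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):   # add `first` to an existing block
            yield part[:i] + [part[i] + [first]] + part[i+1:]
        yield [[first]] + part       # or open a new block

def is_anticonnected(H, pi, X):
    """Brute-force check of the definition (exponential in |X|)."""
    for R in set_partitions(list(X)):
        if len(R) < 2:               # only nontrivial partitions
            continue
        blocks = [set(B) for B in R]
        if not any(sum(1 for B in blocks if set(f) & B) >= 2
                   and any(set(pi[f]) <= B for B in blocks)
                   for f in H):
            return False
    return True
\end{verbatim}
Note that an unused hyperedge has $\pi(f)=\emptyset$ in the encoding,
and the empty set is a subset of every class, as the definition
requires.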
\begin{lemma}\label{l:union}
Let $\pi$ be a quasigraph in (a $3$-hypergraph) $H$ and $X,Y$
subsets of $V(H)$ such that $\pi$ is anticonnected on $X$ and
$Y$. Then $\pi$ is anticonnected on $X\cup Y$ whenever one of the
following holds:
\begin{enumerate}[\quad(i)]
\item $X$ and $Y$ intersect, or
\item there is a hyperedge $h$ of $H$ intersecting both $X$ and $Y$,
such that $\pi(h)$ is a subset of $X$ or $Y$ (possibly
$\pi(h)=\emptyset$).
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\RR$ be a nontrivial partition of $X\cup Y$. We find for $\RR$
the hyperedge whose existence is required by the definition of
anticonnectedness.
Suppose first that $\RR[X]$ is nontrivial. Since $\pi$ is
anticonnected on $X$, there is a hyperedge $f$ of $H$ such that $f$
intersects at least two classes of $\RR[X]$ and one of them contains
$\pi(f)$. Thus, $f$ intersects at least two classes of $\RR$ and one
of them contains $\pi(f)$.
We can thus assume, by symmetry, that both $\RR[X]$ and $\RR[Y]$ are
trivial. This implies that $\RR=\Setx{X,Y}$, so $X$ and $Y$ are
disjoint. In this case, the hyperedge $h$ from (ii) has the required
property.
\end{proof}
By Lemma~\ref{l:union}, the maximal sets $Y\subseteq X$ such that
$\pi$ is anticonnected on $Y$ partition $X$. We call them the
\emph{anticomponents} of $\pi$ on $X$.
\begin{lemma}\label{l:no-change}
Let $\pi$ and $\rho$ be quasigraphs in $H$ and $Y$ be a subset of
  $V(H)$ such that $\pi(e) = \rho(e)$ for every hyperedge $e$ of $H$ with
  $\size{e\cap Y} \geq 2$. Then $\pi$ is anticonnected on $Y$ if and
  only if $\rho$ is anticonnected on $Y$.
\end{lemma}
\begin{proof}
Suppose that $\pi$ is anticonnected on $Y$ and let $\RR$ be a
nontrivial partition of $Y$. Consider a hyperedge $f$ of $H$ such
that $f$ intersects two classes of $\RR$ and one of them contains
$\pi(f)$. By the assumption, $\rho(f) = \pi(f)$, so the same holds
for $\rho$ in place of $\pi$. Since $\RR$ is arbitrary, $\rho$ is
anticonnected on $Y$. The lemma follows by symmetry.
\end{proof}
We prove several further lemmas that describe some of the basic
properties of (anti)con\-nect\-i\-vi\-ty of quasigraphs.
\begin{lemma}\label{l:H-e}
Let $\pi$ be a quasigraph in $H$, $X\subseteq V(H)$ and $e$ a
hyperedge of $H$ with $\size{e\cap X}\leq 1$. If $\pi$ is
anticonnected on $X$ in $H$, then $\pi$ is anticonnected on $X$ in
$H-e$.
\end{lemma}
\begin{proof}
Let $\RR$ be a nontrivial partition of $X$. Since $\pi$ is
anticonnected on $X$ in $H$, there is a hyperedge $f$ of $H$
intersecting at least two classes of $\RR$, one of which contains
$\pi(f)$. The hyperedge $f$ is distinct from $e$ as
$\size{e\cap X} \leq 1$. Thus, $f \in E(H-e)$. Since $\RR$ is
arbitrary, $\pi$ is anticonnected on $X$ in $H-e$.
\end{proof}
\begin{lemma}\label{l:sub-add}
Let $\pi$ be a quasigraph in $H$ and $Y\subseteq X$ subsets of
$V(H)$. Suppose that $e$ is a hyperedge of $H$ not used by $\pi$ and
containing vertices $u,v\in Y$. Define $\rho$ as the quasigraph $\pi
+ (uv)_e$. The following holds:
\begin{enumerate}[\quad(i)]
\item if $\pi$ is anticonnected on $X$ and $\rho$ is anticonnected
on $Y$, then $\rho$ is anticonnected on $X$,
\item if $\pi$ is connected on $X$, then so is $\rho$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (i). Consider an arbitrary partition $\RR$ of $X$. We aim
to show that there is a hyperedge $f$ of $H$ such that $f$
intersects two classes of $\RR$ and $\rho(f)$ is contained in one of
them. This is certainly true if $\RR[Y]$ is nontrivial, since $\rho$
is assumed to be anticonnected on $Y$. Thus, we may assume that $Y$
is contained in a class of $\RR$.
Since $\pi$ is anticonnected on $X$, there is a hyperedge $h$ of $H$
such that $h$ intersects two classes of $\RR$ and $\pi(h)$ is
contained in one of them. We set $f := h$. If $h\neq e$, this choice
works because $\rho(h) = \pi(h)$. If $h=e$, then $\rho(h)$ is
contained in $Y$ and therefore in a class of $\RR$. This concludes
the proof of (i).
Part (ii) follows directly from the fact that $\pi^*$ is a subgraph
of $\rho^*$, and therefore the induced subgraph of $\pi^*$ on $X$ is
a subgraph of the induced subgraph of $\rho^*$ on $X$.
\end{proof}
\begin{lemma}\label{l:sub-remove}
Let $\pi$ be a quasigraph in $H$ and $Y\subseteq X$ subsets of
$V(H)$. Suppose that $e$ is a hyperedge of $H$ with $\pi(e)\subseteq
Y$. Define $\sigma$ as the quasigraph $\pi-e$. It holds that
\begin{enumerate}[\quad(i)]
\item if $\pi$ is anticonnected on $X$, then so is $\sigma$,
\item if $\pi$ is connected on $X$ and $\sigma$ is connected on $Y$,
then $\sigma$ is connected on $X$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (i). Suppose, for contradiction, that $\sigma$ is not
  anticonnected on $X$. By the definition, there is a nontrivial partition
$\SSS$ of $X$ such that for all hyperedges $f$ of $H$ intersecting
at least two classes of $\SSS$, $\sigma(f)$ intersects two classes
of $\SSS$ as well. On the other hand, since $\pi$ is anticonnected
on $X$ and $\pi=\sigma$ except for the value at $e$, it must be that
the hyperedge $e$ intersects two classes of $\SSS$ (and $\pi(e)$ is
contained in one class). Since $\sigma(e)=\emptyset$, we obtain a
contradiction.
Next, we prove (ii). Note that $\sigma^*$ is a subgraph of
$\pi^*$. Since $\sigma$ is connected on $Y$, so is $\pi$. We show
that $\sigma$ is connected on $X$. Let $\pi^*_X$ be the induced
subgraph of $\pi^*$ on $X$, and let $\pi(e) = \Setx{u,v}$. We need
to prove that any two vertices in $X$ are joined by a walk in the
induced subgraph of $\sigma^*$ on $X$, which equals
$\pi^*_X-uv$. This is easy from the fact that $\pi^*_X$ is
connected, and that the edge $uv$ may be replaced in any walk by a
path from $u$ to $v$ in the induced subgraph of $\sigma^*$ on $Y$
(which is connected).
\end{proof}
Let us now define two notions that will play a role when we introduce the
sequence of a quasigraph in Section~\ref{sec:sequence}. Suppose that
$X\subseteq V(H)$ such that the quasigraph $\pi$ is both connected and
anticonnected on $X$. Let $e$ be a hyperedge with
$\size{e\cap X} = 2$.
We say that $e$ is an \emph{$X$-bridge} (with respect to $\pi$) if $e$
is used by $\pi$, $\pi(e) \subseteq X$, and $\pi-e$ is not connected
on $X$ in $H-e$. Similarly, $e$ is an \emph{$X$-antibridge} (with
respect to $\pi$) if $e$ is not used by $\pi$ and $\pi$ is not
anticonnected on $X$ in $H-e$.
\begin{figure}
\begin{center}
\fig8
\end{center}
\caption{A quasigraph $\pi$ in a $3$-hypergraph $H$ and a set
$X\subseteq V(H)$ (shown grey) such that $\pi$ is both connected
and anticonnected on $X$. The hyperedge $e$ is an $X$-bridge with
respect to $\pi$, while $f$ is an $X$-antibridge with respect to
$\pi$.}
\end{figure}
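Continuing the illustrative encoding, the two tests can be expressed
as follows, assuming $X$ is nonempty and $\size{e\cap X}=2$ as in the
definitions; \texttt{is\_anticonnected} is the brute-force routine
sketched earlier, and the whole fragment is an illustration only.
\begin{verbatim}
def is_connected_on(H, pi, X):
    """Is the induced subgraph of pi* on X connected?  (X nonempty.)"""
    X = set(X)
    adj = {v: set() for v in X}
    for f in H:
        p = set(pi[f])
        if len(p) == 2 and p <= X:   # an edge of pi* inside X
            u, v = p
            adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(X))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == X

def is_X_bridge(H, pi, X, e):        # assumes e meets X in 2 vertices
    if not (pi[e] and set(pi[e]) <= set(X)):
        return False
    pi2 = dict(pi); pi2[e] = frozenset()
    return not is_connected_on(H - {e}, pi2, X)

def is_X_antibridge(H, pi, X, e):    # assumes e meets X in 2 vertices
    if pi[e]:
        return False
    H2 = H - {e}
    return not is_anticonnected(H2, {f: pi[f] for f in H2}, X)
\end{verbatim}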
\section{The plane sequence of a quasigraph}
\label{sec:sequence}
Let $\pi$ be a quasigraph in a $3$-hypergraph
$H$. In~\cite{KV-hamilton}, we associate with $\pi$ a sequence of
partitions of $V(H)$. In the present paper, we proceed similarly, but
for technical reasons, we need to extend the original definition to
involve a two-dimensional analogue of a sequence. A \emph{plane
sequence} is a family $(\PP_{i,j})_{i,j\geq 0}$ of partitions of
$V(H)$.
It will be convenient to consider the lexicographic order $\leq$ on
pairs of nonnegative integers: $(i,j) \leq (i',j')$ if either
$i < i'$, or $i = i'$ and $j \leq j'$. This is extended in the natural
way to the set
\begin{equation*}
\TT = \Set{(i,j)}{0\leq i < \infty, 0\leq j\leq\infty}\cup\Setx{(\infty,\infty)}.
\end{equation*}
For instance, $(1,\infty) < (2,0) < (\infty,\infty)$. This is a
well-ordering on the set $\TT$, which allows us to perform
induction over $\TT$.
The \emph{(plane) sequence of $\pi$}, denoted by $\lseq\pi$, consists of
partitions $\ptn\pi i j$ of $V(H)$, where $(i,j)\in\TT$. We let
$\ptn\pi00$ be the trivial partition $\Setx{V(H)}$. If $j \geq 1$ and
$\ptn\pi i {j-1}$ is defined, then we let
\begin{equation*}
\ptn\pi i j =
\begin{cases}
\Set{K}{K\text{ is a component of $\pi$ on some
$X\in\ptn\pi i {j-1}$}} & \text{if $j$ is
odd,}\\
\Set{K}{K\text{ is an anticomponent of $\pi$ on some
$X\in\ptn\pi i {j-1}$}} & \text{if $j$ is even.}
\end{cases}
\end{equation*}
See Figure~\ref{fig:seq} for an example.
\begin{figure}
\centering\fig9
\caption{The sequence of partitions for a quasigraph $\pi$, from
$\ptn\pi00$ (lightest gray) to $\ptn\pi03$ (darkest gray). In this
case, $\ptn\pi03 = \ptn\pi0\infty = \ptn\pi\infty\infty$.}
\label{fig:seq}
\end{figure}
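The alternation of components and anticomponents is easy to express
in the illustrative encoding used earlier. In the sketch below, the
helper routines \texttt{components\_on} and
\texttt{anticomponents\_on} (returning the components, respectively
anticomponents, of $\pi$ on a class $X$) are assumed; the former is
ordinary graph connectivity on $\pi^*$, while the latter can be built
from \texttt{is\_anticonnected}.
\begin{verbatim}
# Illustration only: one row of the plane sequence, iterated to a
# fixed point.  components_on / anticomponents_on are assumed helpers.
def refine(H, pi, P, j):
    step = components_on if j % 2 == 1 else anticomponents_on
    return [K for X in P for K in step(H, pi, X)]

def row_limit(H, pi, P):
    j, unchanged = 1, 0
    while unchanged < 2:   # stable under both kinds of refinement
        Q = refine(H, pi, P, j)
        unchanged = unchanged + 1 if len(Q) == len(P) else 0
        P, j = Q, j + 1
    return P               # the partition P^pi_(i,infinity)
\end{verbatim}
Since each step only splits classes, the number of classes is
nondecreasing, and the loop terminates once a full
components-plus-anticomponents round produces no change.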
So far, this yields the partitions $\ptn\pi00,\ptn\pi01,\dots$. We
first notice that since $H$ is finite, there is some $j_0$ such that
\begin{equation*}
\ptn\pi 0 {j_0} = \ptn\pi 0 {j_0 + 1} = \dots,
\end{equation*}
and we set $\ptn\pi 0 \infty$ equal to $\ptn\pi 0 {j_0}$. We will use
an analogous definition to construct $\ptn\pi i \infty$ for $i > 0$
once $\ptn\pi i 0,\ptn\pi i 1,\dots$ have been defined. (See
Figure~\ref{fig:layout} for a schematic illustration.)
\begin{figure}
\centering\fig{10}
\caption{The order of partitions in the construction of a plane
sequence of a quasigraph.}
\label{fig:layout}
\end{figure}
By the construction, $\ptn\pi 0 \infty$ has the property that $\pi$ is
both connected and anticonnected on each of its classes. We call any
such partition of $V(H)$ \emph{$\pi$-solid}.
The definition of the plane sequence of $\pi$ will be completed once
we define $\ptn\pi i 0$ for all $i\geq 1$. Thus, let $i\geq 1$ be
fixed, and suppose that the partition $\PP:=\ptn\pi {i-1} \infty$ is
already defined.
Let $A,B\in\PP$. The \emph{exposure step} of the pair $AB$ is the
least $(s,t)$ (with respect to the ordering defined above) such that
$A$ and $B$ are contained (as subsets) in different classes of
$\ptn\pi s t$. Similarly, for a pair of vertices $u,v$ of $H$
contained in different classes of $\PP$, the exposure step of the pair
$uv$ is the least $(s,t)$ such that $u$ and $v$ are contained in
different classes of $\ptn\pi s t$.
Suppose that $\gamma$ is a quasicycle in $\overline{\pi/\PP}$. The
\emph{exposure step} of $\gamma$ is the least exposure step of
$\gamma(e/\PP)$, where $e$ ranges over all hyperedges of $H$ such that
$e/\PP$ is used by $\gamma$. If the exposure step of $\gamma$ is
$(s,t)$, we also say that $\gamma$ is \emph{exposed} at (step)
$(s,t)$. We say that a hyperedge $e$ of $H$ is a \emph{leading
hyperedge} of $\gamma$ if $e/\PP$ is used by $\gamma$ and the
exposure step of $\gamma(e/\PP)$ equals that of $\gamma$.
The terms `exposure step' and `leading hyperedge' are defined
similarly for a cycle in the graph $(\pi/\PP)^*$, by viewing it as a
quasicycle in $H/\PP$. Later in this section, we will generalise these
notions to the situation where the plane sequence has already been
completely defined.
We extend the notions of $X$-bridge and $X$-antibridge defined in
Section~\ref{sec:anti-conn} as follows: given a $\pi$-solid partition
$\RR$ of $V(H)$, $e$ is an \emph{$\RR$-bridge}
(\emph{$\RR$-antibridge}) if there is $X\in\RR$ such that $e$ is an
$X$-bridge ($X$-antibridge, respectively).
We say that a hyperedge $e$ of $H$ crossing $\RR$ is \emph{redundant}
(with respect to $\pi$ and $\RR$) if $e$ is not used by $\pi$ and $e$
is not an $\RR$-antibridge. Note that a hyperedge $e$ unused by $\pi$
is redundant if $\size e = 2$, or more generally, if each of its
vertices is in a different class of $\RR$.
Furthermore, a hyperedge $e$ of $H$ is \emph{weakly redundant} (with
respect to $\pi$ and $\RR$) if either it is redundant, or it is used
by $\pi$ and is not an $\RR$-bridge.
We are now ready to define the partition $\ptn\pi i 0$. We will say
that it is obtained from $\ptn\pi{i-1}\infty$ by the \emph{limit step
$(i-1,\infty)$}. At the same time, we will define the \emph{decisive
hyperedge at $(i-1,\infty)$}, $d^\pi_{i-1}$, for the current limit
step. This will be a hyperedge of $H$; for technical reasons, we also
allow two extra values, $\text{\textsc{stop}}$ and $\text{\textsc{terminate}}$.
If the complement of $\pi/\PP$ in $H/\PP$ is acyclic, we define
$\ptn\pi i 0 = \PP$ and say that $\pi$ \emph{terminates at
$(i-1,\infty)$}. We set $d^\pi_{i-1} = \text{\textsc{terminate}}$.
Otherwise, let $L$ be the set of such hyperedges $f$ of $H$ for which
there exists a quasicycle $\gamma$ in $\overline{\pi/\PP}$ such that
$f$ is a leading hyperedge of $\gamma$. We define $\ptn\pi i 0 = \PP$
if $L$ contains a weakly redundant hyperedge $f$ (with respect to
$\pi$ and $\PP$). In this case, we say that $\pi$ \emph{stops at
$(i-1,\infty)$}; we set $d^\pi_{i-1} = \text{\textsc{stop}}$.
If no weakly redundant hyperedge exists in $L$, choose the maximum
hyperedge $e$ in $L$ according to a fixed linear ordering $\leq_E$ of
all hyperedges in $H$. (For the purposes of this and the following
section, the choice of $\leq_E$ is not important; it will be discussed
in more detail in Section~\ref{sec:bad}). Set $d^\pi_{i-1}$ equal to
$e$. We say that $\pi$ \emph{continues} at $(i-1,\infty)$. Moreover,
in this case, any quasicycle in $\overline{\pi/\PP}$ whose leading
hyperedge is $e$ will be referred to as a \emph{decisive quasicycle at
$(i-1,\infty)$}.
Since $e$ is not weakly redundant, we can distinguish the following
two cases for the definition of $\ptn\pi i 0$:
\begin{itemize}
\item if $e$ is a $\PP$-antibridge, then the classes of $\ptn\pi i 0$
are all the anticomponents of $\pi$ on $X$ in $H-e$, where $X$
ranges over all classes of $\PP$,
\item if $e$ is a $\PP$-bridge, then the classes of $\ptn\pi i 0$ are
all the components of $\pi-e$ on $X$ in $H$, where $X$ ranges over
all classes of $\PP$.
\end{itemize}
The subsequent partitions $\ptn\pi i 1,\ptn\pi i 2,\dots$ are then
defined as described above, and the partition $\ptn\pi i\infty$ is
defined analogously to $\ptn\pi 0\infty$. Iter\-ating, we obtain the
whole plane sequence $\lseq\pi$ and the partitions
$\ptn\pi 0\infty,\ptn\pi 1 \infty,\dots$. By the finiteness of $H$,
there is some $i_0$ such that
$\ptn\pi {i_0} \infty = \ptn\pi {i_0+1} \infty$, and we define
$\ptn\pi\infty\infty$ as $\ptn\pi {i_0} \infty$.
Observe that in the cases where $\pi$ stops or terminates at $(i-1,\infty)$
and we define $\ptn \pi i 0$ as $\ptn \pi {i-1} \infty$, this
partition will in fact equal $\ptn\pi\infty\infty$ since none of the
subsequent steps in the construction of the plane sequence of $\pi$
will lead to any modifications.
Now that the sequence of a quasigraph $\pi$ has been completely
defined, let us revisit the definitions of the terms `exposure step'
and `leading hyperedge'. Although these are defined relative to a
partition $\ptn\pi{i-1}\infty$ (for some $i\geq 1$), this only affects
the scope of the definitions: for instance, if vertices $u,v$ are
contained in different classes of $\ptn\pi\ell\infty$, where
$\ell\geq i$, then the exposure step of $uv$ is the same whether we
use $\ptn\pi{i-1}\infty$ or $\ptn\pi\ell\infty$ for the definition.
In particular, if we let $\QQ=\ptn\pi\infty\infty$, then it makes
sense to speak of leading hyperedges of any quasicycle in
$\overline{\pi/\QQ}$ or the exposure step of a pair of vertices
contained in different classes of $\QQ$.
We now define a partial order on the set of all quasigraphs in $H$
that is crucial for our argument. First, we define the
\emph{signature} $\sig\pi$ of a quasigraph $\pi$ as the sequence
\begin{equation*}
\sig\pi =
(\ptn\pi00,\ptn\pi01,\dots,\ptn\pi0\infty,d^\pi_0,\ptn\pi10,\dots,
\ptn\pi1\infty,d^\pi_1,\dots,\ptn\pi \ell\infty,d^\pi_\ell),
\end{equation*}
where $\ell$ is minimum such that $d^\pi_\ell\in\Setx{\text{\textsc{terminate}},\text{\textsc{stop}}}$.
We derive from this an order $\sqsubseteq$ on quasigraphs in $H$, setting
$\pi \sqsubseteq \rho$ if $\sig\pi$ is smaller than or equal to
$\sig\rho$ in the lexicographic order on the set of signatures of
quasigraphs.
It will be convenient to define several related notions to facilitate
the comparison of quasigraphs. Let $(i,j)\in\TT$. We define the
\emph{$(i,j)$-prefix} $\sig\pi_{(i,j)}$ of $\sig\pi$ as follows:
\begin{itemize}
\item if $j < \infty$, then $\sig\pi_{(i,j)}$ is the initial segment
of $\sig\pi$ ending with (and including) $\ptn\pi i j$,
\item if $j = \infty$, then $\sig\pi_{(i,j)}$ is the initial segment
of $\sig\pi$ ending with (and including) $d^\pi_i$.
\end{itemize}
We let $\pi\worseeqx i j \rho$ if $\sig\pi_{(i,j)}$ is
lexicographically smaller than or equal to $\sig\rho_{(i,j)}$. Furthermore,
we define
\begin{align*}
\pi \equiv \rho &\text{\quad if\quad} \pi\sqsubseteq\rho \text{ and
}\rho\sqsubseteq\pi,\\
\pi \eqx i j \rho &\text{\quad if\quad} \pi\worseeqx i j \rho \text{ and
}\rho\worseeqx i j \pi.
\end{align*}
Lastly, the notation $\pi\sqsubset\rho$ means $\pi\sqsubseteq\rho$ and
$\pi\not\equiv\rho$.
\section{The main result: a variant of the Skeletal Lemma}
\label{sec:skeletal}
In this section, we are finally in a position to state and prove the
main result of this paper that is essentially a more specific version
of~\cite[Lemma 17]{KV-hamilton}. Before we state it, we need one more
definition.
Let $H$ be a $3$-hypergraph and $\pi$ an acyclic quasigraph in $H$. A
partition $\PP$ of $V(H)$ is \emph{$\pi$-skeletal} if both of the
following conditions hold:
\begin{enumerate}[(1)]
\item for each $X\in\PP$, $\pi$ is both connected on $X$ and
anticonnected on $X$ (i.e., $\PP$ is $\pi$-solid),
\item the complement of $\pi/\PP$ in $H/\PP$ is acyclic.
\end{enumerate}
\begin{theorem}[Skeletal Lemma, stronger version]\label{t:enhancing}
Let $\pi$ be a quasigraph in a 3-hypergraph $H$. If
$\ptn\pi\infty\infty$ is not $\pi$-skeletal or $\pi$ is not acyclic,
then there is a quasigraph $\rho$ in $H$ such that either
$\rho \sqsupset \pi$, or $\rho\equiv\pi$ and $\sizee\rho < \sizee\pi$.
\end{theorem}
An obvious corollary of Theorem~\ref{t:enhancing} (which will be
further strengthened in Section~\ref{sec:bad}) is the following:
\begin{corollary}\label{cor:main}
For any $3$-hypergraph $H$, there exists an acyclic quasigraph $\pi$
such that $\ptn\pi\infty\infty$ is $\pi$-skeletal.
\end{corollary}
Before proving Theorem~\ref{t:enhancing}, we need to establish the
following crucial lemma. The situation is illustrated in
Figure~\ref{fig:qc-addition}.
\begin{figure}
\centering\fig{11}
\caption{The situation in Lemma~\ref{l:qc-addition}: the quasigraph
$\pi$ (bold) and the partition $\QQ$ (dark gray). Only some of
the hyperedges and vertices are shown; in particular, $\QQ$ is
assumed to be $\pi$-solid.}
\label{fig:qc-addition}
\end{figure}
\begin{lemma}\label{l:qc-addition}
Let $\pi$ be a quasigraph in a 3-hypergraph $H$ and $X\subseteq
V(H)$ such that $\pi$ is anticonnected on $X$. Suppose that $\QQ$ is
a $\pi$-solid partition of $V(H)$ refining $\Setx{X,V(H)-X}$ and
that $\gamma$ is a quasicycle in $\overline{\pi/\QQ}$ all of whose
vertices are subsets of $X$ (as classes of $\QQ$).
If $\gamma$ has a redundant leading hyperedge $e$ (with respect to
  $\pi$ and $\QQ$), then there are vertices $u,v\in e$, contained
  in different classes of $\gamma(e)$, such that
  the quasigraph $\pi + (uv)_e$ is anticonnected on $X$.
\end{lemma}
\begin{proof}
Let the vertices of $\gamma^*$ be $Q_1,\dots,Q_k\in\QQ$ in order,
such that $\gamma(e) = \Setx{Q_k,Q_1}$.
\begin{claim}\label{cl:S}
The quasigraph $\pi$ is anticonnected on $Q_1\cup\dots\cup Q_k$
in $H-e$.
\end{claim}
\begin{claimproof}
Since $e$ is redundant, it is not a $\QQ$-antibridge, so $\pi$
is anticonnected on each $Q_i$ in $H-e$, where $i=1,\dots,k$. We
prove, by induction on $j$, that $\pi$ is anticonnected on
$Q_1\cup\dots\cup Q_j$ in $H-e$, where $1\leq j\leq k$. The case
$j=1$ is clear. Supposing that $j > 1$ and the statement is
valid for $j-1$, we prove it for $j$. Consider two consecutive
vertices $Q_{j-1}, Q_j$ of $\gamma^*$ and the edge $f$ of
$\gamma^*$ joining them. Since $\gamma$ is a quasigraph in
$\overline{\pi/\QQ}$, $f$ corresponds to a hyperedge $h\neq e$
of $H$ intersecting both $Q_{j-1}$ and $Q_j$, and such that
$\pi(h)$ is contained in $Q_{j-1}$ or $Q_j$ (including the case
$\pi(h)=\emptyset$). By the induction hypothesis and
Lemma~\ref{l:union}, $\pi$ is anticonnected on
$(Q_1\cup\dots\cup Q_{j-1})\cup Q_j$ in $H-e$.
\end{claimproof}
Let $u\in Q_k\cap e$ and $v\in Q_1\cap e$. Observe that $u$ and
$v$ are contained in different classes of $\gamma(e)$ as stated in
the lemma. By Claim~\ref{cl:S}, $u$ and $v$ are contained in the
same anticomponent of $\pi$ on $X$ in $H-e$. It follows that $u$
and $v$ are contained in the same anticomponent $A$ of
$\rho := \pi+(uv)_e$ on $X$ in $H$. In fact, the following holds:
\begin{claim}\label{cl:rho-anti}
The quasigraph $\rho$ is anticonnected on $X$ in $H$.
\end{claim}
\begin{claimproof}
Suppose the contrary and consider a partition $\RR$ of $X$ such
that for each hyperedge $f$ of $H$ crossing $\RR$, $\rho(f)$
also crosses $\RR$. Then $e$ must cross $\RR$, since otherwise
$\RR$ would demonstrate that $\pi$ is not anticonnected on $X$,
contrary to the assumption of the lemma. Thus, $\rho(e)$ crosses
$\RR$, so $u$ and $v$ are contained in distinct classes of
$\RR$. The partition $\RR[A]$ of $A$ (the anticomponent of
$\rho$ defined above) induced by $\RR$ is therefore
nontrivial. Since $\rho$ is anticonnected on $A$, there is a
hyperedge $h$ of $H$ such that $h$ crosses $\RR[A]$ and $\rho(h)$
is contained in a class of $\RR[A]$. But then $h$ crosses $\RR$
while $\rho(h)$ does not, a contradiction with the choice of
$\RR$ which proves the claim.
\end{claimproof}
We have shown that the present choice of $u$ and $v$ satisfies all
requirements of the lemma. This concludes the proof.
\end{proof}
\begin{lemma}\label{l:qc}
  Suppose that $\RR$ is a partition of $V(H)$ and $X\in\RR$. If $e$ is
  a hyperedge of $H$ not used by $\pi$ with $e\cap X\supseteq\Setx{u,v}$ and $\tau$ is
  the quasigraph $\pi + (uv)_e$, then
$\overline{\pi/\RR} = \overline{\tau/\RR}$.
\end{lemma}
\begin{proof}
The hypergraph $\overline{\pi/\RR}$ consists of hyperedges $f/\RR$
such that $f$ crosses $\RR$ and $f/\RR$ is not used by
$\pi/\RR$. For $f\neq e$ we clearly have $f\in\overline{\pi/\RR}$ if
and only if $f\in\overline{\tau/\RR}$. As for the hyperedge $e$, if
it does not cross $\RR$, there is no corresponding hyperedge in
either of $\overline{\pi/\RR}$ and $\overline{\tau/\RR}$. If $e$
crosses $\RR$, then $e/\RR$ is not used by $\pi/\RR$ (since $e$ is
not used by $\pi$) nor by $\tau/\RR$ (since $u,v\in X\in\RR$). Thus,
$e/\RR$ is a hyperedge of both $\overline{\pi/\RR}$ and
$\overline{\tau/\RR}$. We conclude that
$\overline{\pi/\RR}=\overline{\tau/\RR}$.
\end{proof}
Given $(i,j)\in\TT$ with $i,j < \infty$ and $(i,j)\neq (0,0)$, the
\emph{predecessor} of the partition $\ptn\pi i j$ is the partition
$\ptn\pi i {j-1}$ if $j > 0$, and $\ptn\pi {i-1}\infty$ if $j = 0$ and
$i > 0$. The predecessors of the other partitions in the sequence for
$\pi$ are undefined.
\begin{observation}\label{obs:exposed}
Let $\QQ$ be a partition of $V(H)$. If a quasicycle $\gamma$ in
$\overline{\pi/\QQ}$ is exposed at $(i,j)$ with respect to $\pi$,
then $i,j < \infty$ and $(i,j)\neq (0,0)$; in particular, the
predecessor of $\ptn\pi i j$ exists. In addition, if $j = 1$ and
$i\geq 1$, then $d^\pi_{i-1}$ is a hyperedge not used by $\pi$.
\end{observation}
\begin{proof}
Clearly, $(i,j) \neq (0,0)$ since $\ptn\pi00 = \Setx{V(H)}$. By the
definition of the sequence of $\pi$, for any $r\geq 0$,
$\ptn\pi r \infty$ is equal to one (actually, infinitely many) of
the partitions $\ptn\pi r s$, where $0\leq s < \infty$, and hence it
cannot be the exposing partition for $\gamma$. A similar argument
applies to $\ptn\pi\infty\infty$. As for the last statement, suppose
that the exposing partition is $\ptn\pi i 1$. Clearly,
$d^\pi_{i-1}\notin\Setx{\text{\textsc{terminate}},\text{\textsc{stop}}}$, so it is a hyperedge. If it
were used by $\pi$, it would be a $\ptn\pi{i-1}\infty$-bridge, and
the classes of $\ptn\pi i 0$ would be the components of
$\pi-d^\pi_{i-1}$ on the classes of $\ptn\pi{i-1}\infty$. By the
definition of the sequence of $\pi$, $\ptn\pi i 1 = \ptn\pi i 0$ and
so $\ptn\pi i 1$ cannot be the exposing partition for $\gamma$.
\end{proof}
\begin{observation}
\label{obs:anti}
Suppose that $\pi$ is a quasigraph in $H$, $X\subseteq V(H)$ and $e$
is a hyperedge such that $e\cap X = \Setx{u,v}$. If $\pi+(uv)_e$ is
anticonnected on $X$, then $e$ is not an $X$-antibridge with respect
to $\pi$.
\end{observation}
\begin{proof}
Assume the contrary. Then there is a nontrivial partition $\RR$ of
$X$ such that for each hyperedge $f$ of $H-e$, $\pi(f)$ crosses
$\RR$ whenever $f$ does. Since there is no such partition for
$\pi+(uv)_e$ in $H$, it must be that $e$ crosses $\RR$ but $uv$ does
not. That is impossible since $e\cap X = \Setx{u,v}$.
\end{proof}
\begin{lemma}\label{l:stable}
Let $\pi$ be a quasigraph in $H$. Let $e$ be a hyperedge not used by
$\pi$ such that vertices $u,v\in e$ are contained in different
classes of a partition $\ptn\pi i j$ (where $i,j\geq 0$), but both
of them are contained in the same class $X$ of its predecessor. If
the quasigraph $\pi + (uv)_e$ is anticonnected on $X$, then
$\pi + (uv)_e \sqsupset \pi$.
\end{lemma}
\begin{proof}
Let $\rho = \pi + (uv)_e$. Suppose that $\rho\not\sqsupset\pi$. We
begin by proving the following claim:
\begin{equation}\label{eq:stable} \text{$\pi \worseeqx s t
\rho$ for all $(s,t) \leq (i,j)$.}
\end{equation}
We proceed by induction on $(s,t)$, assuming the claim for all
smaller pairs in $\TT$. The claim~\eqref{eq:stable} holds for
$(s,t) = (0,0)$; assume therefore that $(s,t) > (0,0)$. Suppose
first that $0 < t < \infty$ and the statement holds for
$(s,t-1)$. If $t$ is odd, then any class $A$ of $\ptn\pi{s}{t}$ is a
component of $\pi$ on a class of $\ptn\pi{s}{t-1}$. We have either
$A\supseteq X$, or $A\cap X = \emptyset$. In both cases, $\rho$ is
clearly connected on $A$ (by Lemma~\ref{l:sub-add}(ii) in the former
case). Thus, $\ptn\pi {s}{t} \leq \ptn\rho{s}{t}$ and
$\pi\worseeqx s t \rho$.
Next, if $t$ is even (and nonzero), we proceed similarly: if $A$ is
an anticomponent of $\pi$ on a class of $\ptn\pi{s}{t-1}$ and
$A\supseteq X$, then $\rho$ is anticonnected on $A$ by
Lemma~\ref{l:sub-add}(i), while if $A\cap X = \emptyset$, the same
is true for trivial reasons. Thus, again, $\pi\worseeqx s t \rho$.
The next case is $t = \infty$. By the induction hypothesis,
$\ptn\pi s \infty \leq \ptn\rho s \infty$; without loss of
generality, we may assume that the partitions are equal. Let
$\SSS = \ptn\pi s \infty$. We need to show that
$d^\pi_s \leq_E d^\rho_s$. By Lemma~\ref{l:qc},
$\overline{\pi/\SSS} = \overline{\rho/\SSS}$. In particular, the two
hypergraphs have the same quasicycles, and these quasicycles have
the same leading hyperedges. It follows that
$d^\pi_s \leq_E d^\rho_s$ or $d^\pi_s = \text{\textsc{stop}}$ --- but the latter
does not hold since $\pi$ cannot stop (or terminate) at $(s,\infty)$
as $\ptn\pi i j$ differs from its predecessor and
$(s,\infty) < (i,j)$.
It remains to consider the case $t=0$. Here, we have $s > 0$ and the
induction hypothesis implies that $\pi\worseeqx
{s-1}\infty\rho$. Let us assume that $\pi\eqx {s-1}\infty\rho$ and
denote $\ptn\pi{s-1}\infty$ by $\SSS$. We want to show that $\ptn\pi
s 0 \leq \ptn\rho s 0$.
Let $f := d^\pi_{s-1}$. For the same reason as above, $\pi$
continues at $(s-1,\infty)$, so $f\notin\Setx{\text{\textsc{terminate}},\text{\textsc{stop}}}$. This
means that $f$ is an $\SSS$-bridge or an $\SSS$-antibridge with
respect to $\pi$. In addition, $d^\rho_{s-1} = f$ by the assumption
that $\pi\eqx {s-1}\infty\rho$. Hence, $f$ is an $\SSS$-bridge or an
$\SSS$-antibridge with respect to $\rho$ as well.
Consider the set $X$ from the statement of the lemma. Since the
anticomponents of $\pi-f$ and $\rho-f$ on $X$ are clearly the same
if $\size{f\cap X}\leq 1$, we may assume that $\size{f\cap X}\geq 2$
--- indeed, since $f$ crosses $\SSS$, we must have equality. Let $S$
be the class of $\SSS$ containing $X$.
We distinguish three cases:
\begin{enumerate}[(a)]
\item $f = e$,
\item $f\neq e$ and $f$ is not used by $\pi$,
\item $f\neq e$ and $f$ is used by $\pi$.
\end{enumerate}
In case (a), $f$ ($=e$) is not used by $\pi$, so it is an
$S$-antibridge with respect to $\pi$. Since it intersects $S$ in two
vertices, these two vertices are $u$ and $v$, and each of them is
contained in a different anticomponent of $\pi$ on $S$ in
$H-f$. Moreover, it must be that $i = s$, $j = 0$ and $X = S$ (since
$u,v$ are contained in distinct classes of $\ptn\pi s 0$ but in one
class $S$ of its predecessor). However, an assumption of the lemma
is that $\rho$ is anticonnected on $X$, a contradiction with
Observation~\ref{obs:anti}. In other words, case (a) cannot occur.
In case (b), $f$ is also an $S$-antibridge with respect to $\pi$
(and $\rho$). To prove that $\ptn\pi s 0 \leq \ptn\rho s 0$, it is
enough to show that $\rho$ is anticonnected on each anticomponent of
$\pi$ on $S$ in $H-f$. Let $A$ be such an anticomponent. We have
$\size{e\cap A}=2$, otherwise it is easy to see that $f$ could not
be an $S$-antibridge with respect to $\pi$. It follows that
$X\subseteq A$. By Lemma~\ref{l:sub-add}(i), $\rho$ is anticonnected
on $A$ as claimed. The discussion of case (b) is complete.
Lastly, in case (c), $f$ is an $S$-bridge with respect to $\pi$ and
$\rho$. The quasigraph $\rho-f$ is clearly connected on each
component of $\pi-f$ on $S$ in $H$, so $\ptn\pi s 0 \leq \ptn\rho s 0$.
To summarise, each of the cases (a)--(c) leads either to a
contradiction, or to the sought conclusion
$\ptn\pi s 0 \leq \ptn\rho s 0$. This concludes the proof
of~\eqref{eq:stable}.
It remains to show that $\rho\sqsupset\pi$. Since $j < \infty$,
there are three cases to distinguish based on the value of $j$. If
$j$ is odd, then the classes of $\ptn\pi i j\setminus\ptn\pi i {j-1}$ are
the components of $\pi$ on $X$. Since $u$ and $v$ are in different
classes of $\ptn\pi i j$, the replacement of $\pi$ with $\rho$ has
the effect of adding the edge $uv$ to $\pi^*$, joining the two
components into one. Therefore, $\ptn\rho i j > \ptn\pi i j$, and
by~\eqref{eq:stable}, $\rho\sqsupset\pi$.
If $j$ is even and $j > 0$, the classes of $\ptn\pi i j$ are the
anticomponents of $\pi$ on $X$. Let $A_1$ and $A_2$ be such
anticomponents containing $u$ and $v$, respectively. By
  Lemma~\ref{l:union}, $\pi$ is anticonnected on $A_1\cup A_2$, since
  $e$ intersects both of these sets and is not used by $\pi$. This
contradiction means that the present case is not possible.
Finally, if $j=0$, $uv$ is exposed at $(i,0)$ with respect to
$\pi$. Let $f = d^\pi_{i-1}$ be the corresponding decisive
hyperedge. Since $\pi$ continues at $(i-1,\infty)$, $f$ is an
$X$-bridge or an $X$-antibridge with respect to $\pi$. This shows
that $f\neq e$ because $e$ is not an $X$-antibridge by
Observation~\ref{obs:anti}, and is not an $X$-bridge because it is
not used by $\pi$. Furthermore, similarly to the preceding case, it
cannot be that $f$ is an $X$-antibridge, for then $e$ would
intersect two anticomponents of $\pi$ on $X$ in $H-f$, which is
impossible by Lemma~\ref{l:union}.
Thus, $f$ is an $X$-bridge with respect to $\pi$, and the classes of
$\ptn\pi i 0$ are the components of $\pi-f$ on $X$ in $H$. Since
$u,v$ are in different components, the quasigraph $\rho-f$ is
  connected on $X$ in $H$, and consequently $\rho$ stops at
  $(i-1,\infty)$ and $\rho\sqsupset\pi$. This concludes the
proof.
\end{proof}
\begin{lemma}\label{l:stable-cycle}
Let $\pi$ be a quasigraph in $H$. Let $e$ be a hyperedge of $H$ used
  by $\pi$ and $X\subseteq V(H)$ such that $\pi(e)\subseteq X$, $\pi-e$ is
connected on $X$, and one of the following conditions is satisfied:
\begin{enumerate}[(a)]
\item the vertices of $\pi(e)$ are contained in different classes of
$\ptn\pi i j$ ($0\leq i,j < \infty$) and $X$ is a class of its
predecessor,
\item $X\in\ptn\pi\infty\infty$ and $\ptn\pi\infty\infty$ is
$\pi$-skeletal.
\end{enumerate}
Then $\pi-e \sqsupseteq \pi$. In addition, if (a) is satisfied, then
$\pi-e \sqsupset \pi$.
\end{lemma}
\begin{proof}
Suppose that condition (a) is satisfied. We prove, by induction on
$(s,t)$, that
\begin{equation}
\label{eq:1}
\pi \worseeqx s t \pi-e \text{ for all $(s,t) \leq (i,j)$}.
\end{equation}
The statement holds if $(s,t)=(0,0)$. Consider $(s,t) > (0,0)$. If
$0 < t < \infty$, then we may suppose that $\pi \eqx s {t-1} \pi-e$; by
Lemma~\ref{l:sub-remove}, $\ptn\pi s t \leq \ptn{\pi-e} s t$ and
therefore $\pi \worseeqx s t \pi-e$ as desired.
Suppose that $t = \infty$. Let $\PP = \ptn\pi s\infty$. Without loss
of generality, $\PP = \ptn{\pi-e} s \infty$. Since $X$ is a subset
of a class of $\PP$, Lemma~\ref{l:qc} implies that the complement of
$\pi/\PP$ is the same as the complement of $(\pi-e)/\PP$. In
particular, the quasicycles in these hypergraphs, as well as the
sets of their leading hyperedges, are the same.
We state the following simple observation as a claim for easier
reference later in the proof:
\setcounter{claim}0
\begin{claim}\label{cl:no-wr}
No leading hyperedge of any quasicycle in the complement of
$\pi/\PP$ is weakly redundant.
\end{claim}
Indeed, $\pi$ would have stopped at $(s,\infty)$, but condition (a)
implies that $\ptn\pi i j$ differs from its predecessor. Since
$(s,\infty) < (i,j)$, this would be a contradiction.
By Claim~\ref{cl:no-wr}, if $\pi-e$ stops at $(s,\infty)$, then
$\pi\sqsubset\pi-e$. We may therefore assume that $\pi-e$ continues at
$(s,\infty)$, in which case the decisive hyperedges at $(s,\infty)$
for $\pi$ and $\pi-e$ coincide. We conclude that
$\pi\worseeqx s \infty\pi-e$ in this case.
It remains to consider the case $t=0$. It suffices to show that
$\ptn\pi s 0 \leq \ptn{\pi-e} s 0$ assuming that
$\pi\eqx{s-1}\infty\pi-e$. Let $f := d^\pi_{s-1}$ be the decisive
hyperedge at $(s-1,\infty)$ with respect to $\pi$. The assumption
implies that $f = d^{\pi-e}_{s-1}$. Moreover,
$f\notin\Setx{\text{\textsc{terminate}},\text{\textsc{stop}}}$ because $\ptn\pi i j$ differs from its
predecessor, so $\pi$ has to continue at $(s-1,\infty)$ as
$(i,j) \geq (s,0)$.
If there is a class $A\in\ptn\pi s 0$ with $\pi(e)\subseteq A$, then
we have $X\subseteq A$ (where $X$ is the set from the lemma). It
follows from Lemma~\ref{l:sub-remove} that $\pi-e$ is connected on
$A$ whenever $\pi$ is, and similarly for
anticonnectivity. Consequently, $\ptn\pi s 0 \leq \ptn{\pi-e} s 0$.
We may therefore assume that $\pi(e)$ intersects two classes of
$\ptn\pi s 0$. By condition (a), $(i,j)=(s,0)$ and
$X\in\ptn\pi{s-1}\infty$. Moreover, $f$ is an $X$-bridge or
$X$-antibridge with respect to $\pi$. We can see that $f\neq e$:
otherwise, $e$ would necessarily be an $X$-bridge, but $\pi-e$ is
assumed to be connected on $X$. Since $\pi(e)$ intersects two
classes of $\ptn\pi s 0$ and it is used by $\pi$, we find that $f$
is not used by $\pi$ and the classes of $\ptn\pi s 0$ are the
anticomponents of $\pi$ on $X$ in $H-f$. But then $f$ is a redundant
leading hyperedge in a quasicycle in the complement of
$(\pi-e)/\ptn\pi{s-1}\infty$, contradicting our assumption that
$\pi-e$ continues at $(s-1,\infty)$. Summing up, if $\pi(e)$
intersects two classes of $\ptn\pi s 0$, then
$\pi\worsex{s-1}\infty\pi-e$. The case $t=0$ is settled.
Having proved~\eqref{eq:1}, let us now show that
\begin{equation*}
\pi\worsex i j\pi-e.
\end{equation*}
By the assumption of the lemma, $\pi(e)$ intersects two classes of
$\ptn\pi i j$. We note that $j < \infty$ and consider two
possibilities: $j = 0$ and $j > 0$ ($j$ finite). If $j = 0$, then we
have seen in the above paragraph that if $\pi(e)$ intersects two
classes of $\ptn\pi s 0$ (for any $s\leq i$), then $s=i$ and
$\pi\worsex{s-1}\infty\pi-e$. In particular, $\pi\worsex i j\pi-e$.
Suppose now that $j > 0$ is finite. Without loss of generality,
$\pi\eqx i {j-1}\pi-e$. Since $\pi(e)$ intersects two classes of
$\ptn\pi i j$, we see from the definition of the sequence of $\pi$
that $j$ is even. Hence, the vertices of $\pi(e)$ lie in different
anticomponents of $\pi$ on $X$, and $\pi-e$ is anticonnected on the
union of these anticomponents by Lemma~\ref{l:union}. Consequently,
$\ptn\pi i j < \ptn{\pi-e} i j$ and $\pi\worsex i j \pi-e$. This
concludes the proof for condition (a).
A very similar inductive proof to that used to prove~\eqref{eq:1}
works when condition (b) is satisfied. The main difference is that
in the $t = \infty$ case, Claim~\ref{cl:no-wr} now holds for a
different reason, namely that $\ptn\pi\infty\infty$ is
$\pi$-skeletal, which means that $\pi$ never stops --- consequently,
there is no weakly redundant leading hyperedge of a quasicycle in
$\overline{\pi/\ptn\pi{s-1}\infty}$.
Another difference is that the case discussed in the last paragraph
of the proof of~\eqref{eq:1} cannot occur if condition (b) is
satisfied, which makes the proof for condition (b) somewhat shorter.
\end{proof}
We can now proceed to the proof of Theorem~\ref{t:enhancing}.
\begin{proof}[Proof of Theorem~\ref{t:enhancing}]
Let $\QQ=\ptn\pi\infty\infty$. We distinguish the following cases.
\begin{xcase}{$\QQ$ is not $\pi$-skeletal.}%
By the construction, it is clear that $\QQ$ is $\pi$-solid. Thus,
$\overline{\pi/\QQ}$ contains a quasicycle. Consider the least $s$
such that $\ptn\pi s \infty = \QQ$. Since $\pi$ stops at
$\ptn\pi s \infty$, there is a quasicycle $\gamma$ in
$\overline{\pi/\QQ}$ and a leading hyperedge $e$ of $\gamma$ such
that $e$ is weakly redundant. Let the exposure step for $\gamma$
be $(i,j)$, where $0\leq i,j < \infty$. We put
$\PP = \ptn\pi i j$, and let $X$ be the class of the predecessor
of $\ptn\pi i j$ containing both vertices of
$\gamma(e)$. Furthermore, let $Q_1,Q_2\in\QQ$ be the two vertices
of $\gamma(e)$, and let $P_k\in\PP$ be such that
$P_k\supseteq Q_k$ ($k = 1,2$).
\begin{xsubcase}{$e$ is not used by $\pi$.}%
First, $j$ is odd or $j = 0$; otherwise, $P_1,P_2$ would be
anticomponents of $\pi$ on $X$, but Lemma~\ref{l:union} shows
that $\pi$ is anticonnected on $P_1\cup P_2$, which would be a
contradiction.
Consider first the case that $j$ is odd, so $P_1$ and $P_2$ are
components of $\pi$ on $X$. Moreover, suppose for now that
$j > 1$. By the construction of the sequence for $\pi$, $\pi$ is
anticonnected on $X$. Applying Lemma~\ref{l:qc-addition} (with
the current values of $X$, $Q$, $\gamma$ and $e$), we obtain
vertices $u,v$ such that $u\in Q_1$, $v\in Q_2$ and
$\pi + (uv)_e$ is anticonnected on $X$. Lemma~\ref{l:stable}
then implies that $\pi + (uv)_e \sqsupset \pi$ and we are
done.
Suppose that $j = 1$. If $i=0$ or the decisive hyperedge
$d^\pi_{i-1}$ at $(i-1,\infty)$ is not used by $\pi$, then the
above argument works, since in this case $\pi$ is anticonnected
on $X$. The other case ($i > 0$ and $d^\pi_{i-1}$ is used by $\pi$) is
excluded by Observation~\ref{obs:exposed}.
This settles the case $j=1$ and more broadly the case that $j$
is odd.
It remains to consider the possibility that $j = 0$. Clearly,
$i > 0$ as $\ptn\pi00 = \Setx{V(H)}$. The predecessor of
$\ptn\pi i 0$ is $\ptn\pi{i-1}\infty$, which is
$\pi$-solid. Thus, $\pi$ is anticonnected on $X$ and the
argument used for odd $j>1$ applies.%
\end{xsubcase}
\begin{xsubcase}{$e$ is used by $\pi$.}%
Since $e$ is a leading hyperedge of a quasicycle in
$\overline{\pi/\QQ}$, $\pi(e)$ is a subset of some
$Q\in\QQ$. Since $e$ is weakly redundant with respect to $\QQ$,
it is not a $Q$-bridge --- in other words, $\pi-e$ is connected
on $Q$. Lemma~\ref{l:stable-cycle} implies that
$\pi-e \sqsupseteq \pi$. Since $\pi-e$ uses fewer hyperedges
than $\pi$, it has the desired properties.
\end{xsubcase}%
\end{xcase}
\begin{xcase}{$\pi$ is not acyclic.}%
Suppose that there is a cycle $C$ in the graph
$\pi^*$. That means that either $\pi/\QQ$ is not acyclic, or there
is a cycle in the induced subgraph of $\pi^*$ on some $Q\in\QQ$.
\begin{xsubcase}{The quasigraph $\pi/\QQ$ is not acyclic.}%
Let $C$ be a cycle in $(\pi/\QQ)^*$ and suppose its exposure
step is $(i,j)$, where $0 \leq i,j < \infty$. Let $e$ be a
leading hyperedge of $C$. Note that each of the vertices of
$\pi(e)$ is in a different class of the partition $\ptn\pi i j$,
but both are contained in the same class $X$ of its predecessor.
Since $e$ is contained in the cycle $C$ all of whose vertices
are subsets of $X$, $\pi-e$ is connected on $X$. Thus, the
assumptions of Lemma~\ref{l:stable-cycle} are satisfied (we use
the current values of $e$ and $X$). It follows that
$\pi-e\sqsupseteq \pi$; since $\sizee{\pi-e} < \sizee\pi$,
$\pi-e$ has the desired properties.
\end{xsubcase}
\begin{xsubcase}{There is $Q\in\QQ$ such that the induced subgraph
of $\pi^*$ on $Q$ contains a cycle.}%
Let $C$ be a cycle in the induced subgraph of $\pi^*$ on $Q$, and
let $e$ be a hyperedge such that $\pi(e)$ is an edge of
$C$. Clearly, $\pi-e$ is connected on $Q$. By
Lemma~\ref{l:stable-cycle}, $\pi-e\sqsupseteq\pi$. Since $\pi-e$
uses fewer hyperedges than $\pi$ does, we are done.%
\end{xsubcase}%
\end{xcase}
\end{proof}
\section{Removing bad leaves}
\label{sec:bad}
For the purposes of the application of the Skeletal Lemma
in~\cite{KV-ess-9}, we have to prove the lemma in a stronger form
(Theorem~\ref{t:enhancing}) allowing us to deal with a certain
configuration that is problematic for the analysis in~\cite{KV-ess-9},
namely a `bad leaf' in a quasigraph. As a result, we will be able to
exclude this configuration in Theorem~\ref{t:no-bad} (at the cost of
some local modifications to the hypergraph).
Let $H$ be a 3-hypergraph and let $\pi$ be an acyclic quasigraph in
$H$. In each component of the graph $\pi^*$, we choose an arbitrary
root and orient all the edges of $\pi^*$ toward the root. A hyperedge
$e$ of $H$ is \emph{associated with} a vertex $u$ if it is used by
$\pi$ and $u$ is the tail of $\pi(e)$ in the resulting oriented
graph. Thus, every vertex has at most one associated hyperedge, and
conversely, each hyperedge is associated with at most one vertex.
A vertex $u$ of $H$ is a \emph{bad leaf} for $\pi$ if all of the
following hold:
\begin{enumerate}[\quad(i)]
\item $u$ is a leaf of $\pi^*$,
\item $u$ is incident with exactly three hyperedges, exactly one of
which has size 3 (say, $e$), and
\item $e$ is associated with $u$.
\end{enumerate}
\begin{figure}
\centering
\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}
\sfig1{}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\sfig2{}
\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}
\caption{(a) A bad leaf $u$. (b) The result of the switch at $u$. }
\label{fig:bad}
\end{figure}
To eliminate a bad leaf $u$, we use a \emph{switch} operation
illustrated in Figure~\ref{fig:bad}(b). Suppose that $u$ is incident
with hyperedges $ua$, $ub$ and $ucd$, where $ucd$ is associated with
$u$, and $\pi(ucd) = uc$. We remove from $H$ the hyperedges $ua$, $ub$
and $ucd$ and add the hyperedges $uab$, $uc$ and $ud$; the resulting
hypergraph is denoted by $H^{(u)}$. We say that a hypergraph $\tilde
H$ is \emph{related} to $H$ if it can be obtained from $H$ by a finite
series of switch operations.
With $\pi$ as above, a quasigraph $\pi^{(u)}$ in $H^{(u)}$ is obtained
by setting $\pi^{(u)}(uc) = uc$, and leaving both $ud$ and $uab$
unused. Observe that $(\pi^{(u)})^* = \pi^*$ (since $u$ is a leaf of
$\pi^*$, neither $ua$ nor $ub$ is used by $\pi$, and $\pi^{(u)}$
agrees with $\pi$ on all hyperedges not incident with $u$), and
$\pi^{(u)}$ has fewer bad leaves than $\pi$.
A problem we have to address is that a partition $\PP$ which is
$\pi$-skeletal in $H$ need no longer be $\pi^{(u)}$-skeletal in
$H^{(u)}$, since the switch may create an unwanted cycle in
$\overline{\pi^{(u)}/\PP}$. This is illustrated in
Figure~\ref{fig:switch-cycle}. The following paragraphs describe the
steps taken to resolve this problem. First, we extend the order
$\sqsubseteq$ defined on quasigraphs in $H$ to the set of all
quasigraphs in hypergraphs related to $H$. Since all such hypergraphs
have the same vertex set, we can readily compare the partitions of
their vertex sets.
\begin{figure}
\centering\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig{12}}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig{13}}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}
\caption{(a) A quasigraph $\pi$ in $H$ such that the complement of
$\pi$ is acyclic. The quasigraph obtained by a switch at the
vertex $u$ no longer has acyclic complement.}
\label{fig:switch-cycle}
\end{figure}
We have to be more careful, however, in the definition of the sequence
of $\pi$, where a linear ordering $\leq_E$ of hyperedges of $H$ is
used: this ordering should involve all hyperedges of hypergraphs
related to $H$. We define $\leq_E$ as follows. We fix a linear
ordering $\leq$ of $V(H)$. On the set of 3-hyperedges of hypergraphs
related to $H$, $\leq_E$ is the associated lexicographic ordering, and
the same holds for the set of 2-hyperedges of hypergraphs related to
$H$. Finally, we make each 2-hyperedge greater than any 3-hyperedge
with respect to $\leq_E$.
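For illustration, suppose $V(H) = \Setx{v_1,v_2,v_3,v_4}$ with
$v_1 < v_2 < v_3 < v_4$. Then
\begin{equation*}
  v_1v_2v_3 <_E v_1v_2v_4 <_E v_1v_3v_4 <_E v_2v_3v_4
  <_E v_1v_2 <_E v_1v_3 <_E \dots <_E v_3v_4,
\end{equation*}
since every 2-hyperedge is greater than every 3-hyperedge, and each
size class is ordered lexicographically.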
This allows for a definition of the sequence of $\pi$ consistent with
the switch operation. Furthermore, the definition of the
ordering $\sqsubseteq$ as given in Section~\ref{sec:sequence} is well
suited for our purpose, and remains without change.
Let us mention that although the incorporation of the decisive hyperedges in
the signature of a quasigraph may have seemed unnecessary (mainly
thanks to Lemma~\ref{l:qc}), the present section is the reason why we
chose this definition. In fact, the only situation in our arguments
when the comparison of the decisive hyperedges is relevant is
immediately after a switch, as in Lemma~\ref{l:stable-bad} below.
We first prove that switching a bad leaf of $\pi$ does not affect the
(anti)connecti\-vi\-ty of $\pi$ on a set of vertices.
\begin{lemma}
\label{l:sub-switch}
Let $\pi$ be a quasigraph in $H$ and $X\subseteq V(H)$. Suppose that
$\pi$ has a bad leaf $u$ and $\sigma$ is obtained from $\pi$ by
switching at $u$. The following holds:
\begin{enumerate}[\quad(i)]
\item if $\pi$ is anticonnected on $X$, then so is $\sigma$,
\item if $\pi$ is connected on $X$, then so is $\sigma$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (i). Suppose that $\pi$ is anticonnected on $X$, but the
quasigraph $\sigma$ (in a hypergraph $\tilde H$ related to $H$) is
not. The definition implies that there is a partition $\PP$ of $X$
such that for every hyperedge $f$ of $\tilde H$ crossing $\PP$,
$\sigma(f)$ crosses $\PP$. At the same time, there is a hyperedge $e$
of $H$ such that $e$ crosses $\PP$ but $\pi(e)$ does not. Clearly,
$e$ must be incident with $u$ (since the other hyperedges exist both
in $H$ and $\tilde H$, and the values of $\pi$ and $\sigma$ coincide).
Let the neighbours of $u$ in $\tilde H$ be labelled as in
Figure~\ref{fig:bad}(b). Let $A$ be the class of $\PP$ containing $u$;
by the above property of $\sigma$, we can easily see that $a\in A$ if
$a\in X$, and similarly for $b$ and $d$. This implies that
$c\in X-A$ and $e=ucd$, but then $\pi(ucd)$ crosses $\PP$, a
contradiction.
Part (ii) is immediate from the fact that $\pi^* = \sigma^*$.
\end{proof}
\begin{lemma}\label{l:stable-bad}
Let $\pi$ be a quasigraph in $H$ such that $\ptn\pi\infty\infty$ is
$\pi$-skeletal. If $\pi$ has a bad leaf $u$ and the quasigraph
$\sigma$ (in a hypergraph related to $H$) is obtained from $\pi$ by a
switch at $u$, then $\sigma \sqsupseteq \pi$.
\end{lemma}
\begin{proof}
We show that
\begin{equation}
\label{eq:stable-bad}
\text{$\pi \worseeqx i j
\sigma$ for all $(i,j) \geq (0,0)$.}
\end{equation}
We proceed by induction on $(i,j)$. The claim is trivial for
$(i,j)=(0,0)$. We may assume that $\pi\not\sqsubset\sigma$ for
otherwise we are done. Suppose thus that $j > 0$ and
$\pi\eqx i {j-1}\sigma$. If $j$ is odd, then the classes of
$\ptn\pi i j$ are the components of $\pi$ on classes of
$\ptn\pi i {j-1}$. Let $X\in\ptn\pi i {j-1}$ and let $A$ be a
component of $\pi$ on $X$. By Lemma~\ref{l:sub-switch}(ii), $\sigma$
is connected on $A$. It follows that $\ptn\pi i j\leq \ptn\sigma i j$
and~\eqref{eq:stable-bad} follows. An analogous argument, using
Lemma~\ref{l:sub-switch}(i), can be used for even $j > 0$.
Let us consider the case $j = \infty$. We assume without loss of
generality that $\pi\eqx i \infty\sigma$. Since $\ptn\pi\infty\infty$
is assumed to be $\pi$-skeletal, $\pi$ does not stop at
$(i,\infty)$. Furthermore, we may assume that $\pi$ does not
terminate at $(i,\infty)$, for otherwise we immediately conclude
$\pi\worseeqx i \infty\sigma$.
Let $\PP=\ptn\pi i\infty$ and let $\gamma$ be a quasicycle in the
complement of $\pi/\PP$ in $H/\PP$. Define a quasigraph $\gamma'$ in
the complement of $\sigma/\PP$ in $\tilde H/\PP$ as follows (see
Figure~\ref{fig:switch-cases} for an illustration of several of the
cases):
\begin{itemize}
\item if $\gamma$ uses a hyperedge $f/\PP$, where $f$ is a hyperedge
of $H$ not incident with $u$, then set $\gamma'(f/\PP) =
\gamma(f/\PP)$,
\item if $\gamma$ uses $au/\PP$ and $bu/\PP$, then set
$\gamma'(abu/\PP) = ab/\PP$,
\item if $\gamma$ uses $au/\PP$ but not $bu/\PP$, then set
$\gamma'(abu/\PP) = au/\PP$ (and symmetrically with $au$ and $bu$
reversed),
\item if $\gamma$ uses $ucd/\PP$ (so $u$ and $d$ are in different
classes of $\PP$), then set $\gamma'(ud/\PP) = ud/\PP$.
\end{itemize}
\begin{figure}
\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig{14}}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}\subfloat[]{\fig{15}}\hspace*{0pt}\hspace*{\fill}\hspace*{0pt}
\caption{Corresponding quasicycles in the complement of $\pi/\PP$
(in $H/\PP$) and in the complement of $\pi^{(u)}/\PP$ (in
$\tilde H/\PP$). The quasigraph $\pi$ is represented by bold
lines, the partition $\PP$ is shown in gray. (a) A quasicycle
$\gamma$ in the complement of $\pi/\PP$ (dotted). (b) The
corresponding quasicycle $\gamma'$ in $\pi^{(u)}/\PP$ (also
dotted).}
\label{fig:switch-cases}
\end{figure}
A look at Figures~\ref{fig:bad} and \ref{fig:switch-cases} shows
that $\gamma'$ is a quasicycle. Thus, $\sigma$ does not terminate at
$(i,\infty)$. We need to relate leading hyperedges of $\gamma$ to
those of $\gamma'$.
Any leading hyperedge of $\gamma$ that is not incident with $u$ is a
leading hyperedge of $\gamma'$, and vice versa. We assert that
neither $au$ nor $bu$ is a leading hyperedge of $\gamma$. If they
were, they would be redundant (since their size is $2$) and
$\ptn\pi\infty\infty$ would not be $\pi$-skeletal, contrary to the
assumption. Finally, if $ucd$ is a leading hyperedge of $\gamma$,
then $ud$ is a leading hyperedge of $\gamma'$. Note that
$ud >_E ucd$.
It follows that if $\sigma$ does not stop at $(i,\infty)$, then
$d^\pi_i \leq_E d^\sigma_i$. On the other hand, if it does stop, then
the same inequality holds by the definition of $\leq_E$. In both
cases, we have $\pi\worseeqx i\infty\sigma$.
The last possibility left to consider is $j=0$. We need to show that
$\ptn\pi i 0 \leq\ptn\sigma i 0$ under the assumption that
$\pi\eqx {i-1}\infty\sigma$. Let $\RR = \ptn\pi{i-1}\infty$ and let
$f := d^\pi_{i-1} = d^\sigma_{i-1}$. Since $\ptn\pi\infty\infty$ is
$\pi$-skeletal, $f\neq\text{\textsc{stop}}$. If $f = \text{\textsc{terminate}}$, then
$\pi\equiv\sigma$. We may thus assume that $f$ is a hyperedge, and in
that case it is not incident with $u$ (since $H$ and $\tilde H$ have
no common hyperedge incident with $u$). Thus, $\sigma-f$ is obtained
from $\pi-f$ by a switch at $u$. Similarly, if $f$ is not used by
$\pi$, then $\sigma$ is obtained from $\pi$ by a switch at $u$, in the
hypergraph $H-f$.
If $f$ is an $X$-antibridge with respect to $\pi$ for some
$X\in\RR$, we may use Lemma~\ref{l:sub-switch} in the hypergraph
$H-f$. We find that $\sigma$ is anticonnected on each anticomponent of
$\pi$ on $X$ in $H-f$, and hence $\pi\worseeqx i 0 \sigma$. On the
other hand, if $f$ is an $X$-bridge with respect to $\pi$, then
Lemma~\ref{l:sub-switch} implies that $\sigma-f$ is connected on each
component of $\pi-f$ on $X$ in $H$, and $\pi\worseeqx i 0 \sigma$
again. This proves~\eqref{eq:stable-bad} and the lemma follows.
\end{proof}
Let us now state the result we need to use in~\cite{KV-ess-9}.
\begin{theorem}\label{t:no-bad}
Let $H$ be a $3$-hypergraph. There exists a hypergraph $\tilde H$
related to $H$ and an acyclic quasigraph $\sigma$ in $\tilde H$ such
that $\sigma$ has no bad leaves and $V(\tilde H)$ admits a
$\sigma$-skeletal partition $\SSS$.
\end{theorem}
\begin{proof}
Let $\sigma$ be a quasigraph in a hypergraph $\tilde H$ related to $H$
chosen as follows:
\begin{enumerate}[(1)]
\item $\sigma$ is $\sqsubseteq$-maximal in the set of all quasigraphs
in 3-hypergraphs related to $H$,
\item subject to (1), $\sigma$ uses as few hyperedges as possible,
\item subject to (1) and (2), the number of bad leaves is as small
as possible.
\end{enumerate}
We define $\SSS := \ptn\sigma\infty\infty$, where the partition is
obtained via the plane sequence with respect to $\tilde H$. Note
that $\sigma$ is acyclic and $\SSS$ is $\sigma$-skeletal by
Theorem~\ref{t:enhancing} and the choice of $\sigma$.
It remains to prove that $\sigma$ has no bad leaves. Suppose to the
contrary that there is a bad leaf $u$ for $\sigma$. By
Lemma~\ref{l:stable-bad}, $\sw\sigma u \sqsupseteq \sigma$. Furthermore,
$\sw\sigma u$ uses the same number of hyperedges as $\sigma$, and has one bad
leaf fewer, a contradiction with the choice of $\sigma$.
\end{proof}
\section{Introduction}
\setcounter{equation}{0}
In spite of the quantitative successes of renormalized perturbation
theory, no nontrivial quantum field theory (QFT) in four spacetime
dimensions (4D) has been constructed rigorously. It is sometimes
suggested that the Wightman axioms might be too restrictive, but most
attempts at relaxing them lead to physically unacceptable consequences.
E.g., a violation of causality cannot be exponentially small
\cite{P}. Admitting indefinite (physical) Hilbert spaces not only
jeopardizes the statistical interpretation of correlations, but it
makes statements about convergence of approximations delicate if not
meaningless.
By contrast, the usual attitude is to strengthen the axioms, e.g., by
imposing additional symmetries or phase space or additivity properties.
This facilitates the analysis of models but clearly reduces the number of
theories. Even worse, demanding too much may trivialize the theory.
E.g., requiring the two-point function to be supported on a mass
shell \cite{JS}, or merely to have finite mass support \cite{G}, already
forces the field to be a (generalized) free field.
It is one of the merits of the axiomatic approach \cite{SW,FRS} that it
allows one to pinpoint such
obstructions even before a model is formulated. It makes no
assumptions about intermediate steps and limits through which a theory
is constructed. Referring only to its intrinsic features, it avoids
assigning significance to the artifacts of the description.
Especially, it guides navigation between Scylla (physically meaningless
theories) and Charybdis (free fields) in the quest for a proper set of
model assumptions that can possibly be satisfied by other than free fields.
\smallskip
In a series of recent papers \cite{NST,NT}, two of us
have suggested to demand ``global conformal invariance'' (GCI) which
is the postulate that the conformal group is represented in a true
(i.e., not a covering) representation. (The term ``global'' stresses
the fact that the conformal transformations are defined globally on
the Dirac compactification of Minkowski spacetime.)
GCI implies rationality of all correlation functions \cite{NT}. On
one side, this is a desired feature since it allows to
parameterize each correlation function by the coefficients of a finite
set of admitted rational structures. The latter are determined by
conformal invariance, and by the unitarity bound on the representations
of the conformal group that possibly contribute to operator product
expansions (OPE) giving rise to upper bounds on the pole in each
pair of variables. The coefficients are then further restricted only
by Hilbert space positivity. While the latter is highly nontrivial to
control in general, there has been considerable progress at the four-point
level \cite{DO,NRT1}.
On the other side, GCI is equivalent to Huygens locality \cite{NRT1},
i.e., commutativity of fields not only at spacelike but also at
timelike separation. This feature seems to be conspicuously close to
free field theory, since any scattering of field quanta should give rise to
causal propagation within the forward lightcone. Indeed, it has been
shown \cite{B} that if a Huygens local QFT has a complete particle
interpretation, then its scattering matrix is trivial. Therefore, GCI
allows only nontrivial theories without asymptotic completeness, which
may not be entirely unphysical in a scale invariant theory. Note that by
rationality, all scaling dimensions are integer numbers, which does
not mean that they are canonical. It is conceivable that in a QFT with
anomalous dimensions, at some value of the coupling constant all
dimensions simultaneously become integers. Examples of such a hazard
are well known in 2D conformal QFT \cite{BMT}.
\smallskip
We shall report here on a recent analysis \cite{NRT2} of the consequences
of the following fact. For a symmetric tensor field of rank $r$ and
dimension $d$, one calls $d-r$ the ``twist''. Twist two fields are
necessarily conserved, because their two-point functions, completely
determined by $r$ and $d$, are conserved. (Hilbert space positivity
crucially enters in this argument: if the norm square of a vector
vanishes, then the vector itself vanishes. The rest of the argument
invokes the Reeh-Schlieder theorem for which in turn locality
and energy positivity are essential.)
These conservation laws can be
reformulated in the form
\bea\label{bh}
\square_x V(x,y) = 0 = \square_y V(x,y)
\end{eqnarray}
where $V(x,y)$ is the sum of all twist two contributions to the OPE
$\phi_1(x)\phi_2(y)$ of two scalar fields of equal dimension $d$.
We say that the ``bi-field'' $V$ is biharmonic.
Our first main result is the unravelling of a hidden consequence of this
equation: a third-order partial differential equation to be satisfied
by (certain parts of) all correlation functions involving
$\phi_1(x)\phi_2(y)$, which is a necessary and sufficient condition for
biharmonicity.
We present a solution to this PDE, and the corresponding
transcendental six-point correlation function of $V(x,y)$ that cannot
be produced by Wick products of free fields. On the basis of this
solution, which we believe to be prototypical for the general
case, we then study the locality properties of the bi-field.
\section{Biharmonicity}
\setcounter{equation}{0}
We first explain how harmonicity of $V(x,y)$ serves to define its
correlation functions.
Because $V(x,y)$ is biharmonic, there are
in fact two such prescriptions, and hence $V(x,y)$ is
overdetermined. The consistency condition gives rise to a restriction
on the correlation functions involving $\phi_1(x)\phi_2(y)$.
\smallskip
The crucial fact is the following ``classical'' result \cite{BT}. If
$u(x)$ is a power series in $x\in{\mathbb R}^D$ or $\CC^D$, then there is a
unique ``harmonic decomposition''
\bea\label{hd}
u(x) = v(x) + x^2\hat u(x),
\end{eqnarray}
such that $v$ is harmonic ($\square_x v=0$) and $\hat u$ is again a
power series. We call $v$ the ``harmonic part'' of $u$. (Questions
of convergence will be discussed in Sect.\ 5.)
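As a simple illustration, take $u(x) = x_1^2$ in $D$ dimensions; the
decomposition (\ref{hd}) then reads
\bea
x_1^2 = \Big(x_1^2 - \frac{x^2}D\Big) + x^2\cdot \frac 1D\,,\nonumber
\end{eqnarray}
with harmonic part $v(x) = x_1^2 - x^2/D$ (indeed, $\square_x v = 2 -
2D/D = 0$) and $\hat u(x) = 1/D$.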
We apply this fact as follows.
Twist two contributions to correlations
involving $\phi_1(x)\phi_2(y)$ have a leading singularity
$((x-y)^2)^{1-d}$, while all higher twist contributions are less
singular. Therefore, the (Huygens bilocal, but not conformally
covariant) bi-field $U(x,y)$ defined by
\bea\label{U}
\phi_1(x)\phi_2(y) - \vac{\phi_1(x)\phi_2(y)} =:
((x-y)^2)^{-(d-1)}\cdot U(x,y)
\end{eqnarray}
is regular at $(x-y)^2=0$. $U(x,y)$ contains all twist $\geq$ two
contributions to the OPE, and because the twist $>$ two contributions
are suppressed by another factor $(x-y)^2$, we may write
\bea\label{Uhd}
U(x,y) = V(x,y) + (x-y)^2\; \hat U(x,y)
\end{eqnarray}
where both $V$ and $\hat U$ are regular at $(x-y)^2=0$. Now consider
any correlation function
\bea\label{Ucorr}
u(x,y,\dots) = \vac{U(x,y)\phi_3(x_3)\cdots\phi_n(x_n)},
\end{eqnarray}
where $\dots$ stands for the arguments of all other fields.
Its Taylor expansion in $x$ around $y$ is a
power series in $x-y$ with coefficients independent of $x$. Thus, by
(\ref{Uhd}) and because $\square_xV(x,y)=0$, the desired
correlation function (again as a power series)
\bea\label{Vcorr}
v(x,y,\dots) = \vac{V(x,y)\phi_3(x_3)\cdots\phi_n(x_n)}
\end{eqnarray}
is the harmonic part of this series. By construction,
$v(x,y,\dots)$ transforms like the correlation function of a conformal
bi-scalar of dimension $(1,1)$.
On the other hand, the Taylor expansion of $u(x,y,\dots)$ in $y$
around $x$ is another power series in $x-y$, whose coefficients do not
depend on $y$, and because also $\square_yV(x,y)=0$, $v(x,y,\dots)$
may as well be determined as the harmonic part of this latter series.
This overdetermination imposes a consistency condition on the function
$u(x,y,\dots)$. Its nontriviality may be seen from the following
example. Consider $u(x,y,\dots) =
(y-x_6)^2/\big[(x-x_3)^2(y-x_4)^2(y-x_5)^2\big]$. This function
is harmonic with respect to $x$, hence its harmonic part with the first
prescription coincides with $u$ itself; but it is not harmonic with
respect to $y$, so the harmonic part obtained with the second
prescription differs from $u$, and the two definitions
of $v$ conflict with each other. Thus, a function $u(x,y,\dots)$
as in this example cannot occur as a correlation function of $U(x,y)$.
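(Here we use the standard fact that $((x-a)^2)^{-1}$ is, in $D=4$, the
massless Green function,
\bea
\square_x \, \frac 1{(x-a)^2} = 0 \qquad\hbox{for}\quad (x-a)^2\neq 0\,,\nonumber
\end{eqnarray}
so the above $u$, which depends on $x$ only through the factor
$1/(x-x_3)^2$, is indeed harmonic in $x$.)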
We conclude that biharmonicity of the bi-field $V(x,y)$, which
follows from the conservation of conformal twist two tensor fields,
implies a nontrivial restriction on the possible correlation functions
$u(x,y,\dots)$ of the bi-field $U(x,y)$, and hence on the
correlations involving $\phi_1(x)\phi_2(y)$.
\smallskip
We shall now turn this condition into a partial differential equation.
Global conformal invariance implies that scalar correlation functions
are Laurent polynomials in the variables $\rho_{ij} = \rho_{ji} =
(x_i-x_j)^2$ of the form
\bea\label{corr}
\vac{\phi_1(x_1)\cdots\phi_n(x_n)}
= \sum\nolimits_{\underline\mu} C_{\underline\mu}\;
\prod\nolimits_{i<j}\rho_{ij}^{\mu_{ij}}
\end{eqnarray}
where the integer powers $\mu_{ij} = \mu_{ji}$ satisfy the homogeneity rules
\bea\label{sum}
\sum\nolimits_j \mu_{ij} = -d_i,
\end{eqnarray}
and the absence of non-unitary representations of the conformal group
in the OPE implies the lower bound for the connected parts of (\ref{corr})
\bea\label{pb}
2\mu_{ij} \geq -d_i-d_j.
\end{eqnarray}
Let $\phi_1$ and $\phi_2$ have the same dimension $d$. It follows
that all correlations (\ref{Ucorr})
(that give contributions to (\ref{corr})) are Laurent polynomials in
$\rho_{ij}$, which are separately homogeneous of total degree $-1$ in
$\rho_{1k}$ ($k\neq 1$) and in $\rho_{2k}$ ($k\neq 2$), and which are
true polynomials in $\rho_{12}$. Because all terms involving a factor
$\rho_{12}$ have zero harmonic part in the harmonic decompositions, we
need to consider only the function $u_0$, the contribution of
order $(\rho_{12})^0$ to $u$. Then $u_0$ is separately homogeneous of
total degree $-1$ in $\rho_{1k}$ ($k>2$) and in $\rho_{2k}$ ($k> 2$).
It is now important that the harmonic part $v$ is a real analytic
function in a neighborhood of $x_1=x_2$, provided $(x_2-x_j)^2\neq 0$
for all $j>2$ (see Sect.\ 5). We may therefore expand
$v$ as a power series $\sum_{n=0}^\infty h_n/n! \cdot
\rho_{12}^n$. The coefficients $h_n$ are functions of all the
remaining variables $\rho_{ij}\neq \rho_{12}$, and are separately
homogeneous of total degree $-n-1$ in $\rho_{1k}$ ($k>2$) and in
$\rho_{2k}$ ($k> 2$).
Let us write
$\partial_{jk}=\partial_{kj}=\frac{\partial}{\partial\rho_{jk}}$. Then
the wave operator $\square_{x_1}$ has the form
\bea\label{wave}
\square_{x_1} =-4\Big(\sum\nolimits_{2\leq j<k\leq n}\rho_{jk}\partial_{1j}\partial_{1k}\Big) = -4(D_1 + E_1\,\partial_{12})
\end{eqnarray}
valid on homogeneous functions of total degree $-1$ in $\rho_{1k}$
($k\neq 1$) \cite{NRT1}, where
\bea\label{ED}
D_1 =
\sum\nolimits_{3\leq j<k\leq n}\rho_{jk}\partial_{1j}\partial_{1k}
\qquad\hbox{and}\qquad E_1=\sum\nolimits_{3\leq i}\rho_{2i}\partial_{1i}.
\end{eqnarray}
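For the reader's convenience, let us sketch the elementary computation
behind (\ref{wave}). Using $\partial_{x_1^\mu}\rho_{1j} =
2(x_1-x_j)_\mu$ and $2\,(x_1-x_j)\cdot(x_1-x_k) =
\rho_{1j}+\rho_{1k}-\rho_{jk}$, the chain rule gives, for any function
$F$ of the variables $\rho_{1j}$ ($j\neq 1$),
\bea
\square_{x_1}F = 4\sum\nolimits_{j,k\neq 1}\rho_{1j}\,\partial_{1j}\partial_{1k}F
- 2\sum\nolimits_{j,k\neq 1}\rho_{jk}\,\partial_{1j}\partial_{1k}F
+ 2D\sum\nolimits_{j\neq 1}\partial_{1j}F\,.\nonumber
\end{eqnarray}
If $F$ is homogeneous of total degree $-1$ in the $\rho_{1j}$, Euler's
identity yields $\sum_j \rho_{1j}\partial_{1j}\partial_{1k}F =
-2\,\partial_{1k}F$, so that for $D=4$ the first and the last term
cancel, leaving precisely (\ref{wave}).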
Similarly, replacing $1\leftrightarrow 2$ everywhere, one represents
$\square_{x_2} = -4(D_2 + E_2\,\partial_{12})$. Thus, the two
conditions $\square_{x_1} f = 0 =\square_{x_2} f$ give rise to two
recursive systems of partial differential equations for the
coefficient functions $h_n$ of the form
\bea\label{rec12}
E_1 \, h_{n+1} = - D_1 \, h_n \qquad\hbox{and}\qquad
E_2 \, h_{n+1} = - D_2 \, h_n.
\end{eqnarray}
At $n=0$ we obtain the integrability condition
\bea\label{int} (E_1D_2 - E_2D_1)\, h_0= (E_2E_1-E_1E_2) \, h_1.
\end{eqnarray}
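Here the commutator is computed directly from (\ref{ED}),
\bea
E_2E_1 - E_1E_2 = \sum\nolimits_{3\leq i}
\big(\rho_{1i}\partial_{1i} - \rho_{2i}\partial_{2i}\big)\,,\nonumber
\end{eqnarray}
which, by Euler's identity, measures the difference of the homogeneity
degrees in the two sets of variables $\rho_{1k}$ and $\rho_{2k}$.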
Because $h_1$ is separately homogeneous of total degree
$-2$ in $\rho_{1k}$ ($k\geq 3$) and in
$\rho_{2k}$ ($k\geq 3$), the commutator $(E_2E_1-E_1E_2)$
vanishes on $h_1$. Since $v$ is the harmonic part of $u$, its leading
term $h_0$ equals $u_0$. Thus, we arrive at
\smallskip
\noindent \R{1}{The function $u_0$ solves the third-order partial
differential equation}
\bea\label{pde} (E_1D_2 - E_2D_1)\, u_0= 0.
\end{eqnarray}
Next, when (\ref{pde}) holds and the recursion is solved for $h_1$,
one has $(D_1E_2-D_2E_1)h_1=-(D_1D_2-D_2D_1)h_0=0$ because $D_1$ and
$D_2$ commute. But $D_1E_2-D_2E_1 = E_2D_1-E_1D_2$. Therefore, the
integrability condition for the next step of the recursion is
automatically satisfied, and the argument passes to all the higher
steps. Thus, (\ref{pde}) secures solvability of the entire recursive
systems (\ref{rec12}).
\section{Consequences}
\setcounter{equation}{0}
One might worry how there can be a new universal (model-independent)
partial differential equation for the correlation functions. It is
important to notice that this PDE cannot be regarded as some
``equation of motion''. The reason is that it is satisfied only by the
leading part $u_0$ of $u$. The splitting of $u$ into $u_0$ plus a
remainder does not correspond to any local decomposition of the
bi-field $U(x,y)$. Thus, the PDE (\ref{pde}) cannot be formulated as a
differential equation for some (bi-)fields in the theory.
Instead, it should be understood as a kinematical constraint. Because
its solutions $u_0$ must at the same time be Laurent polynomials, the
PDE rather selects a (finite) set of admissible singularity
structures, that depends on the dimensions of the scalar fields
involved through the lower bounds on $\mu_{ij}$.
\smallskip
Indeed, we have shown in \cite{NRT2} that the PDE (\ref{pde}) implies
the following constraint on the pole structure of a Laurent polynomial
$u_0$ in $\rho_{1k}$ and $\rho_{2k}$ ($k>2$), that is homogeneous of
degree $-1$ in both sets of variables separately: Suppose $u_0$
contains a monomial
\bea\label{mon}
\prod\nolimits_{k>2} \rho_{1k}^{\mu_{1k}} \rho_{2k}^{\mu_{2k}} \times
\hbox{other factors}
\end{eqnarray}
where the other factors depend only on $\rho_{kl}$ ($k,l>2$). If there
are $i\neq j$ such that $\mu_{1i}<0$ and $\mu_{1j}<0$ (a ``double
pole'' in $x_1$), then one must have $\mu_{2k}\geq 0$ for all $k>2$,
$k\neq i,j$. In particular, this excludes ``triple poles'', because a
triple pole in $x_1$ would imply that {\em all} $\mu_{2k}\geq 0$,
contradicting homogeneity. The most involved possible pole structure
of $u_0$ is therefore of the form
\bea\label{pole}
\frac{\hbox{polynomial}}{\rho_{1i}^p\rho_{1j}^q\rho_{2i}^r\rho_{2j}^s}
\times \hbox{other factors}
\end{eqnarray}
where the polynomial takes care of the proper homogeneity.
The corresponding contribution to the connected correlations involving
$\phi_1(x)\phi_2(y)$ is therefore
\bea\label{cont}
\frac 1{\rho_{12}^{d-1}}
\frac{\hbox{polynomial}}{\rho_{1i}^p\rho_{1j}^q\rho_{2i}^r\rho_{2j}^s}
\times \hbox{other factors}.
\end{eqnarray}
Note that no constraints arise on higher twist contributions
($\mu_{12} > 1-d$) or on poles in the OPE of two fields with $d_1\neq d_2$.
\smallskip
The interest in double poles is due to the fact that twist two bi-fields
made of free fields, such as $\wick{\varphi(x)\varphi(y)}$ or
$(x-y)^\mu\wick{\bar\psi(x)\gamma_\mu\psi(y)}$ are always Wick
bilinears, so that their correlation functions can never contain a
double pole. A nontrivial double pole solution (an example will be
displayed below) is therefore a candidate for a Huygens local QFT not
generated by free fields.
We have also shown in \cite{NRT2} that
\smallskip
\noindent \R2{A correlation function involving
$V(x_1,x_2)$, i.e., the harmonic part of the Laurent polynomial
$u_0(x_1,x_2,\dots)$, is again a Laurent polynomial if and only if
$u_0$ does not contain a double pole in $x_1$ or $x_2$. }
\smallskip
Four-point functions $\langle U(x_1,x_2)\phi_3(x_3)\phi_4(x_4)\rangle$
can never exhibit double poles in $x_1$ or $x_2$, just ``by lack of
independent variables''. Therefore, four-point functions of
twist two bi-fields are always rational. From this one can deduce
that their partial wave expansion cannot terminate after finitely many
terms, i.e., the OPE of $\phi_1(x)\phi_2(y)$ must contain infinitely
many conserved tensor fields.
\smallskip
If all fields are scalars of dimension $2$, hence $\mu_{ij}\geq -1$,
double poles cannot occur in any $n$-point function subject to the
cluster decay property. We have exploited this fact in \cite{NRT2} to
prove that scalar fields $\phi$ of dimension $2$ are always Wick
products of the form $\sum M_{ij}\,\wick{\varphi_i\varphi_j}(x)$ of
massless free fields.
In this argument, Hilbert space positivity plays a crucial role
because one has to solve a moment problem in order to get the correct
coefficients for all $n$-point functions simultaneously. (When we do
not insist that the theory possesses a stress-energy tensor with a
finite two-point function, then the fields $\phi$ may also have
contributions of generalized free fields.)
The simple pole structure of correlation functions of dimension 2
fields can be converted into commutation relations of the twist two
biharmonic fields occurring in their OPE. The result is an
infinite-dimensional Lie algebra, whose unitary positive-energy
representations can be studied with methods of highest weight
modules. It turns out that there are no other representations than
those induced by the free field construction \cite{BNRT}.
\section{An example with double poles}
\setcounter{equation}{0}
The following six-point structure solves the PDE (\ref{pde}) both
in the variables $x_1,x_2$ and in the variables $x_5,x_6$:
\bea\label{dp6}
u(x_1,\dots,x_6) =
\frac{\left(\rho_{15}\rho_{26}\rho_{34} - 2\rho_{15}\rho_{23}\rho_{46}
- 2\rho_{15}\rho_{24}\rho_{36}\right)_{[1,2][5,6]}}
{\rho_{13}\rho_{14}\rho_{23}\rho_{24}\cdot
\rho_{34}^{d'-3}\cdot \rho_{35}\rho_{45}\rho_{36}\rho_{46}}\;,
\end{eqnarray}
where $(\dots)_{[i,j]}$ stands for the antisymmetrization in the arguments
$x_i$, $x_j$, and $\rho_{ij} = (x_i-x_j)^2$ as before. This structure
in addition obeys all homogeneity rules (\ref{sum}), pole bounds
(\ref{pb}) and cluster conditions in order to qualify as (a
contribution to) the correlation function
\bea\label{U6}
\langle U(x_1,x_2)\phi'(x_3)\phi'(x_4)U(x_5,x_6)\rangle
\end{eqnarray}
where the scalar field $\phi'$ has dimension $d'$. The multiple
poles in the variables $x_3,x_4$ do not contradict the previous
argument (Sect.\ 3) excluding triple poles in the twist two ``channel'',
when either $d$ (the dimension of the fields $\phi_1,\phi_2$ in (\ref{U}),
generating $U$) or $d'$ is $>2$, because they don't arise in a channel
of twist two ($1/\rho_{34}^{d'-3}$ is twist six, and $1/\rho_{3i}$ and
$1/\rho_{4i}$ are twist two only if $d=d'=2$).
\smallskip
We determined the corresponding (contribution to the) correlation
\bea\label{V6}
\vac{ V(x_1,x_2)\phi'(x_3)\phi'(x_4)V(x_5,x_6)},
\end{eqnarray}
$v(x_1,\dots,x_6)$, as the (simultaneous) harmonic part(s) of
$u(x_1,\dots,x_6)$. Let
\bea\label{ccr}
s=\frac{\rho_{12}\rho_{34}}{\rho_{13}\rho_{24}}, \qquad
t=\frac{\rho_{14}\rho_{23}}{\rho_{13}\rho_{24}}
\end{eqnarray}
denote the conformal cross ratios, and $s'$ and $t'$ the same with
$1,2$ replaced by $5,6$.
Then
\bea\label{v6}
v(x_1,\dots,x_6)=u(x_1,\dots,x_6)\cdot g(s,t)\,g(s',t') +
\hspace{20mm} \nonumber \\ +
\frac{2\left(\rho_{13}\rho_{24}\cdot\rho_{35}\rho_{46}\right)_{[1,2][5,6]}}
{\rho_{13}\rho_{14}\rho_{23}\rho_{24}\cdot
\rho_{34}^{d'-2}\cdot \rho_{35}\rho_{45}\rho_{36}\rho_{46}}\cdot
(1-g(s,t)\,g(s',t'))
\end{eqnarray}
has the required power series expansion $u(x_1,\dots,x_6) +
O(\rho_{12},\rho_{56})$ provided $g(s,t)$ is of the form
$g(s,t) = \sum_{n\geq 0} g_n(t)/n!\cdot s^n$ with $g_0(t)=1$, and it is
harmonic in all four variables $x_1,x_2,x_5,x_6$ provided $g$ solves
the PDE
\bea\label{diffg}
\Big((1-t\partial_t)(1+t\partial_t+s\partial_s) -
[(1-t\partial_t)+t(2+t\partial_t+s\partial_s)]\partial_s\Big)\, g=0.
\end{eqnarray}
The solution is
\bea\label{g}
g(s,t) &=& \frac 1{s}\cdot\Big[Li_2(u)+Li_2(v) - Li_2(u+v-uv)\Big] + \\
&+& \frac ts \cdot\left[Li_2\left(\frac {-u}{1-u}\right)
+ Li_2\left(\frac {-v}{1-v}\right)
- Li_2\left(\frac{uv-u-v}{(1-u)(1-v)}\right)\right],\nonumber
\end{eqnarray}
where $u$ and $v$ (apologies for the duplicate use of letters!) here
stand for the ``chiral'' variables defined by the algebraic equations
\bea\label{uv}
s=uv\qquad\hbox{and}\qquad t = (1-u)(1-v).
\end{eqnarray}
$Li_2$ is the dilogarithmic function
defined by analytic continuation of its integral or power series
representations ($0\leq x < 1$)
\bea\label{dilog}
Li_2(x) = -\int_0^x\frac{\log(1-t)}t\,dt = \sum_{n>0}\frac{x^n}{n^2}.
\end{eqnarray}
Notice that $g$ is regular at $s=0$ in spite of the prefactors $\sim
1/s$. This transcendental correlation function can
definitely not be produced by free fields.
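As a consistency check, one may verify the normalization $g(0,t) =
g_0(t) = 1$ directly: the limit $s\to 0$ at fixed $t$ corresponds to
$u\to 0$, $v\to 1-t$, and expanding the dilogarithms in (\ref{g}) to
first order in $u$ (using $Li_2'(z) = -\log(1-z)/z$) gives
\bea
g(0,t) = \frac 1{v^2}\Big[v + (1-v)\log(1-v)\Big]
- \frac t{v^2}\Big[v + \log(1-v)\Big]\,\Big\vert_{v=1-t} = 1\,.\nonumber
\end{eqnarray}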
It was found by turning the differential equation
(\ref{diffg}) into the recursive system
\bea\label{recg}
(1+(n+1)t-t(1-t)\partial_t)g_{n} =
(1-t\partial_t)(n+t\partial_t)g_{n-1}
\end{eqnarray}
with $g_0(t)=1$,
and resumming the solution
\bea\label{gn}
\frac{g_n(t)}{n!} = \frac{n!(n+1)!}{(2n+1)!}\cdot{}_2F_1(n,n+1;2n+2;1-t)
\end{eqnarray}
by exploiting the integral representation of hypergeometric
functions.
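As a quick check of (\ref{gn}) at $n=1$: the recursion (\ref{recg})
demands $(1+2t-t(1-t)\partial_t)\,g_1 = (1-t\partial_t)(1+t\partial_t)\,g_0
= 1$, while (\ref{gn}) gives $g_1(t) = \frac 13\,{}_2F_1(1,2;4;1-t)$.
Writing $z=1-t$ and $F(z) = {}_2F_1(1,2;4;z)$, the hypergeometric
differential equation implies
\bea
\frac d{dz}\Big[(3-2z)F + z(1-z)F'\Big] = z(1-z)F'' + (4-4z)F' - 2F = 0\,,\nonumber
\end{eqnarray}
whence $(3-2z)F + z(1-z)F' \equiv 3F(0) = 3$, which is precisely the
required identity $(1+2t)g_1 - t(1-t)\partial_t g_1 = 1$.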
\section{Local commutativity}
\setcounter{equation}{0}
We shall now discuss the issue of local commutativity of the bi-field
$V(x,y)$. The naive argument would go as follows: since $U(x,y)$ is
Huygens bilocal in the sense of local commutativity for spacelike or
timelike separation from $x$ and $y$, the correlation functions
\bea\label{uk}
u_k(x,y,\dots) = \vac{\phi_3(x_3)\dots\phi_k(x_k) U(x,y)
\phi_{k+1}(x_{k+1})\dots\phi_n(x_n) }
\end{eqnarray}
are independent of the position $k$ where $U(x,y)$ is inserted. By the
uniqueness of the harmonic decomposition, the same should be true for
their harmonic parts
\bea\label{vk}
v_k(x,y,\dots) = \vac{\phi_3(x_3)\dots\phi_k(x_k) V(x,y)
\phi_{k+1}(x_{k+1})\dots\phi_n(x_n) },
\end{eqnarray}
hence $V(x,y)$ commutes with $\phi_k(x_k)$.
However, this argument is not correct because of convergence problems
of the power series. The transcendentality of the correlation function
(\ref{v6}) shows that $V(x,y)$ in this case is certainly not a Huygens bilocal
field, which must have rational correlation functions by the same
argument \cite{NT} as for Huygens local fields. On the other hand,
the Result 2 in Sect.\ 3 yields a necessary and sufficient condition
(obviously violated by (\ref{dp6})):
\smallskip
\noindent
\R3{$V(x,y)$ is Huygens bilocal if and only if the coefficients of
the twist two pole $((x-y)^2)^{-(d-1)}$ in every correlation involving
$\phi(x)\phi(y)$ (i.e., the leading parts $u_0$ of the correlations
of $U(x,y)$) never exhibit ``double poles'' in the variables $x$ or
$y$ (as explained in Sect.\ 3). }
\smallskip
In general, i.e., when there are double poles, $V(x,y)$ is originally
only defined as a formal power series (in $x-y$) within each
correlation function. Even when these series converge, it is not a
priori clear what the labelling pair of points $x,y$ has to do with
its localization in the sense of local commutativity with other
fields, because splitting the OPE into pieces is a highly nonlocal
operation (involving projections onto eigenspaces of conformal Casimir
operators).
\smallskip
In order to study local commutativity, we need to control convergence
of the series defining the harmonic part. The latter can be addressed
with the help of the ``generalized residue formula''. This integral
representation of the harmonic part was found recently \cite{BN} in
the context of higher-dimensional vertex algebras:
\bea\label{res}
v(x)=\frac1{i\pi\vert\SS^{D-1}\vert}
\int_{M_{r}}d^Dz\vert_{M_r}\frac{1-x^2/z^2}{((z-x)^2)^{D/2}}\; u(z).
\end{eqnarray}
Here, $z^2=\sum_{a=1}^D z_a^2$ is the complex Euclidean square. $M_r$
is the compact submanifold $r\cdot \SS^1\cdot \SS^{D-1}$
($\SS^1\subset\CC$ is the complex unit circle, and
$\SS^{D-1}\subset{\mathbb R}^D\subset\CC^D$ the real unit sphere), and
$d^Dz\vert_{M_r}$ the induced complex measure.
The radius $r>0$ has to be chosen such that $u(z)$ converges absolutely
for $z\in M_r$. Then for $x$ small enough such that the kernel
converges absolutely as a power series in $x$ for every $z\in M_r$,
the integral converges as a power series in $x$, and is independent of
the choice of $r$.
This formula for the harmonic part w.r.t.\ the Euclidean Laplacian
remains valid for the Lorentzian Laplacian, provided $z^2$ is replaced
by the (complex) Lorentzian square, and the unit sphere by the set
$\{(ix^0,\vec x): (x^0,\vec x)\in\SS^{D-1}\}$. This is true because
the map $\CC^D\to\CC^D$, $(z^0,\vec z)\mapsto (iz^0,\vec z)$,
intertwines the Euclidean with the Lorentzian harmonic decomposition.
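Indeed, this map takes the complex Euclidean square into the
Lorentzian one,
\bea
(iz^0)^2 + {\vec z}^{\,2} = -(z^0)^2 + {\vec z}^{\,2}\,,\nonumber
\end{eqnarray}
and correspondingly turns the Euclidean Laplacian into minus the wave
operator, so harmonicity in the one sense is equivalent to
harmonicity in the other.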
\smallskip
In the case at hand, where $x-y$ plays the role of the variable $x$
in (\ref{res}) and $u$ is the Taylor series around $x$ or around $y$,
respectively, of a Laurent
polynomial with poles at $(x-x_j)^2=0$ and $(y-x_j)^2=0$, we find
absolute convergence in the domain
\bea\label{dom}
\norm{x-y} + \sqrt{\norm{x-y}^2+\vert(x-y)^2\vert} < \hspace{37mm}
\nonumber \\ <
\sqrt{\norm{x-x_j}^2+\vert(x-x_j)^2\vert} - \norm{x-x_j}
\quad\forall\;j=3,\dots,n
\end{eqnarray}
in the first case, and the same with $x$ replaced by $y$ in the
second case. Especially, if $x$ and $y$ are spacelike or timelike
separated from all other points $x_j$, these domains are not empty.
We have therefore
\smallskip
\noindent
\R4{The formal power series $v_k(x,y,\dots)$ for the
correlation functions (\ref{vk}) converge absolutely within the
domains (\ref{dom}), and the resulting functions $v_k$ do not depend
on the position $k$ where $V(x,y)$ is inserted in (\ref{vk}).}
\smallskip
The issue of local commutativity of $V(x,y)$ with $\phi_k(x_k)$ now
amounts to the question whether $v_{k-1}=v_{k}$ still holds outside
the domain (\ref{dom}), as long as $x_k$ is spacelike (or timelike)
from $x$ and $y$. We conservatively anticipate that the correlation
functions are real analytic functions of real spacetime points within
the region where local commutativity holds. Then the existence of
a unique real analytic continuation from (\ref{dom}) to some other
configuration implies $v_{k-1}=v_{k}$ at the latter configuration by
virtue of Result 4, and hence commutativity. Continuation beyond a
singularity requires to go through a suitable complex cone which
depends on the position $k$ where $V(x,y)$ is inserted in (\ref{vk}),
hence commutativity will fail.
Put differently, our strategy to establish locality by inspection of
analyticity inverts the usual axiomatic reasoning \cite{SW} by which
one {\em derives} the domain of analyticity from the {\em known} locality
(and energy positivity).
\smallskip
We want to discuss specifically the local commutativity of
$V(x_1,x_2)$ with $\phi'(x_3)$ in the case of the example (\ref{v6}),
by studying its maximal real analytic continuation starting from the
domain (\ref{dom}), which is a neighborhood of $x_1=x_2$ where $s=0$,
$t=1$, hence $u=v=0$. Clearly, we can only reach configurations where
$(x_1-x_k)^2\neq 0$ has the same sign as $(x_2-x_k)^2$ for $k=3$ and
$k=4$, because this is trivially true at $x_1=x_2$ and we cannot pass
through $t=0$ or $t=\infty$ where the variables $u$ or $v$ in
(\ref{g}) would hit the singularities of the dilogarithmic function
$Li_2(z)$ at $z=1$ and $z=\infty$.
We claim that (\ref{v6}) has a
unique real analytic continuation to {\em all} these points, or
equivalently, that $g(s,t)$ given by (\ref{g}) has a unique real
analytic continuation in the region $t>0$, $s$ arbitrary (real). This
is obvious for the last terms in the two lines of (\ref{g}) because
for $t>0$ their arguments are $<1$. For the study of the remaining
terms, we solve (\ref{uv}) for $u$ and $v$ (where it does not matter
which one is which because of the manifest symmetry of (\ref{g}) under
$u\leftrightarrow v$)
\bea\label{chi}
u,v = \frac12\Big(1-t+s\pm\sqrt{(1-t+s)^2-4s}\Big).
\end{eqnarray}
In the range $s \leq
(1-\sqrt t)^2$, $u$ and $v$ are real and $u+v = 1-t+s<2$. From
$(1-u)(1-v)=t>0$, we see that both $u$ and $v$ are $<1$, and so are
$\frac{-u}{1-u}$ and $\frac{-v}{1-v}$.
The continuation to these points is unambiguous. In the range
$(1-\sqrt t)^2<s<(1+\sqrt t)^2$, $u$ and $v$ are complex and conjugate
to each other, so that the first two terms in both lines of (\ref{g})
are always the sum of the values on the two branches above and below
the cut. In particular, $g(s,t)$ is real and $Li_2$ in (\ref{g}) may
be replaced by its real part. Finally, in the range $s \geq (1+\sqrt
t)^2$, we find $u+v >2$, hence both $u$ and $v$ and also
$\frac{-u}{1-u}$ and $\frac{-v}{1-v}$ are $>1$. All four arguments
hit the cut of $Li_2(z)$. But because its discontinuity is
{\em imaginary}, the real parts are real analytic. This proves the
claim.
As explained before, the maximal domain of real analyticity specifies
those configurations $x_1,x_2,x_3$, where $\phi'(x_3)$ commutes with
$V(x_1,x_2)$. We may assume $x_4^2\to\pm\infty$ (which can be achieved
by a conformal transformation), hence
$t=(x_2-x_3)^2/(x_1-x_3)^2$. Thus we get commutativity whenever $x_3$
is {\em simultaneously} spacelike or timelike from $x_1$ {\em and} $x_2$.
We summarize
\smallskip
\noindent
\R5{The transcendental (part of a) correlation function
(\ref{v6}) is compatible with local
commutativity between $V(x,y)$ and $\phi'(z)$ when $x-z$ and
$y-z$ are either both spacelike or both timelike.}
\smallskip
The set of these configurations is locally, but {\em not globally}
conformal invariant, since a conformal transformation may switch the
sign $\sigma$ of $(x-z)^2/(y-z)^2$. This is not a
contradiction: connecting configurations with $\sigma=+$ with those
with $\sigma=-$ by a path in the conformal group, one must necessarily
pass through $x=\infty$ or $y=\infty$, where the OPE in terms of power
series ceases to make sense. This breakdown of GCI for the biharmonic
field $V(x,y)$ is, of course, just another manifestation of its
violation of Huygens bilocality.
\smallskip
It is worth noticing that another decomposition theory for the
OPE in conformal QFT was developed in \cite{SS,SSV}. While it
is coarser than the twist decomposition (it is even trivial in the
GCI case), it was found to exhibit, at least in two dimensions
\cite{RS}, a ``localization between the points'' with similar
implications as the present one.
\section{Conclusion}
We have outlined recent progress in the intrinsic structure analysis
of quantum field theories in four dimensions, under the assumption of
``global conformal invariance'' \cite{NRT2}.
We have found nontrivial restrictions on the singularity structure of
correlation functions. The encouraging aspect is that these restrictions
allow a small ``margin'' beyond free correlations, for which we have
given a nontrivial example. It exhibits a local but not Huygens local
bi-field $V(x,y)$ whose correlation functions involve dilogarithmic
functions. Local commutativity with a third field at a point $z$ is
shown to hold (in this example) whenever $x-z$ and $y-z$ are
either both spacelike or both timelike. The possible failure of local
commutativity when one is spacelike while the other is timelike,
occurs only in correlations of at least five points, because
four-point functions cannot exhibit the characteristic ``double
poles'' in the twist two channel that are responsible for the
transcendental correlations involving $V(x,y)$.
A serious question remains to be answered before our six-point
structure is established as (a contribution to) a manifestly non-free
correlation function: we cannot control (at the moment) Wightman
positivity at the six-point level. In the case at hand, this means
that we do not know whether the vectors $\phi'(z)V(x,y)\vert 0\rangle$
span a Hilbert (sub-)space with positive metric. Because our six-point
structure reduces in the leading OPE channels to five- and four-point
structures that can also be obtained from free fields, positivity can
only be violated in higher channels where our present knowledge of
partial waves is not sufficient. Far more ambitious is the problem
whether a given six-point function can be supplemented by higher
correlations satisfying Wightman positivity in full generality, i.e.,
to recover the full Hilbert space on which the bi-field $V(x,y)$ and
its generating field $\phi(x)$ act.
\section*{Acknowledgments}
The authors thank the organizers of the conference ``LT7 -- Lie
Theory and its Applications in Physics'' (Varna, June 2007) for giving
them the opportunity to present these results, and the
Alexander von Humboldt Foundation for financial support.
\small